OpenAI Says, Hallucinations Are Here to Stay?

September 22, 2025

Sadly I am a dinobaby and too old and stupid to use smart software to create really wonderful short blog posts.

I read “OpenAI Admits AI Hallucinations Are Mathematically Inevitable, Not Just Engineering Flaws.” I am not sure the information in the write up will make happy the people who are getting smart software whether they want it or not. Even less thrilled will be the big outfits implementing AI with success ranging from five percent to 90 percent hoorahs. “Close enough for horseshoes” works for putting shoes on equines. I am not sure how it will work out for medical and financial applications. I won’t comment on the kinetic applications of smart software, but hallucination may not be a plus in some situations.

The write up begins with what may make some people — how shall I say it? — nervous, frightened, squeamish. I quote:

… OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

I quite liked the word always. It is obviously a statement that must persist for eternity, which, to a dinobaby like me, is quite a long time. I found the distinction between plausible and false delicious. The burden of figuring out what is “correct,” “wrong,” slightly wonky, or false shifts to the user of smart software. But there is another word that struck me as significant: Perfect. Now that is another logical tar pit.

After this, I am not sure where the write up is going. I noted this passage:

OpenAI, the creator of ChatGPT, acknowledged in its own research that large language models will always produce hallucinations due to fundamental mathematical constraints that cannot be solved through better engineering, marking a significant admission from one of the AI industry’s leading companies.

There you go. The fundamental method in use today, the one believed to be the next big thing, is always going to produce incorrect information. Always.
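As I understand the argument (my simplification, not the researchers’ proof), the trouble is what the paper calls arbitrary facts: answers with no pattern a model can learn, such as an obscure person’s birthday. Below is a minimal Python sketch of my own devising, assuming a toy memorize-or-guess model, not anything OpenAI shipped. It shows that a system forced to answer every question errs on whatever its training never pinned down, even when the training data itself contains zero errors.

```python
import random

random.seed(42)

# Toy world of "arbitrary facts": each person's birthday is pure noise,
# so there is no pattern to generalize from. The hypothetical model
# memorizes its training set and, like a chatbot, must answer everything.

PEOPLE = [f"person_{i}" for i in range(10_000)]
truth = {p: random.randrange(365) for p in PEOPLE}  # flawless ground truth

# Training covers 80 percent of the facts; the rest were never seen.
training_set = dict(random.sample(list(truth.items()), 8_000))

def model(person):
    """Recall what was seen; otherwise guess a plausible (valid) day."""
    if person in training_set:
        return training_set[person]
    return random.randrange(365)  # confident, plausible, almost surely false

errors = sum(model(p) != truth[p] for p in PEOPLE)
print(f"Error rate when forced to answer: {errors / len(PEOPLE):.1%}")
# Prints roughly 20%: the unseen fifth of the prompts, where nearly
# every guess misses, despite perfect training data.
```

No amount of engineering cleverness fixes this toy; only letting the model say “I don’t know” would.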

The Computerworld story points to the “research paper.” Computerworld points out that industry evaluations of smart software are slippery fish. Computerworld reminds its readers that “enterprises must adapt strategies.” (I would imagine. If smart software gets a chemical formula wrong or outputs information that leads to a substantial loss of revenue, problems might arise, might they not?) Computerworld concludes with a statement that left me baffled; to wit: “Market already adapting.”
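If the slippery-fish point tracks the argument OpenAI’s researchers make about grading, the arithmetic is short: binary benchmarks score an honest “I don’t know” the same as a wrong answer, so bluffing is the optimal test-taking strategy. A hypothetical scoring comparison, with the probability invented purely for illustration:

```python
# Hypothetical binary-graded benchmark: 1 point for a correct answer,
# 0 points for a wrong answer AND for abstaining ("I don't know").
# The 10% guess-success rate is made up for illustration.

p_correct_guess = 0.10

score_if_abstain = 0.0
score_if_guess = 1 * p_correct_guess + 0 * (1 - p_correct_guess)

print(score_if_guess > score_if_abstain)  # True: guessing always wins
# Any nonzero chance of a lucky guess beats honesty under this scoring,
# which is one way evaluations of smart software become slippery fish.
```

Leaderboards built on that kind of scoring teach models to bluff. Enterprises adapting strategies might start there.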

Okay.

I wonder how many Computerworld readers will consume this story standing next to a burning pile of cash tossed into the cost black holes of smart software.

Stephen E Arnold, September 22, 2025
