AI: Errors? Hey, No Problemo.
March 5, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
I love the AI razzle dazzle. Some of the functions available to dinobabies like me are semi-useful. However, I am generally unimpressed with some of the “magic” functions these systems provide. Probabilities, flawed data used for training them, and humanoid (for now) wizard programmers doing their thing make me cautious.

Thanks, Venice.ai. Good enough.
That’s why I got a chuckle from “Unbelievably Dangerous: Experts Sound Alarm after ChatGPT Health Fails to Recognize Medical Emergencies.” The write up reports as actual factual:
The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.
Medical writing is as wonky as the information output by crypto bros. Here’s my translation of the statement: AI will miss more than half of serious health problems. My hunch is that real doctors and real AI wizards will say, “Hey, this is one study” and “Wow, the sample is statistically flawed.”
Maybe.
The write up points out:
While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure. In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.
I understand that smart software is a work in progress. But MBAs and would-be world visionaries want AI now, now, now. Move fast. Yep, and break things. I suppose putting a person’s life in jeopardy is insignificant, trivial even.
Here’s the conclusion of the article:
Prof Paul Henman, a digital sociologist and policy expert with the University of Queensland, said: “This is a really important paper. “If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death.”…“It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users…”
Several observations:
Net net: With so much money and so many egos caught up in this “we have the answer” AI thing, why worry? Big tech has the answers, the lawyers, and the obsession to deliver reality their way.
Stephen E Arnold, March 5, 2026