AI: Errors? Hey, No Problemo.

March 5, 2026

Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.

I love the AI razzle dazzle. Some of the functions available to dinobabies like me are semi-useful. However, I am generally unimpressed with some of the “magic” functions these systems provide. Probabilities, flawed data used for training them, and humanoid (for now) wizard programmers doing their thing make me cautious.


Thanks, Venice.ai. Good enough.

That’s why I got a chuckle from “Unbelievably Dangerous: Experts Sound Alarm after ChatGPT Health Fails to Recognize Medical Emergencies.” The write up reports as actual factual:

The first independent safety evaluation of ChatGPT Health, published in the February edition of the journal Nature Medicine, found it under-triaged more than half of the cases presented to it.

Medical writing is as wonky as the information output by crypto bros. Here’s my translation of the statement: AI will miss more than half of serious health problems. My hunch is that real doctors and real AI wizards will say, “Hey, this is one study” and “Wow, the sample is statistically flawed.”

Maybe.

The write up points out:

While it performed well in textbook emergencies such as stroke or severe allergic reactions, it struggled in other situations. In one asthma scenario, it advised waiting rather than seeking emergency treatment despite the platform identifying early warning signs of respiratory failure. In 51.6% of cases where someone needed to go to the hospital immediately, the platform said stay home or book a routine medical appointment, a result Alex Ruani, a doctoral researcher in health misinformation mitigation with University College London, described as “unbelievably dangerous”.

I understand that smart software is a work in progress. But MBAs and would-be world visionaries want AI now, now, now. Move fast. Yep, and break things. I suppose putting a person’s life in jeopardy is insignificant, trivial even.

Here’s the conclusion of the article:

Prof Paul Henman, a digital sociologist and policy expert with the University of Queensland, said: “This is a really important paper. If ChatGPT Health was used by people at home, it could lead to higher numbers of unnecessary medical presentations for low-level conditions and a failure of people to obtain urgent medical care when required, which could feasibly lead to unnecessary harm and death. … It is not clear what OpenAI is seeking to achieve by creating this product, how it was trained, what guardrails it has introduced and what warnings it provides to users…”

Several observations:

  • OpenAI is trying to find a way to make money. Health care is a discipline with money sloshing around. Therefore, a health play should work, right? (Remember: Google tried health too, and where is that now?)
  • This is one of those “if we build it, they will come” applications. Perfect use case because it made sense at lunch last month.
  • What happens when AI as it is today makes other important decisions? I think I know.
  • Net net: With so much money and so many egos caught up in this “we have the answer” AI thing, why worry? Big tech has the answers, the lawyers, and the obsession to deliver reality their way.

Stephen E Arnold, March 5, 2026
