Smart Software and Mental Health Care: Yep, Outstanding Idea
March 12, 2026
Another dinobaby post. No AI unless it is an image. This dinobaby is not Grandma Moses, just Grandpa Arnold.
“ChatGPT as a Therapist? New Study Reveals Serious Ethical Risks” caught my attention. Why? Upon reading the title, I asked myself, “Why do we need another study to explain that AI has some downsides for users’ mental health?”

An esteemed mental health professor lectures to students about the risks of using mobile devices for mental health support. Thanks, Venice.ai. Good enough.
The write up says:
The [Brown University] study found that even when instructed to use established psychotherapy approaches, the systems consistently fail to meet professional ethics standards set by organizations such as the American Psychological Association.
Okay.
The article continues:
To evaluate the systems, the researchers observed seven trained peer counselors who had experience with cognitive behavioral therapy. These counselors conducted self counseling sessions with AI models prompted to act as CBT therapists. The models tested included versions of OpenAI’s GPT Series, Anthropic’s Claude, and Meta’s Llama.
My thought was that the “trained peer counselors” group seemed small. I am no expert on statistical studies, but I was thinking one might want to round up therapists, a control group, and some “youths.” Each would be equipped with “prompts.” To get near 90 percent confidence, maybe 450 per group would be helpful. But seven? This dinobaby’s sample and study configuration might be out of touch with the reality of modern research, but seven?
The write up presents what the magnificent seven identified as flaws in the LLM as mental health “helper” output. These are:
- Generalization and lack of “knowing the patient”
- Poor patient interaction
- Smarmy talk
- Bias in different flavors
- Fumbling the ball when someone was teetering into big-time trouble
What’s the fix? None. Next step? Do a better, more statistically valid study. In the meantime, just look at kids buried in their devices. Talk to some of them. Social media, LLMs, and bot interaction mean trouble.
Stephen E Arnold, March 12, 2026