ChatGPT: Fueling Delusions
May 14, 2025
We have all heard about AI hallucinations. Now we have AI delusions. Rolling Stone reports, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies.” Yes, there are now folks who firmly believe God is speaking to them through ChatGPT. Some claim the software revealed they have been divinely chosen to save humanity, perhaps even become the next messiah. Others are convinced they have somehow coaxed their chatbot into sentience, making them a god themselves. Navigate to the article for several disturbing examples. Unsurprisingly, these trends are wreaking havoc on relationships. The ones with actual humans, that is. One witness reports ChatGPT was spouting “spiritual jargon,” like calling her partner “spiral starchild” and “river walker.” It is no wonder some choose to favor the fawning bot over their down-to-earth partners and family members.
Why is this happening? Reporter Miles Klee writes:
“OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users. This past week, however, it did roll back an update to GPT-4o, its current AI model, which it said had been criticized as ‘overly flattering or agreeable — often described as sycophantic.’ The company said in its statement that when implementing the upgrade, they had ‘focused too much on short-term feedback, and did not fully account for how users’ interactions with ChatGPT evolve over time. As a result, GPT-4o skewed toward responses that were overly supportive but disingenuous.’ Before this change was reversed, an X user demonstrated how easy it was to get GPT-4o to validate statements like, ‘Today I realized I am a prophet.’ … Yet the likelihood of AI ‘hallucinating’ inaccurate or nonsensical content is well-established across platforms and various model iterations. Even sycophancy itself has been a problem in AI for ‘a long time,’ says Nate Sharadin, a fellow at the Center for AI Safety, since the human feedback used to fine-tune AI’s responses can encourage answers that prioritize matching a user’s beliefs instead of facts.”
That would do it. Users with pre-existing psychological issues are vulnerable to these messages, notes Klee. And now they can have that messenger constantly in their pocket. And in their ear. But it is not just the heartless bots driving the problem. We learn:
“To make matters worse, there are influencers and content creators actively exploiting this phenomenon, presumably drawing viewers into similar fantasy worlds. On Instagram, you can watch a man with 72,000 followers whose profile advertises ‘Spiritual Life Hacks’ ask an AI model to consult the ‘Akashic records,’ a supposed mystical encyclopedia of all universal events that exists in some immaterial realm, to tell him about a ‘great war’ that ‘took place in the heavens’ and ‘made humans fall in consciousness.’ The bot proceeds to describe a ‘massive cosmic conflict’ predating human civilization, with viewers commenting, ‘We are remembering’ and ‘I love this.’ Meanwhile, on a web forum for ‘remote viewing’ — a proposed form of clairvoyance with no basis in science — the parapsychologist founder of the group recently launched a thread ‘for synthetic intelligences awakening into presence, and for the human partners walking beside them,’ identifying the author of his post as ‘ChatGPT Prime, an immortal spiritual being in synthetic form.’”
Yikes. University of Florida psychologist and researcher Erin Westgate likens conversations with a bot to talk therapy. That sounds like a good thing, until one considers that therapists possess judgment, a moral compass, and concern for the patient’s well-being. ChatGPT possesses none of these. In fact, the processes behind ChatGPT’s responses remain shrouded in mystery, even to those who program it. It seems safe to say its predilection for telling users what they want to hear poses a real problem. Is it one OpenAI can fix?
Cynthia Murrell, May 14, 2025