Un-Aliving Violates TOS and Some Linguistic Boundaries

December 18, 2025

Ah, lawyers.

Depression is a dark emotional state that sometimes drives people to take their own lives. Before people unalive themselves, they usually research the act and/or reach out to trusted sources. These days the “trusted sources” are the host of AI chatbots that populate the Internet. Ars Technica shares the story of one teenager who committed suicide after using a chatbot: “OpenAI Says Dead Teen Violated TOS When He Used ChatGPT To Plan Suicide.”

OpenAI is facing a total of five wrongful-death lawsuits associated with ChatGPT. When the first came to court, OpenAI defended itself by claiming that the teen in question, Adam Raine, violated the terms of service, which prohibit self-harm and suicide. While pursuing the world’s “most engaging chatbot,” OpenAI relaxed its safety measures for ChatGPT, which became Raine’s suicide coach.

OpenAI’s lawyers argued that Raine’s parents cherry-picked the most damaging chat logs. They also claim the logs show that Raine had had suicidal ideation since age eleven and that his medication intensified his un-aliving desires.

Along with the usual move of shifting the blame onto the parents and others, OpenAI says that people use the chatbot at their own risk. It’s a way to avoid any accountability.

“To overcome the Raine case, OpenAI is leaning on its usage policies, emphasizing that Raine should never have been allowed to use ChatGPT without parental consent and shifting the blame onto Raine and his loved ones. ‘ChatGPT users acknowledge their use of ChatGPT is ‘at your sole risk and you will not rely on output as a sole source of truth or factual information,’ the filing said, and users also “must agree to ‘protect people’ and ‘cannot use [the] services for,’ among other things, ‘suicide, self-harm,’ sexual violence, terrorism or violence.’”

OpenAI employees were also alarmed by the liberties taken to make the chatbot more engaging.

How far will OpenAI go to make ChatGPT intuitive, human-like, and intelligent? Raine already had underlying conditions that contributed to his death, but ChatGPT did exacerbate them. Remember the terms of service.

Whitney Grace, December 18, 2025
