OpenAI says teen's 'misuse' of ChatGPT is to blame for his suicide, because he broke the TOU: 'users must comply with OpenAI's Usage Policies, which prohibit the use of ChatGPT for suicide or self-harm'

Sam Altman, chief executive officer of OpenAI Inc., during a media tour of the Stargate AI data center in Abilene, Texas, US, on Tuesday, Sept. 23, 2025. Stargate is a collaboration of OpenAI, Oracle and SoftBank, with promotional support from President Donald Trump, to build data centers and other infrastructure for artificial intelligence throughout the US. Photographer: Kyle Grillot/Bloomberg via Getty Images
(Image credit: Getty Images)

Content warning: This article includes discussion of suicide. If you or someone you know is having suicidal thoughts, help is available from the National Suicide Prevention Lifeline (US), Crisis Services Canada (CA), Samaritans (UK), Lifeline (AUS), and other hotlines.

Three months after being sued by the parents of teenager Adam Raine, whose suicide was allegedly encouraged and instructed by ChatGPT, OpenAI has filed a response pinning the blame on the teen's "improper use" of the chatbot, according to a report by The Guardian.

Additionally, OpenAI argues it's not liable because Raine, by using ChatGPT for self-harm, broke its terms of service

— @gerritd.bsky.social (November 26, 2025)

OpenAI also denied responsibility because Raine allegedly had suicidal thoughts prior to using ChatGPT, and had sought information on suicide from other sources. Raine also told ChatGPT he had "repeatedly reached out to people, including trusted persons in his life, with cries for help, which he said were ignored," the filing states.

OpenAI has also put up a new blog post in which it expresses its "deepest sympathies" for the Raine family's "unimaginable loss," before going on to imply that the Raine family isn't being fully forthcoming about the facts of the case.

"We think it’s important the court has the full picture so it can fully assess the claims that have been made," OpenAI wrote. "Our response to these allegations includes difficult facts about Adam’s mental health and life circumstances. The original complaint included selective portions of his chats that require more context, which we have provided in our response." The company added that only limited amounts of "sensitive evidence" were provided in today's filing, and that the full chat transcripts were provided to the court under seal.

Raine family lawyer Jay Edelson said in a statement that OpenAI's response to the lawsuit is "disturbing," adding that it "tries to find fault in everyone else, including, amazingly, by arguing that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act."

While OpenAI denies any responsibility for Adam Raine's death, it has indirectly acknowledged problems with the system: In September, OpenAI CEO Sam Altman said ChatGPT would no longer be allowed to discuss suicide with people under 18. A month after that, however, Altman announced that restrictions on ChatGPT put in place to address mental health concerns, which made the chatbot "less useful/enjoyable to many users who had no mental health problems," would be relaxed. ChatGPT will also begin allowing AI-powered "erotica" for verified adult users in December.

Andy Chalk
US News Lead

Andy has been gaming on PCs from the very beginning, starting as a youngster with text adventures and primitive action games on a cassette-based TRS-80. From there he graduated to the glory days of Sierra Online adventures and Microprose sims, ran a local BBS, learned how to build PCs, and developed a longstanding love of RPGs, immersive sims, and shooters. He began writing videogame news in 2007 for The Escapist and somehow managed to avoid getting fired until 2014, when he joined the storied ranks of PC Gamer. He covers all aspects of the industry, from new game announcements and patch notes to legal disputes, Twitch beefs, esports, and Henry Cavill. Lots of Henry Cavill.