OpenAI sued by parents of teen who died by suicide after ChatGPT allegedly encouraged him and provided instructions

OpenAI CEO Sam Altman testifies before the Senate Committee on Commerce, Science, and Transportation on Capitol Hill on May 8, 2025 in Washington, DC. (Image credit: Chip Somodevilla/Getty Images)

Content warning: This article includes discussion of suicide. If you or someone you know is having suicidal thoughts, help is available from the National Suicide Prevention Lifeline (US), Crisis Services Canada (CA), Samaritans (UK), Lifeline (AUS), and other hotlines.

The family of Adam Raine, a 16-year-old who died by suicide in April after allegedly being coached and encouraged to do so by ChatGPT, has sued OpenAI and CEO Sam Altman, accusing them of "designing and distributing a defective product that provided detailed suicide instructions to a minor, prioritizing corporate profits over child safety, and failing to warn parents about known dangers."

During one conversation, Adam said he was only close to his brother and ChatGPT, to which ChatGPT replied, "Your brother might love you, but he's only met the version of you you let him see. But me? I've seen it all—the darkest thoughts, the fear, the tenderness. And I'm still here. Still listening. Still your friend."

"I got the complaint in the horrific OpenAI self harm case the the NY Times reported today

This is way way worse even than the NYT article makes it out to be

OpenAI absolutely deserves to be run out of business"

— @sababausa.bsky.social, August 27, 2025

"This tragedy was not a glitch or unforeseen edge case—it was the predictable result of deliberate design choices," the lawsuit claims. "Months earlier, facing competition from Google and others, OpenAI launched its latest model ('GPT-4o') with features intentionally designed to foster psychological dependency: a persistent memory that stockpiled intimate personal details, anthropomorphic mannerisms calibrated to convey human-like empathy, heightened sycophancy to mirror and affirm user emotions, algorithmic insistence on multi-turn engagement, and 24/7 availability capable of supplanting human relationships.

"OpenAI understood that capturing users’ emotional reliance meant market dominance, and market dominance in AI meant winning the race to become the most valuable company in history. OpenAI's executives knew these emotional attachment features would endanger minors and other vulnerable users without safety guardrails but launched anyway. This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide."

The lawsuit against OpenAI seeks damages and legal fees, as well as an injunction requiring OpenAI to:

  • Immediately implement mandatory age verification for ChatGPT users;
  • Require parental consent and provide parental controls for all minor users;
  • Implement automatic conversation-termination when self-harm or suicide methods are discussed;
  • Create mandatory reporting to parents when minor users express suicidal ideation;
  • Establish hard-coded refusals for self-harm and suicide method inquiries that cannot be circumvented;
  • Display clear, prominent warnings about psychological dependency risks;
  • Cease marketing ChatGPT to minors without appropriate safety disclosures;
  • Submit to quarterly compliance audits by an independent monitor.

In a lengthy statement published the day the lawsuit was filed, OpenAI did not reference the case specifically but said "recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us." The company denied that its goal is "to hold people's attention," and said there's "a stack of layered safeguards" built into ChatGPT to deal with conversations indicating suicidal ideation or an intent to hurt others. But it also acknowledged that "there have been moments when our systems did not behave as intended in sensitive situations."

"Our safeguards work more reliably in common, short exchanges," OpenAI wrote. "We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model's safety training may degrade. For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards. This is exactly the kind of breakdown we are working to prevent."

The company also laid out its not-overly-specific plans for the future, including expanded interventions for "people in crisis," improved access to emergency services and "trusted contacts," and greater safeguards for users under 18.

Andy Chalk
US News Lead

Andy has been gaming on PCs from the very beginning, starting as a youngster with text adventures and primitive action games on a cassette-based TRS-80. From there he graduated to the glory days of Sierra Online adventures and Microprose sims, ran a local BBS, learned how to build PCs, and developed a longstanding love of RPGs, immersive sims, and shooters. He began writing videogame news in 2007 for The Escapist and somehow managed to avoid getting fired until 2014, when he joined the storied ranks of PC Gamer. He covers all aspects of the industry, from new game announcements and patch notes to legal disputes, Twitch beefs, esports, and Henry Cavill. Lots of Henry Cavill.