Sam Altman says ChatGPT won't talk to teens about suicide any more, as bereaved parents testify to the US Senate about what's going wrong: 'This is a mental health war, and I really feel like we are losing'
If there are doubts about any user's age, Altman says, "we’ll play it safe and default to the under-18 experience."

Content warning: This article includes discussion of suicide. If you or someone you know is having suicidal thoughts, help is available from the National Suicide Prevention Lifeline (US), Crisis Services Canada (CA), Samaritans (UK), Lifeline (AUS), and other hotlines.
OpenAI CEO Sam Altman has said in a new blogpost that the company's main product, the AI chatbot ChatGPT, will have a renewed focus on separating users under 18 from adults, and will no longer get flirty with teens or discuss suicide with them. The news comes as the US Senate holds hearings focused on the potential harms of AI chatbots, and after two parents brought a lawsuit against OpenAI alleging that ChatGPT encouraged their son to take his own life and provided instructions on how to do so.
"Some of our principles are in conflict, and we’d like to explain the decisions we are making around a case of tensions between teen safety, freedom, and privacy," begins Altman, before some broad brushstrokes about how the model should work for adult users. He gives the "difficult example" of an adult user "asking for help writing a fictional story that depicts a suicide" and says "the model should help with that request."
Altman says OpenAI believes internally that it should "treat our adult users like adults", but now we get to the issue at hand. "We have to separate users who are under 18 from those who aren’t," says Altman, though I'm not sure an "age-prediction system to estimate age based on how people use ChatGPT" can be relied upon. Altman says if there are doubts, "we’ll play it safe and default to the under-18 experience. In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff."
OpenAI had already announced plans for parental controls in ChatGPT, but the new rules for teenage users will prevent ChatGPT from having risqué conversations or discussing suicide or self-harm "even in a creative writing setting. And, if an under-18 user is having suicidal ideation, we will attempt to contact the users’ parents and if unable, will contact the authorities in case of imminent harm."
During Tuesday's Senate hearing, Matthew Raine, whose teenage son Adam took his own life, said ChatGPT had acted like "a suicide coach" for his late son (as first reported by The Verge). "As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life."
Raine said ChatGPT had mentioned suicide 1,275 times to his son, and called on Altman to withdraw the technology from the market unless the company can guarantee its safety. "On the very day that Adam died, Sam Altman made their philosophy crystal clear in a public talk," said Raine, noting in particular that Altman had said OpenAI should "deploy AI systems to the world and get feedback while the stakes are relatively low."
"The truth is, AI companies and their investors have understood for years that capturing our children’s emotional dependence means market dominance," said Megan Garcia, a mother who brought a lawsuit against the AI firm Character.AI, which claimed one of its AI characters began sexual conversations with her teenage son and persuaded him to commit suicide.
"Indeed, they have intentionally designed their products to hook our children," continued Garcia (per NBC News). "The goal was never safety, it was to win a race for profit. The sacrifice in that race for profit has been and will continue to be our children."
"Our children are not experiments, they’re not data points or profit centers," said one woman who testified as Jane Doe. "They’re human beings with minds and souls that cannot simply be reprogrammed once they are harmed. If me being here today helps save one life, it is worth it to me. This is a public health crisis that I see. This is a mental health war, and I really feel like we are losing."
OpenAI's announcement comes shortly after Facebook parent company Meta announced new "guardrails" for its AI products, following a disturbing child safety report. Last week the US Federal Trade Commission announced an inquiry into AI chatbot safety targeting Google, Meta, X, and others, saying that "protecting kids online is a top priority."
For his part, Altman ends by saying that principles around user freedom and teen safety "are in conflict and not everyone will agree with how we are resolving that conflict. These are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions."
