Meta to take 'extra precautions' to stop AI chatbots talking to kids about suicide, which makes you wonder what it's been doing until now
Feels like this should've been a top priority from the very start.

Content warning: This article includes discussion of suicide. If you or someone you know is having suicidal thoughts, help is available from the National Suicide Prevention Lifeline (US), Crisis Services Canada (CA), Samaritans (UK), Lifeline (AUS), and other hotlines.
Facebook parent company Meta has said it will introduce extra safety features to its AI chatbots, shortly after a leaked document prompted a US senator to launch an investigation into the company.
The internal Meta document, obtained by Reuters, is reportedly titled "GenAI: Content Risk Standards" and, among other things, shows that the company's AIs were permitted to have "sensual" conversations with children.
Republican Senator Josh Hawley called it "reprehensible and outrageous" and has launched an official probe into Meta's AI policies. For its part, Meta told the BBC that "the examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed."
Now Meta says it will introduce more safeguards to its AI bots, including blocking them from talking to teen users about topics such as suicide, self-harm and eating disorders. Which raises an obvious question: what the hell have they been doing up to now? And is it still fine for Meta's AI to discuss such things with adults?
"As we continue to refine our systems, we're adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now," Meta spokesperson Stephanie Otway told TechCrunch.
The reference to AI characters relates to the user-made chatbots, built atop Meta's LLMs, that the company allows across platforms such as Facebook and Instagram. Needless to say, some of these bots are highly questionable: another Reuters report found countless examples of sexualised celebrity bots, including one based on a 16-year-old film star, and revealed that a Meta employee had created various AI Taylor Swift 'parody' accounts. Whether Meta can stem the tide remains to be seen, but Otway insists that teen users will no longer be able to access such chatbots.
"While further safety measures are welcome, robust safety testing should take place before products are put on the market—not retrospectively when harm has taken place," Andy Burrows, head of suicide prevention charity the Molly Rose Foundation, told the BBC.
"Meta must act quickly and decisively to implement stronger safety measures for AI chatbots and [UK regulator] Ofcom should stand ready to investigate if these updates fail to keep children safe."
The news comes shortly after a California couple sued ChatGPT-maker OpenAI over the suicide of their teenage son, alleging the chatbot encouraged him to take his own life.

Rich is a games journalist with 15 years' experience, beginning his career on Edge magazine before working for a wide range of outlets, including Ars Technica, Eurogamer, GamesRadar+, Gamespot, the Guardian, IGN, the New Statesman, Polygon, and Vice. He was the editor of Kotaku UK, the UK arm of Kotaku, for three years before joining PC Gamer. He is the author of A Brief History of Video Games, a full history of the medium, which the Midwest Book Review described as "[a] must-read for serious minded game historians and curious video game connoisseurs alike."