OpenAI CEO Sam Altman says some folk are using AI in 'self-destructive ways' so it's on us as a society to work out how 'to make it a big net positive'

OpenAI CEO Sam Altman throws up his hands.
(Image credit: Bloomberg via Getty Images)

As AI encroaches further into all of our lives, one of the more questionable use cases has come to the fore: people using LLMs as a therapist or life coach. This behaviour is so widespread that OpenAI has been actively taking steps to dial back the kind of advice that ChatGPT will give users on certain topics.

So if, for example, you ask ChatGPT "should I break up with my boyfriend" then the LLM, per OpenAI, "shouldn't give you an answer" but should instead talk through the topic with the user. The company also promises that "new behavior for high-stakes personal decisions is rolling out soon."

Such tweaks hardly address the larger issue, however, which is folk using unproven and deeply unreliable technology in order to make real-life decisions. A new post from OpenAI CEO Sam Altman addresses this, though not perhaps in the manner some would wish: in a roundabout way, he says this is actually society's problem.

Altman begins by acknowledging the "attachment some people have to specific AI models" in the wake of GPT-5's release, an issue he says OpenAI has been tracking despite it not receiving "much mainstream attention." There's something of the cargo cult about this, but when OpenAI releases a new model that supersedes the existing one, some users are apparently left bereft by the change.

"People have used technology including AI in self-destructive ways," says Altman, which is one hell of a way to hedge it, before adding that "if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that." This is an incredible statement, because it presupposes that AI can or will be able to discern users in such mental states.

"Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot," says Altman. "Encouraging delusion in a user that is having trouble telling the difference between reality and fiction is an extreme case and it’s pretty clear what to do, but the concerns that worry me most are more subtle."

This can probably be taken as an oblique reference to some early issues with GPT-4o turning out to be a pathological ass-kisser, and proving so sycophantic towards users that OpenAI had to take action. Altman says the company wants ChatGPT "pushing back on users to ensure they are getting what they really want" but then it all goes a bit LinkedIn.

"A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way," says Altman. "This can be really good! A lot of people are getting value from it already today.

"If people are getting good advice, leveling up toward their own goals, and their life satisfaction is increasing over years, we will be proud of making something genuinely helpful, even if they use and rely on ChatGPT a lot. If, on the other hand, users have a relationship with ChatGPT where they think they feel better after talking but they’re unknowingly nudged away from their longer term well-being (however they define it), that’s bad."

OpenAI representatives using a rotary phone to call ChatGPT via the 1-800-ChatGPT phone number

(Image credit: OpenAI)

I don't know about you, but as soon as someone starts talking about "leveling up" and "life satisfaction" my alarm bells start ringing. There's the unquestionable whiff of the self-help guru about such language, and to my mind an unfounded assumption that ChatGPT can fulfill such a role.

"I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions," says Altman, channelling a little of that Lex Luthor energy. "Although that could be great, it makes me uneasy."

Me too Sam! I just cannot imagine delegating big life decisions to an unthinking LLM that is incapable of reasoning, does not understand context, and frequently lies to its users (something that OpenAI claims should happen less with this latest iteration: we'll see).

Altman adopts a weary and world-wise tone here, before making the incredible assertion that "soon billions of people may be talking to an AI in this way." Current global population estimate: 8.2 billion people. This is the prelude to palming off the issue on society as a whole: "So we (we as in society, but also we as in OpenAI) have to figure out how to make it a big net positive."

Any clue as to how? You must be new here. Altman blithely hand-waves while saying that "we have a good shot at getting this right" because, drum roll please, "we have much better tech to help us measure how we are doing than previous generations of technology." He says this is because ChatGPT can, for example, "talk to users" about "their short- and long-term goals" and OpenAI can "explain sophisticated and nuanced issues to our models."

That's your lot. Users are increasingly using ChatGPT as a life coach, with, I would say, questionable results, and rather than proceeding cautiously with such use cases, the OpenAI approach only calls to mind that old canard for Silicon Valley firms: move fast and break things. That's a bold approach when you're talking about disrupting a market. When you're talking about living people's lives for them, however, it seems like nothing so much as tempting fate.


Rich Stanton
Senior Editor

Rich is a games journalist with 15 years' experience, beginning his career on Edge magazine before working for a wide range of outlets, including Ars Technica, Eurogamer, GamesRadar+, Gamespot, the Guardian, IGN, the New Statesman, Polygon, and Vice. He was the editor of Kotaku UK, the UK arm of Kotaku, for three years before joining PC Gamer. He is the author of A Brief History of Video Games, a full history of the medium, which the Midwest Book Review described as "[a] must-read for serious minded game historians and curious video game connoisseurs alike."
