Altman estimated that approximately 1,500 people per week discuss suicide with ChatGPT before going on to kill themselves. The company acknowledged it had been tracking users’ “attachment issues” for over a year.
I didn't realize Altman was citing figures like this, but he's one of the few people who would know, and could shut down accounts with a hardcoded command if suicidal discussion is detected in any chat.
He floated the idea of maybe preventing these conversations[0], but as far as I can tell, no such thing was implemented.
That’s misleading. Altman was simply doing a napkin calculation based on the scale at which ChatGPT operates and not estimating based on internal data:
“There are 15,000 people a week that commit suicide,” Altman told the podcaster. “About 10% of the world are talking to ChatGPT. That’s like 1,500 people a week that are talking, assuming this is right, to ChatGPT and still committing suicide at the end of it. They probably talked about it. We probably didn’t save their lives. Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about ‘hey, you need to get this help, or you need to think about this problem differently, or it really is worth continuing to go on and we’ll help you find somebody that you can talk to’.”
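To spell out the arithmetic in that quote, it really is a single multiplication of two rough numbers, not an estimate derived from internal data. A minimal restatement (figures are the ones Altman cites, nothing more):

    # Back-of-envelope restatement of the quoted napkin math (illustrative only)
    weekly_suicides_worldwide = 15_000   # figure Altman cites
    chatgpt_share_of_world = 0.10        # "about 10% of the world are talking to ChatGPT"
    print(weekly_suicides_worldwide * chatgpt_share_of_world)  # 1500.0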
You could similarly point out that 10k+ people used Google or spoke to a friend this week and still killed themselves.
Many of those people may have never mentioned their depression or suicidal tendencies to ChatGPT at all.
I think Altman appropriately recognizes that at the scale at which they operate, there’s probably a lot more good they can do in this area, but I don’t think he thinks (nor should he think) that they are responsible for 1,500 deaths per week.
ChatGPT sort of fits the friend analogy: it's been marketed as both an expert and a companion. If a real-life person with ChatGPT's level of authority and repute were caught encouraging minors to commit suicide or engage in other harmful activities, surely there would be some investigation into that person's behavior.
"..our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent."
With roughly 700 million weekly active users, that works out to more like 1 million people discussing suicide with ChatGPT every week.
For reference, 12.8 million Americans are reported as thinking about suicide and 1.5 million are reported as attempting suicide in a year.
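A quick sanity check of that arithmetic, assuming the roughly 700 million weekly active users cited above and OpenAI's 0.15% figure:

    # Rough check of the ~1 million/week estimate (approximate inputs)
    weekly_active_users = 700_000_000
    suicidal_planning_share = 0.0015     # 0.15% per OpenAI's stated analysis
    print(weekly_active_users * suicidal_planning_share)  # ~1,050,000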
[0]: https://www.theguardian.com/technology/2025/sep/11/chatgpt-m...