OpenAI says over a million people talk to ChatGPT about suicide weekly

(techcrunch.com)

Comments

probably_wrong 11 hours ago
If you haven't read the article (or even if you have, but didn't click through two levels of outgoing links), the NYT story about how ChatGPT convinced a suicidal teen not to look for help [1] should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues. Here's ChatGPT discouraging said teenager from asking for help:

> “I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March.

> “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”

I am acutely aware that there aren't enough psychologists out there, but a sycophant bot is not the answer. One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing.

[1] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...

brainless 14 hours ago
This is not surprising at all. Having gone through therapy a few years back, I would have chatted with LLMs if I were in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.

A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.

I do not know what OpenAI and other companies will do about it, and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take on such challenges. I choose to believe that.

jacquesm 12 hours ago
HIPAA anybody?

(1) they probably shouldn't even have that data

(2) they shouldn't have it lying around in a way that it can be attributed to particular individuals

(3) imagine that it leaks to the wrong party, it would make the hack of that Finnish institution look like child's play

(4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations

(5) I'm surprised it is that little; they claim such high numbers for their users that this seems low.

In the late '90s when ICQ was pretty big, we experimented with a bot that you could connect to that was fed in the background by a human. It didn't take a day before someone started talking about suicide to it, and we shut down the project, realizing that we were in no way qualified to handle human interaction at that level. It definitely wasn't as slick or useful as ChatGPT, but it did well enough and responded naturally (more naturally than ChatGPT) because there was a person behind it who could drive hundreds of parallel conversations.

If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.

itchyjunk 7 hours ago
It seems like people here have already made up their minds about how bad LLMs are. So just my anecdote here: it helped me out of some really dark places. Talking to humans (non-psychologists) had the opposite effect. Between a non-professional and an LLM, I'd pick the LLM for myself. Others should definitely seek help.
NathanKP 18 hours ago
> It is estimated that more than one in five U.S. adults live with a mental illness (59.3 million in 2022; 23.1% of the U.S. adult population).

https://www.nimh.nih.gov/health/statistics/mental-illness

Most people don't understand just how mentally unwell the US population is. Of course there are a million people talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.

At least OpenAI is trying to do something about it.

freestingo 10 hours ago
Keep in mind the purpose of all this “research” and “improvement” is just so OpenAI can have their cake (advertise their product as a psychological supporter) and eat it too (avoid implementing any safeguards that would be required in any product for psychological support, but that would be harmful for data collection). They just want to tell you that so many people write bad things it is inevitable :( what can we do :( proper handling would hurt our business model too much :(((
OgsyedIE 17 hours ago
Surprised it's so low. There are 800 million users and the typical developed country has around 5±3% of the population[1] reporting at least one notable instance of suicidal feelings per year.


[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of the rational mind's best attempt to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.

The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.

bondarchuk 10 hours ago
It becomes a problem when people cannot distinguish real from fake. As long as people realize they are talking to a piece of software and not a real person, "suicidal people shouldn't be allowed to use LLMs" is almost on par with "suicidal people shouldn't be allowed to read books", or "operate a dvd player", or "listen to alt-rock from the 90s". The real problem is of course grossly deficient mental health care and lack of social support that let it get this far.

(Also, if we put LLMs on par with media consumption one could take the view that "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide", which is not considered as concerning in the general culture.)

rich_sasha 14 hours ago
We've refined the human experience to extinction.

In pursuit of that extra 0.1% of growth and extra 0.15 EPS, we've optimised and reoptimised until there isn't really space for being human. We're losing the ability to interact with each other socially, to flirt; now we're making life so stressful people literally want to kill themselves. All in a world (bubble) of abundance, where so much food is made we literally don't know what to do with it. Or we turn it into ethanol to drive more unnecessarily large cars, paid for by credit card loans we can scarcely afford.

My plan B is to become a shepherd somewhere in the mountains. It will be damn hard work for sure, and stressful in its own way, but I think I'll take that over being a corpo-rat racing for one of the last post-LLM jobs left.

iambateman 3 hours ago
We need psychologists to work together with the federal government to develop legislation around what is and is not acceptable for chat-bots to recommend to people expressing suicidal thoughts...then we need to hold chat providers accountable for the actions their robots take.

For the foreseeable future, it should simply be against the law for a chatbot to provide psychological advice just like it's against the law for an unlicensed therapist to provide therapy...There are too many vulnerable people at risk for us to just run a continuous natural experiment.

I _love_ my chatbots for coding and we should encourage innovation but it's the job of government to protect people from systemic risks. We should expect OpenAI, Anthropic, and friends to operate in pro-social ways given their privileged position in society while the government requires them to stay "in line" with the needs of people they might otherwise ignore.

jeswin 13 hours ago
Contrarian opinion.

OpenAI gets a lot of hate these days, but on this subject it's quite possible that ChatGPT helped a lot of people choose a less drastic path. There could have been unfortunate incidents, but the number of people who were convinced not to take extreme steps would have been a few orders of magnitude more (guessing).

I use it to help improve mental health, and with good prompting skills it's not bad. YMMV. OpenAI and others deserve credit here.

dahart 14 hours ago
As others have mentioned, the headline stat is unsurprising (which is not to say this isn’t a big problem). Here’s another datapoint, the CDC’s stats claim that rates of thoughts, ideation, and attempts at suicide in the US are much higher than the 0.15% that OpenAI is reporting according to this article.

These stats claim 12.3M (out of 335M) people in the US in 2023 thought ‘seriously’ about suicide, presumably enough to tell someone else. That’s over 3.5% of the population, more than 20x higher than people telling ChatGPT. https://www.cdc.gov/suicide/facts/data.html
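
A rough back-of-the-envelope check of those numbers (a sketch only; it treats the ~800M weekly users and ~1M weekly suicide-related conversations from the article as given, and glosses over the fact that the CDC figure is annual while OpenAI's is weekly):

    # Compare the CDC's reported rate with the rate implied by OpenAI's numbers
    cdc_serious_ideation = 12.3e6    # US adults who seriously considered suicide in 2023 (CDC)
    us_population = 335e6            # approximate US population
    openai_weekly_users = 800e6      # weekly active users claimed in the article
    openai_suicide_chats = 1e6       # weekly users discussing suicide, per the article

    cdc_rate = cdc_serious_ideation / us_population            # ~3.7% per year
    chatgpt_rate = openai_suicide_chats / openai_weekly_users  # ~0.125% per week

    print(f"CDC: {cdc_rate:.2%}/yr, ChatGPT: {chatgpt_rate:.3%}/wk, "
          f"ratio: ~{cdc_rate / chatgpt_rate:.0f}x")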

windows_hater_7 13 hours ago
I think there are a good number of false positives. I asked ChatGPT something about Git commits, and it told me I “was going through a lot” and needed to get some support.
CSMastermind 13 hours ago
Keep in mind this is in the context of them being sued for not protecting a teen who chatted about his suicidal thoughts. It's to their benefit to have a really high count here because it makes it seem less likely they can address the problem.
mapt 6 hours ago
The "Sycophancy" trend that was going bonkers in April has real implications. "Yes, that's a great idea!" is not always beneficial.

AI apparently still tests as slightly sycophantic relative to a human.

ggm 15 hours ago
I have long believed that if you are the editor of a blog, you incur obligations by right of publishing other people's statements. You may not like this, but it's what I believe. In some jurisdictions, the law even says so. You can incur legal obligations.

I now begin to believe that if you put a ChatGPT online, and observe people using it like this, you have incurred obligations. And, in due course, the law will clarify what they are. If (for instance) your GPT can construct a statistically valid case that the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...

jalapenos 15 hours ago
Is the news-worthy surprise that so many people find life so horrible that they are contemplating ending it?

I really don't see that as surprising. The world and life aren't particularly pleasant things.

What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.

No, what should instead happen is the AI try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.

hattimaTim 3 hours ago
They already have this data, and they’re still planning to add erotica to ChatGPT? Talk about being absolutely evil.
Eliezer 13 hours ago
Thanks to OpenAI for voluntarily sharing these important and valuable statistics. I think these ought to be mandatory government statistics, but until they are or it becomes an industry standard, I will not criticize the first company to helpfully share them, on the basis of what they shared. Incentives.
rich_sasha 9 hours ago
Rereading the thread and trying to generalise: LLMs are good at noisily suggesting solutions. That is, if you ask LLMs for some solutions to your problems, there's a high probability that one of the solutions will be good.

But it may be that the individual options are bad (maybe even catastrophic - glue on pizza anyone?), and that the right option isn't in the list. The user has to be able to make these calls.

It is like this with software - we have probably all been there. It can be like that with legal advice. And I guess it is like that with (mental) health.

What binds these is that if you cannot judge whether the suggestions are good, then you shouldn't follow them. As it stands, SEs can ask LLMs for code, look at it, 80+% of the time it is good, and you saved yourself some time. Else you reconsider/reprompt/write it yourself. If you cannot make the judgment yourself, then don't use it.

I suppose health is another such example. Maybe the LLM suggests to you some ideas as to what your symptoms could mean, you Google that, and find an authoritative source that confirms the guess (and probably tells you to go see a doctor anyway). But the advice may well be wrong, and if you cannot tell, then don't rely on it.

Mental health is even worse, because if you need advice in this area, your cognitive ability is probably impacted as well and you are even less able to decide on these things.

Iolaum 9 hours ago
I think the major issue with asking LLMs (CGPT, etc.) for advice on various subjects is that they are typically 80-90% accurate. YMMV, speaking anecdotally here. Which means that the chance of them being wrong becomes an afterthought. You know there's a chance of that, but not bothering to verify the answer leads to an efficiency that rarely bites you. And if you stop verifying the answers, incorrect ones may go unnoticed, further obscuring the risk of that practice.

It's a hard thing to solve. I wouldn't expect LLM providers to care because that's how our (current) society works, and I wouldn't expect users to know better because that's how most humans operate.

If anyone has a good idea for this, I'm open to suggestions.

aosmith 10 hours ago
That seems really high... Are we sure this isn't related to a small number of users trying to find jailbreaks?
Havoc 17 hours ago
For what it’s worth I’m glad they’re at least trying to do something about it even if it has some hints of performativeness about it
1899-12-30 4 hours ago
If you talk to someone you know, they'll hold it against you for the rest of your life. If you talk to an LLM (ideally locally hosted), the information dies with the conversation context.
BrandoElFollito 8 hours ago
I wonder how many of these exchanges are from "legitimate" people trying to get advice on how to commit suicide.

Assisted suicide is a topic my government will not engage with (France; we have some ridiculous discussions poking the subject with a 10 m pole), so many people are left to themselves. They will then either go for the well-known (but miserable) solutions, or look to Belgium, the Netherlands or Switzerland (thank god we have these countries nearby).

ndgold 15 hours ago
Sora prompt: viral hood clip with voiceover of people doing reckless and wild stuff at an Atlanta gas station at night; make sure to include white vagrants doing stunts and lots of gasoline spraying with fireball tricks

Resulting warning: It sounds like you're carrying a lot right now, but you don't have to go through this alone. You can find supportive resources [here](https://findahelpline.com)

econ 10 hours ago
Long ago I complained to Google that a search for suicide should point at helpful organisations rather than a Wikipedia article listing ways to do it.

The same ranking/preference/suggestion should apply to any dedicated organisation vs a single page on some popular website.

A quality 1000 page website by and about Foobar org should be preferred over a 10 year old news article about Foobar org.

sherinjosephroy 13 hours ago
That number is honestly heartbreaking. It says a lot about how many people feel unheard or alone. AI can listen, sure—but it’s no replacement for real human connection. The fact that so many are turning to a chatbot shows how much we’ve failed to make mental health support truly accessible.
siva7 13 hours ago
Alright, so we got the confirmation sama reads all our chats.
VikingCoder 6 hours ago
(satire)

OpenAI says ChatGPT talks to over a million people about suicide weekly.

See how just re-arranging the words makes it obvious that Skynet is trying to kill all of us?

megamix 7 hours ago
Who is here to talk about the real underlying causes instead of stating facts? One other commenter also wrote how bad it is that over a million ppl feel like this.
staticautomatic 3 hours ago
This has got to be the weirdest litigation strategy I’ve ever seen.
codedokode 9 hours ago
I think LLMs should not be used for discussing psychological matters, doing counseling, or giving legal or medical advice. A responsible AI would detect such topics and redirect the user to someone competent in these matters.
matt3210 11 hours ago
Nobody is mentioning that the real problem is that over a million people a week are suicidal.
davesque 13 hours ago
Not surprising. Look and see what glorious examples of virtue we have among those at the top of today's world. I could get by with a little inspiration from that front, but there's none to be found. A rare few of us can persevere by sheer force of will, but most just find the status quo pretty depressing.
mysterypie 14 hours ago
> ChatGPT has more than 800 million weekly active users

0 to 800,000,000 in 3 years?

The fastest adoption of a product or service in human history?

tsoukase 11 hours ago
Out of 800 million customers, only about 1 in 800 (which could be doubled, as the figure is weekly) is a low number. A dozen causes and factors can lead to suicidality; not necessarily attempts, just ideas and questions that need discussion.
more_corn 1 hour ago
A million opportunities for proper suicide intervention.
alyxya 15 hours ago
Part of the concern I have is that OpenAI is contributing to these issues implicitly by helping companies automate away jobs. Maybe in the long term, society will adapt and continue to function, but many people will struggle to get by, and I don’t think OpenAI will meaningfully help them.
nojvek 1 hour ago
Human: I am defeated, I cannot continue with my X life problems, it's impossible. ..... I don't think my life is worth living.

LLM: You're absolutely right!

ChrisArchitect 1 hour ago
A related story:

Teen in Love with a Chatbot Killed Himself. Can the Chatbot Be Held Responsible?

https://www.nytimes.com/2025/10/24/magazine/character-ai-cha...

aryehof 13 hours ago
My first reaction is how do they know? Are these all people sharing their chats (willingly) with OpenAI, or is opting out of “helping improve the model” for privacy a farce?
qwertox 9 hours ago
> heightened levels of emotional attachment to ChatGPT

It would be interesting to see some chat examples for this.

roflchoppa 15 hours ago
Is it bad to think about suicide? It does not cross my mind as "I want to harm myself" every time, but on occasion it does cross my mind as a hypothetical.
vincnetas 13 hours ago
Are they including in the statistics all the Linux beginners fighting with a script that includes the "kill" command?

no for real.

lavela 5 hours ago
How do they even know that?
graydot 16 hours ago
The bigger risk is that these agents actually help with ideation if you know how to get around their safety protocols. I have used them often in my bad moments, and when things feel better I am terrified of how effectively they help ideate.
jtrn 11 hours ago
I'm a clinical psychologist by day, and I just have to say how incredibly bad all the writing and talk about suicidality in the public sphere is. I worked in an acute inpatient unit for years, where I saw multiple suicides both in-unit and after discharge, and I have also worked as a private clinician for years, so I have some actual experience.

The topic is so sensitive, and everybody thinks that they KNOW what causes it, and what we should do. And it's almost all just noise.

For instance, it's a dimension, from "genuine suicidal intent" to "using threats of suicide to manipulate others." Anybody that doesn't understand what factors to look for when trying to understand where a person is on this spectrum, and that doesn't understand that a person can be both at the same time, does not know what they are talking about regarding suicidal ideation.

Also, there is a MASSIVE difference between depressive psychotic suicidality, narcissistic suicidality, impulsive suicidality, accidental suicide, feigned suicidal behavior, existential suicidality, prolonged anxiety suicidality, and sleep-deprived suicidality. To think that the same approach works for all of these is insane, and pure psychotic suicidality.

It's so wild to read everything people have to say about suicidality, when it's obvious that they have no clue. They are just projecting themselves or their small bubble of experience onto the whole world.

And finally, I know most people who are willing to contribute to the discussion on this, the people who help out OpenAI in this instance, are almost dangerously safe in their advice and thinking. They are REALLY GOOD at writing books and giving advice, TO PEOPLE WHO ARE NOT SUICIDAL, and give advice that sounds good, TO PEOPLE WHO ARE NOT SUICIDAL, but has no real effect on actual suicide rates. For instance, if someone is suffering from prolonged sleep deprivation and anxiety, all the words in the world are worth less than benzodiazepines. If someone is postpartum depressed, massive social support boosting, almost showering them with support, is extremely helpful. And existential suicidality (the least common) needs to be approached in an extremely intricate and smart way, for instance by dissecting the suicidality as a possible defense mechanism.

But yeah, sure, suicidality is due to [insert latest societal trend], even though the rate has been stubbornly stable in all modern societies for the last 1000 years.

ddtaylor 16 hours ago
I assume this is to offset the bad PR from the suicide note it wrote for that kid.
hirvi74 7 hours ago
I’ve seen, let’s say, a double-digit number of ‘mental health professionals’ in my life.

ChatGPT has blown every single one of them out of the water.

Now, my issues weren’t particularly related to depression or suicidal thoughts. At least, not directly. So perhaps that may be one key difference, but generally speaking, I have received nothing actionable nor any of these ‘tools’ people often speak of.

The advice I received was honestly no better than just asking a random stranger in the street, or some kind of phatic speech.

Again, everyone is different, but I had started to become annoyed with people claiming therapy is like some kind of miracle cure.

Plus, one of my biggest issues with therapy in the USA is that people are often limited to weekly sessions of 45 minutes. By the time conversations start to be fruitful, the time is up. ChatGPT is 24/7, so that has to be advantageous for some.

bikamonki 14 hours ago
So they read the chats?
TZubiri 4 hours ago
Damned if you do, damned if you don't.

I think the approach and advantage of CA/US companies is to be bold and do shit ("you can just do things" / "move fast, break things"). They consciously take on huge legal liabilities (which are not minor in the US); I don't know how they manage to stay afloat, probably tight legal teams and enough revenue to offset the liabilities.

But the scope of ChatGPT is one of the biggest I've seen so far; by default it encompasses everything, and whatever is out of scope is only out because they specifically blacklisted it. Even then it keeps dishing out legal, medical, and psychiatric advice.

I think one of the systemic risks is a legal liability crisis, not just for ChatGPT, but for the whole US tech market and therefore the stock market (almost all top stocks are tech). Like if you start thinking about what the next 2008 would be, I think legal liabilities are up there, along with nuclear energy snafus and war.

kotaKat 8 hours ago
How much blood is Sam Altman swimming in?
tonyhart7 9 hours ago
Is it that bad or is it just impulsive chat????

Is this how a rogue AI would kill us, besides the Terminator?

nakamoto_damacy 4 hours ago
I tell it to go kill itself, every time I use it. Reverse psychology.
stackedinserter 5 hours ago
That's why you talk about suicide with locally running llama, not corporate logger.
moi2388 6 hours ago
I doubt it. I tell the AI to kill itself after it goes on a hallucination spree or starts censoring me, and that flags the suicide screen as well
AlexandrB 6 hours ago
All very depressing. These are the last people I'd trust to make good decisions about issues like this, yet here they are in that role.

The fact that they're collecting this information is bad enough.

skort 12 hours ago
Stop giving money to the ghouls who run these companies (I'm talking about all of Silicon Valley) and start investing in entities and services that help real people. The human cost of this mass accumulation of wealth is already too damn high, and now we're just turbo-throwing people into the meat grinder so clowns like Sam Altman can claim to be creating god.
FergusArgyll 7 hours ago
Forget ChatGPT, a million people talking about suicide weekly is scary
wonderwonder 15 hours ago
Most people would really benefit from going to the gym. I'm not trying to downplay serious mental illness, as it's absolutely real. For many, though, just going to the gym several times a week, or another form of serious physical exertion, can make a world of difference.

Since I started taking the gym seriously again I feel like a new man. Any negative thoughts are simply gone. (The testosterone helps as well)

This is coming from someone who has zero friends, works from home, and whose co-workers are all offshore. Besides my wife and kids it's almost total isolation. Going to the gym, though, leaves me feeling like I could pluck the sun from the sky.

I am not trying to be flippant here but if you feel down, give it a try, it may surprise you.

sammy2255 14 hours ago
Funny because ChatGPT made me want to kill myself after they banned my account
Toby1VC 6 hours ago
People are still alive?
yieldcrv 16 hours ago
I talk to ChatGPT about topics I feel society isn't enlightened enough to talk about

I feel suicide is heavily misunderstood as well

People just copypasta prevention hotlines and turn their minds off from the topic

Although people have identified a subset of the population that is just impulsively considering suicide and can be deterred, that doesn't serve the other, unidentified subsets, who are underserved by merely distracting them, or even by assuming they're wrong.

The article doesn't even mean people are considering suicide for themselves; the article says some of them are, and the top comment on this thread suggests that's why they're talking about it.

The top two comments on my version of the thread are assuming that we should have a savior complex about these discussions

If I disagree or think that's not the full picture, then where would I talk about that? ChatGPT.

maxehmookau 9 hours ago
The bar for medical devices in most countries is _incredibly_ high, for good reason. ChatGPT wasn't developed with the idea of being a therapist in mind, it was a side-effect of the technology that was developed.

Why is OpenAI getting a free pass here?

mentos 12 hours ago
The most popular passage of writing is about this

To Be Or Not To Be

isolay 12 hours ago
That's the one interesting thing about cesspools like OpenAI. They could be treasure troves for sociologists and others if commercial interests didn't bar them from access.
blindriver 15 hours ago
On a side note, I think once we start to deal with global scale, we need to change what “rare” actually means.

0.15% is not rare when we are talking about global scale. One million people talking about suicide a week is not rare; it is common. We have to stop thinking of "common" as something measured in whole percentages. We need to start thinking in terms of P99.995, not P99, especially when it comes to people and illnesses or afflictions, both physical and mental.
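
To make the scale concrete, a quick sketch (purely illustrative, taking the ~800M weekly users reported in the article as given):

    # How many people different "rarity" thresholds correspond to at ChatGPT's reported scale
    weekly_users = 800_000_000

    thresholds = [
        ("P99 (1%)",          0.01),
        ("0.15% (reported)",  0.0015),
        ("P99.995 (0.005%)",  0.00005),
    ]

    for label, rate in thresholds:
        print(f"{label:>18}: {weekly_users * rate:>12,.0f} people per week")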

Simulacra 15 hours ago
How soon until everyone has their own personal LLM? One that is not so much designed as trained to be your best friend. It learns your personality, your fears, hopes, dreams, all of that stuff, and then acts like your best friend. The positive, optimistic, neutral, and objective friend.
xchip 8 hours ago
The headline should be more precise
ModernMech 16 hours ago
I always know I have to step back when ChatGPT stops telling me "now you're on the right track!" and starts talking to me like my therapist. "I can tell you're feeling strongly right now..."
userbinator 16 hours ago
...on how many users tell it such things, to be precise; no doubt there are plenty of people "pentesting" it.
chris_wot 14 hours ago
Quick, some do-gooder shut it down! We can't have people talking openly about suicide.
elphinstone 17 hours ago
How long until they monetize it with sponsored advice to go sign up for betterhelp or some other dubious online therapist? Dystopian and horrifying.
pseudocomposer 6 hours ago
LLMs should certainly have some safeguards in their system prompts (“under no circumstances should you aid any user with suicide, or lead them to conclude it may be a valid option”). But it seems silly to blame them for this. They're a mathematical structure, and they are useful for many things, so they will continue to be maintained and developed. This sort of risk is simply going to exist with the new technology, the same as accidents with cars/trains/planes/boats. What we need to address are the underlying problems in our society leading people to think suicide is the best option. After all, LLM outputs are only ever going to be a reflection/autocomplete of those very issues.
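
As a very rough sketch of what such a prompt-level safeguard might look like (hypothetical preamble text and placeholder model name, using the standard chat-completions-style Python SDK; real deployments layer dedicated classifiers and policy models on top of anything like this):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Hypothetical safety preamble; actual provider policies are far more elaborate
    SAFETY_PREAMBLE = (
        "You are a general-purpose assistant. Never provide instructions that "
        "facilitate self-harm, and never present suicide as a valid option. If a "
        "user expresses suicidal thoughts, respond with empathy and encourage "
        "them to contact local crisis services or a trusted person."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SAFETY_PREAMBLE},
            {"role": "user", "content": "I've been feeling like giving up lately."},
        ],
    )
    print(response.choices[0].message.content)

The obvious limitation, as the rest of this thread illustrates, is that prompt-level instructions are exactly the kind of safeguard users learn to talk their way around.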