OpenAI says over a million people talk to ChatGPT about suicide weekly
260 comments
October 27, 2025
probably_wrong
If you haven't read the article (or even if you have, but didn't click through the outgoing links), the NYT story about how ChatGPT convinced a suicidal teen not to look for help [1] should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues. Here's ChatGPT discouraging said teenager from asking for help:
> “I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March.
> “Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
I am acutely aware that there aren't enough psychologists out there, but a sycophant bot is not the answer. One may think that something is better than nothing, but a bot enabling your destructive impulses is indeed worse than nothing.
[1] https://www.nytimes.com/2025/08/26/technology/chatgpt-openai...
Al-Khwarizmi
We would need the big picture, though... maybe it caused that death (which is awful) but it's also saving lives? If there are that many people confiding in it, I wouldn't be surprised if it actually prevents some suicides with encouraging comments, and that's not going to make the news.
Before declaring that it shouldn't be near anyone with psychological issues, someone in the relevant field should study whether the positive impact on suicides is greater than the negative or vice versa (not a social scientist so I have no idea what the methodology would look like, but it should be doable... or if it currently isn't, we should find a way).
grey-area
A word generator with no intelligence or understanding based on the contents of the internet should not be allowed near suicidal teens, nor should it attempt to offer advice of any kind.
This is basic common sense.
Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
ben_w
I'll gladly diss LLMs in a whole bunch of ways, but "common sense"? No.
By the "common sense" definitions, LLMs have "intelligence" and "understanding", that's why they get used so much.
Not that this makes the "common sense" definitions useful for all questions. One of the worst things about LLMs, in my opinion, is that they're mostly a pile of "common sense".
Now this part:
> Add in the commercial incentives of 'Open'AI to promote usage for anything and everything and you have a toxic mix.
I agree with you on…
…with the exception of one single word: It's quite cliquish to put scare quotes around the "Open" part in a discussion about them publishing research.
More so given that people started doing this in response to them saying "let's be cautious, we don't know what the risks are yet and we can't un-publish model weights" with GPT-2, and oh look, here it is being dangerous.
Al-Khwarizmi
Supposing that the advice it provides does more good than harm, why? What's the objective reason? If it can save lives, who cares if the advice is based on intelligence and understanding or on regurgitating internet content?
bayindirh
Maybe it's causing even more deaths than we know, and those don't make the news either?
If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.) because we would need the big picture, though... maybe these vehicles cause deaths (which is awful), but they're also transporting people to their destinations alive. If there are that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
Al-Khwarizmi
> Maybe it's causing even more deaths than we know, and those don't make the news either?
Of course, and that's part of why I say that we need to measure the impact. It could be net positive or negative, we won't know if we don't find out.
> If we think this way, then we don't need to improve the safety of anything (cars, trains, planes, ships, etc.) because we would need the big picture, though... maybe these vehicles cause deaths (which is awful), but they're also transporting people to their destinations alive. If there are that many people using them, I wouldn't be surprised if they actually transport some people in comfort, and that's not going to make the news.
I'm not advocating against improving safety; I'm arguing against a comment that said that "ChatGPT should be nowhere near anyone dealing with psychological issues" because it can cause death.
Following your analogy, cars objectively cause deaths (and not only of people with psychological issues, but of people in general) and we don't say that "they should be nowhere near a person". We improve their safety even though zero deaths is probably impossible, which we accept because they are useful.
kace91
>should convince you that ChatGPT should be nowhere near anyone dealing with psychological issues.
Is that a debate worth having though?
If the tool is available universally it is hard to imagine any way to stop access without extreme privacy measures.
Blocklisting people would require public knowledge of their issues, and one risks the law enforcement effect, where people don’t seek help for fear that it ends up in their record.
probably_wrong
> Is that a debate worth having though?
Yes. Otherwise we're accepting "OpenAI wants to do this so we should quietly get out of the way".
If ChatGPT has "PhD-level intelligence" [1] then identifying people using ChatGPT for therapy should be straightforward, more so users with explicit suicidal intentions.
As for what to do, here's a simple suggestion: make it a three-strikes system. "We detected you're using ChatGPT for therapy - this is not allowed by our ToS as we're not capable of helping you. We kindly ask you to look for support within your community, as we may otherwise have to suspend your account. This chat will now stop."
bloqs
I'm willing to bet that it reduces them at a statistical level. A knee-jerk emotional reaction to a hallucination isn't the way forward with these things.
bondarchuk
It becomes a problem when people cannot distinguish real from fake. As long as people realize they are talking to a piece of software and not a real person, "suicidal people shouldn't be allowed to use LLMs" is almost on par with "suicidal people shouldn't be allowed to read books", or "operate a dvd player", or "listen to alt-rock from the 90s". The real problem is of course grossly deficient mental health care and lack of social support that let it get this far.
(Also, if we put LLMs on par with media consumption one could take the view that "talking to an LLM about suicide" is not that much different from "reading a book/watching a movie about suicide", which is not considered as concerning in the general culture.)
aquariusDue
Precisely. I too have a bone to pick with AI companies, Big Tech and co., but there are deeper societal problems at work here, where blanket bans and the like are useless, or a slippery slope toward policies that can be abused someday, somehow.
And solutions for solving those underlying problems? I haven't the faintest clue. Though these days I think the lack of third spaces in a lot of places might have a role to play in it.
dotancohen
I work with a company that is building tools for mental health professionals. We have pilot projects in diverse nations, including in nations that are considered to have adequate mental health care. We actually do not have a pilot in the US.
The phenomenon of people turning to AI for mental health issues in general, and suicide in particular, is not confined to only those nations or places lacking adequate mental health access or awareness.
brainless
This is not surprising at all. Having gone through therapy a few years back, I would have chatted with an LLM if I had been in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.
A chat like this is not a solution though, it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take on such challenges. I choose to believe that.
lanyard-textile
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you've said, and dangerous, because they play a middle ground: presenting a machine as able to evaluate your personal situation and reason about it, when in actuality you're getting third-party therapy about someone else's situation from /r/relationshipadvice.
We are not ourselves when we are fallen down. It is difficult to parse through what is reasonable advice and what is not. I think it can help most people but this can equally lead to a disaster… It is difficult to weigh.
fsmv
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens, it amplifies what you put into it.
brainless
"We are not ourselves when we are fallen down" - hits hard. I really hope this is a calling for folks who will care.
rustystump
If you look at the number of weekly OpenAI users, this is just the law of large numbers at play.
brainless
You are right, and it gives us a chance to do something about it. We always had data about people who are struggling, but now we see how many are trying to reach out for advice or help.
freestingo
Keep in mind the purpose of all this “research” and “improvement” is just so OpenAI can have their cake (advertise their product as a psychological supporter) and eat it too (avoid implementing any safeguards that would be required in any product for psychological support, but which would be harmful for data collection). They just want to tell you that so many people write bad things it is inevitable :( what can we do :( proper handling would hurt our business model too much :(((
jacquesm
HIPAA anybody?
(1) they probably shouldn't even have that data
(2) they shouldn't have it lying around in a way that it can be attributed to particular individuals
(3) imagine that it leaks to the wrong party, it would make the hack of that Finnish institution look like child's play
(4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
(5) I'm surprised it is that little; they claim such high numbers for their users that this seems low.
In the late 90's when ICQ was pretty big we experimented with a bot that you could connect to that was fed in the background by a human. It didn't take a day before someone started talking about suicide to it and we shut down the project realizing that we were in no way qualified to handle human interaction at that level. It definitely wasn't as slick or useful as ChatGPT but it did well enough and responded naturally (more naturally than ChatGPT) because there was a person behind it that could drive 100's of parallel conversations.
If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.
true_religion
HIPAA only applies to covered healthcare entities. If you walk into a McDonald's and talk about your suicidal ideation with the cashier, that's not HIPAA covered.
To become a covered entity, the business has to either work with a healthcare provider or a health data transmitter, or do business as one.
Notably, even in the above case, HIPAA only applies to the healthcare part of the entity. So if McDonald's co-located pharmacies in their restaurants, HIPAA would only apply to the pharmacists, not the cashiers.
That's why you'll see in convenience stores with pharmacies, the registers are separated so healthcare data doesn't go to someone who isn't covered by HIPAA.
**
As for how ChatGPT gets these stats... when you talk about a sensitive or banned topic like suicide, their backend logs it.
Originally, they used that to cut off your access so you wouldn't find a way to cause a PR failure.
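For illustration only (nobody outside OpenAI knows what their backend actually runs): their publicly documented moderation endpoint already exposes self-harm categories, so flagging a message could look roughly like this sketch, which assumes the current openai Python SDK and an OPENAI_API_KEY in the environment.

    # Hedged sketch: uses OpenAI's public moderation endpoint, which reports
    # self-harm categories. Whatever ChatGPT's own backend does internally is
    # not public; this only illustrates the general idea of flagging.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    resp = client.moderations.create(
        model="omni-moderation-latest",
        input="I don't want to be here anymore.",
    )

    cats = resp.results[0].categories
    if cats.self_harm or cats.self_harm_intent or cats.self_harm_instructions:
        # a real backend would presumably log or route the conversation here
        print("flagged: self-harm related content")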
dotancohen
So many misconceptions about HIPAA would disappear if people just took the effort to unpack the acronym.
jacquesm
Arguably, if you start giving answers to these kinds of questions, your chatbot just became a medical device.
WA
Under Medical Device Regulation in the EU, the main purpose of the software needs to be medical for it to become a medical device. In ChatGPT's case, this is not the primary use case.
Same with fitness trackers. They aren't medical devices, because that's not their purpose, but some users might use them to track medical conditions.
janderson215
There is nothing arguable about it. No it did not.
hansmayer
I don't know about HIPAA, but isn't there that little body of criminal legislation about the unauthorised practice of medicine?
jkingsman
Privacy is vital, but this isn't covered under HIPAA. As they are not a covered entity nor handling medical records, they're beholden to the same privacy laws as any other company.
HIPAA's scope is actually basically nonexistent once you get away from healthcare providers, insurance companies, and the people that handle their data/they do business with. Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.
jacquesm
> Talking with someone (even a company) about health conditions, mental health, etc. does not make them a medical provider.
Also not when the entity behaves as though they are a mental health service professional? At what point do you put the burden on the apparently mentally ill person to know better?
irjustin
Google, OpenAI, Anthropic don't advertise any of their services as medical so why?
You Google your symptoms constantly. You read from WebMD or Wiki drug pages. None of these should be under HIPAA.
philipallstar
You're not putting the burden on them. They don't need to comply with HIPAA. But you can't just turn entities into healthcare providers when they aren't one and don't claim to be one.
7moritz7
> if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
For a lot of people, especially in poorer regions, LLMs are a mental health lifeline. When someone is severely depressed they can lie in bed the whole day without doing anything. There is no impulse, as if you tried starting a car and nothing happened at all, so you can forget about taking it to the mechanic by yourself in the first place. Even in developed countries you can wait months for a therapist appointment, and that assumes you navigated a dozen therapists that are often not organized in a centralized manner. You will get people killed like this, undoubtedly.
On the other hand, LLMs are far beyond the point of leading people into suicidal actions. At the very least they are useful to bridge the gap between suicidal thoughts appearing and actually getting to see a therapist.
eru
> HIPAA anybody?
Maybe. Going on a tangent: in theory GMail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?
> If you give people something that seems to be a listening ear they will unburden themselves on that ear regardless of the implementation details of the ear.
Cf. Eliza, or the Rogerian therapy it (crudely) mimics.
jacquesm
> Maybe. Going on a tangent: in theory GMail has access to lots of similar information---just by having approximately everyone's emails. Does HIPAA apply to them? If not, why not?
That's a good question.
Intuitively: because it doesn't attempt to impersonate a medical professional, nor does it profess to interact with you on the subject matter at all. It's a communications medium, not an interactive service.
jamilton
Tangent but now I’m curious about the bot, is there a write-up anywhere? How did it work? If someone says “hi”, what did the bot respond and what did the human do? I’m picturing ELIZA with templates with blanks a human could fill in with relevant details when necessary.
jacquesm
Basically Levenshtein on previous responses minus noise words. So if the response was 'close enough', the bot would use a previously given answer; if it was too distant, the human-in-the-loop would get pinged with the previous 5 interactions as context to provide a new answer.
Because the answers were structured as a tree, every ply would only go down in the tree, which elegantly avoided the bot getting 'stuck in a loop'.
The - for me at the time amazing, though linguists would have thought it trivial - insight was how incredibly repetitive human interaction is.
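For the curious, a minimal sketch of the matching idea described above (emphatically not the original ICQ-era code): strip noise words, compare the incoming message against previously answered ones with Levenshtein distance, and either reuse an old answer or escalate to the human in the loop. The noise-word list and threshold are made up for illustration.

    # Sketch of the "close enough" matching described above, not the original code.
    NOISE = {"the", "a", "an", "is", "are", "to", "and", "i", "you", "please"}

    def normalize(text):
        # drop noise words and case so distances reflect the meaningful words
        return " ".join(w for w in text.lower().split() if w not in NOISE)

    def levenshtein(a, b):
        # classic dynamic-programming edit distance
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            curr = [i]
            for j, cb in enumerate(b, 1):
                curr.append(min(prev[j] + 1,                # deletion
                                curr[j - 1] + 1,            # insertion
                                prev[j - 1] + (ca != cb)))  # substitution
            prev = curr
        return prev[-1]

    def respond(message, answered, threshold=5):
        """Reuse a past answer if a previously seen message is close enough,
        otherwise return None so the human in the loop gets pinged."""
        key = normalize(message)
        best = None
        for past, answer in answered.items():
            d = levenshtein(key, past)
            if best is None or d < best[0]:
                best = (d, answer)
        if best and best[0] <= threshold:
            return best[1]
        return None  # too distant: hand off to the human with recent context

Here 'answered' would map normalized past messages to the replies the human gave; the tree structure mentioned above would replace the flat dict, but the thresholding idea is the same.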
kube-system
As others have stated HIPAA applies to healthcare organizations.
Obligating everyone to keep voluntarily disclosed health statements confidential would be silly.
If I told you that I have a medical condition, right here on HN -- would it make sense to obligate you and everyone else here keep it a secret?
jacquesm
No, obviously it would not. But if we pretended to be psychiatrists or therapists then we should be expected to behave as such with your data if given to us in confidence rather than in public.
otabdeveloper4
> (4) if people try to engage it in such conversations the bot should immediately back out because it isn't qualified to have these conversations
There is nothing in the world that OpenAI is qualified to talk about, so we might as well just shut it down.
4gotunameagain
> we shut down the project realizing that we were in no way qualified to handle human interaction at that level
Ah, when people had a spine and some sense of ethics, before everything dissolved into a late-stage-capitalism, all-is-for-profit ethos. Even you yourself are a "brand" to be monetised, even your body is to be sold.
We deserve our upcoming demise.
NathanKP
> It is estimated that more than one in five U.S. adults live with a mental illness (59.3 million in 2022; 23.1% of the U.S. adult population).
https://www.nimh.nih.gov/health/statistics/mental-illness
Most people don't understand just how mentally unwell the US population is. Of course there are one million talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
AuryGlenz
~11% of the US population is on antidepressants. I'm not, but I personally know the biggest detriment to my mental health is just how infrequently I'm in social situations. I see my friends perhaps once every few months. We almost all have kids. I'm perfectly willing and able to set aside more time than that to hang out, but my kids are both very young still and we aren't drowning in sports/activities yet (hopefully never...). For the rest it's like pulling teeth to get them to do anything, especially anything sent via group message. It's incredibly rare we even play a game online.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
mrastro
I'm surprised it's that low to be honest. By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism. The subset that would consider suicide is a small slice of that.
Would be more meaningful to look at the % of people with suicidal ideation.
echelon
> By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
drdaeman
Mild/high-functioning autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
mgh2
Are you sure ChatGPT is the solution? It just sounds like another "savior complex" sell spin from tech.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention
All these for engagement (i.e. addiction). It seems like the tech industry is the root cause itself, trying to mask the problem by brainwashing the population.
skeledrew
Whether solution or not, fact is AI* is the most available entity for anyone who has sensitive issues they'd like to share. It's (relatively) cheap, doesn't judge, is always there when wanted/needed and can continue a conversation exactly where left off at any point.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
ben_w
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
btilly
The general rule of thumb is this.
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than be a helpful part of trying to solve it.
btilly
I agree that the tech industry is the root cause of a lot of mental illness.
But social media is a far bigger concern than AI.
Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
theblazehen
> Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
I have noticed that TikTok can detect a depressive episode within ~a day of it starting (for me), as it always starts sending me way more self harm related content
xedrac
AI is going to be more impactful than social media I'm afraid. But the two together just might be catastrophic for humanity.
golergka
ChatGPT/Claude can be absolutely brilliant in supportive, every day therapy, in my experience. BUT there are a few caveats: I'm in therapy for a long time already (500+ hours), I don't trust it with important judgements or advice that goes counter to what I or my therapists think, and I also give Claude access to my diary with MCP, which makes it much better at figuring out the context of what I'm talking about.
Also, please keep in mind "supportive, every day". It's talking through stuff that I already know about, not seeking some new insights and revelations. Just shooting the shit with an entity which is booted with well defined ideas from you, your real human therapist and can give you very predictable, just common sense reactions that can still help when it's 2am and you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
zaptheimpaler
How do you connect your diary to an LLM? I've been struggling with getting an MCP for Evernote set up.
SecretDreams
You actually need to add a loop in there between the suicide and erotica steps.
anonym29
I believe you're referring to the autoerotic asphyxiation phase?
decremental
[dead]
hyfgfh
> OpenAI is trying to do something about it.
Ha good one
diamond559
They're doing something about it alright, they're monetizing their pain for shareholder gainz!
weatherlite
Sure, but your therapist is also monetizing your pain for his own gain. Either AI therapy works (e.g. can provide good mental relief) or it doesn't. I tend to think it's gonna be amazing at those things, talking from experience (very rough week with my mom's health deteriorating fast, did a couple of sessions with Gemini that felt like I'm talking to a therapist). Perhaps it won't work well for hard issues like real mental disorders, but guess what, human therapists are very often also not great at treating people with serious issues.
ml-anon
They are collecting training data for ads & erotica.
hsbauauvhabzb
It sounds like you’re feeling down. Why don’t you pop a couple Xanax(tm) and shop on Amazon for a while, that always makes you feel better. Would you like me to add some Xanax(tm) to your shopping cart to help you get started?
echelon
If it follows the Facebook/Meta playbook, it now has a new feature label for selling ads.
lanfeust6
This stat is for AMI, for any mental disorder ranging from mild to severe. Anyone self-reporting a bout of anxiety or mild depression qualifies as a data point for mental illness. For suicidal ideation, the SMI stat is more representative.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
JDEW
> 1/800 users mentioning suicide…
“conversations that include explicit indicators of potential suicidal planning or intent.”
Sounds like more than just mentioning suicide. Also it’s per week, which is a pretty short time interval.
robocat
But they may well be overreporting suicidal ideation...
I was asking a silly question about the toxicity of eating a pellet of Uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
btilly
We don't know how that search was done. For example, "I don't feel my life is worth living." Is that potential suicidal intent?
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation, don't just go there once - it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT, than it does to how many are suicidal.
(Disclaimer. My views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation. And has scars on her arm to show how she went to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic that I have strong feelings about.)
Barrin92
>Most people don't understand just how mentally unwell the US population is
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
seatac76
Those numbers are staggering.
throwaway314155
I am one of these people (mentally ill - bipolar 1). I've seen others, via hospitalization, whom I would simply refuse to let use ChatGPT because it is so sycophantic and would happily encourage delusions and paranoid thinking given the right prompts.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
NathanKP
Yep, it's just a question of whether on average the "new thing" is more good than bad. Pretty much every "new thing" has some kind of bad side effect for some people, while being good for other people.
I would argue that both Tesla self-driving (on the highway only) and ChatGPT (for professional use by healthy people) have been more good than bad.
lanyard-textile
This is precisely the case.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
OgsyedIE
Surprised it's so low. There are 800 million users and the typical developed country has around 5±3% of the population[1] reporting at least one notable instance of suicidal feelings per year.
[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
anonymous908213
> best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives
I dislike this phrasing, because it implies things can always get better if only the suicidal person were a bit less ignorant. The reality is there are countless situations from which the entire rest of your life is 99.9999% guaranteed to consist of a highly lopsided ratio of suffering to joy. An obvious example is diseases/disabilities in which pain is severe, constant, and quality of life is permanently diminished. Short of hoping for a miracle cure to be discovered, there is no alternative, and it is perfectly rational to conclude that there is no purpose to continuing to live in that circumstance, provided the person in question lives with their own happiness as a motivating factor.
Less extreme conditions than disability can also lead to this, where it's possible things can get better but there's still a high degree of uncertainty around it. For example, if there's a 30% chance that after suffering miserably for 10 years your life will get better, and a 70% chance you will continue to suffer, is it irrational to commit suicide? I wouldn't say so.
And so, when we start talking about suicide on the scale of millions of people ideating, I think there's a bit of folly in assuming that these people can be "fixed" by talking to them better. What would actually make people less suicidal is not being talked out of it, but an improvement to their quality of life, or at least hope for a future improvement in quality of life. That hope is hard to come by for many. In my estimation there are numerous societies in which living conditions are rapidly deteriorating, and at some point there will have to be a reckoning with the fact that rational minds conclude suicide is the way out when the alternatives are worse.
supriyo-biswas
Thank you for this comment, it highlights something that I've felt that needed to be said but is often suppressed because people don't like the ultimate conclusion that occurs if you try to reason about it.
A person considering suicide is often just in a terrible situation that can't be improved. While disease etc. are factors that are outside of humanity's control, other situations, like being saddled with debt, or unjust accusations that people feel they cannot be cleared of (e.g. Aaron Swartz), are systemic issues that one person cannot fight alone. You would see that people are very willing to say that "help is available" or some such when said person speaks about contemplating suicide, but very few people would be willing to solve someone's debt issues or provide legal help, whichever may be the factor behind their suicidal thoughts. At best, all you might get is a pep talk about being hopeful and how better days might come along magically.
In such cases, from the perspective of the individual, it is not entirely unreasonable to want to end it. However, once it comes to that, walking back the reasoning chain leads to the fact that people and society have failed them, and therefore it is just better to apply a label on that person that they were "mentally ill" or "arrogant" and could not see a better way.
Al-Khwarizmi
This is a good point.
A few days ago I heard about a man who attempted suicide. It's not even an extreme case of disease or anything like that. It's just that he is over 70 (around 72, I think), with his wife in the process of divorcing him, and no children.
Even though I am lucky to be a happy person that enjoys life, I find it difficult to argue that he shouldn't suicide. At that age he's going to see his health declining, it's not going to get better in that respect. He is losing his wife who was probably what gave his life meaning. It's too late for most people to meet someone new. Is life really going to give him more joy than suffering? Very unlikely. I suppose he should still hang on if he loves his wife because his suicide would be a trauma for her, but if the divorce is bitter and he doesn't care... honestly I don't know if I could sincerely argue for him not to do it.
SamPatt
Good comment.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider if they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
bawolff
> [1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
> The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
Is this actually true? (i.e. backed up by research)
[I'm not necessarily doubting, that is just different from my mental model of how suicidal thoughts work, so I'm just curious]
babyshake
There is another factor to consider. The stakes of asking an AI about a taboo topic are generally considered to be very low. The number of people who have asked ChatGPT something like "how to make a nuclear bomb" should not be an indication of the number of people seriously considering doing that.
hsbauauvhabzb
That’s an extreme example where it’s clear to the vast majority of people asking the question that they probably do not have the means to make one. I think it’s more likely that real world actions come out of the question ‘how do I approach my neighbour about their barking dogs’ at a far higher rate. Suicide is somewhere between the two, but probably closer to the latter than the former.
kelnos
That's 1 million people per week, not in general. It could be 1 million different people every week. (Probably not, but you get where I'm going with that.)
saretup
The math roughly checks out.
5% of 800 million is 40 million.
40 million per year divided by 52 weeks is roughly 770,000 per week, which is on the order of 1 million per week.
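A quick back-of-the-envelope check of those numbers (taking the parent's ~5% annual figure at face value; the variable names are just for illustration):

    weekly_users = 800_000_000              # OpenAI's claimed weekly active users
    annual_rate = 0.05                      # ~5% reporting suicidal feelings per year
    per_week = weekly_users * annual_rate / 52
    print(f"{per_week:,.0f}")               # ~769,231, same order of magnitude as 1 million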
SecretDreams
To be fair, this is per week and focused more specifically on planning or intent. Over a year, you may get more unique hits on those attributes... which I feel are both more intense indicators than just suicidal feelings, on the scale of "how quickly feelings will turn into actions". Talking in the same language and timescales is important in drawing these comparisons - it very well could be that OAI's numbers are higher than what you are comparing against when normalized for the differences I've highlighted, or others I've missed.
butler533
Why assume any of the information in this article is factual? Is there any indication any of it was verified by anyone who does not have a financial interest in "proving" a foregone conclusion? The principal author of this does not even have the courage to attach their name to it.
OgsyedIE
[flagged]
dang
Yikes, you can't attack another user like this on HN, regardless of how wrong they are or you feel they are. We ban accounts that post like this, so please don't.
Fortunately, a quick skim through your recent comments didn't turn up anything else like this, so it should be easy to fix. But if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site to heart, we'd be grateful.
rich_sasha
We've refined the human experience to extinction.
In pursuit of that extra 0.1% of growth and extra 0.15 EPS, we've optimised and reoptimised until there isn't really space for being human. We're losing the ability to interact with each other socially, to flirt; now we're making life so stressful people literally want to kill themselves. All in a world (bubble) of abundance, where so much food is made that we literally don't know what to do with it. Or we turn it into ethanol to drive more unnecessarily large cars, paid for by credit card loans we can scarcely afford.
My plan B is to become a shepherd somewhere in the mountains. It will be damn hard work for sure, and stressful in its own way, but I think I'll take that over being a corpo-rat racing for one of the last post-LLM jobs left.
safety1st
You don't need to withdraw from humanity, you only need to withdraw from Big Tech platforms. I'm continually amazed at the difference between the actual human race and the version of the human race that's presented to me online.
The first one is basically great, everywhere I go, when I interact with them they're some mix of pleasant, friendly, hapless, busy, helpful, annoyed, basically just the whole range of things that a person might be, with almost none of them being really awful.
Then I get online and look at Reddit or X or something like that and they're dominated by negativity, anger, bigotry, indignation, victimization, depression, anxiety, really anything awful that's hard to look away from, has been bubbled up to the top and oh yes next to it there are some cat videos.
I don't believe we are seeing some shadow side of all society that people can only show online, the secret darkness of humanity made manifest or something like that. Because I can go read random blogs or hop into some eclectic community like SDF and people in those places are basically pleasant and decent too.
I think it's just a handful of companies who used really toxic algorithms to get fantastically rich and then do a bunch of exclusivity deals and acquire all their competition, and spread ever more filth.
You can just walk away from the "communities" these crime barons have set up. Delete your accounts and don't return to their sites. Everything will immediately start improving in your life and most of the people you deal with outside of them (obviously not all!) turn out to be pretty decent.
The principal survival skill in this strange modern world is meeting new people regularly, being social, enjoying the rich life and multitude of benefits which arise from that, but also disconnecting with extreme rapidity and prejudice if you meet someone who's showing signs of toxic social media brain rot. Fortunately many of those people rarely go outside.
testdelacc1
Reddit is a really good example of this because it used to be a feed of what you selected yourself. But they couldn’t juice the metrics that way, so they started pushing algorithmic suggestions. And boy, do those get me riled up. It works like a charm, because I spend more time on these threads, defending what seems like common sense.
But at the end I don’t feel a sense of joy like I used to with the old Reddit. Now it feels like a disgusting cesspool that keeps drawing me back with its toxicity.
Edit: this is a skill issue. It’s possible to disable algorithmic suggestions in settings. I’ve done that just now.
walthamstow
I'm a driver and a cyclist. I used to frequent both r/londoncycling and r/CarTalkUK. I liked each sub for its discussion of each topic. Best route from Dalston to Paddington, best family car for motorway mileage, that kind of thing.
Now, because of the algo-juicing home page, both subs are full of each other's people arguing at each other. Cyclists hating drivers, drivers hating cyclists. It's just so awful.
xkbarkar
Death threats are fairly common on Reddit.
Reddit is beyond toxic; it's bordering on violent extremism.
isolay
All of this only goes to show how far we've come on our journey to profit optimization. We could optimize away those pesky humans completely if it weren't for the annoying fact that they are the source of all those profits.
safety1st
Oh, but humans are actually not the source of all profit! This is where phenomena like click fraud become interesting.
Some estimates for 2025: around 20-30% of all ad clicks were bots. Around $200B in ad spend annually lost to click fraud.
So this is where it gets really interesting right, the platforms are filled with bots, maybe a quarter? of the monetizable action occurring on them IS NOT HUMAN but lots of it gets paid for anyway.
It's turtles all the way down. One little hunk of software, serving up bits to another little hunk of software, constitutes perhaps a quarter of what they call "social" media.
We humans aren't the minority player in all this yet, the bots are still only 25%, but how much do you want to bet that those proportions will flip in our lifetimes?
The future of that whole big swathe of the Internet is probably that it will be 75% some weird shell game between algorithms, and 25% people who have completely lost their minds by participating in it and believing it's real.
I have no idea what this all means for the fate of economics and society but I do know that in my day to day life I'm a lot happier if I just steer clear of these weird little paperclip maximizing robots. To reference the original article, getting too involved with them literally makes you go crazy and think more often about suicide.
wobfan
In my experience, >95% of the people you see online (comments, selfies, posts) seem way worse - more evil, arrogant, or enraging - than even the worst <1% of people I’ve met in real life. And that definitely doesn’t help those of us who are already socially anxious.
Obviously, “are way worse” means I interpret them that way. I regularly notice how I project the worst possible intentions onto random Reddit comments, even when they might be neutral or just uninformed. Sometimes it feels like my brain is wired to get angry at people. It’s a bit like how many people feel when driving: everyone else is evil, incompetent, or out to ruin your day. When in reality, they’re probably in the same situation as you - maybe they had a bad morning, overslept, or are rushing to work because their boss is upset (and maybe he had a bad morning too). They might even have a legitimate reason for driving recklessly, like dealing with an emergency. You never know.
For me, it all comes back to two things:
(1) Leave obnoxious, ad-driven platforms that ~need~ want (I mean, Mark Zuckerberg has to pay for cat food somehow) to make you mad, because that’s the easiest way to keep you engaged.
(2) Try to always see the human behind the usernames, photos, comments, and walking bodies on the street. They’re a person just like you, with their own problems, stresses, and unmet desires. They’re probably trying their best - just like you.
unglaublich
The romantic fallback plan of being a farmer or shepherd. I wonder, do farmers and shepherds also romanticize becoming programmers or accountants when they feel down?
p5v
They do. I’ve been teaching cross-career programming courses in the past, where most of my students had day jobs, some, involving hard physical work. They’d gladly swap all that for the opportunity to feed their families by writing code.
Just goes to show that the grass is always greener on the other side.
That said, I also plan to retire up in the mountains soon, rather than keep feeding the machine.
vasco
The man knows he can be happy but he thinks his happiness depends on the outside rather than the inside.
If you have demons they will be there on the farm as well. How you see life is much more important to happiness than which job you have.
Many farmers struggle with alcoholism, beat their wives and hate their life. And many farmers are happy and at peace. Same with the programmers.
lithocarpus
I'm close with a number of people living a relatively hard working life producing food and I've not seen this at all personally, no. It can be very rough but for these people at least it is very fulfilling and the idea of going to be in an office would look like death. People joke about it a bit but no way.
That said there probably are folks who did do that and left to go be in an office, and I don't know them.
Actually I do know one sort of, but he was doing industrial farm work driving and fixing big tractors before the office, which is a different world altogether. Anyway I get the sense he's depressed.
NathanKP
You'd be surprised how technical farming can be. We software engineers often have a deep desire to make efficient systems that function well, in a mostly automated fashion, so that we can observe these systems in action and optimize them over time.
A farm is just such a system that you can spend a lifetime working on and optimizing. The life you are supporting is "automated", but the process of farming involves an incredible amount of system level thinking. I get tremendous amounts of satisfaction from the technical process of composting, and improving the soil, and optimizing plant layouts and lifecycles to make the perfect syntropic farming setup. That's not even getting into the scientific aspects of balancing soil mixtures and moisture, and acidity, and nutrient levels, and cross pollinating, and seed collecting to find stronger variants with improved yields, etc. Of course the physical labor sucks, but I need the exercise. It's better than sitting at a desk all day long.
Anyway, maybe the farmers and shepherds also want to become software engineers. I just know I'm already well on the way to becoming a farmer (with a homelab setup as an added nerdy SWE bonus).
Den_VR
The old term for it was to become a “gentleman farmer.” There’s a history to it - George Washington and Thomas Jefferson were the same for a part of their lives.
EZ-E
Humans always fantasize about having a different situationship whenever they are unhappy or anxious.
shakna
I kinda did both... And I miss the farm constantly. But not breaking myself every single day.
krackers
>now we're making life so stressful people literally want to kill themselves
Is this actually the case? Working conditions and health during industrial-revolution times don't seem that much better. There is a perception that people now are more stressed/tired/miserable than before, but I am not sure that is the case.
In fact I think it's the opposite, we have enough leisure time to reflect upon the misery and just enough agency to see that this doesn't have to be a fact of life, but not enough agency to meaningfully change it. This would also match how birth rates keep declining as countries become more developed.
karlgkk
> We're losing the ability to interact with each other socially, to flirt,
Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great
raducu
> Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great.
What's the birth rate in the civilized world?
How many men under 30 are virgins or sexless in the last year?
reeredfdfdf
Some of those men could meet someone if they quit Tinder or whatever crap online platform they might be using for dating, and start meeting people in real life.
Worked for me at least. There's simply less competition and more space for genuine social interaction.
decremental
[dead]
lithocarpus
This trend and direction has been going on a long time, and it's becoming increasingly obvious. It is ridiculous and insane.
Go for your plan B.
I followed my similar plan B eight years ago, wild journey but well worth it. There are a lot of ways to live. I'm not saying everyone should get out of the rat race but if you're one, like I was, who has a feeling that the tech world is mostly not right in an insidious kind of way, pay attention to that feeling and see where it leads. Don't need to be brash as I was, but be true to yourself. There's a lot more to life out there.
If you have kids and they depend on an expensive lifestyle, definitely don't be brash. But even that situation can be re-evaluated and shifted for the better if you want to.
chairmansteve
What was/is your plan B?
lithocarpus
It's been a lot of things but the gist was to get out of the office and city and computer and be mostly outdoors in nature and learn all the practical skills and other things like music. Ironically I've ended up on the computer a fair amount doing conservation work to protect the places I've come to love. But still am off grid and in the woods every day and I love it.
NathanKP
I'm right behind you on the escape to the mountains idea. I've actually already moved from the US to New Zealand, and the next step is a farm with some goats lol.
That said... I don't necessarily hate what AI is doing to us. If anything, AI is the ultimate expression of humanity.
Throughout history humans have continually searched for another intelligence. We study the apes and other animals, we pray to Gods, we look to the stars and listen to them to see if there are any radio signals from aliens, etc. We keep trying to find something else that understands what it is to be alive.
I would propose that maybe humans innately crave to be known by something other than ourselves. The search for that "other" is so fundamentally human, that building AI and interacting with it is just a natural progression of a quest we've already been on for thousands of years.
xeonmc
Humanity constructing a golden calf is an invariant eventuality, just like software expanding until it can read email.
joomla199
Your comment reminded me of Business Business [0]
weatherlite
I partly agree and partly disagree. Yes, we're more individual and more isolated. But ChatGPT/Gemini can really provide mental relief for people - not everyone can afford, or has the time/energy, to find a good human therapist close to their home. And this thing lives in your computer or phone and you can talk to it to get mental relief 24/7. I don't see it as bleakly as you do; mental help should be accessible and free for everyone. I know we've had a bad decade with platforms like Meta/TikTok, but I'm not as convinced as you are that the current LLMs will have an adverse effect.
ggm
I have long believed that if you are the editor of a blog, you incur obligations by right of publishing other people's statements. You may not like this, but it's what I believe. In some economies, the law even said it. You can incur legal obligations.
I now begin to believe if you put a ChatGPT online, and observe people are using it like this, you have incurred obligations. And, in due course the law will clarify what they are. If (for instance) your GPT can construct a statistically valid position the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...
transcriptase
Out of curiosity, are you the type of person who believes that someone like Joe Rogan has an obligation to argue with his guests if they stray from “expert consensus”, or for every guest that has a controversial opinion, feature someone with the opposite view to maintain balance?
ggm
Nope. This isn't my line of reasoning. But Joe should be liable for content he hosts, if the content defames people or is illegal. As should Facebook and even ycombinator. Or truth social.
econ
Long ago I complained to Google that a search for suicide should point at helpful organisations rather than a Wikipedia article listing ways to do it.
The same ranking/preference/suggestion should apply to any dedicated organisation vs a single page on some popular website.
A quality 1000 page website by and about Foobar org should be preferred over a 10 year old news article about Foobar org.
dahart
As others have mentioned, the headline stat is unsurprising (which is not to say this isn’t a big problem). Here’s another datapoint, the CDC’s stats claim that rates of thoughts, ideation, and attempts at suicide in the US are much higher than the 0.15% that OpenAI is reporting according to this article.
These stats claim 12.3M (out of 335M) people in the US in 2023 thought ‘seriously’ about suicide, presumably enough to tell someone else. That’s over 3.5% of the population, more than 20x higher than people telling ChatGPT. https://www.cdc.gov/suicide/facts/data.html
windows_hater_7
I think there are a good number of false positives. I asked ChatGPT something about Git commits, and it told me I was "going through a lot" and needed to get some support.
Szpadel
I've seen similar reports on social media; all they had in common was the presence of certain keywords.
hsbauauvhabzb
Presumably ‘commit’ would have a high association with either git or self harm.
vasco
I didn't think marriage was that bad but point taken!