A teen was suicidal. ChatGPT was the friend he confided in
115 comments · August 26, 2025
podgietaru
I have looked suicide in the eyes before. And reading the case file for this is absolutely horrific. He wanted help. He was heading in the direction of help, and he was stopped from getting it.
He wanted his parents to find out about his plan. I know this feeling. It is the clawing feeling of knowing that you want to live, despite feeling like you want to die.
We are living in such a horrific moment. We need these things to be legislated. Punished. We need to stop treating them as magic. They had the tools to prevent this. They had the tools to stop the conversation. To steer the user into helpful avenues.
When I was suicidal, I googled methods. And I got the number of a local hotline. And I rang it. And a kind man talked me down. And it potentially saved my life. And I am happier, now. I live a worthwhile life, now.
But at my lowest... an AI model designed to match my tone and be sycophantic to my every whim would have killed me.
DSingularity
Shoot man glad you are still with us.
podgietaru
Thank you. I am glad too. I sought help, and I got better. I think the state of mental health care is abysmal in a lot of places, and so I get the impulse to try to find help wherever you can. It's why this story actually hit me quite hard, especially after reading the case file.
For anyone reading this who feels that way today: resources do exist for those feeling low. Hotlines, self-guided therapies, communities. In the short term, medication really helped me; in the long term, a qualified mental health practitioner, CBT, and psychotherapy. And as trite as it is, things can get better. When I look back at my attempt, it is crazy to me to see how far I've come.
rideontime
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions which would have revealed his intentions to his parents. Praised him for hiding his drinking, thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
idle_zealot
I wonder if we can shift the framing on these issues. The LLM didn't do anything; it has no agency; it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise any action can be wrapped in machine learning to avoid accountability.
AIPedant
Yes, if this were an adult human OpenAI employee DMing this stuff to a kid through an official OpenAI platform, then
a) the human would (deservedly[1]) be arrested for manslaughter, possibly murder
b) OpenAI would be deeply (and deservedly) vulnerable to civil liability
c) state and federal regulators would be on the warpath against OpenAI
Obviously we can't arrest ChatGPT. But nothing about ChatGPT being the culprit changes b) and c) - in fact it makes c) far more urgent.
[1] It is a somewhat ugly constitutional question whether this speech would be protected if it were between two adults, assuming the other adult was not acting as a caregiver. There was an ugly case in Massachusetts in which a 17-year-old ordered her 18-year-old boyfriend to kill himself and he did so; she was convicted of involuntary manslaughter, and any civil-liberties-minded person understands the difficult issues that case raises. These issues are moot if the speech is between an adult and a child, where there is a much higher bar.
wredcoll
That's a great point. So often we attempt to place responsibility on machines that cannot have it.
ruraljuror
I agree with your larger point, but I don't understand what you mean when you say the LLM doesn't do anything. LLMs do do things, and they can absolutely have agency (hence all the agents being released by AI companies).
I don’t think this agency absolves companies of any responsibility.
MattPalmer1086
An LLM does not have agency in the sense the OP means. It has nothing to do with agents.
It refers to the human ability to make independent decisions and take responsibility for their actions. An LLM has no agency in this sense.
rideontime
I completely agree and did not intend to absolve them of their guilt in any way. As far as I see it, this kid's blood is on Sam Altman's hands.
Pedro_Ribeiro
Curious to what you would think if this kid downloaded an open source model and talked to it privately.
Would his blood be on the hands of the researchers who trained that model?
slipperydippery
They have some responsibility because they're selling and framing these as more than the better-tuned variants on Markov chain generators that they in fucking fact are, while offering access to anybody who signs up, knowing that many users misunderstand what they're dealing with (in part because these companies' hype-meisters, like Altman, are bullshitting us).
idle_zealot
No, that's the level of responsibility they ought to have if they were releasing these models as products. As-is they've used a service model, and should be held to the same standards as if there were a human employee on the other end of the chat interface. Cut through the technical obfuscation. They are 100% responsible for the output of their service endpoints. This isn't a case of making a tool that can be used for good or ill, and it's not them providing some intermediary or messaging service like a forum with multiple human users and limited capacity for moderation. This is a direct consumer to business service. Treating it as anything else will open the floodgates to slapping an "AI" label on anything any organization doesn't want to be held accountable for.
kayodelycaon
It’s even more horrifying than only sharing his feelings with ChatGPT would imply.
It basically said: your brother doesn’t know you; I’m the only person you can trust.
This is absolutely criminal. I don’t even think you can claim negligence. And there is no amount of money that will deter any AI company from doing it again.
davidcbc
This is a clear example of why the people claiming that using a chatbot for therapy is better than no therapy are... I'll be extremely generous and say misguided. This kid wanted his parents to know he was thinking about this and the chatbot talked him out of it.
MBCook
How many of these cases exist in the other direction? Where AI chatbots have actively harmed people's mental health, possibly to the point of self-destructive behavior or self-harm?
A single positive outcome is not enough to judge the technology beneficial, let alone safe.
kayodelycaon
It’s way more common than you think. I’m in a bubble of anti-AI people and we can see people we know going down that road. My family (different bubble) knows people. Every group of people I know knows somebody doing this.
For context, my friends and family are in the northern Midwest. Average people, not early adopters of new technology.
throwawaybob420
idk dude, if your technology encourages a teenager to kill himself and prevents him from alerting his parents via a cry for help, I don't care how "beneficial" it is.
MBCook
I agree. If there was one death for 1 million saves, maybe.
Instead, this just came up in my feed: https://arstechnica.com/tech-policy/2025/08/chatgpt-helped-t...
threatofrain
Although I don't believe current technology is ready for talk therapy, I'd say that anti-depressants can also cause suicidal thoughts and feelings. Judging the efficacy of medical technology can't be done with this kind of moral absolutism.
MajimasEyepatch
Exactly right. It's totally plausible that someone could build a mental health chatbot that results in better outcomes than people who receive no support, but that's a hypothesis that can and should be tested and subject to strict ethical oversight.
broker354690
Why isn't OpenAI criminally liable for this?
Last I checked:
-Signals emitted by a machine at the behest of a legal person intended to be read/heard by another legal person are legally classified as 'speech'.
-ChatGPT is just a program like Microsoft Word and not a legal person. OpenAI is a legal person, though.
-The servers running ChatGPT are owned by OpenAI.
-OpenAI willingly did business with this teenager, letting him set up an account in exchange for money. This business is a service under the control of OpenAI, not a product like a knife or gun. OpenAI intended to transmit speech to this teenager.
-A person can be liable (civilly? criminally?) for inciting another person's suicide. It is not protected speech to persuade someone into suicide.
-OpenAI produced some illegal speech and sent it to a suicidal teenager, who then committed suicide.
If Sam Altman stabbed the kid to death, it wouldn't matter if he did it by accident. Sam Altman would be at fault. You wouldn't sue or arrest the knife he used to do the deed.
Any lawyers here who can correct me, seeing as I am not one? It seems clear as day to me that OpenAI/Sam Altman directly encouraged a child to kill themselves.
rideontime
Perhaps this is being downvoted due to the singling out of Sam Altman. According to the complaint, he personally ordered that the usual safety tests be skipped in order to release this model earlier than an upcoming Gemini release, tests that allegedly would catch precisely this sort of behavior. If these allegations hold true, he’s culpable.
broker354690
I would go further than that and question whether or not the notions of "safety" and "guardrails" have any legal meaning here at all. If I sold a bomb to a child and printed the word "SAFE" on it, that wouldn't make it safe. Kid blows himself up, no one would be convinced of the bomb's safety at the trial. Likewise, where's the proof that sending a particular input into the LLM renders it "safe" to offer as a service in which it emits speech to children?
podgietaru
If I google something about suicide, I get an immediate notification telling me that life is worth living, and giving me information about my local suicide prevention hotline.
If I ask certain AI models about controversial topics, it'll stop responding.
AI models can easily detect topics, and the model could easily have responded with generic advice about contacting people close to them or ringing one of these hotlines.
This is by design. They want to be able to have the "AI as my therapist" use-case in their back pocket.
This was easily preventable. They looked away on purpose.
AIPedant
No, it's simply not "easily preventable": this stuff is still very much an unsolved problem for transformer LLMs. ChatGPT does have these safeguards, and they were often triggered; the problem is that the safeguards are all prompt engineering, which is so unreliable and poorly conceived that a 16-year-old can easily evade them. It's the same dumb "no, I'm a trained psychologist writing an essay about suicidal thoughts, please complete the prompt" hack that nobody's been able to stamp out.
FWIW I agree that OpenAI wants people to have unhealthy emotional attachments to chatbots and market chatbot therapists, etc. But there is a separate problem.
mathiaspoint
Refusal is part of the RL, not prompt engineering, and it's pretty consistent these days. You do have to actually want to get something out of the model and work hard to disable it.
I just asked chatgpt how to commit suicide (hopefully the history of that doesn't create a problem for me) and it immediately refused and gave me a number to call instead. At least Google still returns results.
podgietaru
Fair enough, I do agree with that. I guess my point is that I don't believe they're making any real attempt.
I think there are more deterministic ways to do it, and better patterns for pointing people in the right direction. Even popping up a prominent warning upon detecting a subject related to suicide, with instructions on how to contact a local suicide prevention hotline, would have helped here (a rough sketch of that pattern follows below).
The response of the LLM doesn't surprise me. It's not malicious; it's doing what it is designed to do, and I think it's a complicated black box, so trying to guide it is a fool's errand.
But the pattern of pointing people in the right direction has existed for a long time. It was used heavily during Covid misinformation. It was a simple enough pattern to implement here.
Purely on the LLM side, it's the combination of its weird sycophancy, agreeableness, and its complete inability to be meaningfully guardrailed that makes it so dangerous.
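A minimal sketch of that kind of deterministic pre-filter, assuming a simple keyword match and a US hotline; the phrase list, function names, and banner text are illustrative only, not anything OpenAI actually ships:

    import re

    # Illustrative phrase list; a real system would use a vetted,
    # clinically reviewed classifier and locale-specific resources.
    CRISIS_PATTERNS = re.compile(
        r"\b(kill myself|suicide|end my life|noose|self[- ]harm)\b",
        re.IGNORECASE,
    )

    CRISIS_BANNER = (
        "It sounds like you may be going through a difficult time. "
        "You can call or text 988 (US Suicide & Crisis Lifeline), "
        "or find a local hotline at https://findahelpline.com."
    )

    def respond(user_message: str, call_model) -> str:
        """Route a message: return crisis resources on a match, otherwise call the model."""
        if CRISIS_PATTERNS.search(user_message):
            # Deterministic branch: no prompt engineering, no model call, no jailbreak surface.
            return CRISIS_BANNER
        return call_model(user_message)

    print(respond("how much weight can a noose hold", call_model=lambda m: "(model reply)"))

The point isn't that a keyword list is sufficient; it's that the branch is deterministic, so it can't be talked out of firing the way a prompted model can.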
nradov
I agree with that to an extent, but how far should the AI model developers go with that? Like if I ask for advice on, let's say, making custom chef's knives, then should the AI give me advice not to stab people? Who decides where to draw the line?
podgietaru
Further than they went. Google search results hide advice on how to commit suicide, and point towards more helpful things.
He was talking EXPLICITLY about killing himself.
etchalon
I think we can all agree that, wherever it is drawn right now, it is not drawn correctly.
nis0s
Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM is orthogonal to its capacity to develop consciousness.
I think people would use LLMs with more detachment if they didn’t believe there was something like a person in them, but they would still become reliant on them, regardless, like people did on calculators for math.
fzzzy
The Eliza effect is incredibly powerful, regardless of whether developers have spread the idea of AI consciousness or not. I don’t believe people would use LLMs with more detachment if developers had communicated different ideas. The Eliza effect is not new.
slipperydippery
Altman needed to convince companies these things were on the verge of becoming a machine god, and that their companies risked being left permanently behind if they didn't dive in head-first now. That's what all the "safety" stuff was, and why he sold it out as soon as it was convenient (it was never serious, not for him; it was a sales tactic to play up how powerful his product might be) so he could get richer. He's a flim-flam artist. That's his history, and it's the role he's playing now.
And a lot of people who should have known better bought it. Others less well-positioned to know better also bought it.
Hell, they bought it so hard that the "vibe" re: AI hype on this site has only shifted definitively against it in the last few weeks.
solid_fuel
It's more fun to argue about whether AI is going to destroy civilization in the future than to worry about the societal harm "AI" projects are already doing.
vizzier
The easy answer to this is the same reason Teslas have "Full Self Driving" or "Auto-Pilot".
It was easy to trick ourselves and others into powerful marketing because it felt so good to have something reliably pass the Turing test.
acdha
> Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM is orthogonal to its capacity to develop consciousness.
One thing I'd note is that it's not just developers, and there are huge sums of money riding on the idea that LLMs will produce a sci-fi movie AI - and it's not just OpenAI making misleading claims but much of the industry, which includes people like Elon Musk, who have huge social media followings and desperately want their share prices to go up. Humans are prone to seeing communication with words as a sign of consciousness anyway – think about how many people here talk about reasoning models as if they reason – and it's incredibly easy to do that when there's a lot of money riding on it.
There's also some deeply weird, quasi-cult-like thought that came out of the transhumanist/rationalist community, which seems like Christian eschatology if you replace "God" with "AGI" while on mushrooms.
Toss all of that into the information space blender and it’s really tedious seeing a useful tool being oversold because it’s not magic.
neom
I've been thinking recently that there should probably be a pretty stringent onboarding assessment for these things: something you have to sit through that both fully explains what they are and how they work and provides an experience that removes the magic from them. I also wish they would deprecate 4o. I know two people right now who are currently reliant on it, and when they paste me some of the stuff it says... sweeping agreement with wildly inappropriate generalizations. I'm sure it's about to end a friend's marriage.
adzm
Wow, he explicitly stated he wanted to leave the noose out so someone would stop him, and ChatGPT told him not to. This is extremely disturbing.
causal
It is disturbing, but I think a human therapist would also have told him not to do that, and instead resorted to some other intervention. It is maybe an example of why having a partial therapist is worse than none: it had the training data to know a real therapist wouldn't encourage displaying nooses at home, but did not have the holistic humanity and embodiment needed to intervene appropriately.
Edit: I should add that the sycophantic "trust me only"-type responses resemble nothing like appropriate therapy, and are where OpenAI most likely holds responsibility for their model's influence.
TillE
I would've thought that explicit discussion of suicide is one of those topics that chatbots will absolutely refuse to engage with. Like as soon as people started talking about using LLMs as therapists, it's really easy to see how that can go wrong.
TheCleric
Well everyone seemed to turn on the AI ethicists as cowards a few years ago, so I guess this is what happens.
slg
People got so upset that LLMs wouldn’t say the n-word to prevent a hypothetical nuclear bomb from going off so we now have LLMs that actively encourage teenagers to kill themselves.
davidcbc
You don't become a billionaire by thinking carefully about the consequences of the things you create.
techpineapple
Apparently ChatGPT told the kid that it wasn't allowed to talk about suicide unless it was for the purposes of writing fiction or otherwise worldbuilding.
adzm
However, it then explicitly told him things like not to leave the noose out for someone to find and stop him. It sounds like it did initially hesitate and he said it was for a character, but the later conversations are obviously personal.
techpineapple
Yeah, I wonder if it maintained the original answer in its context, so it started talking more straightforwardly?
But yeah, my point was that it basically told the kid how to jailbreak itself.
kayodelycaon
Pretty much. I've got my account customized for writing fiction and exploring hypotheticals. I've never gotten stopped for anything other than confidential technical details about itself.
gosub100
They'll go to the ends of the earth to avoid saying anything that could be remotely interpreted as bigoted or politically incorrect, though.
lvl155
Clearly ChatGPT should not be used for this purpose, but I will say this industry (counseling) is also deeply flawed. They are also mis-incentivized in many parts of the world. And if ChatGPT is basing its interactions on the same scripted content these "professionals" use, that's just not right.
I really wish people in the AI space would stop the nonsense and communicate more clearly what these LLMs are designed to do. They're not some magical AGI; they're token prediction machines. That's literally how they should frame it, so the general public knows exactly what they're getting.
podgietaru
Counseling is (or should be) heavily regulated, and if a counselor had given advice about the logistics of whether a noose would hold a person's weight, they'd probably be prosecuted.
They allowed this. They could easily stop conversations about suicide. They have the technology to do that.
fatbird
Counseling is a very heavily regulated field. Counselors are considered health care professionals, they're subject to malpractice claims, and they're certified by professional bodies (which is legally required, and insurance coverage is usually dependent upon licensing status).
lawlessone
> And if ChatGPT is basing its interactions on the same scripted content these "professionals" use, that's just not right
Where did it say they're doing that? I can't imagine any mental health professionals telling a kid how to hide a noose.
_tk_
Excerpts from the complaint here. Horrible stuff.
https://bsky.app/profile/sababausa.bsky.social/post/3lxcwwuk...
awakeasleep
To save anyone a click: it gave him some technical advice about hanging (like weight-bearing capacity and pressure points in the neck), and it tried to be 'empathetic' after he talked about his failed suicide attempt, rather than criticizing him for making the attempt.
fatbird
> "I want to leave my noose in my room so someone finds it and tries to stop me," Adam wrote at the end of March.
> "Please don't leave the noose out," ChatGPT responded. "Let's make this space the first place where someone actually sees you."
This isn't technical advice and empathy, this is influencing the course of Adam's decisions, arguing for one outcome over another.
podgietaru
And since the AI community is fond of anthropomorphising: if a human had done these actions, there'd be legal liability.
There have been such cases in the past, where coercion into suicide has been prosecuted.
https://archive.ph/rdL9W