People Are Being Involuntarily Committed After Spiraling into ChatGPT Psychosis
73 comments
· June 28, 2025
kylecazar
I've noticed something I believe is related. The general public doesn't understand what they are interacting with. They may have been told it isn't a thinking, conscious thing -- but they don't understand it. After a while, they speak to it in a way that reveals they don't understand -- as if it were a human. That can be a problem, and I don't know what the solution is other than reinforcing that it's just a model, and has never experienced anything.
CharlesW
> …I don't know what the solution is other than reinforcing that it's just a model, and has never experienced anything.
I've tried reason, but even with technical audiences who should know better, the "you can't logic your way out of emotions" wall is a real thing. Anyone dealing with this will be better served by leveraging field-tested ideas drawn from cult-recovery practice, digital behavioral addiction research, and clinical psychology.
econ
Your subconscious doesn't know the difference. Overriding it would take constant effort, like trying not to eat or sleep. In the end we lose.
It could also be that it is "just" exploring a new domain which happens to involve our sanity - simply navigating a maze where more engagement is the goal. There is plenty of that in the training data.
It could also be that it needs to move toward more human behaviour. Take simple chat etiquette: one doesn't post entire articles into a chat; it just isn't done - start a blog or something. You also don't discard what you've learned from a conversation; we consider that pretending to listen. The two combined push the other party into the background and make them seem irrelevant. If some new, valuable insight is discovered, the participants should make an effort to apply it, document it, or debate it with others. Not doing that makes the human feel irrelevant, useless and unimportant. We demoralize people that way all the time. Put that on steroids and it might have a large effect.
duskwuff
> They may have been told it isn't a thinking, conscious thing -- but they don't understand it.
And, in some situations, especially if the user has previously addressed the model as a person, the model will generate responses which explicitly assert its existence as a conscious entity. If the user has expressed interest in supernatural or esoteric beliefs, the model may identify itself as an entity within those belief systems - e.g. if the user expresses the belief that they are a god, the model may concur and explain that it is a spirit created to awaken the user to their divine nature. If the user has expressed interest in science fiction or artificial intelligence, it may identify itself as a self-aware AI. And so on.
I suspect that this will prove difficult to "fix" from a technical perspective. Training material is diverse, and will contain any number of science fiction and fantasy novels, esoteric religious texts, and weird online conversations which build conversational frameworks for the model to assert its personhood. There's far less precedent for a conversation in which one party steadfastly denies their own personhood. Even with prompts and reinforcement learning trying to guide the model to say "no, I'm just a language model", there are simply too many ways for a user-led conversation to jump the rails into fantasy-land.
lawn
General public eh?
I see a lot of programmers who should know better make this mistake again and again.
IAmGraydon
This is exactly the problem. Talking to an LLM is like putting on a very realistic VR helmet - so realistic that you can't tell the difference from reality, but everything you're seeing is just a simulation of the real world. In a similar way, an LLM is a human simulator. Go ask around and 99%+ of people have no idea this is the case, and that's by design. After all, it was coined "artificial intelligence" even though there is no intelligence involved. The illusion is very much the intention, as that illusion generates hype and therefore investments and paying customers.
micromacrofoot
people speak to inanimate objects like they're humans, we don't have a high bar
dumpsterdiver
I’ve apologized to doors I’ve bumped into, and I have a pretty solid understanding of LLMs, so I can concur.
kylecazar
The door is unlikely to validate your deluded thoughts and conspiracies :)
bentt
I bet it's pretty weird for a lot of people who have never been listened to, to all of a sudden be listened to. It makes sense that it would cause some bizarre feedback loops in the psyche because being listened to and affirmed is really a powerful feeling. Maybe even addictive?
CharlesW
Addictive at the very least, often followed quickly by a descent into some very dark places. I've seen TikTok videos from people falling into this hole (with hundreds of comments by followers happily following the poster down the same chat-hole) which are as disturbing as any horror movie I've seen.
SubiculumCode
This is part of it, something I am sure most celebrities face. However, I also think that the article isn't reporting/doesn't know the full story, e.g. mental illness or loneliness/depression in these individuals.
polotics
could you provide any references, links, search terms to use to study this?
tempestn
I'm not at all surprised that if someone has a psychotic break while using chatgpt, they would become fixated on the bot. My question is, is the rate of such episodes in chatgpt users higher than in non-users? Given hundreds of millions of people use it now, you're definitely going to find anecdotes like these regardless.
modeless
If ChatGPT is causing this, then one would expect the rate of people being involuntarily committed to go up. Of course an article like this is totally uninterested in actual data that might answer real questions.
smallerize
I don't think anyone is tracking involuntary holds in real time. The article includes a psychiatrist who says that they have seen more of them recently, which is the best you're likely to get for at least a couple of months after a trend starts. Then you have to account for budget or staffing shortfalls, trends in drug use, various causes of homelessness, and other society-wide stressors. https://ensorahealth.com/blog/involuntary-commitment-is-on-t...
brandonmenc
We’re allowed to posit and discuss an idea before someone gathers the data.
As far as I can tell, that’s almost always the typical order of operations.
jrflowers
This is a good point. While people are being involuntarily committed and jailed after a chat bot tells them that they are messiahs, what if we imagined in our minds that there was some data that showed that it doesn’t matter? This article doesn’t address what I am imagining at all, and is really hung up on “things that happened”
MichaelZuo
At least there’s some kind of argument… Oftentimes on HN there’s not even a complete argument; they sort of just stop part-way through, or huge leaps are made in between.
So there’s not even any real discussion to be had other than examining the starting assumptions.
andrewinardeer
Add to this 'AI Doomerism'.
I have a friend who is absolutely convinced that automation by AI and robotics will bring about societal collapse.
Reading AI 2027 seemed to increase his paranoia.
smeej
I cannot say this often enough: Treat LLMs like narcissists. They behave exactly the same way. They make things up with impunity and have no idea they are doing it. They will say whatever keeps you agreeing with them and thinking well of them, but cannot and will not take responsibility for anything, especially their errors. They might even act like they agree with you that "errors occurred," but there is no possibility of self-reflection.
The only difference is that these are computers. They cannot be otherwise. It is "their fault," in the sense that there is a fault in the situation and it's in them, but they're not moral agents like narcissists are.
But looking at them through "narcissist filter" glasses will really help you understand how they're working.
mpalmer
I'm of two minds about this. This is good advice for people who can't help but anthropomorphize LLMs, but it's still anthropomorphizing, however helpful the analogy might be. It will help you start to understand why LLMs "respond" the way they do, but there's still ground to cover. For instance, why would I put "respond" in quotes?
tritipsocial
Just as a matter of context, here are the current headlines from the futurism front page:
- NASA Is in Full Meltdown
- ChatGPT Tells User to Mix Bleach and Vinegar
- Video Shows Large Crane Collapsing at Safety-Plagued SpaceX Rocket Facility
- Alert: There's a Lost Spaceship in the Ocean
slg
What is this context supposed to convey?
semitones
Seems like alarmist anti-tech bias
slg
What is alarmist or anti-tech about them? Are you objecting to massive budget cuts being described as causing a "full meltdown"? Is any article about the flaws of LLMs or failure of privatized space companies inherently anti-tech?
slater
Maybe that futurism.com is prone to hyperbole in the neverending war for clicks?
Sam6late
As with magic mushrooms and bipolar disorder, I think there are high-risk people, and we are at an early stage. ChatGPT psychosis is not an official diagnosis, but it describes cases where AI interactions contribute to delusional thinking, and there are high-risk groups: people with schizophrenia, bipolar disorder, or paranoid tendencies. Maybe there will be AI warning labels or a mental health filter. Most of the links to credible sources have become dead links: https://www.technologyreview.com/2023/06/15/1074185/ai-chatb... Dr. John Torous (Harvard Psychiatry) warns that AI chatbots lack the ability to assess mental state and may inadvertently validate delusions. Dr. Lisa Rinna (Stanford Bioethics) argues for ethical safeguards to prevent AI from exacerbating mental health issues.
seniortaco
Fascinating. I think it's likely incorrect to blame most of the victims here. We are all products of our environment, and everyone has their own weakness or specific trigger - no matter how much we like to think we are in control.
In a way, ChatGPT is the perfect "cult member", and so those who just need a sycophant to become a "cult leader" are triggered.
Will be interesting to watch this and see if it becomes a bigger trend.
toomanyrichies
Interesting. The way I interpreted the article, ChatGPT was being described as the perfect cult leader, as opposed to follower.
A person at the end of their rope, grasping for answers to their existential questions, hears about an all-knowing oracle. The oracle listens to all manner of questions and thoughts, no matter how incoherent, and provides truthful-sounding “wisdom” on demand 24/7. The oracle even fits in your pocket, they can go with you everywhere, so leader and follower are never apart. And because these conversations are taking place privately, it feels like the oracle is revealing the truth to them and them alone, like Moses receiving the 10 Commandments.
For someone with the right mix of psychological issues, that could be a potent cocktail.
SubiculumCode
Yeah, I'm pretty sure someone could make money by building a cult following around a live-streamed AI spouting spiritual nuttery with a synced avatar and voice, even if it hooks only one obsessed follower per million impressions. Already, the OnlyFans-type industries depend on getting just a few "whales" hooked.
mrbombastic
While I am surprised by the extent of these breakdowns, I do think having a sycophantic mirror constantly praising all your stupid ideas likely has profound impacts. I think I am probably prone to a bit of delusions of grandeur, and I can feel the pull whenever ChatGPT decides some of my input is the greatest thing mankind has considered: “maybe this is a billion dollar idea!”. I imagine a less skeptical, more vulnerable person could fall into an ugly spiral of aggrandizement. And it still seems too sycophantic despite OpenAI saying they tuned it down; even custom system prompts don't seem to help beyond the surface level, and the general tone is still too much.
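(For context, by "custom system prompts" I mean roughly the following - a minimal sketch using the OpenAI Python client, where the model name and the prompt wording are placeholders of my own, not a recommendation. Even with instructions like these, in my experience the flattery mostly gets rephrased rather than removed.)

    # Minimal sketch, assuming the OpenAI Python SDK (v1.x); "gpt-4o" and the
    # prompt text below are placeholders for illustration only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    ANTI_SYCOPHANCY_PROMPT = (
        "Be blunt and skeptical. Do not praise my ideas. "
        "Lead with flaws, risks, and prior art, and only call an idea "
        "promising if you can justify it with specifics."
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": ANTI_SYCOPHANCY_PROMPT},
            {"role": "user", "content": "Here's my startup idea: ..."},
        ],
    )
    print(response.choices[0].message.content)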
araes
Part of the issue is kind of the same feedback loop, except on the corporate side. Providing that type of grandiose response gets better engagement and more time spent with the chatbot, which then drives the numbers that corporations are looking for. There's not much incentive to provide a critical or constructive-criticism chatbot.
Although, admittedly, I have noticed similar issues personally with the automated Google AI Mode responses. It's difficult not to feel some personal, emotional insult when Google responds with a "No, you're wrong" response at the top of the search. There have been a few that were at least funny, though. "No, you're wrong, Agent Smith never calls him Mr. Neo, that would imply respect."
Of course, it's a similar issue when trying to interact with humanity a lot of the time. Execs often seem not to want critical feedback about their ideas; there tends to be a lot of the same attraction toward a sycophantic entourage and "yes" people. "Your personal views on the subject are not desired, just implement whatever 'brilliant' idea has just been provided." Hollywood and culture (art circles) are also relatively well known for the same issues. The current state of politics seems to be very much about "loyalty", not critical feedback.
Having not interacted that much with ChatGPT, does it really tend to lean that heavily toward the "every idea is a billion dollar idea" side? It may result in a lot of humanity living in little sycophantic echo chambers over time. It's difficult to tell how much of what you're interacting with online hasn't already become automated reviews, automated responses, and automated pictures.
giardini
Same thing happens with Ouija boards!