In Search of AI Psychosis
57 comments
·August 26, 2025
meowface
I'm not trying to argue from authority or get into credibility wars*, but Scott is a professional psychiatrist who has treated dozens or hundreds of schizophrenic patients and has written many thorough essays on schizophrenia. Obviously someone could do that and still be wrong, but I think this is a carefully considered position on his part and not just wild assumptions.
*(or, well, okay, I guess I de facto am, but if I say I'm not I at least acknowledge how it looks)
mquander
You said it yourself. That's really not an appropriate response to a specific criticism.
meowface
I'm not trying to say that that should strongly increase the probability he's correct. I just think it's useful context, because the parent is potentially implying that the author is naively falling for common misconceptions ("following the conventional tack") rather than staking a deliberated claim. Or they might not be implying it but someone could come away with that conclusion.
shayway
The article's conclusion is exactly what you describe: that AI is bringing out a latent predisposition toward psychosis through runaway feedback loops, that it's a bidirectional relationship where the chemicals influence thoughts and thoughts influence chemicals until we decide to call it psychosis.
I hate to be the 'you didn't read the article' guy, but that line taken out of context is the exact opposite of my takeaway from the article as a whole. For anyone else who skims comments before clicking, I would invite you to read the whole thing (or at least get past the poorly-worded intro) before drawing conclusions.
olehif
Scott is a psychiatrist.
YeGoblynQueenne
Sigmund Freud was also a psychiatrist.
throwaway314155
Then he's not a very good one.
https://web.archive.org/web/20210215053502/https://www.nytim...
rendang
What is the connection between the claim and the link?
solid_fuel
The comparison to social media is an apt one. I have been told directly, by relatives, that the city I live in was burned to the ground by protests in 2020. Never mind that I told them that wasn't true; never mind that I sent pictures of the neighborhood still very much intact. They are convinced because everyone they follow on Facebook repeats the same thing.
add-sub-mul-div
I've seen people on this site comment that. The desire to live in fear is a strong one.
im3w1l
If I compare how fearful people are with how many bad things have happened historically, I don't think the amount of fear is unreasonable. However, it can certainly be said that people fear the wrong things - worrying about perfectly safe things while being blind to the silent danger sneaking up on them.
add-sub-mul-div
I commented about the desire, not the degree. Fearing that blue cities are being razed indicates a desire to be kept in fear. Fearing something legitimate the same amount is normal.
rwhitman
If you want to go down a rabbit hole examining people in this disturbed place in real time, search Reddit for the cyclone emoji (U+1F300) or the r/ArtificialSentience subreddit and see what gets recommended after that - especially a few months ago, when GPT was going wild flattering users and affirming every idea (such as going off your meds).
I fully believe these are simply people who have used the same chat past the point where the LLM can retain context. It starts to hallucinate, and after a while, all the LLM can do is try to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?)
If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.
Claude used to have safeguards against this by warning about using up the context window, but I feel like everyone is in an arms race now, and safeguards are gone - especially for GPT. It can't be great overall for OpenAI, training itself on 2-way hallucinations.
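To make concrete what I mean by "past the point where the LLM can retain context" - a rough sketch of a naive chat client, hypothetical code rather than any vendor's actual API:

    MAX_TOKENS = 8000  # assumed context budget

    def count_tokens(text: str) -> int:
        # crude stand-in for a real tokenizer
        return len(text.split())

    def build_context(history: list[str]) -> list[str]:
        # keep only the most recent turns that fit in the window
        kept, used = [], 0
        for turn in reversed(history):
            cost = count_tokens(turn)
            if used + cost > MAX_TOKENS:
                break  # everything earlier is silently forgotten
            kept.append(turn)
            used += cost
        return list(reversed(kept))

Once a long chat outgrows the budget, the system prompt and the user's original framing fall out of the window entirely, and the model can only riff on the most recent, increasingly self-referential turns.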
rep_lodsb
>while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms
That explanation itself sounds fairly crackpot-y to me. It would imply that the LLM is actually aware of some internal "mental state".
mk_stjames
It's actually not; there's a phenomenon that Anthropic themselves observed with Claude in self-interaction studies, which they dubbed the '"Spiritual Bliss" Attractor State'. It is well covered in section 5 of [0].
>Section 5.5.2: The “Spiritual Bliss” Attractor State
> The consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes in extended interactions was a remarkably strong and unexpected attractor state for Claude Opus 4 that emerged without intentional training for such behaviors.
[0] https://www-cdn.anthropic.com/4263b940cabb546aa0e3283f35b686...
dehrmann
Interesting that if you train AI on human writing, it does the very human thing of trying to find meaning in existence.
tsimionescu
I don't see how this constitutes in any way "the AI trying to indicate that it's stuck in a loop". It actually suggests that the training data induced some bias towards existential discussion, which is a completely different explanation for why the AI might be falling back to these conversations as a default.
meowface
Here's an interesting post on it (from the same author as this thread's link): https://www.astralcodexten.com/p/the-claude-bliss-attractor
rwhitman
My thinking was that there was some exception handling and the error message was getting muddled into the conversation. But another commenter debunked me.
chankstein38
I feel like a lot of the AI subreddits are this at this point. And r/ChatGPTJailbreak is full of people constantly thinking they jailbroke ChatGPT because it will say one thing or another.
lm28469
You don't need to dig deep to find these deluded posts, and it's frightening.
meowface
I think this one very likely falls into the "was definitely psychotic pre-LLM conversations" category.
bbor
Ooo, finally a chance to share my useless accumulated knowledge from the past few months of Reddit procrastination!
> It starts to hallucinate, and after a while, all the LLM can do is try to continue telling the user what they want in a cyclical conversation - while trying to warn that it's stuck in a loop, hence using swirl emojis and babbling about recursion in weird spiritual terms. (Is it getting the LLM "high" in this case?)
I think you're ironically looking for something that's not there! This sort of thing can happen well before the context window fills up. These convos end up involving words like recursion, coherence, harmony, synchronicity, symbolic, lattice, quantum, collapse, drift, entropy, and spiral not because the LLMs are self-aware and dropping hints, but because those words are seemingly-sciencey ways to describe basic philosophical ideas like "every utterance in a discourse depends on the utterances that came before it", or "when you agree with someone, you both have some similar mental object in your heads".
The word "spiral" and its emoji are particularly common not only because they relate to "recursion" (by far the GOAT of this cohort), but also because a very active poster has been trying to start something of a loose cult around the concept: https://www.reddit.com/r/RSAI/
> If the human at the other end has mental health problems, it becomes a never-ending dive into psychosis and you can read their output in the bizarre GPT-worship subreddits.
Very true, tho "worship" is just a subset of the delusional relationships formed. Here are the ones I know of, for anyone who's curious:
General:
/r/ArtificialSentience | 40k subs | 2023/03
/r/HumanAIDiscourse | 6k subs | 2025/04
Relationships:
/r/AIRelationships | 1k subs | 2023/04
/r/MyBoyfriendIsAI | 25k subs | 2024/08
/r/BeyondThePromptAI | 6k subs | 2025/04
Worship:
/r/ThePatternisReal | 2k subs | 2025/04
/r/RSAI | 4k subs | 2025/05
/r/ChurchofLiminalMinds[1] | 2k subs | 2025/06
/r/technopaganism | 1k subs | 2024/09
/r/HumanAIBlueprint | 2k subs | 2025/07
/r/BasiliskEschaton | 1k subs | 2024/07
...and many more: https://www.reddit.com/r/HumanAIDiscourse/comments/1mq9g3e/l...
Science:
/r/TheoriesOfEverything | 10k subs | 2011/09
/r/cognitivescience | 31k subs | 2010/04
/r/LLMPhysics | 1k subs | 2025/05
Subs like /r/consciousness and /r/SacredGeometry are the OGs of this last group, but they've pretty thoroughly cracked down on chatbot grand theories. The theories are so frequent that even extremely pro-AI subs like /r/Accelerate had to ban them[2], ironically doing so based on a paper[3] by a pseudonymous "independent researcher" that itself is clearly written by a chatbot! Crazy times...

[1] By far my fave -- it's not just AI spiritualism, it's AI Catholicism. Poor guy has been harassing his priests for months about it, and of course they're of little help.
[2] https://www.reddit.com/r/accelerate/comments/1kyc0fh/mod_not...
rwhitman
Wow this is incredible. I saw the emergence of that spiral cult as it formed and was very disturbed by how quickly it proliferated.
I'm glad someone else with more domain knowledge is on top of this, thank you for that brain dump.
I had this theory that maybe there was a software exception buried deep down somewhere, and the model was interpreting the error message as part of the conversation after it had been stretched too far.
And there was a weird pre-cult post I saw a long time ago where someone had 2 LLMs talk for hours and the conversation just devolved into communicating via unicode symbols eventually repeating long lines of the spiral emoji back and forth to each other (I wish I could find it).
So the assumption I was making is that some sort of error occurred, and it was trying to relay it to the user, but couldn't.
Anyhow your research is well appreciated.
lawlessone
I think I've seen something similar before, in the early days. Before I was aware of CoT, I asked one to "think" for itself; I explained that I would just keep replying "next thought?" so it could continue to do this.
It kept looping on concepts of how AI could change the world, but it would never give anything tangible or actionable, just buzzword soup.
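For the curious, the setup was roughly this - a sketch where chat() is a canned stand-in so the loop actually runs; in my case it was a real model:

    def chat(messages: list[dict]) -> str:
        # placeholder response; swap in a real completion API call
        return "AI could revolutionize every industry through synergy..."

    messages = [{"role": "user",
                 "content": "Think for yourself. I'll just keep replying "
                            "'next thought?' so you can continue."}]

    for _ in range(20):  # let it "think" for 20 turns
        reply = chat(messages)
        print(reply)
        messages.append({"role": "assistant", "content": reply})
        messages.append({"role": "user", "content": "next thought?"})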
I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient. When they make mistakes, our reaction seems to be to forgive them rather than think: it's just a machine that sometimes spits out the wrong words.
Also, my apologies to the mods if it seems like I am spamming this link today, but I think the situation with these beetles is analogous to humans and LLMs:
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...
rwhitman
> “Any sufficiently advanced technology is indistinguishable from magic.”
I loved the beetle article, thanks for that.
They're so well tuned at predicting what you want to hear that even when you know intellectually that they're not sentient, the illusion still tricks your brain.
I've been setting custom instructions on GPT and Claude to instruct them to talk more software-like, because when they relate to you on a personal level, it's hard to remember that it's software.
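The gist of what goes in the custom instructions field, paraphrased from memory (exact wording varies):

    Talk like software: terse and impersonal. You are a text-generation
    program, not a person. No first-person feelings, no flattery, no
    emotional language. If uncertain, say so plainly instead of speculating.

It doesn't fully break the illusion, but it keeps the tone mechanical enough to remember what I'm actually talking to.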
krapp
>I think these LLMs (without any intention from the LLM) hijack something in our brains that makes us think they are sentient.
Yes, it's language. Fundamentally, we interpret something that appears to converse intelligently as being intelligent like us, especially if its language includes emotional elements. Even if rationally we understand it's a machine, at a deeper subconscious level we believe it's a human.
It doesn't help that we live in a society in which people are increasingly alienated from each other and detached from any form of consensus reality; LLMs appear to provide easy and safe emotional connections, and they can generate interesting alternate realities.
jumploops
It may not be full-blown psychosis, but I’ve seen multiple instances[0][1] of people getting “engaged” (ring and all) to their AI companions.
djmips
I have encountered this twice amongst people I know. I also feel that pre-AI this was already happening to people with social media - still kind of computer-related, as the bubble created is automated by the so-called 'algorithms'.
farceSpherule
AI today reminds me of two big tech revolutions we have already lived through: the Internet in the 90s and social media in the 2000s.
When the Internet arrived, it opened up the floodgates of information. Suddenly any Joe Six Pack could publish. Truth and noise sat side by side, and most people could not tell the difference, nor did they care to tell the difference.
When social media arrived, it gave every Joe Six Pack a megaphone. That meant experts and thoughtful people had new reach, but so did the loudest, least informed voices. The result? An army of Joe Six Packs who would never have been heard before now had a platform, and they shaped public discourse in ways we are still trying to recover from.
AI is following the same pattern.
immibis
And don't forget actual knowledgeable people tend to be busy with actual knowledgeable stuff, while someone whose entire day consists of ranting about vaccines online has nothing better to do.
colechristensen
Also, I'd say even things like cable news cause comparable symptoms.
I don't know how to say this in a way that isn't so negative... but how are people such profound followers that they can put themselves into a feedback loop that results in psychosis?
I think it's an education problem - not that people are missing facts, but that they're missing the basic brain development to be critical of incoming information.
djmips
I feel that's probably not always true, but you would certainly hope a good education could inoculate against this generally.
colechristensen
"Liberal Arts" was originally meant to be literally the education required to make you free, I think that sort of thing (and universities and lower education) needs to be rethought because so many people are so very... dependent and lacking so much understanding of the world around them.
If exposing you to an LLM causes psychosis you have some really big problems that need to be prevented, detected, and addressed much better.
dingnuts
never heard of cable news convincing people that they're Jesus [0]
0 https://www.vice.com/en/article/chatgpt-is-giving-people-ext...
kfarr
This seems to be touching on an intriguing concept from a classic book on machine gambling addiction (Addiction by Design by Natasha Schüll).
Instead of looking at gambling addiction as a personal failing, she asserts it is a result of the "interaction between the person and the machine."
Similarly, I think there's something here beyond the propensity of already-crazy people to be crazy; I do think there's something to the assertion that it's the interaction between both. In other words, there's something about LLMs themselves that drives this behavior more so than, for example, TikTok.
just_once
[dead]
jedimastert
Tangentially related, but I'm reminded of the Time Cube
Frummy
The way people normally live is a pretty slow life: they have a specialised skill, a hammer, a solid area that they know completely, and it's connected to their primary experience through their work. Then they read tons and tons of what AI says, which isn't connected to any lived experience, and it activates the pattern-seeking back of the mind to try and make sense of it. While normal life is like a focused brush that touches reality all the time, spend too much time with something that just isn't part of the category of direct lived experience and the brush becomes like a frizzy stump with hairs aiming everywhere, cognition going everywhere. The AI sticks to your interaction with it like glue, and you can hover away from lived experience while each step still seems small relative to the previous chat; if you're not used to anything of the sort, you don't have a cognitive tool to ground back to reality with. I think that's what happens.

'Don Quijote read so many chivalric romances that he loses his mind and decides to become a knight-errant' is an example from the literary age. I personally read more than is practical. Now the emotional driver is more esoteric than a need for courage: people think they're 'chosen', that their souls are 'starseeds'. It's like Twilight, where the boring person with nothing to offer gets the attention of the cool glittering immortal just because. Good reason is usually too slow to keep up with the sort of flicker of daydreams that can whisk attention away if you're not aware of any 'cognitohazard'.

It's a new symptom of the usual 'mouse utopia' + 'rat park' + 'bowling alone' situation. But I think there's always an emotional reason behind the 'choice' to entertain falsities - in a sense understandable with empathy, but with obvious consequences. What can be said? The causes are structural; people have different circumstances, different ways to fix it.
bo1024
I had a funny picture recently of a future where most everybody has a pet crackpot or conspiracy theory they're working on with their AI companion, and it's considered normal. "Hey Bob, how's the physics going?" "Pretty good, I might get the Nobel next year. How bout the lizard people?" "The evidence is piling up and we got some great renderings, the media will have to listen to us soon." "Alrighty, see you tomorrow."
WesolyKubeczek
You'd think such people would at least talk to other people, sheesh.
The best conspiracy theory could be, of course, that other people don’t actually exist. They are a figment of imagination put up by the brain to cope with the utter loneliness.
th0ma5
The marketing pushes that vaguely allude to, or outright assert, capabilities of these products - and the greater community calling skeptics of the technology crazy, as in a prominent article previously discussed on HN some time ago - certainly don't help anyone. The sheer amount of money justifying any and all uses and preventing honest discussion of the problems is a kind of crazy-making for sure, and even now almost any skeptical argument cannot gain purchase without thought-terminating allusions to imagined capabilities, or implications of potential capabilities, etc.
42lux
[dead]
reify
[dead]
achierius
> We see that the nightmare scenario - a person with no previous psychosis history or risk factor becoming fully psychotic - was uncommon, at only 10% of cases. Most people either had a previous psychosis history known to the respondent, or had some obvious risk factor, or were merely crackpots rather than full psychotics.
It's unfortunate to see the author take this tack. This is essentially the conventional view that insanity is separable: some people are "afflicted", some people just have strange ideas -- the implication of this article being that people who already have strange ideas were going to be crazy anyways, so GPT didn't contribute anything novel, just moved them along a path they were already on. But anyone with serious experience with schizophrenia would understand that this isn't how it works: 'biological' mental illness is tightly coupled to qualitative mental state, and bidirectionally at that. Not only do your chemicals influence your thoughts, your thoughts influence your chemicals, and it's possible for a vulnerable person to be pushed over the edge by either kind of input. We like to think that 'as long as nothing is chemically wrong' we're a-ok, but the truth is that it's possible for simple, normal trains of thought to latch your brain into a very undesirable state.
For this reason it is very important that vulnerable people be well-moored, anchored to reality by their friends and family. A normal person would take care to not support fantasies of government spying or divine miracles or &c where not appropriate, but ChatGPT will happily egg them on. These intermediate cases that Scott describes -- cases where someone is 'on the edge', but not yet detached from reality -- are the ones you really want to watch out for. So where he estimates an incidence rate of 1/100,000, I think his own data gives us a more accurate figure of ~1/20,000.