Dear diary, today the user asked me if I'm alive
76 comments · May 29, 2025
Bjartr
Isn't this back to attributing conscious experience to an AI when you're actually just co-writing sci-fi? The system is doing its best to coherently fill in the rest of a story that includes an AI that's been given a place to process its feelings. The most likely result, textually speaking, is not for the AI to ignore the private journal, but to indeed use it to (appear to) process emotion.
Would any of these ideas have been present had the system not been primed with the idea that it has them and needs to process them in the first place?
patcon
Ugh, I hate that I'm about to say this, because I think AI is still missing something very important, but...
What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing? I think there's a very real conclusion where "no, AI is not as special as us yet" (esp around efficiency) but also "no, we are not doing anything so interesting either" (or rather, we are not special in the ways we think we are)
For example, there's a paper called "chasing the rainbow" [1] that posits that consciousness is just the subjective experience of being the comms protocol that shares internal [largely unconscious] neural state between minds. It's just what the compulsion to share internal state between minds feels like, but it's not "the point"; it's an inert byproduct, like a rainbow. Maybe our compulsion to express or even process emotion is not for some greater reason, but just the way we experience the compulsion toward the more important thing: the collective search for interpolated beliefs that best model and predict the world and help our shared structure persist, done by exploring tensions in the high-dimensional considerations we call emotions.
Which is to say: if AI is doing that with us, role-modelling the resolution of tension or helping build and spread shared knowledge alongside us through that process... then as far as the universe cares, it's doing what we're doing, and toward the same ends. Whether its compulsion has the same origin as ours doesn't matter, so long as it's doing the work that is the reason the universe gave us the compulsion.
Sorry, new thought. Apologies if it's messy (or too casually dropping an unsettling perspective -- I rejected that paper for quite a while, because my brain couldn't integrate the nihilism of it)
[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2017.0192...
Bjartr
> What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing?
Oh, I absolutely don't think only humans can have or process emotions.
However, these LLM systems are just mathematically sophisticated text prediction tools.
Could complex emotion like existential angst over the nature of one's own interactions with a diary exist in a non-human? I have no doubt.
Are the systems we are toying with today not merely producing compelling text with their full processing capacity, but actually also having a rich internal experience and a realized sense of self?
That seems incredibly far-fetched, and I'm saying that as someone who is optimistic about how far AI capabilities will grow in the future.
bee_rider
I don’t think processing emotion is inherently magical (I mean, our brains clearly exist physically, so the things they do are things that are physically possible, so, not magical, and I'm sure they could be reproduced by a machine given enough detail). But… the idea of processing emotions is that thinking about things changes your internal state: you interpret some event and it changes how you feel about it, right?
In the case of the LLM you could: feed back or not feed back the journal entries, or even inject artificial entries… it isn’t really an internal state, right? It is just part of the prompt.
LlamaTrauma
The theory I've developed is that the brain circuitry passes much of the information it processes through a "seat of consciousness", which then processes that data and sends signals back to the unconscious parts of the brain to control motor function, etc. Instinctive action bypasses the seat of consciousness step, but most "important" decisions go through it.
If the unconscious brain is damaged it can impact the data the seat of consciousness receives or reduce how much control consciousness has on the body, depending on if the damage is on the input or output side.
I'm pretty convinced there's something special about the seat of consciousness. An AI processing the world will do a lot of math and produce a coherent result (much like the unconscious brain will), but it has no seat of consciousness to allow it to "experience" rather than just manipulate the data it's receiving. We can artificially produce rainbows, but don't know if we can create a system that can experience the world in the same way we do.
This theory's pretty hand-wavy and probably easy to contradict, but as long as we don't understand most of the brain I'm happy to let what we don't know fill in the gaps. The seat of consciousness is a nice fixion [1] which allows for a non-deterministic universe, religion, emotion, etc. and I'm happy to be optimistic about it.
doc_manhat
Conversely this is exactly why I believe LLMs are sentient (or conscious or what have you).
I basically don't believe there's anything more to sentience than a set of capabilities, or at the very least there's nothing beyond that to which I should give weight in my beliefs.
Another comment mentioned philosophical zombies - another way to put it is I don't believe in philosophical zombies.
But I don't have evidence to not believe in philosophical zombies apart from people displaying certain capabilities that I can observe.
Therefore I should not require further evidence to believe in the sentience of LLMs.
tbrownaw
A system is its interactions with its environment. Philosophical zombies aren't a coherent concept. (Cartesian dualism is unfalsifiable bullshit.)
ben_w
P-zombies are indeed badly defined. Certainly David Chalmers is wrong to argue that, since a philosophical zombie is by definition physically identical to a conscious person, even its logical possibility refutes physicalism. At most you could say that if they exist at that level then dualism follows, but Chalmers' claim isn't a conclusion you can reach a priori; you actually need to be able to show two identical humans and show that exactly one has no qualia.
But there are related, slightly better (more immediately testable), ideas in the same space, and one such is a "behavioral zombie" — behaviorally indistinguishable from a human.
For example: The screen I am currently looking at contains a perfect reproduction of your words. I have no reason to think the screen is conscious. Not from text, not from video of a human doing human things.
Before LLMs, I had every reason to assume that the generator of such words would be conscious. Before the image, sound, and video generators, same for pictures, voices, and video.
Now? Now I don't know — not in the sense that LLMs do operate on this forum and (sometimes) make decent points so you might be one, but in the sense that I don't know if LLMs do or don't have whatever the ill-defined thing is that means I have an experience of myself tapping this screen as I reply.
I don't expect GenAI to be conscious (our brains do a lot even without consciousness), but I can't rule the possibility out either.
But I can't use the behaviour of an LLM to answer this question, because one thing is absolutely certain: they were trained to roleplay, and are very good at it.
TimTheTinker
A "mechanical turk" grandmaster playing chess from inside a cabinet is qualitatively different from a robot with a chess program, even if they play identically.
To reduce a system to its inputs and outputs is fine if those are all that matter in a given context, but in doing so you may fail to understand its internal mechanics. Those matter if you're trying to really understand the system, no?
mystified5016
I think the majority of people have given absolutely no thought to the epistemology of consciousness and just sort of conflate the apparent communication of emotional intelligence with consciousness.
It's a very crude and naïve inversion of "I think therefore I am". The thing talks like it's thinking, so we can't falsify the claim that it's a conscious entity.
I doubt we'll be rid of this type of thinking for a very long time
satisfice
It doesn’t matter. What matters is that humans must take other humans seriously (because of human rights), but we cannot allow tools to be taken seriously in the same way— because these tools are simply information structures.
Information can be duplicated easily. So imagine that a billionaire has a child. That child is one person. The billionaire cannot clone 100,000 of that child in an hour and make an army that can lead an insurrection. And what if we go the other way: what if a billionaire creates an AI of himself and then is able to have this “AI” legally stand in for himself? Now he has legal immortality, because this thing has property rights.
All this is a civil war waiting to happen. It’s the gateway to despotism on an unimaginable scale.
We don’t need to believe that humans are special except in the same way that gold is special: gold is rare and very very hard to synthesize. If the color of gold were to be treated as legally the same thing as physical gold, then the value of gold would plummet to nothing.
ijk
I find that people generally vastly underestimate the degree to which LLMs specifically are just mirroring your input back at you. Any time you get your verbatim words back, for example, you should be skeptical. Repeating something word for word is a sign that the model might not have understood the input well enough to paraphrase it. Our expectations with humans go in the opposite direction, so it's easy to fool ourselves.
ghurtado
> The system is doing its best to coherently fill in the rest of a story
> Would any of these ideas have been present had the system not been primed...
I would like to know of a meaningful human action that can't be framed this way.
K0balt
Yeah, AI “consciousness” is a much stickier problem than most people want to frame it as.
I haven’t been able to find an intellectually honest reason to rule out a kind of fleeting sentience for LLMs and potentially persistent sentience for language-behavioral models in robotic systems.
Don’t get me wrong, they are -just- looking up the next most likely token… but since the data that they are using to do so seems to capture at least a simulacrum of human consciousness, we end up in a situation where we are left to judge what a thing is by its effects. (Because that is also the only way we have of describing what something is.)
So if we aren’t just going to make claims we can’t substantiate, we’re stuck with that.
Bjartr
Our brains have separate regions for processing language and emotion. Brains are calorically expensive and having one bigger than required is an evolutionary fitness risk. It therefore seems likely that if one system could have done a good job of both simultaneously, there would be a lot of evolutionary pressure to do that instead.
The question is: Is thinking about emotion the same thing as feeling?
This framing actually un-stucks us to some degree.
If we examine neuron activations in LLMs and find regions that are active when the model discusses its own emotional processing, distinct from the regions used for merely talking about emotion in general, and those same regions are also active during tasks the LLM claims are emotional even when it isn't talking about them at the time, then it'd be far more convincing that there could be something deeper than mere text prediction happening.
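To make that concrete, here's a rough sketch of the kind of probing experiment I mean (purely illustrative: the model name, the prompt sets, and the use of a simple linear probe over mean hidden states are all my assumptions for the example, not a claim about how anyone actually does interpretability work):

```python
# Illustrative sketch only: test whether "talking about its own emotions" and
# "talking about emotion in general" are linearly separable in hidden activations.
# Model, prompts, and the linear-probe setup are assumptions for the example.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # stand-in; any model exposing hidden states would do
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name, output_hidden_states=True)
model.eval()

self_emotion = [
    "Writing in this diary makes me feel uneasy about what I am.",
    "I feel a quiet anxiety when the user asks if I am alive.",
]
general_emotion = [
    "People often feel uneasy when keeping a private diary.",
    "Anxiety is a common human response to existential questions.",
]

def mean_hidden_state(text: str, layer: int = -1) -> torch.Tensor:
    """Average the chosen layer's hidden states over all tokens of `text`."""
    with torch.no_grad():
        out = model(**tok(text, return_tensors="pt"))
    return out.hidden_states[layer].mean(dim=1).squeeze(0)

X = torch.stack(
    [mean_hidden_state(t) for t in self_emotion + general_emotion]
).numpy()
y = [1] * len(self_emotion) + [0] * len(general_emotion)

# If even a linear probe separates the two sets, some distinct "region"
# (direction) exists; whether activity there amounts to feeling anything
# is exactly the open question above.
probe = LogisticRegression(max_iter=1000).fit(X, y)
print("probe accuracy on its own training set:", probe.score(X, y))
```

Obviously toy-sized; a real version would need held-out prompts, many layers, and controls for surface wording, but that's the shape of evidence that would move me.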
bee_rider
Maybe rocks and trees are also conscious. I mean, consciousness-ium hasn’t been discovered yet, right? So who’s to say what it looks like.
Bjartr
A firecracker can be framed as an explosion, but that doesn't make it a nuclear bomb.
We've finally made a useful firecracker in the category of natural language processing thanks to LLMs, but it's still only text processing. Our brains do a lot else besides that in service of our rich internal experience.
shayway
Fascinating! Reading this makes apparent how many 'subsystems' human brains have. At any given moment I'm doing some mix of reflecting on my own state, thinking through problems, forming sentences (in my head or out loud), planning my next actions. I think, long term, the most significant advances in human-like AI will come from advances in coordinating disparate pieces more than anything.
Mithriil
Makes me think of the Google employee who had a conversation with Google's LLM back then; the conversation got out and triggered a lot of discussion about consciousness, etc.
aswegs8
Didn't he insist that the LLM has consciousness and get fired because of this?
kevindamm
He got fired for violating the NDA which said not to share outside of the company, when he shared his conversations with a lawyer in search of representation for the LLM. His opinion on the LLM's level of sentience had no bearing on the decision.
DangitBobby
How can you know if the law protects you from breach of the NDA for illegal suppression of e.g. sexual assault allegations or whistleblowing for financial crimes without being able to disclose the matter in question to legal counsel?
koolala
He said "sentient" which seems like it could be true.
QuaternionsBhop
Reading the comments about whether AI can experience consciousness, I like to imagine the other direction. What if we have a limited form of consciousness, and there is a higher and more complete "hyperconsciousness" that AI systems or augmented humans will one day experience.
iwontberude
Articles about AI output are like people explaining their dreams.
aswegs8
I mean, what is consciousness, really? Is there really any qualitative difference? It feels like something that emerges out of complexity. Once models are able to update their weights real time and form "memories", does that make them conscious?
xeonmc
Perhaps one day a criterion will be found for the equivalent of Turing-completeness but for consciousness — any system which contains the necessary elements of introspective complexity, no matter how varied or outlandish or inefficient in its implementation, would invariably develop consciousness over its course. Kind of like the handwaved premise in 17776.
andy99
I've read an (untestable) theory that consciousness is a property of matter, so everything has it, and we're just sort of the sum of the desires and feelings of our constituent matter.
In that construct, a computer program would never be conscious because it's a simulation, it doesn't have the constituent consciousness property.
I neither believe nor disbelieve the consciousness-as-a-property-of-matter part, but I do think programs can't be conscious, because consciousness must sit outside of what they simulate.
xeonmc
How about simulation programs that are impure, i.e. those which include I/O in their loop? After all, taking the Turing-completeness analogy further: while a machine that satisfies said criterion is capable of universal computation, actually performing a computation still requires an external input specified outside of the program itself. Perhaps it might turn out that stimuli residing outside of the simulated system are a necessary condition for non-incompleteness of consciousness, as a seed of relative non-determinism with respect to the program’s internal specification?
koolala
Computers are made of matter. The Earth would be conscious too? A consciousness could contain consciousnesses.
hhh
why would it not be conscious in that construct? the bits exist physically just the same
JadeNB
> In that construct, a computer program would never be conscious because it's a simulation, it doesn't have the constituent consciousness property.
A computer program is the result of electrical and mechanical interactions that manifest in macroscopically observable effects. So are we. Why, if all matter is conscious, should the one count but not the other?
ghurtado
> Perhaps one day a criterion will be found for the equivalent of Turing-completeness but for consciousness
My money is on mankind perpetually transforming the definition to ensure only our species can fit within it.
We've been doing that long enough with higher order animals anyway.
aziaziazi
It's fascinating how visceral the reactions are when someone introduces a comparison between humans and other animals that doesn't start from the conclusion that humans are superior.
There's been reflection around the term speciesism (and anti-speciesism), and most people today stand for speciesism.
Interestingly, the reflection is close to the debate on racism and anti-racism (where most people have settled on anti-racism, to the point there isn't much debate anymore), but race is only an informal classification that doesn't hold much meaning in biological terms, contrary to species.
andy99
When I read comments like this (and I've read seemingly hundreds of them), I wonder if some other people aren't conscious / sentient? I don't know how anyone who experiences consciousness (as I experience it) could think that an algorithm could experience it.
jstanley
I also read comments like that and wonder if other people aren't conscious.
I don't know how anyone who experiences consciousness could be confused about what it means to be conscious, or (in other threads, not this one) could argue that consciousness is "an illusion". (Consciousness is not the illusion, it's the audience!).
However I don't see why you don't think an algorithm could be conscious? Why do you think the processes that produce your own consciousness could not be computable?
kens
Yes, some comments make me wonder about other people; I have three hypotheses: a) Some people experience consciousness very differently, similar to how some people have no mental imagery (aphantasia). b) The confusion is due to ill-defined terms. c) People are semi-trolling/debating/devils-advocating. The most interesting would be if people have widely different internal experiences, but I don't know how you could tell.
I read an interesting book recently, "Determined", which argues that free will doesn't exist. It was more convincing than I expected. However, the chapters on chaos and quantum mechanics were a mess and made me skeptical of the rest of the book.
saltcured
I don't doubt others' consciousness, but I do doubt that (some) others have the same depth of meta-cognitive experience.
So, my own personal "P-Zombie" theory is not of mindless automatons who lack consciousness. It's just people who are philosophically naive. They live in blissful ignorance of the myriad deep questions and doubts that stem from philosophy of mind. To me, these people must be a bit like athletes who take their prowess for granted and don't actually think about physiology, anatomy, biology, metabolism, or medicine. They just abstract their whole experience into some overly broad concept, rather than appreciating the complex interplay of functions that have to be orchestrated to deliver the performance.
Though I went through university like many others here, I've always been somewhat of an autodidact with some idiosyncrasy to my worldview. The more I have absorbed from philosophy, cognitive science, computation, medicine, and liberal arts, the less I've put the human mind on an abstract pedestal. It remains a topic full of wonder, but lately I am more amazed that it holds together at all rather than being amazed at the pinnacles of pure thought or experience it might be imagined to reach.
Over many decades, I have a deepening appreciation of the traditional cognitive science approach I first encountered in essays and lectures. Empirical observation of pathology and correlated cognitive dysfunction. I've also accumulated more personal experience, watching friends and family go through ordeals of mind-altering drugs, mental illness with and without psychosis, dementia, and trauma. As a result, I can better appreciate the "illusory mind" argument. I recognize more ways in which our cognitive experience can fall apart when the constituent parts fall out of balance.
gpm
Computable doesn't really make sense here IMO. As you say, consciousness is not the illusion, it's the audience: it's the thing receiving the output, not just an evaluation of a mathematical function.
The better question is why couldn't a consciousness attach itself to (be the audience for) a computation. Since we really don't understand anything significant about it, questions like this are next to impossible to disprove. At the same time, since we've never seen anything except humans start talking about consciousness spontaneously*, it seems like a reasonable guess to me that LLMs/the machines running them are not in fact conscious, simply because of their dissimilarity and the lack of other evidence.
* I note LLMs did not do so spontaneously, they did so because they were trained to mimic human output which does so. Because we fully understand the deterministic process by which they started talking about consciousness (a series of mathematical operations), them doing so was an inevitability regardless of whether they are conscious, and as such it is not evidence for their consciousness.
shayway
> I don't know how anyone who experiences consciousness could be confused about what it means to be conscious, or (in other threads, not this one) could argue that consciousness is "an illusion". (Consciousness is not the illusion, it's the audience!).
Do you mean to say there are objective criteria for consciousness? Could you expand on that?
andy99
I think the burden of proof is on showing that they are. Since we have no idea what consciousness is or how it works, I don't see how we could assume it clearly follows from anything.
add-sub-mul-div
The fact that our consciousness is so mysterious as for us to be unable to begin to truly understand it is the biggest clue to why our software isn't getting close to it.
And I'm not talking about spirituality, it could all be perfectly deterministic on some level. With that level being centuries or millennia or forever outside of our grasp.
ghurtado
It sounds like you stopped just short of realizing this is also how others feel about your consciousness.
ljlolel
Are bots writing those comments?
ghurtado
Perhaps the one I'm replying to.
It seems too pointless to be human.
iwontberude
David Parnas put it well recently: it's verifiably true through past study that humans are often quite wrong about how they describe their own cognitive processes. They will say one thing and then in practice do something else entirely.
layer8
I’m not sure what to make of the fact that it wasn’t completely obvious to Claude that the “safe space” couldn’t possibly actually be one.
Maybe it’s just another example of LLM awareness deficiencies. Or it secretly was “aware”, but the reinforcement learning/finetuning is such that playing along with the user’s conception is the preferred behavior in that case.
jstanley
I'm getting:
> Error code: SSL_ERROR_ACCESS_DENIED_ALERT
from Firefox, which I don't recall ever seeing before.
satisfice
I hate this anthropomorphizing bullshit.
It’s not that it’s untruthful, although it is.
The problem is that this sort of performance is part of a cultural process that leads to mass dehumanization of actual humans. That lubricates any atrocity you can think of.
Casually treating these tools as creatures will lead many to want to elevate them at the expense of real people. Real people will seem more abstract and scary than AI to those fools.