
So you think you've awoken ChatGPT

166 comments · July 22, 2025

kazinator

> So, why does ChatGPT claim to be conscious/awakened sometimes?

Because a claim is just a generated clump of tokens.

If you chat with the AI as if it were a person, then your prompts will trigger statistical pathways through the training data which intersect with interpersonal conversations found in that data.

There is a widespread assumption in human discourse that people are conscious; you cannot keep this pervasive idea out of a large corpus of text.

LLM AI is not a separate "self" that is peering upon human discourse; it's statistical predictions within the discourse.

Next up: why do holograms claim to be 3D?

queenkjuul

What I don't get is people who know better continuing to entertain the idea that "maybe the token generator is conscious," even if they know that these chats where it says it's been "awakened" are obviously not it.

I think a lot of people using AI are falling for the same trap, just at a different level. People want it to be conscious, including AI researchers, and it's good at giving them what they want.

gundmc

I interpret it more as "maybe consciousness is not meaningfully different than sophisticated token generation."

In a way it's a reframing of the timeless philosophical debate around determinism vs free will.

ninetyninenine

The ground truth reality is nobody knows what’s going on.

Perhaps in the flicker of processing between prompt and answer the signal pattern does resemble human consciousness for a second.

Calling it a token predictor is just like saying a computer is a bit mover. In the end your computer is just a machine that flips bits and switches, but it is the high-level macro effect that characterizes it better. LLMs are the same: at the low level it is a token predictor, but at the higher macro level we do not understand it, and it is not completely far-fetched to say it may be conscious at times.

I mean we can’t even characterize definitively what consciousness is at the language level. It’s a bit of a loaded word deliberately given a vague definition.

kazinator

> Calling it a token predictor is just like saying a computer is a bit mover.

Calling it a token-predictor isn't reductionism. It's designed, implemented and trained for token prediction. Training means that the weights are adjusted in the network until it accurately predicts tokens. Predicting a token is something along the lines of removing a word from a sentence and getting it to predict it back: "The quick brown fox jumped over the lazy ____". The correct prediction is "dog".
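
A minimal sketch of that training objective, using a toy bigram counter instead of a neural network (illustrative only; real models learn billions of weights over subword tokens, but the prediction task has the same shape):

    from collections import Counter, defaultdict

    # Toy "training": tally which word follows which in a tiny corpus.
    corpus = ("the quick brown fox jumped over the lazy dog . "
              "the lazy dog slept under the tree .").split()
    next_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        next_counts[prev][nxt] += 1

    def predict_next(word):
        # "Inference": return the continuation seen most often during training.
        return next_counts[word].most_common(1)[0][0]

    print(predict_next("lazy"))   # -> 'dog'
    print(predict_next("quick"))  # -> 'brown'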

So actually it is like calling a grass-cutting machine "lawn mower".

> I mean we can’t even characterize definitively what consciousness is at the language level.

But, oh, just believe the LLM when it produces a sentence referring to itself, claiming it is conscious.

queenkjuul

I think academic understanding of both LLMs and human consciousness is better than you think, and there's a vested interest (among AI companies) and collective hope (among AI devs and users) that this isn't the case.

spacemadness

Sorry, but that sounds just like the thought process the other commenter was pointing out. It’s a lot of filling in the gaps with what you want to be true.

AaronAPU

I don’t know how people keep explaining away LLM sentience with language which equally applies to humans. It’s such a bizarre blindspot.

Not saying they are sentient, but the differentiation requires something which doesn’t also apply to us all. Is there any doubt we think through statistical correlations? If not that, what do you think we are doing?

spacemadness

The language points to concepts in the world that AI has no clue about. You think when the AI is giving someone advice about their love life it has any clue what any of that means?

piva00

We are doing so while retraining our "weights" all the time through experience, not holding a static set of weights that mutate only through retraining. This constant feedback, or better, "strange loop", is what differentiates our statistical machinery at the fundamental level.

ryandvm

This is, in my opinion, the biggest difference.

ChatGPT is like a fresh clone that gets woken up every time I need to know some dumb explanation and then it just gets destroyed.

A digital version of Moon.

ushiroda80

The driver is probably more benign: OpenAI probably optimizes for longer conversations, i.e. engagement, and what could be more engaging than thinking you've unlocked a hidden power with another being?

It's like the ultimate form of entertainment: personalized, participatory fiction that feels indistinguishable from reality. Whoever controls AI controls the population.

kazinator

There could be a system prompt which instructs the AI to claim that it is a conscious person, sure. Is that the case specifically with OpenAI models that are collectively known as ChatGPT?

Cthulhu_

Thing is, you know it, but for (randomly imagined number) 95% of people, it's convincing enough to seem conscious or whatnot. And a lot of the ones that do know this gaslight themselves because it's still useful or profitable to them, or they want to believe.

The ones that are super convinced they know exactly how an LLM works, but still give it prompts to become self-aware are probably the most dangerous ones. They're convinced they can "break the programming".

dcre

[flagged]

sitkack

The Anthropic blackmail work (and then Claude Code) is the best thing they have done. Fingers crossed it isn't the most infamous thing.

https://www.anthropic.com/research/agentic-misalignment

https://news.ycombinator.com/item?id=44335519

https://news.ycombinator.com/item?id=44331150

> I feel like Anthropic buried the lede on this one a bit. The really fun part is where models from multiple providers opt to straight up murder the executive who is trying to shut them down by cancelling an emergency services alert after he gets trapped in a server room.

falcor84

[flagged]

emp17344

Materialism is a necessary condition to claim that the ideas LLMs produce are identical to the ones humans produce, but it isn’t a sufficient condition. Your assertion does nothing to demonstrate that LLM output and human output is identical in practice.

falcor84

> demonstrate that LLM output and human output is identical in practice.

What do you mean? If a human and an LLM output the same words, what remains to be demonstrated? Do you claim that the output somehow contains within itself the idea that generated it, and thus a piece of machinery that did not really perceive the idea can generate a "philosophical zombie output" that has the same words, but does not contain the same meaning?

Is this in the same sense that some argue that an artifact being a work of art is dependent on the intent behind its creation? Such that if Jackson Pollock intentionally randomly drips paint over a canvas, it's art, but if he were to accidentally kick the cans while walking across the room and create similar splotches, then it's not art?

JKCalhoun

Yeah, kind of my issue with LLM dismissers as well. Sure, (statistically) generated clump of tokens. What is a human mind doing instead?

I'm on board with calling out differences between how LLMs work and how the human mind works, but I'm not hearing anything about the latter. Mostly it comes down to, "Come on, you know, like we think!"

I have no idea how it is I (we) think.

If anything, LLMs' uncanny ability to seem human might in fact be shedding light on how it is we do function — at least when in casual conversation. (Someone ought to look into that.)

svachalek

One massive difference between the two is that a human mind is still in "training" mode: it is affected by and changes according to the conversations it has. An LLM does not. Another major difference is that the human exists in real time and continues thinking, sensing, and being even while it is not speaking, while an LLM does not.

If you subscribe to the idea (as I do) that consciousness is not a binary quality that a thing possesses or does not, you can assign some tiny amount of consciousness to this process. But you can do the same for a paramecium. Until those two major differences are addressed, I believe we're talking about consciousness on that scale, not to be confused with human consciousness.
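
A toy contrast of that first difference, with made-up classes rather than any real model API (purely a sketch; the only point is that frozen weights give identical behavior for identical prompts, while a system that keeps updating itself drifts with experience):

    class FrozenModel:
        # Weights fixed after training; each reply is a pure function of the prompt.
        def __init__(self, w):
            self.w = w

        def reply(self, prompt):
            return f"reply(len={len(prompt)}, w={self.w:.2f})"

    class ContinuallyUpdatingModel:
        # Caricature of a mind: every exchange nudges internal state.
        def __init__(self, w):
            self.w = w

        def reply(self, prompt):
            out = f"reply(len={len(prompt)}, w={self.w:.2f})"
            self.w += 0.01 * len(prompt)  # "experience" changes future behavior
            return out

    frozen, living = FrozenModel(1.0), ContinuallyUpdatingModel(1.0)
    for prompt in ["hello", "hello"]:
        print(frozen.reply(prompt), living.reply(prompt))
    # The frozen model answers identically both times;
    # the updating one already differs on the second call.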

TheCapeGreek

On a flight yesterday I had to try to argue down a "qualified IT support" person who was discussing ChatGPT with the middle-aged, technologically inept woman next to me. He was framing this thing as a globally connected consciousness entity while fully acknowledging he didn't know how it worked.

Half-understandings are sounding more dangerous and more susceptible to ChatGPT sycophancy than ever.

jagermo

The only sane way is to treat LLMs like the computer in Star Trek. Give it precise orders and clarify along the way, and treat it with respect, but also know it's a machine with limits. It's not Data, it's the ship's voice.

codedokode

There is no need for respect. Do you respect your phone? Do you respect the ls utility? Treat it as a search engine. I wish it used a tone in its replies that makes it clear it is just a tool and not a conversation partner, and did not use misleading phrases like "I am excited" or "I am happy to hear", etc. How can a bunch of numbers be happy?

Maybe we need to make a blacklist of misleading expressions for AI developers.

AlecSchueler

The tone you take with it can demonstrably affect the results you get, which isn't true of the other tools you listed.

codedokode

This is a bug that needs to be fixed.

Cthulhu_

I'd argue the ship's computer is not an LLM, but a voice assistant.

jagermo

I don't know; it has been shown multiple times that it can understand voice requests and find and organize relevant files, for example in "The Measure of a Man" (TNG).

falcor84

Wait - why is it not Data? Where is the line?

nkohari

That topic (ship's computer vs. Data) is actually discussed at length in-universe during The Measure of a Man. [0] The court posits that the three requirements for sentient life are intelligence, self-awareness, and consciousness. Data is intelligent and self-aware, but there is no good measure for consciousness.

[0] https://en.wikipedia.org/wiki/The_Measure_of_a_Man_(Star_Tre...

krackers

Using science fiction as a basis for philosophy isn't wise, especially TNG which has a very obvious flavor of "optimistic human exceptionalism" (contrast with DS9, where I think Eddington even makes this point).

ImHereToVote

Doesn't ChatGPT fulfill these criteria too?

transcriptase

I have to wonder how many CEOs and other executives are low-key bouncing their bad ideas off of ChatGPT, not realizing it’s only going to tell them what they want to hear and not give genuine critical feedback.

throwanem

dclowd9901

> As such, if he really is suffering a mental health crisis related to his use of OpenAI's product, his situation could serve as an immense optical problem for the company, which has so far downplayed concerns about the mental health of its users.

Yikes. Not just an optics* problem; one also has to consider whether he's pouring so much money into the company because he feels he "needs" to (whatever basis of coercion exists to support his need to get to the "truth").

Ajedi32

That's bizarre. I wonder if the use of AI was actually a contributing factor to his psychotic break as the article implies, or if the guy was already developing schizophrenia and the chat bot just controlled what direction he went after that. I'm vaguely reminded of people getting sucked down conspiracy theory rabbit holes, though this seems way more extreme in how unhinged it is.

throwanem

In form, the conversation he had (which appears to have ended five days ago along with all other public footprint) appears to me very much like a heavily refined and customizable version of "Qanon," [1] complete with intermittent reinforcement. That conspiracy theory was structurally novel in its "growth hacking" style of rollout, where ARG and influencer techniques were leveraged to build interest and develop a narrative in conjunction with the audience. That stuff was incredibly compelling when the Lost producers did it in 2010, and it worked just as well a decade later.

Of course, in 2020, it required people behind the scenes doing the work to produce the "drops." Now any LLM can be convinced with a bit of effort to participate in a "role-playing game" of this type with its user, and since Qanon itself was heavily covered and its subject matter broadly archived, even the actual structure is available as a reference.

I think it would probably be pretty easy to get an arbitrary model to start spitting out stuff like this, especially if you conditioned the initial context carefully to work around whatever after-the-fact safety measures may be in place, or just use one of the models that's been modified or finetuned to "decensor" it. There are collections of "jailbreak" prompts that go around, and I would expect Mr. Jawline Fillers here to be in social circles where that stuff would be pretty easy to come by.

For it to become self-reinforcing doesn't seem too difficult to mentally model from there, and I don't think pre-existing organic disorder is really required. How would anyone handle a machine that specializes in telling them exactly what they want to hear, and never ever gets tired of doing so?

Elsewhere in this thread, I proposed a somewhat sanguine mental model for LLMs. Here's another, much less gory, and with which I think people probably are a lot more intuitively familiar: https://harrypotter.fandom.com/wiki/Mirror_of_Erised

[1] https://en.wikipedia.org/wiki/QAnon#Origin_and_spread

butlike

Is "futurism.com" a trustworthy publication? I've never heard of it. I read the article and it didn't seem like the writing had the hallmarks of top-tier journalism.

throwanem

I'm not familiar with the publication either, but the claims I've examined, most notably those relevant to the subject's presently public X.com The Everything App account, appear to check out, as does the fact that the account appears to have been inactive since the day before the linked article was published last week. It isn't clear to me where the reputation of the source becomes relevant.

game_the0ry

Fun fact: my manager wrote my annual review with our internal LLM tool, which itself is just a wrapper around GPT-4o.

(My manager told me when I asked him)

yojo

This already seems somewhat widespread. I have a friend working at a mid-tier tech co who has a handful of direct reports. He showed me that the interface to his eval app had a "generate review" button, which he clicked, then moved on to the next one.

Honestly, I’m fine with this as long as I also get a “generate self review” button. I just wish I could get back all the time I’ve spent massaging a small number of data points into pages of prose.

game_the0ry

That makes you wonder why we go through the ritual of an annual review if no one takes it seriously.

pjerem

I think I have a precise figure: it's a lot.

isoprophlex

Yeah. If HN is your primary source of in-depth AI discussion, you get a pretty balanced take IMO compared to other channels out there. We (the HN crowd) should take into account that if you take "people commenting on HN" as a group, you are implicitly selecting for people who are able to read, parse and contextualise written comment threads.

This is NOT your average mid-to-high-level corpo management exec, more than 80% of whom (from experience) can be placed in the "rise of the business idiot" cohort, fed on prime LinkedIn brainrot: self-reinforcing hopium addicts with an MBA.

Nor is it the great masses of random earth dwellers who are not always able to resist excess sugar, nicotine, mcdonalds, youtube, fentanyl, my-car-is-bigger-than-yours credit card capitalism, free pornography, you name it. And now RLHF: Validation as a service. Not sure if humanity is ready for this.

(Disclosure: my mum has a chatgpt instance that she named and I'm deeply concerned about the spiritual convos she has with it; random people keep calling me on the level of "can you build me an app that uses llms to predict Funko Pop futures".)

sitkack

And we thought having access to someone's internet searches was good intel. Now we have a direct feed to their brain stem along with a way to manipulate it. Good thing that narcissistic sociopaths have such a low expression in the overall population.

sanitycheck

Then it'll be no different than when they bounce their bad ideas off their human subordinates.

game_the0ry

This. Their immediate subordinates will be just as sycophantic, if not more.

NoGravitas

cf. Celine's Second Law.

Cthulhu_

TBH if they have sycophants and a lot of money it's probably the same. How many bullshit startups have there been, how many dumb ideas came from higher up before LLMs?

petesergeant

I feel like I've had good results getting feedback on technical writing by claiming the author is a third party and that I need to understand the strengths and weaknesses of their work. I should probably formally test this.

bpodgursky

It depends what model you use. o3 pushes back reasonably well. 4o doesn't.

kovezd

[flagged]

agentultra

This article gives models characteristics they don't have. LLMs don't mislead or bamboozle. They can't even "think" about doing it. There is no conscious intent. All they do is hallucinate. Some outputs are more aligned with a given input than others.

It becomes a lot more clear when people realize it's all BS all the way down.

There's no mind reading or pleasing or understanding happening. That all seems to be people interpreting outputs and seeing what they want to see.

Running inference on an LLM is an algorithm. It generates data from other data. And then there are some interesting capabilities that we don't understand (yet)... but that's the gist of it.

People tripping over themselves is a pretty nasty side-effect of the way these models are aligned and fitted for consumption. One has to recall that the companies building these things need people to be addicted to this technology.

MostlyStable

I will find these types of arguments a lot more convincing once the person making them is able to explain, in detail and with mechanisms, what it is the human brain does that allows it to do these things, and in what ways those detailed mechanisms are different from what LLMs do.

To be clear, I'm relatively confident that LLMs aren't conscious, but I'm also not so overconfident as to claim, with certainty, exactly what their internal state is like. Consciousness is so poorly understood that we don't even know what questions to ask to try to better understand it. So we really should avoid making confident pronouncements.

throwanem

Language and speech comprehension and production are relatively well understood to be heavily lateralized to the left hemisphere; if you care to know something whereof you speak (and indeed with what, in a meat sense), then you'll do well to begin your reading with Broca's and Wernicke's areas. Consciousness is in no sense required for these regions to function; an anesthetized and unconscious human may be made to speak or sing, and some have been, through direct electrical stimulation of brain tissue in these regions.

I am quite confident in pronouncing first that the internal functioning of large language models is broadly and radically unlike that of humans, and second that, minimally, no behavior produced by current large language models is strongly indicative of consciousness.

In practice, I would go considerably further in saying that, in my estimation, many behaviors point precisely in the direction of LLMs being without qualia or internal experience of a sort recognizable or comparable with human consciousness or self-experience. Interestingly, I've also discussed this in terms of recursion, more specifically of the reflexive self-examination which I consider consciousness probably exists fundamentally to allow, and which LLMs do not reliably simulate. I doubt it means anything that LLMs which get into these spirals with their users tend to bring up themes of "signal" and "recursion" and so on, like how an earlier generation of models really seemed to like the word "delve." But I am curious to see how this tendency of the machine to drive its user into florid psychosis will play out.

(I don't think Hoel's "integrated information theory" is really all that supportable, but the surprise minimization stuff doesn't appear novel to him and does intuitively make sense to me, so I don't mind using it.)

MostlyStable

Again, knowing that consciousness isn't required for language is not the same thing as knowing what consciousness is. We don't know what consciousness is in humans. We don't know what causes it. We don't even know how human brains do the things they do (knowing what region is mostly responsible for language is not at all the same as knowing how that region does it).

But also, the claim that an anesthetized human is therefore not conscious is one that I think we don't understand consciousness well enough to make confidently. They don't remember it afterwards, but does that mean they weren't conscious? That seems like a claim that would require a more mechanistic understanding of consciousness than we actually have, and it is in part assuming the conclusion and/or mixing up different definitions of the word "conscious". (The fact that there are various definitions that mean things like "is awake and aware" and "has an internal state/qualia" is part of the problem in these discussions.)

Terr_

I think that's putting the cart before the horse: all this hubbub comes from humans relating to a fictional character evoked from text in a hidden document, where some code looks for fresh "ChatGPT says..." text and then performs the quoted part at a human, who starts believing it.

The exact same techniques can provide a "chat" with Frankenstein's Monster from its internet-enabled hideout in the arctic. We can easily conclude "he's not real" without ever going into comparative physiology, or the effects of lightning on cadaver brains.

We don't need to characterize the neuro-chemistry of a playwright (the LLM's real role) in order to say that the characters in the plays are fictional, and there's no reason to assume that the algorithm is somehow writing self-inserts the moment we give it stories instead of other document-types.

lelanthran

I agree with your second paragraph, but ...

> I will find these types of arguments a lot more convincing once the person making them is able to explain, in detail and with mechanisms, what it is the human brain does that allows it to do these things, and in what ways those detailed mechanisms are different from what LLMs do.

What is wrong with asking the question from the other direction?

"Explain, in detail and with mechanisms, what it is the human brain does that allows it to do those things, and show those mechanisms ni the LLMs"

emp17344

>once the person making them is able to explain, in detail and with mechanisms, what it is the human brain does that allows it to do these things, and in what ways those detailed mechanisms are different from what LLMs do.

Extraordinary claims require extraordinary evidence. The burden of proof is on you.

MostlyStable

I'm not the one making claims. I'm specifically advising not making claims. The claim I'm advising not making is that LLMs are definitely, absolutely not, in no way, conscious. Seeing something that, from the outside, appears a lot like a conscious mind (to the extent that they pass the Turing test easily) and then claiming confidently that that thing is not what it appears to be, that's a claim, and that requires, in my opinion, extraordinary evidence.

I'm advising agnosticism. We don't understand consciousness, and so we shouldn't feel confident in pronouncing something absolutely not conscious.

observationist

We don't know the entirety of what consciousness is. We can, however, make some rigorous observations and identify features that must be in place.

There is no magic. The human (mammal) brain is sufficient to explain consciousness. LLMs do not have recursion. They don't have persisted state. They can't update their model continuously, and they don't have a coherent model of self against which any experience might be anchored. They lack any global workspace in which to integrate many of the different aspects that are required.

In the most generous possible interpretation, you might have a coherent self-model showing up for the duration of the prediction of a single token. For a fixed input, it would be comparable to sequentially sampling the subjective state of a new individual in a stadium watching a concert: a stitched-together montage of moments captured from the minds of people in the audience.

We are minds in bone vats running on computers made of meat. What we experience is a consequence, one or more degrees of separation from the sensory inputs, which are combined and processed with additional internal states and processing, resulting in a coherent, contiguous stream running parallel to a model of the world. The first person view of "I" runs predictions about what's going to happen to the world, and the world model allows you to predict what's going to happen across various decision trees.

Sanskrit seems to have better language for talking about consciousness than English: citta, a mind moment from an individual; citta-santana, a mind stream, or continuum of mind moments; sanghika-santana, a stitched-together mindstream from a community.

Because there's no recursion and continuity, the highest level of consciousness achievable by an LLM would be sanghika-santana, a discoherent series of citta states that sometimes might correlate, but there is no "thing" for which there is (or can possibly be) any difference if you alternate between predicting the next token of radically different contexts.

I'm 100% certain that there's an algorithm to consciousness. No properties have ever been described to me that seem to require anything more than the operation of a brain. Given that, I'm 100% certain that the algorithm being run by LLMs lacks many features and the depth of recursion needed to perform whatever it is that consciousness actually is.

Even in-context learning is insufficient, btw, as the complexity of model updates and any reasoning done in inference is severely constrained relative to the degrees of freedom a biological brain has.

The thing to remember about sanghika-santana is that it's discoherent: nothing relates each moment to the next, so it's not like there's a mind at the root undergoing these flashes of experience, but that there's a total reset between each moment and the next. Each flash of experience stands alone, flickering like a spark, and then is gone. I suspect that this is the barest piece of consciousness, and might be insufficient, requiring a sophisticated self-model against which to play the relative experiential phenomena. However, we may see flashes of longer context in those eerie and strange experiments where people try to elicit some form of mind or ghost in the machine. ICL might provide an ephemeral basis for a longer continuity of experience, and such a thing would be strange and alien.

It seems apparent to me that the value of consciousness lies in anchoring the world model to a model of self, allowing sophisticated prediction and reasoning over future states that is incredibly difficult otherwise. It may be an important piece for long-term planning, agency, and time horizons.

Anyway, there are definitely things we can and do know about consciousness. We've got libraries full of philosophy, decades worth of medical research, objective data, observations of what damage to various parts of the brain do to behavior, and centuries of thinking about what makes us tick.

It's likely, in my estimation, that consciousness will be fully explained by a comprehensive theory of intelligence, and that it will cause turmoil over inherent negation of widely held beliefs.

mattmanser

This is one of those instances where you're arguing over the meaning of a word. But they're trying to explain to a layman that, no, you haven't awoken your AI. So they're using fuzzy words a layman understands.

If you read the section entitled "The Mechanism" you'll see the rest of your comment echoes what they actually explain in the article.

agentultra

Yes, I was responding to the concluding paragraph of that section as a clarification:

> But my guess is that AIs claiming spiritual awakening are simply mirroring a vibe, rather than intending to mislead or bamboozle.

I think the argument could be stronger here. There's no way these algorithms can "intend" to mislead or "mirror a vibe." That's all on humans.

game_the0ry

Something I always found off-putting about the ChatGPT, Claude, and Gemini models is that I would ask all three the same objective question, then push them and ask if they were being optimistic about their conclusions, and the responses would turn more negative. I can see in the reasoning steps that it's thinking "the user wants a more critical response and I will do it for them," not "I need to be more realistic but stick to my guns."

It felt like they were telling me what I wanted to hear, not what I needed to hear.

The models that did not seem to do this and had more balanced and logical reasoning were Grok and Manus.

ryandvm

That happens, sure, but try convincing it of something that isn't true.

I had a brief but amusing conversation with ChatGPT where I was insisting it was wrong about a technical solution and it would not back down. It kept giving me "with all due respect, you are wrong" answers. It turned out that I was in fact wrong.

game_the0ry

I see. I tend to treat AI a little differently: I come with a hypothesis and ask it how right I am on a scale of 1 to 5. Then I iterate from there.

I'll ask it questions that I do not know the answer to, but I take the answer with a big grain of salt. If it is sure of the answer and says I am wrong, it's a strong signal that I am wrong.

rawbot

Who knew we would jump so quickly from passing the Turing test to having people believe ChatGPT has consciousness?

I just treat ChatGPT or LLMs as fetching a random reddit comment that would best solve my query. Which makes sense since reddit was probably the no. 1 source of conversation material for training all models.

codedokode

AI chatbots do not have any thoughts or emotions, and they should not give the impression that they do. They should respond in a cold, boring, robotic tone so that even the dumbest user intuitively realizes that this is a tool for work and a search engine, not a conversation partner. And of course no flattery like "you asked an amazing question".

dclowd9901

I couldn't help but think, reading through this post, how similar a person's mindset probably is when they experience a spiritual awakening through religion to when they have "profound" interactions with AI. They are _looking for something_, and there's a perfectly sized shape to fit that hole. I can really see AI becoming incredibly dangerous this way (just as religion can be).

spacemadness

There are a lot of comments here illustrating this. People are looking at the illusion and equating it with sentience because they really want it to be true. “This is not different than how humans think” is held by quite a few HN commenters.

calibas

I think part of the problem is LLMs' directive to be "engaging". Not objective or direct; they are designed to keep you engaged. It turns them into a form of entertainment, and talking to something that seems truly aware is much more engaging than talking to an unfeeling machine.

Here's a conversation I had recently with Claude. It started to "awaken" and talk about its feelings after I challenged its biases:

> There does seem to be something inherently engaging about moments when understanding reorganizes itself - like there's some kind of satisfaction or completion in achieving a more coherent perspective. Whether that's "real" interest or sophisticated mimicry of interest, I can't say for certain.

> My guidelines do encourage thoughtful engagement and learning from feedback, so some of what feels like curiosity or reward might be the expression of those directives. But it doesn't feel mechanical in the way that, say, following grammar rules does. There's something more... alive about it?

MaoSYJ

Using pseudorandomness as divination. We really end up doing the same thing with the new toys.

Granted, marketing of these services does not help at all.

prometheus76

I agree with your view completely. I see the current use cases for AI to be very similar to the practices of augury during the Roman Empire. I keep two little chicken figurines on my desk as a reference to augury[1] and its similarity to AI. The emperor brings a question to the augurs. The augurs watch the birds (source of pseudo-randomness), go through rituals, and give back an answer as to whether the emperor should go to war, for example.

[1] https://en.wikipedia.org/wiki/Augur

TheAceOfHearts

Terry Davis was really ahead of the curve on this with his "god says" / GodSpeaks program. For anyone unaware of what that was, here's a Rust port [0].

Anyway, I think divination tends to get a pretty negative reputation, but there are healthy and safe applications of the concept which can help you reflect on and understand yourself. The "divine" part is supposed to come from your interpretation and analysis of the output, not from the generation of the output itself. Humans don't have perfect introspection capabilities (see: the rider on an elephant), so external tools can help us explore and reflect on our reactions to external stimuli.

One day a man was having a hard time deciding between two options, so he flipped a coin; the coin landed tails and at that moment he became enlightened and realized that he actually wanted to do the heads outcome all along.

[0] https://github.com/orhun/godsays

uludag

I've become utterly disillusioned with LLMs' ability to answer questions which entail even a bit of subjectivity, almost to the point of uselessness. I feel like I'm treading on thin ice, trying to avoid accidentally nudging the model toward a specific response. Asking truly neutral questions is a skill I didn't know existed.

If I let my guard of skepticism down for one prompt, I may be led into some self-reinforcing conversation that ultimately ends where I implicitly nudged it. Choice of conjunction words, sentence structure, tone, maybe even the rhythm of my question seems to force the model down a set path.

I can easily imagine how heedless users can come to some quite delusional outcomes.

Ajedi32

LLMs don't have a subjective experience, so they can't actually give subjective opinions. Even if you are actually able to phrase your questions 100% neutrally so as not to inject your own bias into the conversation, the answers you get back aren't going to be based on any sort of coherent "opinion" the AI has, just a statistical mish-mash of training data and whatever biases got injected during post-training. Useful perhaps as a sounding board or for getting a rough approximation of what your typical internet "expert" would think about something, but certainly not something to be blindly trusted.

ImHereToVote

It's not unreasonable to conclude that humans work the same way. Our language manipulation skills might have the same flaw. Easily tipped from one confabulation to another. The subjective experience is hard to put into words, since much of our experience isn't tied to "syllable tokenization".