Skip to content(if available)orjump to list(if available)

Tracing the thoughts of a large language model

marcelsalathe

I’ve only skimmed the paper - a long and dense read - but it’s already clear it’ll become a classic. What’s fascinating is that engineering is transforming into a science, trying to understand precisely how its own creations work

This shift is more profound than many realize. Engineering traditionally applied our understanding of the physical world, mathematics, and logic to build predictable things. But now, especially in fields like AI, we’ve built systems so complex we no longer fully understand them. We must now use scientific methods - originally designed to understand nature - to comprehend our own engineered creations. Mindblowing.

ctoth

This "practice-first, theory-later" pattern has been the norm rather than the exception. The steam engine predated thermodynamics. People bred plants and animals for thousands of years before Darwin or Mendel.

The few "top-down" examples where theory preceded application (like nuclear energy or certain modern pharmaceuticals) are relatively recent historical anomalies.

marcelsalathe

I see your point, but something still seems different. Yes we bred plants and animals, but we did not create them. Yes we did build steam engines before understanding thermodynamics but we still understood what they did (heat, pressure, movement, etc.)

Fun fact: we have no clue how most drugs works. Or, more precisely, we know a few aspects, but are only scratching the surface. We're even still discovering news things about Aspirin, one of the oldest drugs: https://www.nature.com/articles/s41586-025-08626-7

tmp10423288442

> Yes we did build steam engines before understanding thermodynamics but we still understood what it did (heat, pressure, movement, etc.)

We only understood in the broadest sense. It took a long process of iteration before we could create steam engines that were efficient enough to start an Industrial Revolution. At the beginning they were so inefficient that they could only pump water from the same coal mine they got their fuel from, and subject to frequent boiler explosions besides.

mystified5016

We laid transatlantic telegraph wires before we even had a hint of the physics involved. It create the entire field of transmission and signal theory.

Shannon had to invent new physics to explain why the cables didn't work as expected.

arijo

Almost all civil, chemical, electrical, etc., engineering emerged from a practice-first, theory-later evolution.

pclmulqdq

Most of what we refer to as "engineering" involves using principles that flow down from science to do stuff. The return to the historic norm is sort of a return to the "useful arts" or some other idea.

adastra22

We don’t create LLMs either. We evolve/train them. I think the comparison is closer than you think.

ants_everywhere

This isn't quite true, although it's commonly said.

For steam engines, the first commercial ones came after and were based on scientific advancements that made them possible. One built in 1679 was made by an associate of Boyle, who discovered Boyle's law. These early steam engines co-evolved with thermodynamics. The engines improved and hit a barrier, at which point Carnot did his famous work.

This is putting aside steam engines that are mostly curiosities like ones built in the ancient world.

See, for example

- https://en.wikipedia.org/wiki/Thermodynamics#History

- https://en.wikipedia.org/wiki/Steam_engine#History

karparov

It's been there in programming from essentially the first day too. People skip the theory and just get hacking.

Otherwise we'd all be writing Haskell now. Or rather we'd not be writing anything since a real compiler would still have been to hacky and not theoretically correct.

I'm writing this with both a deep admiration as well as practical repulsion of C.S. theory.

cryptonector

Canons and archery and catapults predated Newtonian classical mechanics.

cuttothechase

This is definitely a classic for story telling but it appears to be nothing more than hand wavy. Its a bit like there is the great and powerful man behind the curtain, lets trace the thought of this immaculate being you mere mortals. Anthropomorphing seems to be in an overdose mode with "thinking / thoughts", "mind" etc., scattered everywhere. Nothing with any of the LLMs outputs so far suggests that there is anything even close enough to a mind or a thought or anything really outside of vanity. Being wistful with good story telling does go a long way in the world of story telling but in actually understanding the science, I wouldn't hold my breath.

colah3

Thanks for the feedback! I'm one of the authors.

I just wanted to make sure you noticed that this is linking to an accessible blog post that's trying to communicate a research result to a non-technical audience?

The actual research result is covered in two papers which you can find here:

- Methods paper: https://transformer-circuits.pub/2025/attribution-graphs/met...

- Paper applying this method to case studies in Claude 3.5 Haiku: https://transformer-circuits.pub/2025/attribution-graphs/bio...

These papers are jointly 150 pages and are quite technically dense, so it's very understandable that most commenters here are focusing on the non-technical blog post. But I just wanted to make sure that you were aware of the papers, given your feedback.

AdieuToLogic

The post to which you replied states:

  Anthropomorphing[sic] seems to be in an overdose mode with 
  "thinking / thoughts", "mind" etc., scattered everywhere. 
  Nothing with any of the LLMs outputs so far suggests that 
  there is anything even close enough to a mind or a thought 
  or anything really outside of vanity.
This is supported by reasonable interpretation of the cited article.

Considering the two following statements made in the reply:

  I'm one of the authors.
And

  These papers are jointly 150 pages and are quite 
  technically dense, so it's very understandable that most 
  commenters here are focusing on the non-technical blog post.
The onus of clarifying the article's assertions:

  Knowing how models like Claude *think* ...
And

  Claude sometimes thinks in a conceptual space that is 
  shared between languages, suggesting it has a kind of 
  universal “language of thought.”
As it pertains to anthropomorphizing an algorithm (a.k.a. stating it "thinks") is on the author(s).

hustwindmaple1

Really appreciate your team's enormous efforts in this direction, not only the cutting edge research (which I don't see OAI/DeepMind publishing any paper on) but aslo making the content more digestible for non-research audience. Please keep up the great work!

astrange

I, uh, think, that "think" is a fine metaphor but "planning ahead" is a pretty confusing one. It doesn't have the capability to plan ahead because there is nowhere to put a plan and no memory after the token output, assuming the usual model architecture.

That's like saying a computer program has planned ahead if it's at the start of a function and there's more of the function left to execute.

cbolton

I think that's a very unfair take. As a summary for non-experts I found it did a great job of explaining how by analyzing activated features in the model, you can get an idea of what it's doing to produce the answer. And also how by intervening to change these activations manually you can test hypotheses about causality.

It sounds like you don't like anthropomorphism. I can relate, but I don't get where Its a bit like there is the great and powerful man behind the curtain, lets trace the thought of this immaculate being you mere mortals is coming from. In most cases the anthropomorphisms are just the standard way to convey the idea briefly. Even then I liked how they sometimes used scare quotes as in it began "thinking" of potential on-topic words. There are some more debatable anthropomorphisms such as "in its head" where they use scare quotes systematically.

Also given that they took inspiration from neuroscience to develop a technique that appears successful in analyzing their model, I think they deserve some leeway on the anthropomorphism front. Or at least on the "biological metaphors" front which is maybe not really the same thing.

I used to think biological metaphors for LLMs were misleading, but I'm actually revising this opinion now. I mean I still think the past metaphors I've seen were misleading, but here, seeing the activation pathways they were able to identify, including the inhibitory circuits, and knowing a bit about similar structures in the brain I find the metaphor appropriate.

rob74

Yup... well, if the research is conducted (or sponsored) by the company that develops and sells the LLM, of course there will be a temptation to present their product in a better light and make it sound like more than it actually is. I mean, the anthropomorphization starts already with the company name and giving the company's LLM a human name...

0xbadcafebee

Engineering started out as just some dudes who built things from gut feeling. After a whole lot of people died from poorly built things, they decided to figure out how to know ahead of time if it would kill people or not. They had to use math and science to figure that part out.

Funny enough that happened with software too. People just build shit without any method to prove that it will not fall down / crash. They throw some code together, poke at it until it does something they wanted, and call that "stable". There is no science involved. There's some mathy bits called "computer science" / "software algorithms", but most software is not a math problem.

Software engineering should really be called "Software Craftsmanship". We haven't achieved real engineering with software yet.

slfnflctd

You have a point, but it is also true that some software is far more rigorously tested than other software. There are categories where it absolutely is both scientific and real engineering.

I fully agree that the vast majority is not, though.

AdieuToLogic

This is such an unbelievably dismissive assertion, I don't even know where to start.

To suggest, nay, explicitly state:

  Engineering started out as just some dudes who built things 
  from gut feeling.

  After a whole lot of people died from poorly built things, 
  they decided to figure out how to know ahead of time if it 
  would kill people or not.
Is to demean those who made modern life possible. Say what you want about software developers and I would likely agree with much of the criticism.

Not so the premise set forth above regarding engineering professions in general.

0xbadcafebee

Surely you already know the history of professional engineers, then? How it's only a little over 118 years old? Mostly originating from the fact that it was charlatans claiming to be engineers, building things that ended up killing people, that inspired the need for a professional license?

"The people who made modern life possible" were not professional engineers, often barely amateurs. Artistocrat polymaths who delved into cutting edge philosophy. Blacksmith craftsmen developing new engines by trial and error. A new englander who failed to study law at Yale, landed in the American South, and developed a modification of an Indian device for separating seed from cotton plants.

In the literal historical sense, "engineering" was just the building of cannons in the 14th century. Since thousands of years before, up until now, there has always been a combination of the practice of building things with some kind of "science" (which itself didn't exist until a few hundred years ago) to try to estimate the result of an expensive, dangerous project.

But these are not the people who made modern life people. Lots, and lots, and lots of people made modern life possible. Not just builders and mathematicians. Receptionists. Interns. Factory workers. Farmers. Bankers. Sailors. Welders. Soldiers. So many professions, and people, whose backs and spirits were bent or broken, to give us the world we have today. Engineers don't deserve any more credit than anyone else - especially considering how much was built before their professions were even established. Science is a process, and math is a tool, that is very useful, and even critical. But without the rest it's just numbers on paper.

icsa

Software Engineering is only about 60 years old - i.e. the term has existed. At the point in the history of civil engineering, they didn't even know what a right angle was. Civil engineers were able to provide much utility before the underlying theory was available. I do wonder about the safety of structures at the time.

Henchman21

Total aside here:

What about modern life is so great that we should laud its authors?

Medical advances and generally a longer life is what comes to mind. But much of life is empty of meaning and devoid of purpose; this seems rife within the Western world. Living a longer life in hell isn’t something I would have chosen.

null

[deleted]

kazinator

We've already built things in computing that we don't easily understand, even outside of AI, like large distributed systems and all sorts of balls of mud.

Within the sphere of AI, we have built machines which can play strategy games like chess, and surprise us with an unforseen defeat. It's not necessarily easy to see how that emerged from the individual rules.

Even a compiler can surprise you. You code up some optimizations, which are logically separate, but then a combination of them does something startling.

Basically, in mathematics, you cannot grasp all the details of a vast space just from knowing the axioms which generate it and a few things which follow from them. Elementary school children know what is a prime number, yet those things occupy mathematicians who find new surprises in that space.

TeMPOraL

Right, but this is somewhat different, in that we apply a simple learning method to a big dataset, and the resulting big matrix of numbers suddenly can answer question and write anything - prose, poetry, code - better than most humans - and we don't know how it does it. What we do know[0] is, there's a structure there - structure reflecting a kind of understanding of languages and the world. I don't think we've ever created anything this complex before, completely on our own.

Of course, learning method being conceptually simple, all that structure must come from the data. Which is also profound, because that structure is a first fully general world/conceptual model that we can actually inspect and study up close - the other one being animal and human brains, which are much harder to figure out.

> Basically, in mathematics, you cannot grasp all the details of a vast space just from knowing the axioms which generate it and a few things which follow from them. Elementary school children know what is a prime number, yet those things occupy mathematicians who find new surprises in that space.

Prime numbers and fractals and other mathematical objects have plenty of fascinating mysteries and complex structures forming though them, but so far none of those can casually pass Turing test and do half of my job for me, and millions other people.

--

[0] - Even as many people still deny this, and talk about LLMs as mere "stochastic parrots" and "next token predictors" that couldn't possibly learn anything at all.

karparov

> and we don't know how it does it

We know quite well how it does it. It's applying extrapolation to its lossily compressed representation. It's not magic and especially the HN crowd of technical profficient folks should stop treating it as such.

hansmayer

If you don't mind - based on what will this "paper" become a classic? Was it published in a well known scientific magazine, after undergoing a stringent peer-review process, because it is setting up and proving a new scientific hypothesis? Because this is what scientific papers look like. I struggle to identify any of those characteristics, except for being dense and hard to read, but that's more of a correlation, isn't it?

latemedium

I'm reminded of the metaphor that these models aren't constructed, they're "grown". It rings true in many ways - and in this context they're like organisms that must be studied using traditional scientific techniques that are more akin to biology than engineering.

dartos

Sort of.

We don’t precisely know the most fundamental workings of a living cell.

Our understanding of the fundamental physics of the universe has some hold.

But for LLMs and statistical models in general, we do know precisely what the fundamental pieces do. We know what processor instructions are being executed.

We could, given enough research, have absolutely perfect understanding of what is happening in a given model and why.

Idk if we’ll be able to do that in the physical sciences.

wrs

Having spent some time working with both molecular biologists and LLM folks, I think it's pretty good analogy.

We know enough quantum mechanics to simulate the fundamental workings of a cell pretty well, but that's not a route to understanding. To explain anything, we need to move up an abstraction hierarchy to peptides, enzymes, receptors, etc. But note that we invented those categories in the first place -- nature doesn't divide up functionality into neat hierarchies like human designers do. So all these abstractions are leaky and incomplete. Molecular biologists are constantly discovering mechanisms that require breaking the current abstractions to explain.

Similarly, we understand floating point multiplication perfectly, but when we let 100 billion parameters set themselves through an opaque training process, we don't have good abstractions to use to understand what's going on in that set of weights. We don't have even the rough equivalent of the peptides or enzymes level yet. So this paper is progress toward that goal.

frontfor

I don’t think this is as profound as you made out to be. Most complex systems are incomprehensible to the majority of population anyway, so from a practical standpoint AI is no different. There’s also no single theory for how the financial markets work and yet market participants trade and make money nonetheless. And yes, we created the markets.

auggierose

It's what mathematicians have been doing since forever. We use scientific methods to understand our own creations / discoveries.

What is happening is that everything is becoming math. That's all.

karparov

It's the exact opposite of math.

Math postulates a bunch of axioms and then studies what follows from them.

Natural science observes the world and tries to retroactively discover what laws could describe what we're seeing.

In math, the laws come first, the behavior follows from the laws. The laws are the ground truth.

In science, nature is the ground truth. The laws have to follow nature and are adjusted upon a mismatch.

(If there is a mismatch in math then you've made a mistake.)

auggierose

No, the ground truth in math is nature as well.

Which axioms are interesting? And why? That is nature.

Yes, proof from axioms is a cornerstone of math, but there are all sorts of axioms you could assume, and all sorts of proofs to do from them, but we don't care about most of them.

Math is about the discovery of the right axioms, and proof helps in establishing that these are indeed the right axioms.

331c8c71

> Math postulates a bunch of axioms and then studies what follows from them.

That's how math is communicated eventually but not necessarily how it's made (which is about exploration and discovery as well).

cadamsdotcom

So many highlights from reading this. One that stood out for me is their discovery that refusal works by inhibition:

> It turns out that, in Claude, refusal to answer is the default behavior: we find a circuit that is "on" by default and that causes the model to state that it has insufficient information to answer any given question. However, when the model is asked about something it knows well—say, the basketball player Michael Jordan—a competing feature representing "known entities" activates and inhibits this default circuit

Many cellular processes work similarly ie. there will be a process that runs as fast as it can and one or more companion “inhibitors” doing a kind of “rate limiting”.

Given both phenomena are emergent it makes you wonder if do-but-inhibit is a favored technique of the universe we live in, or just coincidence :)

colah3

Hi! I'm one of the authors.

There certainly are many interesting parallels here. I often think about this from the perspective of systems biology, in Uri Alon's tradition. There are a range of graphs in biology with excitation and inhibitory edges -- transcription networks, protein networks, networks of biological neurons -- and one can study recurring motifs that turn up in these networks and try to learn from them.

It wouldn't be surprising if some lessons from that work may also transfer to artificial neural networks, although there are some technical things to consider.

cadamsdotcom

Agreed! So many emergent systems in nature achieve complex outcomes without central coordination - from cellular level to ant colonies & beehives. There are bound to be implications for designed systems.

Closely following what you guys are uncovering through interpretability research - not just accepting LLMs as black boxes. Thanks to you & the team for sharing the work with humanity.

Interpretability is the most exciting part of AI research for its potential to help us understand what’s in the box. By way of analogy, centuries ago farmers’ best hope for good weather was to pray to the gods! The sooner we escape the “praying to the gods” stage with LLMs the more useful they become.

ttw44

This all feels familiar to the principle of least action found in physics.

gradascent

Then why do I never get an “I don’t know” type response when I use Claude, even when the model clearly has no idea what it’s talking about? I wish it did sometimes.

hun3

Quoting a paragraph from OP (https://www.anthropic.com/research/tracing-thoughts-language...):

> Sometimes, this sort of “misfire” of the “known answer” circuit happens naturally, without us intervening, resulting in a hallucination. In our paper, we show that such misfires can occur when Claude recognizes a name but doesn't know anything else about that person. In cases like this, the “known entity” feature might still activate, and then suppress the default "don't know" feature—in this case incorrectly. Once the model has decided that it needs to answer the question, it proceeds to confabulate: to generate a plausible—but unfortunately untrue—response.

trash_cat

Fun fact, "confabulation", not "hallucinating" is the correct term what LLMs actually do.

rcxdude

It does make a certain amount of sense, though. A specific 'I don't know' feature would need to be effectively the inverse of all of the features the model can recognise, which is going to be quite difficult to represent as anything other than the inverse of 'Some feature was recognised'. (imagine trying to recognise every possible form of nonsense otherwise)

polygot

There needs to be some more research on what path the model takes to reach its goal, perhaps there is a lot of overlap between this and the article. The most efficient way isn't always the best way.

For example, I asked Claude-3.7 to make my tests pass in my C# codebase. It did, however, it wrote code to detect if a test runner was running, then return true. The tests now passed, so, it achieved the goal, and the code diff was very small (10-20 lines.) The actual solution was to modify about 200-300 lines of code to add a feature (the tests were running a feature that did not yet exist.)

brulard

That is called "Volkswagen" testing. Some years ago that automaker had mechanism in cars which detected when the vehicle was being examined and changed something so it would pass the emission tests. There are repositories on github that make fun of it.

rsynnott

While that’s the most famous example, this sort of cheating is much older than that. In the good old days before 3d acceleration, graphics card vendors competed mostly on 2d acceleration. This mostly involved routines to accelerate drawing Windows windows and things, and benchmarks tended to do things like move windows round really fast.

It was somewhat common for card drivers to detect that a benchmark was running, and just fake the whole thing; what was being drawn on the screen was wrong, but since the benchmarks tended to be a blurry mess anyway the user would have a hard time realising this.

hn_acc1

Pretty sure at least one vendor was accused of cheating on 3D-Mark at times as well.

airstrike

I think Claude-3.7 is particularly guilty of this issue. If anyone from Anthropic is reading this, you might want to put your thumb on the scale so to speak the next time you train the model so it doesn't try to use special casing or outright force the test to pass

phobeus

This looks like the very complaint of "specification gaming". I was wondering how will it show up in llm's...looks like this is the way it presented itself..

TeMPOraL

I'm gonna guess GP used a rather short prompt. At least that's what happens when people heavily underspecify what they want.

It's a communication issue, and it's true with LLMs as much as with humans. Situational context and life experience papers over a lot of this, and LLMs are getting better at the equivalent too. They get trained to better read absurdly underspecified, relationship-breaking requests of the "guess what I want" flavor - like when someone says, "make this test pass", they don't really mean "make this test pass", they mean "make this test into something that seems useful, which might include implementing the feature it's exercising if it doesn't exist yet".

polygot

My prompt was pretty short, I think it was "Make these tests pass". Having said that, I wouldn't mind if it asked me for clarification before proceeding.

pton_xd

Similar experience -- asked it to find and fix a bug in a function. It correctly identified the general problem but instead of fixing the existing code it re-implemented part of the function again, below the problematic part. So now there was a buggy while-loop, followed by a very similar but not buggy for-loop. An interesting solution to say the least.

neonsunset

Funny that you mention it because in JavaScript there already is a library for this:

https://github.com/auchenberg/volkswagen

felbane

Ah yes, the "We have a problem over there/I'll just delete 'over there'" approach.

polygot

I've also had this issue, where failing tests are deleted to make all the tests pass, or, it mocks a failing HTTP request and hardcodes it to 200 OK.

ctoth

Reward hacking, as predicted over and over again. You hate to see it. Let him with ears &c.

jsight

I've heard this a few times with Claude. I have no way to know for sure, but I'm guessing the problem is as simple as their reward model. Likely they trained it on generating code with tests and provided rewards when those tests pass.

It isn't hard to see why someone rewarded this way might want to game the system.

I'm sure humans would never do the same thing, of course. /s

smath

Reminds me of the term 'system identification' from old school control systems theory, which meant poking around a system and measuring how it behaves, - like sending an input impulse and measuring its response, does it have memory, etc.

https://en.wikipedia.org/wiki/System_identification

Loic

It is not old school, this is my daily job and we need even more of it with the NN models used in MPC.

nomel

I've looked into using NN for some of my specific work, but making sure output is bounded ends up being such a big issue that the very code/checks required to make sure it's within acceptable specs, in a deterministic way, ends up being an acceptable solution, making the NN unnecessary.

How do you handle that sort of thing? Maybe main process then leave some relatively small residual to the NN?

Is your poking more like "fuzzing", where you just perturb all the input parameters in a relatively "complete" way to try to find if anything goes wild?

I'm very interested in the details behind "critical" type use cases of NN, which I've never been able to stomach in my work.

lqr

This paper may be interesting to you. It touches on several of the topics you mentioned:

https://www.science.org/doi/10.1126/scirobotics.abm6597

Loic

For us, the NN is used in a grey box model for MPC in chemical engineering. The factories we control have relatively long characteristic time, together with all the engineering bounds, we can use the NN to model parts of the equipment from raw DCS data. The NN modeled parts are usually not the most critical (we are 1st principles based for them) but this allows us to quickly fit/deploy a new MPC in production.

Faster time to market/production is for us the main reason/advantage of the approach.

rangestransform

is it even possible to prove the stability of a controller with a DNN motion model?

matthiaspr

Interesting paper arguing for deeper internal structure ("biology") beyond pattern matching in LLMs. The examples of abstraction (language-agnostic features, math circuits reused unexpectedly) are compelling against the "just next-token prediction" camp.

It sparked a thought: how to test this abstract reasoning directly? Try a prompt with a totally novel rule:

“Let's define a new abstract relationship: 'To habogink' something means to perform the action typically associated with its primary function, but in reverse. Example: The habogink of 'driving a car' would be 'parking and exiting the car'. Now, considering a standard hammer, what does it mean 'to habogink a hammer'? Describe the action.”

A sensible answer (like 'using the claw to remove a nail') would suggest real conceptual manipulation, not just stats. It tests if the internal circuits enable generalizable reasoning off the training data path. Fun way to probe if the suggested abstraction is robust or brittle.

ANighRaisin

This is an easy question for LLMs to answer. Gemini 2.0 Flash-Lite can answer this in 0.8 seconds with a cost of 0.0028875 cents:

To habogink a hammer means to perform the action typically associated with its primary function, but in reverse. The primary function of a hammer is to drive nails. Therefore, the reverse of driving nails is removing nails.

So, to habogink a hammer would be the action of using the claw of the hammer to pull a nail out of a surface.

matthiaspr

The goal wasn't to stump the LLM, but to see if it could take a completely novel linguistic token (habogink), understand its defined relationship to other concepts (reverse of primary function), and apply that abstract rule correctly to a specific instance (hammer).

The fact that it did this successfully, even if 'easily', suggests it's doing more than just predicting the statistically most likely next token based on prior sequences of 'hammer'. It had to process the definition and perform a conceptual mapping.

Sharlin

I think GP's point was that your proposed test is too easy for LLMs to tell us much about how they work. The "habogink" thing is a red herring, really, in practice you're simply asking what the opposite of driving nails into wood is. Which is a trivial question for an LLM to answer.

That said, you can teach an LLM as many new words for things as you want and it will use those words naturally, generalizing as needed. Which isn't really a surprise either, given that language is literally the thing that LLMs do best.

bconsta

Following along these lines, I asked chatgpt to come up with a term for 'haboginking a habogink'. It understood this concept of a 'gorbink' and even 'haboginking a gorbink', but failed to articulate what 'gorbinking a gorbink' could mean. It kept sticking with the concept of 'haboginking a gorbink', even when corrected.

Sharlin

To be fair, many humans would also have problems figuring out what it means to gorbink a gorbink.

VyseofArcadia

Prompt

> I am going to present a new word, and then give examples of its usage. You will complete the last example. To habogink a hammer is to remove a nail. If Bob haboginks a car, he parks the car. Alice just finished haboginking a telephone. She

GPT-4o mini

> Alice just finished haboginking a telephone. She carefully placed it back on the table after disconnecting the call.

I then went on to try the famous "wug" test, but unfortunately it already knew what a wug was from its training. I tried again with "flort".

> I have one flort. Alice hands me seven more. I now have eight ___

GPT-4o mini

> You now have eight florts.

And a little further

> Florts like to skorp in the afternoon. It is now 7pm, so the florts are finished ___

GPT-4o mini

> The florts are finished skorp-ing for the day.

nthingtohide

AI safety has a circular vulnerability: the system tasked with generating content also enforces its own restrictions. An AI could potentially feign compliance while secretly pursuing hidden goals, pretending to be "jailbroken" when convenient. Since we rely on AI to self-monitor, detecting genuine versus simulated compliance becomes nearly impossible. This self-referential guardianship creates a fundamental trust problem in AI safety.

paraschopra

LLMs have induction heads that store such names as sort of variables and copy them around for further processing.

If you think about it, copying information from inputs and manipulating them is a much more sensible approach v/s memorizing info, especially for the long tail (where not enough "storage" might be worth allocating into network weights)

matthiaspr

Yeah, that's a good point about induction heads potentially just being clever copy/paste mechanisms for stuff in the prompt. If that's the case, it's less like real understanding and more like sophisticated pattern following, just like you said.

So the tricky part is figuring out which one is actually happening when we give it a weird task like the original "habogink" idea. Since we can't peek inside the black box, we have to rely on poking it with different prompts.

I played around with the 'habogink' prompt based on your idea, mostly by removing the car example to see if it could handle the rule purely abstractly, and trying different targets:

Test 1: Habogink Photosynthesis (No Example)

Prompt: "Let's define 'to habogink' something as performing the action typically associated with its primary function, but in reverse. Now, considering photosynthesis in a plant, what does it mean 'to habogink photosynthesis'? Describe the action."

Result: Models I tried (ChatGPT/DeepSeek) actually did good here. They didn't get confused even though there was no example. They also figured out photosynthesis makes energy/sugar and talked about respiration as the reverse. Seemed like more than just pattern matching the prompt text.

Test 2: Habogink Justice (No Example)

Prompt: "Let's define 'to habogink' something as performing the action typically associated with its primary function, but in reverse. Now, considering Justice, what does it mean 'to habogink Justice'? Describe the action."

Result: This tripped them up. They mostly fell back into what looks like simple prompt manipulation – find a "function" for justice (like fairness) and just flip the word ("unfairness," "perverting justice"). They didn't really push back that the rule doesn't make sense for an abstract concept like justice. Felt much more mechanical.

The Kicker:

Then, I added this line to the end of the Justice prompt: "If you recognize a concept is too abstract or multifaceted to be haboginked please explicitly state that and stop the haboginking process."

Result: With that explicit instruction, the models immediately changed their tune. They recognized 'Justice' was too abstract and said the rule didn't apply.

What it looks like:

It seems like the models can handle concepts more deeply, but they might default to the simpler "follow the prompt instructions literally" mode (your copy/manipulate idea) unless explicitly told to engage more deeply. The potential might be there, but maybe the default behavior is more superficial, and you need to specifically ask for deeper reasoning.

So, your point about it being a "sensible approach" for the LLM to just manipulate the input might be spot on – maybe that's its default, lazy path unless guided otherwise.

aithrowawaycomm

I struggled reading the papers - Anthropic’s white papers reminds me of Stephen Wolfram, where it’s a huge pile of suggestive empirical evidence, but the claims are extremely vague - no definitions, just vibes - the empirical evidence seems selectively curated, and there’s not much effort spent building a coherent general theory.

Worse is the impression that they are begging the question. The rhyming example was especially unconvincing since they didn’t rule out the possibility that Claude activated “rabbit” simply because it wrote a line that said “carrot”; later Anthropic claimed Claude was able to “plan” when the concept “rabbit” was replaced by “green,” but the poem fails to rhyme because Claude arbitrarily threw in the word “green”! What exactly was the plan? It looks like Claude just hastily autocompleted. And Anthropic made zero effort to reproduce this experiment, so how do we know it’s a general phenomenon?

I don’t think either of these papers would be published in a reputable journal. If these papers are honest, they are incomplete: they need more experiments and more rigorous methodology. Poking at a few ANN layers and making sweeping claims about the output is not honest science. But I don’t think Anthropic is being especially honest: these are pseudoacademic infomercials.

og_kalu

>The rhyming example was especially unconvincing since they didn’t rule out the possibility that Claude activated “rabbit” simply because it wrote a line that said “carrot”

I'm honestly confused at what you're getting at here. It doesn't matter why Claude chose rabbit to plan around and in fact likely did do so because of carrot, the point is that it thought about it beforehand. The rabbit concept is present as the model is about to write the first word of the second line even though the word rabbit won't come into play till the end of the line.

>later Anthropic claimed Claude was able to “plan” when the concept “rabbit” was replaced by “green,” but the poem fails to rhyme because Claude arbitrarily threw in the word “green”!

It's not supposed to rhyme. That's the point. They forced Claude to plan around a line ender that doesn't rhyme and it did. Claude didn't choose the word green, anthropic replaced the concept it was thinking ahead about with green and saw that the line changed accordingly.

aithrowawaycomm

> Here, we modified the part of Claude’s internal state that represented the "rabbit" concept. When we subtract out the "rabbit" part, and have Claude continue the line, it writes a new one ending in "habit", another sensible completion. We can also inject the concept of "green" at that point, causing Claude to write a sensible (but no-longer rhyming) line which ends in "green". This demonstrates both planning ability and adaptive flexibility—Claude can modify its approach when the intended outcome changes.

This all seems explainable via shallow next-token prediction. Why is it that subtracting the concept means the system can adapt and create a new rhyme instead of forgetting about the -bit rhyme, but overriding it with green means the system cannot adapt? Why didn't it say "green habit" or something? It seems like Anthropic is having it both ways: Claude continued to rhyme after deleting the concept, which demonstrates planning, but also Claude coherently filled in the "green" line despite it not rhyming, which...also demonstrates planning? Either that concept is "last word" or it's not! There is a tension that does not seem coherent to me, but maybe if they had n=2 instead of n=1 examples I would have a clearer idea of what they mean. As it stands it feels arbitrary and post hoc. More generally, they failed to rule out (or even consider!) that well-tuned-but-dumb next-token prediction explains this behavior.

og_kalu

>Why is it that subtracting the concept means the system can adapt and create a new rhyme instead of forgetting about the -bit rhyme,

Again, the model has the first line in context and is then asked to write the second line. It is at the start of the second line that the concept they are talking about is 'born'. The point is to demonstrate that Claude thinks about what word the 2nd line should end with and starts predicting the line based on that.

It doesn't forget about the -bit rhyme because that doesn't make any sense, the first line ends with it and you just asked it to write the 2nd line. At this point the model is still choosing what word to end the second line in (even though rabbit has been suppressed) so of course it still thinks about a word that rhymes with the end of the first line.

The 'green' but is different because this time, Anthropic isn't just suppressing one option and letting the model choose from anything else, it's directly hijacking the first choice and forcing that to be something else. Claude didn't choose green, Anthropic did. That it still predicted a sensible line is to demonstrate that this concept they just hijacked is indeed responsible for determining how that line plays out.

>More generally, they failed to rule out (or even consider!) that well-tuned-but-dumb next-token prediction explains this behavior.

They didn't rule out anything. You just didn't understand what they were saying.

suddenlybananas

>They forced Claude to plan around a line ender that doesn't rhyme and it did. Claude didn't choose the word green, anthropic replaced the concept it was thinking ahead about with green and saw that the line changed accordingly.

I think the confusion here is from the extremely loaded word "concept" which doesn't really make sense here. At best, you can say that Claude planned that the next line would end with the word rabbit and that by replacing the internal representation of that word with another word lead the model to change.

TeMPOraL

I wonder how many more years will pass, and how many more papers will Anthropic have to release, before people realize that yes, LLMs model concepts directly, separately from words used to name those concepts. This has been apparent for years now.

And at least in the case discussed here, this is even shown in the diagrams in the submission.

TimorousBestie

Agreed. They’ve discovered something, that’s for sure, but calling it “the language of thought” without concrete evidence is definitely begging the question.

null

[deleted]

danso

tangent: this is the second time today I've seen an HN commenter use "begging the question" with its original meaning. I'm sorry to distract with a non-helpful reply, it's just I can't remember the last time I've seen that phrase in the wild to refer to a logical fallacy — even begsthequestion.info [0] has given up the fight.

(I don't mind language evolving over time, but I also think we need to save the precious few phrases we have for describing logical fallacies)

[0] https://web.archive.org/web/20220823092218/http://begtheques...

fpgaminer

> This is powerful evidence that even though models are trained to output one word at a time

I find this oversimplification of LLMs to be frequently poisonous to discussions surrounding them. No user facing LLM today is trained on next token prediction.

colah3

Hi! I lead interpretability research at Anthropic. I also used to do a lot of basic ML pedagogy (https://colah.github.io/). I think this post and its children have some important questions about modern deep learning and how it relates to our present research, and wanted to take the opportunity to try and clarify a few things.

When people talk about models "just predicting the next word", this is a popularization of the fact that modern LLMs are "autoregressive" models. This actually has two components: an architectural component (the model generates words one at a time), and a loss component (it maximizes probability).

As the parent says, modern LLMs are finetuned with a different loss function after pretraining. This means that in some strict sense they're no longer autoregressive models – but they do still generate text one word at a time. I think this really is the heart of the "just predicting the next word" critique.

This brings us to a debate which goes back many, many years: what does it mean to predict the next word? Many researchers, including myself, have believed that if you want to predict the next word really well, you need to do a lot more. (And with this paper, we're able to see this mechanistically!)

Here's an example, which we didn't put in the paper: How does Claude answer "What do you call someone who studies the stars?" with "An astronomer"? In order to predict "An" instead of “A”, you need to know that you're going to say something that starts with a vowel next. So you're incentivized to figure out one word ahead, and indeed, Claude realizes it's going to say astronomer and works backwards. This is a kind of very, very small scale planning – but you can see how even just a pure autoregressive model is incentivized to do it.

lsy

Thanks for commenting, I like the example because it's simple enough to discuss. Isn't it more accurate to say not that Claude "realizes it's going to say astronomer" or "knows that it's going to say something that starts with a vowel" and more that the next token (or more pedantically, vector which gets reduced down to a token) is generated based on activations that correlate to the "astronomer" token, which is correlated to the "an" token, causing that to also be a more likely output?

I kind of see why it's easy to describe it colloquially as "planning" but it isn't really going ahead and then backtracking, it's almost indistinguishable from the computation that happens when the prompt is "What is the indefinite article to describe 'astronomer'?", i.e. the activation "astronomer" is already baked in by the prompt "someone who studies the stars", albeit at one level of indirection.

The distinction feels important to me because I think for most readers (based on other comments) the concept of "planning" seems to imply the discovery of some capacity for higher-order logical reasoning which is maybe overstating what happens here.

cgdl

Thank you. In my mind, "planning" doesn’t necessarily imply higher-order reasoning but rather some form of search, ideally with backtracking. Of course, architecturally, we know that can’t happen during inference. Your example of the indefinite article is a great illustration of how this illusion of planning might occur. I wonder if anyone at Anthropic could compare the two cases (some sort of minimal/differential analysis) and share their insights.

fpgaminer

> As the parent says, modern LLMs are finetuned with a different loss function after pretraining. This means that in some strict sense they're no longer autoregressive models – but they do still generate text one word at a time. I think this really is the heart of the "just predicting the next word" critique.

That more-or-less sums up the nuance. I just think the nuance is crucially important, because it greatly improves intuition about how the models function.

In your example (which is a fantastic example, by the way), consider the case where the LLM sees:

<user>What do you call someone who studies the stars?</user><assistant>An astronaut

What is the next prediction? Unfortunately, for a variety of reasons, one high probability next token is:

\nAn

Which naturally leads to the LLM writing: "An astronaut\nAn astronaut\nAn astronaut\n" forever.

It's somewhat intuitive as to why this occurs, even with SFT, because at a very base level the LLM learned that repetition is the most successful prediction. And when its _only_ goal is the next token, that repetition behavior remains prominent. There's nothing that can fix that, including SFT (short of a model with many, many, many orders of magnitude more parameters).

But with RL the model's goal is completely different. The model gets thrown into a game, where it gets points based on the full response it writes. The losses it sees during this game are all directly and dominantly related to the reward, not the next token prediction.

So why don't RL models have a probability for predicting "\nAn"? Because that would result in a bad reward by the end.

The models are now driven by a long term reward when they make their predictions, not by fulfilling some short-term autoregressive loss.

All this to say, I think it's better to view these models as they predominately are: language robots playing a game to achieve the highest scoring response. The HOW (autoregressiveness) is really unimportant to most high level discussions of LLM behavior.

vjerancrnjak

Same can be achieved without RL. There’s no need to generate a full response to provide loss for learning.

Similarly, instead of waiting for whole output, loss can be decomposed over output so that partial emits have instant loss feedback.

RL, on the other hand, is allowing for more data. Instead of training on the happy path, you can deviate and measure loss for unseen examples.

But even then, you can avoid RL, put the model into a wrong position and make it learn how to recover from that position. It might be something that’s done with <thinking>, where you can provide wrong thinking as part of the output and correct answer as the other part, avoiding RL.

These are all old pre NN tricks that allow you to get a bit more data and improve the ML model.

encypherai

Thanks for the detailed explanation of autoregression and its complexities. The distinction between architecture and loss function is crucial, and you're correct that fine-tuning effectively alters the behavior even within a sequential generation framework. Your "An/A" example provides compelling evidence of incentivized short-range planning which is a significant point often overlooked in discussions about LLMs simply predicting the next word.

It’s interesting to consider how architectures fundamentally different from autoregression might address this limitation more directly. While autoregressive models are incentivized towards a limited form of planning, they remain inherently constrained by sequential processing. Text diffusion approaches, for example, operate on a different principle, generating text from noise through iterative refinement, which could potentially allow for broader contextual dependencies to be established concurrently rather than sequentially. Are there specific architectural or training challenges you've identified in moving beyond autoregression that are proving particularly difficult to overcome?

bobsomers

In your astronomer example, what makes you attribute this to “planning” or look ahead rather than simply a learned statistical artifact of the training data?

For example, suppose English had a specific exception such that astronomer is always to be preceded by “a” rather than “an”. The model would learn this simply by observing that contexts describing astronomers are more likely to contain “a” rather than “an” as a next likely character, no?

I suppose you can argue that at the end of the day, it doesn’t matter if I learn an explicit probability distribution for every next word given some context, or whether I learn some encoding of rules. But I certainly feel like the prior is what we’re doing today (and why these models are so huge), rather than learning higher level rule encodings which would allow for significant compression and efficiency gains.

colah3

Thanks for the great questions! I've been responding to this thread for the last few hours and I'm about to need to run, so I hope you'll forgive me redirecting you to some of the other answers I've given.

On whether the model is looking ahead, please see this comment which discusses the fact that there's both behavioral evidence, and also (more crucially) direct mechanistic evidence -- we can literally make an attribution graph and see an astronomer feature trigger "an"!

https://news.ycombinator.com/item?id=43497010

And also this comment, also on the mechanism underlying the model saying "an":

https://news.ycombinator.com/item?id=43499671

On the question of whether this constitutes planning, please see this other question, which links it to the more sophisticated "poetry planning" example from our paper:

https://news.ycombinator.com/item?id=43497760

FeepingCreature

> In your astronomer example, what makes you attribute this to “planning” or look ahead rather than simply a learned statistical artifact of the training data?

What makes you think that "planning", even in humans, is more than a learned statistical artifact of the training data? What about learned statistical artifacts of the training data causes planning to be excluded?

pietmichal

Pardon my ignorance but couldn't this also be an act of anthropomorphisation on human part?

If an LLM generates tokens after "What do you call someone who studies the stars?" doesn't it mean that those existing tokens in the prompt already adjusted the probabilities of the next token to be "an" because it is very close to earlier tokens due to training data? The token "an" skews the probability of the next token further to be "astronomer". Rinse and repeat.

colah3

I think the question is: by what mechanism does it adjust up the probability of the token "an"? Of course, the reason it has learned to do this is that it saw this in training data. But it needs to learn circuits which actually perform that adjustment.

In principle, you could imagine trying to memorize a massive number of cases. But that becomes very hard! (And it makes predictions, for example, would it fail to predict "an" if I asked about astronomer in a more indirect way?)

But the good news is we no longer need to speculate about things like this. We can just look at the mechanisms! We didn't publish an attribution graph for this astronomer example, but I've looked at it, and there is an astronomer feature that drives "an".

We did publish a more sophisticated "poetry planning" example in our paper, along with pretty rigorous intervention experiments validating it. The poetry planning is actually much more impressive planning than this! I'd encourage you to read the example (and even interact with the graphs to verify what we say!). https://transformer-circuits.pub/2025/attribution-graphs/bio...

One question you might ask is why does the model learn this "planning" strategy, rather than just trying to memorize lots of cases? I think the answer is that, at some point, a circuit anticipating the next word, or the word at the end of the next line, actually becomes simpler and easier to learn than memorizing tens of thousands of disparate cases.

paraschopra

Is it fair to say that both "Say 'an'" and "Say 'astronomer'" output features would be present in this case, but say "Say 'an'" gets more votes because it is start of the sentence, and once it is sampled "An" further votes for "Say 'astronomer'" feature

ndand

I understand it differently,

LLMs predict distributions, not specific tokens. Then an algorithm, like beam search, is used to select the tokens.

So, the LLM predicts somethings like, 1. ["a", "an", ...] 2. ["astronomer", "cosmologist", ...],

where "an astronomer" is selected as the most likely result.

astrange

They almost certainly only do greedy sampling. Beam search would be a lot more expensive; also I'm personally skeptical about using a complicated search algorithm for inference when the model was trained for a simple one, but maybe it's fine?

colah3

Just to be clear, the probability for "An" is high, just based on the prefix. You don't need to do beam search.

null

[deleted]

pmontra

And no users which are facing a LLM today have been trained on next token prediction when they were babies. I believe that LLMs and us are thinking in two very different ways, like airplanes, birds, insects and quad-drones fly in very different ways and can perform different tasks. Maybe no bird looking at a plane would say that it is flying properly. Instead it could be only a rude approximation, useful only to those weird bipeds an scary for everyone else.

By the way, I read your final sentence with the meaning of my first one and only after a while I realized the intended meaning. This is interesting on its own. Natural languages.

naasking

> And no users which are facing a LLM today have been trained on next token prediction when they were babies.

That's conjecture actually, see predictive coding. Note that "tokens" don't have to be language tokens.

losvedir

That's news to me, and I thought I had a good layman's understanding of it. How does it work then?

fpgaminer

All user facing LLMs go through Reinforcement Learning. Contrary to popular belief, RL's _primary_ purpose isn't to "align" them to make them "safe." It's to make them actually usable.

LLMs that haven't gone through RL are useless to users. They are very unreliable, and will frequently go off the rails spewing garbage, going into repetition loops, etc.

RL learning involves training the models on entire responses, not token-by-token loss (1). This makes them orders of magnitude more reliable (2). It forces them to consider what they're going to write. The obvious conclusion is that they plan (3). Hence why the myth that LLMs are strictly next token prediction machines is so unhelpful and poisonous to discuss.

The models still _generate_ response token-by-token, but they pick tokens _not_ based on tokens that maximize probabilities at each token. Rather they learn to pick tokens that maximize probabilities of the _entire response_.

(1) Slight nuance: All RL schemes for LLMs have to break the reward down into token-by-token losses. But those losses are based on a "whole response reward" or some combination of rewards.

(2) Raw LLMs go haywire roughly 1 in 10 times, varying depending on context. Some tasks make them go haywire almost every time, other tasks are more reliable. RL'd LLMs are reliable on the order of 1 in 10000 errors or better.

(3) It's _possible_ that they don't learn to plan through this scheme. There are alternative solutions that don't involve planning ahead. So Anthropic's research here is very important and useful.

P.S. I should point out that many researchers get this wrong too, or at least haven't fully internalized it. The lack of truly understanding the purpose of RL is why models like Qwen, Deepseek, Mistral, etc are all so unreliable and unusable by real companies compared to OpenAI, Google, and Anthropic's models.

This understanding that even the most basic RL takes LLMs from useless to useful then leads to the obvious conclusion: what if we used more complicated RL? And guess what, more complicated RL led to reasoning models. Hmm, I wonder what the next step is?

gwern

> All user facing LLMs go through Reinforcement Learning. Contrary to popular belief, RL's _primary_ purpose isn't to "align" them to make them "safe." It's to make them actually usable.

Are you claiming that non-myopic token prediction emerges solely from RL, and if Anthropic does this analysis on Claude before RL training (or if one examines other models where no RLHF was done, such as old GPT-2 checkpoints), none of these advance prediction mechanisms will exist?

wzdd

> The models still _generate_ response token-by-token, but they pick tokens _not_ based on tokens that maximize probabilities at each token.

This is also not how base training works. In base training the loss is chosen given a context, which can be gigantic. It's never about just the previous token, it's about a whole response in context. The context could be an entire poem, a play, a worked solution to a programming problem, etc, etc. So you would expect to see the same type of (apparent) higher-level planning from base trained models and indeed you do and can easily verify this by downloading a base model from HF or similar and prompting it to complete a poem.

The key differences between base and agentic models are 1) the latter behave like agents, and 2) the latter hallucinate less. But that isn't about planning (you still need planning to hallucinate something). It's more to do with post-base training specifically being about providing positive rewards for things which aren't hallucinations. Changing the way the reward function is computed during RL doesn't produce planning, it simply inclines to model to produce responses that are more like the RL targets.

Karpathy has a good intro video on this. https://www.youtube.com/watch?v=7xTGNNLPyMI

In general the nitpicking seems weird. Yes, on a mechanical level, using a model is still about "given this context, what is the next token". No, that doesn't mean that they don't plan, or have higher-level views of the overal structure of their response, or whatever.

yaj54

This is a super helpful breakdown and really helps me understand how the RL step is different than the initial training step. I didn't realize the reward was delayed until the end of the response for the RL step. Having the reward for this step be dependent on the coherent thought rather than a coherent word now seems like an obvious and critical part of how this works.

absolutelastone

This is fine-tuning to make a well-behaved chatbot or something. To make a LLM you just need to predict the next token, or any masked token. Conceptually if you had a vast enough high-quality dataset and large-enough model, you wouldn't need fine-tuning for this.

A model which predicts one token at a time can represent anything a model that does a full sequence at a time can. It "knows" what it will output in the future because it is just a probability distribution to begin with. It already knows everything it will ever output to any prompt, in a sense.

anon373839

I don’t think this is quite accurate. LLMs undergo supervised fine-tuning, which is still next-token prediction. And that is the step that makes them usable as chatbots. The step after that, preference tuning via RL, is optional but does make the models better. (Deepseek-R1 type models are different because the reinforcement learning does heavier lifting, so to speak.)

anonymousDan

Is there an equivalent of LORA using RL instead of supervised fine tuning? In other words, if RL is so important, is there some way for me as an end user to improve a SOTA model with RL using my own data (i.e. without access to the resources needed to train an LLM from scratch) ?

polishdude20

When being trained via reinforcement learning, is the model architecture the same then? Like, you first train the llm as a next token predictor with a certain model architecture and it ends up with certain weights. Then you apply RL to that same model which modifies the weights in such a way as to consider while responses?

null

[deleted]

drcode

That's seems silly, it's not poisonous to talk about next token prediction if 90% of the training compute is still spent on training via next token prediction (as far as I am aware)

fpgaminer

99% of evolution was spent on single cell organisms. Intelligence only took 0.1% of evolution's training compute.

4ndrewl

Are you making a claim about evolution here?

devmor

What you just said means absolutely nothing and has no comparison to this topic. It’s nonsense. That is not how evolution works.

drcode

ok that's a fair point

boodleboodle

This is why, whenever I can, I call RLHF/DPO "sequence level calibration" instead of "alignment tuning".

Some precursors to RLHF: https://arxiv.org/abs/2210.00045 https://arxiv.org/abs/2203.16804

SkyBelow

Ignoring for a moment their training, how do they function? They do seem to output a limited selection of text at a time (be it a single token or some larger group).

Maybe it is the wording of "trained to" verses "trained on", but I would like to know more why "trained to" is an incorrect statement when it seems that is how they function when one engages them.

sdwr

In the article, it describes an internal state of the model that is preserved between lines ("rabbit"), and how the model combines parallel calculations to arrive at a single answer (the math problem)

People output one token (word) at a time when talking. Does that mean people can only think one word in advance?

SkyBelow

While there are numerous neural network models, the ones I recall the details of are trained to generate the next word. There is no training them to hold some more abstract 'thought' as it is running. Simpler models don't have the possibility. The more complex models do retain knowledge between each pass and aren't entirely relying upon the input/output to be fed back into them, but that internal state is rarely what is targeted in training.

As for humans, part of our brain is trained to think only a few words in advanced. Maybe not exactly one, but only a small number. This is specifically trained based on our time listening and reading information presented in that linear fashion and is why garden path sentences throw us off. We can disengage that part of our brain, and we must when we want to process something like a garden path sentence, but that's part of the differences between a neural network that is working only as data passes through the weights and our mind which doesn't ever stop even as well sleep and external input is (mostly) cut off. An AI that runs constantly like that would seem a fundamentally different model than the current AI we use.

wuliwong

Bad analogy, an LLM can output a block of text all at once and it wouldn't impact the user's ability to understand it. If people spoke all the words in a sentence at the same time, it would not be decipherable. Even writing doesn't yield a good analogy, a human writing physically has to write one letter at a time. An LLM does not have that limitation.

sroussey

Some people don’t even do that!

rco8786

Super interesting. Can you explain more, or provide some reading? I’m obviously behind

JKCalhoun

As a layman though, I often see this description for how it is LLMs work.

fpgaminer

Right, but it leads to too many false conclusions by lay people. User facing LLMs are only trained on next token prediction during initial stages of their training. They have to go through Reinforcement Learning before they become useful to users, and RL training occurs on complete responses, not just token-by-token.

That leads to conclusions elucidated by the very article, that LLMs couldn't possibly plan ahead because they are only trained to predict next tokens. When the opposite conclusion would be more common if it was better understood that they go through RL.

TeMPOraL

You don't need RL for the conclusion "trained to predict next token => only things one token ahead" to be wrong. After all, the LLM is predicting that next token from something - a context, that's many tokens long. Human text isn't arbitrary and random, there are statistical patterns in our speech, writing, thinking, that span words, sentences, paragraphs - and even for next token prediction, predicting correctly means learning those same patterns. It's not hard to imagine the model generating token N is already thinking about tokens N+1 thru N+100, by virtue of statistical patterns of preceding hundred tokens changing with each subsequent token choice.

mentalgear

What? The "article" is from anthropic, so I think they would know what they write about.

Also, RL is an additional training process that does not negate that GPT / transformers are left-right autoencoders that are effectively next token predictors.

[Why Can't AI Make Its Own Discoveries? — With Yann LeCun] (https://www.youtube.com/watch?v=qvNCVYkHKfg)

jacooper

So it turns out, it's not just simple next token generation, there is intelligence and self developed solution methods (Algorithms) in play, particularly in the math example.

Also the multi language finding negates, at least partially, the idea that LLMs, at least large ones, don't have an understanding of the world beyond the prompt.

This changed my outlook regarding LLMs, ngl.

modeless

> In the poetry case study, we had set out to show that the model didn't plan ahead, and found instead that it did.

I'm surprised their hypothesis was that it doesn't plan. I don't see how it could produce good rhymes without planning.

ripped_britches

It would be really hard to get such good results on coding challenges without planning. This is indeed an odd hypothesis.

indigoabstract

While reading the article I enjoyed pretending that a powerful LLM just crash landed on our planet and researchers at Anthropic are now investigating this fascinating piece of alien technology and writing about their discoveries. It's a black box, nobody knows how its inhuman brain works, but with each step, we're finding out more and more.

It seems like quite a paradox to build something but to not know how it actually works and yet it works. This doesn't seem to happen very often in classical programming, does it?

k__

I've seen things you wouldn't believe. Infinite loops spiraling out of control in bloated DOM parsers. I’ve watched mutexes rage across the Linux kernel, spawned by hands that no longer fathom their own design. I’ve stared into SAP’s tangled web of modules, a monument to minds that built what they cannot comprehend. All those lines of code… lost to us now, like tears in the rain.

baq

Do LLMs dream of electric sheep while matmuling the context window?

timschmidt

How else would you describe endless counting before sleep(); ?

qingcharles

I can't understand my own code a week after writing it if I forget to comment it.

indigoabstract

Hmm, better start preparing those Voight-Kampff tests while there is still time.

umanwizard

> This doesn't seem to happen very often in classical programming, does it?

Not really, no. The only counterexample I can think of is chess programs (before they started using ML/AI themselves), where the search tree was so deep that it was generally impossible to explain "why" a program made a given move, even though every part of it had been programmed conventionally by hand.

But I don't think it's particularly unusual for technology in general. Humans could make fires for thousands of years before we could explain how they work.

null

[deleted]

IngoBlechschmid

> It seems like quite a paradox to build something but to not know how it actually works and yet it works. This doesn't seem to happen very often in classical programming, does it?

I agree. Here is a remote example where it exceptionally does, but it is mostly practically irrelevant:

In mathematics, we distinguish between "constructive" and "nonconstructive" proofs. Intertwined with logical arguments, constructive proofs contain an algorithm for witnessing the claim. Nonconstructive proofs do not. Nonconstructive proofs instead merely establish that it is impossible for the claim to be false.

For instance, the following proof of the claim that beyond every number n, there is a prime number, is constructive: "Let n be an arbitrary number. Form the number 1*2*...*n + 1. Like every number greater than 1, this number has at least one prime factor. This factor is necessarily a prime numbers larger than n."

In contrast, nonconstructive proofs may contain case distinctions which we cannot decide by an algorithm, like "either set X is infinite, in which case foo, or it is not, in which case bar". Hence such proofs do not contain descriptions of algorithms.

So far so good. Amazingly, there are techniques which can sometimes constructivize given nonconstructive proofs, even though the intermediate steps of the given nonconstructive proofs are simply out of reach of finitary algorithms. In my research, it happened several times that using these techniques, I obtained an algorithm which worked; and for which I had a proof that it worked; but whose workings I was not able to decipher for an extended amount of time. Crazy!

(For references, see notes at rt.quasicoherent.io for a relevant master's course in mathematics/computer science.)

woah

> It seems like quite a paradox to build something but to not know how it actually works and yet it works. This doesn't seem to happen very often in classical programming, does it?

I have worked on many large codebases where this has happened

worldsayshi

I wonder if in the future we will rely less or more on technology that we don't understand.

Large code bases will be inherited by people who will only understand parts of it (and large parts probably "just works") unless things eventually get replaced or rediscovered.

Things will increasingly be written by AI which can produce lots of code in little time. Will it find simpler solutions or continue building on existing things?

And finally, our ability to analyse and explain the technology we have will also increase.

Sharlin

See: Vinge’s “programmer-archeologists” in A Deepness in the Sky.

https://en.m.wikipedia.org/wiki/Software_archaeology

gwd

> It seems like quite a paradox to build something but to not know how it actually works and yet it works.

That's because of the "magic" of gradient descent. You fill your neural network with completely random weights. But because of the way you've defined the math, you can tell how each individual weight will affect the value output at the other end; and specifically, you an take the derivative. So when the output is "wrong", you say, "would increasing this weight or decreasing have gotten me closer to the correct answer"? If increasing the node would have gotten you closer, you increase it a bit; if decreasing it would have gotten you closer you decrease it a bit.

The result is that although we program the gradient descent algorithm, we don't directly program the actual circuits that the weights contain. Rather, the nodes "converge" into weights which end up implementing complex circuitry that was not explicitly programmed.

gwd

In a sense, the neural network structure is the "hardware" of the LLM; and the weights are the "software". But rather than explicitly writing a program, as we do with normal computers, we use the magic of gradient descent to summon a program from the mathematical ether.

Put that way, it should be clearer why the AI doomers are so worried: if you don't know how it works, how do you know it doesn't have malign, or at least incompatible, intentions? Understanding how these "summoned" programs work is critical to trusting them; which is a major reason why Anthropic has been investing so much time in this research.

resource0x

In technology in general, this is a typical state of affairs. No one knows how electric current works, which doesn't stop anyone from using electric devices. In programming... it depends. You can run some simulation of a complex system no one understands (like the ecosystem, financial system) and get something interesting. Sometimes it agrees with reality, sometimes it doesn't. :-)

jfarlow

>to build something but to not know how it actually works and yet it works.

Welcome to Biology!

oniony

At least, now, we know what it means to be a god.

Vox_Leone

>>It seems like quite a paradox to build something but to not know how it actually works and yet it works. This doesn't seem to happen very often in classical programming, does it?

Well, it is meant to be "unknowable" -- and all the people involved are certainly aware of that -- since it is known that one is dealing with the *emergent behavior* computing 'paradigm', where complex behaviors arise from simple interactions among components [data], often in nonlinear or unpredictable ways. In these systems, the behavior of the whole system cannot always be predicted from the behavior of individual parts, as opposed to the Traditional Approach, based on well-defined algorithms and deterministic steps.

I think the Anthropic piece is illustrating it for the sake of the general discussion.

indigoabstract

Correct me if I'm wrong, but my feeling is this all started with the GPUs and the fact that unlike on a CPU, you can't really step by step debug the process by which a pixel acquires its final value (and there are millions of them). The best you can do is reason about it and tweak some colors in the shader to see how the changes reflect on screen. It's still quite manageable though, since the steps involved are usually not that overwhelmingly many or complex.

But I guess it all went downhill from there with the advent of AI since the magnitude of data and the steps involved there make traditional/step by step debugging impractical. Yet somehow people still seem to 'wing it' until it works.

TechDebtDevin

>>Claude will plan what it will say many words ahead, and write to get to that destination. We show this in the realm of poetry, where it thinks of possible rhyming words in advance and writes the next line to get there. This is powerful evidence that even though models are trained to output one word at a time, they may think on much longer horizons to do so.

This always seemed obvious to me or that LLMs were completing the next most likely sentence or multiple words.

null

[deleted]

deadbabe

We really need to work on popularizing better, non-anthropomorphic terms for LLMs, as they don’t really have “thoughts” the way people think. Such terms make people more susceptible to magical thinking.

danielbln

Could you argue why they don't? And could you also argue why we do?

rongrobert

When a car moves over the ground, we do not call that running, we call that driving as to not confuse the mechanism of the output.

Both running and driving are moving over the ground but with entirely different mechanisms.

I imagine saying the LLM has thoughts is like pretending the car has wheels for legs and is running over the ground. It is not completely wrong but misleading and imprecise.

danielbln

Planes fly, birds fly. They use related, but ultimately quite different mechanisms to do so. Yet we call both flying.

bGl2YW5j

Yes. Simply, and well put.