
Naur's "Programming as Theory Building" and LLMs replacing human programmers

n4r9

Although I'm sympathetic to the author's argument, I don't think they've found the best way to frame it. I have two main objections, i.e. points that I guess LLM advocates might dispute.

Firstly:

> LLMs are capable of appearing to have a theory about a program ... but it’s, charitably, illusion.

To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Secondly:

> Theories are developed by doing the work and LLMs do not do the work

Isn't this a little... anthropocentric? That's the way humans develop theories. In principle, could a theory not be developed by transmitting information into someone's brain patterns as if they had done the work?

ryandv

> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

This idea has already been explored by thought experiments such as John Searle's so-called "Chinese room" [0]; an LLM cannot have a theory about a program, any more than the computer in Searle's "Chinese room" understands "Chinese" by using lookup tables to generate canned responses to an input prompt.

One says the computer lacks "intentionality" regarding the topics that the LLM ostensibly appears to be discussing. Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

[0] https://en.wikipedia.org/wiki/Chinese_room

smithkl42

The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia. But it proves nothing about reason, which LLMs have clearly shown needs to be distinguished from consciousness. And theories fall into the category of reason, not of consciousness. Or another way of putting it that you might find more acceptable: maybe a computer will never, internally, know that it has developed a theory - but it sure seems like it will be able to act and talk as if it had, much like a philosophical zombie.

ryandv

> The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia.

To be as accurate as possible with respect to the primary source [0], the Chinese room thought experiment was devised as a refutation of "strong AI," or the position that

    the appropriately programmed computer really is a mind, in the
    sense that computers given the right programs can be literally
    said to understand and have other cognitive states.
Searle's position?

    Rather, whatever purely formal principles you put into the
    computer, they will not be sufficient for understanding, since
    a human will be able to follow the formal principles without
    understanding anything. [...] I will argue that in the literal
    sense the programmed computer understands what the car and the
    adding machine understand, namely, exactly nothing.
[0] https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...

slippybit

> maybe a computer will never, internally, know that it has developed a theory

Happens to people all the time :) ... especially if they don't have a concept of theories and hypotheses.

People are dumb and uneducated only until they aren't anymore, which is, even in the worst cases, no more than a decade of sustained effort. In fact, we don't even know how quickly neurogenesis and/or cognitive abilities might increase when a previously dense person reaches, or "breaks through", a certain plateau. I'm sure there is research, but this is not something for which a satisfyingly precise answer can be formulated.

If I formulate a new hypothesis, the LLM can tell me, "nope, you are the only idiot believing this path is worth pursuing". And if I go ahead, the LLM can tell me: "that's not how this usually works, you know", "professionals do it this way", "this is not a proof", "this is not a logical link", "this is nonsense but I commend your creativity!", all the way until the actual aha-moment when everything fits together and we have an actual working theory ... in theory.

We can then analyze the "knowledge graph" in 4D and the LLM could learn a theory of what it's like to have a potential theory even though there is absolutely nothing that supports the hypothesis or its constituent links at the moment of "conception".

Stay put, it will happen.

lo_zamoyski

> The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia. But it proves nothing about reason

I think you misunderstand the Chinese Room argument [0]. It is exactly about how a mechanical process can produce results without having to reason.

[0] https://plato.stanford.edu/entries/chinese-room/

musicale

I imagine Searle feels vindicated since LLMs are good at translating Chinese.

On the other hand I am reminded of Nilsson's rebuttal:

> For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought.

dingnuts

> it proves nothing about reason, which LLMs have clearly shown needs to be distinguished from consciousness.

Uh, they have? Are you saying they know how to reason? Because if so, why is it that when I give a state of the art model documentation lacking examples for a new library and ask it to write something, it cannot even begin to do that, even if the documentation is in the training data? A model that can reason should be able to understand the documentation and create novel examples. It cannot.

This happened to me just the other day. If the model can reason, examples of the language, which it has, and the expository documentation should have been sufficient.

Instead, the model repeatedly inserted bullshitted code in the style of the language I wanted, but with library calls and names based on a version of the library for another language.

This is evidence of reasoning ability? Claude Sonnet 3.7 and Gemini Pro both exhibited this behavior last week.

I think this technology is fundamentally the same as it has been since GPT2

TeMPOraL

Wait, isn't the conclusion to take from the "Chinese room" literally the opposite of what you suggest? I.e. it's the most basic, go-to example of a larger system showing capability (here, understanding Chinese) that is not present in any of its constituent parts individually.

> Their words aren't "about" anything, they don't represent concepts or ideas or physical phenomena the same way the words and thoughts of a human do. The computer doesn't actually "understand Chinese" the way a human can.

That's very much unclear at this point. We don't fully understand how we relate words to concepts and meaning ourselves, but to the extent we do, LLMs are by far the closest implementation of those same ideas in a computer.

sgt101

>We don't fully understand how we relate words to concepts and meaning ourselves,

This is definitely true.

>but to the extent we do, LLMs are by far the closest implementation of those same ideas in a computer

Well - this is half true but meaningless. I mean, we don't understand, so LLMs are as good a bet as anything.

LLMs will confidently tell you that white wine is good with fish, but they have no experience of the taste of wine, or fish, or what it means for one to complement the other. Humans all know what it's like to have fluid in their mouths, they know the taste of food and the feel of the ground under their feet. LLMs have no experience, they exist crystalised and unchanging in an abstract eternal now, so they literally can't understand anything.

ryandv

> the conclusion to take from the "Chinese room"

We can hem and haw about whether or not there are others, but the particular conclusion I am drawing from is that computers lack "intentionality" regarding language, and indeed about anything at all. Symbol shunting, pencil pushing, and the mechanics of syntax are insufficient for the production of meaning and understanding.

That is, to oversimplify, the broad distinction drawn in Naur's article regarding the "programming as text manipulation" view vis-a-vis "programming as theory building."

> That's very much unclear at this point.

It's certainly a central point of contention.

vacuity

The Chinese room experiment was originally intended by Searle to (IIUC) do as you claim and justify computers as being capable of understanding like humans do. Since then, it has been used both in this pro-computer, "black box" sense and in the anti-computer, "white box" sense. Personally, I think both are relevant, and the issue with LLMs currently is not a theoretical failing but rather that they aren't convincing when viewed as black boxes (e.g. the Turing test fails).

dragonwriter

The Chinese Room is a mirror that reflects people's hidden (well, often not very hidden, but still) biases about whether the universe is mechanical or whether understanding involves dualistic metaphysical woo, back at them as conclusions.

That's not why it was presented, of course, Searle aimed at proving something, but his use of it just illustrates which side of that divide he was on.

jimbokun

The flaw of the Chinese Room argument is the need to explain why it does not apply to humans as well.

Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?

If no individual neuron or small group of neurons understand Chinese, how can you say any brain made of neurons understands Chinese?

ryandv

> The flaw of the Chinese Room argument is the need to explain why it does not apply to humans as well.

But it does - the thought experiment continues by supposing that I gave a human those lookup tables and instructions on how to use them, instead of having the computer run the procedure. The human doesn't understand the foreign language either, not in the same way a native speaker does.

The point is that no formal procedure or algorithm is sufficient for such a system to have understanding. Even if you memorized all the lookup tables and instructions and executed this procedure entirely in your head, you would still lack understanding.

> Does a single neuron "understand" Chinese? 10 neurons? 100? 1 million?

This sounds like a sorites paradox [0]. I don't know how to resolve this, other than to observe that our notions of "understanding" and "thought" and "intelligence" are ill-defined and more heuristic approximations than terms with a precise meaning; hence the tendency of the field of computer science to use thought experiments like Turing's imitation game or Searle's Chinese room as proxies for assessing intelligence, in lieu of being able to treat these terms and ideas more rigorously.

[0] https://plato.stanford.edu/entries/sorites-paradox/

looofooo0

But the LLM interacts with the program and the world through a debugger, run-time feedback, a linter, a fuzzer, etc., and we can collect all the user feedback and usage patterns ... Moreover, it can also get visual feedback, reason through other programs like physics simulations, use a robot to physically interact with the device running the code, and use a proof verifier like Lean to ensure its logical model of the program is sound. It can do some back and forth between the logical model and the actual program through experiments. Maybe not now, but I don't see why the LLM needs to be kept in the Chinese Room.
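To make that concrete, here is a minimal sketch of the kind of feedback loop being described, assuming a hypothetical `llm(prompt)` completion call; the pytest/ruff commands and the single-file layout are illustrative assumptions, not any particular agent's design.

    # Sketch: the model proposes code, external tools produce feedback, and the
    # errors are fed back into the next prompt. `llm` is a stand-in for any
    # completion API; the commands and file layout are assumptions.
    import subprocess
    from pathlib import Path
    from typing import Callable

    def run(cmd: str) -> tuple[int, str]:
        """Run a shell command, returning (exit code, combined output)."""
        p = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return p.returncode, p.stdout + p.stderr

    def iterate(llm: Callable[[str], str], task: str, target: Path, rounds: int = 5) -> bool:
        """Let the model propose code, run external checks, feed errors back."""
        history = [f"Task: {task}"]
        for _ in range(rounds):
            proposal = llm("\n".join(history) + "\nWrite the full contents of the file.")
            target.write_text(proposal)                  # apply the proposal
            status, output = run("pytest -q && ruff check .")
            if status == 0:
                return True                              # tests and linter both pass
            history.append(f"Tool feedback:\n{output}")  # close the loop with feedback
        return False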

jimbokun

That's true in general but not true of any current LLM, to my knowledge. Different subsets of those inputs and modalities, yes. But no current LLM has access to all of them.

im3w1l

You can state the argument formally as: A has property B; property B' implies property C; hence A has property C. The fallacy is the sleight of hand where two almost but not quite identical properties B and B' are used, in this case two different definitions of theory, only one of which requires some ineffable mind consciousness.

It's important not to get caught up in a discussion about whether B or B' is the proper definition, but instead see that it's the inconsistency that is the issue.

LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code. Whether that internal representation satisfies your criteria for a theory doesn't change that fact. What does matter to the highest degree, however, is where they succeed and where they fail, and how the representations and computing can improve the success rate and capabilities.

ryandv

No, I don't agree with this formalization. It's more that (some) humans have a "theory" of the program (in the same sense used by Ryle and Naur); let's take for granted that if one has a theory, then they have understanding; thus (some) humans have an understanding of the program. It's not equivocating between B and B', but rather observing that B implies B'.

Thus, if an LLM lacks understanding (Searle), then they don't have a theory either.

> LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code. Whether that internal representation satisfies your criteria for a theory doesn't change that fact.

The entire point of Naur's paper is that the activity of programming, of software engineering, is not just "manipulating source code." It is, rather, building a theory of the software system (which implies an understanding of it), in a way that an LLM or an AI cannot, as posited by Searle.

namaria

> LLMs build an internal representation that lets them efficiently and mostly successfully manipulate source code.

No, see, this is the problem right here. Everything in this discussion hinges on LLMs behavior. While they are capable of rendering text that looks like it was produced by reasoning from the input, they also often are incapable of that.

LLMs can be used by people who reason about the input and output. If and only if someone can show that LLMs can, without human intervention, go from natural language description to fully looping through the process and building and maintaining the code, that argument could be made.

The "LLM-as-AI" hinges entirely on their propensity to degenerate into nonsensical output being worked out. As long as that remains, LLMs will stay firmly in the camp of being usable to transform some inputs into outputs under supervision and that is no evidence of ability to reason. So the whole conversation devolves into people pointing out that they still descent into nonsense if left to their own devices, and the "LLM-as-AI" people saying "but when they don't..." as if it can be taken for granted that it is at all possible to get there.

Until that happens, using LLMs to generate code will remain a gimmick for using natural language to search for common patterns in popular programming languages.

CamperBob2

You're seriously still going to invoke the Chinese Room argument after what we've seen lately? Wow.

The computer understands Chinese better than Searle (or anyone else) understood the nature and functionality of language.

ryandv

You're seriously going to invoke this braindead reddit-tier of "argumentation," or rather lack thereof, by claiming bewilderment and offering zero substantive points?

Wow.

Jensson

> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

Human theory building works; we have demonstrated this. Our science, which lets us build things on top of other things, proves it.

LLM theory building so far doesn't; they always veer in a wrong direction after a few steps. You would need to prove that LLMs can build theories, just as we have proved that humans can.

jerf

You can't prove LLMs can build theories like humans can, because we can effectively prove they can't. Most code bases do not fit in a context window. And any "theory" an LLM might build about a code base, analogously to the recent reasoning models, itself has to carve a chunk out of the context window, at what would have to be a fairly non-trivial percentage expansion of tokens versus the underlying code base, and there's already not enough tokens. There's no way that is big enough to build a theory of a code base.

"Building a theory" is something I expect the next generation of AIs to do, something that has some sort of memory that isn't just a bigger and bigger context window. As I often observe, LLMs != AI. The fact that an LLM by its nature can't build a model of a program doesn't mean that some future AI can't.

imtringued

This is correct. The model context is a form of short term memory. It turns out LLMs have an incredible short term memory, but simultaneously that is all they have.

What I personally find perplexing is that we are still stuck at having a single context window. Everyone knows that Turing machines with two tapes require significantly fewer operations than a single-tape Turing machine that needs to simulate multiple tapes.

The reasoning stuff should be thrown into a separate context window that is not subject to training loss (only the final answer).

dkarl

The article is about what LLMs can do, and I read it as what they can do in theory, as they're developed further. It's an argument based on principle, not on their current limitations.

You can read it as a claim about what LLMs can do now, but that wouldn't be very interesting, because it's obvious that no current LLM can replace a human programmer.

I think the author contradicts themselves. They argue that LLMs cannot build theories because they fundamentally do not work like humans do, and they conclude that LLMs can't replace human programmers because human programmers need to build theories. But if LLMs fundamentally do not work like humans, how do we know that they need to build theories the same way that humans do?

jimbokun

> because it's obvious that no current LLM can replace a human programmer.

A lot of managers need to be informed of this.

falcor84

> they always veer in a wrong direction after a few steps

Arguably that's the case for humans too in the general case, as per the aphorism "Beware of a guy in a room" [0]. But as for AIs, the thing is that they're exponentially improving at this, such that according to METR, "The length of tasks that AI can do is doubling every 7 months"[1].

[0] https://medium.com/machine-words/a-guy-in-a-room-bbbe058645e...

[1] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

Jensson

Even dumb humans learn to play and beat video games on their own, so humans don't fail on this. Some humans fail to update their world model based on what other people tell them or when they don't care, but basically every human can learn from their own direct experiences if they focus on it.

jimbokun

He doesn't prove the claim. But he does make a strong argument for why it's very unlikely that an LLM would have a theory of a program similar to what a human author of a program would have:

> Theories are developed by doing the work and LLMs do not do the work. They ingest the output of work.

And this is certainly a true statement about how LLMs are constructed. Maybe this latently induces in the LLM something very similar to what humans do when writing programs.

But another possibility is that it's similar to the Brain Teasers that were popular for a long time in programming interviews. The idea was that if the interviewee could use logic to solve riddles, they were probably also likely to be good at writing programs.

In reality, it was mostly a test of whether the interviewee had reviewed all the popular riddles commonly asked in these interviews. If they had, they could also produce a realistic chain of logic to simulate the process of solving the riddle from first principles. But if that same interviewee was given a riddle not similar to one they had previously reviewed, they probably wouldn't do nearly as well in solving it.

It's very likely that LLMs are like those interviewees who crammed a lot of examples, again due to how LLMs are trained. They can reproduce programs similar to ones in their training set. They can even produce explanations for their "reasoning" based on examples they've seen of explanations of why a program was written in one way instead of another. But that is a very different kind of model than the one a person builds up writing a program from scratch over a long period of time.

Having said all this, I'm not sure what experiments you would run to determine if the LLM is using one approach vs another.

psychoslave

> To make this point stick, you would also have to show why it's not an illusion when humans "appear" to have a theory.

That burden of proof is on you, since you are presumably human and you are challenging the need of humans to have more than a mere appearance of having a theory when they claim to have one.

Note that even when the only theoretical assumption we go with is that we will have a good laugh watching other people go crazy after random bullshit is thrown at them, we still have a theory.

dcre

I agree. Of course you can learn and use a theory without having developed it yourself!

IanCal

Setting aside that they say it's fallacious at the start, none of the arguments in the article hold if you simply have models that:

1. Run code
2. Communicate with POs
3. Iteratively write code

n4r9

I thought the fallacy bit was tongue-in-cheek. They're not actually arguing from authority in the article.

The system you describe appears to treat programmers as mere cogs. Programmers do not simply write and iterate code as dictated by POs. That's a terrible system for all but the simplest of products. We could implement that system, but we would then lose the ability to make broad architectural improvements, effectively adapt the software to new circumstances, or fix bugs that the model cannot.

IanCal

> The system you describe appears to treat programmers as mere cogs

Not at all; it simply addresses the key issues raised: that they cannot have a theory of the program because they are reading it and not actually writing it. So have them write code, fix problems, and iterate. Have them communicate with others to get more understanding of the "why".

> . Programmers do not simply write and iterate code as dictated by POs.

Communicating with POs is not the same as writing code directed by POs.

ebiester

First, I think it's fair to say that today, an LLM cannot replace a programmer fully.

However, I have two counters:

- First, the rational argument right now is that one person plus money spent on LLMs can replace three - or more - programmers in total. This is the argument on a three-year horizon. The current technology will improve and developers will learn how to use it to its potential.

- Second, the optimistic argument is that a combination of the LLM model with larger context windows and other supporting technology around it will allow it to emulate a theory of mind that is similar to the average programmer. Consider Go or Chess - we didn't think computers had the theory of mind to be better than a human, but it found other ways. For humans, Naur's advice stands. We cannot assume that this is true if there are tools with different strengths and weaknesses than humans.

ActionHank

I think that everyone is misjudging what will improve.

There is no doubt it will improve, but if you look at a car, it is still the same fundamental "shape" as a Model T.

There are niceties and conveniences, efficiency went way up, but we don't have flying cars.

I think we are going to have something, somewhere in the middle, AI features will eventually find their niche, people will continue to leverage whatever tools and products are available to build the best thing they can.

I believe that a future of self-writing code pooping out products, AI doing all the other white collar jobs, and robots doing the rest cannot work. Fundamentally there is no "business" without customers and no customers if no one is earning.

ebiester

You cannot build a tractor unit (the engine-cab half of the tractor-trailer) with Model T technology, even if they are close.

And the changes will be in the auxiliary features. We will figure out ways to have LLMs understand APIs better without training them. We will figure out ways to better focus its context. We will chain LLM requests and contexts in a way that help solve problems better. We will figure out ways to pass context from session to session that an LLM can effectively have a learning memory. And we will figure out our own best practices to emphasize their strengths and minimize their weaknesses. (We will build better roads.)

And as much as you want to say that, a Model T was uncomfortable, had a range of about 150 miles between fill-ups, and maxed out at 40-45 mph. It also broke frequently and required significant maintenance. It might take 13-14 days to get a Model T from New York to Los Angeles today, notwithstanding maintenance issues, while a modern car could make it reliably in 4-5 days if you are driving legally and not pushing more than 10 hours a day.

I too think that self-writing code is not going to happen, but I do think there is a lot of efficiency to be made.

rowanseymour

If you forced me to put a number on how much more productive having copilot makes me I think I would say < 5%, so I'm struggling to see how anyone can just assert that "the rational argument right now" is that I can be 200% more productive.

Maybe as a senior dev working on a large, complex, established project I don't benefit from LLMs as much as others, because as I and the project mature, productivity becomes less and less correlated with lines of code and more about the ability to comprehend the bigger picture and how different components interact... things that even LLMs with bigger context aren't good at.

ebiester

I don't think about it in lines of code, but let me say that there are some efficiencies being left on the table.

It helps because I am quicker to run to a script to automate a process instead of handling it manually, because I can bang it out in 15 minutes rather than an hour.

I am more likely to try a quick prototype of a refactor because I can throw it at the idea and just see what it looks like in ten minutes. If it has good testing and I tell it not to change, it can do a reasonable job getting 80% done and I can think through it.

It generates mock data quicker than I can, and can write good enough tests through chat. I can throw it to legacy code and it does a good job writing characterization tests and sometimes catches things I don't.

Sometimes, when I'm tired, I can throw easy tasks at it that require minimal thought and can get through "it would be nice if" issues.

It's not great at writing documentation, but it's pretty good at taking a slack chat and writing up a howto that I won't have the time or motivation to do.

All of those are small, but they definitely add up.

That's today and being compared to 5% improvement. I think the real improvements come as we learn more.

edanm

> If you forced me to put a number on how much more productive having copilot makes me I think I would say < 5%, so I'm struggling to see how anyone can just assert that "the rational argument right now" is that I can be 200% more productive.

If you're thinking about Copilot, you're simply not talking about the same thing that most people who claim a 200% speedup are talking about. They're talking about either using chat-oriented workflows, where you're asking Claude or similar to wholesale generate code, often using an IDE like Cursor. Or even possibly talking about Coding Agents like Claude Code, which can be even more productive.

You might still be right! They might still be wrong! But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it.

(Personally, I'm not 200% productive with Coding Agents, for various reasons, but given the number of people I admire who are, I believe this is something that will change, and soon.)

geraneum

> But your talking about Copilot makes it seem like you're nowhere near the cutting edge use of AI, so you don't have a well-formed opinion about it

You can use Claude, Gemini, etc through Copilot and you can use the agent mode. Maybe you do or maybe you don’t have a well formed opinion of the parent’s workflow.

spacemadness

This is what I tried explaining to our management who are using lines of code metrics on engineers working on an established codebase. Other than lines of code being a terrible metric in general, they don’t seem to understand or care to understand the difference.

falcor84

> First, you cannot obtain the "theory" of a large program without actually working with that program...

> Second, you cannot effectively work on a large program without a working "theory" of that program...

I find the whole argument and particularly the above to be a senseless rejection of bootstrapping. Obviously there was a point in time (for any program, individual programmer and humanity as a whole) that we didn't have a "theory" and didn't do the work, but now we have both, so a program and its theory can appear "de novo".

So with that in mind, how can we reject the possibility that as an AI Agent (e.g. Aider) works on a program over time, it bootstraps a theory?

Jensson

> So with that in mind, how can we reject the possibility that as an AI Agent (e.g. Aider) works on a program over time, it bootstraps a theory?

Lack of effective memory. That might have worked if you constantly retrained the LLM, incorporating the new wisdom iteratively like a human does, but the current LLM architecture doesn't enable that. The context provided is neither large enough, nor can the LLM use it effectively enough, for complex problems.

And this isn't easy to solve, you very quickly collapse the LLM if you try to do this in the naive ways. We need some special insight that lets us update LLM continuously as it works in a positive direction the way humans can.

falcor84

Yeah, that's a good point. I absolutely agree that it needs access to effective long-term memory, but it's unclear to me that we need some "special insight". Research is relatively early on this, but we already see significant sparks of theory-building using basic memory retention, when Claude and Gemini are asked to play Pokemon [0][1]. It's clearly not at the level of a human player yet, but it (particularly Gemini) is doing significantly better than I expected at this stage.

[0] https://www.twitch.tv/claudeplayspokemon

[1] https://www.twitch.tv/gemini_plays_pokemon

Jensson

They update the Gemini Plays Pokemon harness with new prompt engineering, etc., when it gets stuck. So there the learning is done by a human and not the LLM; the LLM can do a lot with trial and error, but if you follow it, it does the same action over and over and gets stuck until the prompt engineering kicks it into self-evaluating 20 steps later.

So that isn't just "ask it to play Pokemon"; that is a large program with tons of different prompts and memories that kick in at different times, and even with all that, and updates to the program when it gets stuck, it still struggles massively and repeats mistakes over and over in ways a human never would.

raincom

Yes, indeed. They think that every circular argument is vicious. Not at all: there are two kinds of circularity, virtuous and vicious. Bootstrapping falls under the former. Check [1] and [2].

[1] https://www.hipkapi.com/2011/03/10/foundationalism-and-virtu...

[2] Brown, Harold I. “Circular Justifications.” PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1994 (1994): 406–14. http://www.jstor.org/stable/193045.

mrkeen

> So with that in mind, how can we reject the possibility that as an AI Agent (e.g. Aider) works on a program over time, it bootstraps a theory?

That's the appropriate level of faith for today's LLMs. They're not good enough to replace programmers. They're good enough that we can't reject the possibility of them one day being good enough to replace programmers.

2mlWQbCK

And good enough does not mean "as good as". Companies happily outsource programming jobs to worse, but much cheaper, programmers, all the time.

codr7

I for one wouldn't mind seeing more focus on probability than possibility here.

Possibility means practically nothing.

mlsu

The information needs to propagate through the network either forward (when the model has the codebase in context) or backward (when it updates its weights).

You can have the models pseudo “learn” by putting things in something like a system prompt but this is limited by context, and they will never permanently learn. But we don’t train at inference time with today’s LLMs.

We can explicitly reject this possibility by looking at the information that goes into the model at train and test time.

andai

If I understand correctly, the critique here is that LLMs cannot generate new knowledge, and/or that they cannot remember it.

The former is false, and the latter is kind of true -- the network does not update itself yet, unfortunately, but we work around it with careful manipulation of the context.

Part of the discussion here is that when an LLM is working with a system that it designed, it understands it better than one it didn't. Because the system matches its own "expectations", its own "habits" (overall design, naming conventions, etc.)

I often notice complicated systems created by humans (e.g. 20 page long prompts), adding more and more to the prompt, to compensate for the fact that the model is fundamentally struggling to work in the way asked of it, instead of letting the model design a workflow that comes naturally to it.

drbig

> If I understand correctly, the critique here is that LLMs cannot generate new knowledge, and/or that they cannot remember it.

> The former is false, and the latter is kind of true -- the network does not update itself yet, unfortunately, but we work around it with careful manipulation of the context.

Any and all examples of where an LLM generated "new knowledge" will be greatly appreciated. And the quotes are because I'm willing to start with the lowest bar of what "new" and "knowledge" mean when combined.

andai

They are fundamentally mathematical models which extrapolate from data points, and occasionally they will extrapolate in a way that is consistent with reality, i.e. they will approximate uncharted territory with reasonable accuracy.

Of course, being able to tell the difference (both for the human and the machine) is the real trick!

Reasoning seems to be a case where the model uncovers what, to some degree, it already "knows".

Conversely, some experimental models (e.g. Meta's work with Concepts) shift that compute to train time, i.e. spend more compute per training token. Either way, they're mining "more meaning" out of the data by "working harder".

This is one area where I see that synthetic data could have a big advantage. Training the next gen of LLMs on the results of the previous generation's thinking would mean that you "cache" that thinking -- it doesn't need to start from scratch every time, so it could solve problems more efficiently, and (given the same resources) it would be able to go further.

Of course, the problem here is that most reasoning is dogshit, and you'd need to first build a system smart enough to pick out the good stuff...

---

It occurs to me now that you rather hoped for a concrete example. The ones that come to mind involve drawing parallels between seemingly unrelated things. On some level, things are the same shape.

I argue that noticing such a connection, such a pattern, and naming it, constitutes new and useful knowledge. This is something I spend a lot of time doing (mostly for my own amusement!), and I've found that LLMs are surprisingly good at it. They can use known patterns to coherently describe previously unnamed ones.

In other words, they map concepts onto other concepts in ways that hasn't been done before. What I'm referring to here is, I will prompt the LLM with some such query, and it will "get it", in ways I wasn't expecting. The real trick would be to get it to do that on its own, i.e. without me prompting it (or, with current tech, find a way to get it to prompt itself that produces similar results... and then feed that into some kind of Novelty+Coherence filtering system, i.e. the "real trick" again... :).

A specific example eludes me now, but it's usually a matter of "X is actually a special case of Y", or "how does X map onto Y". It's pretty good at mapping the territory. It's not "creating new territory" by doing that, it's just pointing out things that "have always been there, but nobody has looked at before", if that makes sense.

woah

These long winded philosophical arguments about what LLMs can't do which are invariably proven wrong within months are about as misguided as the gloom and doom pieces about how corporations will be staffed by "teams" of "AI agents". Maybe it's best just to let them cancel each other out. Both types of article seem to be written by people with little experience actually using AI.

BenoitEssiambre

Solomonoff induction says that the shortest program that can simulate something is its best explanatory theory. OpenAI researchers very much seem to be trying to do theory building ( https://x.com/bessiambre/status/1910424632248934495 ).
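A toy way to see the "shortest program as best theory" idea in miniature (a minimum-description-length sketch; the numbers are arbitrary and only the contrast matters):

    # A short generating program can replace a much longer literal description
    # of the same data.
    data = list(range(1, 10_001))            # 1, 2, 3, ..., 10000

    literal = repr(data)                     # spell the data out verbatim
    program = "list(range(1, 10_001))"       # a compact "theory" that generates it

    assert eval(program) == data             # the theory reproduces the data exactly
    print(len(literal), "bytes vs", len(program), "bytes")   # 58894 vs 22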

philipswood

> Theories are developed by doing the work and LLMs do not do the work. They ingest the output of work.

It isn't certain that this framing is true. As part of learning to predict the outcome of the work token by token, LLMs very well might be "doing the work" as an intermediate step via some kind of reverse engineering.

skydhash

> As part of learning to predict the outcome of the work token by token

They already have the full work available. When you're reading the source code of a program to learn how it works, your objective is not to learn which keywords are close to each other or to extract the common patterns. You're extracting a model, which is an abstraction of some real-world concept (or of other abstractions), along with rules for manipulating that abstraction.

After internalizing that abstraction, you can replicate it with whatever you want, extend it further, ... It's an internal model that you can shape as you please in your mind, then create a concrete realization once you're happy with the shape.

philipswood

As Naur describes it, the full code and documentation, and the resulting model you can build up from them, are merely "walking the path" (as the blog post puts it); they do not encode "building the path".

I.e. the theory of the program as it exists in the minds of the development team might not be fully available for reconstruction from just the final code and docs, since it includes a lot of activity that does not end up in the code.

MarkusQ

> the theory of the program as it exists in the minds of the development team might not be fully available for reconstruction from just the final code and docs

As an obvious and specific source of examples, all the features they decided to omit, "optimizations" they considered but rejected for various reasons, etc. are not present in the code and seldom in the comments or documentation.

Occasionally you will see things like "Full search rather than early exit on match to prevent timing attacks" or "We don't write it in format xyz because of patent issues" or some such, but the vast majority of such cases pass unremarked.

skydhash

It could be, if you were trying to only understand how the code does something. But more often, you're actively trying to understand how it was built by comparing assumptions with the code in front of you. It is not merely walking the path, if you've created a similar path and are comparing techniques.

BiraIgnacio

Great post, and Naur's paper is really great. What I can't stop thinking about are the many other cases where something should-not-be because its being is less than ideal, and yet it insists on being. In other words, LLMs should not be able to largely replace programmers, and yet they might.

lo_zamoyski

In some respects, perhaps in principle they could. But what is the point of handing off the entire process to a machine, even if you could?

If programming is a tool for thinking and modeling, with execution by a machine as a secondary benefit, then outsourcing these things to LLMs contributes nothing to our understanding. By analogy, we do math because we wish to understand the mathematical universe, so to speak, not because we just want some practical result.

To understand, to know, are some of the highest powers of the human person. Machines are useful for helping us enable certain work or alleviate tedium to focus on the important stuff, but handing off understanding and knowledge to a machine (if it were possible, which it isn't) would be one of the most inhuman things you could do.

BiraIgnacio

As a software engineer, I really hope that will be the case :) Thanks for the reply!

codr7

Might, potentially; it's all wishful thinking.

I might one day wake up and find my dog to be more intelligent than me, not very likely but I can't prove it to be impossible.

It's still useless.

andriesm

Many like to say that LLMs cannot do ANY reasoning or "theory building".

However, is it really true that LLMs cannot reason AT ALL or cannot do theory construction AT ALL?

Maybe they are just pretty bad at it. Say 2 out of 10. But almost certainly not 0 out of 10.

They used to be at 0, and now they're at 2.

Systematically breaking down problems and systematically reasoning through the parts, as we can see with chain-of-thought, hints that further improvements may come.

What most people now agree on, however, is that LLMs can learn and apply existing theories.

So if you teach an LLM enough theories it can still be VERY useful and solve many coding problems, because an LLM can memorise more theories than any human can. Big chunks of computer software still keep reinventing wheels.

The other objection from the article, that without theory building an AI cannot make additions or changes to a large code base very effectively, suggests an idea to try: before prompting the AI for a change on a large code base, prepend the prompt with a big description of the entire program, the main ideas and how they map to certain files, classes, modules etc., and see if this doesn't improve your results?

And in case you are concerned about documenting and typing out entire system theories for every new prompt, keep in mind that this is something you can write once and keep reusing (and adding to incrementally over time).

Of course context limits may still be a constraint.
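Here is a minimal sketch of that idea, assuming a hand-maintained "theory of the system" document; the file name and the prompt wording are made up for illustration rather than any particular tool's format.

    # Keep a written "theory of the system" under version control and prepend
    # it to every prompt. File name and wording are illustrative assumptions.
    from pathlib import Path

    THEORY_FILE = Path("docs/system-theory.md")   # architecture, main ideas, file map

    def build_prompt(task: str) -> str:
        theory = THEORY_FILE.read_text()
        return (
            "You are modifying an existing codebase.\n"
            "Here is the maintainers' description of its design:\n\n"
            f"{theory}\n\n"
            f"Task: {task}\n"
            "Stay consistent with the design described above."
        )

    # Reuse the same document for every prompt and append new design decisions
    # to it as they are made, so the written "theory" grows with the program.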

Of course I am not saying "definitely AI will make all human programmers jobless".

I'm merely saying, these things are already a massive productivity boost, if used correctly.

I've been programming for 30 years, started using cursor last year, and you would need to fight me to take it away from me.

I'm happy to press ESC to cancel all the bad code suggestions, and to still have all the good tab-completes, prompts, better-than-Stack-Overflow question answering, etc.

analyte123

The "theory" of a program is supposed to be majority embedded in its identifiers, tests, and type definitions. The same line of reasoning in this article could be used to argue that you should just name all your variables random 1 or 2 letter combinations since the theory is supposed to be all in your head anyway.

Indeed, it's quickly obvious where an LLM is lacking context because the type of a variable is not well-specified (or specified at all), the schema of a JSON blob is not specified, or there is some other secret constraint that maybe someone had in their head X years ago.
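A small, made-up illustration of that point: the second version carries much of its "theory" in names and types, while the first leaves it in someone's head (the invoice/grace-period domain is invented for the example).

    # Same logic twice; only one version states what it is about.
    from dataclasses import dataclass
    from datetime import date

    def f(a, b, c):                       # what are a, b, c? what is being decided?
        return (a - b).days > c

    @dataclass
    class Invoice:
        issued_on: date
        grace_period_days: int

    def is_overdue(invoice: Invoice, today: date) -> bool:
        """An invoice is overdue once the grace period after issue has elapsed."""
        return (today - invoice.issued_on).days > invoice.grace_period_days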

xpe

> Theories are developed by doing the work and LLMs do not do the work. They ingest the output of work.

This is often the case but does not _have_ to be so. LLMs can use chain of thought to “talk out loud” and “do the work”. It can use supplementary documents and iterate on its work. The quality of course varies, but it is getting better. When I read Gemini 2.5’s “thinking” notes, it indeed can build up text that is not directly present in its training data.

Putting aside anthropocentric definitions of “reasoning” and “consciousness” is key to how I think about the issues here. I’m intentionally steering completely clear of consciousness.

Modern SOTA LLMs are indeed getting better at what people call “reasoning”. We don’t need to quibble over defining some quality bar; that is probably context-dependent and maybe even arbitrary.

It is clear LLMs are doing better at “reasoning” — I’m using quotes to emphasize that (to me) it doesn’t matter if their inner mechanisms for doing reasoning don’t look like human mechanisms. Instead, run experiments and look at the results.

We’re not talking about the hard problem of consciousness, we’re talking about something that can indeed be measured: roughly speaking, the ability to derive new truths from existing ones.

(Because this topic is charged and easily misunderstood, let me clarify some questions that I’m not commenting on here: How far can the transformer-based model take us? Are data and power hungry AI models cost-effective? What viable business plans exist? How much short-term risk, to say, employment and cybersecurity? How much long-term risk to human values, security, thriving, and self-determination?)

Even if you disagree with parts of my characterization above, hear this: We should at least be honest to ourselves when we move the goal posts.

Don’t mistake my tone for zealotry. I’m open to careful criticism. If you do, please don’t try to lump me into one “side” on the topic of AI — whether it be market conditions, commercialization, safety, or research priorities — you probably don’t know me well enough to do that (yet). Apologies for the pre-defensive posture; but the convos here are often … fraught, so I’m trying to head off some of the usual styles of reply.

geraneum

> it indeed can build up text that is not directly present in its training data.

I’m curious how you know that.

triclops200

Not the parent, but it's kinda obvious even in degenerate logic cases:

# user

what's 78904 + 34567282902

# Gemma 2.5 pro

    Identify the core task: The user wants to add two numbers: 78904 and 34567282902.
    Recognize the operation: The operation is addition (+).
    Perform the calculation:
        Align the numbers vertically by place value:

          34567282902
        +       78904
        -------------

        Add column by column, starting from the rightmost column (ones place):
            2 + 4 = 6
            0 + 0 = 0
            9 + 9 = 18 (write down 8, carry over 1)
            2 + 8 + 1 (carry-over) = 11 (write down 1, carry over 1)
            8 + 7 + 1 (carry-over) = 16 (write down 6, carry over 1)
            2 + 1 (carry-over) = 3
            7 + 0 = 7
            6 + 0 = 6
            5 + 0 = 5
            4 + 0 = 4
            3 + 0 = 3
        Combine the results: 34567361806
    Format the answer: Present the calculation clearly and state the final result.
To find the sum of 78904 and 34567282902, you add the two numbers together:

          34567282902
        +       78904
        -------------
          34567361806

So, 78904+34567282902=34567361806.

The model definitely iteratively built up (useful and even correct) text that wasn't directly in the training data. (I doubt that number ever appeared anywhere in there, given that 34567361806 doesn't appear in any searches I do of various engines.)
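For what it's worth, the quoted result can be checked mechanically; a one-line check (Python here) confirms it:

    # Verifies the sum quoted in the transcript above.
    assert 78904 + 34567282902 == 34567361806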

geraneum

> The model definitely iteratively built up (useful and correct even) text that wasn't directly in the training data

The text is highly likely to be in the training data, as it's textbook arithmetic instruction. It's the number that is probably not there. Simple arithmetic is one of the verifiable operation types (truths) with a straightforward reward function used to train CoT models. In your example, what's interesting to me is how improving LLM inference with RL can result in such wonderful outcomes, but that's perhaps a different question.

xpe

To answer directly: Ask a question. Watch the “thinking” process. Estimate the likelihood that all of the generated text is in the training data.

Do you disagree with my claim?

Or perhaps you were hoping for a very rigorous set of experiments?

IanCal

What's the purpose of this?

> In this essay, I will perform the logical fallacy of argument from authority (wikipedia.org) to attack the notion that large language model (LLM)-based generative "AI" systems are capable of doing the work of human programmers.

Is any part of this intended to be valid? It's a very weak argument - is that the purpose?