LLMs Bring New Nature of Abstraction
36 comments
· June 25, 2025 · bwfan123
moregrist
This would require Martin Fowler to know about this article and to appreciate that it might be important.
He might, but I am not encouraged by his prior work or that of his contemporary Agile boosters. Regardless of how you feel about the Agile Manifesto (my feelings are quite mixed), these boosters over the last 25-ish years tend to love citing Agile and OOP things and rarely seem to look beyond that to historical or fundamental CS.
They found a lucrative niche telling management why their software process is broken and how to fix it. And why the previous attempt at Agile wasn't really Agile and so things are still broken.
Perhaps now they have to ride the AI hype train, too. I can only guess that whatever AI-driven lucrative consulting / talk-circuit may emerge from this will also be able to explain why the last AI attempt wasn't really AI and that's why things are still broken.
awb
Interesting read and thanks for sharing.
Two observations:
1. Natural language appears to be the starting point of any endeavor.
2.
> It may be illuminating to try to imagine what would have happened if, right from the start our native tongue would have been the only vehicle for the input into and the output from our information processing equipment. My considered guess is that history would, in a sense, have repeated itself, and that computer science would consist mainly of the indeed black art how to bootstrap from there to a sufficiently well-defined formal system. We would need all the intellect in the world to get the interface narrow enough to be usable, and, in view of the history of mankind, it may not be overly pessimistic to guess that to do the job well enough would require again a few thousand years.
LLMs are trying to replicate all of the intellect in the world.
I’m curious if the author would consider that these lofty caveats may be more plausible today than they were when the text was written.
bwfan123
> I’m curious if the author would consider that these lofty caveats may be more plausible today than they were when the text was written.
What is missed by many and highlighted in the article is this: there is no way to be "precise" with natural languages. The "operational definition" of precision involves formalism. For example, I could describe to you in English how an algorithm works, and maybe you would understand it. But for you to run that algorithm precisely requires a formal definition of a machine model and of the steps involved in programming it.
The machine model for English is undefined! And this could be considered a feature and not a bug: it allows a rich world of human meaning to be communicated, whereas formalism limits what can be done and communicated within its framework.
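To make that concrete with a toy example of my own: the English instruction "sort the numbers from smallest to largest" leaves the machine model implicit, while a formal version has to pin down every step.

```python
# English: "sort the numbers from smallest to largest."
# Formal: insertion sort, where every step is tied to a machine model
# (a mutable list, index arithmetic, a comparison operator).
def insertion_sort(xs: list[int]) -> list[int]:
    xs = list(xs)                      # work on a copy
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        while j >= 0 and xs[j] > key:  # shift larger elements right
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

print(insertion_sort([3, 1, 2]))  # [1, 2, 3]
```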
akavi
But for most human endeavors, "operational precision" is a useful implementation detail, not a fundamental requirement.
We want software to be operationally precise because it allows us to build up towers of abstractions without needing to worry about leaks (even the leakiest software abstraction is far more watertight than any physical "abstraction").
But, at the level of the team or organization that's _building_ the software, there's no such operational precision. Individuals communicating with each other drop down to such precision when useful, but at any endeavor larger than 2-3 people, the _vast_ majority of communication occurs in purely natural language. And yet, this still generates useful software.
The phase change of LLMs is that they're computers that finally are "smart" enough to engage at this level. This is fundamentally different from the world Dijkstra was living in.
skydhash
I forget where I read it, but the reason natural language works so well for communication is that its terms are labels for categories instead of identifiers. You can concatenate enough of them to refer to a singleton, but for the person in front of you, the result may still pick out many items or an empty set. Some labels may not even exist in their context.
So when we want a deterministic process, we invent a set of labels where each one is a singleton, along with a set of rules that specify how to describe their transformations. Then we build machines that can interpret those instructions. The main advantage is that we know the possible outputs (assuming good reliability) before we even have to act.
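A rough sketch of that distinction (my own illustration, not from wherever I read it):

```python
# Natural-language label: a category, with possibly many referents
# (or none at all) for the listener.
label_referents = {
    "bank": {"river bank", "savings bank", "blood bank"},
    "jabberwock": set(),  # a label with no referent in this context
}

# Formal identifier: a singleton by construction, with rules
# (the language semantics) for how it may be transformed.
account_balance = 100
account_balance = account_balance - 25  # one object, one defined transition
print(account_balance)  # 75
```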
LLMs don't work so well in that regard: while they have a perfect embedding of textual grammar rules, they don't have a good representation of what those labels refer to. All they have are relations between labels and how likely they are to be used together, not which sets those labels denote or how the items in those sets interact.
abeppu
I think we should shift the focus away from adapting LLMs to our purposes (e.g. external tool use) and from adapting how we think about software, and focus instead on getting models that internally understand compilation and execution. Rather than merely building around next-token prediction, the industry should take advantage of the fact that software in particular provides a cheap path to learning a domain-specific "world model".
Currently I sometimes get predictions where a variable that doesn't exist gets used or a method call doesn't match the signature. The text of the code might look pretty plausible but it's only relatively late that a tool invocation flags that something is wrong.
Imagine that instead of just code text, we trained a model on (code text, IR, bytecode) tuples, (bytecode, fuzzer inputs, execution trace) examples, and (trace, natural language description) annotations. The model would need to understand not just which token sequences seem likely, but (a) what the code compiles to, (b) what the code _does_, and (c) how a human would describe this behavior. Bonus points for some path to tie in pre/post-conditions, invariants, etc.
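Purely as a sketch of what such training records could look like (the names and fields here are mine, just to make the idea concrete):

```python
# Hypothetical training records pairing source text with the artifacts a
# compiler and runtime actually produce. All field names are illustrative.
from dataclasses import dataclass

@dataclass
class CompileExample:
    source: str         # code text
    ir: str             # e.g. LLVM IR or a bytecode disassembly
    bytecode: bytes     # the compiled artifact itself

@dataclass
class ExecutionExample:
    bytecode: bytes
    fuzz_input: bytes   # an input produced by a fuzzer
    trace: list[str]    # the observed execution trace

@dataclass
class DescriptionExample:
    trace: list[str]
    description: str    # a human-written summary of the behavior
```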
"People need to adapt to weaker abstractions in the LLM era" is a short term coping strategy. Making models that can reason about abstractions in a much tighter loop and higher fidelity loop may get us code generation we can trust.
agentultra
Abstraction? Hardly.
What are the new semantics and how are the lower levels precisely implemented?
Spoken language isn’t precise enough for programming.
I’m starting to suspect what people are excited about is the automation.
skydhash
But that's not really automation.
It's more like searching and then acting on the first output. You don't know what's going to come out; you just hope it will be good. The issue is that the query is fed into the output function, so what you get is a mixture of what you told it and what was stored. Great if you can separate the two afterwards, not so great if the output is tainted by the query.
With automation, what you seek is predictability. Not an echo chamber.
ADDENDUM
If we continue with the echo chamber analogy:
Prompt Engineering: Altering your voice so that what comes back is more pleasant
System Prompt: The echo chamber's builders altering the configuration to get the above effects
RAG: Sound effects
Agent: Replacing yourself in front of the echo chamber with someone or something that acts based on the echo.
ptx
> As we learn to use LLMs in our work, we have to figure out how to live with this non-determinism [...] but there will also be things we'll gain that few of us understand yet.
No thanks. Let's not give up determinism for vague promises of benefits "few of us understand yet".
aradox66
Determinism isn't always ideal. Determinism may trade off with things like accuracy, performance, etc. There are situations where the tradeoff is well worth it.
pixl97
Yep, there are plenty of things that aren't computable without burning all the entropy in the visible universe, yet if you swap in a heuristic you can get a good-enough answer in polynomial time.
Weather forecasts are a good example of this.
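To make that trade-off concrete (my own toy example, not something from the thread): brute-force TSP is exact but factorial-time, while the nearest-neighbour heuristic is polynomial and usually close enough.

```python
import itertools, math, random

def tour_length(points, order):
    # Total length of the closed tour visiting points in the given order.
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact_tsp(points):
    # O(n!) brute force: exact, but hopeless beyond roughly a dozen cities.
    return min(tour_length(points, order)
               for order in itertools.permutations(range(len(points))))

def greedy_tsp(points):
    # Nearest-neighbour heuristic: O(n^2), deterministic, usually close enough.
    unvisited, tour = set(range(1, len(points))), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour_length(points, tour)

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(8)]
print(round(exact_tsp(pts), 3), round(greedy_tsp(pts), 3))
```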
betenoire
I understand there are probabilities and shortcuts in weather forecasts... but what part is non-deterministic?
josefx
Most heuristics are still deterministic.
aradox66
Also, at temperature 0, LLMs can behave deterministically! "Indeterminism" may not be quite the right word for the kind of abstraction LLMs provide.
gpm
Even at temperature != 0, it's trivial to just use a fixed seed for the RNG... it's just a computer being used in a naive way, not even a multi-threaded one (i.e. with race conditions).
I wouldn't be surprised to find out that different stacks multiply fp16s slightly differently or something, so getting determinism across machines might take some work... but there's really nothing magic going on here.
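Here's a minimal sketch assuming the Hugging Face transformers API ("gpt2" is just a small example model): greedy decoding avoids sampling entirely, and a fixed seed makes sampled decoding repeatable on the same machine and software stack.

```python
# Sketch assuming the Hugging Face transformers API; "gpt2" is just a
# convenient small example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tok("The new nature of abstraction is", return_tensors="pt")

# "Temperature 0": greedy decoding, no sampling at all.
greedy = model.generate(**inputs, do_sample=False, max_new_tokens=20)

# Temperature != 0: sampling, but with a fixed RNG seed it is still
# repeatable on the same machine and software stack.
torch.manual_seed(42)
sampled = model.generate(**inputs, do_sample=True, temperature=0.8,
                         max_new_tokens=20)

print(tok.decode(greedy[0]), tok.decode(sampled[0]), sep="\n")
```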
bird0861
Quite pleased you mentioned this. I would like to add that transformer LLMs can be Turing complete; see the work of Franz Nowak and his colleagues (I think there were at least one or two other papers by other teams, but I read Nowak's the closest, as it was the latest one when I became aware of this).
josefx
That runs into the issue that nobody runs LLMs with a temperature of zero.
billyp-rva
Nobody was stopping anyone from making compilers that introduced different, random behavior every time you ran them. I think it's telling that this didn't catch on.
gpm
I think there was actually a very big push to stop people from doing that - https://en.wikipedia.org/wiki/Reproducible_builds
There were definitely compilers that used things like data structures with an unstable iteration order, resulting in non-determinism, and people set about stopping other people from doing that. This behavior would result in non-deterministic performance everywhere, and, combined with race conditions or just undefined behavior, in other random non-deterministic behaviors too.
At least in part this was achieved with techniques that can also be used to make LLMs deterministic, like seeding the RNGs in hash tables deterministically. LLMs are in that sense no less deterministic than iterating over a hash table (they are just a bunch of matrix multiplications with a sampling procedure at the end, after all).
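A small Python analogue of that hash-table point (my own example): string hash randomization makes set iteration order vary between runs unless you pin the seed.

```python
# Run this script twice with default settings and the printed order may
# differ between runs; run it with PYTHONHASHSEED=0 and it is stable.
#
#   python order_demo.py
#   PYTHONHASHSEED=0 python order_demo.py
names = {"alice", "bob", "carol", "dave", "erin"}
print(list(names))
```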
danenania
I think this gets at a major hurdle that needs to be overcome for truly human-level AGI.
Because the human brain is also non-deterministic. If you ask a software engineer the same question on different days, you can easily get different answers.
So I think what we want from LLMs is not determinism, just as that's not really what you'd want from a human. It's more about convergence. Non-determinism is ok, but it shouldn't be all over the map. If you ask the engineer to talk through the best way to solve some problem on Tuesday, then you ask again on Wednesday, you might expect a marginally different answer considering they've had time to think on it, but you'd also expect quite a lot of consistency. If the second answer went in a completely different direction, and there was no clear explanation for why, you'd probably raise an eyebrow.
Similarly, if there really is a single "right" answer to a question, like something fact-based or where best practices are extremely well established, you want convergence around that single answer every time, to the point that you effectively do have determinism in that narrow scope.
LLMs struggle with this. If you ask an LLM to solve the same problem multiple times in code, you're likely to get wildly different approaches each time. Adding more detail and constraints to the prompt helps, but it's definitely an area where LLMs are still far behind humans.
w10-1
> I've not had the opportunity to do more than dabble with the best Gen-AI tools, but I'm fascinated as I listen to friends and colleagues share their experiences. I'm convinced that this is another fundamental change
So: impressions of impressions is the foundation for a declaration of fundamental change?
What exactly is this abstraction? why nature? why new?
RESULT: unfounded, ill-formed expression
bgwalter
Dupe of https://news.ycombinator.com/item?id=44366904, which was abruptly sinking from the front page.
dingnuts
If LLMs are the new compilers, enabling software to be built with natural language, why can't LLMs just generate bytecode directly? Why generate HLL code at all?
akavi
Same reason humans use high-level languages: limited context windows.
Both humans and LLMs benefit from non-leaky abstractions—they offload low-level details and free up mental or computational bandwidth for higher-order concerns. When, say, implementing a permissioning system for a web app, I can't simultaneously track memory allocation and how my data model choices align with product goals. Abstractions let me ignore the former to "spend" my limited intelligence on the latter; same with LLMs and their context limits.
Yes, more intelligence (at least in part) means being able to handle larger contexts, and maybe superintelligent systems could keep everything "in mind." But even then, abstraction likely remains useful in trading depth for surface area. Chris Sawyer was brilliant enough to write Rollercoaster Tycoon in assembly, but probably wouldn't be able to do the same for Elden Ring.
(Also, at least until LLMs are so transcendentally intelligent they outstrip our ability to understand their actions, HLLs are much more verifiable by humans than assembly is. Admittedly, this might be a time-limited concern)
Uehreka
Why would the ability to generate source code imply the ability to generate bytecode? Also you wouldn’t want that, humans can’t review bytecode. I think you may be taking the metaphor too literally.
skydhash
Because the semantics of each term in a programming language map pretty much 1:1 to a sequential, logic-based ordering of terms in bytecode (which is still code).
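For example, CPython's dis module shows how each source-level construct expands into a short, predictable instruction sequence (exact opcode names vary by interpreter version):

```python
import dis

def add_one(x):
    return x + 1

# On recent CPython this disassembles to something like
# LOAD_FAST / LOAD_CONST / BINARY_OP / RETURN_VALUE.
dis.dis(add_one)
```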
> Also you wouldn’t want that, humans can’t review bytecode
The one great thing about automation (and formalism) is that you don't have to continuously review it. You vet it once, then you add another mechanism that monitors for wrong output/behavior. And now, the human is free for something else.
pixl97
I don't think they are... LLMs can learn from anything that's been tokenized. Feed in enough decompiled and labeled data alongside the bytecode and it's likely the machine will be able to dump out an executable. I wouldn't be surprised if an LLM could output a valid ELF right now, except that the relevant tokens may have been stripped in pretraining.
VinLucero
I agree here. English (human language) to Bytecode is the future.
With reverse translation as needed.
thfuran
English is a pretty terrible language for describing the precise behavior of a program.
demirbey05
How will you detect or fix hallucinated assembly code?
sgt101
LLMs are deterministic.
If you run an LLM on an NVIDIA GPU with optimization turned on, you can get non-deterministic results.
But, this is a choice.
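As a sketch of what that choice looks like in practice (assuming PyTorch; exact flags and kernel coverage vary by version):

```python
# Sketch assuming PyTorch; availability of deterministic kernels varies
# by version and by operation.
import os
import torch

os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # required by some CUDA ops
torch.manual_seed(0)
torch.use_deterministic_algorithms(True)   # error out on non-deterministic kernels
torch.backends.cudnn.benchmark = False     # disable auto-tuned (variable) kernels
```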
Can authors of such articles at least cite Dijkstra's "On the foolishness of 'natural language programming'" [1], which appeared eons ago? It presents an argument against the "English is a programming language" hype.
[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667...