
Dijkstra On the foolishness of "natural language programming"

01100011

People are sticking up for LLMs here and that's cool.

I wonder, what if you did the opposite? Take a project of moderate complexity and convert it from code back to natural language using your favorite LLM. Does it provide you with a reasonable description of the behavior and requirements encoded in the source code without losing enough detail to recreate the program? Do you find the resulting natural language description is easier to reason about?

I think there's a reason most of the vibe-coded applications we see people demonstrate are rather simple. There is a level of complexity and precision that is hard to manage. Sure, you can define it in plain english, but is the resulting description extensible, understandable, or more descriptive than a precise language? I think there is a reason why legalese is not plain English, and it goes beyond mere gatekeeping.

drpixie

> Do you find the resulting natural language description is easier to reason about?

An example from a different field: aviation weather forecasts and notices are published in a strongly abbreviated and codified form. For example, the weather at Sydney, Australia right now is:

  METAR YSSY 031000Z 08005KT CAVOK 22/13 Q1012 RMK RF00.0/000.0
It's almost universal that new pilots ask "why isn't this in words?". And, indeed, most flight planning apps will convert the code to prose.

But professional pilots (and ATC, etc.) universally prefer the coded format. It is compact (one line instead of a whole paragraph), the format is well defined (I know exactly where to look for the one piece I need), and it's unambiguous.
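To make the "I know exactly where to look" point concrete, here's a rough sketch (Python; field meanings follow the standard METAR layout, and this is nowhere near a full parser) of how that one line splits into fixed groups:

  # Rough sketch: split the fixed groups of the METAR above.
  # Real METARs have optional groups and variants; this is illustration only.
  report = "METAR YSSY 031000Z 08005KT CAVOK 22/13 Q1012 RMK RF00.0/000.0"
  kind, station, time, wind, vis, temp_dew, pressure, *remarks = report.split()

  print(station)              # YSSY    -> Sydney
  print(time)                 # 031000Z -> day 03, 10:00 UTC
  print(wind)                 # 08005KT -> from 080 degrees at 5 knots
  print(vis)                  # CAVOK   -> ceiling and visibility OK
  print(temp_dew.split("/"))  # 22/13   -> 22 C, dewpoint 13 C
  print(pressure)             # Q1012   -> QNH 1012 hPa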

Same for maths and coding - once you reach a certain level of expertise, the complexity and redundancy of natural language is a greater cost than benefit. This seems to apply to all fields of expertise.

WillAdams

Reading up on the history of mathematics really makes that clear as shown in

https://www.goodreads.com/book/show/1098132.Thomas_Harriot_s...

(ob. discl., I did the typesetting for that)

It shows at least one lengthy and quite wordy example of how an equation would have been stated, then contrasts it in the "new" symbolic representation (this was one of the first major works to make use of Robert Recorde's development of the equals sign).

tim333

Although if you look at most maths textbooks or papers there's a fair bit of English waffle per equation. I guess both have their place.

shit_game

> Same for maths and coding - once you reach a certain level of expertise, the complexity and redundancy of natural language is a greater cost than benefit. This seems to apply to all fields of expertise.

And beyond these points: ambiguity. A formal specification of communication can avoid ambiguity by being absolute and precise regardless of who is speaking and who is interpreting. Natural languages are riddled with inconsistencies, colloquialisms, and imprecisions that can lead to misinterpretation by even the most fluent of speakers, simply because natural languages are human languages: different people learn them differently and ascribe different meanings or interpretations to different wordings, depending on the cultural backgrounds of those involved and the lack of a strict formal specification.

smcin

Sure, but much ambiguity is trivially handled with a minimum amount of context. "Tomorrow I'm flying from Austin to Atlanta and I need to return the rental". (Is the rental (presumably car) to be returned to Austin or Atlanta? Almost always Austin, absent some unusual arrangement. And presumably to the Austin airport rental depot, unless context says it was another location. And presumably before the flight, with enough timeframe to transfer and checkin.)

(You meant inherent ambiguity in actual words, though.)

staplers

Extending this further, "natural language" changes within populations over time, where words or phrases carry different meanings given context. The words "cancel" or "woke" were fairly banal a decade ago, whereas now they can be deeply charged.

All this to say that "natural language"'s best function is interpersonal interaction, not defining systems. I imagine most systems thinkers will understand this. Any codified system is essentially its own language.

diputsmonro

An interesting perspective on this is that language is just another tool on the job. Like any other tool, you use the kind of language that is most applicable and efficient. When you need to describe or understand weather conditions quickly and unambiguously, you use METAR. Sure, you could use English or another natural language, but it's like using a multitool instead of a chef knife. It'll work in a pinch, but a tool designed to solve your specific problem will work much better.

Not to slight multitools or natural languages, of course - there is tremendous value in a tool that can basically do everything. Natural languages have the difficult job of describing the entire world (or, the experience of existing in the world as a human), which is pretty awesome.

And different natural languages give you different perspectives on the world, e.g., Japanese describes the world from the perspective of a Japanese person, with dedicated words for Japanese traditions that don't exist in other cultures. You could roughly translate "kabuki" into English as "Japanese play", but you lose a lot of what makes kabuki "kabuki", as opposed to "noh". You can use lots of English words to describe exactly what kabuki is, but if you're going to be talking about it a lot, operating solely in English is going to become burdensome, and it's better to borrow the Japanese word "kabuki".

All languages are domain specific languages!

thaumasiotes

> You can use lots of English words to describe exactly what kabuki is, but if you're going to be talking about it a lot, operating solely in English is going to become burdensome, and it's better to borrow the Japanese word "kabuki".

This is incorrect. Using the word "kabuki" has no advantage over using some other three-syllable word. In both cases you'll be operating solely in English. You could use the (existing!) word "trampoline" and that would be just as efficient. The odds of someone confusing the concepts are low.

Borrowing the Japanese word into English might be easier to learn, if the people talking are already familiar with Japanese, but in the general case it doesn't even have that advantage.

Consider that our name for the Yangtze River is unrelated to the Chinese name of that river. Does that impair our understanding, or use, of the concept?

sim7c00

You guys are not wrong. Try to explain any semi-complex program and you will instantly resort to diagrams, tables, flow charts, etc.

Of course, you can get your LLM to be a bit evil in its replies, to help you truly, rather than to spoon-feed you an unhealthy diet.

I forbid my LLM to send me code and tell it to be harsh with me if I ask stupid things. Stupid as in, lazy questions. Send me the link to the manual/specs with an RTFM, or something I can digest and better my understanding. Send links, not mazes of words.

Now I can feel myself grow again as a programmer.

As you said: you need to build expertise, not try to find ways around it.

With that expertise you can find _better_ ways. But for this, firstly, you need the expertise.

azernik

If you don't mind sharing - what's the specific prompt you use to get this to happen, and which LLM do you use it with?

steveBK123

And to this point - the English language has far more ambiguity than most programming languages.

tim333

> prefer the coded format. It is compact...

On the other hand "a folder that syncs files between devices and a server" is probably a lot more compact than the code behind Dropbox. I guess you can have both in parallel - prompts and code.

ratorx

Let’s say that all of the ambiguities are automatically resolved in a reasonable way.

This is still not enough to let two different computers running two different LLMs produce compatible code, right? And there's no guarantee of compatibility as you refine it further, etc. And if you get into the business of specifying the format/protocol, suddenly you have made it much less concise.

So as long as you run the prompt exactly once, it will work, but not necessarily the second time in a compatible way.

emaro

More compact, but also more ambiguous. I suspect an exact specification of what Dropbox does in natural language would not be substantially more compact than the code.

xigoi

I’ll bet my entire net worth that you can’t get an LLM to exactly recreate Dropbox from this description alone.

cratermoon

What do you mean by "sync"? What happens with conflicts, does the most recent version always win? What is "recent" when clock skew, dst changes, or just flat out incorrect clocks exist? Do you want to track changes to be able to go back to previous versions? At what level of granularity?
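Even a bare-bones answer to those questions stops looking like a sentence and starts looking like a spec. A sketch of the decisions hiding inside the word "syncs" (hypothetical names, Python):

  # Hypothetical policy object: none of these choices are implied by "syncs files".
  from dataclasses import dataclass
  from enum import Enum

  class ConflictPolicy(Enum):
      LAST_WRITER_WINS = "last_writer_wins"   # needs a trustworthy clock
      KEEP_BOTH = "keep_both"                 # rename one copy on conflict
      ASK_USER = "ask_user"

  @dataclass
  class SyncPolicy:
      conflict: ConflictPolicy = ConflictPolicy.KEEP_BOTH
      clock_source: str = "server"            # don't trust skewed client clocks
      versions_to_keep: int = 30              # how far back "go back" goes
      version_granularity: str = "per_save"   # vs. "per_session", "per_byte", ...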

scotty79

"syncs" can mean so many different things

delusional

You just cut out half the sentence and responded to one part. Your description is neither well defined nor is it unambiguous.

You can't just pick a singular word out of an argument and argue about that. The argument has a substance, and the substance is not "shorter is better".

fnord77

I wonder why the legal profession sticks to natural language

me-vs-cat

Backwards compatibility works differently there, and legalese has not exactly evolved naturally.

thaumasiotes

You can see the same phenomenon playing a roguelike game.

They traditionally have ASCII graphics, and you can easily determine what an enemy is by looking at its ASCII representation.

For many decades now graphical tilesets have been available for people who hate the idea of ASCII graphics. But they have to fit in the same space, and it turns out that it's very difficult to tell what those tiny graphics represent. It isn't difficult at all to identify an ASCII character rendered in one of 16 (?) colors.

eightysixfour

Language can carry tremendous amounts of context. For example:

> I want a modern navigation app for driving which lets me select intersections that I never want to be routed through.

That sentence is low complexity but encodes a massive amount of information. You are probably thinking of a million implementation details that you need to get from that sentence to an actual working app but the opportunity is there, the possibility is there, that that is enough information to get to a working application that solves my need.
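For example, just the "never route through these intersections" clause already implies a data model and a routing constraint. A minimal sketch (hypothetical names and types) of that single detail:

  # One implied detail made explicit: filter candidate routes against a set of
  # intersections the user has banned. Names and types are hypothetical.
  banned = {("Main St", "5th Ave"), ("A1", "Ring Rd")}

  def route_is_allowed(route):
      """A route here is just a sequence of intersection identifiers."""
      return not any(node in banned for node in route)

  candidates = [
      [("Main St", "5th Ave"), ("Main St", "6th Ave")],
      [("Oak St", "5th Ave"), ("Oak St", "6th Ave")],
  ]
  acceptable = [r for r in candidates if route_is_allowed(r)]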

And just as importantly, if that is enough to get it built, then “can I get that in cornflower blue instead” is easy and the user can iterate from there.

fourside

You call it context or information, but I call it assumptions. There are a ton of assumptions in that sentence that an LLM will need to make in order to take that and turn it into a v1. I’m not sure what resulting app you’d get, but if you did get a useful starting point, I’d wager the fact that you chose a variation of an existing type of app helped a lot. That is useful, but I’m not sure it's universally useful.

stouset

Dingdingding

Since none of those assumptions are specified, you have no idea which of them will inexplicably change during a bugfix. You wanted that in cornflower blue instead, but now none of your settings are persisted in the backend. So you tell it to persist them, but now the UI is completely different. So you specify the UI more precisely, and now the backend data format is incompatible.

By the time you specify all the bits you care about, maybe you start to think about a more concise way to specify all these requirements…

eightysixfour

> There are a ton assumptions in that sentence that an LLM will need to make in order to take that and turn it into a v1.

I think you need to think of the LLM less like a developer and more like an entire development shop. The first step is working with the user to define their goals, then to repeat it back to them in some format, then to turn it into code, and to iterate during the work with feedback. My last product development conversation with Claude included it drawing svgs of the interface and asking me if that is what I meant.

This is much like how other professional services providers don’t need you to bring them exact specs, they take your needs and translate it to specifications that producers can use - working with an architect, a product designer, etc. They assume things and then confirm them - sometimes on paper and in words, sometimes by showing you prototypes, sometimes by just building the thing.

The near to mid future of work for software engineers is in two areas in my mind:

1. Doing things no one has done before. The hard stuff. That’s a small percentage of most code, a large percentage of value generated.

2. Building systems and constraints that these automated development tools work within.

acka

This is why we have system prompts (or prompt libraries if you cannot easily modify the system prompt). They can be used to store common assumptions related to your workflow.

In this example, setting the system prompt to something like "You are an experienced Android app developer specialising in apps for phone form factor devices" (replacing Android with iOS if needed) would get you a long way.
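As a sketch of what that looks like in practice (assuming a typical chat-style API where a "system" message is sent ahead of the user's request; the exact client call varies by provider):

  # Sketch: baking workflow assumptions into a system prompt.
  # The message layout is the common chat-API shape; the actual request call
  # depends on whichever provider/SDK you use.
  system_prompt = (
      "You are an experienced Android app developer specialising in apps for "
      "phone form factor devices. Unless told otherwise, assume Kotlin and a "
      "typical modern toolchain."  # the extra assumptions here are illustrative
  )

  messages = [
      {"role": "system", "content": system_prompt},
      {"role": "user", "content": "I want a modern navigation app for driving which "
                                  "lets me select intersections that I never want to "
                                  "be routed through."},
  ]
  # response = some_client.chat(model=..., messages=messages)  # provider-specific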

fluidcruft

I'm not so sure it's about precision rather than working memory. My presumption is that people struggle to understand sufficiently large prose versions for the same reason an LLM would struggle working with larger prose versions: people have limited working memory. The time needed to reload info from prose is significant. People reading large text works will start highlighting, taking notes, and inventing shorthand forms in their notes. Compact forms and abstractions help reduce demands on working memory and information search. So I'm not sure it's about language precision.

layer8

Another important difference is reproducibility. With the same program code, you are getting the same program. With the same natural-language specification, you will presumably get a different thing each time you run it through the "interpreter". There is a middle ground, in the sense that a program has implementation details that aren't externally observable. Still, making the observable behavior 100% deterministic by mere natural-language description doesn't seem a realistic prospect.

card_zero

So is more compact better? Does K&R's *d++ = *s++; get a pass now?

alankarmisra

I would guard against "arguing from the extremes". I would think "on average" compact is more helpful. There are definitely situations where compactness can lead to obfuscation but where the line is depends on the literacy and astuteness of the reader in the specific subject as already pointed out by another comment. There are ways to be obtuse even in the other direction where written prose can be made sufficiently complicated to describe even the simplest things.

fluidcruft

That's probably analogous to reading levels. So it would depend on the reading level of the intended audience. I haven't used C in almost a decade and I would have to refresh/confirm the precise orders of operations there. I do at least know that I need to refresh and after I look it up it should be fine until I forget it again. For people fluent in the language unlikely to be a big deal.

Conceivably, if there were an equivalent of an "8th grade reading level" for C that forbade pointer arithmetic on the left-hand side of an assignment (for example), it could be reformatted by an LLM fairly easily. Some for-loop expressions would probably be significantly less elegant, though. But that seems better than converting it to English.

That might actually make a clever tooltip sort of thing--highlight a snippet of code and ask for a dumbed-down version in a popup or even an English translation to explain it. Would save me hitting the reference.

APL is another example of dense languages that (some) people like to work in. I personally have never had the time to learn it though.

layer8

When I first read the K&R book, that syntax made perfect sense. They build up to it over a few chapters, if I remember correctly.

What has changed is that nowadays most developers aren't doing low-level programming anymore, where the building blocks of that expression (or the expression itself) would be common idioms.

pton_xd

I think the parent poster is incorrect; it is about precision, not about being compact. There is exactly one interpretation for how to parse and execute a computer program. The opposite is true of natural language.

kmoser

Nothing wrong with that as long as the expected behavior is formally described (even if that behavior is indeterminate or undefined) and easy to look up. In fact, that's a great use for LLMs: to explain what code is doing (not just writing the code for you).

fluoridation

No, but *++d = *++s; does.

wizzwizz4

That's confusing because of order of operations. But

  while ( *(d++) = *(s++) );
is fairly obvious, so I think it gets a pass.

Affric

Sure, but we build (leaky) abstractions, and this even happens in legal texts.

Asking an LLM to build a graphical app in assembly from an ISA and a driver for the display would give you nothing.

But with a mountain of abstractions then it can probably do it.

This is not to defend LLMs so much as to say that, by providing the right abstractions (reusable components), I do think they will get you a lot closer.

fsloth

I've been doing toy examples of non-trivial complexity. Architecting the code so context is obvious and there are clear breadcrumbs everywhere is the key. And the LLM can do most of this. Prototype -> refactor/cleanup -> more features -> refactor/cleanup -> add architectural notes.

If you know what a well-architected piece of code is supposed to look like, and you proceed in steps, the LLM gets quite far as long as you are handholding it. So this is usable for non-trivial _familiar_ code where typing it all would be slower than prompting the LLM. Maintaining the LLM's context is the key here, imo, and stopping it when you see weird stuff. So it requires you to act as the senior partner, PR-reviewing everything.

cdkmoose

This raises the question: how many of the newer generation of developers/engineers "know what a well architected piece of code is supposed to look like"?

sciencesama

LLM frameworks!!

jimmydddd

> I think there is a reason why legalese is not plain English

This is true. Part of the precision of legalese is that the meanings of some terms have already been more precisely defined by the courts.

dongkyun

Yeah, my theory on this has always been that a lot of programming's efficiency gains have come from the ability to unambiguously define behavior, which mostly comes from drastically restricting the possible states and inputs a program can achieve.

The states and inputs that lawyers have to deal with tend to be much more vague and imprecise (which is expected if you're dealing with human behavior and not text or some other encodeable input), and so they have to rely on inherently ambiguous phrases like "reasonable" and "without undue delay."
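To illustrate the "restricting possible states" half of that, a tiny sketch (Python, hypothetical domain) of the kind of thing programs can do and statutes can't:

  # Only these states can exist, and only these transitions are legal; nothing
  # needs to be interpreted as "reasonable". Domain and names are made up.
  from enum import Enum

  class OrderState(Enum):
      PENDING = "pending"
      PAID = "paid"
      SHIPPED = "shipped"
      CANCELLED = "cancelled"

  ALLOWED = {
      OrderState.PENDING: {OrderState.PAID, OrderState.CANCELLED},
      OrderState.PAID: {OrderState.SHIPPED, OrderState.CANCELLED},
      OrderState.SHIPPED: set(),
      OrderState.CANCELLED: set(),
  }

  def transition(current: OrderState, new: OrderState) -> OrderState:
      if new not in ALLOWED[current]:
          raise ValueError(f"illegal transition: {current.value} -> {new.value}")
      return new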

xwiz

This opens up an interesting possibility for a purely symbol-based legal code. It would probably improve clarity for legal phrases that overlap with common English, and you could avoid ambiguity arising from language constructs, as in this case[1], where drivers were initially denied overtime pay because of a missing comma in the overtime law.

[1] https://cases.justia.com/federal/appellate-courts/ca1/16-190...

jsight

I've thought about this quite a bit. I think a tool like that would be really useful. I can imagine asking questions like "I think this big codebase exposes a rest interface for receiving some sort of credit check object. Can you find it and show me a sequence diagram for how it is implemented?"

The challenge is that the codebase is likely much larger than what would fit into a single context window. IMO, the LLM really needs to be taught to consume the project incrementally and build up a sort of "mental model" of it to really make this useful. I suspect that a combination of tool usage and RL could produce an incredibly useful tool for this.
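A crude sketch of that incremental approach (Python; `llm_summarize` is a hypothetical stand-in for whatever model/tooling you'd actually call):

  # Summarize files one at a time, then summarize the summaries into a
  # repo-level "mental model". llm_summarize is a placeholder, not a real API.
  from pathlib import Path

  def llm_summarize(prompt: str) -> str:
      raise NotImplementedError("stand-in for an actual LLM call")

  def build_mental_model(repo_root: str) -> str:
      file_summaries = []
      for path in sorted(Path(repo_root).rglob("*.py")):
          source = path.read_text(errors="ignore")
          file_summaries.append(
              f"{path}:\n" +
              llm_summarize("Summarize this file's responsibilities:\n" + source)
          )
      return llm_summarize(
          "Combine these per-file summaries into an architectural overview, "
          "noting exposed interfaces (e.g. REST endpoints):\n\n" +
          "\n\n".join(file_summaries)
      )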

1vuio0pswjnm7

"Sure, you can define it in plain english, but is the resulting description extensible, understandable, or more descriptive than a precise language? I think there is a reason why legalese is not plain English, and it goes beyond mere gatekeeping."

Is this suggesting the reason for legalese is to make documents more "extensible, understandable or descriptive" than if they were written in plain English?

What is this reason, the one the parent thinks legalese serves, that "goes beyond gatekeeping"?

Plain English can be every bit as precise as legalese.

It is also unclear that legalese exists for the purpose of gatekeeping. For example, it may be an artifact that survives based on familiarity and laziness.

Law students are taught to write in plain English.

https://www.law.columbia.edu/sites/default/files/2021-07/pla...

In some situations, e.g., drafting SEC filings, use of plain English is required by law.

https://www.law.cornell.edu/cfr/text/17/240.13a-20

feoren

> Plain English can be every bit as precise as legalese.

If you attempt to make "plain English" as precise as legalese, you will get something that is basically legalese.

Legalese does also have some variables, like "Party", "Client", etc. This allows for both precision -- repeating the variable name instead of using pronouns or re-identifying who you're talking about -- and also for reusability: you can copy/paste standard language into a document that defines "Client" differently, similar to a subroutine.
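The subroutine analogy is fairly literal. A toy sketch (Python; the clause and names are made up):

  # Defined terms as variables: the same boilerplate clause is reused with
  # "Client" and "Party" bound differently, like calling a subroutine.
  STANDARD_CLAUSE = (
      "{client} shall indemnify {party} against any claim arising from "
      "{client}'s use of the Services."
  )

  def instantiate(clause: str, *, client: str, party: str) -> str:
      return clause.format(client=client, party=party)

  print(instantiate(STANDARD_CLAUSE, client="Acme Pty Ltd", party="Example Corp"))
  print(instantiate(STANDARD_CLAUSE, client="Foo LLC", party="Example Corp"))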

soulofmischief

What you're describing is decontextualization. A sufficiently powerful transformer would theoretically be able to recontextualize a sufficiently descriptive natural language specification. Likewise, the same or an equivalently powerful transformer should be able to fully capture the logic of a complicated program. We just don't have sufficiently powerful transformers yet.

I don't see why a complete description of the program's design philosophy as well as complete descriptions of each system and module and interface wouldn't be enough. We already produce code according to project specification and logically fill in the gaps by using context.

izabera

> sufficiently descriptive natural language specification

https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...

intelVISA

sounds like it would pair well with a suitably smart compiler

soulofmischief

No, the key difference is that an engineer becomes more product-oriented, and the technicalities of the implementation are deprioritized.

It is a different paradigm, in the same way that a high-level language like JavaScript handles a lot of low-level stuff for me.

scribu

“Fill in the gaps by using context” is the hard part.

You can’t pre-bake the context into an LLM because it doesn’t exist yet. It gets created through the endless back-and-forth between programmers, designers, users etc.

soulofmischief

But the end result should be a fully-specced design document. That might theoretically be recoverable from a complete program given a sufficiently powerful transformer.

haolez

This reminded me of this old quote from Hal Abelson:

"Underlying our approach to this subject is our conviction that "computer science" is not a science and that its significance has little to do with computers. The computer revolution is a revolution in the way we think and in the way we express what we think. The essence of this change is the emergence of what might best be called procedural epistemology—the study of the structure of knowledge from an imperative point of view, as opposed to the more declarative point of view taken by classical mathematical subjects. Mathematics provides a framework for dealing precisely with notions of "what is". Computation provides a framework for dealing precisely with notions of "how to"."

light_triad

This is key: computation is about making things happen. Coding with an LLM adds a level of abstraction, but the need for precision and correctness of the "things that happen" doesn't go away. No matter how many cool demos and "coding is dead" pronouncements AI produces (and the demos are very cool), the bulk of the work moves to the pre- and post-processing and the evals. To the extent that it makes programming more accessible it's a good thing, but it can't really replace it.

Cheer2171

Well that sure isn't what they teach in computer science programs anymore

0xbadcafebee

I read this when I was younger, but I only now get it, and realize how true it all is.

13) Humans writing code is an inherently flawed concept. Doesn't matter what form the code takes. Machine code, assembly language, C, Perl, or a ChatGPT prompt. It's all flawed in the same way. We have not yet invented a technology or mechanism which avoids it. And high level abstraction doesn't really help. It hides problems only to create new ones, and other problems simply never go away.

21) Loosely coupled interfaces made our lives easier because it forced us to compartmentalize our efforts into something manageable. But it's hard to prove that this is a better outcome overall, as it forces us to solve problems in ways that still lead to worse outcomes than if we had used a simpler [formal] logic.

34) We will probably end up pushing our technical abilities to the limit in order to design a superior system, only to find out in the end that simpler formal logic is what we needed all along.

55) We're becoming stupider and worse at using the tools we already have. We're already shit at using language just for communicating with each other. Assuming we could make better programs with it is nonsensical.

For a long time now I've been upset at computer science's lack of innovation in the methods we use to solve problems. Programming is stupidly flawed. I've never been good at math, so I never really thought about it before, but math is really the answer to what I wish programming was: a formal system for solving a problem, and a formal system for proving that the solution is correct. That's what we're missing from software. That's where we should be headed.

centra_minded

Modern programming already is very, very far from strict obedience and formal symbolism. Most programmers these days (myself included!) are using libraries, frameworks, and other features that mean what they are doing in practice is wielding sky-high abstractions, gluing things together they do not (and can not) fully understand the inner workings of.

If I create a website with Node.js, I’m not manually managing memory, parsing HTTP requests byte-by-byte, or even attempting to fully grasp the event loop’s nuances. I’m orchestrating layers of code written by others, trusting that these black boxes will behave as advertised according to my best, but deeply incomplete, understanding of them.

I'm not sure what this means for LLMs programming, but I already feel separated from the case Dijkstra lays out.

tired-turtle

> Modern programming already is very, very far from strict obedience and formal symbolism

It's difficult to square this with what follows.

Consider group theory. A group G is a set S with an associative operator * that supports closure, an identity, and inverses. With that abstraction comes a hefty amount of power. In some sense, a group is akin to a trait on some type, much like how a class in Java can implement or extend Collection. (Consider how a ring ‘extends’ a group.)

I’d posit frameworks and libraries are no different in terms of formal symbolism from the math structure laid out above. Maybe the interfaces are fuzzy and the documentation is shoddy, but there’s still a contract we use to reason about the tool at hand.
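In code, the same structure shows up as an interface that a concrete carrier type has to satisfy. A sketch (Python; the names are mine, and the group laws themselves aren't expressed, only the signatures):

  # A group as a "trait": any carrier type providing these operations (and
  # satisfying the group laws, which the type system can't check) qualifies.
  from typing import Protocol, TypeVar

  T = TypeVar("T")

  class Group(Protocol[T]):
      def identity(self) -> T: ...
      def op(self, a: T, b: T) -> T: ...
      def inverse(self, a: T) -> T: ...

  class IntAddition:
      """Integers under addition: identity 0, inverse is negation."""
      def identity(self) -> int: return 0
      def op(self, a: int, b: int) -> int: return a + b
      def inverse(self, a: int) -> int: return -a

  g: Group[int] = IntAddition()
  assert g.op(g.inverse(5), 5) == g.identity()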

> I’m not manually managing memory, parsing HTTP requests byte-by-byte

If I don’t reprove Peano’s work, then I’m not really doing math?

l0new0lf-G

Finally, someone put it this way! Natural language has embedded limitations that stem from our own mental limitations: the human mind sometimes thinks in terms that are too abstract or too specific, and misses important details or generalizations.

As a programmer, I know first-hand that the problems, or even absurdities, of some assignments only become apparent once one has begun to implement them as code, i.e. as strict symbolism.

Not to mention that it often takes more time to explain something accurately in natural language than it takes to just write the algorithm as code.

chilldsgn

Yes! I have a certain personality preference for abstractions and tend to understand things in an abstract manner which is extremely difficult for me to articulate in natural language.

MattSayar

We need realistic expectations for the limitations of LLMs as they work today. Philosophically, natural language is imperfect at communicating ideas between people, which is its primary purpose! How often do you rewrite sentences, or say "actually what I meant was...", or rephrase your emails before pressing Send? We are humans and we rarely get things perfect on the first try.

And now we're converting this imperfect form of communication (natural language) into a language for machines (code), which notoriously do exactly what you say, not what you intend.

NLP is massively, and I mean massively, beneficial to get you started on the right path to writing an app/script/etc. But at the end of the day it may be necessary to refactor things here and there. The nice thing is you don't have to be a code ninja to get value out of LLMs, but it's still helpful and sometimes necessary.

roccomathijn

The man has been dead for 23 years

indigoabstract

He's the Immortal Dutchman.

moralestapia

This is one of the best comments I've read on this site in a long while.

A single, crude statement of fact slaying the work of a million typewriter monkeys spewing out random characters thinking they're actually writing Shakespeare, lmao.

sotix

> Machine code, with its absence of almost any form of redundancy, was soon identified as a needlessly risky interface between man and machine. Partly in response to this recognition so-called "high-level programming languages" were developed, and, as time went by, we learned to a certain extent how to enhance the protection against silly mistakes. It was a significant improvement that now many a silly mistake did result in an error message instead of in an erroneous answer.

I feel that we’ve collectively jumped into programming with LLMs too quickly. I really liked how Rust has iterated on pointing out “silly mistakes” and made it much more clear what the fix should be. That’s a much more favorable development for me as a developer. I still have the context and understanding of the code I work on while the compiler points out obvious errors and their fixes. Using an LLM feels like a game of semi-intelligent guessing on the other hand. Rust’s compiler is the master teaching the apprentice. LLMs are the confident graduate correcting the master. I greatly prefer Rust’s approach and would like to see it evolved further if possible.

astrobe_

Rust (and others) has type inference, LLMs have so-called "reasoning". They fake understanding, a lie that will sooner or later have consequences.

Someone

/s: that’s because we haven’t gone far enough. People use natural language to generate computer programs. Instead, they should directly run prompts.

“You are the graphics system, an entity that manages what is on the screen. You can receive requests from all programs to create and destroy “windows”, and further requests to draw text, lines, circles, etc. in a window created earlier. Items can be of any colour.

You also should send mouse click information to whoever created the window in which the user clicked the mouse.

There is one special program, the window manager, that can tell you what windows are displayed where on any of the monitors attached to the system”

and

“you are a tic-tac-toe program. There is a graphics system, an entity that manages what is on the screen. You can command it to create and destroy “windows”, and to draw text, lines, circles, etc. in a window created earlier. Items can be of any colour.

The graphics you draw should show a tic-tac-toe game, where users take turn by clicking the mouse. If a user wins the game, it should…

Add ads to the game, unless the user has a pay-per-click subscription”

That should be sufficient to get a game running…

To save it, you’d need another prompt:

”you are a file system, an entity that persists data to disk…”

You also will want

”you are a multi-tasking OS. You give multiple LLMs the idea that they have full control over a system’s CPU and memory. You…”

I look forward to seeing this next year in early April.

slt2021

all these prompts are currently implemented under the hood by generating and running Python code

llsf

Natural language is a poor medium for communicating rules and orders. The current state of affairs in the US is a prime example.

We are still debating what some laws and amendments mean. The meaning of words change over time, lack of historical context, etc.

I would love natural language to operate machines, but I have been programming since the mid-'80s, and the stubbornness of computer languages (from BASIC to Go) strikes a good balance, putting enough responsibility on the emitter to precisely express what they want the machine to do.

weeeee2

Forth, PostScript and Assembly are the "natural" programming languages from the perspective of how what you express maps to the environment in which the code executes.

The question is "natural" to whom, the humans or the computers?

AI does not make human language natural to computers. Left to their own devices, AIs would invent languages that are natural with respect to their deep learning architectures, which is their environment.

There is always going to be an impedance mismatch across species (humans and AIs) and we can't hide it by forcing the AIs to default to human language.

jedimastert

> It was a significant improvement that now many a silly mistake did result in an error message instead of in an erroneous answer. (And even this improvement wasn't universally appreciated: some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.)

If I didn't know who wrote this it would seem like a jab directly at people who dislike Rust.

still_grokking

Rust? Since when is Rust the pinnacle of static type safety?

After working for some time with a language that can express even stronger invariants in types than Rust (Scala), I no longer see that property as a clear win regardless of circumstances. I no longer think "stronger types == better, no matter what".

You have a price to pay for "not being allowed to make mistakes": explorative work becomes quite difficult if the type system is really rigid. Fast iteration may become impossible. (Small changes may require re-architecting half your program, just to make the type system happy again![1])

It's a trade-off. Like with everything else. For a robust end product it's a good thing. For fast experimentation it's a hindrance.

[1] Someone described that issue quite well in the context of Rust and game development here: https://loglog.games/blog/leaving-rust-gamedev/

But it's not exclusive to Rust, nor game dev.

bob1029

> You have a price to pay for "not being allowed to make mistakes": explorative work becomes quite difficult

This is a huge deal for me.

At the beginning of most "what if...?" exercises, I am just trying to get raw tuples of information in and out of some top-level-program logic furnace for the first few [hundred] iterations. I'll likely resort to boxing and extremely long argument lists until what I was aiming for actually takes hold.

I no longer have an urge to define OOP type hierarchies when the underlying domain model is still a vague cloud in my head. When unguided, these abstractions feel like playing Minecraft or Factorio.

mjburgess

I can't remember if I came up with this analogy or not, but programming in Rust is like trying to shape a piece of clay just as it's being baked.

card_zero

> Explorative work becomes quite difficult if the type system is really rigid

Or to put it another way, the ease of programming is correlated with the ease of making undetected mistakes.

still_grokking

I'm not sure you tried to understand what I've depicted.

As long as you don't know what the end result should look like, there are no "mistakes".

The whole point of explorative work is to find out how to approach something in the first place.

It's usually impossible to come up with the final result on the first try!

Once you actually know how to do something in general, tools which help avoid undetected mistakes in the implementation of the chosen approach are really indispensable. But before that general approach is figured out, too much rigidity is not helpful; it's a hindrance.

To understand this better read the linked article. It explains the problem very well over a few paragraphs.

pwdisswordfishz

I would have thought of people who unironically liked fractal-of-bad-design-era PHP and wat-talk JavaScript.

I guess some kinds of foolishness are just timeless.

mjburgess

As a person who dislikes Rust, the problem is the error messages when there's no error -- quite a different problem. The Rust type system is not an accurate model of RAM, the CPU, or indeed any device.

He's here talking about interpreted languages.

He's also one of those mathematicians who are now called computer scientists whose 'algorithms' are simple restatements of mathematics and require no devices. A person actively hostile, in temperament, to the embarrassing activity of programming an actual computer.

truculent

Any sufficiently advanced method of programming will start to look less like natural language and more like a programming language.

If you still don’t want to do programming, then you need some way to instruct or direct the intelligence that _will_ do the programming.

And any sufficiently advanced method of instruction will look less like natural language, and more like an education.

indigoabstract

Using natural language to specify and build an application is not unlike having a game design document before you actually start prototyping your game. But once you have implemented the bulk of what you wanted, the implementation becomes the reference and you usually end up throwing away the GDD since it's now out of sync with the actual game.

Insisting that for every change one should go read the GDD, implement the feature and then sync back the GDD is cumbersome and doesn't work well in practice. I've never seen that happen.

But if there ever comes a time when some AI/LLM can code the next version of Linux or Windows from scratch based on some series of prompts, then all bets are off. Right now it's clearly not there yet, if ever.