
AI coding agents are removing programming language barriers

behnamoh

Counterpoint: AI makes mainstream languages (for which a lot of data exists in the training set) even more popular, because those are the languages it knows best (i.e., has the lowest error rate in), regardless of whether they are typed (in fact, many are dynamic, like Python, JS, Ruby).

The end result? Non-mainstream languages don't get much easier to get into, because the average Joe isn't proficient enough in them to catch the AI's bugs.

People often forget the bitter lesson of machine learning, which plagues transformer models as well.

rm_-rf_slash

Cursor and Claude Code were the asskicking I needed to finally get on the TypeScript bandwagon.

Strong typing drastically reduces hallucinations and wtf bugs that slip through code review.

So it’ll probably be the strongly typed languages that receive the proportionally greatest boost in popularity from LLM-assisted coding.

bluetomcat

It’s good at matching patterns. If you can frame your problem so that it fits an existing pattern, good for you. It can show you good idiomatic code in small snippets. The more unusual and involved your problem is, the less useful it is. It cannot reason about the abstract moving parts in a way the human brain can.

carlmr

>It cannot reason about the abstract moving parts in a way the human brain can.

Just found 3 race conditions in 100 lines of code. From the UTF-8 emojis in the comments, I'm really certain it was AI-generated. The "locking" was just abandoning the work if another thread had already started something; the "locking" mechanism also had TOCTOU (time-of-check to time-of-use) issues; and the "locking" didn't actually lock concurrent access to the resource that needed it.
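For anyone who hasn't hit TOCTOU before, here's a minimal shell sketch of the check-then-act shape being described (hypothetical code, not the original, which wasn't shell; do_work stands in for the critical section):

    # Broken: the existence check and the lock creation are two separate
    # steps, so two processes can both pass the check before either "locks".
    if [[ ! -e /tmp/job.lock ]]; then   # time of check
      touch /tmp/job.lock               # time of use: another process may have won the race
      do_work                           # hypothetical critical section
      rm -f /tmp/job.lock
    fi

    # Atomic alternative: mkdir either creates the lock or fails, in one step.
    if mkdir /tmp/job.lock.d 2>/dev/null; then
      do_work
      rmdir /tmp/job.lock.d
    fi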

bluetomcat

Yes, that was my point. Regardless of the programming language, LLMs are glorified pattern matchers. A React/Node/MongoDB address book application exposes many such patterns and they are internalised by the LLM. Even complex code like a B-tree in C++ forms a pattern because it has been done many times. Ask it to generate some hybrid form of a B-tree with specific requirements, and it will quickly get lost.

practice9

Humans cannot reason about code at scale, unless you add scaffolding like diagrams and maps and …

Things that most teams don’t do or half-ass

samrus

It's not scaffolding if the intelligence itself is adding it. Humans can make their own diagrams and maps to help them; LLM agents need humans to scaffold for them. That's the setup for the bitter lesson.

minebreaker

From what I can tell, LLMs tend to hallucinate more with minor languages than with popular ones. I'm saying this as a Scala dev. I suspect most discussions about LLM usefulness depend on the language being used. Maybe it's useful for JS devs.

noosphr

It's more useful for Python devs, since pretty much all ML code is Python wrappers around C++.

RedNifre

I'm not sure. I have a custom config format, combining a CSV schema with processing instructions, that I use for bank CSVs; Claude was able to generate a perfect config for a new bank based only on one existing config plus its CSV, and the new bank's CSV.

I'm optimistic that most new programming languages will only need a few "real" programmers to write a small amount of example code for the AI training to get started.

0points

> I'm optimistic that most new programming languages will only need a few "real" programmers to write a small amount of example code for the AI training to get started.

CSV is not a complex format.

Why do you reach this conclusion from toying with CSV?

And why do you trust an LLM for economic planning?

rapind

I'm having a good time with Claude and Elm. The correctness seems to help a lot. I mean, it still goes wonky sometimes, but I assume that's the case with everyone.

greener_grass

More people who are not traditionally programmers are now writing code with AI assistance (great!), but this crowd seems unlikely to pick up Clojure, Haskell, OCaml, etc., so I agree this is a development in favor of mainstream languages.

lonelyasacloud

Not sure.

Even for small projects, the optimisation criteria are different if the human's role in the equation shifts from authoring to primarily reviewing.

badgersnake

And they don't understand it. So they get something that kinda half-works, and then they're screwed.

__loam

Imo there's been a big disconnect between people who view code as a work product vs. those who view it as a liability/maintenance burden. AI is going to cause an explosion in the production of code; I'm not sure it's going to have the same effect on long-term maintenance, and I don't think rewriting the whole thing with AI again is a solution.

arrowsmith

Ehhhh, a year ago I'd have agreed with you — LLMs were noticeably worse with Elixir than with bigger langs.

But I'm not noticing that anymore, at least with Elixir. The gap has closed; Claude 4 and Gemini 2.5 both write it excellently.

Otoh, if you wanted to create an entirely new programming language in 2025, you might be shit outta luck.

golergka

Recently I wrote a significant amount of Zig for the first time in my life, thanks to Claude Code. Is Zig a mainstream language yet?

ACCount36

It's not too obscure. It's also about the point where some coding LLMs get weak.

Zig changes a lot. So LLMs reference outdated data, or no data at all, and resort to making a lot of 50% confidence guesses.

0x000xca0xfe

Interesting; my experience learning Zig was that Claude was really bad at the language itself, to the point that it wrote obvious syntax errors and I had to touch up almost everything.

With Rust OTOH Claude feels like a great teacher.

golergka

Syntax and type errors get instantly picked up by the type checker and corrected, and as long as these failures stay in context, the LLM doesn't make the same mistakes again. Not something I ever have to pay attention to.

Pamar

Am I the only one that remembers how Microsoft tried to convince everyone to adopt .Net because this way you could have teams where one member could use J#, another use Fortran.Net (or whatever the name was) and old chaps could still contribute by writing Cobol# and everything would just magically work together and you would quadruple productivity just by leveraging the untapped pool of #Intercal talent out there?

mikert89

Wish I could go back to a time when I believed stuff like this

ChrisMarshallNY

> AI as a Complementary Pairing Partner

That's how I've been using it.

I treat it as a partner that has a "wide and shallow" initial base, but the ability to "dive deep," when I need it. Basically, I do a "shallow triage," to figure out what I want to focus on, then I ask it to "dive deep," on my chosen topic.

I haven't been using it to learn new languages, but I have been using it to learn new concepts and techniques.

Right now, I'm reading up on implementing a WebAuthn backend and passkey integration in my app. It's been instrumental. Coming along great. I hadn't had any previous experience, and it's helping me to learn.

I should note that it has given me wrong examples; notably, it assumed a deprecated dependency version, which I had to debug and figure out a fix for. That was actually a good thing, as it helped me learn the "ins and outs" a bit better.

I'm still not convinced that I'd let AI just go ahead and ship an application from scratch, without any intervention on my part. It often makes mistakes: not serious ones, but ones that would be bad if they shipped.

AstroBen

I've been diving into a Swift codebase (a new language to me) over the last week, and AI has been incredibly helpful in answering my questions and speeding up my learning.

But meaningfully contributing to a complex project without the skills? Not a chance I'd put my name on the contributions it makes. I know how many mistakes these tools make in the languages I know well; they also make them in the ones I don't. Only now I can't review the output.

cultofmetatron

I think AI will push programming languages in the direction of stronger Hindley-Milner-style type checking. Haskell is brutally hard to learn, but with a big enough dataset to learn from, it's the perfect target language for a coding agent: it's high-level, it can be formally verified using well-known algorithms, and a language server could easily be connected to the AI agent via some MCP interface.

Paradigma11

I used an LSP MCP tool with an LLM and was so far a bit underwhelmed. The problem is that LSP is designed for human consumption, and LLMs have different constraints.

LLMs don't use the LSP exploratorily to learn the API; you just give it to them as context or as an MCP tool. LLMs are really good at pattern matching and won't make type errors as long as the type structure and constructs are simple.

If they are not simple, there's no guarantee that the LLM can solve the problem, or that the user can understand the solution.

js8

I wish, but the opposite seems to be coming: Haskell will have less support from coding AIs than mainstream languages do.

I think people who care about FP should think about what is appealing about coding in natural language that is missing from programming in strongly typed FP languages such as Haskell and Lean. (After all, what attracted me to Haskell compared to Python was that type checking is relatively cheap thanks to type inference.)

I believe that natural language in coding has allure because it can express the outcome in a fuzzy manner. I can "handwave" certain parts and the machine fills them in. I further believe that, to make this work well with formal languages, we will need some kind of fuzzy logic in which to specify programs. (I particularly favor certain strong logics based on MTL, but that aside.) Unfortunately, this line of research seems to have been pretty much abandoned in AI in favor of NNs.

tsimionescu

> can be formally verified using well known algos

Is there any large formally verified project written in Haskell? The most well known ones are C (seL4 microkernel) and Coq+OCaml (CompCert verified C compiler).

aetherspawn

Well, Haskell has GADTs, newtype wrappers, and type interfaces, which can be (and often are) used to implement formal verification using metaprogramming, so I get the point he was making.

You pretty much don’t need to plug another language into Haskell to be satisfied about certain conditions if the types are designed correctly.

tsimionescu

Those can all encode only very simplistic semantics of the code. You need either a model checker or dependent types to actually verify any kind of interesting semantics (such as "this sort function returns the numbers in sorted order", or "this monad obeys the monad laws"). GADTs, newtypes and type interfaces are not significantly more powerful than what you'd get in, say, a Java program in terms of encoding semantics into your types.

Now, I believe GHC also has support for dependent types, but the question stands: are there any major Haskell projects that actually use all of these features to formally verify their semantics? Is any part of the Haskell standard library formally verified, for example?

And yes, I do understand that type checking is a kind of formal verification, so in some sense even a C program is "formally verified", since the compiler ensures that you can't assign a float to an int. But I'm specifically asking about formal verification of higher level semantics - sorting, monad laws, proving some tree is balanced, etc.
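To make that concrete, here is a minimal Lean sketch of the sorted-order property mentioned above (illustrative only; Sorted and mySort are made-up names). It is a proposition about the function's output values at runtime, which is exactly the kind of statement that GADTs and newtypes alone can't express:

    -- A proposition stating that a list of naturals is in sorted order.
    def Sorted : List Nat → Prop
      | [] => True
      | [_] => True
      | a :: b :: rest => a ≤ b ∧ Sorted (b :: rest)

    -- "Formally verified" then means proving, for a hypothetical mySort:
    --   theorem mySort_sorts (l : List Nat) : Sorted (mySort l)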

seanmcdirmid

We might see wider adoption of dependently typed languages like Agda. But the limited corpus might become the limiting factor; I'm not sure how well knowledge transfers as the languages get more different.

ipnon

It's getting cheaper by the day to generate corpora, and Agda has the advantage of being verifiable, like Lean. So you can generate large numbers of programs and feed these back into the model. I think this is a major reason why we're seeing remarkable improvements in formal sciences, like the recent IMO golds, and yet LLMs are still struggling to generate aesthetically pleasing and consistent CSS. Imagine a high schooler who can win an IMO gold medal but can't center a div!

andrewflnr

It seems like "generating" a corpus in that situation is more of a search process, guided by prompts and, more critically, the type checker, than a straight generation process, right? You need some base reality or you'll still just have garbage in, garbage out.

iparaskev

> The real breakthrough came when I stopped thinking of AI as a code generator and started treating it as a pairing partner with complementary skills.

I think this is the most important thing mentioned in the post. In order for the AI to actually help you with languages you don't know, you have to question its solutions. I have noticed that asking questions like "why are we doing it like this?" and "what will happen in scenario X, Y, Z?" really helps.

solids

My experience is that each question I ask or point I make produces an answer that validates my thinking. After two or three iterations in a row in this style I end up distrusting everything.

iparaskev

This is a good point. Lately I have been experimenting with phrasing the question in a way that makes it believe I prefer what I am suggesting, when in truth I don't.

For example:

- I implement something.
- Then I ask it to review it and suggest alternatives, where it will likely say my solution is the best.
- Then I say something like "Isn't the other approach better for __reason__?", where the approach might not even be something it suggested.

And it seems that sometimes it gives me some valid points.

samrus

This is very true. Constant insecurity for me. One thing that helps a little is asking it to search for sources to back up what it's saying. But Claude has hallucinated those as well. Perplexity seems to be good at staying true to sources, but idk how good it is at coding itself.

tietjens

yes, this. biggest problem and danger in my daily work with llms. my entire working method with them is shaped around this problem. instead of asking it to give me answers or solutions, i give it a line of thought or logical chain, and then ask it to continue down the path and force it to keep explaining the reasoning while i interject, continuing to introduce uncertainty. suspicion is one of the most valuable things i need to make any progress. in the end it's a lot of work and very much reading and reasoning.

danielbln

In addition, I frequently tell it to ask clarifying questions. Those often reveal gaps in understanding, or just plain misunderstandings, that you can then nip in the bud before it has generated a million tokens.

Maro

This is great, and I think this is the right way to use AI: treat it as a pair-programming partner and learn from it. As the human learns and becomes better at both programming and the domain in question (e.g. a Ruby JIT compiler), the role of the AI partner shifts: at the beginning it's explaining basic concepts and generating/validating smaller snippets of code; in later stages the conversations focus on advanced topics, and the AI is used to generate larger portions of code, which the human is now more confident reviewing to spot bugs.

thefz

I wanted to test Gemini's code generation so I asked it for a bash script iterating through an array of directory names and executing a command for each one.

It got it wrong. The command was generated outside of the for loop and never updated inside it, effectively making the script useless.

Luckily I know bash, so I spotted it immediately. But I wonder how AI is "removing programming language barriers" when someone who does not know the output language cannot spot even such a glaring issue.
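The failure mode described is roughly this shape (a hypothetical reconstruction, not Gemini's actual output):

    # Broken shape: the command string is expanded once, before the loop,
    # so every iteration runs the same stale command.
    dirs=("alpha" "beta" "gamma")
    cmd="ls -la $dir"            # $dir is empty here; expanded outside the for
    for dir in "${dirs[@]}"; do
      eval "$cmd"                # identical command on every iteration
    done

    # Correct: expand the command inside the loop body.
    for dir in "${dirs[@]}"; do
      ls -la "$dir"
    done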

jiggawatts

I think this is more of a criticism of bash than of Gemini.

Also, I reject out of hand any criticism of an AI that specifies only the brand ("ChatGPT") and not the specific model version, settings, etc...

So, for example, Gemini 2.5 Pro (Jan 2025) with temperature 0.15 prompted with: "Write a short, idiomatic bash script that takes a file containing a list of directory names, changes to each one, runs a command, and then returns to the original working directory."

Spits out this:

    #!/bin/bash

    # A script to run a command in a list of directories.

    # Exit if the wrong number of arguments are provided.
    if [[ $# -lt 2 ]]; then
      echo "Usage: $0 <file_with_dirs> <command> [args...]" >&2
      exit 1
    fi

    DIR_FILE="$1"
    shift # The rest of the arguments ("$@") are now the command.

    # Ensure the directory list file exists and is readable.
    if [[ ! -r "$DIR_FILE" ]]; then
      echo "Error: Cannot read file '$DIR_FILE'" >&2
      exit 1
    fi

    # Read the file line by line.
    while IFS= read -r dir; do
      # Skip empty lines or non-existent directories.
      if [[ -n "$dir" && -d "$dir" ]]; then
        echo "==> Entering '$dir'..."
        # Run command in a subshell to keep the 'cd' local.
        # '&&' ensures the command only runs if 'cd' succeeds.
        (cd "$dir" && "$@")
        echo "<== Finished with '$dir'."
      else
        echo "--> Skipping invalid directory: '$dir'" >&2
      fi
    done < "$DIR_FILE"

    echo "All done."
That worked for me, but I was testing it in WSL 2 where I got a gibberish error... which was because I edited the file in Windows Notepad and the line endings were confusing bash. Gemini helpfully told me how to fix that too!

Something that I found amusing, and which again is a criticism of bash rather than of the AI, is that this fails to process the last line if it isn't terminated with a \n character.
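(The standard workaround, for what it's worth: read returns non-zero at EOF but still fills its variable, so treat a non-empty buffer as one final line.)

    # Drop-in replacement for the loop header in the script above:
    # process a trailing line even when it lacks the final \n.
    while IFS= read -r dir || [[ -n "$dir" ]]; do
      echo "Read: '$dir'"   # same body as before would go here
    done < "$DIR_FILE"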

PS: This is almost a one-liner in PowerShell, and works with or without the final terminator character:

    Push-Location
    Get-Content dirs.txt | cd -PassThru | Foreach-Object { echo "Hello from: $pwd" }
    Pop-Location
Gemini also helped me code-golf this down to:

    pushd;gc dirs.txt|%{cd $_;"Hello from: $pwd"};popd

thefz

> I think this is more of a criticism of bash than of Gemini.

I can write correct bash; Gemini in this instance could not.

> Also, I out-of-hand reject any criticism of an AI that specifies only the brand ("ChatGPT") and not the specific model version

Honestly I don't care, I opened the browser and typed my query just like anyone would.

> PS: This is almost a one-liner in PowerShell, and

Wonder how this is related to "I asked Gemini to generate a script and it was severely bugged"

jiggawatts

> typed my query just like anyone would.

Yes, well... are you "anyone", or an IT professional? Are you using the computer like my mother, or like someone who knows how LLMs work?

This is a very substantial difference. There's just no way "anyone" is going to get useful code out of LLMs as they are now, in most circumstances.

However, I've seen IT professionals (not necessarily developers!) get a lot of utility out of them, but only after switching to specific models in "API playgrounds" or some similarly controlled environment.

oneshtein

  for dir in $(cat dirs.txt); do ( cd "$dir"; echo "Hello from $(pwd)" ); done

lucianbr

Unbelievable how long and convoluted the other answer is, and that it is presented as proof that the AI provided a good solution.

0points

Hyperbole: AI isn't even trained on most programming languages.

Compare it yourself: let it generate JS/Python or something it trained a lot on, versus something more esoteric, like Brainfuck.

And even in a common language, you'll hit brick walls when the LLM confuses different versions of the library you are using, or whatever.

I had issues getting AI-generated Rust code to even compile.

It's simple: the less mainstream the language, the less exposure it has in the training set, and the worse the output.

karmasimida

AI has basically removed my fear with regard to programming languages.

It almost never misses when explaining how certain syntax works.

dearilos

AI coding agents help you solve the problem faster.

AI code review helps you catch issues you've forgotten about and eliminates the repetitive work.

These tools are helping developers create quality software, not replacing them.

sunrunner

What about the part of programming and software development that relies on programmatic/systemic thinking? How much is the language syntax itself part of any 'program' solution?