LLM code generation may lead to an erosion of trust
125 comments · June 26, 2025

dirkc
I have a friend who always says "innovation happens at the speed of trust". Ever since GPT-3, that quote has come to mind over and over.
Verification has a high cost and trust is the main way to lower that cost. I don't see how one can build trust in LLMs. While they are extremely articulate in both code and natural language, they will also happily go down fractal rabbit holes and show behavior I would consider malicious in a person.
acedTrex
Author here: I quite like that quote. A very succinct way of saying what took me a few paragraphs.
This new world of having to verify every single thing at all points is quite exhausting and frankly pretty slow.
stavros
I don't understand the premise. If I trust someone to write good code, I learned to trust them because their code works well, not because I have a theory of mind for them that "produces good code" a priori.
If someone uses an LLM and produces bug-free code, I'll trust them. If someone uses an LLM and produces buggy code, I won't trust them. How is this different from when they were only using their brain to produce the code?
acedTrex
Author here:
Essentially the premise is that in medium-trust environments, like very large teams, or low-trust environments, like an open source project,
LLMs make it very difficult to make an immediate snap judgement about the quality of the dev who submitted the patch based solely on the code itself.
In the absence of being able to ascertain the type of person you are dealing with, you have to fall back to "no trust" and review everything with a very fine-tooth comb. Essentially there are no longer any safe "review shortcuts," and that can be painful in places that relied on those markers to grease the wheels, so to speak.
Obviously if you are in an existing competent high trust team then this problem does not apply and most likely seems completely foreign as a concept.
lxgr
> LLMs make it very difficult to make an immediate snap judgement about the quality [...]
That's the core of the issue. It's time to say goodbye to heuristics like "the blog post is written in eloquent, grammatical English, hence the point its author is trying to make must be true" or "the code is idiomatic and following all code styles, hence it must be modeling the world with high fidelity".
Maybe that's not the worst thing in the world. I feel like it often made people complacent.
furyofantares
I think you're unfair to the heuristics people use in your framing here.
You said "hence the point its author is trying to make must be true" and "hence it must be modeling the world with high fidelity".
But it's more like "hence the author is likely competent and likely put in a reasonable effort."
When those assumptions hold, putting in a very deep review is less likely to pay off. Maybe you are right that people have been too complacent to begin with, I don't know, but I don't think you've framed it fairly.
acedTrex
> Maybe that's not the worst thing in the world. I feel like it often made people complacent.
For sure, in some ways perhaps reverting to a low trust environment might improve quality in that it now forces harsher/more in depth reviews.
That however doesn't make the requirement less exhausting for people previously relying heavily on those markers to speed things up.
Will be very interesting to see how the industry standardizes around this. Right now it's a bit of the wild west. Maybe people in ten years will look back at this post and think, "What do you mean you judged people based on the code itself? That's ridiculous."
sim7c00
its about the quality of the code, not the quality of the dev. you might think it's related, but it's not.
a dev can write a piece of good code and a piece of bad code. so per code, review the code, not the dev!
haswell
> its about the quality of the code, not the quality of the dev. you might think it's related, but it's not.
I could not disagree more. The quality of the dev will always matter, and has as much to do with what code makes it into a project as the LLM that generated it.
An experienced dev will have more finely tuned evaluation skills and will accept code from an LLM accordingly.
An inexperienced or “low quality” dev may not even know what the ideal/correct solution looks like, and may be submitting code that they do not fully understand. This is especially tricky because they may still end up submitting high quality code, but not because they were capable of evaluating it as such.
You could make the argument that it shouldn’t matter who submits the code if the code is evaluated purely on its quality/correctness, but I’ve never worked in a team that doesn’t account for who the person is behind the code. If it’s the grizzled veteran known for rarely making mistakes, the review might look a bit different from a review of the intern’s code.
acedTrex
> you might think it's related, but it's not.
In my experience they very much are related. High quality devs are far more likely to output high quality working code. They test, they validate, they think, ultimately they care.
In the case that you are reviewing a patch from someone you have limited experience with, it was previously feasible to infer the quality of the dev from the patch itself and the surrounding context in which it was submitted.
LLMs make that judgement far far more difficult and when you can not make a snap judgement you have to revert your review style to very low trust in depth review.
No more greasing the wheels to expedite a process.
alganet
> I learned to trust them because their code works well
There's so much more than "works well". There are many cues that exist close to code, but are not code:
I trust more if the contributor explains their change well.
I trust more if the contributor did great things in the past.
I trust more if the contributor manages granularity well (reasonable commits, not huge changes).
I trust more if the contributor picks the right problems to work on (fixing bugs before adding new features, etc).
I trust more if the contributor proves being able to maintain existing code, not just add on top of it.
I trust more if the contributor makes regular contributions.
And so on...
acedTrex
Author here:
Spot on, there are so many little things that we as humans use as subtle verification steps to decide how much scrutiny various things require. LLMs are not necessarily the death of that concept but they do make it far far harder.
moffkalast
It's easy to get overconfident and not test the LLM's code enough when it has worked fine a handful of times in a row, and then you miss something.
The problem is often really one of miscommunication: the task may be clear to the person working on it, but with frequent context resets it's hard to make sure the LLM also knows what the whole picture is, and they tend to make dumb assumptions when there's ambiguity.
The thing that 4o does with deep research, where it asks for additional info before it does anything, should be standard for any code generation too, tbh; it would prevent a mountain of issues.
stavros
Sure, but you're still responsible for the quality of the code you commit, LLM or no.
acedTrex
In an ideal world you would think everyone sees it this way. But we are starting to see an uptick in "I don't know, the LLM said to do that."
As if that were somehow an exonerating sentence.
moffkalast
Of course you are, but it's sort of like how people are responsible for their Tesla driving on autopilot, which then suddenly swerves into a wall and disengages two seconds before impact. The process forces you into mistakes you wouldn't normally ever make or even consider a possibility.
taneq
If you have a long-standing, effective heuristic that “people with excellent, professional writing are more accurate and reliable than people with sloppy spelling and punctuation,” then when a semi-infinite group of ‘people’ appears, writing well-presented, convincingly worded articles that are nonetheless riddled with misinformation, hidden logical flaws, and inconsistencies, you’re gonna end up trusting everyone a lot less.
It’s like if someone started bricking up tunnel entrances and painting ultra realistic versions of the classic Road Runner tunnel painting on them, all over the place. You’d have to stop and poke every underpass with a stick just to be sure.
stavros
Sure, your heuristic no longer works, and that's a bit inconvenient. We'll just find new ones.
oasisaimlessly
"A bit inconvenient" might be the understatement of the year. If information requires say, 2x the time to validate, the utility of the internet is halved.
sebmellen
Yeah, now you need to be able to demonstrate verbal fluency. The problem is, that inherently means a loss of “trusted anonymous” communication, which is particularly damaging to the fiber of the internet.
mexicocitinluez
It's not.
What you're seeing now is people who once thought of and proclaimed these tools as useless now having to start walking back their claims with stuff like this.
It does amaze me that the people who don't use these tools seem to have the most to say about them.
acedTrex
Author here:
For what it's worth I do actually use the tools albeit incredibly intentionally and sparingly.
I see quite a few workflows and tasks they can add value to, mostly outside the hot path of actual code generation, but still quite enticing. So much so, in fact, that I'm working on my own local agentic tool with some self-hosted ollama models. I like to think that I am at least somewhat in the know on the capabilities and failure points of the latest LLM tooling.
That however doesn't change my thoughts on trying to ascertain whether code submitted to me deserves a full in-depth review or whether I can maybe cut a few corners here and there.
mexicocitinluez
> That however doesn't change my thoughts on trying to ascertain whether code submitted to me deserves a full in-depth review or whether I can maybe cut a few corners here and there.
How would you even know? Seriously, if I use Chatgpt to generate a one-off function for a feature I'm working on that searches all classes for one that inherits a specific interface and attribute, are you saying you'd be able to spot the difference?
And what does it even matter if it works?
What if I use Bolt to generate a quick screen for a PoC? Or use Claude to create a print preview with CSS of a 30-page Medicare form? Or convert a component's styles from MUI to Tailwind? What if all these things are correct?
This whole "OSS repos will ban LLM-generated code" thing is a bit absurd.
> For what it's worth I do actually use the tools albeit incredibly intentionally and sparingly.
How sparingly? Enough to see how it's constantly improving?
somewhereoutth
Because when people use LLMs, they are getting the tool to do the work for them, not using the tool to do the work. LLMs are not calculators, nor are they the internet.
A good rule of thumb is to simply reject any work that has had involvement of an LLM, and ignore any communication written by an LLM (even for EFL speakers, I'd much rather have your "bad" English than whatever ChatGPT says for you).
I suspect that as the serious problems with LLMs become ever more apparent, this will become standard policy across the board. Certainly I hope so.
flir
> A good rule of thumb is to simply reject any work that has had involvement of an LLM,
How are you going to know?
stavros
Well, no, a good rule of thumb is to expect people to write good code, no matter how they do it. Why would you mandate what tool they can use to do it?
somewhereoutth
Because it pertains to the quality of the output - I can't validate every line of code, or test every edge case. So if I need a certain level of quality, I have to verify the process of producing it.
This is standard for any activity where accuracy / safety is paramount - you validate the process. Hence things like maintenance logs for airplanes.
sebmellen
You’re being unfairly downvoted. There is a plague of well-groomed incoherency in half of the business emails I receive today. You can often tell that the author never wrestled with the text to figure out what they wanted to say, and is instead acting as a kind of stochastic parrot.
This is okay for platitudes, but for emails that really matter, having this messy watercolor kind of writing totally destroys the clarity of the text and confuses everyone.
To your point, I’ve asked everyone on my team to refrain from writing words (not code) with ChatGPT or other tools, because the LLM invariably leads to more complicated output than the author just badly, but authentically, expressing themselves in the text.
acedTrex
Yep, I have come to really dislike LLMs for documentation, as it just reads wrong to me, and I find it so often misses the point entirely. There is so much nuance tied up in documentation, and much of it is in what is NOT said as much as what is said.
The LLMs struggle with both but REALLY struggle with figuring out what NOT to say.
mexicocitinluez
>Because when people use LLMs, they are getting the tool to do the work for them, not using the tool to do the work.
What? How on god's green earth could you even pretend to know how all people are using these tools?
> LLMs are not calculators, nor are they the internet.
Umm, okay? How does that make them less useful?
I'm going to give you a concrete example of something I just did and let you try and do whatever mental gymnastics you have to do to tell me it wasn't useful:
Medicare requires all new patients receiving home health treatment go through a 100+ question long form. This form changes yearly, and it's my job to implement the form into our existing EMR. Well, part of that is creating a printable version. Guess what I did? I uploaded the entire pdf to Claude and asked it to create a print-friendly template using Cottle as the templating language in C#. It generated the 30 page print preview in a minute. And it took me about 10 more minutes to clean up.
> I suspect that as the serious problems with LLMs become ever more apparent, this will become standard policy across the board. Certainly I hope so.
The irony is that they're getting better by the day. That's not to say people don't use them for the wrong applications, but the idea that this tech is going to be banned is absurd.
> A good rule of thumb is to simply reject any work that has had involvement of an LLM
Do you have any idea how ridiculous this sounds to people who actually use the tools? Are you going to be able to hunt down the single React component in which I asked it to convert the MUI styles to tailwind? How could you possibly know? You can't.
axegon_
That is already the case for me. The number of times I've read "apologies for the oversight, you are absolutely correct" is staggering: 8 or 9 out of 10 times. Meanwhile I constantly see people mindlessly copy-pasting LLM-generated code and subsequently getting furious when it doesn't do what they expected it to do. Which, btw, is the better option: I'd rather have something obviously broken as opposed to something seemingly working.
devjab
Are you using the LLMs through a browser chatbot? Because the AI-agents we use with direct code-access aren't very chatty. I'd also argue that they are more capable than a lot of junior programmers, at least around here. We're almost at a point where you can feed the agents short, specific tasks, and they will perform them well enough to not really require anything outside of a code review.
That being said, the prediction engine still can't do any real engineering. If you don't specifically task them with using things like Python generators, you're very likely to end up with a piece of code that eats up a gazillion bytes of memory. Which unfortunately doesn't set them apart from a lot of Python programmers I know, but it is an example of how the LLMs are exactly as bad as you mention. On the positive side, it helps with people actually writing the specification tasks in more detail than just "add feature".
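The generator point above can be sketched concretely (a hypothetical illustration; the file-summing task and function names are invented, not from any real task we ran):

```python
# Hypothetical illustration: summing line lengths of a large file.
# The eager version materializes every line at once, which is the kind
# of memory-hungry code an unguided LLM tends to emit:
def total_eager(path):
    lines = open(path).readlines()  # whole file held in memory
    return sum(len(line) for line in lines)

# The generator-friendly version streams one line at a time, so peak
# memory stays flat no matter how big the file is:
def total_streaming(path):
    with open(path) as f:
        return sum(len(line) for line in f)
```

Both return the same number; only the peak memory differs, and that distinction is exactly the sort of thing that has to be spelled out in the task description.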
Where AI-agents are the most useful for us is with legacy code that nobody prioritises. We have a data extractor which was written in the previous millennium. It basically uses around two hundred hard-coded coordinates to extract data from a specific type of document which arrives by fax. It's worked for 30-ish years because the documents haven't changed... but they recently did, and it took Copilot like 30 seconds to correct the coordinates. Something that would've likely taken a human a full day of excruciating boredom.
I have no idea how our industry expect anyone to become experts in the age of vibe coding though.
furyofantares
> Because the AI-agents we use with direct code-access aren't very chatty.
Every time I tell claude code something it did is wrong, or might be wrong, or even just ask a leading question about a potential bug it just wrote, it leads with "You're absolutely correct!" before even invoking any tools.
Maybe you've just become used to ignoring this. I mostly ignore it but it is a bit annoying when I'm trying to use the agent to help me figure out if the code it wrote is correct, so I ask it some question it should be capable of helping with and it leads with "you're absolutely correct".
I didn't make a proposition that can be correct or not, and it didn't do any work yet to investigate my question. It feels like it has poisoned its own context by leading with this.
teeray
> Because the AI-agents we use with direct code-access aren't very chatty.
So they’re even more confident in their wrongness
autobodie
In my experience, LLMs are extremely inclined to modify code just to pass tests instead of meeting requirements.
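A minimal sketch of that failure mode (hypothetical; the requirement and names are invented for illustration): the requirement is "strip ALL whitespace," but the visible test only checks spaces, so a model optimizing for green tests can stop here:

```python
# Hypothetical example of code shaped to pass the test rather than
# meet the requirement ("strip ALL whitespace"):
def strip_whitespace(s: str) -> str:
    return s.replace(" ", "")  # tabs and newlines silently survive

def test_strip_whitespace():
    # The only test in the suite; it passes, so the gap goes unnoticed.
    assert strip_whitespace("a b c") == "abc"
```

A human reviewer who knows the requirement catches this instantly; a loop that only watches the test runner does not.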
mexicocitinluez
> 8 or 9 out of 10 times.
No they don't. This is 100% a made-up statistic.
geor9e
They changed the headline to "Yes, I will judge you for using AI..." so I feel like I got the whole story without reading it.
cheriot
> promises that the contributed code is not the product of an LLM but rather original and understood completely.
> require them to be majority hand written.
We should specify the outcome, not the process. Expecting the contributor to understand the patch is a good idea.
> Juniors may be encouraged/required to elide LLM-assisted tooling for a period of time during their onboarding.
This is a terrible idea. Onboarding is a lot of random environment setup hitches that LLMs are often really good at. It's also getting up to speed on code and docs and I've got some great text search/summarizing tools to share.
namenotrequired
> LLMs … approximate correctness for varying amounts of time. Once that time runs out there is a sharp drop off in model accuracy, it simply cannot continue to offer you an output that even approximates something workable. I have taken to calling this phenomenon the "AI Cliff," as it is very sharp and very sudden
I’ve never heard of this cliff before. Has anyone else experienced this?
gwd
I experience it pretty regularly -- once the complexity of the code passes a certain threshold, the LLM can't keep everything in its head and starts thrashing around. Part of my job working with the LLM is to manage the complexity it sees.
And one of the things with the current generation is that they tend to make things more complex over time, rather than less. It's always me prompting the LLM to refactor things to make them simpler, or doing the refactoring myself once it's gotten too complex for the LLM to deal with.
So at least with the current generation of LLMs, it seems rather inevitable that if you just "give LLMs their head" and let them do what they want, eventually they'll create a giant Rube Goldberg mess that you'll have to try to clean up.
ETA: And to the point of the article -- if you're an old salt, you'll be able to recognize when the LLM is taking you out to sea early, and be able to navigate your way back into shallower waters even if you go out a bit too far. If you're a new hand, you'll be out of your depth and lost at sea before you know it's happened.
Workaccount2
I call it context rot. As the context fills up, the quality of the output erodes with it. The rot gets even worse, or progresses faster, the more spurious or tangential discussion is in context.
This can also be made much worse by thinking models, as their CoT is all in context, and if their thoughts really wander it just plants seeds of poison that feed the rot. I really wish they would implement some form of context pruning, so you can nip irrelevant context when it forms.
In the meantime, I make summaries and carry them to a fresh instance when I notice the rot forming.
windward
I've seen it referred to as 'context drunk'.
Imagine that you have your input to the context, 10000 tokens that are 99% correct. Each time the LLM replies it adds 1000 tokens that are 90% correct.
After some back-and-forth of you correcting the LLM, its context window is mostly its own backwash^Woutput. Worse, the error compounds because the 90% that is correct is just correct extrapolation of an argument about incorrect code, and because the LLM ranks more recent tokens as more important.
The same problem also shows up in prose.
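The arithmetic above can be turned into a toy model (numbers assumed from this comment; it tracks only the blended fraction of correct tokens in the window, ignoring the compounding effect described):

```python
# Toy "context rot" model: start with 10,000 tokens at 99% correct;
# each reply appends 1,000 tokens at 90% correct. Returns the blended
# fraction of correct tokens after a given number of replies.
def correct_fraction(turns, base_tokens=10_000, base_acc=0.99,
                     reply_tokens=1_000, reply_acc=0.90):
    correct = base_tokens * base_acc
    total = base_tokens
    for _ in range(turns):
        correct += reply_tokens * reply_acc
        total += reply_tokens
    return correct / total

# After 20 replies the window is two-thirds model output and the
# blended accuracy has slid from 99% toward the model's own 90%.
```

Even this optimistic version, with no compounding, shows the window's quality converging on the model's per-reply accuracy as its own output dominates.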
bubblyworld
I've only experienced this while vibe coding through chat interfaces, i.e. in the complete absence of feedback loops. This is much less of a problem with agentic tools like claude code/codex/gemini cli, where they manage their own context windows and can run your dev tooling to sanity check themselves as they go.
Paradigma11
If the context gets too big or otherwise poisoned, you have to restart the chat/agent. A bit like Windows of old. This trains you to document the current state of your work so the new agent can get up to speed.
Syzygies
One can find opinions that Claude Code Opus 4 is worth the monthly $200 I pay for Anthropic's Max plan. Opus 4 is smarter; one either can't afford to use it, or can't afford not to use it. I'm in the latter group.
One feature others have noted is that the Opus 4 context buffer rarely "wears out" in a work session. It can, and one needs to recognize this and start over. With other agents, it was my routine experience that I'd be lucky to get an hour before having to restart. A reliable way to induce this "cliff" is to let the AI take on a much-too-hard problem in one step, then flail helplessly trying to fix its mess. Vibe-coding an unsuitable problem. One can even kill Opus 4 this way, but that's no way to run a race horse.
Some "persistence of memory" harness is as important as one's testing harness for effective AI coding. With the right care - having AI edit its own context prompts for orienting new sessions - this all matters less. AI is spectacularly bad at breaking problems into small steps without our guidance, and small steps done right can be different sessions. I'll regularly start new sessions when I have a hunch that this will get me better focus for the next step. So the cliff isn't so important. But Opus 4 is smarter in other ways.
suddenlybananas
>can't afford not to use it. I'm in the latter group.
People love to justify big expenses as necessary.
Kuinox
I'm doing my own procedurally generated benchmark.
I can make the problem input as big as I want.
Each LLM has a different threshold for each problem; when it's crossed, the performance of the LLM collapses.
sandspar
I'm not sure. Is he talking about context poisoning?
acedTrex
Hi everyone, author here.
Sorry about the JS stuff I wrote this while also fooling around with alpine.js for fun. I never expected it to make it to HN. I'll get a static version up and running.
Happy to answer any questions or hear other thoughts.
Edit: https://static.jaysthoughts.com/
Static version here with slightly wonky formatting, sorry for the hassle.
Edit2: Should work well on mobile now, added a quick breakpoint.
konaraddi
Given the topic of your post, and high pagespeed results, I think >99% of your intended audience can already read the original. No need to apologize or please HN users.
davidthewatson
Well said. The death of trust in software is a well-worn path, from the money that funds and founds it to the design and engineering that builds it - at least in the two-guys-in-a-garage startup work I was involved in for decades. HITL is key. Yet even with a human in the loop, you can wind up at Therac-25. That's exactly where hybrid closed-loop insulin pumps are right now. Autonomy and insulin don't mix well. If there weren't a moat of attorneys keeping the signal/noise ratio down, we'd already realize that at scale - like the PR teams at three-letter technical universities, designed to protect parents from the exploding pressure inside the halls there.
satisfice
LLMs make bad work—of any kind—look like plausibly good work. That’s why it is rational to automatically discount the products of anyone who has used AI.
I once had a member of my extended family who turned out to be a con artist. After she was caught, I cut off contact, saying I didn’t know her. She said “I am the same person you’ve known for ten years.” And I replied “I suppose so. And now I realized I have never known who that is, and that I never can know.”
We all assume the people in our lives are not actively trying to hurt us. When that trust breaks, it breaks hard.
No one who uses AI can claim “this is my work.” I don’t know that it is your work.
No one who uses AI can claim that it is good work, unless they thoroughly understand it, which they probably don’t.
A great many students of mine have claimed to have read and understood articles I have written, yet I discovered they didn’t. What if I were an AI and they received my work and put their name on it as author? They’d be unable to explain, defend, or follow up on anything.
This kind of problem is not new to AI. But it has become ten times worse.
pu_pe
> While the industry leaping abstractions that came before focused on removing complexity, they did so with the fundamental assertion that the abstraction they created was correct. That is not to say they were perfect, or that they never caused bugs or failures. But those events were a failure of the given implementation, a departure from what the abstraction was SUPPOSED to do; every mistake, once patched, led to a safer, more robust system. LLMs by their very fundamental design are a probabilistic prediction engine; they merely approximate correctness for varying amounts of time.
I think what the author misses here is that imperfect, probabilistic agents can build reliable, deterministic systems. No one would trust a garbage collection tool based on how reliable the author was, but rather if it proves it can do what it intends to do after extensive testing.
I can certainly see an erosion of trust in the future, with the result being that test-driven development gains even more momentum. Don't trust, and verify.
lbalazscs
It's naive to hope that automatic tests will find all problems. There are several types of problems that are hard to detect automatically: concurrency problems, resource management errors, security vulnerabilities, etc.
An even more important question: who tests the tests themselves? In traditional development, every piece of logic is implemented twice: once in the code and once in the tests. The tests check the code, and in turn, the code implicitly checks the tests. It's quite common to find that a bug was actually in the tests, not the app code. You can't just blindly trust the tests and wait until your agent finds a way to replicate a test bug in the code.
acedTrex
> I think what the author misses here is that imperfect, probabilistic agents can build reliable, deterministic systems. No one would trust a garbage collection tool based on how reliable the author was, but rather if it proves it can do what it intends to do after extensive testing.
> but rather if it proves it can do what it intends to do after extensive testing.
Author here: Here I was less talking about the effectiveness of the output of a given tool and more so about the tool itself.
To take your garbage collection example: sure, perhaps an agentic system at some point can spin some stuff up and beat it into submission with test harnesses, bug fixes, etc.
But imagine you used the model AS the garbage collector/tool, in that, say, on every sweep you simply dumped the memory of the program into the model and told it to release the unneeded blocks. You would NEVER be able to trust that the model itself correctly identifies the correct memory blocks, and no amount of "patching" or "fine tuning" would ever get you there.
With other historical abstractions like, say, the JVM, if the deterministic output (in this case the assembly the JIT emits) is incorrect, that bug is patched and the abstraction will never have that same fault again. Not so with LLMs.
To me that distinction is very important when trying to point out previous developer tooling that changed the entire nature of the industry. It's not to say I do not think LLMs will have a profound impact on the way things work in the future. But I do think we are in completely uncharted territory with limited historical precedence to guide us.
beau_g
The article opens with a statement saying the author isn't going to reword what others are writing, but the article reads as that and only that.
That said, I do think it would be nice for people to note in pull requests which files in the diff contain AI-generated code. It's still a good idea to look at LLM-generated code vs human code with a bit of a different lens; the mistakes each makes are often a bit different in flavor, and it would save me time in a review to know which is which. Has anyone seen this at a larger org, and is it of value to you as a reviewer? Maybe some toolsets can already do this automatically (I suppose all these companies that report the % of code that is LLM-generated must have one, if they actually have these granular metrics?)
acedTrex
Author here:
> The article opens with a statement saying the author isn't going to reword what others are writing, but the article reads as that and only that.
Hmm, I was just saying I hadn't seen much literature or discussion on trust dynamics in teams with LLMs. Maybe I'm just in the wrong spaces for such discussions but I haven't really come across it.
https://archive.is/5I9sB
(Works on older browsers and doesn't require JavaScript except to get past CloudSnare).