Vibe Coding: Empowering and Imprisoning
8 comments
·December 2, 2025
atrettel
I agree with the notion that LLMs may just end up repeating coding mistakes of the past because they are statistically likely mistakes.
I'm reminded of an old quote by Dijkstra about Fortran [1]: "In the good old days physicists repeated each other's experiments, just to be sure. Today they stick to FORTRAN, so that they can share each other's programs, bugs included."
I've encountered that same problem in some older scientific codes (both C and Fortran). After a while, the bugs effectively become features because people just don't know to question them anymore. To me, this is why it is important to understand the code thoroughly enough to question what is going on.
[1] https://www.cs.utexas.edu/~EWD/transcriptions/EWD04xx/EWD498...
FarmerPotato
But how much of this article was written by an LLM? Cliches, listicles, fluffy abstractions abandoned and not developed...
Was there anything original in it? I'd like to ask this article, what was your knowledge cut-off date?
Aperocky
There is too much of both fear and optimism about what is essentially a better compiler and Google.
Eventually we will gravitate back to square one. Business people are not going to be writing COBOL or Visual Basic or any of the long list of languages (yes, this now includes natural ones, like English) that claim to be so easy that a manager could write them. And Googling/prompting remains a skill that surprisingly few have truly mastered.
Of course all the venture capital believes that soon we'll be at AGI, but like the internet bubble of 2001, we could awkwardly stay at this stage for quite a long time.
crinklewrinkle
> A lot is still very uncertain, but I come back to one key question that helps me frame the discussion of what’s next: What’s the most radical app that we could build? And which tools will enable me to build it? Even if all we can do is start having a more complicated conversation about what we’re doing when we’re vibe coding, we’ll be making progress towards a more empowered future.
Why not ask ChatGPT?
wilg
The entire premise which he summarizes as:
> A huge reason VCs and tech tycoons put billions into funding LLMs was so they could undermine coders and depress wages
is pure speculation: totally unsupported, almost certainly untrue, and making very little sense given the way LLMs, and ChatGPT in particular, came about. Every time I read something from Anil Dash it seems like it's this absolutely braindead sort of "analysis".
aaron_m04
Why do you say it's almost certainly untrue? Capital is well known for trying to suppress wages.
I was working on a new project and wanted to try out a new frontend framework (data-star.dev). What you quickly find out is that LLMs are really tuned toward React, and their frontend performance drops considerably if you aren't using it. Even after pasting the entire documentation into context and giving specific examples close to what I wanted, SOTA models still hallucinated attributes/APIs. And it isn't even that you have to use Framework X, it's that you need to use X as of the training cutoff date.
I think this is one of the reasons we don't see huge productivity gains. Most F500 companies have pretty gnarly proprietary codebases which are going to be out-of-distribution. Context engineering helps, but you still don't get near in-distribution performance. It's probably not unsolvable, but it's a pretty big problem ATM.