
A guide to Gen AI / LLM vibecoding for expert programmers

GuB-42

I think the article could be more accurately titled "A Guide to Gen AI / LLM Vibecoding for Programmers who hate their job"

To me, someone who actually loves programming, it makes vibe coding look like hell.

> The workflow of vibe coding is the same as managing scrums or helping a student through a research thesis

Which programmer wants that?! Just hearing the word "scrum" makes me want to run away and I know I am not alone here. Helping a student through a research thesis doesn't sound so bad, because as a human, helping people feels good. But here, there is no student, no human feelings, and you are not even helping the AI become better in the long term.

uncircle

> I think the article could be more accurately titled "A Guide to Gen AI / LLM Vibecoding for Programmers who hate their job"

Given how “vibe coding” is all about clearly explaining requirements and defining context, it’s for programmers who should have chosen middle management as a career.

To actual programmers who enjoy the craft, using an LLM means ruining the beautiful art of abstraction and mental visualisation of a complex piece of logic by putting it through the very lossy process of explaining it in words. It would be a bit like a composer or a painter having to use words instead of leveraging their well-honed instinct.

CuriouslyC

I choose the tech stack and architect the project.

I choose the language patterns and code organization.

I step in to solve hard problems when agents flounder.

What about that says middle management? It's just getting rid of all the low iq parts of the job.

jf22

I've been middle management for half my career and the role has never been about explaining or requirements or defining context like I do with an LLM to code...

starfallg

The reality is that hacking code isn't always beautiful. Most of the time, it is mundane grunt work.

You can always keep the core logic for yourself to work on and have the AI handle all the bits that you don't like to do. This is what we do for modelling, for example: AI helps with the interface and data backends, while the core modelling logic is hand-crafted.

dijksterhuis

> mundane grunt work.

this is my favourite kind of work. i can switch my brain off and just do something repetitive for a bit.

boredom is necessary for good ideas.

channel_t

Yup. I think programmers are giving themselves too much credit here. I love programming, but let's not kid ourselves, at most organizations at least 75% of the code needed to make something a working product is BS. I'd rather prompt an LLM agent to take care of that while I review it so that I can spend my limited energy on the more interesting bits. I find the exercise of prompting an LLM to generate boring code to my exact specifications far more intellectually stimulating than doing any of that stuff by hand, and the time that I have invested in this area has paid dividends in making the code cleaner, more consistent, and more coherent.

roadside_picnic

> Which programmer wants that?!

Agreed.

My LLM usage has quickly become a more efficient way to solve problems that basically require copy/pasting from documentation I'd otherwise have to look up, where doing it myself is more error prone.

I was recently doing a fairly complex set of data transformations and understanding what the data means remained essential. AI tends to fail spectacularly at really understanding the nuances of data (that often requires understanding the underlying business logic generating the data).

However it's very useful when I have to do a bunch of window functions, where I can describe the operation but don't want to look up the syntax. Or just writing SQL when I can explain exactly what I need it to do.
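For illustration, a hedged sketch of the kind of window-function query being described, run against a made-up `sales` table via SQLite (the table and column names are invented for the example):

```python
import sqlite3

# Hypothetical sales table; the window function computes a per-region
# running total ordered by day -- the sort of query you can describe
# precisely without remembering the OVER (...) syntax.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, day INTEGER, amount INTEGER);
    INSERT INTO sales VALUES
        ('east', 1, 10), ('east', 2, 20), ('west', 1, 5), ('west', 2, 7);
""")
rows = conn.execute("""
    SELECT region, day, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY day) AS running_total
    FROM sales
    ORDER BY region, day
""").fetchall()
# rows -> [('east', 1, 10, 10), ('east', 2, 20, 30), ('west', 1, 5, 5), ('west', 2, 7, 12)]
```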

Similarly working with Pytorch involves a fair bit of what I consider pseudo-boilerplate where the code itself is quite boring, but contains just enough complexity that it can't be simply automated away. Hand rolling this stuff often leads to small errors/bugs, but LLMs can do a spectacular job of quickly generating this, provided you do know exactly what you're looking to build.
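The "pseudo-boilerplate" meant here might look like this minimal training-loop sketch (illustrative only; the model, data, and hyperparameters are invented for the example):

```python
import torch
from torch import nn

# Nothing here is novel, but the small mistakes the comment alludes to
# (forgetting zero_grad, mixing up train/eval mode, wrong loss shape)
# creep in easily when hand-rolled.
model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(64, 4)
y = torch.randn(64, 1)

model.train()
for epoch in range(5):
    opt.zero_grad()  # easy to forget; gradients otherwise accumulate
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
```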

What's interesting is that this has still been a major speed boost, because looking up how to do some tedious thing you just forgot how to do can really break flow.

JohnMakin

almost every job ive ever had has been to build stuff. frequently programming is used to build stuff. programming is not my job, building stuff is. it’s perfectly normal (and a sign of a more mature engineer, imho) to prefer building stuff to the annoying stuff that gets in the way of building stuff, especially since many languages are obnoxious to work with.

polishdude20

I agree. I'm hired to build stuff and make it work. I get satisfaction from building stuff people want and use. If I can use an LLM to help me focus more on what the user wants, I'm all for it.

I feel good because real humans are using what I've built and they like it.

Sharlin

Yes; in particular English is an incredibly obnoxious language to try to build software with. (Replace English with any natural language if you want.)

kiitos

building stuff is the application of programming, not the platonic form of it

if you see programming as a necessary but ultimately annoying means to an end, that's fine, you do you, but there are many other folks who don't look at it that way, and they're no more or less right or wrong than you are

recursive

I think I enjoy programming. Vibe coding removes most of the parts that I like. It already looks like hell. I'm probably a minority, but I don't think I'm alone in this.

ako

I really like creating software solutions, vibe coding removes the part that is most tedious. LLMs allow me to experiment with different solutions and different designs faster.

afro88

Do you enjoy the work you had to put in for every single PR? I'm not trying to make a "surely there's 1" annoying argument, but a "surely there's 5-10%".

For me, that's:

- working in legacy parts of the codebase

- anything that requires boilerplate that can't be code genned

- actually writing the code of unit tests (the fun part is making code testable, and coming up with what to test)

- fixing lint issues that can't be auto fixed yet

- removing old feature toggles

- building a temporary test harness

The list goes on. That's not hell. That's getting someone else on the team to do all the stuff you don't enjoy, without ruining someone's day.

dgfitz

> working in legacy parts of the codebase

This is why most of us get paid what we do, I’m sure you realize that. There is immense value in having engineers on a team/at a company that can do this.

> anything that requires boilerplate that can't be code genned

It is important to understand the boilerplate. It is also important to understand what is boilerplate and what isn’t. After these things are grasped, usually it’s a copypasta.

> actually writing the code of unit tests (the fun part is making code testable, and coming up with what to test)

If you don’t know how to write unit tests you don’t know how to write testable code. If you do know how to write unit tests you understand the value in maintaining them when you make code changes. Passing that off to a statistical next-token predictor renders the tests largely useless.

> fixing lint issues that can't be auto fixed yet

You should be thankful you didn’t write code in the 80s if this is a real stance. More thankful still that you rarely need to interact with code sans a linter or autocomplete. There are lots of technologies out there where this isn’t possible. Off the top of my head would be yocto recipes.

> removing old feature toggles

I’m not clear what this even means? You mean, doing your job?

> building a temporary test harness

Generate one, I don’t care. You’ll never know if it’s any good unless you go over the whole thing line by line. At which point, you didn’t save time and didn’t “level up” any skills

cheema33

If someone enjoys building houses with a hammer and nail, they still can. Others like using tools.

You will always be able to write code by hand. But you will not be able to keep up with other engineers who master using AI tools. I am a software engineer, I like building things. And I don't use binary or assembly, or C or any other lower level languages for good reasons. AI is just another higher level abstraction. A tool that allows me to build things a little faster. I use it every day.

wsintra2022

IDE’s are tools, languages are tools, LLMs are code generators, not quite the same thing.

lubujackson

I like coding. I've been doing it for a couple decades. I disagree with the "managing scrum" metaphor. Sure, you can use LLMs that way. And there is some truth to the fact that it may feel more like writing detailed Jira tickets than actually programming at times if you are trying to have it make huge changes... BUT coding with LLMs is really just a higher level abstraction. And the good news of that is LLMs are more deterministic than they seem, which is a lot of what people are fearful of losing by giving LLMs "the reins".

One nice thing about programming is that the computer is a dumb funnel for your human brain to encoded actions. If the program doesn't work, you did something wrong, missed a semicolon etc. This still applies to LLMs. If the LLM gave you shit output, it is your fault. You gave it shit input. If the model you used was the wrong one, you can still get good results, you just have to put in more work or break the problem down more first. But it's still programming. If you treat using LLMs that way, and start to apply your normal programming approaches to your prompts, you can find a workflow that satisfies your demands.

But even if you only use LLMs to rubber duck a problem, by literally typing "The program is doing X,Y,Z but it's only supposed to do Z. I already looked at A,B,C and I have a theory that G is causing the issue but I'm not seeing anything wrong there." and just paste that in the chat you might be surprised what it can turn up. And that's a fine use case!

LLMs are broadly useful and there are certainly elements of programming that are the "shit-shoveling" parts for you, from debugging to writing tests to planning or even re-writing Jira tickets, LLMs can help at different levels and in different ways. I think prescriptive calls to "code this way with LLMs" are shortsighted. If you are determined to write each line yourself, go for it. But like refusing to learn IDE shortcuts or use a new tool or language, you are simply cutting yourself off from technological progress for short term comfort.

The best part of programming to me is that it is always changing and growing and you are always learning. Forget the "AI eating the world" nonsense and treat LLMs as just another tool in your toolkit.

polotics

This appears to me to be a surprisingly low-effort article. Nothing about managing context length and starting tasks from a good context, nothing on managing .md files and structuring repos; it's very anthropomorphising, which does not help; nothing about tools for building the right context, no comparison of different approaches, no discussion of what kind of MCP and/or RAG gets agents to look at documentation.

ChrisRackauckas

That's kind of the whole point. Setting up a few MCPs etc. is really a minor thing in the grand scheme of things. Sure, I have setup context7, Sequential Thinking, etc. and my Claude.md has a lot of details in it, but that only improves the accuracy so much. My whole point is that if the mindset is "I need this to be very accurate for it to work", then you're already putting too much effort into it. I personally have not found any of that time worth it. For any PR I had to put effort into, I could have done it quicker myself. So I learned those things and they just didn't have a payoff for my work. They did really well in some things like front end development, devops, and stuff of that sort, but if it's the normal kind of problem that is my "here's my actual hard problem of the day", i.e. something where I would need to start white boarding a new algorithm and semi-prove some numerical stability result before throwing code, the LLMs just fail to ever come up with anything new enough to get a solution.

But, slamming down commands for it on only easy problems in order to get 90% of a PR done and just finishing it yourself? That can get you 6 PRs in instead of 3, where you did the hard 3 and just looked at the transcripts for another 15, and tossed all but the 3 that looked good. Using Claude like that takes about half an hour out of your day and you get a good benefit, and that is what I am pointing to as a very useful approach.

I personally don't think any amount of MCP servers will make "Claude, find me a novel algorithmic improvement in this space and code it up" work, but hey if that works for you that's great. But reviewing and checking the proofs would likely make it not worth it for what I'm doing.

Ethee

The article clearly isn't directed at how expert programmers are 'supposed' to use AI, though we see at least one of those a day on the front page of HN now. The first paragraph makes it pretty obvious that the author is trying to convince people who have completely disregarded AI, for one reason or another, to give it another try in a more meaningful way. Telling the reader how the author configures his AI isn't going to convince those people, as they likely are already very aware of how these tools 'work'; the problem is in the 'how to use them' department, which has nothing to do with what MCP tool you're using.

jpollock

I've just spent the better part of two weeks trying to convince a LLM to automate some programming for me.

We use feature flags. However, cleaning them up is something rarely done. It typically takes me ~3 minutes to clean one up.

To clean up the flag:

1) delete the test where the flag is off

2) delete all the code setting the flag to on

3) anything getting the value of the flag is set to true

4) resolve all "true" expressions, cleaning up if's and now constant parameters.

5) prep a pull request and send it for review

This is all fully supported by the indexing and refactoring tooling in my IDE.

However, when I prompted the LLM with those steps (and examples), it failed. Over and over again. It would delete tests where the value was true, forget to resolve the expressions, and try to run grep/find across a ginormous codebase.

If this was an intern, I would only have to correct them once. I would correct the LLM, and then it would make a different mistake. It wouldn't follow the instructions, and it would use tools I told it to not use.

It took 5-10 minutes to make the change, and then would require me to spend a couple of minutes fixing things. It was at the point of not saving me any time.

I've got a TONNE of low-hanging fruit that I can't give to an intern, but could easily sic a tool as capable as an intern on. This was not that.

theLiminator

Might make sense getting it to instead create a CST traversal that deletes feature flags by their id. Then you have a re-usable trustworthy tool that you can incrementally improve/verify.
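As a sketch of that idea, here is a minimal flag-resolver using Python's built-in `ast` module, covering steps 3 and 4 from the parent comment (reads of the flag become `True`, and `if True:` blocks get folded). This is illustrative only: the `is_enabled` helper name is hypothetical, and a real tool would want a formatting-preserving CST library like libcst rather than `ast.unparse`, which discards comments and formatting.

```python
import ast

FLAG_FN = "is_enabled"  # hypothetical flag-lookup helper in the codebase

class FlagResolver(ast.NodeTransformer):
    """Replace is_enabled("<flag>") calls with True, then fold `if True:` blocks."""

    def __init__(self, flag_name):
        self.flag_name = flag_name

    def visit_Call(self, node):
        self.generic_visit(node)
        if (isinstance(node.func, ast.Name) and node.func.id == FLAG_FN
                and node.args and isinstance(node.args[0], ast.Constant)
                and node.args[0].value == self.flag_name):
            return ast.Constant(value=True)  # step 3: reads become True
        return node

    def visit_If(self, node):
        self.generic_visit(node)  # resolves the test first
        if isinstance(node.test, ast.Constant) and node.test.value is True:
            return node.body  # step 4: splice the then-branch, drop the else
        return node

def resolve_flag(source, flag_name):
    tree = FlagResolver(flag_name).visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)

src = "if is_enabled('new_ui'):\n    x = 1\nelse:\n    x = 2\n"
out = resolve_flag(src, "new_ui")
# out -> "x = 1"
```

Once the traversal exists, each flag cleanup is a deterministic re-run of the tool rather than a fresh roll of the LLM dice.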

jpollock

That was the lesson I was learning. I should use the LLM to generate the tools that I use for consistently repeatable tasks.

Then I can rinse and repeat using the tool, fixing the bugs in the tool myself instead of repeating the expensive (in time) cost of using the LLM.

That was my last attempt, but I ran out of time.

SparkyMcUnicorn

If you have examples, can you provide git commit hashes that it can diff and use as a reference?

For repeating patterns I'll identify 1-3 commit hashes or PRs, reference them in a slash command, and keep the command up to date if/when edge cases occur.

polishdude20

Which LLM? How are you prompting it?

I've been using Cursor for the last few months and notice that for tasks like this, it helps to give examples of the code you're looking for, tell it more or less how the feature flags are implemented and also have it spit out a list of files it would modify first.

jpollock

I gave it explicit ordering, instructions on what tools to _not_ use, and before/after examples from the codebase. A full page of instructions.

After iterating on that for a while, I did a bunch manually (90) and then gave the LLM a list of pull requests as examples, and asked _it_ to write the prompt. It still failed.

Finally, I broke the problem up and started to ask it to generate tools to perform each step. It started to make progress - each execution gave me a new checkpoint so it wouldn't make new mistakes.

dimtass

After 25 years in embedded, embedded Linux, devops and cloud security, I don’t care about coding anymore. I’ve done almost everything, in more programming languages and assembly dialects than I can count. Every programming task seems like a repetitive task. But architecture and solving a real problem, wow, that still kicks in me. So LLMs are my pals: we spend so much time planning, going into every small detail in such depth that it’s hard to explain the mental erection. Just give it a prompt to contradict every step and make things hard for you, then try to convince it that it’s wrong. I can spend hours just building the plan and the idea. Then I have the solution and it’s just boring to write the code. I know I can write the code; I’ve already done it before. So I just create a full plan with specs and a waterfall to-do list and let it do the job. I check that it does it right and keep it on track so it doesn’t over-engineer things (they tend to do that a lot), and just enjoy the outcome. I would say I do vibe-planning before vibe-coding.

percentcer

I like programming. It's fun, and sometimes it even requires creativity. There are plenty of times when it's not that fun, and doesn't require any creativity (because whatever it is has been done by someone else a thousand times over), and you just want a result. Vibecoding is great for _that_ stuff (think little throw-away scripts, shell one liners, tool plugins, etc)

I generally don't get great results from LLM code because most of my work is in C++ (which I'm guessing is underrepresented in the training data?), but when I point it towards some well-worn javascript thing I've had real successes! Most recent example is this little chrome plugin I had it whip up in one shot (https://chromewebstore.google.com/detail/favicon-tab-grouper...) because I couldn't find the exact functionality I needed in other plugins.

Works perfectly for my needs, took less than five minutes to spin up, and I use it all the time. If you're looking to get started with vibecoding stuff, try making plugins that provide niche functionality for your hyper-specific workflows.

ppqqrr

generic advice about how you should use the same tool and methodology (claude code, scrum) that everyone’s already using, lot of hand waving about being “senior,” This may be the most 2025 blog post ever written.

if anyone takes the art of software programming further using LLMs, it’s going to be young inexperienced people who grow up closely observing and learning the transcendental nature of LLMs and software, not hardened industry titans joyfully cracking their whip over an army of junior devs and LLMs.

null

[deleted]

rafterydj

Not to be rude, but what about understanding the "transcendental nature" of LLMs allows people to build more, faster, or with better maintainability than a "hardened industry titan"?

Retr0id

New generations are always leapfrogging those that came before them, so I don't find it too hard to believe even under more pessimistic opinions of LLM usefulness.

They are young and inexperienced today, but won't stay that way for long. Learning new paradigms while your brain is still plastic is an advantage, and none of us can go back in time.

lagrange77

But automating isn't a programming paradigm.

> They are young and inexperienced today, but won't stay that way for long.

I doubt that. For me this is the real dilemma with a generation of LLM-native developers. Does a worker in a fully automated watch factory become better at the craft of watchmaking with time?

alfalfasprout

> Learning new paradigms while your brain is still plastic is an advantage, and none of us can go back in time.

You can absolutely learn new paradigms at any age. This idea that you can only do so as an 18-25 year old is ridiculous.

ppqqrr

we’ve been taught to think of programs as sculptures, shaped by the creator to a fixed purpose. with LLMs, the greatest advance isn’t in being able to make larger and more detailed sculptures more quickly; it’s that you can make the sculptures alive.

rafterydj

But who _wants_ a program to be alive? To be super clear, I love the tech behind LLMs and other transformers. But when discussing regular, run of the mill software projects that don't require AI capabilities - do you really need to have the understanding of the transcendental nature of LLMs to do that job well?

alfalfasprout

> generic advice about how you should use the same tool and methodology (claude code, scrum) that everyone’s already using, lot of hand waving about being “senior,” This may be the most 2025 blog post ever written.

This part tracks. It's honestly rather generic.

> if anyone takes the art of software programming further using LLMs, it’s going to be young inexperienced people who grow up closely observing and learning the transcendental nature of LLMs and software, not hardened industry titans joyfully cracking their whip over an army of junior devs and LLMs.

This, I'm not sure applies either. TBH the biggest risk I'm seeing already right now is how quickly we're starting to see juniors trying to enter the job market who don't have the faintest idea how to actually code. Let alone build software (but let's be honest, that's usually been the case). The decisions you make when writing something depend on factors beyond just implementing the functionality: how maintainable is this? How extensible?

Giving a junior a sycophant that's reasonably competent at spitting out something functional (not necessarily functional in the sense you need it to be but apparently working) is a recipe for disaster IMO.

There will absolutely be some who "get it". But, how does that scale?

More worryingly, something I don't see discussed enough is "review fatigue". It's far more fatiguing reviewing the output of an LLM than writing the code yourself a lot of the times. Early on in your career, this might lead to the tendency to just say "eh, looks alright enough".

simonw

This piece is excellent - it's full of ideas and tips that I have not seen in other similar tutorials. Worth spending some time with, especially if you are still skeptical of the value that can be unlocked by AI-assisted programming.

(A minor disagreement: it's using a definition of "vibe coding" that applies to any form of ai-assisted programming. I prefer to define vibe coding with its original definition from all the way back in February where it only refers to code that is generated through prompting without any review.)

nphardon

I think the idea that it's important to know how to code is going to be seriously challenged. I know I feel like the learning process and insight I gain is important, but I wonder if it is, beyond the subjective.

Like I'm sure the grad students working for Euler learned a ton generating logarithmic tables by hand, but it proved to be useless in the end. Could having a solid grasp on memory management/access in C be the same?

I think this is why obsolescence can be hard to predict.

Like if in 30 years all code is run and managed by ai bots, then all this debate about "it's important to know how to code!" will seem really silly.

wsintra2022

Let’s just call it like it is: vibe coding takes away the thinking about how to execute the steps that solve the problem, and offloads that part of our mind to a text generator with emergent intelligent properties. If you enjoy keeping sharp the intelligence for laying out the steps and executing them, then don’t vibe code. If you don’t mind your engineering skills atrophying, then go ahead and vibe code. But know this: you will lose the skills.

afro88

> The moment you see it go off the rails, just throw it out. That problem is too hard for Claude, it’s for you now.

Or, any of:

- the problem was too big in scope and needed a stepped plan to refer to and execute step by step

- your instructions weren't clear enough

- the context you provided was missing something crucial it couldn't find agentically, or build knowledge of (in which case, document that part of the codebase first)

- your rules/AGENTS.md/CLAUDE.md needs some additions or tweaking

- you may need a more powerful model to plan implementation first

Just throwing it away and moving on is often the wrong choice, and it will slow down how quickly you get better at using these tools. If you're still within the "time it would have taken me to do it myself" window, think about what caused it to go off the rails or fail spectacularly, and try giving it another go (not by following up: throw away the current results and chat, and start again with the above in mind).

polishdude20

I totally agree. I've recently been using Cursor to help create a new react project with a lot of functionality. I realized I need to have it do more smaller steps that I get to have lots of input on rather than say "this is the big picture now go forth".

throwanem

> Vibe coding is useful only if you have enough problems that you’re happy that some subset [is] being solved, not caring what in that subset is solved.

renodino

My recent vibe code experience made me realize it's almost exactly like being a tech lead managing offshore development. I learned early on that the key to leading successful offshore projects is precise, detailed specifications, and very rigorous code review and testing. Now I'm using the exact same discipline to "vibe" code. I really think there needs to be a different term for professional solution engineering using LLMs (like "prompt engineering" but for coding) to differentiate from casual prototyping or simple web UI hacking by non-devs that uses "vibe" coding.