The unbearable slowness of AI coding
34 comments
August 21, 2025
dsiegel2275
Prompting it better during development can really help here.
I have an emerging workflow orchestrated by Claude Code custom commands and subagents that turns even an informal description of a feature into a full-fledged PRD; then an "architect" command researches and produces a well-thought-out, documented technical design. I can review that design document and then give it to the "planner" command, which breaks it down into Phases and Tasks. Then I have a "developer" command iterate through and implement the Phases one by one. After each phase it runs a detailed code review using my "review" subagent.
Since I've started using this document-driven, guided workflow, I've seen the quality of the output noticeably improve.
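Roughly, the shape of it is something like the sketch below - the /prd, /architect, /planner and /developer command names, the docs/ paths, and the feature text are placeholders, and it assumes Claude Code's headless `claude -p` mode; in practice it runs as custom commands and subagents rather than a wrapper script:

```python
# Minimal sketch of the PRD -> architect -> planner -> developer flow.
# Command names, docs/ paths, and the feature text are placeholders; assumes
# the `claude` CLI is installed and supports headless `claude -p` invocation.
import subprocess
from pathlib import Path

def run_stage(prompt: str, out_file: Path) -> str:
    """Run one stage headlessly and save the output so it can be reviewed."""
    result = subprocess.run(["claude", "-p", prompt],
                            capture_output=True, text=True, check=True)
    out_file.write_text(result.stdout)
    return result.stdout

docs = Path("docs")
docs.mkdir(exist_ok=True)

feature = "Let users export their dashboard as a PDF"  # informal description
run_stage(f"/prd {feature}", docs / "prd.md")
run_stage("/architect Design against docs/prd.md", docs / "design.md")
# Review docs/design.md by hand before going further.
run_stage("/planner Break docs/design.md into Phases and Tasks", docs / "plan.md")
run_stage("/developer Implement Phase 1 of docs/plan.md, then run the review subagent",
          docs / "phase1-notes.md")
```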
tarruda
This illustrates a fundamental truth of maintaining software with LLMs: While programmers can use LLMs to produce huge amounts of code in a short time, they still need to read and understand it. It is simply not possible to delegate understanding a huge codebase to an AI, at least not yet.
In my experience, the real "pain" of programming lies in forcing yourself to absorb a flood of information and connect the dots. Writing code is, in many ways, like taking a walk: you engage in a cognitively light activity that lets ideas shuffle, settle, and mature in the background.
When LLMs write all the code for you, you lose that essential mental rest: the quiet moments where you internalize concepts, spot hidden bugs, and develop a mental map of the system.
nchmy
This should be called the eternal, unbearable slowness of code review, because the author writes that the AI actually churns out code extremely rapidly. The (hopefully capable, attentive, careful) human is the bottleneck here, as it should be.
JohnMakin
If only code and application quality could be measured in LoC - middle managers everywhere would rejoice
falcor84
> ... I’ll keep pulling PRs locally, adding more git hooks to enforce code quality, and zooming through coding tasks—only to realize ChatGPT and Claude hallucinated library features and I now have to rip out Clerk and implement GitHub OAuth from scratch.
I don't get this: how many git hooks do you need to identify that Claude hallucinated a library feature? Wouldn't a single hook running your tests catch that?
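A single hook along these lines (a hypothetical `.git/hooks/pre-push` written in Python, assuming pytest as the runner) would seem to be enough:

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-push: run the test suite and abort the push on
# failure, which is where a hallucinated library call should surface.
# Assumes pytest; substitute whatever test runner the project actually uses.
import subprocess
import sys

result = subprocess.run(["pytest", "-q"])
sys.exit(result.returncode)  # any non-zero exit blocks the push
```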
sc68cal
They probably don't have any tests, or the tests that the LLM creates are flawed and aren't detecting these problems.
manmal
Yesterday Claude Code assured me the following:
> Good news! The code is compiling successfully (the errors shown are related to an existing macro issue, not our new code).
When in fact it had managed to insert 10 compilation errors that were not at all related to any macros.
AstroBen
Just tell the AI "and make sure you don't add bugs or break anything"
Works every time
deegles
I tried using agents in Cursor and when it runs into issues it will just rip out the offending code :)
loandbehold
"hallucinated" library features are identified even earlier, when claude builds your project. i also don't get what author is talking about.
pluto_modadic
AI agents have been known to rip out mocks so that the tests pass.
thrown-0825
I have had human devs do that too.
Workaccount2
Gemini CLI is pretty weak, but Gemini 2.5 Pro is still the best for long contexts. Claude is great, but it crumbles as you get into the 50-100k token range. I find Gemini doesn't start to crack until the 150-200k range. It's too bad the tooling around it is mediocre at best.
doctoboggan
When building a project from scratch using AI, it can be tempting to give in to the vibe, ignore the structure/architecture, and let it evolve naturally. This is a bad idea when humans do it, and it's also a bad idea when LLM agents do it. You have to consider architecture, dataflow, etc. from the beginning, and always stay on top of it without letting it drift.
I have tried READMEs scattered through the codebase but I still have trouble keeping the agent aware of the overall architecture we built.
dwringer
Slow is smooth, smooth is fast.
hodgehog11
AI tools seem excellent at getting through boilerplate stuff at the start of a project. But as time goes on and you have to think about what you are doing, it'll be faster to write it yourself than to convey it in natural language to an LLM. I don't see this as an issue with the tool, just a matter of getting a better idea of what it's really good for.
Nextgrid
The role of a software engineer is to condense the (often unclear) requirements, business domain knowledge, existing code (if any) and their skills/experience into a representation of the solution in a very concise language: a programming language.
Having to instead express all that (including the business-related part, since the agent has no context of that) in a verbose language (English) feels counter-productive, and is counter-productive in my experience.
I've successfully one-shotted easy, self-contained, throwaway tasks ("make me a program that fills Redis with random keys and values" - Claude will one-shot that), but when it comes to working with complex existing codebases I've never seen the benefits - having to explain all the context to the agent and correct its mistakes takes longer than just doing it myself. Worse, it's unpredictable: I know roughly how long something will take me, but it's impossible to tell in advance whether an agent will one-shot it successfully or require longer babysitting than just doing it manually from the beginning.
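For reference, the Redis task is exactly the kind of self-contained script that one-shots well; a hand-written equivalent (assuming a local Redis and the redis-py client, with arbitrary key counts and sizes) is only a few lines:

```python
# Fill Redis with random keys and values - a throwaway, one-shot-friendly task.
# Assumes a Redis server on localhost:6379 and the redis-py package; the key
# count and string lengths are arbitrary.
import random
import string
import redis

r = redis.Redis(host="localhost", port=6379)

def rand_str(n: int) -> str:
    return "".join(random.choices(string.ascii_lowercase + string.digits, k=n))

for _ in range(100_000):
    r.set(f"key:{rand_str(12)}", rand_str(64))
```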
pbalau
We are going to end up with boilerplate natural-language text that's been tested and proven to get the same output every time. Then we'll have a sort of transpiler, and maybe a sub-language of English, to make prompting easier. Then we'll source-control those prompts. What we already do today, with extra steps.
ants_everywhere
I've found LLMs to be very good at writing design docs and finding problems in code.
Currently they're better at locating problems than fixing them without direction. Gemini seems smarter and better at architecture and best practices. Claude seems dumber but is more focused on getting things done.
The right solution is going to be a variety of tools and LLMs interacting with each other. But it's going to take real humans having real experience with LLMs to get there. It's not something that you can just dream up on paper and have it work out well since it depends so much on the details of the current models.
ricardo81
Somewhat related: I found Cursor/VS Code was slowing to the point of being unusable. Turning on privacy mode helped, but the main culprit was extremely verbose logging. Running `fatrace -c --command=cursor` uncovered the issue.
The disk in question was an HDD and the problem disappeared (or is better hidden) after symlinking the log dir to an SSD.
As for the code itself, I've never had an issue with slowness. If anything, it's the verbosity of wanting to explain itself and the excess logging in the code it creates.
doubleorseven
I've never done QA. Just thinking about doing QA makes my head swirl. But yes, because of LLMs I am now a part-time QA engineer, and I think it's kinda helping me be a better developer. I'm working on a massive feature at work, something I can't just give to an agent, and I already feel like something has changed in how I think about every little piece of code I'm adding. Didn't see that coming.
I'm still calibrating myself on the size of task that I can get Claude Code to do before I have to intervene.
I call this the "Goldilocks" problem. The task has to be large enough that it outweighs the time necessary to write out a sufficiently detailed specification AND to review and fix the output. It has to be small enough that Claude doesn't get overwhelmed.
The issue is that writing a "sufficiently detailed specification" is task-dependent. Sometimes a single sentence is enough, other times a paragraph or two, sometimes a couple of pages is necessary. And the "review and fix" phase is again completely task-dependent and unknown up front. I can usually estimate the spec time, but the review-and-fix phase is a dice roll that depends on the output of the agent.
And the "overwhelming" metric is again not clear. Sometimes Claude Code can crush significant tasks in one shot. Other times it can get stuck or lost. I haven't fully developed an intuition for this yet, how to differentiate these.
What I can say is that this is an entirely new skill. It isn't like architecting large systems for human development. It isn't like programming. It is its own thing.