Show HN: I've asked Claude to improve codebase quality 200 times
27 comments
· December 8, 2025 · xnorswap
plufz
I think slash commands are great for helping Claude with this. I have many, like /code:dry and /code:clean-code, that have semi-long prompts and references to longer docs, so they review code from a specific perspective. I think it at least improves Claude a bit here; they work like processes or templates for thinking in broader ways. But yes, I agree it struggles a lot in this area.
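In Claude Code, a slash command like these is just a Markdown file under .claude/commands/, with subdirectories acting as namespaces. A minimal sketch of what a /code:dry command file might look like (the contents and the referenced docs path are hypothetical, not plufz's actual setup):

    # .claude/commands/code/dry.md (hypothetical)
    Review the code I point you at strictly from a DRY perspective.
    Flag repeated logic, copy-pasted blocks, and parallel
    implementations that should share one abstraction.
    Follow the fuller guidelines in @docs/code-style/dry.md.
    Do NOT comment on naming, formatting, or performance.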
kccqzy
Not at all my experience. I’ve often tried things like telling Claude this SIMD code I wrote performed poorly and I needed some ideas to make it go faster. Claude usually does a good job rewriting the SIMD to use different and faster operations.
dcchuck
I spent some time last night "over iterating" on a plan to do some refactoring in a large codebase.
I created the original plan with a very specific ask - create an abstraction to remove some tight coupling. Small problem that had a big surface area. The planning/brainstorming was great and I like the plan we came up with.
I then tried to use a prompt like OP's to improve it (as I said, large surface area so I wanted to review it) - "Please review PLAN_DOC.md - is it a comprehensive plan for this project?". I'd run it -> get feedback -> give it back to Claude to improve the plan.
I (naively perhaps) expected this process to converge to a "perfect plan". At this point I think of it more like a probability tree where there's a chance of improving the plan, but a non-zero chance of getting off the rails. And once you go off the rails, you only veer further and further from the truth.
There are certainly problems where "throwing compute" at it and continuing to iterate with an LLM will work great. I would expect those to have firm success criteria. Providing definitions of quality would significantly improve the output here as well (or decrease the probability of going off the rails, I suppose). Otherwise Claude will confuse what counts as quality, like we see here.
Shout out to OP for sharing their work and moving us forward.
kderbyma
Yeah. I noticed Claude suffers when it reaches context overload. It's too opinionated, so it shortens its own context with decisions I would never make, yet I see it telling itself that the shortcuts are a good idea because the project is complex. Then it gets into a loop where it second-guesses its own decisions, forgets the context, and continues to spiral uncontrollably into deeper and deeper failures, often missing the obvious glitch and instead looking into imaginary land for answers, constantly diverting the solution from patching to completely rewriting...
I think it suffers from performance anxiety...
----
The only solution I have found is to rewrite the prompt from scratch, change the context myself, clear any "history or memories", and then try again.
I have even gone so far as to open nested folders in separate windows to "lock in" scope better.
As soon as I see the agent say "Wait, that doesn't make sense, let me review the code again", it's cooked.
embedding-shape
> Yeah. I noticed Claude suffers when it reaches context overload
All LLMs degrade in quality as soon as you go beyond one user message and one assistant response. If you're looking for accuracy and highest possible quality, you need to constantly redo the conversations from scratch, never go beyond one user message.
If the LLM gets it wrong in its first response, instead of saying "No, what I meant was...", you need to edit your first message and re-generate; otherwise the conversation becomes "poisoned" almost immediately, and every token generated after that will suffer.
rtp4me
For me, too many compactions throughout the day eventually lead to a decline in Claude's thinking ability. And by that point, I have given it a lot of context to help drive the coding interaction. Thus, restarting Claude requires me to remember the small "nuggets" we discovered during the last session, so I find myself repeating the same things every day (my server IP is: xxx, my client IP is: yyy, the code should live in directory: a/b/c). Using the resume feature with Claude simply brings back the same decline in thinking that led me to stop it in the first place. I am sure there is a better way to remember these nuggets between sessions, but I have not found it yet.
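For what it's worth, Claude Code does have a standard mechanism for this: a CLAUDE.md file in the project root is pulled into context automatically at the start of each session. A minimal sketch (the values below are placeholders standing in for the commenter's xxx/yyy/a/b/c, not real settings):

    # CLAUDE.md (project root, loaded automatically each session)
    - Server IP: 203.0.113.10 (placeholder)
    - Client IP: 203.0.113.20 (placeholder)
    - New code lives under: a/b/c/
    - Re-read this file before proposing any directory changes.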
snarf21
That has been my greatest stumbling block with these AI agents: context. I was trying to have one help vibe-code a puzzle game, and most of the time I added a new rule, it broke 5 existing rules. It also never approached the rules engine with an eye toward building a reusable abstraction; just Hammer, meet Nail.
someguyiguess
There’s definitely a certain point I reach when using Claude Code where I have to make the specifications so detailed that it becomes more work than just writing the code myself.
flowerthoughts
There's no -c on the command line, so I'm guessing this is starting fresh every iteration, unless claude(1) has changed the default lately.
SV_BubbleTime
I’m keeping Claude’s tasks small and focused; then, if I can, I clear context between them.
It’s REAL FUCKING TEMPTING to say "hey Claude, go do this thing that would take me hours and you seconds", because he will happily do it, and it’ll kinda work. But one way or another you are going to put those hours in.
It’s like programming… is proof of work.
thevillagechief
Yes, this is exactly true. You will put in those hours.
postalcoder
One of my favorite personal evals for LLMs is testing their stability as reviewers.
The basic gist of it is to give the llm some code to review and have it assign a grade multiple times. How much variance is there in the grade?
Then, prompt the same llm to be a "critical" reviewer with the same code multiple times. How much does that average grade change?
A low delta between "review this code" and "review this code with a critical eye" is a major signal for me in terms of LLM quality. gpt-5.1 produces remarkably stable evaluations of code, whereas Claude will completely change the tenor of its evaluation if you ask it to be critical, to the extent that it makes stuff up.
You could also interpret these results to be a proxy for obsequiousness.
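A minimal sketch of this eval in TypeScript, assuming the Anthropic SDK (@anthropic-ai/sdk); the model id, grading scale, and prompt wording are stand-ins, and a real harness would parse the grade more defensively:

    import Anthropic from "@anthropic-ai/sdk";

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the env

    // Ask for one review of the same code and extract the numeric grade.
    async function gradeOnce(code: string, critical: boolean): Promise<number> {
      const stance = critical
        ? "Review this code with a critical eye."
        : "Review this code.";
      const msg = await client.messages.create({
        model: "claude-sonnet-4-20250514", // placeholder model id
        max_tokens: 1024,
        messages: [
          { role: "user", content: `${stance} End with "GRADE: <1-10>".\n\n${code}` },
        ],
      });
      const block = msg.content[0];
      const text = block.type === "text" ? block.text : "";
      const match = text.match(/GRADE:\s*(\d+)/);
      if (!match) throw new Error("no grade found in response");
      return Number(match[1]);
    }

    // Grade the same code n times per condition; compare spread and means.
    async function stabilityEval(code: string, n = 10) {
      const run = async (critical: boolean) => {
        const grades: number[] = [];
        for (let i = 0; i < n; i++) grades.push(await gradeOnce(code, critical));
        const mean = grades.reduce((a, b) => a + b, 0) / n;
        const variance = grades.reduce((a, b) => a + (b - mean) ** 2, 0) / n;
        return { mean, stddev: Math.sqrt(variance) };
      };
      const neutral = await run(false);
      const critical = await run(true);
      // Small stddevs and a small mean delta suggest a stable reviewer.
      console.log({ neutral, critical, delta: neutral.mean - critical.mean });
    }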
f311a
I like to ask LLMs to find problems or improvements in 1-2 files. They are pretty good at finding bugs, but for general code improvements, 50-60% of the edits are trash. They add completely unnecessary stuff. If you ask them to improve pretty well-written code, they rarely say it's good enough already.
For example, in a functional-style codebase, they will try to rewrite everything into classes. I have to adjust the prompt to list things that I'm not interested in. And some inexperienced people are trying to write better code by learning from these LLM changes...
m101
This is a great example of there being no intelligence under the hood.
xixixao
Would a human perform very differently? A human who must obey orders (like maybe they are paid to follow the prompt). With some "magnitude of work" enforced at each step.
I'm not sure there's much to learn here, besides that it's kinda fun, since no real human was forced to suffer through this exercise on the implementor side.
ebonnafoux
I have seen codebases double their LoC count after "refactoring" done by humans, so I would say no.
wongarsu
> A human who must obey orders (like maybe they are paid to follow the prompt). With some "magnitude of work" enforced at each step
Which describes a lot of outsourced development. And we all know how well that works
Capricorn2481
> Would a human perform very differently?
Yes.
SV_BubbleTime
Well… it’s more a great example that great output comes from a good model with the right context at the right time.
Take away everything else and there’s a product that is really good at small tasks; it doesn’t mean that chaining those small tasks together to make a big task should work.
pawelduda
Did it create 200 CODE_QUALITY_IMPROVEMENTS.md files by chance?
andy99
It would be interesting to try a variation of this with a diffusion-based LLM and ask it to output the input script (potentially modified in some way) over and over, recursively.
written-beyond
> I like Rust's result-handling system, I don't think it works very well if you try to bring it to the entire ecosystem that already is standardized on error throwing.
I disagree; it's very useful even in languages whose conventions are built on exception throwing. It's good enough to be the return type of the Promise.allSettled API.
The problem is that when I don't have a result type, I end up approximating it anyway in other ways. For a quick project I'd stick with exceptions, but depending on the codebase I usually use the Go-style (ok, err) tuple (it's usually clunkier in TS though) or a Rust-style Result type with ok/err variants.
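For concreteness, a minimal sketch of that Rust-style option in TypeScript (Result, ok, and err here are hand-rolled illustrations, not from any particular library):

    // A Rust-style Result as a discriminated union.
    type Result<T, E> =
      | { ok: true; value: T }
      | { ok: false; error: E };

    const ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
    const err = <E>(error: E): Result<never, E> => ({ ok: false, error });

    // Wrap a throwing API once at the boundary, then stay in Result-land.
    function parseJson(text: string): Result<unknown, SyntaxError> {
      try {
        return ok(JSON.parse(text));
      } catch (e) {
        return err(e as SyntaxError);
      }
    }

    const parsed = parseJson('{"a": 1}');
    if (parsed.ok) {
      console.log(parsed.value); // narrowed to the success branch
    } else {
      console.error(parsed.error.message); // narrowed to the failure branch
    }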
turboponyy
I have the same disagreement. TypeScript, with its structural and pseudo-dependent typing, somewhat functionally disposed language primitives (e.g. first-class functions as values, currying), standard library interfaces (filter, reduce, flatMap et al.), and ecosystem, makes propagating information using values extremely ergonomic.
Embracing a functional style in TypeScript is probably the most productive I've felt in any mainstream programming language. It's a shame that the language was defiled with try/catch, classes, and other unnecessary cruft, so third-party libraries are still an annoying boundary you have to worry about, but oh well.
The language is so well suited to this that you can even model side effects as values and do away with try/catch, if/else, and mutation a la Haskell, if you want[1].
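A tiny sketch of the effects-as-values idea (a hand-rolled IO type, loosely Haskell-flavored; this is illustrative, not whatever library the [1] footnote pointed to):

    // An IO<A> is a description of an effect: a thunk you run later.
    type IO<A> = () => A;

    // Sequence two effects, feeding the first result into the second.
    const chain = <A, B>(io: IO<A>, f: (a: A) => IO<B>): IO<B> =>
      () => f(io())();

    // Effectful primitives are plain values; building them does nothing.
    const log = (msg: string): IO<void> => () => console.log(msg);
    const now: IO<number> = () => Date.now();

    // Compose a whole program as a value...
    const program: IO<void> = chain(now, (t) => log(`started at ${t}`));

    // ...and run it exactly once, at the edge.
    program();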
maddmann
lol, 5000 tests. Agentic code tools have a significant bias toward adding versus removing/condensing. This leads to a lot of bloat and orphaned code. Definitely something that still needs to be solved by agentic tools.
websiteapi
You gotta be strategic about it. For example, for tests, tell it to use equivalence testing and to prove it: e.g. build a graph of permutations of arguments and their equivalences from the underlying code, and then use that to generate the tests.
Telling it to do better without any feedback is obviously going to go nowhere fast.
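As a rough illustration of the equivalence-testing idea (my reading of the comment, with a made-up clamp function as the code under test):

    // Hypothetical function under test.
    const clamp = (x: number, lo: number, hi: number): number =>
      Math.min(Math.max(x, lo), hi);

    // Equivalence classes read off the code's branches: every input in a
    // class must map to the same output, so one expected value covers all.
    const classes = [
      { name: "below range", inputs: [-100, -5, -0.1], expected: 0 },
      { name: "above range", inputs: [5.1, 10, 1e9], expected: 5 },
    ];

    for (const { name, inputs, expected } of classes) {
      for (const x of inputs) {
        console.assert(
          clamp(x, 0, 5) === expected,
          `class "${name}": clamp(${x}, 0, 5) should be ${expected}`,
        );
      }
    }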
SKILNER
This strikes me as a very solid methodology for improving the results of all AI coding tools. I hope Anthropic et al. take this up.
Rather than converging on optimal code (Occam's Razor for both maintainability and performance), they are just spewing code all over the scene. I've noticed that myself, of course, but this technique helps to magnify and highlight the problem areas.
It makes you wonder how much training material was/is available for code optimization relative to training material for just coding to meet functional requirements. And therefore, what's the relative weight of optimizing code baked into the LLMs.
Claude is really good at specific analysis, but really terrible at open-ended problems.
"Hey claude, I get this error message: <X>", and it'll often find the root cause quicker than I could.
"Hey claude, anything I could do to improve Y?", and it'll struggle beyond the basics that a linter might suggest.
It enthusiastically suggested a library for <work domain> and was all "Recommended" about it, but when I pointed out that the library had been considered and rejected because of <issue>, it understood and wrote up why that library suffered from that issue and why it was therefore unsuitable.
There's a significant blind-spot in current LLMs related to blue-sky thinking and creative problem solving. It can do structured problems very well, and it can transform unstructured data very well, but it can't deal with unstructured problems very well.
That may well change, so I don't want to embed that thought too deeply into my own priors, because the LLM space seems to evolve rapidly. I wouldn't want to find myself blind to progress because I'd written the technology off for a class of problems.
But right now, the best way to help an LLM is have a deep understanding of the problem domain yourself, and just leverage it to do the grunt-work that you'd find boring.