
What makes Claude Code so damn good

149 comments · August 23, 2025

sdsd

Oof, this comes at a hard moment in my Claude Code usage. I'm trying to have it help me debug some Elastic issues on Security Onion, but after a few minutes it spits out a zillion lines of obfuscated JS and says:

  Error: kill EPERM
      at process.kill (node:internal/process/per_thread:226:13)
      at Ba2 (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19791)
      at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19664
      at Array.forEach (<anonymous>)
      at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19635
      at Array.forEach (<anonymous>)
      at Aa2 (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19607)
      at file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:19538
      at ChildProcess.W (file:///usr/local/lib/node_modules/@anthropic-ai/claude-code/cli.js:506:20023)
      at ChildProcess.emit (node:events:519:28) {
    errno: -1,
    code: 'EPERM',
    syscall: 'kill'
  }
I'm guessing one of the scripts it runs kills Node.js processes, and that inadvertently kills Claude as well. Or maybe it feels bad that it can't solve my problem and commits suicide.

In any case, I wish it would stay alive and help me lol.

schmookeeg

Claude and some of the edgier parts of LocalStack are not friends either. It's pretty okay at Rust, which surprised me.

It makes me think that the language/platform/architecture that is "most known" by LLMs will soon be the preferred one -- sort of a homogenization of technologies by LLM usage. Because if you can be 10x as successfully vibey in, say, Node.js versus Elixir or Go -- well, why would you opt for those in a greenfield project at all? Particularly if you aren't a tech shop and that choice allows you to use junior coders as if they were midlevel or senior.

yc-kraln

I get this issue when it uses sudo to run a process with root privileges, and then times out.
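
For what it's worth, the failure mode is easy to reproduce in plain Node: kill(2) returns EPERM when an unprivileged process tries to signal a root-owned one, and an unguarded process.kill in a cleanup loop throws exactly like the trace above. A minimal sketch (my guess at the pattern, not Claude Code's actual code):

  // epermDemo.js -- run as a non-root user
  // PID 1 is owned by root, so signaling it is not permitted;
  // process.kill() then throws { code: 'EPERM', syscall: 'kill' }.
  try {
    process.kill(1, 'SIGTERM');
  } catch (err) {
    // A cleanup loop that doesn't catch this takes the whole CLI down.
    console.error('kill failed:', err.code); // prints: kill failed: EPERM
  }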

triyambakam

I would try upgrading or wiping away your current install and re-installing it. There might be some cached files somewhere that are in a bad state. At least that's what fixed it for me when I recently came across something similar.

sixtyj

Jumping to another LLM helps me find out what happened. *This is not official advice :)

idontwantthis

I have had zero good results with any LLM and Elasticsearch. Everything it spits out is a hallucination because there aren't many complete, in-context examples of anything on the internet.

OtherShrezzing

I think it's just that the base model is good at real-world coding tasks, as opposed to the types of coding tasks in the common benchmarks.

If you use GitHub Copilot - which has its own system level prompts - you can hotswap between models, and Claude outperforms OpenAI’s and Google’s models by such a large margin that the others are functionally useless in comparison.

ec109685

Anthropic has opportunities to optimize their models/prompts during reinforcement learning, so the advice from the article to stay close to what works in Claude Code is valid, and probably applies more strongly to Anthropic models than the same techniques would to others.

With a subscription plan, Anthropic is highly incentivized to be efficient in their loops beyond just making it a better experience for users.

ahmedhawas123

Thanks for sharing this. At a time when there is a rush toward multi-agent systems, it's helpful to see how an LLM-first organization is going after it. Lots of the design aspects here are things I experiment with day to day, so it's good to see others use them as well.

A few takeaways for me from this:

(1) Long prompts are good. Don't forget basic things like explaining in the prompt what the tool is, how to help the user, etc.

(2) Tool calling is basic af; you need more context (when to use, when not to use, etc.).

(3) Using messages as the state of the system's memory is OK. I've thought about fancier approaches (persisting dataframes, passing variables between steps, etc.), but as context windows grow, plain messages should be fine.
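
On (2) concretely: here's roughly what a tool with enough context looks like, using the Anthropic Messages API tool shape. The name and wording are made up; the point is that the description carries the when-to-use/when-not-to-use guidance:

  // A tool whose description does real work, not just a signature.
  // ("search_logs" and its wording are illustrative, not a real tool.)
  const searchLogsTool = {
    name: "search_logs",
    description:
      "Search application logs by keyword. Use this when the user asks " +
      "why something failed at runtime. Do not use it for questions about " +
      "source code; use read_file instead. Returns at most 50 lines.",
    input_schema: {
      type: "object",
      properties: {
        query: { type: "string", description: "Keyword or phrase to find" },
        since: { type: "string", description: "Optional ISO-8601 lower bound" },
      },
      required: ["query"],
    },
  };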

nuwandavek

(author of the blogpost here) Yeah, you can extract a LOT of performance from the basics and don't have to do any complicated setup for ~99% of use cases. Keep the loop simple, have clear tools (it is ok if tools overlap in function). Clarity and simplicity >>> everything else.
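
For a sense of how little scaffolding that implies, the core loop is essentially the below. This is a sketch using the Anthropic Node SDK (error handling, context compaction, permission checks, etc. all omitted), not Claude Code's actual implementation:

  // agentLoop.js -- the whole "agent" is a while loop around the API
  import Anthropic from "@anthropic-ai/sdk";

  const client = new Anthropic();

  async function runAgent(userPrompt, tools, executeTool) {
    const messages = [{ role: "user", content: userPrompt }];
    while (true) {
      const response = await client.messages.create({
        model: "claude-sonnet-4-20250514",
        max_tokens: 4096,
        messages,
        tools,
      });
      messages.push({ role: "assistant", content: response.content });
      if (response.stop_reason !== "tool_use") {
        return response.content; // the model is done
      }
      // Run each requested tool and hand the results straight back.
      const results = [];
      for (const block of response.content) {
        if (block.type === "tool_use") {
          results.push({
            type: "tool_result",
            tool_use_id: block.id,
            content: await executeTool(block.name, block.input),
          });
        }
      }
      messages.push({ role: "user", content: results });
    }
  }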

samuelstros

Does a framework like Vercel's AI SDK help, or is handling the loop + tool calling so straightforward that a framework is overcomplicating things?

For context, I want to build a Claude Code-like agent in a WYSIWYG markdown app. That's how I stumbled on your blog post :)

the_mitsuhiko

Unfortunately, Claude Code is not open source, but there are some tools to better figure out how it is working. If you are really interested in how it works, I strongly recommend looking at Claude Trace: https://github.com/badlogic/lemmy/tree/main/apps/claude-trac...

It dumps out a JSON file as well as a very nicely formatted HTML file that shows you every single tool and all the prompts that were used for a session.

CuriouslyC

https://github.com/anthropics/claude-code

You can see the system prompts too.

It's all in how the base model has been trained to break tasks into discrete steps and work through them patiently, with some robustness to failure cases.

the_mitsuhiko

> https://github.com/anthropics/claude-code

That repository does not contain the code. It's just used for the issue tracker and some example hooks.

CuriouslyC

It's a JavaScript app that gets installed on your local system...

alex1138

What do people think of Google's Gemini (Pro?) compared to Claude for code?

I really like a lot of what Google produces, but they can't seem to keep a product alive without eventually shutting it down, and they can be pretty ham-fisted, both with corporate control (Chrome and corrupt practices) and censorship.

CuriouslyC

Gemini is amazing for taking a merge file of your whole repo, dropping it in there, and chatting about stuff. The level of whole codebase understanding is unreal, and it can do some amazing architectural planning assistance. Claude is nowhere near able to do that.

My tactic is to work with Gemini to build a dense summary of the project and create a high-level plan of action, then take that to GPT-5 and have it try to improve the plan and convert it into a hyper-detailed workflow XML document laying out all the steps to implement the plan, which I then hand to Claude.

This avoids pretty much all of Claude's unplanned bumbling.
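
For anyone wanting to try this: a "merge file" is just the repo concatenated into one document with per-file headers. A throwaway sketch of how I'd build one (adjust the skip list and file types for your project):

  // mergeRepo.js -- concatenate a repo into one file for long-context chat
  import { readdirSync, readFileSync, statSync, writeFileSync } from "node:fs";
  import { join } from "node:path";

  const SKIP = new Set([".git", "node_modules", "dist", "target"]);
  const out = [];

  function walk(dir) {
    for (const name of readdirSync(dir)) {
      if (SKIP.has(name)) continue;
      const path = join(dir, name);
      if (statSync(path).isDirectory()) walk(path);
      else out.push(`\n===== ${path} =====\n` + readFileSync(path, "utf8"));
    }
  }

  walk(process.argv[2] ?? ".");
  writeFileSync("merge.txt", out.join("\n"));
  console.log(`wrote merge.txt (${out.length} files)`);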

koakuma-chan

I don't think Gemini Pro is necessarily worse at coding, but in my experience Claude is substantially better at "terminal" tasks (i.e. working with the model through a CLI in the terminal) and most of the CLIs use Claude, see https://www.tbench.ai/leaderboard.

jsight

For the web UI (chat)? I actually really like Gemini 2.5 Pro.

For the command line tool (Claude Code vs Gemini Code)? It isn't even close. Gemini Code was useless. Claude Code was mostly just slow.

upcoming-sesame

You mean Gemini CLI. Yeah it's confusing

jsight

Thanks, that's the one!

Herring

Yeah, I was also getting much better results from the Gemini web UI compared to the Gemini terminal. Haven't gotten to Claude yet.

jonfw

Gemini is better at helping to debug difficult problems that require following multiple function calls.

I think Claude is much more predictable and follows instructions better - the todo list it manages seems very helpful in this respect.

divan

In my recent tests I found it quite smart at analyzing the bigger picture (e.g. "hey, the test is failing not because of that, but because the whole assumption has changed; let me rewrite this test from scratch"). But it also got stuck a few times ("I can't edit the file, I'm stuck, let me try something completely different"). The biggest difference so far, though, is the communication style - it's a bit... snarky? E.g. comments like "yeah, tests are failing - as I suspected". Why the f would it suspect a failing test on a project it's seeing for the first time? :D

Keyframe

It's doing rather well at thinking, but not at coding. When it codes, often enough it runs in circles and ignores input. Where I find it useful is reading through larger codebases and distilling what I need to find out from them. I even consult Gemini from Claude for certain things. Opus is also like that, btw, but a bit better at coding. Sonnet, though, excels at coding... in my experience, at least.

yomismoaqui

According to the guys from Amp, Claude Sonnet/Opus are better at tool use.

nicce

If you could control the model with the system prompt, it would be very good. But at least so far I have failed miserably. The model is too verbose and too eager to help.

diego_sandoval

It shocks me when people say that LLMs don't make them more productive, because my experience has been the complete opposite, especially with Claude Code.

Either I'm worse than them at programming, to the point that I find an LLM useful and they don't, or they don't know how to use LLMs for coding.

timr

It depends very much on your use case, language popularity, experience coding, and the size of your project. If you work on a large, legacy code base in COBOL, it's going to be much harder than working on a toy greenfield application in React. If your prior knowledge writing code is minimal, the more amazing the results will seem, and vice-versa.

Despite the persistent memes here and elsewhere, it doesn't depend very much on the particular tool you use (with the exception of model choice), how you hold it, or your experience prompting (beyond a bare minimum of competence). People who jump into any conversation with "use tool X" or "you just don't understand how to prompt" are the noise floor of any conversation about AI-assisted coding. Folks might as well be talking about Santeria.

Even for projects that I initiate with LLM support, I find that the usefulness of the tool declines quickly as the codebase increases in size. The iron law of the context window rules everything.

Edit: one thing I'll add, which I only recently realized exists (perhaps stupidly) is that there is a population of people who are willing to prompt expensive LLMs dozens of times to get a single working output. This approach seems to me to be roughly equivalent to pulling the lever on a slot machine, or blindly copy-pasting from Stack Overflow, and is not what I am talking about. I am talking about the tradeoffs involved in using LLMs as an assistant for human-guided programming.

ivan_gammel

Overall I would agree with you, but I'm starting to feel that this "iron law" isn't as simple as that. After all, humans have a limited "context window" too -- we don't remember every small detail on a large project we have been working on for several years. Loose coupling and modularity help us, and can help an LLM keep the size of the task manageable, if you don't ask it to rebuild the whole thing. It's not the size that makes LLMs fail, but something else -- probably the same things that make us fail.

timr

Humans have a limited short-term memory. Humans do not literally forget everything they've ever learned after each Q&A cycle.

(Though now that I think of it, I might start interrupting people with “SUMMARIZING CONVERSATION HISTORY!” whenever they begin to bore me. Then I can change the subject.)

Aurornis

I’ve found LLMs useful at some specific tasks, but a complete waste of time at others.

If I only ever wrote small Python scripts, did small to medium JavaScript front end or full stack websites, or a number of other generic tasks where LLMs are well trained I’d probably have a different opinion.

Drop into one of my non-generic Rust codebases that does something complex and I could spend hours trying to keep the LLM moving in the right direction and away from all of the dead ends and thought loops.

It really depends on what you’re using them for.

That said, there are a lot of commenters who haven’t spent more than a few hours playing with LLMs and see every LLM misstep as confirmation of their preconceived ideas that they’re entirely useless.

breuleux

Speaking for myself, LLMs are reasonably good at writing tests or adapting existing structures, but they are not very good at doing what I actually want to do (design, novelty, trying to figure out the very best way to do a thing). I gain some productivity from the reduction of drudgery, but that's never been much of a bottleneck to begin with.

The thing is, a lot of the code that people write is cookie-cutter stuff. Possibly the entirety of frontend development. It's not copy-paste per se, but it is porting and adapting common patterns on differently-shaped data. It's pseudo-copy-paste, and of course AI's going to be good at it, this is its whole schtick. But it's not, like, interesting coding.

SXX

This heavily depends on what project and stack you're working on. LLMs are amazing for building MVPs or self-contained microservices on modern, popular, well-defined stacks. Every single legacy or proprietary dependency and every extra MCP makes them less usable. It gets much worse if the codebase itself is legacy, unless you can literally upload documentation for each used API into context.

A lot of programmers work on maintaining huge monolith codebases, built on top of 10-year-old tech using obscure proprietary dependencies. Usually the models don't have most of that code to begin with, and the APIs are often not well documented.

jsight

What is performance like for you? I've been shocked at how many simple requests turn into >10 minutes of waiting.

If people are getting faster responses than this regularly, it could account for a large amount of the difference in experiences.

totalhack

Agree with this, though I've mostly been using Gemini CLI. Some of the simplest things, like applying a small diff, take many minutes: it loses track of the current file state and either takes ages to figure it out or fails entirely.

tjr

What do you work on, and what do LLMs do that helps?

(Not disagreeing, but most of these comments -- on both sides -- are pretty vague.)

SXX

For one, LLMs are good for building game prototypes. When all you care about is checking whether something is fun to play, it really doesn't matter how much tech debt you generate in the process.

And since you start from scratch every time, you can generate all the documentation before you ever start generating code. When the LLM slop becomes overwhelming, you just drop it and go check out the next idea.

lambda

It can be more than one reason.

First of all, keep in mind that research has shown that people generally overestimate the productivity gains of LLM coding assistance. Even when using a coding assistant makes them less productive, they feel like they are more productive.

Second, yeah, experience matters, both with programming and with LLM coding assistants. The better you are, the less helpful the coding assistant will be; it can take less work to just write what you want than to convince an LLM to do it.

Third, some people are more sensitive to the kinds of errors or style that LLMs tend to produce. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live up to my personal standards.

pton_xd

> Third, some people are more sensitive to the kinds of errors or style that LLMs tend to produce. I frequently can't stand the output of LLMs, even if it technically works; it doesn't live up to my personal standards.

I've noticed the stronger my opinions are about how code should be written or structured, the less productive LLMs feel to me. Then I'm just fighting them at every step to do things "my way."

If I don't really have an opinion about what's going on, LLMs churning out hundreds of lines of mostly-working code is a huge boon. After all, I'd rather not spend the energy thinking through code I don't care about.

Uehreka

> research has shown that people generally overestimate the productivity gains of LLM coding assistance.

I don’t think this research is fully baked. I don’t see a story in these results that aligns with my experience and makes me think “yeah, that actually is what I’m doing”. I get that at this point I’m supposed to go “the effect is so subtle that even I don’t notice it!” But experience tells me that’s not normally how this kind of thing works.

Perhaps we’re still figuring out how to describe the positive effects of these tools or what axes we should really be measuring on, but the idea that there’s some sort of placebo effect going on here doesn’t pass muster.

ta12653421

The productivity boost is unbelievable! If you handle it right, it's a boon - it's like having 3 junior devs at hand. And I'm talking about using the web interface.

I guess most people are not paying and therefore can't use the project space (one of the best features), which unleashes its full magic.

Even though I'm currently without a job, I'm still paying because it helps me.

ta12653421

LOL why do I get downvoted for explaining my experience? :-D

pawelduda

Because you posted a success story about LLM usage on HN

athrowaway3z

> "THIS IS IMPORTANT" is still State of the Art

I had similar problems until I saw the advice "Don't say what it shouldn't do; focus on what it should do."

I.e., make sure that when it reaches for the 'thing', it has the alternative in context - for example, instead of "do NOT write raw SQL here", say "use the query builder for all database access".

Haven't had those problems since then.

amelius

I mean, if advice like this worked, then why wouldn't Anthropic let the LLM say it, for instance?

1zael

I've literally built the entire MVP of my startup on Claude Code and now have paying customers. I've got an existential worry that I'm going to have a SEV incident that will bring the whole house of cards down, but until then I'm constantly leveraging Claude for fixing security vulnerabilities, implementing test-driven development, and planning out the software architecture in accordance with my long-term product roadmap. I hope this story becomes more and more common as time passes.

ComputerGuru

> but until then I'm constantly leveraging Claude for fixing security vulnerabilities

That it authored in the first place?

dpe82

Do you ever fix your own bugs?

janice1999

Humans have the capacity to learn from their own mistakes without redoing a lifetime of education.

ComputerGuru

Bugs, yes. Security vulnerabilities? Rarely enough that it wouldn’t make my HN list. It’s not remotely hard to avoid the most common issues.

lajisam

“Implementing test-driven development, and planning out software architecture in accordance with my long-term product roadmap” can you give some concrete examples of how CC helped you here?

imiric

Well, don't be shy, share what CC helped you build.

orsorna

You're speaking to a wall. For whatever reason, the type of people to espouse the wonders of their LLM workflow never reveal what kind of useful output they get from it, never mind substantiate their claims.

foobarbecue

> I hope this story becomes more and more common as time passes.

Why????????????

Why do you want devs to lose cognizance of their own "work" to the point that they have "existential worry"?

Why are people like you trying to drown us all in slop? I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.

Is it because you're lazy?

BeetleB

> I bet you could replace your slop pile with a tenth of the lines of clean code, and chances are it'd be less work than you think.

Actually, no. When LLMs produce good, working code, it also tends to be efficient (in terms of lines, etc).

May vary with language and domain, though.

stavros

Eh, when is that, though? I'm always worrying about the bugs that I haven't noticed if I don't review the changes. The other day, I gave it a four-step algorithm to implement, and it skipped three of the steps because it didn't think they were necessary (they were).

Mallowram

second

lifestyleguru

duh, I ordered Claude Code to simply transfer money monthly to my bank account and it does.

syntaxing

I don’t know if I’m doing something wrong. I was using Sonnet 4 with GitHub Copilot. About a week ago I switched to Claude Code. I find GitHub Copilot solves problems and bugs way better than Claude Code. For some reason, Claude Code seems very lazy. Has anyone experienced something similar?

libraryofbabel

The consensus is the opposite: most people find Copilot does less well than Claude Code, with both using Sonnet 4. Without discounting your experience, you’ll need to give us more detail about what exactly you were trying to do (what problem, what prompt) and what you mean by “lazy” if you want any meaningful advice, though.

sojournerc

Where do you find this "consensus"?

rsanek

Read HN threads, talk to people using AI a lot. I have the same perception.

StephenAshmore

It may be a configuration thing. I've found quite the opposite. GitHub Copilot using Sonnet 4 will not manage context very well, quite frequently resorting to running terminal commands to search for code even when I gave it the exact file it's looking for in the Copilot context. Claude Code, for me, is usually much smarter when it comes to reading code and then applying changes across a lot of files. I also have it integrated into the IDE so it can make visual changes in the editor, similar to GitHub Copilot.

syntaxing

I do agree with you; GitHub Copilot uses more tokens, like you mentioned, with redundant searches. But at the end of the day, it solves the problem. Not sure the cost outweighs the benefit compared to Claude Code, though. Going to try Claude Code more and see if I'm prompting it incorrectly.

cosmic_cheese

I haven’t tried other LLMs but have a fair amount of experience with Claude Code, and there are definitely times when you have to be explicit about the route you want it to take and tell it not to take shortcuts.

It’s not consistent, though. I haven’t figured out what they are, but it feels like there are circumstances where it’s more prone to doing ugly, hacky things.

wordofx

I have most of the tools set up so I can switch between them and test which is better. So far Amp and Claude Code are on top. GH Copilot is the worst. I know MS is desperately trying to copy its competitors, but the reality is, they are just copying features. They haven’t solved the system prompts, so the outcomes are just inferior.

gervwyk

We’re considering building a coding agent for Lowdefy[1], a framework that lets you build web apps with YAML config.

For those who’ve built coding agents: do you think LLMs are better suited for generating structured config vs. raw code?

My theory is that agents producing schema-valid YAML/JSON could be more reliable than raw code generation. The output is constrained, easier to validate, and when it breaks, you can actually debug it.

I keep seeing people create apps with vibe-coding tools but then get stuck when they need to modify the generated code.

Curious if others think config-based approaches are more practical for AI-assisted development.

[1] https://github.com/lowdefy/lowdefy

hamandcheese

> easier to validate

This is essential to productivity for humans and LLMs alike. The more reliable your edit/test loop, the better your results will be. It doesn't matter if it's compiling code, validating yaml, or anything else.

To your broader question. People have been trying to crack the low-code nut for ages. I don't think it's solvable. Either you make something overly restrictive, or you are inventing a very bad programming language which is doomed to fail because professional coders will never use it.

gervwyk

Good point. I'm making the assumption that if the LLM has a more limited feature space to produce as output, then the output is more predictable, and thus changes are faster to comprehend. Similar to when devs use popular libraries: there is a well-known abstraction, and therefore less "new" code to comprehend, since I see familiar functions, making the code predictable to me.

ec109685

I wouldn’t get hung up on one-shotting anything. Output to a format that can be machine-verified, ideally a format there are plenty of industry examples for.

Then add a grader step to your agentic loop that is triggered after the files are modified. Give feedback to the model if there are any errors and it will fix them.
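
That grader step can be tiny. A sketch of what "machine verified, feed errors back" might look like for generated YAML config, assuming the ajv and js-yaml packages and that your framework already ships a JSON Schema for its config:

  // gradeConfig.js -- validate model-generated YAML, produce feedback
  import Ajv from "ajv";
  import { load } from "js-yaml";
  import { readFileSync } from "node:fs";

  const schema = JSON.parse(readFileSync("config.schema.json", "utf8"));
  const validate = new Ajv({ allErrors: true }).compile(schema);

  // Returns null on success, or a message to append to the agent's context.
  export function grade(yamlText) {
    let doc;
    try {
      doc = load(yamlText);
    } catch (err) {
      return `YAML parse error: ${err.message}`;
    }
    if (validate(doc)) return null;
    return (
      "Config failed schema validation:\n" +
      validate.errors.map((e) => `- ${e.instancePath} ${e.message}`).join("\n")
    );
  }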

amelius

How do you specify callbacks?

Config files should be mature programming languages, not YAML/JSON files.

gervwyk

Callbacks: blocks (React components) can register events with action chains (a sequential list of async functions) that are called when the event is triggered. So it is defined in the React component. This abstraction of blocks, events, actions, operations, and requests is the only abstraction required in the schema to build fully functional web apps.

Might sound crazy, but we've built full web apps in just YAML. We've been doing this for about 5 years now, and it helps us scale to build many web apps, fast, that are easy to maintain. We at Resonancy[1] have found many benefits in doing so. I should write more about this.

[1] - https://resonancy.io

myflash13

CC is so damn good I want to use its agent loop in my agent loop. I'm planning to build a browser agent for some specialized tasks, and I'm literally just bundling a Docker image with Claude Code, a headless browser, and the Playwright MCP server.

yumraj

I made insane progress with CC over the last several weeks, but lately I've noticed progress stalling.

I’m in the middle of some refactoring/bug fixing/optimization, but it’s constantly running into issues, making half-baked changes, not able to fix regressions, etc. Still trying to figure out how to make it do a better job. Might have to break the work into smaller chunks or something. It's been a pretty frustrating couple of weeks.

If anyone has pointers, I’m all ears!!

imiric

> If anyone has pointers, I’m all ears!!

Give programming a try, you might like it.

yumraj

Yeah, have been doing that for 30 years.

Next…
