Just Talk to It – The No-Bs Way of Agentic Engineering
22 comments · October 15, 2025
barrkel
timr
On the contrary, this doesn’t sound impressive at all. It sounds like a cowboy coder working on relatively small projects.
300k LOC is not particularly large, and this person's writing and thinking (and stated workflow) are so scattered that I'm basically 100% certain that it's a mess. I'm using all of the same models, the same tools, etc., and (importantly) reading all of the code, and I have 0% faith in any of these models to operate autonomously (also, my opinion on the quality of GPT-5 vs Claude vs other models is wildly different).
There’s a huge disconnect between my own experience and what this person claims to be doing, and I strongly suspect that the difference is that I’m paying attention and routinely disgusted by what I see.
CuriouslyC
To be fair, codebases are bimodal, and 300k is large for the smaller part of the distribution. Large enterprise codebases tend to be monorepos with a ton of generated code and a lot of duplicated functionality for different environments, so the 10-100 million line claims need to be taken with a grain of salt; many of the subprojects in them are well below 300k even if you pull in defs.
sarchertech
300k especially isn’t impressive if it should have been 10k.
timr
Yes, well put. And that’s a common failure mode.
N_Lens
I'm currently satisfied with Claude Code, but this article seems to sing the praises of Codex. I'm dubious whether it's actually superior or whether this is 'organic marketing' by OAI (given that they undoubtedly do this, among other shady practices).
I'll give codex a try later to compare.
CuriouslyC
Use of the bell curve for this meme considered harmful.
If you're going to use AI like that, it's not a clear win over writing the code yourself (unless you're a mid programmer). The whole point of AI is to automate shit, but you've planted a flag on the minimal level of automation you're comfortable with and proclaimed a Pareto frontier that doesn't exist.
srameshc
Every time I read something like this, I ask myself what I am doing wrong. And I have tried all kinds of AI tools. But I am not even close to claiming that AI writes 50% of my code. My work, which sometimes includes feature enhancements and maintenance, is where I get even less out of it. I have to be extremely careful and make sure nothing unwanted has been added that I am unaware of. Maybe it's just me and I am not yet good enough to get to 100% AI code generation.
fhennig
I'm in the same boat. I still find that the model makes mistakes or solves things in a less-than-ideal way. Maybe the future is to just not care, but for now I want to maintain the level of quality that the codebase currently has.
I think it's good to keep up with what early adopters are doing, but I'm not too fussed about missing something. Plugins are a good example: a few weeks ago there was a post on HN where someone said they were using 18 or 25 or whatever plugins and it was the future; now this person says they are using none. I'm still waiting for the dust to settle; I'm not in a rush.
troupo
You're not doing anything wrong. You have to read past hyperbole.
Here's how the article starts: "Agentic engineering has become so good that it now writes pretty much 100% of my code. And yet I see so many folks trying to solve issues and generating these elaborated charades instead of getting sh*t done."
Here's how it continues:
- I run between 3-8 in parallel
- My agents do git atomic commits, I iterated a lot on the agents file: https://gist.github.com/steipete/d3b9db3fa8eb1d1a692b7656217...
- I currently have 4 OpenAI subs and 1 Anthropic sub, so my overall costs are around 1k/month for basically unlimited tokens.
- My current approach is usually that I start a discussion with codex, I paste in some websites, some ideas, ask it to read code, and we flesh out a new feature together.
- If you do a bigger refactor, codex often stops with a mid-work reply. Queue up continue messages if you wanna go away and just see it done
- When things get hard, prompting and adding some trigger words like “take your time” “comprehensive” “read all code that could be related” “create possible hypothesis” makes codex solve even the trickiest problems.
- My Agent file is currently ~800 lines long and feels like a collection of organizational scar tissue. I didn’t write it, codex did.
It's the same magical incantations and "elaborated charades" as everyone else's. "The No-BS Way of Agentic Engineering" is full of BS and has nothing concrete except a single link to a bunch of incantations for agents. No idea what his actual "website + tauri app + mobile app" is that he built 100% with AI, but depending on the actual functionality, after burning $1,000 a month on tokens you may indeed have a fully functioning app in React + TypeScript with little human supervision.
1718627440
> $1000 a month
Yeah at this point you could hire a software developer.
TheMrZZ
I don't know how much SWEs get paid in your area, but I sure hope it's not $1,000/month.
Though I agree that I don't (yet) believe these "AI writes all my code for me" statements.
squirrel
I have to imagine that like pair programming, this multi-AI approach would be significantly more tiring than one-window, one-programmer coding. Do you have to force yourself to take breaks to keep up your stamina?
XenophileJKO
Well, I don't use as many instances as they do, but codex, for example, will take some time researching the code. It has mostly just become a practice so that I don't have to wait for the model; I just context switch to a different one. It probably helps that I have ADHD; I don't pay much of a cost to ping-pong around.
So I might tell one to look back in the git history to when something was removed and add it back into a class. So it will figure out what commit added it, what removed it, and then add the code back in.
While that terminal is doing that, on another I can kick off another agent to make some fixes for something else that I need to knock out in another project.
I just ping pong back to the first window to look at the code and tell it to add a new unit test for the new possible state inside the class it modified and I'm done.
While working, I may also periodically have a question about a best practice or something similar, which I'll kick off in the browser and leave running to read later.
This is not draining, and I keep a flow because I'm not sitting and waiting on something; they are waiting on me to context switch back.
throw-10-13
The human context window is the limiting factor.
That and testing/reviewing the insane amounts of ai slop this method generates.
sarchertech
Another person building zero-stakes tools for AI coding. 300k LOC also isn't impressive if it should have been 10k.
philipp-gayret
> But Claude Code now has Plugins
> Do you hear that noise in the distance? It’s me sigh-ing. (...) Yes, maintaining good documents for specific tasks is a good idea. I keep a big list of useful docs in a docs folder as markdown.
I'm not that familiar with Claude Code Plugins, but it looks like they allow integrations with Hooks, which is a lot more powerful than just giving more context. Context is one thing, but Hooks let you codify guardrails. For example, where I work we have a setup for Claude Code that guides it through common processes, like how to work with Terraform, Git, or dependency management, including whitelisting or recommending dependencies. You can't guarantee this just by slapping on more context. With Hooks you can both auto-approve or auto-deny _and_ give back guidance when doing so; for me this is a killer feature of Claude Code that lets it act more intelligently without having to rely on it following context or polluting the context window.
Cursor recently added a feature much like Claude Code's hooks, I hope to see it in Codex too.
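To make the guardrail idea concrete, here's a minimal sketch of the decision logic such a hook might run, assuming (as I understand Claude Code's hook convention) that a PreToolUse hook receives the pending tool call as JSON and can emit an approve/block decision with a reason. The event field names and the specific rules are illustrative, not a verbatim production setup:

```python
import json

def decide(event):
    """Return a hook decision dict for a PreToolUse-style event."""
    tool = event.get("tool_name", "")
    command = event.get("tool_input", {}).get("command", "")
    # Deny destructive Terraform runs and steer the agent instead.
    if tool == "Bash" and "terraform apply" in command:
        return {"decision": "block",
                "reason": "Run 'terraform plan' and post the diff for review first."}
    # Auto-approve read-only git commands so the session isn't interrupted.
    if tool == "Bash" and command.startswith(("git status", "git log", "git diff")):
        return {"decision": "approve", "reason": "read-only git command"}
    # Empty output falls through to the normal permission prompt.
    return {}

# In a real hook the event arrives as JSON on stdin and the decision is
# printed as JSON on stdout; here we just exercise the logic inline.
sample = {"tool_name": "Bash",
          "tool_input": {"command": "terraform apply -auto-approve"}}
print(json.dumps(decide(sample)))
```

The point being made above is exactly the "deny _and_ give back guidance" branch: the reason string goes back to the model, so it can self-correct instead of just hitting a wall.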
hansmayer
So, just trying to understand this: he admits the code is slop and in the same sentence states that agents (which created the slop in the first place) also refactor it? Where is the logic in that?
fhd2
I feel that in this context, refactoring has lost its meaning a bit. Sure, it's often used to mean changes that don't affect semantics. But originally, the idea was that you make a quick change to solve the problem / test the idea, and then spend some time properly integrating the changes into the existing system.
LLMs struggle with simplicity in my experience, so they struggle with the first step. They also lack the sort of intelligence required to understand (let alone evolve) the system's design, so they will struggle with the second step as well.
So maybe what's meant here is not refactoring in the original meaning, but rather "cleanup". You can do it in the original way with LLMs, but that means you'll have to be incredibly micro manage-y, in my experience. Any sort of vibe coding doesn't lead to anything I'd call refactoring.
sebstefan
"So, just trying to understand this: he admits the code is buggy and in the same sentence states that he (the engineer who created the bugs in the first place) also debugs it? Where is the logic in that?"
grim_io
Let him who does not produce slop in the first iteration cast the first stone.
It all sounds somewhat impressive (300k lines written and maintained by AI) but it's hard to judge how well the experience transfers without seeing the code and understanding the feature set.
For example, I have some code which is a series of integrations with APIs and some data entry and web UI controls. AI does a great job, it's all pretty shallow. The more known the APIs, the better able AI is to fly through that stuff.
I have other code which is well factored and a single class does a single thing and AI can make changes just fine.
I have another chunk of code, a query language, with a tokenizer, parser, syntax tree, some optimizations, and it eventually constructs SQL. Making changes requires a lot of thought from multiple angles and I could not safely give a vague prompt and expect good results. Common patterns need to fall into optimized paths, and new constructs need consideration about how they're going to perform, and how their syntax is going to interact with other syntax. You need awareness not just of the language but also the schema and how the database optimizes based on the data distribution. AI can tinker around the edges but I can't trust it to make any interesting changes.
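A toy version of that kind of layered pipeline shows why a vague prompt is risky: a tiny filter language is tokenized, parsed into an AST, and lowered to SQL, and a change at any one layer silently breaks the others. Everything here (the grammar, the names) is illustrative, not the commenter's actual code:

```python
import re

def tokenize(text):
    # quoted strings, identifiers/numbers, comparison operators
    return re.findall(r"'[^']*'|\w+|[<>=]+", text)

def parse(tokens):
    """Parse "field op value (and field op value)*" into an AST."""
    def condition(i):
        field, op, value = tokens[i:i + 3]
        return ("cond", field, op, value), i + 3
    node, i = condition(0)
    while i < len(tokens) and tokens[i].lower() == "and":
        right, i = condition(i + 1)
        node = ("and", node, right)
    return node

def to_sql(node):
    """Lower the AST to a SQL WHERE fragment, parenthesizing each branch."""
    if node[0] == "cond":
        _, field, op, value = node
        return f"{field} {op} {value}"
    _, left, right = node
    return f"({to_sql(left)} AND {to_sql(right)})"

ast = parse(tokenize("age > 30 and name = 'bob'"))
print(to_sql(ast))  # (age > 30 AND name = 'bob')
```

Even in this toy, adding a new operator means touching the token regex, the parser, and the SQL lowering in a coordinated way; in the real thing you'd also have to reason about how the new construct hits the database's optimized paths, which is exactly the cross-layer judgment the comment says AI can't be trusted with.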