What happens when coding agents stop feeling like dialup?
16 comments · September 21, 2025
SirensOfTitan
> Each of these 'phases' of LLM growth is unlocking a lot more developer productivity, for teams and developers that know how to harness it.
I still find myself incredibly skeptical that LLM use is increasing productivity. Because AI reduces cognitive engagement with tasks, it feels to me like AI increases perceived productivity but actually decreases it in many cases (and this probably compounds as AI-generated code piles up in a codebase, since there isn't an author who can attach context as to why decisions were made).
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity. In my mind, like TikTok or online dating, AI is just another product motion toward maximizing comfort over all else, as cognitive engagement is difficult and not always pleasant. In a nutshell, it is another instant-gratification product from tech.
That's not to say I don't use AI, but I use it primarily as search, to see what is out there. If I use it for coding at all, I tend to use it mainly for code review. Even when AI does a good job implementing a feature, its code feels alien to me and I feel uncomfortable merging it unless I've put in the same level of cognitive engagement I apply while writing software myself.
polotics
My experience is exactly the opposite of "AI reduces cognitive engagement with tasks": I have to constantly be on my toes to follow what the LLMs are proposing and make sure they aren't getting off track, over-engineering things, or heading into something that's likely to turn into a death loop several turns later. AI use definitely makes my brain run warmer; I'll have to get a FLIR camera to prove it, I guess...
walleeee
So it reduces cognitive engagement with the actual task at hand, and forces a huge share of attention into hand-holding.
I don't think you two are disagreeing.
I have noticed this personally. It's a lot like the fatigue you get from scrolling online for too long. Engagement is shallower but no less mentally exhausting than reading a book; you end up feeling more drained because of the involuntary attention-scattering.
add-sub-mul-div
> I realize the author qualified his or her statement with "know how to harness it," which feels like a cop-out I'm seeing an awful lot in recent explorations of AI's relationship with productivity.
"You're doing AI wrong" is the new "you're doing agile wrong" which was the new "you're doing XP wrong".
pjmlp
Unfortunately, many of us are old enough to know how those "wrong ways" eventually became the new normal.
bitwize
More like the new "you're holding it wrong"
dist-epoch
> AI is just another product motion toward comfort maximizing over all things, as cognitive engagement is difficult and not always pleasant. In a nutshell, it is another instant gratification product from tech.
For me it's the exact opposite. When coding without AI, you notice various things that could be improved, and you can think about the architecture and what features you want next.
But AI codes so fast that it's a real struggle to keep up with it. I feel like I need to focus ten times harder to think about features and architecture quickly enough that the AI isn't waiting on me most of the time.
breakfastduck
It depends what environment you're operating within.
I've used LLMs for code gen at work as well as for personal stuff.
At work, primarily for quick-and-dirty internal UIs / tools / CLIs, it's been fantastic, but we've not unleashed it on our core codebases. It's worth noting that everything we've got out of it is stuff we'd not normally have had the time to work on - so a net positive there.
Outside of work I've built some bespoke software that's almost entirely generated, with human tweaks here and there. Again, it's super useful software for me and some friends to use for planning and managing the music events we put on, and something I'd never normally have the time to build.
So in those ways I see it as massively increasing productivity - to build lower stakes things that would normally just never get done due to lack of time.
joz1-k
From the article:
> Anthropic has been suffering from pretty terrible reliability problems.
In the past, factories used to shut down when there was a shortage of coal for steam engines or when the electricity supply failed. In the future, programmers will have factory holidays when their AI-coding language model is down.
catigula
>in the future
>programmers
Don't Look Up
corentin88
Same as how GitHub or Slack downtimes severely impact productivity.
thw_9a83c
I would argue that dependency on GitHub and Slack is not the same as dependency on AI coding agents. GitHub/Slack are just straightforward tools: you can self-host them or have similar emergency backup tools ready to run locally. But depending on AI agents is like relying on external brains whose knowledge you suddenly don't have if they disappear. Moreover, how many companies could afford to run these models locally? Some of those models aren't even open.
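A minimal sketch of what such an emergency fallback could look like, assuming a local OpenAI-compatible server (e.g. Ollama on localhost:11434); the model names and endpoints here are illustrative, not anyone's actual setup:
```python
import os
import requests

HOSTED_URL = "https://api.openai.com/v1/chat/completions"
LOCAL_URL = "http://localhost:11434/v1/chat/completions"  # Ollama's OpenAI-compatible endpoint

def complete(prompt: str) -> str:
    """Try the hosted model first; fall back to a local one if the provider is down."""
    messages = [{"role": "user", "content": prompt}]
    try:
        resp = requests.post(
            HOSTED_URL,
            json={"model": "gpt-4o-mini", "messages": messages},  # model name illustrative
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            timeout=10,
        )
        resp.raise_for_status()
    except (requests.RequestException, KeyError):
        # Provider outage (or missing key): the "factory holiday" case above.
        resp = requests.post(
            LOCAL_URL,
            json={"model": "llama3.1", "messages": messages},  # whatever model is pulled locally
            timeout=60,
        )
        resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```
The point of the sketch is only that the fallback path needs nothing beyond hardware you already own, which is exactly what the "external brain" dependency lacks.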
infecto
Cursor imo is still one of the only real players in the space. I don't like the Claude Code style of coding; I feel too disconnected. Cursor is the right balance for me, and it is generally pretty darn quick; I only expect it to get quicker. I hope more players pop up in this space.
mmmllm
Speed is not a problem for me. I feel they are at the right speed now, where I am able to see what the model is doing in real time and check that it's on the right track.
Honestly, if it were any faster I would want a feature to slow it down, as I often intervene when it's going in the wrong direction.
everyone
I've used ChatGPT to help me learn new stuff about which I know nothing (this is its best use imo) and also to write boilerplatey functions, e.g. "write a function that does precisely X" (see the sketch below).
Having it integrated into my IDE sounds like a nightmare though. Even the "IntelliSense" stuff in Visual Studio is annoying af and I have to turn it off to stop it auto-wrecking my code (e.g. adding tonnes of pointless using statements). I don't know how an integrated LLM would actually work, but I defo don't want it.
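For concreteness, a hypothetical instance of the "function that does precisely X" kind of request: the name group_by and the whole spec are made up for illustration, but it's the shape of boilerplate this works well for:
```python
# Prompt: "write a function that groups a list of dicts by the value of a given key"
from collections import defaultdict
from typing import Any

def group_by(rows: list[dict[str, Any]], key: str) -> dict[Any, list[dict[str, Any]]]:
    """Group rows by rows[i][key]; rows missing the key are grouped under None."""
    grouped: dict[Any, list[dict[str, Any]]] = defaultdict(list)
    for row in rows:
        grouped[row.get(key)].append(row)
    return dict(grouped)

# usage
events = [{"city": "Leeds", "name": "A"}, {"city": "York", "name": "B"}]
assert group_by(events, "city")["Leeds"] == [{"city": "Leeds", "name": "A"}]
```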
howmayiannoyyou
I expected to see OpenAI, Google, Anthropic, etc. ship desktop applications with integrated local utility models and sandboxed MCP functionality to cut unnecessary token and task flow, and I still expect this to happen at some point (a rough sketch of that routing idea is below).
The biggest long-term risk to the AI giants' profitability is increasingly capable desktop GPUs and CPUs combined with the improving performance of local models.
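A rough sketch of that local-first routing, purely as an assumption about how such a desktop app might work; the task names, token threshold, and backend names are invented:
```python
# Hypothetical routing policy for a desktop agent app: keep small, well-bounded
# utility tasks on a local model, escalate everything else to the hosted one.
from dataclasses import dataclass

@dataclass
class Backend:
    name: str
    local: bool

LOCAL = Backend("local-8b", local=True)        # runs on the desktop GPU
HOSTED = Backend("hosted-frontier", local=False)

UTILITY = {"summarize-diff", "write-commit-message", "rename-symbol"}

def pick_backend(task: str, prompt_tokens: int) -> Backend:
    # Cheap, latency-tolerant work never needs to leave the machine,
    # which is the "reduce unnecessary token flow" idea in practice.
    if task in UTILITY and prompt_tokens < 2000:
        return LOCAL
    return HOSTED
```
The design point is that routine tasks never leave the machine, which is precisely where improving desktop hardware bites into hosted-token revenue.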