
The role of developer skills in agentic coding

ikerino

I use Cursor for most of my development these days. This article aligns pretty closely with my experiences. A few additional observations:

1. Anecdotally, AI agents feel stuck somewhere around 2021. If I install newer packages, Claude will revert to outdated packages/implementations that were popular four years ago. This is incredibly frustrating to watch and correct for. Providing explicit instructions for which packages to use can mitigate the problem, but it doesn't solve it.

2. The unpredictability of these missteps makes them particularly challenging. A few months ago, I used Claude to "one-shot" a genuinely useful web app. It was fully featured and surprisingly polished. Alone, I think it would've taken a couple weeks or weekends to build. But, when I asked it to update the favicon using a provided file, it spun uselessly for an hour (I eventually did it myself in a couple minutes). A couple days ago, I tried to spin up another similarly scoped web app. After ~4 hours of agent wrangling I'm ready to ditch the code entirely.

3. This approach gives me the brazenness to pursue projects that I wouldn't have the time, expertise, or motivation to attempt otherwise. Lower friction is exciting, but building something meaningful is still hard. Producing a polished MVP still demands significant effort.

4. I keep thinking about The Tortoise and The Hare. Trusting the AI agent is tempting because progress initially feels so much faster. At the end of the day, though, I'm usually left with the feeling I'd have made more solid progress with slower, closer attention. When building by hand, I rarely find myself backtracking or scrapping entire approaches. With an AI-driven approach, I might move 10x faster but throw away ~70% of the work along the way.

> These experiences mean that by no stretch of my personal imagination will we have AI that writes 90% of our code autonomously in a year. Will it assist in writing 90% of the code? Maybe.

Spot on. The current environment feels like the self-driving car hype cycle. There have been a lot of bold promises (and genuine advances), but I don't see a world in the next 5 years where AI writes useful software by itself.

kristopolous

#1 has an easy fix.

Clone the dependency you want to use in the directory of your code.

Instruct it to go into the directory and look at that code in order to complete task X: "I've got a new directory xyz; it contains a library to do feature abc. I'll need to include it here to do A to function B," and so on.

The weird version-mixing bug will disappear. If it's closed source, then drop in the documentation instead.

You need to line up the breadcrumbs right.

#2 is: "create a patch file that does X. Do not apply it", followed by "apply the previous patch file". Manually splitting the task fixes the attention problem.

Another method is to modify the code. Don't use "to-do"; it will get confused. Instead use something meaningless like 1gwvDn, then at the appropriate place:

[1gwvDn: insert blah here]

Then go to the agent and say

"I've changed the file and given you instructions in the form [1gwvDn:<instructions>]. Go through the code and do each individually.

Then the breadcrumbs are right and it doesn't start deleting giant blocks of code and breaking things.
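For what it's worth, a minimal sketch of what that marker trick can look like in a Python file (the functions here are made up for illustration; the marker string is the one from above):

    import urllib.request

    def fetch_page(url: str) -> str:
        # [1gwvDn: add a timeout and retry once on failure]
        with urllib.request.urlopen(url) as resp:
            return resp.read().decode("utf-8")

    def summarize(text: str) -> str:
        # [1gwvDn: strip HTML tags before counting words]
        words = text.split()
        return f"{len(words)} words"

The follow-up prompt is then exactly the one quoted above, and the agent edits only at the marked spots instead of rewriting whole blocks.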

#3 You will never start anything unless you convince yourself it's going to be easy. I know some people will disagree with this. They're wrong. You need to tell yourself it's doable before you attempt it.

#4 is because we lose ownership of the code and end up playing manager. So we do the human thing of asking the computer to do increasingly trivial things because it's "the computer's" code. Realize you're doing that and don't be dumb about it.

moqizhengz

This is a very typical reply when we see someone pointing out the flaws of AI coding tools: "You are using it wrong; AI can do everything if you prompt it properly."

Yes, it can write everything if I provide enough context, but it ain't 'Intelligence' if context ~= output.

The point here is that providing enough context is itself challenging and requires expertise, which makes AI IDEs unusable for many scenarios.

amputect

We already have a term for prompting a computer in a way that causes it to predictably output useful software; we called that programming, and people on this website used to think that knowing how to do that was a worthwhile field of study.

kristopolous

It's a piece of software, not a magic wand.


infecto

This is a very typical reply when we see someone excited about AI and then a luddite needs to come along and tell them why they should not be so excited and helpful.

I mostly jest, but your comment comes off as quite unhelpful and negative. The person you replied to wasn't blaming the parent comment, just offering helpful tips. I agree that today's AI tools aren't perfect, but I also think it's important for developers to invest time refining their toolset. It's no different from customizing IDE shortcuts. These tools will improve, but if devs aren't willing to tinker with what they use, what's the point?

usrbinbash

> You need to line up the breadcrumbs right.

And in the time it takes me to line up the breadcrumbs to help this thing emulate an actual thought process, I would probably already have finished doing it myself, especially since I speed up the typing-it-all-out part using a much less "clever" and "agentic" AI system.

andai

We do this not because it is easy, but because we thought it would be easy.

airstrike

I need that on a shirt

ozim

This feels like when I explain a cool dev tool like Git to a layperson.

They just roll their eyes and continue making copies of the files they work on.

I just roll my eyes at that explanation, because it feels exactly like additional work I don't want to do. Doing my stuff the old way works right away, without explanation tricks and setting up context for a tool I expect to do the correct thing on the first go.

kristopolous

It's clearly a preference I feel strongly about. I've been programming for over 30 years, btw - I can do this manually. It's a new tool I'm trying to learn.

I was personally clocking in about 1% of the OpenRouter token count every day last year. OpenRouter has grown quite a bit, but I realize I'm certainly in the minority/on the edge here.

dimitri-vs

Just so you know, none of what you said sounds easy and I'm a fairly technical person that uses Cursor and other AI tools all day.

hnlurker22

It's all true, but I'm surprised it took so long to realize this. As someone who is not an early adopter, I decided to give AI a shot at helping with an existing project a few days ago. I immediately noticed almost everything mentioned here. However, the sentiment of everything I've read before was that AI can already replace me. In reality, I enjoy steering it to reach a solution.

sReinwald

> In reality I enjoy steering it to reach a solution.

That resonates with me. It actually brings back some nostalgic memories about setting up and constantly tweaking bots in World of Warcraft (sorry, I was young).

There's something incredibly engaging and satisfying about configuring them just right and then sitting back, watching your minions run around and do your bidding.

I get a similar vibe when I'm working with AI coding assistants. It's less about the (currently unrealistic) hype of full replacement and more about trying to guide these powerful, and often erratic, tools to do what they're supposed to.

For me, it taps into that same core enjoyment of automating and observing a process unfold that I got with those WoW bots. Perhaps unsurprisingly, automation is now a fairly big part of my job.

Nemi

I don't do much development anymore, but this is always how I felt when ORMs like Hibernate came out. At first they were amazing and were used for everything. Eventually it became obvious that as the complexity of the code went up linearly, the difficulty of troubleshooting and optimizing the code went up exponentially.

I learned to use ORMs only for the basic stuff, where they are very useful, and to drop back to hand-coding SQL when things got a little bit complicated.
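For illustration, a minimal sketch of that division of labor, using SQLAlchemy as the example ORM (the table and queries here are invented):

    from sqlalchemy import create_engine, select, text
    from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column

    class Base(DeclarativeBase):
        pass

    class Order(Base):
        __tablename__ = "orders"
        id: Mapped[int] = mapped_column(primary_key=True)
        customer_id: Mapped[int]
        total_cents: Mapped[int]

    engine = create_engine("sqlite://")
    Base.metadata.create_all(engine)

    with Session(engine) as session:
        # Basic CRUD: this is where the ORM earns its keep
        session.add(Order(customer_id=1, total_cents=4200))
        session.commit()
        mine = session.scalars(select(Order).where(Order.customer_id == 1)).all()

        # The "little bit complicated" reporting query: hand-coded SQL
        top = session.execute(text(
            "SELECT customer_id, SUM(total_cents) AS spend "
            "FROM orders GROUP BY customer_id "
            "ORDER BY spend DESC LIMIT 10"
        )).all()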

jdkoeck

One of the things I love most about AI coding tools is how they make it easier than ever to move away from complex, fragile abstractions like ORMs. The main reason we’ve often avoided writing raw SQL repositories by hand is that it’s tedious, but now, much of that boilerplate can be generated by AI.

More broadly, wherever you might have relied on a heavyweight dependency, you can often replace it with AI-generated code tailored to the task at hand. You might think this would increase the maintenance burden, but au contraire: reviewing and taking ownership of AI-generated code is often simpler and more sustainable in the long term than dealing with complex libraries you don’t fully control.

Aurornis

> It feels like my AI agents are stuck somewhere circa ~2021. If I install newer packages or more recent versions, Claude will often revert to outdated packages/implementations that were popular four years ago.

My experience is the same, though the exact dates differ.

I assume LLMs gravitate toward solutions that are most represented in their training material. It's hard to keep them pulled toward newer versions without explicitly mentioning it all the time.

spwa4

But they can't really be low-latency if they have to search for new versions ... and that makes such a big difference in how usable they are.

jb_briant

While you've given process feedback, here is my emotion-related one. When I dev with an LLM, I don't face my own limits in terms of reasoning and architecture; I face the limits of the model in interpreting prompts. Instead of trying to be a better engineer, I'm frustratingly prompting an unintelligent human-like interface.

I'm not FUDing LLMs; I use them every day, all the time. But they won't make me a better engineer. And I deeply believe that becoming a good engineer helped me become a better human, because of how the job makes you face your own limits and trains you to be humble and constantly learning.

Vibe coding won't lead to that same and sane mindset.

jgilias

I feel like this is the big question now. How to find the correct balance that lets you preserve and improve your skills. I haven’t yet found the answer, but I feel like it should be in being very deliberate about what you let the agent do, and what kind of work you do in the “artisanal” way, so not even AI-enabled auto-complete.

But in order to be able to find the right balance, one does need to learn fully what the agent can do, and have a lot of experience with that way of coding. Otherwise the mental model is wrong.

jb_briant

Agreed. Auto-complete on steroids isn't what I call using an LLM for coding. It's just a convenience.

What I do is write pure functions with the LLM. Once I've designed the software and I have the API, I can tell the model to write a function that does a specific job where I know the inputs and outputs but am too lazy to write the code itself.
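A toy illustration of that kind of handoff (the function here is invented): the signature, docstring, and doctest examples are the spec you write; the body is the part you'd delegate.

    def dedupe_preserve_order(items: list[str]) -> list[str]:
        """Return items with duplicates removed, keeping first occurrences.

        >>> dedupe_preserve_order(["b", "a", "b", "c", "a"])
        ['b', 'a', 'c']
        """
        seen: set[str] = set()
        out: list[str] = []
        for item in items:
            if item not in seen:
                seen.add(item)
                out.append(item)
        return out

Because the function is pure and the examples double as doctests, checking the model's output is mechanical.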

liveoneggs

it will make you a worse thinker

petesergeant

> Spot on. The current environment feels like the self-driving car hype cycle. There have been a lot of bold promises (and genuine advances), but I don't see a world in the next 5 years where AI writes useful software by itself.

My worry is that we get an overnight sea-change like 4o image generation. The current tools aren't good enough for anything other than one-shotting, and then suddenly overnight, they're good enough to put a lot of people out of work.

impjohn

Hm. The 'prompt obedience' was definitely a huge step up, but there's a huge number of acceptable results for an image-generation prompt, while for coding there are usually only a handful, and many times just one right solution. So I don't think the parallel is telling here.

prisenco

#1 is going to be an issue until we have another breakthrough or genuinely innovative approach.

We all know that 2 years is a lifetime in tech (for better or for worse), and we've all trained ourselves to keep up with a rapidly changing industry in a way that's more efficient than fully retraining a model with considerably more novel data.

For instance, enough people have started to move away from React for more innovative or standards-based approaches. HTML and CSS alone have come a long way since 2013 when React was a huge leap forward. But while those of us doing the development might have that realization, the training data won't reflect that for a good amount of time. So until then, trying to build a non-React approach will involve wrestling with the LLM until the point when the model has caught up.

At which point, we will likely still be ahead of the curve in terms of the solutions it provides.

beepbooptheory

But doesn't this just ultimately "solve" itself? If everybody is going to use LLMs like this, they will keep using React longer, and 2 years is not going to feel like that much of a lifetime anymore. How would a developer inside an AI ecosystem like this even know or care about new frameworks or new ways of doing things?

aprilthird2021

No, because big companies see far more value in actually innovating and producing better technology (see ByteDance with their new cross-platform mobile framework Lynx) that speeds things up and provides a better user experience at their scale than they would save in developer salaries by just using whatever an AI is most familiar with.

aprilthird2021

> A few months ago, I used Claude to "one-shot" a genuinely useful web app. It was fully featured and surprisingly polished. Alone, I think it would've taken a couple weeks or weekends to build. But, when I asked it to update the favicon using a provided file, it spun uselessly for an hour (I eventually did it myself in a couple minutes).

This reminds me of when cross-platform was becoming big for mobile apps and all of us new app developers would put up templates on GitHub which gave a great base to start from, but you quickly realized you'd have to change a lot of it for your use case anyway.

usrbinbash

Here is how I use it: As a writing assistant that lives in my IDE, and as a very very cool and sophisticated rubber duck that can answer me.

Something I do quite a lot is throwing back and forth a discussion over a particular piece of code, usually provided with little to no context (because that's my task to worry about), hammering it until we get that functionality correct, then presenting it with broader context to fit it in (or I simply do that part by hand).

Here is how I don't use it: as an agent that gets broad goals it is supposed to fulfill on its own.

Why? Because the time and effort I have to invest to ensure that the output of an agentic system is in line with what I'm actually trying to accomplish is simply too much, for all the reasons outlined in this excellent article.

Ironically, this is even more true since using AI as an incredibly capable writing assistant already speeds up my workflow considerably. So in a way, less agentic AI empowers me in a way that makes me more critical of the additional time I'd have to invest to work around the quirks of agentic AI.

kubanczyk

This doesn't parse for me:

> a discussion over a particular piece of code [...] hammering it until we get that functionality correct

Care to provide an example of sorts?

> then presenting it with broader context to fit it in

So after you have a function you might convert it to a method of a class. Stuff like that?

usrbinbash

> Care to provide an example of sorts?

For example, recently I needed to revise some code I wrote a few years back, re-implementing a caching mechanism to make it work across networked instances of the same software. I had a rough idea how I wanted to do that, and used an LLM to flesh out the idea. The conversation starts with an instruction that I don't want any code written until I ask for it, then I describe the problem itself, let it list the key points, and then present my solution (all still as prose, no code so far).

Next step, I ask for its comments, and patterns/implementation details how to do that, as well as alternatives to those. This is the "design phase" of the conversation.

Once we zoom in on a concrete solution, I ask it to produce minimal example code of what we discussed. Then we repeat the same process, this time discussing details of the code itself. During that phase I tell it which parts of the implementation it doesn't need to worry about and what to focus on, keeping it from going off to mock-up wonderland.

At the end it usually gets an instruction like "alright, please write out the code implementing what we discussed so far, in the same context as before".

This gives me a starting point to work from. If the solution is fairly small, I might then give it some of the context this code will live in and ask it to "fill in the blanks", as it were... often, though, I do that part myself, as it's mostly small refactoring and renaming.

What I find so useful about this workflow, as opposed to just throwing the thing at my project directory, is that it prevents the AI from getting sidetracked, lost as it were, in some detail, endlessly chasing its own tail trying to make sense of some compiler error. The human in the loop (yours truly) sets the stage and presents the focus, and the starting point is no code at all, just an ephemeral description of a problem and a discussion about it, grounding all the later steps of the interaction.

Hope that makes sense.

myflash13

Developer skill is obviously still essential: you can't steer if you can't drive. But what about developer energy? Before AI, I could only code about 2 hours per day (actual time spent writing code), but with Claude Code I can easily code for 5 hours straight without breaking a sweat. It feels like riding an e-bike instead of a bicycle. AI genuinely fits Steve Jobs' analogy of the bicycle for the mind: it doesn't replace me, but now I can go much farther and faster.

_puk

Love the ebike analogy!

You're right though: you need to be able to steer, but you don't necessarily need to be able to read a map.

Case in point: I recently stood up my first project on Supabase, and Cursor happily created the tables, secure RLS rules, etc. in a fraction of the time it would have taken me.

To stop it turning into spaghetti, I had to add a rule: "I'm developing a first version - add everything to an SQL file that tears down and recreates everything cleanly".

This prevented hundreds of migration files from being created, allowed me to retain context, and let me ask every now and then, "have you just made my database insecure?", which 50:50 resulted in me learning something, or a "whoopsie, let me sort that".

If I wasn't aware of this then it's highly likely my project would be full of holes.

Maybe it still is, but ignorance is bliss (and 3 different LLMs can't be wrong, can they?!)

Smar

I wonder how much more common vulnerabilities will be in coming years...

thom

Yeah this feels right. It helps with the bits I care about and definitely reduces friction there, but it also breaks through every wall when I’m off the happy path. I had to solve three layers of AWS bullshit yesterday to get back to training a model and if I’d had to solve them myself or bring in someone from my platform team I’d have ended up stopping short. I don’t really want to replace the parts of my job that I enjoy with AI, but I love having something to pick up the miserable bits when I’m stuck, confused, frustrated or bored. You’re absolutely right that this helps conserve energy.

senbrow

The ebike analogy is perfect!

You still have to pedal, steer, and balance, but you're much faster overall.

codr7

I don't get it, at all.

Why are experienced developers so enthusiastic about chaining themselves to such an obviously crappy and unfulfilling experience?

I like writing code and figuring stuff out, that's why I chose a career in software development in the first place.

quest88

I don't enjoy the keypresses of building useful features. I like identifying what needs to be changed, and how, in abstract terms. These tools quickly help me verify those changes are right.

If I need to call the VideoService to fetch some data, I don't want to spend time writing that and the tests that come with it. I'd rather outsource that part.


codr7

I don't object to abstracting at all, or to reducing labor in general.

But this method of getting there makes me feel like I'm degraded to being the assistant and the machine is pulling my strings; and as a result I become dumber the more I do it, more dependent on crap tech.

marcellus23

I can imagine people making the same argument decades ago about higher-level languages being "crappy and unfulfilling" compared to writing assembly. After all, you're not even writing the instructions yourself, you're just describing what you want and the computer figures out what to do.

codr7

The (pretty substantial) difference is that now the computer is telling you what to do.

marcellus23

In what way?

_acco

Why do we use hotkeys and snippets?

There is a lot of tedium in software development and these tools help alleviate it.

wrasee

If all these tools do is wipe out all the tedium in software then we are all laughing. Happy days to come.

There are tools and then there are the people holding the tools. The problem is no-one really knows which one AI is going to be.

6thbit

At some point in your career you realize every line of code you write is not just an asset—it's also a liability that needs to be maintained, debugged, and understood by others.

The excitement around AI coding tools isn't about chaining yourself to a crappy experience — it's about having support to offload cognitive overhead, reduce boilerplate, and help spot potential missteps early.

Sure, the current gen of AI isn't quite there yet, but it can lighten the load, leaving more space for solving interesting problems, architecting elegant solutions, and "figuring stuff out".

abenga

I really don't understand how replacing writing N lines of code by reading N lines of code reduces mental load. Reading and understanding code is generally harder than writing equivalent code.

6thbit

You'd still have to review code if you asked another human to write it.

That's why minimizing the generated code is important, as well as working on smaller parts at once to avoid what the author refers to as "too much up-front work". It is also easier mentally when you can iterate on this whole process in seconds rather than days in a pull request review.

malyk

Because you'll be replaced by those engineers in N months/years when they can outperform you because they are wizards with the new tools.

It's like failing to adopt compiled code and sticking to punch cards. Or like refusing to use open source libraries and writing everything yourself. Or deciding that using the internet isn't useful.

Yes, developing as a craft is probably more fulfilling. But if you want it to be a career you have to adapt. Do the crafting on your own time. Employers won't pay you for it.

codr7

Let them replace me then, it's not a job I feel like doing anyway.

And when they have forgotten all about how to actually write software, the market is mine.

elric

Instead of writing code, you can now figure out the code that your AI wrote and effectively treat everything as a legacy system.

codr7

How is correcting someone else's solutions even close to as fulfilling as creating your own? How can you not lose something by choosing that method?

bradlys

> I like writing code and figuring stuff out, that's why I chose a career in software development in the first place.

That's not why I got into software development. I got into it to make money. I think most people in Silicon Valley these days have the same mentality. How else could you tolerate the level of abuse you experience in the workplace, and how little time you get to really dig into that particular aspect of the job?

This is a website that is catered to YC/Silicon Valley. My perspective is going to be common here.

codr7

I guess there were always two kinds. Bill Gates doesn't strike me as a person who loves technology, compared to, say, Dennis Ritchie or Wozniak.

I'm firmly in the problem solver/hacker/artist camp.

Which I guess is why we're more concerned about the current direction. Because we value those aspects more than anything; consider them essential to creating great software/technology, to staying human; and that's exactly what GenAI takes away.

I see how not giving a crap about anything but money means you don't see many problems with GenAI.

meowface

Because on net it often saves tons and tons of time and effort if you know how to use it.

Luddism is a strange philosophy for a software engineer.

usrbinbash

> Luddism is a strange philosophy for a software engineer.

Luddism and critically evaluating the net benefit and cost of a piece of tech, are 2 very different things.

And the latter is not strange for a SWE at all, in fact I'd say it's an essential skill.

meowface

I'm referring to this part:

>I like writing code and figuring stuff out

This is an alien mentality to me.

FfejL

> Example: When encountering a memory error during a Docker build, it increased the memory settings rather than questioning why so much memory was used in the first place.

AI really is just like us!

jvanderbot

I'm surprised not to see an obvious one (for me): use AI around the periphery.

There's very often a heap of dev tools, introspection, logging, conversion, etc. tools that need to be built and maintained. I've had a lot of luck using agents to make and fix these. For example, a tool that collates data and logs in a bespoke planning system.

Building these tools is a lot of generated boilerplate off the critical path, and I just don't want to do it most days.

aulin

In my everyday experience, that's pretty risky. The periphery, as you call it, is often an area where you lack the expertise to spot and correct AI mistakes.

I am thinking about build systems and shell scripts. I see people every day going to AI before even looking at the docs, and invariably failing with non-existent command-line options or, worse, options that break things in very subtle ways.

The same people who, when you ask them why they don't read the f-ing man page, go to Google to look it up instead of opening a terminal.

The same people who push through an unknown problem by trial and error instead of reading the docs first. But now they have this dumb counselor that steers them in the wrong direction most of the time, and the whole process is even more error-prone.

jvanderbot

You're wrong. I have all the expertise but none of the time to generate 100s of lines of boilerplate API calls to get the data together, and no interest in formatting it correctly for consumption, let alone doing so statefully to allow interaction. These are trivial problems to solve that are highly tedious and do not affect the business at hand whatsoever. Perfect drudgery for automation, and just scanning the result makes it easy to verify the output or code.

skydhash

> I have all the expertise but none of the time to generate 100s of lines of boilerplate API calls to get the data together, and no interest in formatting it correctly for consumption

Time to learn some Emacs/Vim and Awk/Perl.

aulin

I am not wrong. You simply are not the kind of developer I am thinking about. And believe me, the other kind is way more represented.

timdellinger

I find that I have to steer the AI a lot, but I'm optimistic that better prompting will lead to better agents.

To take an example from the article: code reuse. When I'm writing code, I subconsciously have a mental inventory of what code is already there, and I'm subconsciously asking myself, "hey, is this new task super similar to something we already have working (and tested!) code for?". I haven't looked into the details of the initial prompt that a coding agent gets, but my intuition is that it would help to add an instruction telling the agent to keep an inventory of what's in the codebase and, when planning out a new batch of code, to check the requirements of the new task against what's already there.

Yes, this adds a bunch of compute cycles to the planning process, but we should be honest and say "that's just the price of an agent writing code". Better planning > ability to fix things.
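A toy sketch of what that inventory step might look like (speculative, not any agent's actual prompt): walk the repo, list the functions and classes per file, and prepend the result to the planning prompt.

    import ast
    import pathlib

    def inventory(root: str) -> str:
        lines = []
        for path in sorted(pathlib.Path(root).rglob("*.py")):
            try:
                tree = ast.parse(path.read_text(encoding="utf-8"))
            except SyntaxError:
                continue
            defs = (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)
            names = [n.name for n in ast.walk(tree) if isinstance(n, defs)]
            if names:
                lines.append(f"{path}: {', '.join(names)}")
        return "\n".join(lines)

    # Prepend to the planning prompt, e.g.:
    # "Existing code inventory:\n" + inventory("src") +
    # "\nBefore writing new code, reuse anything above that fits."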

hnuser123456

There are certain pieces of text that appear right before some of the greatest pieces of code ever written. For example, we've all heard of NASA code requirements. If you get the LLM into the "mindset" of a top-tier professional developer before getting it to spit out code, the code quality will reflect that. If your prompt is sloppy and poorly defined, you'll get copy-pasted StackOverflow code, since that's how most SO questions look. If it's stupid but it works, it's not stupid.

The hard part is that finding a local optimum for prompting style for one LLM may or may not transfer to another depending on personality post-training.

And whatever style works best with all LLMs must be approaching some kind of optimum for using English to design and specify computer programs. We cannot have better programs without better program specifications.

yaj54

Can you share some examples of these certain pieces of text and greatest pieces of code?

hnuser123456

Well, if you want safety-critical code, you could have the LLM read this before asking it to write its own code: https://ieeexplore.ieee.org/document/1642624

GP was pondering about code re-use. My typical use involves giving an entire file to the LLM and asking the LLM to give the entire file back implementing requested changes, so that it's forced to keep the full text in context and can't get too off-track by focusing on small sections of code when related changes might be needed in other parts of the file.
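A sketch of that whole-file loop (call_llm is a hypothetical stand-in for whichever chat API is actually being used):

    from typing import Callable

    def request_full_file_edit(path: str, change: str,
                               call_llm: Callable[[str], str]) -> str:
        # Send the complete file so related sections stay in context
        with open(path, encoding="utf-8") as f:
            source = f.read()
        prompt = (
            f"Here is the entire file {path}:\n\n{source}\n\n"
            f"Apply this change: {change}\n"
            "Return the COMPLETE updated file, not a fragment, so that "
            "related changes elsewhere in the file are not missed."
        )
        return call_llm(prompt)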

I think all of this is getting at the fact that an LLM won't spit out perfect code in response to a lazy prompt unless it's been highly post-trained to "reinterpret" sloppy prompts just as academically as academic prompts. Just like with a human programmer, you can give them project descriptions, wait for the deliverable, and accept it at face value, or you can join them along their journey and verify their work meets the standards you want. And sometimes there is no other way to get a hard project done.

Conversely, sometimes you can give very detailed specifications and the LLM will just ignore part of them over and over. Hopefully the training experts can continue to improve that.

skydhash

This is one of the reasons I never needed to use LLMs. In any given codebase where you're experienced enough in the language and the framework/platform/libraries, more often than not, you're just copy-pasting code, or tab-completing (if you're in an IDE). The actual problems are more often solved on the sofa and with a lot of reading, then trying out hypothetical solutions.

osigurdson

Is Martin Fowler now just renting out space on his website?

Cupprum

Martin Fowler's page is basically the place where Thoughtworks [1] publishes their articles. A lot of good stuff there on many different topics.

1: https://www.thoughtworks.com/

onionbagle

Right!? It was a little misleading that the article was written by someone else (Birgitta Böckeler).

nikolayasdf123

Yeah, confusing. The URL literally says martinfowler, and the article is written by someone else. What... :/

Apocryphon

What are you referring to?

Jtsummers

GP was apparently unaware until now that martinfowler.com has basically been a blog/article hosting site (though more in the highly curated sense than a generic hosting site, more akin to a trade publication) for the last couple decades and that not all the content is written by Martin Fowler himself. The author of this piece is Birgitta Böckeler.

bitwize

Kinda like rogerebert.com, even before the esteemed film critic's death.

adamgordonbell

    Lack of reuse
    AI-generated code sometimes lacks modularity, making it difficult to apply the same approach elsewhere in the application.

    Example: Not realising that a UI component is already implemented elsewhere, and therefore creating duplicate code.

    Example: Use of inline CSS styles instead of CSS classes and variables

This is the big one I hit for sure. I think it's a problem with agentic RAG, where the agent only knows the files it has looked at, not the overall structure or where to look for things, so it just recreates them.

nikolayasdf123

Yep, noticed this too, but I think it's due to the short context LLMs have. Right now my tools (mostly copy-pasting code into browser LLMs) analyse one small sub-problem at a time. Once LLMs can see the whole repository plus the history of that repository, they will likely write abstractions that can be reused.

svilen_dobrev

I have mentored a few people to become programmers. Some for months, some for years. It's like teaching someone to ride a bicycle: hand-holding first, then hand-guiding, then short flights, then longer... Different people pick things up at a different pace and in different ways... but they do learn, if they want to.

What I completely miss in these LLM parrots-agents-generators is the learning. You can't teach them anything; they will not remember. Tabula rasa, a clean slate, every time. They may cite Shakespeare, or whatever code was scraped from GitHub, and concoct it to unrecognizability, but that's it. Hard rules or guardrails for every little thing are unsustainable to keep (and/or create); expert systems and rule-based no-code/low-code have been unsuccessful for decades.

Maybe, next AI wave.

And there's no understanding. But that also applies to quite a few people :/

ebiester

Consider rules for projects. It's not always perfect, but it does adapt based on my instructions.

For example, I have had good success with test-first development as a rule. That means I can make sure it has the specifications correct first.

owebmaster

LLMs don't learn but agents do. You just need to insert that new knowledge in the prompt.

achierius

Agents are still LLMs. LLMs also have prompts. There's no difference!

trash_cat

Agents HAVE Models. There is a big difference if you give a model access to tools and perhaps some form of memory.

owebmaster

LLMs don't have prompts; you can use prompts to query LLMs.

Agents are a mix of models, prompts, RAG, and an event loop.
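A toy sketch of that framing (call_llm and the single tool here are hypothetical stand-ins, not a real library's API): the event loop is what feeds tool results, i.e. the "new knowledge", back into the prompt.

    def call_llm(messages: list[dict]) -> dict:
        raise NotImplementedError  # plug in your model API of choice

    TOOLS = {"read_file": lambda path: open(path, encoding="utf-8").read()}

    def run_agent(goal: str, max_steps: int = 10) -> str:
        messages = [{"role": "user", "content": goal}]
        for _ in range(max_steps):
            reply = call_llm(messages)  # model answers or requests a tool
            if "tool" in reply:
                result = TOOLS[reply["tool"]](**reply["args"])
                # The loop inserts the new knowledge into the prompt
                messages.append({"role": "tool", "content": result})
            else:
                return reply["content"]
        return "step budget exhausted"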

cheevly

Literally all of this is easy to solve if you actually try. Developers are literally too lazy to write their own AI tooling; it's bizarre to me.

danmur

The goal isn't to use AI, though, it's to be productive. Maybe for you AI + writing support tools to improve your workflow makes you more productive and that's great! For me, for the kind of work I'm currently doing, I'm more productive in other ways.

ferguess_k

I don't really like AI in the IDE. I don't want it to think for me. Code completion and IntelliSense are good enough.

That said, I think there are 3 items that are important:

- Quickly grasp a new framework or a new language. People might expect you to do so because of AI's help. Two weeks might be the maximum, instead of the minimum. The same goes for juniors.

- Focus on the really important things. So instead of trying to memorize a shell script you are going to use a couple of times per year, maybe use the time to learn something more fundamental. You can also use AI to help bootstrap the learning. If you need something for interviews, spend a week memorizing it.

- Be willing to exclude AI from your thought process. If you rely on AI for everything, including algorithms and designs, it might impact your understanding.

all2

    - Be willing to exclude AI from your thought process. If you rely on AI for everything, including algorithms and designs, it might impact your understanding.

Most of the time I'm using AI for problem-space mapping (I'm doing dirt-simple CRUD dev right now) and decomposition. It's OK at that, but even the deep research mode of Claude leaves some things to be desired.

I feel like an editor now, more than an engineer. I know the kinds of things I'm looking for, and I use AI to walk a solution in. Either I use the output of the LLM as-is (for throwaway stuff) or I use it as a jumping off point for my own work _without_ the AI.

ferguess_k

>I feel like an editor now, more than an engineer. I know the kinds of things I'm looking for, and I use AI to walk a solution in.

I agree. I think it's fine to do so. I usually prefer to write my code without AI (except for bootstrapping it).

In my work as a DE, I mostly use AI to write scripts for me. For example: how do I do this in PySpark? I kind of refuse to memorize any of these because I'm simply not very interested, and I can always spend a week memorizing the fundamentals if I need to.

In my side projects, I use AI extensively. Same as you, I use AI for problem-space mapping, or sort of. For example: I have some source code, how do I structure it better? I have read the MIDI standard and thought this piece of binary code means blah, can you please confirm? Well, AI is OK for these kinds of work.

tomrod

For me I have two extremes:

1. Complete Vibe coding -- greenfield and just playing or doing a small quick prototype

2. Get out of my IDE, but know my code base -- small snippets, methods, classes that add desired functionality but completely off to the side -- and don't run things in my shell; that's just wrong.

What I don't like is current codebases getting whacked because it decided to downgrade a dependency or lie about a function signature.

giantg2

Even code completion has issues. It might get the structure right, but it usually doesn't understand the business logic, and I end up switching out which codes/vars are being used.

ferguess_k

It's definitely possible. But in my case so far it's fine. I work as a DE, so I only need the auto-completion to remind me what the column name is once I've typed the first few characters, because there are so many columns.

In my side projects I mostly use C/C++ so auto-completion helps me to find a struct member or something similar.

I guess it can become quite complicated when the project becomes very large.

jillesvangurp

I use LLMs for various purposes in day-to-day development. I don't use any of the tools mentioned in the article, because I'm using IntelliJ and don't want to replace a tool that has lots of stuff I use all the time. But aside from that, it's good advice and matches my experience.

I've dabbled with plugins for IntelliJ but wasn't really happy with those. But ever since ChatGPT for desktop started interfacing directly with JetBrains products (and VS Code as well), that's been my go-to tool. I realized that I like being able to pull it up with a simple keybinding, and it auto-connects to the IDE when I do. I don't need to replace my tools, and I get AI support ready to go. Most of the existing plugins seem to insist on some crappy autocomplete, which is a bit of an anti-feature in a tool that already offers a lot of autocomplete. I don't need Clippy-style autocomplete.

What matters here is the tool integration, not the model quality. Better tool integration means better prompts with less work and getting better answers that way.

Example: I run a test, it fails with some output. I had this yesterday. So I asked, "why is this failing" and had a short discussion about what could be wrong. No need for me to specify any detail; all extracted from the IDE. We ticked off a few possible causes, I excluded them. And then it noticed a subtle change in the log messages that I had not noticed (a co-routine context switch) that turned out to be the root cause.

That kind of open ended debugging is a bit of a mixed bag. Sometimes it finds stuff. Mostly it just starts proposing solutions based on a poor analysis of the problem.

What works pretty reliably is:

- address the TODOs / FIXMEs, especially if you give it some examples of what you expect

- write documentation (very good for this)

- evaluate if I covered all the edge cases (often finds stuff I want to fix)

- simple code transformations (rewrite this using framework X instead of Y)

I don't trust it blindly. But it's generally giving me good code and feedback. And I get to outsource a lot of the boring crap.

marstall

Yes, I find ChatGPT + JetBrains (RubyMine in my case) is the most usable setup I've encountered.

It's like RubyMine is "home" for me, and ChatGPT's macOS client has become another "home", so it's quite convenient that they talk to each other now.

I have a little FOMO about Cursor, though. ChatGPT will automatically apply its suggested changes in my open editor, but I have the sense Cursor will do a bit more? Apply changes to multiple files? And have knowledge of your whole project, not just open files? Can someone fill me in?

nikolayasdf123

This. Very similar experience, but my toolset is rather different.