Learn to code, ignore AI, then use AI to code even better
149 comments
March 28, 2025 · nkrisc
kshacker
I have played around with AI code from time to time. I don't code routinely, but I have pet personal projects that let me write some code, and that's where I experimented.
Rule number 1 and the only rule: You need to be a subject matter expert. Be it program logic or be it the programming language. AI is only a helper; it will go wrong, frequently, and if you don't understand the reason for the code and the programming language, you will take much more time than if you hadn't used the AI at all.
Without naming the IDE (one of the top 3, I'd guess), I asked it to simplify some code. I had a block repeated 8 times. 6 copies were identical; the last 2 had a variation. The AI just did not catch it, and refactored all 8 blocks to use the logic of the first. How can you even do that? The code is similar but different; it looks the same, but there are extra lines of code in the last 2 blocks!
And it took me a while to realize this. I never ingest AI code directly, so at first I was marveling at a job well done, and as I read and compared: the horror! That was not the first time it happened, but once again the soft-spoken, well-mannered AI tricked me into believing it had done a fantastic job when it had not.
Edit: It is just an assistant. You give it a task, it will make a mistake, you tell it to fix the mistake, it will fix the mistake. It still saves you time. Next day, it will make the same mistake - and hopefully that gets reduced as the versions evolve.
theshrike79
AI is excellent for tasks you know how to do, but can't be arsed to spend the time.
Example: I wanted a tool that notifies me of replies or upvotes to my recent Hacker News comments. Grok3 did it in 133 seconds with Think mode enabled. Total time, including giving it the example HTML as an attachment, writing the specs, pasting the response to a file, and running it? About 5 minutes.
I know perfectly well how to do it myself, but do I want to spend the hour or so to write all the boilerplate for managing state and grabbing HTML and figuring out the correct html elements to poll? Fuck no.
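The state-tracking half of a tool like that is genuinely small. A sketch of that part only; the file name, function names, and the idea of fetching IDs from somewhere like the HN Algolia search API are my assumptions, not the actual generated tool:

```python
import json
from pathlib import Path

STATE_FILE = Path("hn_seen.json")  # hypothetical location for previously-seen reply IDs

def load_seen(path=STATE_FILE):
    """Return the set of reply IDs we have already notified about."""
    if path.exists():
        return set(json.loads(path.read_text()))
    return set()

def save_seen(ids, path=STATE_FILE):
    """Persist the seen IDs so the next run only reports new activity."""
    path.write_text(json.dumps(sorted(ids)))

def new_replies(current_ids, seen_ids):
    """IDs present now that we have not seen before, oldest-ID first."""
    return sorted(set(current_ids) - set(seen_ids))
```

The fetching side (grabbing the HTML or hitting an API, then extracting `current_ids`) is the boilerplate being delegated to the model; the diff-against-saved-state loop above is the whole "managing state" part.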
mirsadm
Yes, I find this to be its best use case. Unfortunately, for anything I actually need help with, the results are often terrible.
ElectricalUnion
From my experience using AI, if you don't write a really precise description of your initial requirements in the initial prompt, and it doesn't one-shot the answer, I don't bother asking it to fix the mistake.
Unless you're using an LLM with a really long context, some context loss is bound to happen sooner or later; when that happens, pointing out errors that have dropped out of the context will just produce repeated or garbage output.
theshrike79
This is why platforms(?) like Claude Code, Cursor and Windsurf are essential.
Claude Code forces you to create a CLAUDE.md to direct how it works, with Cursor you can (and should) write Cursor Rules.
The difference with a good spec + AI vs just vibe coding from scratch is like night and day.
derekp7
What I do is go back through the conversation history, select the response with the somewhat-working code, then submit a prompt with what I want changed. Selectively including context, adjusting temperature and top_p/top_k, and sometimes swapping the model or system prompt for a given query gives better results. Combine this with repeating the query multiple times with that same context, then select the best result and move on.
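That loop can be sketched in a few lines. Here `generate` is a stub standing in for whatever LLM API is in use, and the particular parameter sweep and scoring function are illustrative, not derekp7's actual settings:

```python
import random

def generate(prompt, model, temperature, top_p, seed):
    """Stand-in for a real LLM call; any chat-completion API that
    accepts temperature/top_p parameters can be dropped in here."""
    random.seed(seed)
    return f"{model}(T={temperature}, top_p={top_p}): candidate {random.randint(0, 9)} for {prompt!r}"

def best_of_n(prompt, n=3, score=len):
    """Re-run the same prompt with varied sampling parameters and
    keep the candidate the scoring function likes best."""
    candidates = [
        generate(prompt, model="model-a",
                 temperature=round(0.2 + 0.3 * i, 2), top_p=0.9, seed=i)
        for i in range(n)
    ]
    return max(candidates, key=score)
```

In practice the "scoring" step is usually a human picking the best candidate, as the comment describes; the point of the sketch is that re-sampling with varied parameters against a fixed, hand-picked context is cheap to automate.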
nkrisc
I like to think of AI as a force (or skill) multiplier. If you’re low skill, it doesn’t do much. The higher your skill, the more useful it is.
chii
That hasn't been my experience, nor that of others using the AI.
It's a force constant, rather than a multiplier. If you're low-skilled and ask it to do a low-skill task, it works fine. If you're high-skilled and ask it to do the low-skill task, you save a tiny bit of time (less than the low-skilled person does).
But it cannot do a high skilled task (at least, not right now). It can pretend, which can lead the low skilled person astray (but not the high skilled person).
Therefore, all AI does is raise the floor of what is achievable by the laymen, rather than multiply the productivity of a high skilled programmer.
NitpickLawyer
> Rule number 1 and the only rule: You need to be a subject matter expert.
Strong disagree. I've been coding for 25+ years, but never on the front-end side. I couldn't write JS with pen and paper with a gun to my head. But I know what to ask and how to make sure a react component does what I want it to do, with these tools.
latexr
Or, in other words, you are a subject matter expert and are agreeing with the person you’re responding to. Quote the full argument (emphasis added):
> Rule number 1 and the only rule: You need to be a subject matter expert. Be it program logic or be it programming language.
You are a subject matter expert in program logic, just not in the programming language. You are supporting their point, not disagreeing.
arijo
How many thousands of lines does your AI-generated frontend code have?
Do you have to maintain the code?
nkrisc
> But I know what to ask and how to make sure a react component does what I want it to do
That is the subject matter expertise that many lack.
adriand
Knowing how to code, and having a lot of experience and an "intuitive" sense of what is a good idea and what is a bad idea, also puts you in a position to question the advice the AI gives you. Just now I was asking Claude to help me with an issue with a React component and it told me to add useEffect with a timer. I am not a React expert, but that immediately felt like a code smell to me, so I followed up:
> is it weird or an anti-pattern to use a timer like this?
The response:
> Yes, using a timer like this is generally considered an anti-pattern in React for several reasons: It introduces non-deterministic behavior (timing-dependent code), It's a workaround rather than addressing the root cause, It can be brittle and lead to race conditions.
I'm sure all those things are true. This is a classic example of the problem with people using AI programming tools but lacking a real understanding of what they're doing. They don't know enough to question the advice they're getting, let alone properly review the code it's generating.
The other day, in a Rails app, Claude generated a bunch of code that spawned various threads to accomplish certain things I needed to do asynchronously. Maybe these days, in Ruby 3 and Rails 8, this is safe. But I remember that back in the Rails 2 days, going off and spawning new threads was not a good idea. Plus, I have a back-end async job processor already set up. Again, I questioned the approach. The revised code I got back was a lot simpler, and once I'd reviewed and tested it, I (mostly) used it as-is.
sharemywin
that's the thing: if you're inquisitive and have an interest in learning things, then you can still go far with AI coding. Can you explain why this code works?
is this the best way to do it or are there other solutions? what are the pros and cons?
are there security problems with this? how could I make this code more secure?
what are some things I should look out for with AI coding(meta question)?
what does this error mean?
just talking back and forth with the AI on the phone, you can get a high-level understanding of a topic pretty quickly, and it's way more in-depth and personalized than a tutorial on the internet.
nkrisc
> inquisitive and have an interest in learning things
Traits much less common than you might think among people who want to get into programming.
sharemywin
and that's just about any topic.
bob1029
> Much of the time the problem is simply an invented method name that doesn’t exist
I spent a solid 2 hours yesterday trying to get an SSDP protocol implementation going because the LLM was absolutely insistent upon using 3rd party libraries that don't exist and UDP client methods defined in Narnia. I had to spoon feed it half-way attempts before I could get it to budge on useful code. This was all before I realized we had a problem with multicast group membership and multiple network adapters.
These models definitely can help (I wouldn't have gotten as far as I did without one), but you need to know what you want every step of the way. Having mere "vibes" about a sophisticated end result will result in unhappy outcomes. I think the model would have made my life much worse if I wasn't as cynical and suspicious regarding every aspect of its operation. I can see how these models would steal learning opportunities from more novice developers. Breaking out Wireshark is the sort of desperation that only arises when you can't constantly ping some rubber duck for shreds of hope (or once you realize there is no hope).
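For reference, the multicast setup mentioned above is exactly the kind of detail LLMs tend to hand-wave. A minimal SSDP discovery sketch using only the standard library; the M-SEARCH format follows the UPnP spec, the `IP_MULTICAST_IF` call is the multi-adapter fix, and error handling is omitted:

```python
import socket

SSDP_ADDR, SSDP_PORT = "239.255.255.250", 1900

def build_msearch(st="ssdp:all", mx=2):
    """Standard SSDP discovery request, per the UPnP Device Architecture."""
    return (
        "M-SEARCH * HTTP/1.1\r\n"
        f"HOST: {SSDP_ADDR}:{SSDP_PORT}\r\n"
        'MAN: "ssdp:discover"\r\n'
        f"MX: {mx}\r\n"
        f"ST: {st}\r\n"
        "\r\n"
    ).encode("ascii")

def discover(iface_ip, timeout=3):
    """Send M-SEARCH out a *specific* adapter. On multi-homed hosts the
    OS may otherwise route multicast out the wrong interface."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_IF,
                    socket.inet_aton(iface_ip))
    sock.settimeout(timeout)
    sock.sendto(build_msearch(), (SSDP_ADDR, SSDP_PORT))
    responses = []
    try:
        while True:
            data, addr = sock.recvfrom(65507)
            responses.append((addr, data))
    except socket.timeout:
        return responses
```

No third-party libraries required, which is exactly the point: the invented dependencies were solving a problem the stdlib already covers.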
bluGill
I gave up on AI because of that. The old IDEs that use an AST for autocomplete still exist and work very well, letting me hit tab and get the correct function filled in. They are also very good at the little pop-up that tells me what the parameter I'm trying to fill in really is; the AI has no clue what order the arguments go in, and so often gets it wrong. They won't complete 1000 lines of code, but that is only rarely a savings: most 1000-line snippets I've worked with are just as fast to write myself (I've been programming for 30 years) as to figure out where the AI got some details wrong.
If the AI had access to the AST and could know what functions exist, it might be helpful. Then it could write the function it wished existed, if it doesn't. However, that means it would need to understand the code, not just the structure.
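A rough sketch of the idea: use the AST to inventory which functions actually exist, and flag calls to names that don't. This toy version (function names are mine) only covers simple module-level names, not methods, imports, or attribute calls:

```python
import ast
import builtins

def defined_functions(source):
    """Names of every function or method defined in the source; a cheap
    ground truth to check against AI-invented names."""
    return {
        node.name
        for node in ast.walk(ast.parse(source))
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    }

def undefined_calls(source):
    """Simple-name calls resolving to neither a local definition nor a
    builtin (imports and attribute calls are out of scope here)."""
    tree = ast.parse(source)
    called = {
        node.func.id
        for node in ast.walk(tree)
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    }
    return called - defined_functions(source) - set(dir(builtins))
```

This is essentially what the AST-backed IDEs already do; the gap is wiring the same ground truth into the model's generation loop rather than checking after the fact.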
NeutralCrane
> Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote, not gobbledygook from an AI.
I’m not sure this is true. Prior to AI you saw a lot of the same behavior, but it was with code copied and pasted from stack overflow, or tutorials, or what have you.
I don’t think AI has changed much in terms of behavior. There has always been a subset of people who have just looked for getting something that “worked” without understanding why, whether that’s from an AI code assistant, or an online forum, or their fellow teammates, and others who want to understand why something works. AI has perhaps made this more apparent, but it’s always been a thing.
nkrisc
The difference is that the code they are copy-pasting isn't randomly mutated for each person doing so, and if they take the time to go back to where they got it, there is likely also an explanation or more info about it, if they care to take the time to read.
whstl
Subreddits focused on gamedev have long stopped being about the craft itself, unfortunately.
90% of the posts are about marketing or are self-help/motivational.
Anything related to art, sound or programming barely gets upvotes.
elpocko
They like to talk about feelings a lot. Lots of posts about how it feels when their upcoming game reaches <number> of wishlists on Steam, or how it felt when their low resolution pixel art game using Kenney's asset packs flopped against all odds.
Arisaka1
The other side effect to that is the difficulty to socialize, albeit online, with those who care about the craft itself.
To paraphrase a meme "best I can do is text editor wars".
whstl
Yep.
Gamedev.net was an amazing hang back in the early 2000s up to the 2010s.
Now it's just a perpetual "how do I do this with Unity" that is super hard to filter.
I just go to meetups now.
> best I can do is text editor wars
Or Unreal vs Unity wars in this case :'D
pydry
IME these tend to be the same people arguing that programmers will all be out of a job in 10 years. It makes me wonder why they persist.
vincnetas
Real-life example. A recent conversation with a colleague:
Hey, trying to translate an Excel sheet with ChatGPT, can't understand what to do (posts a screenshot with an explanation and the example "pip install [package-name]")
You just need to execute the specified command in your environment.
What is "my environment"?
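For what it's worth, "your environment" here just means the Python interpreter and package set that your commands run against. A minimal sketch using only the standard-library `venv` module (the function name is mine):

```python
import sys
import venv
from pathlib import Path

def make_env(path, with_pip=True):
    """Create an isolated Python environment at `path`: its own
    interpreter and (optionally) its own pip, so that
    `pip install [package-name]` affects only this project."""
    venv.create(path, with_pip=with_pip)
    bindir = "Scripts" if sys.platform == "win32" else "bin"
    return Path(path) / bindir / "python"
```

Running the suggested `pip install` from inside such an environment is what the ChatGPT instruction was assuming, and is exactly the kind of unstated prerequisite that trips up non-programmers.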
dansmyers
I'm a professor at a small college. I teach intro programming most semesters and we're now moving to using tools like Cursor with no restrictions in upper-level courses.
"How do students learn to code nowadays?" - I think about this pretty much all the time.
In my intro class, the two main goals are to learn about structured programming (using loops, functions, etc.) and build a mental model of how programs execute. Students should be able to look at a piece of code and reason through what it does. I've moved most of the traditional homework problems into class and lab time, so I can observe the students coding without using AI. The out-of-class projects are now bigger and more creative and include specific steps to teach students how to use AI collaboratively.
My upper-level students are now doing more ambitious and challenging projects. What we've seen is that AI moves the difficulty of programming away from remembering details of languages or frameworks, but rewards having a careful, structured development process:
- Thinking hard and chatting about the problem and the changes you need to implement before doing anything
- Keeping components encapsulated and thinking about interfaces
- Controlling the scope of your changes; current AIs work best at the function or class level
- Testing and validation
- Good manual debugging skills; you can't rely on AI to fix everything for you
- General system knowledge: networking, OS, data formats, databases
One of my key theories is that AI might lower the value of "computer science" as a standalone major, but will lead to a lot more coding across fields that currently don't use it. The intersection of "not a traditional engineer" and "can work with AI to solve problems with code" is going to be an emerging skill set that will change a lot of disciplines.
NeutralCrane
This is by far the most interesting insight in this thread
ghaff
I observe that the course catalog at one not-small college now has computing options for a lot of majors, many of which weren't terribly computer-heavy historically.
greenchair
appreciate the insight
marjann
The rise of tools like Cursor reminds me of the Industrial Revolution in France. When machines first appeared in factories, unskilled workers who didn’t understand how they operated often got injured - sometimes quite literally losing fingers. But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity and improving overall living standards.
The same applies to software development. If you lack the fundamentals - how memory, I/O, networking, and databases work - you’re at risk of building something fragile that will break under real-world conditions. But for those who understand the moving parts, tools like Cursor supercharge efficiency, allowing them to focus on high-level problem-solving rather than boilerplate coding.
Technology evolves, but the need for deep knowledge remains. Those who invest in learning the craft will always have the advantage.
Frieren
> When machines first appeared in factories, unskilled workers who didn’t understand how they operated often got injured - sometimes quite literally losing fingers.
Factories were extremely dangerous because the machines had no safety measures. And they continued to be dangerous, for everybody skilled or not, until the introduction of workers rights, regulations and enforced safety measures and protocols.
> But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity and improving overall living standards.
Skilled craftsmen continued working as they traditionally did, so much so that even today it is possible to find craftsmen who use traditional tools.
> Those who invest in learning the craft will always have the advantage.
I agree with your conclusion, though.
fludlow
I like your comparison. A related thought: what should be really valuable right now for Cursor, Windsurf etc is figuring out who the skilled users are and further training their models based on their usage. In fact, actively courting skilled devs would give them very high quality data to finesse the tools further.
If I could honestly say I was any good at coding I'd be using this as an argument for unlimited free access to these platforms!
shinycode
Well, it's a good point that proves at least two things. First, in the industrial world, machines have not replaced humans even after decades. Still a force multiplier.
The second point is that whoever controls what produces value wins it all. In France we had amazing industries, and some were moved offshore. Maybe some genius thought that only the brain mattered. Now countries have to rely on other countries to build products or make them evolve, and those countries can now make their own products and charge us whatever they want (I'm simplifying), because we don't know how to build things anymore; the tools and craftsmanship are gone and no longer taught. I feel the article pinpoints exactly the main idea behind AI: who will have control, and who will be able to decide that the API price can be x100? If no one knows how to code, that is very dangerous, and what happened in the industrial world shows it's dangerous. Companies have an endgame of power, and as a developer, deciding not to learn, or to delegate my know-how, leaves me at their mercy in the end.
MonkeyClub
> machine have not yet replaced man after decades
When I look at fields like car manufacturing, which is mostly robotic, it seems that nowadays humans are force multipliers for machines rather than the other way around.
shinycode
Yeah, but there isn't one self-operating supply chain that makes cars. We make more cars, or ship them faster.
The day machines replace humans 100% throughout the industries, it will be another problem, because capitalism is built on the premise that people are paid because they bring value. Once that's over and you don't have money, the first things you consume less of are the nice-to-haves, so whole countries might be in trouble. So either we all find other kinds of value to bring, or the system will have to change so as not to collapse?
isolli
But the usual way of learning the craft is broken. Experienced developers will now work with AI instead of hiring junior developers. Some exceptional individuals might still learn on their own, but the path from junior to senior, learning by doing, could vanish. That's my worry.
Frieren
> Some exceptional individuals might still learn on their own
And people with money/means. Children of software engineers may be able to learn the profession easier than others. The same goes for children with affluent parents that can pay for many years of education.
It seems a retreat back to a more medieval economy that excludes large parts of society.
wiether
The free content to learn how to code is still available on the Internet and it won't go away.
SE is one of the few professions that one can _learn_ for free, by themselves.
It could take longer than going into a fancy university, and it won't open corporate doors as easily, but basically anyone with a computer and an Internet connection can learn SE.
ghaff
Probably too B&W. But I’ve had a lot of discussion about this recently and the general consensus is that there’s something to it—especially developers who just got into the field solely because it’s where they thought the money was.
darkwater
And what's wrong with that?
shinycode
You're right, many, many people choose the path of least resistance to learn. Instead of digging into a subject, it's easy to watch the answer unfold…
pydry
As a skilled craftsman, I have to say I'm underwhelmed.
It's not that they're not useful at all; it's just that they look more like a step change dressed up as a revolution.
dijksterhuis
aye, to me they’re just a different interface to the same information publicly available via a search engine.
for folks who haven't spent the last 15 years honing the craft of finding technical information with a search engine, i can see why they might be useful.
but a search engine won't sometimes mangle the output and provide an incorrect answer; it only provides a link to the raw data (the webpage), rather than trying to create a paragraph of text about it.
i’d rather have access to the raw data guaranteed unmangled. i’m fast enough using that method.
raincole
> But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity
Until they eventually and inevitably got injured themselves. Factories were just dangerous (and still are in many many places around the world).
CharlieDigital
> But for skilled craftsmen, these machines became force multipliers, dramatically increasing productivity and improving overall living standards.
I don't know if I agree with this line of thought (is there evidence it's true?). Once you have a metal press, you no longer need a blacksmith skilled at swinging a hammer; in fact, all you need is someone who can be trained to read the manual and follow the instructions, the exact opposite of a skilled tradesman. I do think it is like an industrialization of software engineering[0], but I don't think it favors the skilled craftsman; rather, it shifts the set of skills required and focuses more on reading code than writing it.
[0] https://coderev.app/blog/ais-coming-industrialization-of-cod...
ChrisMarshallNY
> Because if you can vibe code… so can everyone else.
That's really the money shot, right there.
CEOs have this dream of firing all their "obnoxious" engineers, and "vibe-coding" their own products. That's not something new. People have been selling this dream to gullible C-suiters since I first started coding in Machine Code (1980s).
The future will belong to the engineers that can leverage AI. Engineering is a lot more involved than "HAL, write me a Facebook," which is the C-suite dream.
It's just that engineering will move another level up, as it has, for hundreds of years.
DrScientist
> CEOs have this dream of firing all their "obnoxious" engineers, and "vibe-coding" their own products.
Given that one of the key skills of a CEO is pulling together the resources to make something happen, what happens to CEOs if you no longer need resources to make stuff happen?
ie if everybody can Vibe code, vibe market, vibe deploy - aren't you going to be swamped with competitors?
So the interesting thought experiment here is - in such an environment - what are the critical success factors?
moregrist
> Given that one of the key skills of a CEO is pulling together the resources to make something happen, what happens to CEOs if you no longer need resources to make stuff happen?
Never underestimate how much importance a lot of people in middle and upper management place in the number of reports they have. It’s almost a <thing> measuring contest with some of them.
They’ll hire people. It might not be in engineering. But I bet they’ll find a reason to hire more engineering. It might even be justified. Software productivity has been increasing for decades. This has not led to a smaller number of software engineers. Only to more ambitious projects.
The future might be different, but I think the chances of that are small.
supriyo-biswas
Mostly that in such a reality software engineering will cease to be a thing; however industries based around physical resources such as manufacturing, construction and healthcare will continue to employ people, and by extension, the CEO.
parliament32
Absolutely. Just like, I'm sure, plenty of engineers are starting to think about developing a cool product and vibe-marketing, vibe-salesing, vibe-accounting, etc their way into a functioning business. See: the plethora of SaaS platforms promising to automate away entire sections of business operations "with AI".
Both will fail, unfortunately, because it's easy to underestimate the complexities and intricacies of processes you do not understand in the first place. These various AI offerings are just making the situation worse, because they (as with most things in AI) give the appearance of being functional while falling apart under scrutiny -- the "confidently wrong" problem and all.
ChrisMarshallNY
I was just talking to someone about this, this morning.
I will use ChatGPT (generally) to help me solve occasional issues. I'll come across some conundrum, and ask ChatGPT for a suggestion, which it confidently delivers.
The first suggestion is almost always wrong.
I'll say something like "That won't work," or "That answer is deprecated."
It will say "You're right!", followed by one that is more useful.
I suspect lots of folks run with the first answer.
jillesvangurp
I've been programming for a few decades. I love LLMs. They make tedious things quick. Help me resolve gnarly issues. Make short work of writing unit tests. Generate oodles of boilerplate at will. Etc. It makes me more productive and less reluctant to take on risky things. By risky I mean things that formerly would have likely derailed my busy schedule because I'd get side tracked for to long and would have to de-prioritize more important stuff.
Anyway, resistance is futile. You will be assimilated ... or retired. The reality of our job is that new generations are going to come in and they'll be using all the latest tools and gadgets. That's nothing new. And I'm part of a generation that in a decade or two will be mostly on the sidelines enjoying retirement. So, I'm well aware that progress isn't going to stop over my whining and grumbling. It annoys me when I catch myself doing that. I want to be better than that.
LLMs are part of the job now. They are tools. And tools are only as good as the people wielding them. So, skill up and learn. It's not like it's very hard. If you are getting poor results, you might be doing it wrong. Figure it out; part of the job. Your mileage may vary. But there are a lot of tools and chances are you just haven't found the right one yet. Also, if some tool/llm limitation is blocking getting good results for something, wait 3 months and try again. The pace of progress is ridiculous currently.
Or better yet: become part of the solution and make your own tools. This stuff is stupidly easy. It's mostly prompt engineering with some trivial plumbing around it. And you can generate the plumbing (what, you were going to do that manually?). That's why there are so many AI tools popping up right now. Most of them won't survive very long. But there are some good ideas lurking there.
dingnuts
I've been programming for a few decades. I hate LLMs. They generate oodles of buggy shite that I have to fix by hand. They frequently steal my time and make me less productive because people on this site say I have to learn them or retire, and then I waste time looking up the details the bot got wrong. They're a slot machine and the people who think they are good are justifying an addiction and sunk costs.
So retire me, I guess. I'm probably younger than you, but I'm almost ready to retire because I'm cheap and I don't buy into expensive fads, so I'm almost ready to cash out of this nightmare
except then you realize the valuations of anthropic etc are propping up the whole economy and doing so on the promise that LLMs are going to deliver AGI!
LLMs are marginally useful in some contexts. But I have seen absolutely nothing -- nothing -- to justify the costs or the valuations of these companies. They are definitely not AGI and before you accuse me of moving the goalposts, the AI companies are the ones promising this.
It's a bubble. It's easy to get started. Good luck building a real product with just AI though. Good luck with that.
If you turn out to be right I will happily exit this God forsaken industry. Lord free me from silicon valley; I liked computers. Not this. Not these people.
bluGill
The real answer is probably somewhere between the two. There is value in AI - and the versions that will come up in the future will be better. However it isn't nearly as valuable as the advocates say either. I've given up on the current rounds, but I'm still going to keep watching for when they get better. They might or might not get enough better before I retire (I'm likely older than you), but there are a lot of things they can do better. I have no idea how hard those things are.
parliament32
I'd like to agree with you and remain optimistic, but so much tech has promised the moon and stagnated into oblivion that I just don't have any optimism left to give.
I don't know if you're old enough, but remember when speech-to-text was the next big thing? DragonSpeak was released in 1997, everyone was losing their minds about dictating letters/documents in MS Word, and we were promised that THIS would be the key interface for computing evermore. And.. 27 years later, talking to the latest Siri, it makes just as many mistakes as it did back then. In messenger applications people are sending literal voice notes -- audio clips -- back and forth because dictation is so unreliable. And audio clips are possibly the worst interface for communication ever (no searching, etc).
Remember how blockchain was going to change the world? Web3? IoT? Etc etc.
I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at the local maximum. The reliability won't improve much from here (hallucinations etc), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.
CharlieDigital
Probably the opposite is true: the more you know how to code, the less productive you'll be with AI. This has been my observation watching a non-technical friend build a SaaS as a one-man team in 4 months that was generating $#,000 in revenue within 2 months.
The way he uses AI is just completely different from how the technical folks I know use AI because he doesn't think about the code at all. The way he instructs the AI is different from how engineers prompt the AI.
I actually think that his success with AI is in particular because he doesn't know how to code but was previously managing projects and offshore teams (so lots of writing down exactly what he wants, but with no specifics on how it gets implemented).
intelVISA
Problem is what kind of moat does that SaaS have? The flip side of 'we replaced our dev team with vibe coders look how fast we print shitware' is now shitware value falls close to 0, as anyone can make it.
Though you suggest he's non-technical... "writing down exactly what he wants" is just coding!
CharlieDigital
> "writing down exactly what he wants" is just coding!
That would make good project managers and business analysts "coders," and they are not coders. It is only in this age of LLMs that the line between functional requirements and code becomes blurred. He doesn't know how to write code well; he knows how to write requirements and instructions well, from managing offshore teams.
In practice, his instructions are detailed in the functional domain. Engineers bias too much in the technical domain.
> The flip side of 'we replaced our dev team with vibe coders look how fast we print shitware' is now shitware value falls close to 0, as anyone can make it.
Actually, he recognizes this and said something to the effect of "this is the end of SaaS," meaning that anyone can build this. That's his biggest fear going all in on this (he is still keeping his day job despite the project gaining traction so quickly). But I don't think this is true; I think there are still some technical barriers (at the moment): one needs to know to instruct about databases, set up external services, etc. The AI is writing the connections to some third-party APIs, but one needs to know what an API is and which one to use to instruct the agent.
A future may be coming where this is no longer the case (e.g. combining deep research and computer use that will automatically set up domains, connect external services, etc.), but it's not here yet.
> 'shitware'
Is it "shitware" if customers are paying because they are deriving value from it? He's got 30 customers, a few of whom paid annual subscriptions because it provided value to their actual business. Is it "shitware" because it's not handcrafted? Does it matter if it's solving a real problem and customers want to pay for it?
bluGill
I assume by functional domain you mean how the program functions.
Most of us here will assume functional refers to functional programming which is very much in the technical domain.
exodust
> "writing down exactly what he wants" is just coding!
Agreed. The more exact and clear your instructions are, the closer to programming it is. Presumably the non-technical person has an application where they care about things like performance, scalability, compatibility and all those things coders sweat over.
anantdole
This resonates a lot with what I’ve seen outside of code too. I’ve been building an AI chess coach and noticed the same pattern: people plug their games into Stockfish, see a list of best moves, and walk away thinking they’ve “analyzed” the game. But real understanding — like in programming — only comes from engaging with why things went wrong.
That’s what I’m trying to fix. Instead of just showing lines, my AI coach gives voice-guided feedback, visual highlights, and practical insights. More like working with a real coach than sifting through raw engine output.
The goal is to make analysis as engaging as playing—and shift the mindset from “just tell me the best move” to “help me think better.”
Demo if curious: https://www.loom.com/share/9e1578f1348841c1992c5d902e371312?...
morcus
Seems like an interesting idea - is it only tactics in scope, or does the AI also do well at analyzing strategic ideas?
Some other thoughts:
Isn't the first example just wrong? The AI says "after dxe3 Rxd8 Rxd8, white wins the exchange, gaining a Rook for a Bishop", but unless I am mistaken it's actually a Queen for a Rook and Bishop?
Also, it seems the visual highlight the AI referenced is not working? It talks about Rad1 while the pawn is still highlighted.
anantdole
It will do both tactics and strategy. Also working on incorporating positional concepts.
Yes, you are right. It is still in the demo phase and it still makes mistakes. I am refining the model and inputs, so it's definitely a work in progress :)
Regarding the circle highlighting, the agent is deciding / reasoning about which square to highlight, so it is non-deterministic and sometimes right, sometimes wrong.
It will definitely get better as the models improve.
DeathArrow
To each their own. To deeply understand an area you have to learn it from the bottom up.
I learned BASIC as a small kid, using a clone of the ZX Spectrum. I was aware that memory was limited and that I could POKE and PEEK a memory address to set or retrieve information.
I learned Pascal and then rapidly moved to C and C++. I learned about pointers, how memory is laid out, and what system calls are.
I learned about CPUs and I learned some X86 assembly.
At university I learned about digital circuits and how to assemble one using logic gates. Of course I learned much more: data structures, algorithms, operating systems, distributed systems, parallel and concurrent programming, formal languages and automata theory, cryptography, the web, lots of stuff.
I learned lots of other stuff by myself.
I've built desktop apps, websites, software for microcontrollers, games, web applications and now I am working on microservices based apps running in cloud.
I was a junior developer, graduated somehow to senior. I worked as a software architect and now I am a team leader.
These days I work almost exclusively with C#, but I am also interested in other languages if I have some spare time to evaluate them.
What I want to say is this: it is not enough to learn the highest-level technology of today. Today that is AI; a few years ago it was JS frameworks; before that it was Java, .NET, Python.
To be good at what you do, you always have to learn the layers under the current top layer. Learn from the bottom up. You don't have to be good at every technical detail, but you have to understand at least how things work.
daerogami
To add to this, if someone depends on AI (the top layer in your example) and doesn't learn the 'how's and 'why's of programming, they or their organization will be completely beholden to and dependent on the organizations running those AI tools. Not a great position to be in.
grumbel
Ignoring AI would be foolish, since AI is the best programming tutor you can wish for and will speed up your learning noticeably, since you can always ask for clarification or examples when something isn't clear. It's also a great way to get random data for testing. And it will help you unearth lesser known corners of a programming language that you might have overlooked otherwise.
The downside is that at the current speed of improvement, AI might very well already be at escape velocity, where it improves faster than you, and you'll never be able to catch up and contribute anything useful. For a lot of small hobby projects, that's already the case.
I don't think there are any easy answers here. Nothing wrong with learning to code because it's fun. As a career choice, though, it might not have much of a future; then again, neither might most other white-collar jobs. Weird times are ahead of us.
exodust
> AI is the best programming tutor you can wish for...you can always ask for clarification or examples when something isn't clear
Yep, AI can teach beginners the fundamentals with endless patience and examples to suit everyone's style and goals. Walking you through concepts and giving you simulated encouragement as you progress. Scary stuff, but that's how it is.
But... as we know, it doesn't always provide the best solution, or it gets muddled. When you point out its mistakes, it apologises, recognises the mistake, and explains why it's a mistake. Its reasoning is incredible, but it still makes mistakes. This could be very risky for production code.
Related anecdote... I needed Photoshop help recently for horizontally offsetting a vignette effect. Surprisingly not easy. The built-in vignette filter can't be applied to a new blank layer, and is always centred on the image. AI suggested making it manually but I didn't want to do that, as I like the built-in vignette better. AI's next solution involved several complicated steps using channel isolation and weird selection masking etc. No thanks. Then my own brain sparked a better idea... simply increase the canvas size temporarily, apply the vignette, then crop back to the original size. Job done. I told AI about my solution and it was gushing with praise about how brilliant my solution was compared to its own. Moral: never stop trusting your own brain.
guappa
I wonder what kind of trivial, boring jobs the people who keep saying that LLMs are so incredibly useful for programming must have, to hold that opinion.
What does your day look like?
wickedsight
I'm getting back into programming after a couple of years in other roles. I have to learn a framework I haven't worked with before and get to know new paradigms I've never used.
In a way, I'm feeling lucky that my company currently explicitly bans the use of AI tools on our code bases (for good reasons). This forces me to write all my own code and understand what it's doing. The only thing I use AI tools for is to explain some new concepts or paradigms in ways I can understand.
What's also great is that I can throw some code from Stack Overflow or GitHub in there and get it explained. I'm glad these tools exist, since they make learning much easier, but they're also a trap if you depend on them instead of your own knowledge.
koopuluri
> Because if you can vibe code… so can everyone else.
> And if everyone can do it, what makes you think Devin won’t replace you?
Devin won't replace you if you can create valuable products through "vibe coding" or whatever else you call it.
When coding itself becomes a commodity, value creation concentrates further up the stack: what you choose to build, how well you market and sell it, how you connect with your customers, product design. Devin won't outcompete humans at these skills anytime soon.
Instead of sticking to a skill that's quickly becoming a commodity (as the author recommends), moving up the stack is the way to go (outside of very niche, specialized engineering domains, e.g. training base models).
ZaoLahma
AI does not and will not solve the most difficult part of programming: Expressing how you wish to solve a problem in simple terms.
It doesn't matter whether you communicate with a compiler or an LLM - you still need to express your thoughts and ideas without ambiguity for it to produce the wanted behavior. What makes "vibe coding" with an LLM both easier and more challenging at the same time is that it will guess what you mean and give you results that "kind of" work even when you express yourself unclearly. For someone who can code, the "kind of work" results can be used as a starting point to evolve into something useful. For someone who can't, it's an inevitable dead end.
I find that those who struggle with programming have exactly the same struggles when trying to do it with LLMs - no structured plan for approaching a problem and difficulty understanding the context in which they are working.
If you spend time on places that attract newbie programmers (some subreddits focused on game dev or game engines, for example) you’ll see the outcome of “I no longer think you should learn to code.” And it’s not pretty.
Many, many posts from people looking for help fixing AI-generated code because the AI got it wrong and they have no idea what the code even does. Much of the time the problem is simply an invented method name that doesn't exist - a problem trivially solved by the error message and the documentation. But they say they've spent several days going back and forth with the AI trying to fix it.
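To make the point concrete, here's a hypothetical Python sketch of that failure mode: `capitalize_words` is a made-up method name of the kind an AI might invent, and the traceback names the missing attribute directly.

```python
text = "hello world"

try:
    # Hypothetical: a plausible-sounding string method an AI might invent.
    result = text.capitalize_words()
except AttributeError as err:
    # The error message names the missing method outright:
    # 'str' object has no attribute 'capitalize_words'
    print(err)
    # A quick check of the docs finds the real method, str.title():
    result = text.title()

print(result)  # Hello World
```

The error message alone tells you which call doesn't exist; a minute with the documentation resolves what "several days going back and forth with the AI" could not.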
It’s almost a little sad. If they just take the time to actually learn what they’re doing they’ll be able to accomplish so much more.
Now of course people learning the traditional way have these same problems, but they’re debugging code they wrote, not gobbledygook from an AI. It’s also easier to explain the solution to them because they wrote the code, so it tends to be simpler. Several times I’ve pitied someone asking for help with AI code and even when I explained the solution they still didn’t understand it, and I had to just give up on them - I’m not getting paid to help them.