Revenge of the Junior Developer
March 22, 2025
Kiro
It's the opposite. Most people are not boasting about their productivity improvements, but it's everywhere. Unless you work at a company where you're not allowed to use these tools, it should be impossible to miss. Even the most hardcore naysayers I know are now using AI tools. The new discourse is whether the massive increase in code output leads to issues or not (I think it does), but claiming it doesn't happen is not a serious take anymore.
rocmcd
This is the true litmus test IMO. If LLMs are so great and make everyone so productive, then where are the results? Where are all of the amazing products being released that otherwise would have required 10x the investment? Shouldn't there be _anything_ we can point to that shows that the "productivity needle" is being moved?
ramigb
I don't think it makes everyone so productive, tbh! If you really know what you are doing and are willing to burn tokens, it will really take your work to the next level—provided you don’t work with a niche product or language that the models weren’t trained on.
If I may use an analogy, it's like what sampling is for music producers. The sample is out there—it’s already a beautiful sample, full of strings and keys, a glorious 8-bar loop. Anyone can grab it, but not every producer can sell it or turn it into a hit.
In the end, every hype train has some truth to it. I highly suggest you give it a shot, if you haven't already. I’m glad I did—it really helped me a lot, and I am (unfortunately, financially) hooked on it.
Kiro
What are you on about? It's happening everywhere, in all companies. It's literally impossible to miss and how to handle the massive increase in code output is becoming a well-known problem. I think it's interesting how the naysayers claim that it's not happening, yet simultaneously worry about the large amount of low-quality code being produced.
Timber-6539
The demo is the work. Social media views are where the productivity is being moved. And VCs are paying to keep this ball rolling, hoping to eventually cash out big some day.
AI is an irrational market at the moment and this is not going to change anytime soon.
null_name
This sapling is twice as large as it was a week ago, which was twice again as large as it was the week before. Why, at this rate, it'll be bigger than the whole world in but a month.
bayindirh
I remembered the dialogue from the movie Snatch:
T: What's happening with them sausages, Charlie?
C: 2 minutes, Turkish.
-- 5 Minutes later ---
T: How long for the sausages?
C: 5 minutes, Turkish.
T: It was 2 minutes, 5 minutes ago.
I don't know why I remembered it. Is it AI, or self-driving cars, or both? Huh.
jcgrillo
Don't forget the electric air taxis!
fragmede
Self-driving cars are here, just unevenly distributed. Waymo already operates in several cities, providing millions of rides to people. Just because you haven't seen them doesn't mean they don't exist.
jcgrillo
> We’re talking about each developer gradually boosting their productivity by a multiplier of ~5x by Q4 2025 (allowing for ramp-up time), for an additional amortized cost of only maybe $50k/year the first year. Who wouldn’t go for that deal?
OK, I'll take the other side of that bet. If in Q4 '25 devs using cursor or whatever are 5x as productive as me using emacs, I'll give this AI stuff another chance. But I'm pretty sure it won't happen.
victorbjorklund
You should probably compare the same dev using AI vs not using AI. Otherwise you can use that argument for anything.
Notepad is a better IDE than emacs if we compare a really good dev using notepad vs a shitty dev using emacs.
jcgrillo
Ok, if I notice other devs suddenly getting 5x as productive I'll give it a try, but so far no such effect has been demonstrated. It seems like a pretty straightforward research question, and you'd think if there was any demonstrable effect the companies selling these things would use such research to market their products. So where is it?
throwaway173738
Oh good. When I saw the graph I started wondering what I was being sold.
nickysielicki
I’m usually one of the people complaining about hype cycles, and it’s usually been correct to be pessimistic about them.
But in this particular case I have to think a lot of people just haven’t tried it in its best form. No, not a local model on your MacBook. No, not the web interface on the free plan. Go lay down $300 into API credits, spend a weekend (or maybe two) fully setting up aider, really give it a shot. It’s ultimately a pretty small amount of money when it comes to figuring out whether the people who are telling you there’s an existential risk to your livelihood on the horizon are onto something, don’t you think?
myko
I find myself much the opposite - I don't usually complain about hype cycles, thinking we should wait and see before passing judgement. In this case I feel like we've seen enough to know LLMs are not capable of performing anyone's job.
LoganDark
$300? That much sounds like it would last a year. You don't need to spend anywhere near $300 just to try things out
hombre_fatal
I burned through $50 credits in a sitting the first time I used Claude Code to see what I could vibe-code from scratch. You're pushing whole files of tokens through it every time you interact with it.
LoganDark
Huh, maybe the reason it doesn't use this much for me is because my work has mostly source files that are far too large for Claude Code to read. It always has to ask for permission to use grep because it can't figure out how to use the read tool properly. I've done entire new features on probably around $15 of credit max.
fragmede
for better or worse, Claude code is less parsimonious with tokens compared to aider for the same thing.
danielbln
We're using Cline, and it's very easy to blow through $20-40 if you get going. The value proposition is absolutely there so we eat the cost, but OP is correct in that agentic coding eats tokens like there's no tomorrow.
nickysielicki
RTFA, agents churn through credits. Further reinforcing my belief that the vast majority of people have not actually tried it yet.
LoganDark
> RTFA, agents churn through credits.
I literally use Claude Code as part of my job, so either the "FA" is wrong, or my costs are only low because work just happens to have a codebase that reduces costs (lol)
Timber-6539
How much does DeepSeek's equivalent cost?
nickysielicki
I know this isn't what you were asking, but the answer is $300. And if I wrote $3000 in the original comment, the answer would be the same. You cannot have enough money.
skydhash
> It’s ultimately a pretty small amount of money when it comes to figuring out whether the people who are telling you there’s an existential risk to your livelihood on the horizon are onto something, don’t you think?
Nope. I'd rather buy some books or a Jetbrains subscription.
djha-skin
> I have bad news: Code completions were very popular a year ago, a time that now feels like a distant prequel. But they are now the AI equivalent of “dead man walking.”
I disagree. I view this as a "machine guns versus heat-seeking missiles from the 70s" dichotomy. Sure, using missiles is faster. However, sometimes you're too close for missiles, and machine gun rounds are way cheaper than missiles. When they first came out, though, missiles were viewed as the future. For a while, fighter jets were built without machine guns, but the guns were added back once it became clear both were needed.
Sometimes I find I want to drill down and edit what Claude generated. In that case, copilot is still really nice.
With regard to ai assisted coding: the more you know what you're doing, the more you know the code base, the better result you'll get. To me it feels like a rototiller or some other power tool. It plows soil way faster than you can and is self propelled, but it isn't self directed. Using it still requires planning and it's expensive to run. While using the tool, you must micromanage its direction, constantly giving it haptic feedback from the hands, or it goes off course.
A rototiller could be compared to a hired hand plowing himself, I guess, but there's way less micromanagement with a hired hand vs a rototiller.
Kind of like horses and cars. Horses can get you home if you're drunk. Cars can't.
The proper use of AI agentic tools is like operating heavy machinery. Juniors can really hurt themselves with it but seniors can do a lot of good. The analogy goes further: sometimes you need to get out of the backhoe and dig with smaller tools like jackhammers or just shovels. The jackhammer is like copilot -- a mid-grade power tool -- and Claude code is like the backhoe. Clunky, crude, but can get massive amounts done quickly, if that's what's needed.
skydhash
> Clunky, crude, but can get massive amounts done quickly, if that's what's needed.
You know what's quicker in your analogy? A spell. Or, in the coding world: templates, snippets, code generators, frameworks, and metaprogramming, where you abstract all the boilerplate behind a few commands. You already know the blast radius of your brute modification tools, so you no longer have to micromanage them. And it's reliable.
tyleo
I love how new technology becomes like religion. It develops both cult followers and critics.
In that lens I think the AI cult is more right than the Crypto cult. At least I can use it to do something tangible right now, while crypto is still pretty useless after many years.
In some sense I think these technologies need the cults and the critics though. It’s good to have people push new things forwards even if everyone isn’t along for the ride. It’s also good to have a counter side poke holes. I think the world is better with both optimists charting new paths forwards and pessimists making sure they don’t walk right off a cliff.
aorloff
Crypto is still pretty valuable relative to how useless it remains.
Unless you are imagining a world in which there's a global conflict and crypto isn't shut down in the first 12 months.
LightHugger
Cryptocurrency allows people who are otherwise limited by draconian payment platform limitations, puritanical moralists who own credit card companies and such to buy and sell things online without being stopped. It's obviously not a good system but it provides a high effort release valve, a fallback mechanism that hopefully will undermine some of these draconian measures. Lots of people have been helped by the existence of cryptocurrency because of this, especially since lately a lot of bad actors control payment platforms and often just shut down businesses on whims.
Whether more have been helped or hurt is debatable but it certainly has a tangible, if niche, use case with real value. It certainly has no value as a store of value, though.
glitchc
Evidence suggests it's the exact opposite. The only thing crypto has really been good for is a store of value. Sure, it's a volatile commodity, but over any 5 year average window, Bitcoin definitely beats inflation. That makes it a safe store of value.
jdlshore
Huh. While I agree that Bitcoin specifically has been a successful speculative investment in the past, "a safe store of value" is a massive stretch. It's a safe store of value in the same way an S&P 500 index fund is, except more volatile. What's that phrase... "Past performance is not indicative of future results?"
LightHugger
What evidence? People use it to bypass the payment processor cartels all the time.
DonHopkins
A lot of bad actors control most Crypto platforms. Some of them are even justly rotting in jail (until Trump pardons them), and a hell of a lot more of them deserve to be in jail, while most AI users aren't actively committing crimes, laundering and embezzling money, dodging taxes, pumping and dumping rug pulls and pyramid schemes, and committing fraud.
It's been such a relief since the Crypto scammers finally shut the fuck up with their incessant ShitCoin and NFT shilling and get-rich-quick pyramid schemes, so please don't any of you start up again, for the love of God.
AI is NOT like Crypto in any way shape or form, and this is an AI discussion, not a Crypto discussion. And I'm sick and tired of hearing from Crypto shills yapping HODL and FUD while I'm actually getting productive work done and making real money while creating tangible value and delivering useful products with AI, without even having to continuously recruit greater fools and rip off senior citizens and naive suckers of their life savings by incessantly shilling and pumping and dumping and pulling rugs out from under people.
davydm
the only good part was the joke about "vibecoding" (shudder what a stupid term) being like a fart and attracting flies... ok investors
still, this "ai code tools will deprecate real programming" bullshit will one day be laughed at just like how most of us laugh at shitcoin maniacs
it just takes a lot of people way too long to learn
skydhash
Maybe there's a different universe out there, where the code you write is not expected to work, so you can poke the LLM for a whole day to see if it barfs something out.
I spend much of the day reading and thinking and only a small portion actually writing code, because when I'm typing, I usually have a hypothetical solution that is 99% correct and I'm just bringing it to life. Or I'm refactoring. You can interrupt me at any time and I could give you the complete recipe of what I'm doing.
Which is why I don't use LLMs: it's actually twice the work for me. Typing out the specs, then verifying and editing the result, when I could have just typed the code in the first place. And they suck at prototyping. Sometimes I want to leave something in a bare state where only one incantation works, because I'm not sure of the design yet, and leave a TODO comment, but they go and generate more complicated code, which is a pain to refactor later.
wrs
I agree with the spirit of the argument, but I don’t think you’re taking into account the scale of “typing” we’re talking about now.
For example, yesterday I needed a parser for a mini-language. I wrote a grammar — actually not even the formal grammar, just some examples — and what I wanted the AST to look like. I said “write the tokenizer”, and it did. I told it to tweak a few things and write tests. It did. I told it to “write a recursive descent parser”, and it did. Add tests and do some tweaks, done.
The whole thing works and it took less than an hour.
So yeah, I had to know I needed a parser, but at that point I could pretty much hand off the details. (Which I still checked over, so I'm not even in full vibe mode, I guess.)
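For a sense of the scale of "typing" being handed off here, a minimal tokenizer plus recursive-descent parser for a toy arithmetic grammar (a hypothetical stand-in for wrs's mini-language, not the actual code from the comment) fits in about fifty lines of Python:

```python
import re

# Toy grammar (the kind of spec you'd hand the model):
#   expr   -> term  (("+" | "-") term)*
#   term   -> factor (("*" | "/") factor)*
#   factor -> NUMBER | "(" expr ")"

def tokenize(src):
    """Split source into NUMBER and single-character operator tokens."""
    return re.findall(r"\d+|[+\-*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if expected is not None and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):
        node = self.term()
        while self.peek() in ("+", "-"):
            node = (self.eat(), node, self.term())
        return node

    def term(self):
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node

    def factor(self):
        if self.peek() == "(":
            self.eat("(")
            node = self.expr()
            self.eat(")")
            return node
        return int(self.eat())

def parse(src):
    """Parse source text into a nested-tuple AST."""
    return Parser(tokenize(src)).expr()

print(parse("1+2*3"))  # ('+', 1, ('*', 2, 3))
```

The point stands either way: this is well-trodden, pattern-heavy code, which is exactly the territory where the hand-off works best.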
LPisGood
This is a cool use case that probably saved you some time, but writing a recursive descent parser is something freshman CS students do for a lab assignment.
It isn’t exactly breaking new ground or doing anything you couldn’t find with a quick google search.
skydhash
Isn't that what tools like antlr [0], bison[1] do?
danielbln
I disagree, agentic coding is amazing for prototyping. It removes a ton of friction and inertia and allows me to try a bunch of ideas to see what works (from a delivery and/or UX perspective). Of course, you should have the systems thinking and experience in place to know what it is you're doing with it at any given point , but for demos and prototypes it has been an absolute boon.
Generally, you wouldn't type out the spec either: you either provide an existing spec to the model (in the form of whiteboard notes, meeting notes, etc.) or you iterate conversationally until you arrive at an initial implementation plan.
It's a different way of working for sure, and it has distinct drawbacks and requires different mental modes depending on whether you're doing greenfield work, a demo/prototype, feature development on an existing large app, etc., but it's been a massive productivity enhancement, especially when tackling multiple projects in quick succession.
hombre_fatal
A good example of this at the extreme is game design.
When you're making a game, it's really expensive to try out different ideas, like a few different implementations of a mechanic. It could be hours of work to make a change. You tend to have to do a lot of thinking to tweak a mechanic in a fundamental way just to see what it feels like, knowing that you're probably going to throw it away.
LLMs are really good at this. Make a git branch, ask Claude Code to tweak the physics so that it does X, see what it feels like. Rollback the change or continue tweaking.
Same with branching the UI to see what a different idea would feel like. Simple changes to explain could result in hours of refactoring UI code (chore work) just to see if you like the result. Or just ask the LLM to do it, see what it feels like, roll it back.
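The branch-per-experiment workflow described above is just ordinary git; a sketch, with a hypothetical branch name:

```shell
# Try a throwaway variant of a mechanic on its own branch
git switch -c experiment/floatier-jump

# ...let the agent rewrite the physics, playtest the result...

# Didn't like it: go back and discard the experiment entirely
git switch main
git branch -D experiment/floatier-jump
```

If the variant does feel right, you merge it instead of deleting it; either way the main branch never carries half-baked tweaks.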
skydhash
Usually my own way of working is to use Balsamiq[0] to build a visual prototype for testing out flows, Figma|Sketch for the UI specs, then to just code it. Kind of the same as drawing, where you doodle until you have a few workable ideas, iterate on those to judge colors and other things, and then commit to one for the final result.
null_name
Yeah, call me a cynic or conservative or whatever - I'll believe it when I see it. I give very little weight to predictions about the future from AI shills, especially when they include some variant of "we're 90% there already" or "an exponential shift is imminent, if things keep improving at this rate, which they Will." Opinion discarded, create your thing and come back if/when it works.
Everything is shifting so fast right now that it hardly matters anyways. Whatever I spend time learning will be outdated in a few years (when things are predicted to get good). It does matter if you're trying to sell AI products, though. Then you gotta convince people they're missing out, their livelihood is at stake if they don't use your new thing now now now.
monsieurbanana
Even if I would agree with everything the article says, I have no idea how the author gets to the conclusion that junior developers will prevail because they are faster at adopting LLMs.
Didn't he just make a point about how fast the situation is evolving? I had some FOMO about AI last year, but not anymore. I don't care that I don't have time to fully explore the current LLM state of the art, because in a month it will be obsolete. I'm happy waiting until it settles down.
And if their scenario ends up happening, and you can basically multiply a dev's productivity by N by paying N x K dollarinos, why would you choose a junior dev? It's cheaper, but sometimes a junior dev doesn't just take longer to arrive at a solution; they never arrive at all (same for senior devs, don't get me wrong, but it happens less often).
bryanlarsen
It's a counterpoint to Yegge's original post https://sourcegraph.com/blog/the-death-of-the-junior-develop...
And it's not saying his original post is wrong, they should be taken together. He's saying those who adapt to the new paradigm will "win", whether senior or junior.
jsdalton
Much of this post was spot on — but the blind spots are highly problematic.
In this agentic AI utopia of six months from now:
* Why would developers — especially junior developers — be assigned oversight of the AI clusters? This sounds more like an engineering management role that’s very hands on. This makes sense because the skill set required for the desired outcomes is no longer “how do I write code that makes these computers work correctly” but rather “what’s the best solution for our customers and/or business in this problem space.” Higher order thinking, expertise in the domain, and dare I say wisdom are more valuable than knowing the intricacies of React hooks.
* Economically speaking what are all these companies doing with all this code? Code is still a liability, not an asset. Mere humans writing code faster than they comprehend the problem space is already a problem and the brave new world described here makes this problem worse not better. In particular here, there’s no longer an economic “moat” to build a business off of if everything can be “solved” in a day with a swarm of AI agents.
* I wonder about the long-term scaling of these approaches. The trade-off seems to be extremely fast productivity at the start that falls off a cliff as the product matures and grows. It’s like a building that can be constructed up to a few floors in a day but quickly hits an upper limit, because you’re trying to build _on top of_ a foundational layer of poorly understood garbage.
* Heaven help the ops / infrastructure folks who have to run this garbage and deal with issues at scale.
Btw I don’t reject everything in this post — these tools are indeed powerful and compelling and the trendlines are undeniable.
mdaniel
I still have to get used to the fact that a (sourcegraph.com) /item may contain (steve-yegge.blogspot.com) content
haburka
One of the funniest things I’ve read in a while. Also full of some truths. I think learning how to use AI will become a core part of being a dev but I seriously doubt they’ll have anywhere near the competency of solving a problem that a junior engineer has. They can certainly write code like one though.
I really recommend this to anyone reading - if you haven’t tried using cursor or copilot, check them out. It makes writing code less tedious.
6510
I had this rather comical picture where developers finally get to experience what it's like to have someone write software for you. You get sort of what you asked for, but it is obviously wrong to even the most novice user. You then change the requirements and get something entirely different but equally wrong or worse... and a new invoice. Hahaha. It gets funnier the more I think about it.
DanHulton
As always, citation needed.
(Also, grain of salt required, because this is a blatant marketing post.)
Look, I've been hearing "the models will get better and make these core problems go away" since it became common to talk about "the models" at all. Maybe they will some day! But also, and critically, maybe they won't.
You also have to consider the future where some companies spend an additional $50-100k per developer and they DON'T see any of this supposed increase in performance, if these "trust me, it'll happen this time" promises don't come true. This is the kind of bet that can CRATER companies, so it's not surprising to see some hesitation here, a desire to see if the football will be again yanked away.
Plus, and I believe most damningly, this article appears to be engaging in the classic technocratic failure mode: mistaking social problems for technical ones.
Obviously, yes, developers engage in solving technical problems, but that is not all they do, and at the higher level, that becomes the least of what they do. More and more, a good developer ensures that they are solving the RIGHT problem in the RIGHT WAY. They're consulting with managers, (ideally) users, other teams, a whole host of people to ensure the right thing is built at the right time with the right features and that the right sacrifices are being made. LLMs are classically bad at this.
The author dismissively calls this "getting stuck", and handwaves away its importance, saying that the engineer will be able to unstick the model at first (which, if we're putting armies of "vibe coding" junior engineers in charge of the LLMs, who've not had time enough in their careers to develop this skill, HOW?), and then makes the classic claim "but the models will get better", predicting the models will eventually be able to do it themselves (which, if this is an intractable problem with LLMs -- and so far the evidence has been leaning this way -- again, HOW?).
Forgive that appalling grammar. I am het up. But note well what I'm doing: I'm asking "should we even be doing this?" Which is something these models a) will have to do well to accomplish what the author insinuates they will, and b) have been persistently terrible at.
I'm going to remain skeptical for now, since it seems that's my one remaining superpower versus these LLMs, and I guess I'm going to need to keep that skill sharp if I want to avoid the breadline in this author's future. =)
sfjailbird
LOL, pretty good, he had me going a few times. I'm sure a surprising amount of people will actually take this seriously, showing how ludicrous the situation is right now.
Everyone: telling me how great AI is.
No one: making anything great with AI.
Sourcegraph: an AI company, routinely promoting their LLM-optimism blogposts to HN, perpetuating the hype cycle their business model depends on.