AI can't even fix a simple bug – but sure, let's fire engineers
147 comments
· May 24, 2025 · HelloUsername
zihotki
It's not related; it's actually the original source of this article. The submitted article doesn't add anything to the source apart from advertisements.
csallen
AI is a tool.
Just like any other tool, there are people who use it poorly, and people who use it well.
Yes, we're all tired of the endless parade of people who exaggerate the abilities of (current day) AI and claim it can do more than it can do.
But I'm also getting tired of people writing articles that showcase people using AI poorly as if that proves some sort of point about its inherent limitations.
Man hits thumb with hammer. Article: "Hammers can't even drive a simple nail!"
nessbot
It's not a tool, it's a SaaS. I own and control my tools. I think a John Deere tractor loses its "tool" status when you can't control it. Sure, there are the local models, but those aren't what the vast majority of folks are using or pushing.
steventruong
This is an incredibly weird view to me. If I borrow a hammer from my neighbor, although I don’t own the hammer, it doesn’t suddenly make the hammer not a tool. Associating a tool with the concept of ownership feels like an odd argument to make.
tacheiordache
Now the hammer has mapped your house and knows where all the screws are and is uploading all that to some servers.
Kinrany
You don't own the hammer but you control it.
benreesman
You get different models, configurations, system prompts (and DAN-descended stuff is like DMCA now, unaccountable blacklist for even trying in some cases), you get control vectors and modern variants with more invasive dynamic weight biasing. The expert/friend/coworker drop down doesn't have all the entries in it: there's a button to make Claude code write files full of "in production code we'd do the calculation" mocks and then write a commit message about all the passing tests (with a byline!), but some ops guy pushes that button in the rare event the PID controller or whatever can't cope.
These are hooked up to control theory algorithms based on aggregate and regional KV and prompt cache load. This is true of both fixed and per-token billing. The agent will often be an asset at 4am but a liability at 2pm.
You get experiment segmentation always; you get behavior-scoped multi-armed bandit rotation into and out of multiple segment categories (an experiment universe will typically have not less than 10,000 segments, each engineer will need maybe 2 or 3, and maybe hundreds of arms per project/feature, so that's a lot of universes).
At this stage of the consumer internet cycle it's about unit economics and regulatory capture and stock manipulation via the hype rollercoaster. And make no mistake about what kind of companies these are: they have research programs with heavy short-run applications in mind and a few enclaves where they do AlphaFold or something. I'm sure they created an environment Carmack would tolerate at least for a while, but I give it a year or two; we saw that movie at Oculus, and Bosworth is a pretty good guy, he's like Jesus compared to the new boss.
In this extended analogy about users, owners, lenders, borrowers and hammers, I'd be asking what is the hammer and who is the nail.
bcyn
Many SaaS products are tools. I'm sure when tractors were first invented, people felt that they didn't "control" it compared to directly holding shovels and manually doing the same work.
Not to say that LLMs are at the same reliability of tractors vs. manual labor, but just think that your classification of what's a tool vs. not isn't a fair argument.
pempem
I think the OP comment re: AI's value as a tool comes down to this:
Does what it says: When you swing a hammer and make contact, it provides greater and more focused force than your body at that same velocity. People who sell hammers make this claim and sometimes show you that the hammer can even pull out nails really well. The claims about what AI can do are noisy, incorrect and proffered by people who - I imagine OP thinks and would agree - know better. Essentially they are saying "Hammers are amazing. Swing them around everywhere"
Right to repair: means an opportunity to understand the guts of a thing and fix it to do what you want. You cannot really do this to AI. You can prompt differently, but it can be unclear why you're not getting what you want.
benreesman
Intentionally or not the tractor analogy is a rich commentary on this but it might not make the point you intend. Look into all the lawsuits and shit like that with John Deere and the DRM lockouts where farmers are losing whole crops because of remote shutdown cryptography that's physically impossible to remove at a cost or in a timeframe less than a new tractor.
People on HN love to bring up farm subsidies, and it's a real issue, but big agriculture has special deals and whatnot. They have redundancy and leverage.
The only time this stuff kicks in is when the person with the little plot needs the next harvest to get solvent, and the only outcome it ever achieves is to push one more family farm on the brink into receivership and directly into the hands of a conglomerate.
Software engineers commanded salaries that The Right People have found an affront to the order of things, long after they had gotten doctors and lawyers and other high-skill trades largely brought to heel via joint licensing and pick-a-number tuition debt loads. This isn't easy to do in software, for a variety of reasons, but roughly because the history of computer science in academia is a unique one: it's research-oriented in universities (mostly; there are programs with an applied tilt), but almost everyone signs up, graduates, and heads to industry without a second thought. So back when the other skilled trades were getting organized into the class system, it was kind of an oddity, regarded as almost an eccentric pursuit by deans and shit.
So while CS fundamentals are critical to good SWEs, schools don't teach them well as a rule, any more than a physics undergraduate is going to be an asset at CERN: it's prep for theory research most never do. Applied CS is just as serious a topic, but you mostly learn that via serious self-study or from coworkers at companies with chops. Even CS graduates who are legends almost always emphasize that if you're serious about hacking, then undergrad CS is remedial by the time you run into it (Coders at Work is full of this sentiment).
So to bring this back to tractors and AI, this is about a stubborn nail in what remains of the upwardly mobile skilled middle class that multiple illegal wage fixing schemes have yet to pound flat.
This one will fail too, but that's another mini blog post.
eddd-ddde
Is your email not a tool since you likely pay some cloud provider for it and the way it works is largely outside your control?
Something can be a SaaS, and a useful tool, at the same time.
hoppp
A tool and a SaaS. It can be both, they are not mutually exclusive.
johnisgood
You can use it locally. FWIW many tools are SaaS, yet people have no trouble with that.
nessbot
Good for them?
Jackson__
More like:
Man hits thumb with hammer. Hammer companies proclaim Hammers will be able to build entire houses on their own within the next few years [0].
[0] https://www.nytimes.com/2025/05/23/podcasts/google-ai-demis-...
csallen
Okay, so write an article about how the hammer companies are dumb. Don't write an article about how hammers can't drive nails.
Why is this complex?
ruraljuror
> Okay, so write an article about how the hammer companies are dumb
This is exactly how I interpreted the OP blog post: it is a great example of how companies are (mis)using AI given its current abilities.
pempem
This is a way better version of my comment.
JHer
The television, the atom bomb, the cigarette rolling machine, and penicillin are also "just tools". They nevertheless changed our world entirely, for better or worse. If you ascribe the impact of AI to the people using AI, you will be utterly, completely bewildered by what is happening and what is going to happen.
_se
[flagged]
croes
> Yes, we're all tired of the endless parade of people who exaggerate the abilities of (current day) AI
You mean the people who create and sell these AIs.
You would blame the hammer, or at least the manufacturer, if they claimed the hammer can do it all by itself.
This is more of a your-car-can-drive-without-supervision-but-it-hit-another-car case.
baxtr
I think the difference is that a hammer manufacturer wouldn’t suggest that the hammer will replace the handyman with its next update.
deadlydose
> You would blame the hammer or at least the manufacturer if they claimed the hammer can do it all by itself.
I wouldn't because I'm not stupid and I know what a hammer is and isn't capable of despite any claims to the contrary.
tedunangst
What's the best way to insulate myself from the output of people using AI poorly?
benreesman
It's increasingly a luxury to be a software engineer who is able to avoid some combination of morally reprehensible leadership harming the public, quality craftsmanship in software being in freefall, and ML proficiency being defined downwards to admit terrible uses of ML.
AI coding stuff is a massive lever on some tasks when used by experts. But it's not self-driving, and the capabilities of the frontier vendor stuff might be trending down; they're certainly not skyrocketing.
Any other tool: a compiler, an editor, a shell, even a browser, but I'd say build tools are the best analogy: you have chosen to become proficient or even expert or you haven't and rely on colleagues or communities that provide that expertise. Pick a project or a company: you know if you should be messing around with the build or asking a build person.
AI is no different. Claude 4 Opus just went GA and it's still in power-user tune; they don't have the newb/cost-control defaults dialed in yet, so it's really useful and probably will be for a few days, until they get the PID controller wired up to whatever a control vector is these days, and then it will tank to useless slop just like 3.7.
For a week I'll get a little boost in my output and pay them a grand and be glad I did, and then it will go back to worse than useless.
These guys only know one business plan.
OutOfHere
> there are people who use it poorly, and people who use it well.
Precisely. AI needs appropriate and sufficient guidance to be able to write code that does the job. I make sure my prompts have all of the necessary implementation detail that the AI will need. Without this guidance, the expected result is not a good one.
spookie
This often doesn't scale well. And writing out all the necessary context is sometimes harder than just programming the thing yourself.
OutOfHere
Well, it's where we are now with AI technology. Perhaps a superior future AI will need less of it. For now I give it all that I think it won't reliably figure out on its own.
dinfinity
> AI needs appropriate and sufficient guidance to be able to write code that does the job.
Note that the example (shitty Microsoft) implementation was not able to properly run tests during its work, not even tests it had written itself.
If you have an existing codebase that already has plenty of tests and you ask AI to refactor something whilst giving it the access it needs to run those tests, it can already sometimes do a great job all by itself.
Good specification and documentation also do a lot, of course, but the iterative approach, with feedback on whether things are actually working as intended, is a game changer. Unsurprisingly, it's also a lot closer to how humans do things.
OutOfHere
The iterative approach has one problem -- it is onerous to repeat the lengthy iterative process with a different model, as it will lead to an entirely different conversation. In contrast, when the spec is well-written up-front, it is trivial to switch models to see how the other model implements it differently.
despera
A tool can very well be broken, though, or simply useless for anything but the most lightweight job (despite all the PR nonsense).
They call it a tool and so people leave reviews like any other tool.
dvfjsdhgfv
> Just like any other tool, there are people who use it poorly, and people who use it well.
We are not talking about random folks here but about the largest software company with high stakes in the most popular LLM trying to show off how good it is. Stephen Toub is hardly a newbie either.
zkmon
AI adoption is mostly driven from the top. What this means is that shareholders and regulators make CEOs claim that the company is using AI. CEOs trickle this grand vision and its goals down, allocating funds and asking for immediate reports showing evidence of AI everywhere in the company. One executive went so far as to say that anyone not using AI in their work would face disciplinary action.
So the point is, it is not about whether AI can fix a bug or do something useful. It is about reporting and staying competitive via claims. Just like many other reports, which have no purpose other than the reporting itself.
A few years back, I asked an architect who was authoring an architecture document who the audience for the document was. She replied that the target audience was the reviewers. I asked, does anyone use it after the review? She said she wasn't sure. And not surprisingly, the project, which took 3 years and a large budget to develop, was shelved after being live for an hour in prod, because the whole thing was done only for a press release saying the company had gone live with a new tech. They didn't lie.
sixtram
Yesterday, I asked AI for help: check my SQL stored procedure for possible logical errors. It found a join error that I didn't remember including in my SQL. After double-checking, I found that it had hallucinated a join that wasn't there and reported it as a bug. After I asked for more information, it apologized for adding it.
I also asked for some C# code with a RegEx. It compiled, but it didn't work; it had swapped the order of two string parameters. I had to copy and paste it back to show why it didn't work, and then it realized that it had changed the order of the parameters.
I asked for a command-line option to zip files in a certain way. It hallucinated a nonexistent option that would be crucial. In the end, it turned out that it was not possible to zip the files the way I wanted.
My manager plans to open our Git repository for AI code and pull request (PR) review. I already anticipate the pain of reviewing nonsensical bug reports.
aerhardt
I'm an experienced programmer but currently picking up C# for a masters course on Game AI. I appreciate having the LLMs at hand but I am surprised by how much of a step down the quality of the code output is compared to Python.
NBJack
Step down in terms of LLM performance? I think that is easily explained in the sheer bulk of articles, blogs, and open source projects in Python rather than C#. I actually prefer the latter to the former, but I know it is still not that widely adopted.
craftkiller
One of my teammates recently decided to use AI to explain a config option instead of reading the 3 sentences in the actual documentation. The AI told him the option did something that the documentation explicitly stated the option did not do. He then copied the AI's lies as a comment into our code base. If I hadn't read the actual documentation like our forefathers used to, that lie would be copied around from one project to the next. I miss having coworkers that are more than a thin facade to an LLM.
yetihehe
I will say it again and again - AI will replace developers like excel and accounting programs replaced accountants.
tocs3
An old (2015) NPR Planet Money story about VisiCalc.
Episode 606: Spreadsheets! https://www.npr.org/sections/money/2015/02/25/389027988/epis...
The 1984 story it was inspired by (according to the episode description).
https://medium.com/backchannel/a-spreadsheet-way-of-knowledg...
There are of course still accountants.
mllev
Didn’t they though? I’m sure accounting firms hired way more accountants back in the days of paper records. AI definitely won’t rid society of the developer role, but there will certainly be fewer employment opportunities.
yetihehe
There are already fewer employment opportunities, because big tech firms hired anyone they could and let them sit on unimportant things. At this moment, AI is used mostly as a convenient excuse to "trim the fat". Of course it's a catastrophe for the fired programmers, but the good ones will find a job or create a new one.
namaria
Are there fewer accountants now than there were in 1985?
Also relevant, do companies find it hard to hire accountants now?
Joeboy
Which is how? This is an honest question, I genuinely don't know what happened to all the people who used to be employed to do manual calculations. Or all the people who worked as typists for that matter.
bgwalter
I took it for irony, that is, accountants weren't replaced by Excel. There are thousands of articles right now of course that accountants will be replaced by AI.
Joeboy
Of course, but my question stands. Did all the people who used to do easily automatable clerical work remain in those roles? I presume that isn't the case, but like I say I don't really know what happened to them.
HDThoreaun
Tons of accountants were replaced by excel though, just not all of them.
sotix
Do you mind clarifying your point? Are you implying excel didn’t replace accountants? As a CPA and software engineer I can tell you that I left the accounting industry because accountants had been replaced. Wages are abysmally low for a field that required a graduate degree to sit for the CPA exam, and the industry has been in a crisis mode with a shortage of entrants.
Accounting has probably been hit significantly harder by offshoring than by tools like excel, but the market for it is not what I would consider healthy. Excel making it easier for offshore talent is also a possibility. Further, the industry has been decimated due to a lack of union / organization. The AICPA allowing people in India and the Philippines to become CPAs has been devastating to US CPA wages.
Accordingly I find your original comparison of software engineering to accounting a bit concerning!
yetihehe
I clarified in another comment[0].
> and the industry has been in a crisis mode with a shortage of entrants.
The accountants I know are well paid. Not every one of them, of course; some are bad at their job and some are good. Many are essentially working the way freelance programmers do. There will still be a lot of clients who can't even ask AI to write simple programs for them, but who need some small programming work done and have the money to pay a freelancing programmer to create a small solution for them.
> Accordingly I find your original comparison of software engineering to accounting a bit concerning!
In my original post, I was saying that some programmers will indeed be replaced. The programming field will probably shift the way the accounting field did. It WILL be concerning for a lot of programmers, but programming as a field is here to stay, like accounting.
sotix
Thanks for the clarification. Personally, I view accounting as a dying industry in America. The number of well paid accountants is few and decreasing each year. If programming follows that path, it will be devastating for the industry.
Some statistics on US accounting trends[0]:
- As a direct result of enrollment declines in accounting programs, first-time candidates sitting for the CPA exam decreased from 48,004 in 2016 to 32,188 in 2021, a drop of 33%
- 7% decrease from 2021 to 2022 in the total number of candidates taking the CPA exam
- 2022 saw the lowest number of exam takers since 2006
- 30% fewer candidates passed their final section of the CPA exam in 2021 compared to 2016
- The AICPA has stated that roughly 75% of its members are at retirement age
- Accounting graduates trended downward in the 2019–2020 academic year, with decreases of 2.8% and 8.4% at the bachelor's and master's levels, respectively
Having gone through public accounting myself, and knowing hundreds of accountants, it's not a career path I recommend others follow, whereas computer science and software engineering is one I still recommend. Accounting has largely gone overseas. There are not enough partner positions at the big firms, or at smaller local firms, to supply enough work for accountants. At least software engineering still has the unlimited growth potential that software enables.
[0]: https://www.cpajournal.com/2024/11/25/the-accounting-profess...
eastbound
1000x more reporting requirements?
data-ottawa
My experiences with Copilot are exactly like these threads – once it's wrong you have to almost start fresh or takeover, which can be a huge time sink.
Claude 4 Opus and Sonnet seem much better for me. The models needed alignment and feedback but worked fairly well. I know Copilot uses Claude but for whatever reason I don't get nearly the same quality as using Claude Code.
Claude is expensive, $10 to implement a feature, $2 to add some unit tests to my small personal project. I imagine large apps or apps without clear division of modules/code will burn through tokens.
It definitely works as an accelerator but I don't think it's going to replace humans yet, and I think that's still a very strong position for AI to be in.
fuzzzerd
Are those costs from the pay as you go credit system or is that on top of a max subscription?
I've tinkered with the pay as you go, but wonder if a higher cap on max for 100/month would be worth it?
data-ottawa
I bought $20 of pay as you go API credits to test out the models and how to use them.
I have not tried the $100/month subscription. If it's net cheaper than buying credits I would consider it, since that's basically 10 features per month.
fuzzzerd
Gotcha. I am in the same boat with pay as you go credits and evaluating what it can do. There are limits with the max subscription but I'm not sure exactly how to measure it against the pay as you go tokens. So for now I just use tokens sparingly.
NBJack
Note we aren't really seeing the price reflect the true costs of using LLMs yet. Everyone is prioritizing adoption over sustainable business models. Time will tell how this pans out.
bee_rider
The line:
> Become the AI expert on your team. Don't fight the tools, master them. Be the person who knows when AI helps and when it hurts.
Is something I’ve been wondering about. I haven’t played with this AI stuff much at all, despite thinking it is probably going to be a basically interesting tool at some point. It just seems like it is currently a bit bad, and multiple companies have bet millions of dollars on the idea that it will eventually be quite good. I think that’s a self fulfilling prophecy, they’ll probably get around to making it useful. But, I wonder if it is really worthwhile to learn how to work around the limitations of the currently bad version?
Like, I get that we don’t want to become buggywhip manufacturers. But I also don’t want to specialize in hand-cranking cars and making sure the oil is topped off in my headlights…
blooalien
If you want to play with "this A.I. stuff" and you have a half-way modern-ish graphics card in your PC or laptop (or even a somewhat modern-ish phone), there are a fair few ways to install and run smaller(ish) models locally on your own hardware, and a host of fancy graphical interfaces to them. I personally use ChatBox and Ollama with the Qwen2.5 models, IBM's Granite series models, and the Gemma models fairly successfully on a reasonably decent (couple years old now) consumer-class gaming rig with an NVIDIA GeForce GTX 1660 Ti and 64 GB of RAM. There are also code editors like Zed or VSCode that can connect to Ollama and other local model runners, and if you wanna get really "nerdy" about it, you can pretty easily interface with all that fun junk from Python and script up your own interfaces and tools.
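For the curious, scripting against a local Ollama server really is just a few lines of Python. A minimal sketch, assuming Ollama's documented `/api/generate` HTTP endpoint on its default port 11434 (the model tag is whatever you've pulled locally, e.g. `qwen2.5`):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_request(model: str, prompt: str) -> dict:
    # Non-streaming request body for Ollama's /api/generate API
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    # POST the JSON body and return the model's text from the "response" field
    body = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Usage (requires a running Ollama server with the model pulled):
#   generate("qwen2.5", "Write a docstring for a function that reverses a list.")
```

Swap the model tag for anything shown by `ollama list`; the same two functions work for all of them.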
AstralStorm
Except your toy model, or toy version of the model, will barely work to talk to you, much less write code. I've done this experiment with a much beefier set of GPUs (3080 10 GB + 3060 12 GB), allowing me to run models one step up in size.
It's not even comparable to free tiers. I have no idea how big the machines or clusters running that are, but they must be huge.
I was very unimpressed with the local models I could run.
blooalien
I dunno. Maybe you're expecting too much of them? They're obviously not gonna be like those massive data-center LLMs, but I've had some pretty good "brainstorming" sessions about code and documentation with Qwen and Gemma. The latest vision-capable Qwen models do a really decent job of extracting data and text out of images (often more accurately than typical OCR engines), and even of describing or captioning images. For writing code, I've had good success asking for docstrings and type hints, common "boilerplate" code, and even some pretty solid function requests that often come out exactly as I expected (as long as the model is fed good "context" to work with, like some basic code-style rules and access to library documentation and the existing codebase).
bee_rider
This seems like becoming an expert at hand-cranking engines or writing your own Forth compiler back when compilers were just getting started.
My point of view, I guess, is that we might want to wait until the field is developed to the point where chauffeurs or C programmers (in this analogy) become a thing.
grogenaut
It's rapidly evolving. If you were to master just one explicit revision of, say, Cursor, then I think your analogy would be correct. But to me it's more like: keep abreast of the new things, try new stuff, don't settle on a solution, let the wave push you forward. Don't be the coder who doesn't turn their camera on during meetings, coding away in a corner for another year or two before trying AI tools because "they're bad".
But this is the same for any tech that will span the industry. You have people who want to stay in their ways and those who are curious and moving forward, and at various times in people's careers they may be one or the other. This is one of those changes where I don't think you get the option to defer.
cjalmeida
Definitely don’t dismiss it. While there are limitations, it’s already very capable for a number of tasks. Tweaking it to be more effective is skill itself.
blooalien
> Tweaking it to be more effective is skill itself.
^^^ This is actually one of the currently "in-demand" skills in "The Industry" right now... ;)
abletonlive
It is worth it and anybody that hasn’t spent the past month using it has nothing useful to say or contribute, their knowledge about what these tools are capable of is already outdated.
I would bet about 90% of the people commenting how useless llms are for their job are people that installed copilot and tried it out for a few days at some point not in the last month. They haven’t even come close to exploring the ecosystem that’s being built up right now by the community.
bee_rider
The issue (if you aren’t interested in AI in and of itself, but just in the final version of the tool it will produce) is that the AI companies advertise that they’ve come up with an incredible new version that will obsolete any skills you learned working around the previous version’s limitations, on like a monthly basis.
Like, you say these folks have clearly tried the tool out too long ago… but, I mean, at the time they tried it out they could have found other comments just like yours, advertising the fact that now the tool really is revolutionary right now. So, we can see where the skepticism comes from, right?
abletonlive
if you're someone in tech and can't separate the noise from the practicality i'm not sure what to say at this point. do you drop everything you're doing every time a saas product comes out and change your entire life, or do you go play with it and figure out if it's useful for your life? seems like one should be able to determine this for themself without resorting to taking everything that marketing says at face value
ironmagma
Over-hiring may have been part of the issue, but a large part is also Section 174.
tocs3
The tax code Section 174 stuff sounds like a likely case for reducing the number of developers on staff.
mrcsharp
The really sad thing here is that Stephen Toub is a big figure in .NET land and a respected dev. I love reading his yearly .NET performance improvements article/book, and he has some great deep dives into .NET topics on YouTube.
Seeing him waste his time on stupid LLM handholding like this is quite the sad scene.
It probably wasn't his decision to use CoPilot. Microsoft probably has some dumb KPI being enforced.
amarant
I feel like this whole "but sure, let's fire the engineers" part is just anti-AI narrative. Like, is anyone actually firing engineers because they have AI now? Does anyone know anyone who's lost their job as an SWE due to being replaced by AI?
I sure don't, and no one I've asked so far knows anyone who's had that happen to them either.
I want to say it's a sign of the times to try and make technology a political issue, but my understanding is the same thing happened during the industrial revolution, so maybe it's just human nature?
Well the industrial revolution didn't make us all homeless beggars, so I doubt AI will, either.
aerhardt
I have literally seen dozens, if not hundreds of people claim here, on Reddit and LinkedIn things like "we have downsized our engineering team from 50 to 5 engineers", "I produce 5x the amount of code", etc. A few tech CEOs - both from frontier labs and traditional software companies - are also claiming software engineers are a thing of the past.
I think these two audiences - tech CEOs and random anons on the internet - tend to be full of shit and I give very little credence to what they say, but to your point, some people are at least claiming things of the sort.
geraneum
> from frontier labs
The target market of these companies is software shops. You don't see them advertising the models' capabilities in, say, civil engineering much. They are trying specifically to sell a cheaper replacement for software engineers to those CEOs. You have to read the news and announcements with this lens:
A vacuum cleaner manufacturer claims that the dirty, dusty house is a thing of the past. Of course they say that.
aerhardt
A similar thing could be said about traditional tech CEOs like Benioff - that it's part of their fiduciary duty to stakeholders to say things like that. I don't disagree with you; I'm merely saying it's the nature of the game, and that people are definitely claiming that there is an imminent (or even ongoing) firing of engineers.
softwaredoug
Well, at least companies use AI as a reason for layoffs / not hiring. Though there may be other reasons that the AI talk is masking.
Generally what actually seems to be happening is companies want to focus on AI so they close non-AI parts of their business and beef up the AI parts. Which isn’t the same as “replacing engineers with AI”
cjbgkagh
Perception precedes reality, and being a dev was already a low-status job; consider a top-tier FAANG engineer vs. a white-shoe lawyer. With AI, that status has dropped further. Even if AI only takes the very junior roles, there will be increased competition for the more senior roles, which will again drive down prestige and salary.
Additionally, I'm not hiring anymore, which is kind of the same thing as firing. I did have a few roles open for junior assistant types, but those are now filled by using AI. AI helps improve my productivity without the added overhead of dealing with staff/contractors. I even hired junior devs and put them on full-time study mode to try to skill them up to be useful; one guy was in that mode for 6 months before I had to fire him for learning too slowly. Technically, LLMs did learn faster than he did, and it was his full-time job. It's easier for me to communicate with the AI, especially with the quick responses and it being always available.
I figure the AI will eventually get to my level and eat my lunch but hopefully there are a few years before that happens.
And designers, OMG AI is far easier to deal with and produces far better results.
bgwalter
But why don't you steal directly from GitHub, it will make you more "productive", too?
cjbgkagh
I need code that works within the specialized context of my project, I couldn't steal code from GitHub even if I wanted to.
djhn
What kind of design work is AI even capable of? I’m not saying the output isn’t occasionally very impressive (as a toy), I’m saying it’s utterly useless in practice.
Coming up with a spec or design system? I’m not sure what an AI-generated design could possibly be based on and how it would get the necessary inputs from stakeholders other than… in the form of a spec, formulated by someone collating those requirements.
Creating assets based on a spec or design system? Again, I have yet to see any output that can stay internally consistent. Frontend code might even work, despite being a mishmash of half a dozen flavours of react with two or three different vintages of tailwind. But good luck with native code or anything beyond basic CRUD forms.
I recently tried to get claude and gemini to scaffold a tailwind component library from a pretty comprehensive design document. It felt like getting a roomba to swim across a lake.
cjbgkagh
Sorry by designers I mean UI assets. Art and iconography.
I recently needed a transpiler from a mini-DSL to Lua. Google Gemini did a pretty good job of it; it hallucinated a bunch of nonexistent helper functions, but they were easy enough to add. It knew the target language, Lua, better than I did, so it had some good ideas for simple code generation. I naively would have generated more natural-looking code at the cost of more effort. Once I fixed the bugs, it worked fine. I estimate it would have taken me 4x as long without it.
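The overall shape of such a transpiler is simple enough to sketch. This uses a hypothetical two-statement mini-DSL invented for illustration (not the commenter's actual DSL), translated line by line to Lua:

```python
import re

# Hypothetical mini-DSL with two statement forms:
#   let <name> = <expr>   ->  local <name> = <expr>
#   emit <expr>           ->  print(<expr>)
def transpile(source: str) -> str:
    out = []
    for lineno, line in enumerate(source.strip().splitlines(), 1):
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and DSL comments
        m = re.match(r"let (\w+) = (.+)", line)
        if m:
            out.append(f"local {m.group(1)} = {m.group(2)}")
            continue
        m = re.match(r"emit (.+)", line)
        if m:
            out.append(f"print({m.group(1)})")
            continue
        raise SyntaxError(f"line {lineno}: unrecognized statement: {line!r}")
    return "\n".join(out)


# transpile("let x = 1 + 2\nemit x")  ->  "local x = 1 + 2\nprint(x)"
```

A real DSL would want a proper parser rather than per-line regexes, but the skeleton — match a statement form, emit the Lua equivalent, fail loudly on anything unrecognized — is the part an LLM tends to get right quickly.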
marcosdumay
> Like, is anyone actually firing engineers because they have AI now?
Look no further than the Microsoft layoff they announced earlier this week.
Of course, you are free to doubt the honesty of the people making those announcements.
deelowe
Literally the org I was formerly in.
codingdave
I know many people who have lost their jobs after the leadership wanted to go "all-in" on AI. I also know of multiple leadership teams who have been fired en masse after pushing for that direction and the organization failing to deliver.
But if you are asking if I know of an organization that was successful after firing their people... Nope, I don't know any of those.
rvz
> Microsoft’s .NET runtime,
That's why: it's not a web app. (JavaScript is the most used language on the internet.)
This sort of software (a language runtime) HAS to be correct, with no room for clumsiness.
Why do you think almost every AI copilot "demo" is on typical web apps and not on the Linux kernel or on the PyTorch compiler?
It would ruin the "AGI" coding narrative.
The AI boosters need to show that the hype train isn't slowing down. Tell it to replace the Linux kernel developers and watch it struggle in real time.
Related:
Watching AI drive Microsoft employees insane https://news.ycombinator.com/item?id=44050152 21-may-2025 544 comments