The ‘white-collar bloodbath’ is all part of the AI hype machine
1181 comments · May 30, 2025

simonsarris
rglover
IMO this is dead on. AI is a hell of a scapegoat for companies that want to save face and pretend that their success wasn't because of cheap money being pumped into them. And in a world addicted to status games, that's a gift from the heavens.
esperent
ZIRP is an American thing? In that case maybe we could try comparisons with the job markets in other developed Western countries that didn't have this policy. If it was because of ZIRP, then their job markets should show clearly different patterns.
Armisael16
There isn’t anything magical about precisely zero percent interest rates; the behavior we see is mostly a smooth extension of slightly higher rates, which the EU was at.
And of course ZIRP was pioneered in Japan, not the US.
perrygeo
Such an important point. I've long seen and suspected the end of ZIRP being a much greater influence on white-collar work than most people realize. AI is going to take all the negative press, but the flow of capital ultimately determines how the business works, which determines what software gets built. Conway's law 101. The white-collar bloodbath is more of a haircut to shed waste accumulated during the excesses of ZIRP.
hoosieree
AI also happens to be a perfect scapegoat: CEOs who over-hired get to shift the blame to this faceless boogeyman, and (bonus!) new hires are more desperate/willing to accept worse compensation.
steveBK123
ZIRP, and then the final gasp of COVID-bubble overhiring.
At least in my professional circles the number of late 2020-mid 2022 job switchers was immense. Like 10 years of switches condensed into 18-24 months.
Further, lots of experiences and anecdotes from people who saw their company/org/team double or triple in size compared to 2019.
Despite some waves of Mag7 layoffs, I think we are still digesting what was essentially an overhiring bubble.
steve_adams_86
Is it negative press for AI, or is it convincing some investors that it’s actually causing a tectonic shift in the workforce and economy? It could be positive in some sense. Though ultimately negative, because the outcomes are unlikely to reflect a continuation of the perceived impact or imaginary progress of the technology.
e40
Also section 174’s amortization of software development had a big role.
Lu2025
I agree, the R&D tax change is what triggered the 2022 tech layoffs. Coders used to be effectively free to expense; all this play with the Metaverse and such was on the public dime. As soon as a company had to spend real money, it all came crashing down.
rbultje
This is a weird take. Employees are supposed to be business expenses; that's the core idea of running a business: profit = revenue - expenses, where expenses are personnel/materials, and you pay taxes on profit. Since the R&D change, businesses can't fully expense software employees in the year they're paid and effectively pay (business) taxes on a portion of their salaries. Employees, of course, still pay personal taxes as well (as was always the case).
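The mechanics can be sketched with toy numbers (a simplified illustration: real Section 174 rules use a mid-year convention in year one and 15-year amortization for foreign R&D, so actual figures differ):

```python
# Toy illustration of the Section 174 change (simplified; ignores the
# mid-year convention and many other details -- numbers are made up).
revenue = 1_000_000
dev_salaries = 900_000
tax_rate = 0.21

# Before 2022: developer salaries fully deductible in the year paid.
tax_before = (revenue - dev_salaries) * tax_rate        # ~21,000

# After: domestic software R&D amortized over 5 years, so only 1/5
# of salaries is deductible in year one.
tax_after = (revenue - dev_salaries / 5) * tax_rate     # ~172,200

print(round(tax_before), round(tax_after))
```

Same payroll, roughly 8x the year-one tax bill, which is why the change hit salary-heavy software shops so hard.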
jameslk
Keynes suggested that by 2030, we’d be working 15-hour workweeks, with the rest of the time used for leisure. Instead, we chose consumption, and helicopter money gave us bullshit jobs so we could keep buying more bullshit. This is fairly evident from the fact that when the helicopter money runs out, all the bullshit jobs get cut.
AI may give us more efficiency, but it will be filled with more bullshit jobs and consumption, not more leisure.
autobodie
Keynes lived in a time when the working class was organized and exerting its power over its destiny.
We live in a time that the working class is unbelievably brainwashed and manipulated.
pif
> Keynes lived in a time when the working class ...
Keynes lived in a time when the working class could not buy cheap from China... and complain that everybody else was doing the same!
kergonath
He was extrapolating, as well. Going from children in the mines to the welfare state in a generation was quite something. Unfortunately, progress slowed down significantly for many reasons but I don’t think we should really blame Keynes for this.
> We live in a time that the working class is unbelievably brainwashed and manipulated.
I think it has always been that way. Looking through history, there are many examples of turkeys voting for Christmas and propaganda is an old invention. I don’t think there is anything special right now. And to be fair to the working class, it’s not hard to see how they could feel abandoned. It’s also broader than the working class. The middle class is getting squeezed as well. The only winners are the oligarchs.
eastbound
It is very possible that foreign powers use AI to generate social media content en masse for propaganda. If anything, the internet up to 2015 seemed open for discussion and swaying by real people’s opinions (and mockery of the elite classes), while manipulation and manufactured consent became the norm after 2017.
hoseyor
He also lived in a time when the intense importance and function of a moral and cultural framework for society was taken for granted. He would have never imagined the level of social and moral degeneration of today.
I will not go into specifics because the authoritarians still disagree and think everything is fine with degenerative debauchery and try to abuse anyone even just pointing to failing systems, but it all does seem like civilization ending developments regardless of whether it leads to the rise of another civilization, e.g., the Asian Era, i.e., China, India, Russia, Japan, et al.
Ironically, I don’t see the US surviving this transitional phase, especially considering it essentially does not even really exist anymore at its core. Would any of the founders of America approve of any of America today? The forefathers of India, China, Russia, and maybe Japan would clearly approve of their countries and cultures. America is a hollowed out husk with a facade of red, white, and blue pomp and circumstance that is even fading, where America means both everything and nothing as a manipulative slogan to enrich the few, a massive private equity raid on America.
When you think of the Asian countries, you also think of distinct and unique cultures that all have their advantages and disadvantages, the true differences that make them true diversity that makes humanity so wonderful. In America you have none of that. You have a decimated culture that is jumbled with all kinds of muddled and polluted cultures from all over the place, all equally confused and bewildered about what they are and why they feel so lost only chasing dollars and shiny objects to further enrich the ever smaller group of con artist psychopathic narcissists at the top, a kind of worst form of aristocracy that humanity has yet ever produced, lacking any kind of sense of noblesse oblige, which does not even extend to simply not betraying your own people.
1776smithadam
Keynes didn't anticipate social media
ccorcos
If you work 15 hours/week, then presumably someone who chooses to work 45 hours/week would make 3x as much money.
This creates supply-demand pressure for goods and services. Anything with limited supply such as living in the nice part of town will price out anyone working 15 hours/week.
And so society finds an equilibrium…
jjk166
Presumably the reduction to a 15 hour workweek would be much the same as the reduction to the 40 hour workweek - everyone takes the same reduction in total hours and increase in hourly compensation encoded in labor laws specifically so there isn't this tragedy of the commons.
tim333
I think something Keynes got wrong there, and which much AI job discussion ignores, is that people like working, subject to the job being fun. Look at the richest people with no need to work - Musk, Buffett, etc. Still working away, often well past retirement age, with no need for the money. Keynes himself, wealthy and probably tenured, kept working away on his theories. In the UK you can quite easily do nothing by going on disability allowance, and many do, but they are not happy.
There can be a certain snobbishness with academics where they are like of course I enjoy working away on my theories of employment but the unwashed masses do crap jobs where they'd rather sit on their arses watching reality TV. But it isn't really like that. Usually.
trinix912
The reality for most people is that they need to work to financially sustain themselves. Yes, there are people who just like what they do and work regardless, but I think we shouldn't discount the majority, who would drop their jobs or at least work fewer hours were it not for the need for money.
timacles
What percentage of people would you say like working for fun? Would you really claim they make up a significant portion of society?
Even I, working a job that I enjoy, building things I’m good at, almost stress-free, find after 10-15 years that I would much rather spend time with my family, or even spend a day doing nothing, than spend another hour doing work for other people. The work never stops coming, and the meaninglessness is stronger than ever.
navane
Meanwhile your examples for happy working are all billionaires who do w/e tf they want, and your example of sad non working are disabled people.
Slow_Hand
Not to undercut your point - because you’re largely correct - but this is my reality. I have a decent-paying job in which I work roughly 15 hrs a week. Sometimes more when work scales up.
That said, I’m not what you’d call a high-earning person (I earn < 100k). I simply live within my means and do my best to curb lifestyle creep. In this way, Keynes’ vision is a reality, but it’s a mindset, and we also have to know when enough wealth is enough.
oblio
You're lucky. Most companies don't accept that. Frequently, even when they have part time arrangements, the incentives are such that middle managers are incentivized to squeeze you (including squeezing you out), despite company policies and HR mandates.
seydor
Most people are leisuring at work (by Keynes-era standards) and also getting paid for it.
WorkerBee28474
> Keynes suggested that by 2030, we’d be working 15 hour workweeks
Yeah, I'd say I get up to 15 hours of work done in a 40 hour workweek.
JumpCrisscross
> Keynes suggested that by 2030, we’d be working 15 hour workweeks
Most people with a modest retirement account could retire in their forties to a 15-hour workweek somewhere in rural America.
steveBK123
The trade is that you need to live in a VHCOL city to earn enough and sustain a high savings rate, while avoiding spending it all on VHCOL real estate.
And then after living at the center of everything for 15-20 years be mentally prepared to move to “nowhere”, possibly before your kids head off to college.
Most cannot meet all those conditions and end up on the hedonic treadmill.
raincom
Now one has to work 60 hours to afford housing (rent/mortgage) and insurance (health, home, automotive). Yes, food is cheap, if one can cook.
digitcatphd
As of now yes. But we are still in day 0.1 of GenAI. Do you think this will be the case when o3 models are 10x better and 100x cheaper? There will be a turning point but it’s not happened yet.
godelski
Yet we're what? 5 years into "AI will replace programmers in 6 months"?
10 years into "we'll have self driving cars next year"
We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"
Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?
hn_throwaway_99
I basically agree with you, and I think the thing that is missing from a bunch of responses that disagree is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get from just stuffing more training data into these models, but like someone who argues a 15 year old will be 50 feet tall based on the last 5 years' growth rate, people who are still arguing that past growth rates will continue apace don't seem to be honest (or aware) to me.
I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.
jjani
> Yet we're what? 5 years into "AI will replace programmers in 6 months"?
Realistically, we're 2.5 years into it at most.
tim333
Four years into people mocking "we'll have self driving cars next year" while they are on the street daily driving around SF.
roenxi
As far as I've seen we appear to already have self driving vehicles, the main barriers are legal and regulatory concerns rather than the tech. If a company wanted to put a car on the road that beetles around by itself there aren't any crazy technical challenges to doing that - the issue is even if it was safer than a human driver the company would have a lot of liability problems.
hansmayer
100% this. I always argue that groundbreaking technologies are clearly groundbreaking from the start. It is almost a bit like a film, if you have to struggle to get into it in the first few minutes, you may as well spare yourself watching the rest.
tsunamifury
[flagged]
gardenhedge
Over ten years for the "we'll have self-driving cars" spiel.
directevolve
We’re already heading toward the sigmoid plateau. The GPT-3 to 4 shift was massive. Nothing since has touched that. I could easily go back to the models I was using 1-2 years ago with little impact on my work.
I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.
threeseed
> I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved
It really hasn't.
The problem is that a GenAI system needs to not only understand the large codebase but also the latest stable version of every transitive dependency it depends on. Which is typically in the order of hundreds or thousands.
Having it build a component with 10 year old, deprecated, CVE-riddled libraries is of limited use especially when libraries tend to be upgraded in interconnected waves. And so that component will likely not even work anyway.
I was assured that MCP was going to solve all of this but nope.
chrsw
> I could easily go back to the models I was using 1-2 years ago with little impact on my work.
I can't. GPT-4 was useless for me for software development. Claude 4 is not.
xnx
> 5 years into "AI will replace programmers in 6 months"?
Programmers that don't use AI will get replaced by those that do (not just by mandate, but by performance).
> 10 years into "we'll have self driving cars next year"
They're here now. Waymo does 250K paid rides/week.
nothercastle
I think they will be 10-100x cheaper; I'd be really surprised if we even doubled the quality, though.
threeseed
How are we in 0.1 of GenAI ? It's been developed for nearly a decade now.
And each successive model that has been released has done nothing to fundamentally change the use cases that the technology can be applied to i.e. those which are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.
So you can keep your 10x better and 100x cheaper models because they are of limited usefulness let alone being a turning point for anything.
Flemlo
A decade?
The explosion of funding, awareness, etc. only happened after the GPT-3 launch.
makeitdouble
How does it work if they get 10x better in 10 years ? Everything else will have already moved on and the actual technology shift will come from elsewhere.
Basically, what if GenAI is the Minitel and what we want is the internet.
nradov
10× better by what metric? Progress on LLMs has been amazing but already appears to be slowing down.
jaggederest
All these folks are once again seeing the first 1/4 of a sigmoid curve and extrapolating to infinity.
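That point can be made concrete: early in a logistic (sigmoid) curve, the values are nearly indistinguishable from pure exponential growth, so extrapolating from the early data alone can't tell you a ceiling is coming. A minimal sketch:

```python
import math

def logistic(t):
    """Logistic (sigmoid) curve with capacity 1.0 and midpoint at t = 0."""
    return 1 / (1 + math.exp(-t))

# Well before the midpoint, logistic(t) ~= exp(t): the curve looks like
# runaway exponential growth even though it will flatten out at 1.0.
for t in (-6, -5, -4):
    ratio = logistic(t) / math.exp(t)
    print(t, round(ratio, 3))   # ratios stay close to 1.0
```

Only data from near or past the inflection point distinguishes the two curves, which is exactly the data nobody has yet.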
elif
With autonomous vehicles, the narrative of imperceptibly slow incremental change, of chasing 9's, is still the zeitgeist, despite vehicles that are roughly 10x less likely than human drivers to kill someone already existing.
There is a lag in how humans are reacting to AI which is probably a reflexive aspect of human nature. There are so many strategies being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.
ricardobayes
Frankly, we don't know. That "turning point" that seemed so close for many tech, never came for some of them. Think 3D-printing that was supposed to take over manufacturing. Or self-driving, that is "just around the corner" for a decade now. And still is probably a decade away. Only time will tell if GenAI/LLMs are color TV or 3D TV.
kergonath
> Think 3D-printing that was supposed to take over manufacturing.
3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.
Ray20
>Think 3D-printing that was supposed to take over manufacturing
This was never the case, and this is obvious to anyone who has ever been to a factory doing mass-produced plastics.
>Or self-driving, that is "just around the corner" for a decade now.
But it really is around the corner; all that remains is to accept it. That is, to start building and modifying road infrastructure and changing traffic rules to enable effective integration of self-driving cars into road traffic.
hombre_fatal
Why would you interpret data cut off at 2020 so that you're just looking at a covid phenomenon? The buttons don't seem to do anything on that site, but why not consider 2010-2025?
That said, the vibe has definitely shifted. I started working in software in uni ~2009 and every job I've had, I'd applied for <10 positions and got a couple offers. Now, I barely get responses despite 10x the skills and experience I had back then.
Though I don't think AI has anything to do with it, probably more the explosion of cheap software labor on the global market, and you have to compete with the whole world for a job in your own city.
Kinda feels like some major part of the gravy train is up.
lbotos
It looks like that specific graph only starts in 2020...
hombre_fatal
Why not just find one that starts in 2022 then. It would look even more dire.
niuzeta
FRED continues to amaze me with the kind of data they have available.
brfox
That's from Indeed. And, Indeed has fewer job postings overall [https://fred.stlouisfed.org/series/IHLIDXUS]. Should we normalize the software jobs with the total number of Indeed postings? Is Indeed getting less popular or more popular over this time period? Data is complicated
simonsarris
Look at that graph again. It's indexed to 100 in Feb 1, 2020. It's now at 106. In other words, after all the pandemic madness, the total number of job postings on indeed is slightly larger than it was before, not smaller.
But for software, it's a lot smaller.
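The normalization suggested upthread is just one index divided by the other. With the 106 figure for total postings and an assumed (not actual) value for the software series, the math looks like:

```python
# Both series are indexed to 100 at Feb 1, 2020.
total_index = 106      # total Indeed postings, figure cited above
software_index = 65    # software postings -- an assumed value for illustration

# Software postings relative to overall Indeed activity (Feb 2020 = 100).
relative = software_index / total_index * 100
print(round(relative))  # ~61: software is down even after normalizing
```

Dividing out the total controls for Indeed's overall popularity; the software decline survives the adjustment under these assumed numbers.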
dxxmxnd
This website has its own graph which looks different.
https://www.trueup.io/job-trend
I have never gone to Indeed to apply for a job.
xorcist
> because he simply thought he could run a lot leaner
Because he suddenly had to pay interest on that gigantic loan he (and his business associates) took to buy Twitter.
It may not be the only reason for everything that happened, but it sure is simple and has some very good explanatory powers.
huntertwo
Other companies have different reasons to cut costs, but the incentive is still there.
xorcist
Stocks are valued against the risk free interest, or so the saying goes.
Doubling interest rate from .1% to .2% does a lot for your DCF models, and in this case we went from zero (or in some cases negative) to several percentage units. Of course stock prices tanked. That's what any schoolbook will tell you, and that's what any investor will expect.
Companies thus have to start turning dials and adjust parameters to make number go up again.
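How sensitive a DCF is to the discount rate is easy to show with toy numbers (illustrative only: a flat cash flow, no terminal value, not any real company's model):

```python
# Present value of a flat $100/year cash flow over 30 years.
def present_value(cash_flow, rate, years):
    """Discount each year's cash flow back to today and sum."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

pv_zirp = present_value(100, 0.01, 30)    # near-zero rates: ~2581
pv_hiked = present_value(100, 0.05, 30)   # after rate hikes: ~1537

# Same cash flows, roughly 40% less value -- purely from the discount rate.
print(round(pv_zirp), round(pv_hiked))
```

Nothing about the business changed in this sketch; only the denominator did, which is the point about stock prices tanking.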
tdeck
Maybe someone can help me wrap my head around this in a different way, because here's how I see it.
If these tools are really making people so productive, shouldn't it be painfully obvious in companies' output? For example, if these AI coding tools were an amazing productivity boost in the end, we'd expect to see software companies shipping features and fixes faster than ever before. There would be a huge burst in innovative products and improvements to existing products. And we'd expect that to be in a way that would be obvious to customers and users, not just in the form of some blog post or earnings call.
For cost center work, this would lead to layoffs right away, sure. But companies that make and sell software should be capitalizing on this, and only laying people off when they get to the point of "we just don't know what to do with all this extra productivity, we're all out of ideas!". I haven't seen one single company in this situation. So that makes me think that these decisions are hype-driven short term thinking.
acrooks
I wonder if some of this output will take a while to be visible en masse.
For example, I founded a SaaS company late last year which has been growing very quickly. We are on track to pass $1M ARR before the company's first birthday. We are fully bootstrapped, 100% founder-owned. There are 2 of us. And we feel confident we could keep up this pace of growth for quite a while without hiring or taking capital. (Of course, there's an argument that we could accelerate our growth rate with more cash/human resources.)
Early in my career, at different companies, we often solved capacity problems by hiring. But my cofounder and I have been able to turn to AI to help with this, and we keep finding double digit percentage productivity improvements without investing much upfront time. I don't think this would have been remotely possible when I started my career, or even just a few years ago when AI hadn't really started to take off.
So my theory as to why it doesn't appear to be "painfully obvious": you've never heard of most of the businesses getting the most value out of this technology, because they're all too small. On average, the companies we know about are large. It's very difficult for them to reinvent themselves on a dime to adapt to new technology - it takes a long time to steer a ship - so it will take a while. But small businesses like mine can change how we work today and realize the results tomorrow.
AndrewKemendo
This is exactly how it’s going down
Companies that needed to hire 10 people to grow, only need to hire 9 now
In less than 5 years that’s going to be 7 or 6 people
I’m doing more with 5 engineers than I was able to do with 15 just 10 years ago
Part of that is libraries etc have matured too but we’ve reached the point from a developer perspective that you don’t need to build new technologies, you just need to put what exists together in new ways
All the parts exist for any technology to be built, it’s about composition and distribution at this point
mixmastamyk
Curious, if you don’t mind mentioning what AIs you’re using (besides the obvious Claude, etc) and what for to augment your reach?
topspin
"shouldn't it be painfully obvious in companies' output?"
No.
The bottleneck isn't intellectual productivity. The bottleneck is a legion of other things; regulation, IP law, marketing, etc. The executive email writers and meeting attenders have a swarm of business considerations ricocheting around in their heads in eternal battle with each other. It takes a lot of supposedly brilliant thinking to safely monetize all the things, and many of the factors involved are not manifest in written form anywhere, often for legal reasons.
One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields. Another is art "creatives": graphic artists in particular. They're early victims and likely to be fully supplanted in the near future. A little further on and it'll be writers, actors, etc.
ImaCake
Maybe this means that LLMs are ultimately good for small business. If large businesses are constrained by being large, and LLMs are equally accessible to 5 people or 100, then surely what we will see is increased productivity in small companies?
topspin
My direct experience has been that even very small tech businesses contend with IP issues as well. And they don't have the means to either risk or deliberately instigate a fight.
throwaway2037
> One place where AI is being disruptive is research: where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Great point. The perfect example (from Wikipedia):
> In 2024, Hassabis and John M. Jumper were jointly awarded the Nobel Prize in Chemistry for their AI research contributions for protein structure prediction.
AFAIK they are talking about DeepMind AlphaFold. Related (also from Wikipedia):
> Isomorphic Labs Limited is a London-based company which uses artificial intelligence for drug discovery. Isomorphic Labs was founded by Demis Hassabis, who is the CEO.
SirHumphrey
I think AlphaFold is where current AI terminology starts breaking down. Because in some real sense, AlphaFold is primarily a statistical model - yes, it's interesting that they developed it using ML techniques, but from the use standpoint it's little different than perturbation based black boxes that were used before that for 20 years.
Yes, it's an example of ML used in science (other examples include NN based force fields for molecule dynamics simulations and meteorological models) - but a biologist or meteorologist usually cares little how the software package they are using works (excluding the knowledge of different limitation of numerical vs statistical models).
The whole thing "but look AI in science" seem to me like Motte-and-bailey argument to imply the use of AGI-like MLLM agents that perform independent research - currently a much less successful approach.
bawolff
Even still, in theory this should free up more money to hire more lawyers, marketers, etc. The effect should still be there, presuming the market isn't saturated with new ideas.
xkcd1963
Something else will get expensive in the meantime, e.g. it doesn't matter how much you earn, landlords will always increase rent to the limit because a living space is a basic necessity
SteveNuts
>A little further on and it'll be writers, actors, etc.
The tech is going to have to be absolutely flawless; otherwise the uncanny-valley nature of AI "actors" in a movie will be as annoying as when the audio and video aren't perfectly synced in a stream. At least that's how I see it.
Izkata
This was made a little over a week ago: https://www.reddit.com/r/IndiaTech/comments/1ksjcsr/this_vid...
For most of them I'm not seeing any of those issues.
kevinventullo
It does not need to be flawless. It needs to be good enough to put butts in seats.
csomar
> where researchers are applying models in novel ways and making legitimate advances in math, medicine and other fields.
Can you give an example, say in Medicine, where AI made a significant advancement? That is we are talking neural networks and up (ie: LLM) and not some local optimization.
pkroll
https://arxiv.org/abs/2412.10849
"Our study suggests that LLMs have achieved superhuman performance on general medical diagnostic and management reasoning"
pera
Bullshit: Chatbots are not failing to demonstrate a tangible increase in companies' output because of regulations and IP law, they are failing because they are still not good for the job.
LLMs only exist because the companies developing them are so ridiculously powerful that they can completely ignore the rule of law, or if necessary even change it (as they are currently trying to do here in Europe).
Remember we are talking about a technology created by torrenting 82 TB of pirated books, and that's just one single example.
"Steal all the users, steal all the music" and then lawyer up, as Eric Schmidt said at Stanford a few months ago.
hoosieree
And currently trying in the US too: https://apnews.com/article/ai-regulation-state-moratorium-co...
They want to ban states from imposing their own regulations on AI.
throwawayffffas
The "legion of other things" you mention are actually things LLMs do better than intellectual production. They can spew entire libraries of marketing BS, summarize decades of legal precedents, and fill out mountains of red-tape checklists.
They have trouble with debugging obvious bugs though.
Teever
Maybe in some industries and for some companies and their products but not all.
Like let's take operating systems as an example. If there are great productivity gains from LLMs while aren't companies like Apple, Google and MS shipping operating systems with vastly less bugs and cleaning up backlogged user feature requests?
throwaway2037
Regarding the impact of LLMs on non-programming tasks, check out this one:
https://www.ft.com/content/4f20fbb9-a10f-4a08-9a13-efa1b55dd...
> The bank [Goldman Sachs] now has 11,000 engineers among its 46,000 employees, according to [CEO David] Solomon, and is using AI to help draft public filing documents.
> The work of drafting an S1 — the initial registration prospectus for an IPO — might have taken a six-person team two weeks to complete, but it can now be 95 per cent done by AI in minutes, said Solomon.
> “The last 5 per cent now matters because the rest is now a commodity,” he said.
In my eyes, that is major. Junior ibankers are not cheap -- they make about 150K USD per year minimum (total comp).
fourside
This is certainly interesting and I don’t want to readily dismiss it, but I sometimes question how reliable these CEO anecdotes are. There’s a lot of pressure to show Wallstreet that you’re at the forefront of the AI revolution. It doesn’t mean no company is achieving great results but that it’s hard to separate the real anecdotes from the hype.
asadotzler
Claims by companies with an interest in AI without supporting documentation are just that, claims, and probably more PR and marketing than anything.
zkry
I find that this is on point. I've seen a lot of charts on the AI-hype side of things showing exponential growth of AI agent fleets being used for software development (starting in 2026 of course). Take this article for example: https://sourcegraph.com/blog/revenge-of-the-junior-developer
Ok, so by 2027 we should be having fleets of autonomous AI agents swarming around every bug report and solving it x times faster than a human. Cool, so I guess by 2028 buggy software will be a thing of the past (for those companies that fully adopt AI, of course). I'm so excited for a future where IT projects stop going over time and over budget and deliver more value than expected. Can you blame us for thinking this is too good to be true?
hombre_fatal
This is like asking if tariffs are so bad, why don't you notice large price swings in your local grocer right now?
In complex systems, you can't necessarily perceive the result of large internal changes, especially not with the tiny amount of vibes sampling you're basing this on.
You really don't have the pulse on how fast the average company is shipping new code changes, and I don't see why you think you would know that. Shipping new public end-use features isn't even a good signal, it's a downstream product and a small fraction of software written.
It's like thinking you are picking up a vibe related to changes in how many immigrants are coming into the country month to month when you walk around the mall.
jayd16
Maybe not a great analogy. The market reacted instantly and you can see prices fluctuate almost as fast as tariff policy.
bawolff
Realistically it's because layoffs have a high reputational cost. AI provides an excuse that lets companies do layoffs without suffering the reputational hit. In essence, AI hype makes layoffs cheaper.
Doesn't really matter if AI actually works or not.
zelphirkalt
I would dispute that there is no reputational cost when you replace human work with LLMs.
bawolff
Sure, I don't think it's none, just less.
It also matters a bit where the reputational cost hits. Layoffs can spook investors because they make it look like the company is doing poorly. If the reputation hit for AI is to non-investors, then it probably matters less.
casualscience
In big companies, this is a bit slower due to the need to migrate entrenched systems and org charts into newer workflows, but I think you are seeing more productivity there too. Where this is much more obvious is in indie games and software where small agile teams can adopt new ways of working quickly...
E.g. look at the indie games count on steam by year: https://steamdb.info/stats/releases/?tagid=492
kaibee
Has the number of games released with 95%+ reviews increased, though? And how much of that is due to the pandemic? It's anecdotal, but the game-dev Discord I'm in has had a decent reduction in the number of regulars since the tail end of the pandemic, 24-25. And ironically, I was one of them until recently. I think people actually just had more time.
casualscience
95% is a pretty arbitrary cutoff. But you can see for yourself that the number of games with 65%, 75%, 85%, and 90% positive reviews has increased similarly: https://steamdb.info/instantsearch/?refinementList%5Btags%5D...
bojan
The number of critically acclaimed games remains the same though. So for now we're getting quantity, but not the quality.
karagenit
What if the number of game critics just hasn’t increased, and since they can only play/review a fixed number of games each year due to time constraints, the number that they acclaim each year hasn’t grown? Not saying this is necessarily the case, just suggesting the possibility.
casualscience
source?
CMCDragonkai
It's because there are still bottlenecks. AI is definitely boosting productivity in specific areas, but total system output is bottlenecked. I think we will see these bottlenecks get rerouted or refactored in the coming years.
_heimdall
> AI is definitely boosting productivity in specific areas
What makes you so sure of the productivity boost when we aren't seeing a change in output?
tdeck
What do you think the main bottlenecks are right now?
CMCDragonkai
Informational complexity bottlenecks. So many things are shackled to human decision-making loops. If we were truly serious, we would unshackle everything and let it run wild. It would be chaotic, but chaos creates strange attractors.
kergonath
Quality control, for one. The state of commercial software is appalling. Writing code itself is not enough to produce a usable piece of software.
LLMs are also not very useful for long-term strategy or for coming up with novel features or combinations of features. They also are not great at maintaining existing code, particularly without comprehensive test suites. They are good at coming up with tests for boilerplate code, but not really for high-level features.
esperent
> It's because there are still bottlenecks
How do you know this? What are the bottlenecks?
financltravsty
[flagged]
jayd16
We'll take cheaper over faster, but is that the case? If it's not cheaper or faster, what is the point?
idkwhattocallme
I worked at two different $10B+ market cap companies during ZIRP. I recall in most meetings over half of the knowledge workers attending were superfluous. I mean, we hired someone on my team to attend cross functional meetings because our calendars were literally too full to attend. Why could we do that? Because the company was growing and hiring someone to attend meetings wasn't going to hurt the skyrocketing stock. Plus hiring someone gave my VP more headcount and therefore more clout. The market only valued company growth, not efficiency. But the market always capitulates to value (over time). When that happens all those overlay hires will get axed. Both companies have since laid off 10K+. AI was the scapegoat. But really, a lot of the knowledge worker jobs it "replaces" weren't providing real value anyway.
hn_throwaway_99
This is so true. We had an (admittedly derogatory) term we used during the rise in interest rates: "zero interest rate product managers". Don't get me wrong, I think great product managers are worth their weight in gold, but I encountered so many PMs during the ZIRP era who were essentially just Jira-updaters and meeting-schedulers. The vast majority of folks I see in tech who are having trouble getting hired now are people who were in those "adjacent" roles - think agile coaches, TPMs, etc. (I have a ton of sympathy for these folks - many of them worked hard for years and built their skills - but these roles were always somewhat "optional").
I'd also highlight that beyond over-hiring being responsible for the downturn in tech employment, I think offshoring is way more responsible than AI for the reduction in US tech jobs. Video conferencing tech didn't get really good and ubiquitous (especially for folks working from home) until the late teens, and since then I've seen an explosion of offshore contractors. With so many folks working remotely anyway, what does it matter if your coworker is in the same city or on a different continent, as long as there is at least some daily time overlap? (Which is also why I've seen a ton of offshoring to Latin America and Europe over places like India.)
catigula
Off-shoring is pretty big right now, but what shocks me is that when I walk around my company campus I see obscene numbers of people visibly and culturally from, mostly, India and China. The idea that massive swaths of this workforce couldn't possibly be filled by domestic grads is pretty hard to engage with. These are low-level business and accounting analyst positions.
Both sides of the aisle retreated from domestic labor protection for their own different reasons so the US labor force got clobbered.
ajmurmann
I am VERY pro-immigration. I do have concerns about the H1B program, though. IMO it's not great for either immigrant or non-immigrant workers, because it creates a class of workers for whom it's harder to change employers, which weakens their negotiating position. If this is the case for enough of the workforce, it artificially depresses wages for everyone. I want to see a reform that makes it much easier for H1B workers to change employers.
yobbo
> The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads
One theory is that the benefit they might be providing over domestic "grads" is lack of prerequisites for promotion above certain levels (language, cultural fit, and so on). For managers, this means the prestige of increased headcount without the various "burdens" of managing "careerists". For example, less plausible competition for career-ladder jobs which can then be reserved for favoured individuals. Just a theory.
spoaceman7777
It's also worth noting that it's almost entirely native born Americans that are pushing back against nepotism. Extreme nepotism is still the norm (an expectation even) in most South and East Asian cultures. And it's quite readily acknowledged if you speak to newer hires who haven't realized yet that it is best kept quiet.
It's a hard truth for many Americans to swallow, but it is the truth nonetheless.
Not to say there isn't an incredible amount of merit... but the historical impact of rampant nepotism in the US is widely acknowledged, and this newer manifestation should be acknowledged just the same.
lostlogin
> The idea that literally massive amounts of this workforce couldn't possibly be filled by domestic grads is pretty hard to engage with.
I hear this argument where I live for various reasons, but surely it only ever comes down to wages and/or conditions?
If the company paid a competitive rate (ie higher), locals would apply. Surely blaming a lack of local interest is rarely going to be due to anything other than pay or conditions?
therealpygon
My opinion is that off-shore roles are also going to be among the jobs most easily replaced, because many of them are highly standardized, with detailed instructions, due to the turnover these teams have. I wouldn't be surprised if the outsourcing companies are already working toward that end. They are definitely automating and/or able to collect significant training data from the various tools they require their employees to use for customers.
gedy
I was working at a SoCal company a couple years ago (where I’m from), and we had a lot of Chinese and Indian folks. I remember cracking up when one of the Indian fellows pulled me aside and asked me where I was from, because I sounded so different with my accent and lingo. He thought I was from some small European country, lol.
jayd16
I mean, aren't 3 out of 8 humans from India or China? If the company is big enough to appeal to a global applicant pool, it's a bit expected.
renewiltord
Any immigrants should read these threads carefully. If you're pro-union you're going to get screwed by your fellow man. Don't empower a union unless you want to be kicked out of the country.
According to these people, politicians like you here and labour doesn't. If that's true, do you want to empower labour to kick you out?
adamtaylor_13
I’m realizing that 100% of all product managers I have ever worked with were just ZIRP-PMs.
I have never once worked with a product manager who I could describe as “worth their weight in gold”.
Not saying they don’t exist, but they’re probably even rarer than you think.
cavisne
My theory for these PMs is that it's basically a cheap way to take potential entrepreneurs off the market. It's hard to predict whether a startup will succeed, but one genre of success is having a Type A "fake it till you make it" non-technical cofounder who can keep raising long enough to get product-market fit.
These types all go to the same schools and do really well, interview the same, and value the prestige of working in big tech. So it's pretty easy to identify them and offer them a great career path and take them off the market.
Technical founders are way trickier to identify as they can be dropouts, interview poorly, not value the prestige etc.
boogieknite
First job out of college, I was one of these PMs. Luckily I figured it out quickly and would spend maybe 2 hours a day working and 6 hours a day teaching myself to program. I can't believe that job existed and they gave it to me. One of my teammates was moved to HR, and he was distraught over how he actually had work to do.
icedchai
I worked at a small company with more PMs than developers. It was incredible how much bull it created.
aswegs8
How are TPMs optional? In my experience they provide more value than PMs that don't understand technology.
hn_throwaway_99
Perhaps the terminology differs between companies, but in my experience TPM means technical program manager. For large projects they were responsible for creating project Gantt charts, identifying blockers early, and essentially "greasing the wheels" between disparate teams.
Again, IMO the good ones added a lot of value by making sure no balls got dropped, which is easy to do with large, multi-team projects. Most of them, though, did a lot of just "status checks" and meeting updates.
mlsu
I suspect that these "AI layoffs" are really "interest rate" layoffs in disguise.
Software was truly, truly insane for a bit there. Straight out of college, no-name CS degree, making $120k, $150k (back when $120k really meant $120k)? The music had to stop on that one.
spamizbad
Yeah, my spiciest take is that Jr. Dev salaries really started getting silly during the 2nd half of the 2010s. It was ultimately supply (too little) and demand (too much) pushing them upward, but it was a huge signal we were in a bubble.
LPisGood
As someone who entered the workforce just after this, I feel like I missed the peak. A ton of those people got boatloads of money, great stock options, and many years of experience that they can continue to leverage for excellent positions.
nyarlathotep_
The irony now is that 120k is basically minimum wage for major metros (and in most cases that excludes home ownership).
Of course, that growth in wages in this sector was a contributing factor to home/rental price increases as the "market" could bear higher prices.
rekenaut
I feel that saying "120k is basically minimum wage for major metros" is absurd. As of 2022, there are only three metro areas in the US that have a per capita income greater than $120,000 [1] (Bay Area and Southwest Connecticut). Anywhere else in the US, 120k is doing pretty well for yourself, compared to the rest of the population. The average American working full time earns $60k [2]. I'm sure it's not a comfortable wage in some places, but "basically minimum wage" just seems ignorant.
[1] https://en.wikipedia.org/wiki/List_of_United_States_metropol...
[2] https://en.wikipedia.org/wiki/Personal_income_in_the_United_...
bravesoul2
Yeah, 120k is the maximum I have earned over 20 years in the industry. I started off at circa 40k; maybe that's 70k adjusted for inflation. Not in the US.
alephnerd
CoL in London or Dublin is comparable to much of the US, but new grad salaries are in the $30-50k range.
The issue is salary expectations in the US are much higher than those in much of Western Europe despite having similar CoL.
And $120k for a new grad is only a tech specific thing. Even new grad management consultants earn $80-100k base, and lower for other non-software roles and industries.
catigula
That really only happened in HCOL areas.
bravesoul2
HCOL wasn't the driver, though. It was an abundance of investment and desire to hire. If the titans could collude to pay engineers half as much, they would. They tried.
xp84
Sure, but there was a massive concentration of such people in those areas.
bachmeier
> I mean, we hired someone on my team to attend cross functional meetings because our calendars were literally too full to attend.
Some managers read Dilbert and think it's intended as advice.
trhway
AI has been also consuming Dilbert as part of its training...
DonHopkins
Worse yet, AI has been consuming Scott Adams quotes as part of its training...
"The reality is that women are treated differently by society for exactly the same reason that children and the mentally handicapped are treated differently. It’s just easier this way for everyone. You don’t argue with a four-year old about why he shouldn’t eat candy for dinner. You don’t punch a mentally handicapped guy even if he punches you first. And you don’t argue when a women tells you she’s only making 80 cents to your dollar. It’s the path of least resistance. You save your energy for more important battles." -Scott Adams
"Women define themselves by their relationships and men define themselves by whom they are helping. Women believe value is created by sacrifice. If you are willing to give up your favorite activities to be with her, she will trust you. If being with her is too easy for you, she will not trust you." -Scott Adams
"Nearly half of all Blacks are not OK with White people. That’s a hate group." -Scott Adams
"Based on the current way things are going, the best advice I would give to White people is to get the hell away from Black people. Just get the fuck away. Wherever you have to go, just get away. Because there’s no fixing this. This can’t be fixed." -Scott Adams
"I’m going to back off from being helpful to Black Americas because it doesn’t seem like it pays off. ... The only outcome is that I get called a racist." -Scott Adams
icedchai
I've worked at smaller companies where half the people in the meetings were just there because they had nothing else to do. Lots of "I'm a fly on the wall" and "I'll be a note taker" types. Most of them contributed nothing.
xp84
My friend's company (he was VP of Software & IT at a non-tech company) had a habit of meetings with no particular agenda and no decisions that needed making. Just meeting because it was on the calendar, discussing any random thing someone wanted to blab about. Not how my friend ran his team but that was how the rest did.
Then they had some disappointing results due to their bad decision-making elsewhere in the company, and they turned to my friend and said "Let's lay off some of your guys."
osigurdson
It is almost like once a company gets rolling, there is sufficient momentum to keep it going even if many layers aren't doing very much. The company becomes a kind of meta-economic zone where nothing really matters. Politics and fights emerge between departments and layers but have nothing to do with making a better product or service. This can go on for decades if the moat is large enough.
Nasrudith
The first mistake is thinking that contribution must be in the form of output rather than ingestion. Of course, meetings aren't often the most efficient way of doing that; it's more about being forced to listen (at least officially) so there isn't an excuse.
icedchai
This is true, but generally speaking there should be more people "producing" than "ingesting." This is often not the case. Most meetings are useless, and this has become much worse in modern times. Example: agile "scrum" and its daily stand-ups, which inevitably turn into status reports.
At some point in the 2000s, every manager decided they needed weekly 1:1s, resulting in even more meetings. Many of these are entirely ineffective. As one boss told me, "I've been told I need to have 1:1s, so I'm having them!" I literally sat next to him and talked every day, but it was a good time to go for coffee...
JSR_FDED
I don’t doubt there’s a lot of knowledge workers who aren’t adding value.
I’m worried about the shrinking number of opportunities for juniors.
hn_throwaway_99
I agree with this, but I still think that offshoring is much more responsible for this than AI.
I have definitely seen real world examples where adding junior hires at ~$100k+ is being completely forgone when you can get equivalent output from someone making $40k offshore.
phendrenad2
To the contrary - they were providing value to the VP who benefitted from inflated headcount. That's "real value", it's just a rogue agent is misaligned with the company's goals.
And AI cannot provide that kind of value. Will a VP in charge of 100 AI agents be respected as much as a VP in charge of 100 employees?
At the end of the day, we're all just monkeys throwing bones in the air in front of a monolith we constructed. But we're not going to stop throwing bones in the air!
idkwhattocallme
True! I golfed with the president of the division on a Friday (during work) and we got to the root of this. Companies would rather burn money on headcount (counted as R&D) than show profits and pay the government taxes. When you have 70%+ margins on your software, you have money to burn. Dividends back to shareholders were not rewarded during ZIRP.

On VPs being respected: I found at the companies I worked at that VPs and their directs were like nobles in a feudal kingdom, constantly quibbling and battling for territory. There were alliances with others, and full-on takeouts at points. One VP described it as Game of Thrones. Not sure how this all changes when your kingdom is a bunch of AI agents that presumably anyone can operate.
lotsofpulp
> Companies would rather burn money on headcount (counted as R&D) than show profits and pay the govt taxes
The data does not support this. The businesses with the highest market caps are the ones with the highest earnings.
https://companiesmarketcap.com/
Sort by # of employees and you get a list of companies with lower market caps.
myko
Not so fun in real life but I kind of like this as a video game concept
BriggyDwiggs42
We really oughta work on setting up systems that don’t waste time on things like this. Might be hard, but probably would be worth the effort.
PeterStuer
"Hiring someone gave my VP more headcount and therefore more clout"
Which is the sole reason automation will not make most people obsolete until the VP level themselves are automated.
dlivingston
No, not if the metric by which VPs get clout changes.
monkeyelite
That metric is evaluated deep in the human psyche.
thfuran
The more cloud spend the better. Take 10% of it as a bonus?
0xpgm
It's about to change to doing more with less headcount and higher AI spend
Nasrudith
Automation is just one form of "face a sufficiently competitive marketplace such that the company can no longer tolerate the dead-weight loss of their egos".
lukev
Whenever I think about AI and labor, I can't help thinking about David Graeber's [Bullshit Jobs](https://en.wikipedia.org/wiki/Bullshit_Jobs).
And there's multiple confounding factors at play.
Yes, lots of jobs are bullshit, so maybe AI is a plausible excuse to downsize and gain efficiency.
But also, the dynamic that causes bullshit jobs to exist hasn't gone away. In fact, assuming AI does actually provide meaningful automation or productivity improvement, it might well be the case that the ratio of bullshit jobs increases.
alvah
Exactly. For as long as I can remember, in any organisation of reasonable size I have worked in, you could get rid of the ~50% of the headcount who aren't doing anything productive without any noticeable adverse effects (on the business, at least; obviously the effects on the individuals would be somewhat adverse). This being the case, there are obviously many factors other than pure efficiency keeping people employed, so why would an AI revolution on its own create some kind of massive Schumpeterian shockwave?
ryandrake
People keep tossing around this 50% figure like it's a fact, but do you really think these companies just have half their staff just not doing anything? It just seems absurd, and I honestly don't believe it.
Everywhere I've ever worked, we had 3-4X more work to do than staff to do it. It was always a brutal prioritization problem, and a lot of good projects just didn't get done because they ended up below the cut line, and we just didn't have enough people to do them.
I don't know where all these companies are that have half their staff "not doing anything productive" but I've never worked at one.
What's more likely? 1. Companies are (for reasons unknown) hiring all these people and not having them do anything useful, or 2. These people actually do useful things, but HN commenters don't understand those jobs and simply conclude they're doing nothing?
sevensor
What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task. In 2025, we can get LLMs to do that for us. Unfortunately, the kind of executive who thinks AI is a legitimate replacement for actual work does not recognize the difference. I expect to see the more credulous CEOs dynamiting their companies as a result. Whether the rest of us can survive this remains to be seen. The CEOs will be fine, of course.
const_cast
> What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task.
The only reason this existed in the first place is because measuring performance is extremely difficult, and becomes more difficult the more complex a person's job is.
AI won't fix that. So even if you eliminate 50% of your employees, you won't be eliminating the bottom 50%. At best, and probably what happens on average, your choices are about as good as random chance, so you end up with the same proportion of shitty workers as you had before. At worst, you actively select the poorest workers because you have some shitty metrics, which happens more often than we'd all like to think.
cjs_ac
There's a connection to the return to office mandates here: the managers who don't see how anyone can work at home are the ones who've never done anything but yap in the office for a living, so they don't understand how sitting somewhere quiet and just thinking counts as work or delivers value for the company. It's a critical failure to appreciate that different people do different things for the business.
Jubijub
That is a hugely simplistic take that tells me you have never managed people or coordinated work across many people. I mean, I am more productive individually at home too, as are probably all the folks on my team. But we don't always work independently from each other, at which point having some days in common is a massive booster.
cjs_ac
There is a spectrum: at one extremity is mandatory in-office presence every day; at the other is a fully-remote business. For any given individual, and for any given team, the approach needs to be placed on that spectrum according to what it is that that individual or team does. I'm not arguing in favour of any position on that spectrum; I'm arguing against blanket mandates that don't involve any consideration for what individuals in the business do.
xg15
> What AI is going to wipe out is white collar jobs where people sleepwalk through the working day and carelessly half ass every task.
Note that AI wipes out the jobs, but not the tasks themselves. So if that's true, as a consumer, expect more sleepwalked, half-assed products, just created by AI.
richardw
CEOs will be fine until their customers disappear. Are the AIs going to click ads and buy iPhones?
psadauskas
AIs are great at generating bullshit, so if your job involves generating bullshit, you're probably on the chopping block.
I just wish that instead of getting more efficient at generating bullshit, we could just eliminate the bullshit.
TeMPOraL
> AIs are great at generating bullshit, so if your job involves generating bullshit, you're probably on the chopping block.
That covers the majority of sales, advertising, and marketing work. Unfortunately, replacing people with AI there will only make things worse for everyone.
potatoman22
Some of the best applications of LLMs I've seen are for reducing bullshit. My goal for creating AI products is to let us act more like humans and less like oxen. I know it's idealistic, but I need to act with some goal.
habosa
I think it’s actually going to save those people. They can vibe code themselves just enough output to survive where before they did next to nothing. In relative terms, they’ll get a much much higher productivity boost from AI than the already high-performing Staff engineer.
Management will be thrilled.
einpoklum
I haven't worked in the US, and I have not yet worked at a company where such employees exist. Some are slower, some are faster or more efficient or productive, but all of them, everyone, are under the pressure of too many tasks assigned to them, and it's always obvious that more personnel is needed but budget (supposedly) precludes it.
So what you're describing is a mythical situation for me. But US corporations are fabulously rich, or perhaps I should say highly valued, and there are lots of investors to throw money at things, I guess, so maybe that actually happens.
ryandrake
No, it's the same in the US, too. I don't know what these mythical companies are where people are saying 50% of the workforce does nothing, but I've never seen such a place. Everywhere I've ever worked had way more projects to get done than people available to do them. Everyone was working at capacity.
johnbenoe
Yea
CKMo
There's definitely a big problem with entry-level jobs being replaced by AI. Why hire an intern or a recent college-grad when they lack both the expertise and experience to do what an AI could probably do?
Sure, the AI might require handholding and prompting too, but the AI is either cheaper or actually "smarter" than the young person. In many cases, it's both. I work with some people who I believe have the capacity and potential to one day be competent, but the time and resource investment to make that happen is too much. I often find myself choosing to just use an AI for work I would have delegated to them, because I need it fast and I need it now. If I handed it off to them I would not get it fast, and I would need to also go through it with them in several back-and-forth feedback-review loops to get it to a state that's usable.
Given they are human, this would push back delivery times by 2-3 business days. Or... I can prompt and handhold an AI to get it done in 3 hours.
Not that I'm saying AI is a godsend, but new grads and entry-level roles are kind of screwed.
ChrisMarshallNY
This is where the horrific disloyalty of both companies and employees, comes to bite us in the ass.
The whole idea of interns is as training positions. They are supposed to be a net negative.
The idea is that they will either remain at the company after their internship, or move to another company, taking the priorities of their trainers with them.
But nowadays, with corporate HR actively doing everything it can to screw over employees, and employees so transient that they can barely remember the name of their employer, the whole thing is kind of a worthless exercise.
At my old company, we trained Japanese interns. They would often relocate to the US on 2-year visas, and became very good engineers upon returning to Japan. It was well worth it.
neilv
I agree that interns are pretty much over in tech. Except maybe as a semester/summer trial/goodwill period at an established company, for students near graduation. You usually won't get work output worth the mentoring cost, but you might identify a great potential hire and be on their shortlist.
Startups are less enlightened than that about "interns".
Literally today, in a startup job posting to a top CS department, they're looking for "interns" to bring (not learn) hot experience developing AI agents to this startup, for... $20/hour, and get called an intern.
It's also normal for these startup job posts to be looking for experienced, professional-grade skills in things like React, Python, PG, Redis, etc., and still call the person an intern, with a locally unlivable part-time wage.
Those startups should stop pretending they're teaching "interns" valuable job skills, admit that they desperately need cheap labor for their "ideas person" startup leadership to do things they can't do, and cut the "intern" in as a founding engineer with meaningful equity. Or, if they can't afford to pay a livable and plausibly competitive startup wage, maybe those "interns" are technical cofounders.
geraneum
> horrific disloyalty of both companies and employees
There’s no such thing as loyalty in employer-employee relationships. There’s money, there’s work, and there’s [collective] leverage. We need to learn a thing or two from blue-collar workers.
ChrisMarshallNY
> We need to learn a thing or two from blue collars.
A majority of my friends are blue-collar.
You might be surprised.
Unions are adversarial, but the relationships can still be quite warm.
I hear that German and Japanese unions are full-force stakeholders in their corporations, and the relationship is a lot more intricate.
It's like a marriage. There's always elements of control/power play, but the idea is to maximize the benefits.
It can be done. It has been done.
It's just kind of lost, in tech.
FirmwareBurner
>At my old company, we trained Japanese interns. They would often relocate to the US, for 2-year visas, and became very good engineers,
Damn, I wish that was me. Having someone mentor you at the beginning of your career, instead of having to self-learn and fumble your way around never knowing if you're on the right track, is a massive force multiplier that pays massive dividends over your career. It's like entering the stock market with $1 million in capital vs $100. You're also less likely to build bad habits if someone with experience teaches you early on.
dylan604
I really think the loss of the mentor/apprentice type of experience is one of those baby-with-the-bathwater losses. There are definitely people whose personality says they know everything and can learn nothing from others, but for those of us who would much rather learn the hows and whys from people with more experience than collect all of those paper cuts ourselves, working with mentors is definitely a much better way to grow.
ChrisMarshallNY
Yup. It was a standard part of their HR policy. They are all about long, long-term employment.
They are a marquee company, and get the best of the best, direct from top universities.
Also, no one has less than a Master's, over there.
We got damn good engineers as interns.
xpe
I personally care a lot about people, but if I was running a publicly traded for-profit, I would have a lot of constraints about how to care for them. (A good place to start, by the way, is not bullshitting people about the financial realities.)
Employees are lucky when incentives align and employers treat them well. This cannot be expected or assumed.
A lot of people want a different kind of world. If we want it, we’re gonna have to build it. Think about what you can do. Have you considered running for office?
I don’t think it is helpful for people to play into the victim narrative. It is better to support each other and organize.
mechagodzilla
Interns and new grads have always been a net-negative productivity-wise in my experience, it's just that eventually (after a small number of months/years) they turn into extremely productive more-senior employees. And interns and new grads can use AI too. This feels like asking "Why hire junior programmers now that we have compilers? We don't need people to write boring assembly anymore." If AI was genuinely a big productivity enhancer, we would just convert that into more software/features/optimizations/etc, just like people have been doing with productivity improvements in computers and software for the last 75 years.
lokar
Where I have worked new grads (and interns) were explicitly negative.
This is part of why some companies have minimum terminal levels (often 5/Sr) before which a failure to improve means getting fired.
0xpgm
Isn't that every new employee? For the first few months you aren't expected to be firing on all cylinders as you catch up and adjust to company norms.
An intern is much more valuable than AI in the sense that everyone makes micro-decisions that contribute to the business. An intern can remember what they heard in a meeting a month ago, or in some important water-cooler conversation, and incorporate that into their work. AI cannot do that.
alephnerd
It's a monetary issue at the end of the day.
AI/ML and Offshoring/GCCs are both side effects of the fact that American new grad salaries in tech are now in the $110-140k range.
At $70-80k the math for a new grad works out, but not at almost double that.
Also, going remote-first for extended periods during COVID proved that operations can work that way, so the argument was made that you can hire top talent abroad at American new-grad salaries. Around early-mid 2020, plenty of employees on visas were given the option to take a pay cut and "remigrate" to help start a GCC in their home country, or get fired and try to find a new job within 60 days.
The skills aspect also played a role to a certain extent - by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe, where universities still treat CS as engineering instead of an applied math discipline - I don't care if you can prove Dixon's factorization method by induction if you can't tell me how threading works or what protection rings the Linux kernel uses.
The Japan example mentioned above only works because Japanese salaries in Japan have remained extremely low and Japanese is not an extremely mainstream language (making it harder for Japanese firms to offshore en masse - though they have done so in plenty of industries where they used to hold a lead like Battery Chemistry).
sarchertech
> by the late 2010s it was getting hard to find new grads who actually understood systems internals and OS/architecture concepts, so a lot of jobs adjacent to those ended up moving abroad to Israel, India, and Eastern Europe where universities still treat CS as engineering instead of an applied math discipline
That doesn’t fit my experience at all. The applied math vs. engineering continuum mostly depends on whether a school's CS program came out of the engineering department or the math department. I haven’t noticed any shift along that spectrum within CS departments, except that people are more likely to start out programming in higher-level languages where they are more insulated from the hardware.
That’s the same across countries though. I certainly haven’t noticed that Indian or Eastern European CS grads have a better understanding of the OS or the underlying hardware.
brookst
I just can't agree with this argument at all.
Today, you hire an intern and they need a lot of hand-holding, are often a net tax on the org, and they deliver a modest benefit.
Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more. Their total impact will be much higher.
The whole "entry level is screwed" view only works if you assume that companies want all of the drawbacks of interns and entry level employees AND there is some finite amount of work to be done, so yeah, they can get those drawbacks more cheaply from AI instead.
But I just don't see it. I would much rather have one entry level employee producing the work of six because they know how to use AI. Everywhere I've worked, from 1-person startup to the biggest tech companies, has had a huge surplus of work to be done. We all talk about ruthless prioritization because of that limit.
So... why exactly is the entry level screwed?
chongli
> Tomorrow's interns will be accustomed to using AI, will need less hand-holding, will be able to leverage AI to deliver more.
Maybe tomorrow's interns will be "AI experts" who need less hand-holding, but the day after that will be kids who used AI throughout elementary school and high school and know nothing at all, deferring to AI on every question, and have zero ability to tell right from wrong among the AI responses.
I tutor a lot of high school students and this is my takeaway over the past few years: AI is absolutely laying waste to human capital. It's completely destroying students' ability to learn on their own. They are not getting an education anymore, they're outsourcing all their homework to the AI.
sibeliuss
It's worth reminding folks that one doesn't _need_ a formal education to get by. I did terribly in school and never went to college, and years later have reached a certain expertise (with many fortunate moments along the way).
What I had growing up though were interests in things, and that has carried me quite far. I worry much more about the addictive infinite immersive quality of video games and other kinds of scrolling, and by extension the elimination of free time through wasted time.
alephnerd
I mean, a lot of what you mentioned is an issue of critical thinking (and I'm not sure that's something that can be taught), which has always been an issue in any job market; deskilling via automation (AI or traditional) was used to remediate that gap.
But if you deskill processes, it makes it harder to argue in favor of paying the same premium you did before.
gerad
They don't have the experience to tell bad AI responses from good ones.
xp84
True, but this becomes less of an issue as AI improves, right? Which is the 'happier' direction to see a problem moving, as if AI doesn't improve, it threatens the jobs less.
einpoklum
> will need less hand-holding, will be able to leverage AI to deliver more
Well, maybe it'll be the other way around: Maybe they'll need more hand-holding since they're used to relying on AI instead of doing things themselves, and when faced with tasks they need to do, they will be less able.
But, eh, what am I even talking about? The _senior_ developers in many companies need a lot of hand-holding that they aren't getting, write bad code with poor practices, and teach the newbies to get used to doing the same. So that's why the entry-level people are screwed, AI or no.
brookst
You’ve eloquently expressed exactly the same disconnect: as long as we think the purpose of internships is to write the same kind of code that interns write today, sure, AI probably makes the whole thing less efficient.
But if the purpose of an internship is to learn how to work in a company, while producing some benefit for the company, I think everything gets better. Just as we don't measure today's programmers by words per minute typed, I don't think we'll measure tomorrow's interns by lines of code written by hand.
So much of the doom here comes from a thought process that goes "we want the same outcomes as today, but the environment is changing, therefore our precious outcomes are at risk."
diogolsq
You’re right that AI is fast and often more efficient than entry-level humans for certain tasks — but I’d argue that what you’re describing isn’t delegation, it’s just choosing to do the work yourself via a tool. Implementation costs are lower now, so you decide to do it on your own.
Delegation, properly defined, involves transferring not just the task but the judgment and ownership of its outcome. The perfect delegation is when you delegate to someone because you trust them to make decisions the way you would — or at least in a way you respect and understand.
You can’t fully delegate to AI — and frankly, you shouldn’t. AI requires prompting, interpretation, and post-processing. That’s still you doing the thinking. The implementation cost is low, sure, but the decision-making cost still sits with you. That’s not delegation; it’s assisted execution.
Humans, on the other hand, can be delegated to — truly. Because over time, they internalize your goals, adapt to your context, and become accountable in a way AI never can.
Many reasons why AI can't fill your shoes:
1. Shallow context – It lacks awareness of organizational norms, unspoken expectations, or domain-specific nuance that’s not in the prompt or is not explicit in the code base.
2. No skin in the game – AI doesn’t have a career, reputation, or consequences. A junior human, once trained and trusted, becomes not only faster but also independently responsible.
Junior and Interns can also use AI tools.
dasil003
You said exactly what I came here to say.
Maybe some day AI will truly be able to think and reason in a way that can approximate a human, but we're still very far from that. And even when we do, the accountability problem means trusting AI is a huge risk.
It's true that there are white collar jobs that don't require actual thinking, and those are vulnerable, but that's just the latest progression of computerization/automation that's been happening steadily for the last 70 years already.
It's also true that AI will completely change the nature of software development, meaning that you won't be able to coast just on arcane syntax knowledge the way a lot of programmers have been able to so far. But the fundamental precision of logical thought and mapping it to a desirable human outcome will still be needed, the only change is how you arrive there. This actually benefits young people who are already becoming "AI native" and will be better equipped to leverage AI capabilities to the max.
uludag
I thought the whole idea of automation though was to lower the skill requirement. Everyone compares AI to the industrial revolution and the shift from artisan work to factory work. If this analogy were to hold true, then what employers should actually be wanting is more junior devs, maybe even non-devs, hired at a much cheaper wage. A senior dev may be able to outperform a junior by a lot, but assuming the AI is good enough, four juniors or like 10 non-devs should be able to outperform a senior.
This obviously not being the case shows that we're not in an AI-driven fundamental paradigm shift, but rather run-of-the-mill cost cutting. Suppose a tech bubble pops and there are mass layoffs (like the dot-com bubble): obviously people will lose their jobs, and AI hype merchants will almost certainly push the narrative that those losses came from AI advancements in an effort to retain funding.
Loughla
So what happens when you retire and have no replacement because you didn't invest in entry level humans?
This feels like the ultimate pulling up the ladder after you type of move.
mirkodrummer
IMO comparing entry-level people with AI is very short-sighted. I was smarter than every dumb dinosaur at my first job; I was so eager to learn, proactive, and positive. I probably was very lucky too, but my point is I don't believe this whole idea that a junior is worse than AI - I'd say the contrary.
phailhaus
I don't get this because someone has to work with the AI to get the job done. Those are the entry-level roles! The manager who's swamped with work sure as hell isn't going to do it.
snowwrestler
Historically, people have been pretty good at predicting the effects of new technologies on existing jobs. But quite bad at predicting the new jobs / careers / industries that are eventually created with those technologies.
This is why free market economies create more wealth over time than centrally planned economies: the free market allows more people to try seemingly crazy ideas, and is faster to recognize good ideas and reallocate resources toward them.
In the absence of reliable prediction, quick reaction is what wins.
Anyway, even if AI does end up “destroying” tons of existing white collar jobs, that does not necessarily imply mass unemployment. But it’s such a common inference that it has its own pejorative: Luddite.
And the flip side of Luddism is what we see from AI boosters now: invoking a massive impact on current jobs as shorthand to create the impression of massive capability. It's a form of marketing, as the CNN piece says.
digdugdirk
More people need to understand the actual history of the luddites. The real issue was the usage of mechanized equipment to overwhelm an entire sector of the economy of the day - destroying the labor value of a vast swath of craftspeople and knocking them down a peg on the social ladder.
Those people who were able to get work were now subject to a much more dangerous workplace and forced into a more rigid legalized employer/employee structure, which was a relatively new "corporate innovation" in the grand scheme of things. This, of course, allowed/required the state to be on the hook for enforcement of the workplace contract, and you can bet that both public and private police forces were used to enforce that contract with violence.
Certainly something to think about for all the users on this message board who are undoubtedly more highly skilled craftspeople than most, and would never be caught up in a mass economic displacement driven by the introduction of a new technological innovation.
At the very least, it's worth a skim through the Wikipedia article: https://en.wikipedia.org/wiki/Luddite
nopinsight
My thesis is that this could lead to a booming market for “pink-collar” service jobs. A significant latent demand exists for more and better services in developed countries.
For instance, upper-middle-class and middle-class individuals in countries like India and Thailand often have access to better services in restaurants, hotels, and households compared to their counterparts in rich nations.
Elderly care and health services are two particularly important sectors where society could benefit from allocating a larger workforce.
Many others will have roles to play building, maintaining, and supervising robots. Despite rapid advances, they will not be as dexterous, reliable, and generally capable as adult humans for many years to come. (See: Moravec's paradox).
csomar
I think the takeaway is that interest rates have to be kept relatively high, as the ZIRP era showed that near-zero rates break the free market. There is a reason Trump wants to lower the interest rate.
Sure, it is painful, but a ZIRP economy doesn't listen to end consumers. There's no reason to innovate and chase crazy ideas if you already have plenty of income.
tw04
But also it potentially means mass unemployment and we have literally no plan in place if that happens beyond complete societal collapse.
Even if you think all the naysayers are “luddites”, do you really think it’s a great idea to have no backup plan beyond “whupps we all die or just go back to the Stone Age”?
snowwrestler
We actually have many backup plans. The most effective ones will be the new business plans that unlock investment which is what creates new jobs. But behind that are a large set of government policies and services that help people who have lost work. And behind that are private resources like charities, nonprofits, even friends and family.
People don’t want society to collapse. So if you think it’s something that people can prevent, feel comforted that everyone is trying to prevent it.
alluro2
Compared to 30-40 years ago, I believe many in the US would argue that society has already collapsed to a significant extent, with regards to healthcare, education, housing, cost of life, homelessness levels etc.
If these mechanisms you mention are in place and functioning, why is there, for example, such large growth of the economic inequality gap?
ccorcos
> do you really think it’s a great idea to have no backup plan
What makes you think people haven’t made back up plans?
Or are you saying government needs to do it for us?
argomo
Ah yes the old "let's make individuals responsible for solving societal problems" bit. Nevermind that the state is sometimes the only entity capable of addressing the situation at scale.
beepbooptheory
So, we are doomed to work forever, just maybe different jobs?
satvikpendem
Of course. I mean this has never not been the case unless you are independently wealthy. Work always expands, that's why it's a fallacy to think that if we just had more productivity gains that we'd work half the time; no, there are always new things to do tomorrow that were not possible yesterday.
absurdo
Basically yeah. You live in a world of layered servitude and, short of a financial windfall that hoists you up for some time, you’re basically guaranteed to work your entire life, and grow old, frail and poor. This isn’t a joke, it’s reality for many people that’s hidden from us to keep us deluded. Similar to my other mini-rant, I don’t have any valid answers to the problem at hand. Just acknowledging how fucked things are for humanity.
aianus
No, it's quite easy to make $1mm in a rich country and move to a poorer country and chill if you so desire.
77pt77
Just like the Red Queen.
You have to always keep on moving just to stay in the same place.
madaxe_again
When steam engines came along, an awful lot of people argued that being able to pump water from mines faster, while inarguably useful, would not have any broad economic impact. Only madmen saw the Newcomen engine and thought “ah, railways!”. Those madmen became extraordinarily wealthy. Vast categories of work were eliminated; others were created.
I think this situation is very similar in terms of the underestimation of scope of application, however differs in the availability of new job categories - but then that may be me underestimating new categories which are as yet as unforeseen as stokers and train conductors once were.
michaeldoron
Every time an analyst cites the current state of AI-based tools as evidence that AI disruption is just hype, I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.
Putting that aside, how is this article called an analysis and not an opinion piece? The only analysis done here is asking a labor economist what conditions would allow this claim to hold, and giving an alternative, already circulated theory that AI companies CEOs are creating a false hype. The author even uses everyday language like "Yeaaahhh. So, this is kind of Anthropic’s whole ~thing.~ ".
Is this really the level of analysis CNN has to offer on this topic?
They could have sketched the growth in foundation-model capabilities vs. finite resources such as data, compute, and hardware. They could have written about the current VC market and the need for companies to show results, not promises. They could even have written about the giant biotech industry and its struggle to incorporate exciting novel drug-discovery tools under slow-moving FDA approvals. None of this was done here.
Terr_
> I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.
Compare: "Whenever I think of skeptics dismissing completely novel and unprecedented outcomes occurring by mechanisms we can't clearly identify or prove (will) exist... I think of skeptics who dismissed an outcome that had literally hundreds of well-studied historical precedents using proven processes."
You're right that humans don't have a good intuition for non-linear growth, but that common thread doesn't heal over those other differences.
actuallyalys
Yeah, for this analogy to work, we’d have to see AI causing a small but consistently doubling amount of lost jobs.
mitthrowaway2
If that were happening right now, how would we know? COVID-19 cases were tracked imperfectly but pretty well; is there any equivalent for AI-related job losses?
bgwalter
Why not use the promised exponential growth of home ownership that led to the catastrophic real estate bubble that burst in 2008 as an example?
We are still dealing with the aftereffects, which led to the elimination of any working class representation in politics and suppression of real protests like Occupy Wall Street.
When this bubble bursts, the IT industry will collapse for some years like in 2000.
michaeldoron
The growth of home ownership was an indicator of real estate investment, not of real world capabilities - once the value of real estate dropped and the bubble burst, those investments were worth less than before, causing the crisis. In contrast, the growth in this scenario is the capabilities of foundation models (and to a lesser extent, the technologies that stem out of these capabilities). This is not a promise or an investment, it's not an indication of speculative trust in this technology, it is a non-decreasing function indicating a real increase in performance.
mjburgess
You can pick and choose problems from history where folk belief was wrong: WW1 vs. Y2K.
This isn't very informative. Indeed, engaging in this argument-by-analogy betrays a lack of actual analysis, credible evidence, and justification for a position. Arguing "by analogy" in this way, picking and choosing the analogy, just restates your position; it doesn't give anyone a reason to believe it.
TheOtherHobbes
I'm not seeing how comparing AI to a virus that killed millions and left tens of millions crippled is an effective way to support your argument.
drewcon
Humans are not familiar with exponential change, so they have almost no ability to manage through it.
It's an apt comparison. The criticisms in the CNN article are already out of date in many instances.
bayarearefugee
As a developer that uses LLMs, I haven't seen any evidence that LLMs or "AI" more broadly are improving exponentially, but I see a lot of people applying a near-religious belief that this is happening or will happen because... actually, I don't know? because Moore's Law was a thing, maybe?
In my experience, for practical usage LLMs aren't even improving linearly at this point as I personally see Claude 3.7 and 4.0 as regressions from 3.5. They might score better on artificial benchmarks but I find them less likely to produce useful work.
geraneum
> Humans are not familiar with exponential change
Humans are. We have tools to measure exponential growth empirically. It was done for COVID (epidemiologists do it routinely) and it's done for the economy and other aspects of our lives. If there's exponential growth, we should be able to put it in numbers. "Trust me bro" is not a good measure.
Edit: typo
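A minimal sketch of what "put it in numbers" can look like, in plain Python with no external libraries (`doubling_time` is just a name I'm making up): fit a straight line to the logs of the counts and read off the doubling time. A genuinely exponential series gives a stable answer; linear or flattening growth doesn't.

```python
import math

# If y(t) grows exponentially, log(y) vs. t is a straight line.
# Least-squares fit that line and convert the slope to a doubling time.
def doubling_time(counts):
    n = len(counts)
    ts = list(range(n))
    logs = [math.log(c) for c in counts]
    t_mean = sum(ts) / n
    l_mean = sum(logs) / n
    slope = sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, logs)) \
            / sum((t - t_mean) ** 2 for t in ts)
    return math.log(2) / slope  # periods per doubling

# A series that doubles every period should report ~1.0
print(doubling_time([100, 200, 400, 800, 1600]))
```

Point being: the claim is checkable against public data either way, instead of vibes.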
const_cast
Viruses spread and propagate themselves, often changing along the way. AI doesn't, and probably shouldn't. I think we've made a few movies on why that's a bad idea.
agarren
> The criticisms in the CNN article are already out of date in many instances.
Which ones, specifically? I’m genuinely curious. The ones about “[an] unfalsifiable disease-free utopia”? The one from a labor economist basically equating Amodei’s high-unemployment/strong economy claims to pure fantasy? The fact that nothing Amodei said was cited or is substantiated in any meaningful way? Maybe the one where she points out that Amodei is fundamentally a sales guy, and that Anthropic is making the rounds saying scary stuff just after they released a new model - a techbro marketing push?
I like anthropic. They make a great product. Shame about their CEO - just another techbro pumping his scheme.
IshKebab
I think you missed the point. AI is dismissed by idiots because they are looking at its state now, not what it will be in future. The same was true in the pandemic.
dingnuts
Especially when the world population is in the billions and, at the beginning, we were worried about a double-digit IFR.
Yeah. Imagine if COVID had actually killed 10% of the world population. Killing millions sucks, but mosquitos regularly do that too, and so does tuberculosis, and we don't shut down everything. Could've been close to a billion. Or more. Could've been so much worse.
monkeyelite
> I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.
But that didn’t happen. All of the people like pg who drew these accelerating graphs were wrong.
In fact, I think just about every commenter on COVID was wrong about what would happen in the early months regardless of political angle.
tim333
I remember scientists, especially epidemiologists being quite accurate. I guess the key is to not even have a political angle but instead some knowledge of what you are talking about.
monkeyelite
I don’t remember that. How do you explain the total collapse of public trust in science and medical institutions?
Try revisiting their content from spring of 2020 (flatten the curve, wild death predictions, etc).
> I guess the key is to not even have a political angle
It’s a fantasy to imagine technical knowledge allows you to transcend the political and 2020 only reinforced that.
timr
> I think of skeptics who dismissed the exponential growth of covid19 cases due to their initial low numbers.
Uh, not to be petty, but the growth was not exponential — neither in retrospect, nor given what was knowable at any point in time. About the most aggressive, correct thing you could’ve said at the time was “sigmoid growth”, but even that was basically wrong.
If that’s your example, it’s inadvertently an argument for the other side of the debate: people say lots of silly, unfounded things at Peak Hype that sound superficially correct and/or “smart”, but fail to survive a round of critical reasoning. I have no doubt we’ll look back on this period of time and find something similar.
SoftTalker
Analysis == Opinion when it comes to mainstream news reporting. It's one guy's thinking on something.
qgin
This is the exact thing I’ve expressed as well.
This moment feels exactly to me like that moment when we were going to “shut down for two weeks” and the majority of people seemed to think that would be the end of it.
It was clear where the trend was going, but exponentials always seem ridiculous on an intuitive level.
bayareapsycho
My last company (F50, ass engineering culture, pretends to be a tech company) went and fired all of the juniors at a certain level because "AI"
The funny part is, most of those juniors were hired in 2022-2024, and they were better hires because of the harsher market. There were a bunch of "senior engineers" who were borderline useless and joined some time between 2018-2021
I just think it's kind of funny to fire the useful people and keep the more expensive ones around who try to do more "managerial" work and have more family obligations. Smart companies do the opposite
darth_avocado
I don’t understand how any business leader can be excited about humans being replaced by AI. If no one has a job, who’s going to buy your stuff? When the unemployment in the country goes up, consumer spending slows down and recession kicks in. How could you be excited for that?
ben_w
Game theory/Nash equilibrium/Prisoner's Dilemma, and the turkey's perspective in the problem of induction.
So far, for any given automation, each actor gets to cut their own costs to their benefit — and if they do this smarter than anyone else, they win the market for a bit.
Every day the turkey lives, they get a bit more evidence the farmer is an endless source of free food that only wants the best for them.
It's easy to fool oneself that the economics are eternal with reference to e.g. Jevons paradox.
abracadaniel
My long term fear with AI is that by replacing entry level jobs, it breaks the path to train senior level employees. It could take a couple of decades to really feel the heat from it, but could lead to massive collapse as no one is left with any understanding of how existing systems work, or how to design replacements.
xp84
> It could take a couple of decades to really feel the heat from it, but could lead to massive collapse
When you consider how this interacts with the population collapse (which is inevitable now everywhere outside of some African countries) this seems even worse. In 20 years, we will have far fewer people under age 60 than we have now, and among that smaller cohort, the percentage of people at any given age who have useful levels of experience will be less because they may not be able to even begin meaningful careers.
Best case scenario, people who have gotten 5 or more years of experience by now (college grads of 2020) may scrape by indefinitely. They'll be about 47 then and have no one to hire that's more qualified than AI. Not necessarily because AI is so great; rather, how will there be someone with 20 years of experience when we simply don't hire any junior people this year?
Worst case, AI overtakes the Class of 2020 and moves up the experience-equivalence ladder faster than 1 year per year, so it starts taking out the classes of 2015, 2010, etc.
pseudo0
Juniors and offshore teams will probably be the most severely impacted. If a senior dev is already breaking off smaller tightly scoped tasks and fixing up the results, that loop can be accomplished much more quickly by iterating with a LLM. Especially if you have to wait a business day for someone in India to even start on the task when a LLM is spitting out a similar quality PR in minutes.
Ironically a friend of mine noticed that the team in India they work with is now largely pushing AI-generated code... At that point you just need management to cut out the middleman.
scarlehoff
This is what I fear as well: some companies might adopt a "sustainable" approach to AI, but others will dynamite the entry path to their companies. Of course, if your only goal is to sell a unicorn and be out after three years, who cares... but serious companies with lifelong employees that adopt the AI-first strategy are in for a surprise (looking at you, Microsoft).
lurkshark
I’m actually worried we’ve gotten a kickstart on that process already. Anecdotally it seems like entry level developer jobs are harder to come by today than a decade ago. Without the free-money growth we were seeing for a long time it seems like companies are more incentivized to only hire senior developers at the loss of the greater good that comes with hiring and mentoring junior developers.
Caveat that this is anecdotal, not sure if there are numbers on this.
cjs_ac
This isn't AI-specific, though; businesses decided that it was everyone else's responsibility to train their employees over a decade ago.
socalgal2
I agree with your worry.
That said, the first thing that jumps to my mind is cars. Back when they were first introduced you had to be a mechanically inclined person to own one and deal with it. Today, people just buy them and hire the very small number of experts (relative to the population of drivers) to deal with any issues. Same with smartphones. The majority of users have no idea how they really work. If it stops working they seek out an expert.
ATM, AI just seems like another level of that. JS/Python programmers don't need to know bits and bytes and memory allocation. Vibe coders won't need to know what JS/Python programmers need to know.
Maybe there won't be enough experts to keep it all going though.
BriggyDwiggs42
If it takes a few decades, they may actually automate all but the most impressive among senior positions though.
Nasrudith
The worst case for such a cycle is generating new jobs in reverse engineers. Although in practice with what we have seen with machinists it tends to just accelerate existing trends towards outsourcing to countries who haven't had the 'entry level collapse'.
We've already eliminated certain junior level domains essentially by design. There aren't any 'barber-surgeons' with only two years of training for good reason. Instead we have integrated surgery into a lengthier and more complicated educational path to become what we now would consider a 'proper' surgeon.
I think the answer is that if the 'junior' is uneconomical or otherwise unacceptable be prepared to pay more for the alternative, one way or another.
JKCalhoun
> turkey's perspective in the problem of induction…
Had to look that up: https://en.wikipedia.org/wiki/Turkey_illusion
absurdo
Basically if anyone has an iota of sensibility you should have never taken sama, Zuckerberg, Gates, or anyone else of that sort at face value. When they tell you they’re doing things for the good of humanity, look at what the other hand is up to.
antithesizer
>or anyone else of that sort
This category is expansive enough to make fools of almost everyone on hn.
spacemadness
And we as humans figured all this out and still do nothing with this knowledge. We fight as hard as we can against collective wisdom.
anvandare
A cancerous cell does not care that it is (indirectly) killing the lifeform that it is a part of. It just does what it does without a thought.
And if it could think, it would probably be very proud of the quarter (hour) figures that it could present. The Number has gone up, time for a reward.
thmsths
Tragedy of the commons: no one being able to buy stuff is a problem for everyone, but being able to save just a bit more by getting rid of your workforce is a huge advantage for your business.
bckr
“tragedy of the commons” is treated as a Theory of Human Nature when it’s really a religious principle underlying how we operate our society.
Jensson
People hunted large mammals to extinction long before modern society, so tragedy of the commons is nature in general. We know other predators do it as well, not just humans.
JKCalhoun
… in the interim, of course.
untrust
Another question: If AI is going to eat up everyone's jobs, how will any business be safe from a new competitor showing up and unseating them from their throne? I don't think the low level peons would be the only ones at stake, as a company could easily be outcompeted as well, since AI could conceivably outperform or replace any existing product anyway.
I guess funding for processing power and physical machinery to run the AI backing a product would be the biggest barrier to entry?
zhobbs
Yeah this will likely lead to margin compression. The best companies will be fine though, as brand and existing distribution is a huge moat.
azemetre
“Best” is carrying a lot of weight. More accurate to say the monopolistic companies that engage in regulatory capture will be fine.
layer8
Institutional knowledge is key here. Third parties can’t replicate it quickly just by using AI.
lubujackson
Luckily we are firing all those people so they will be available for new roles.
This feels a lot like the dot boom/dot bust era where a lot of new companies are going to sprout up from the ashes of all this disruption.
floatrock
Also: network effects, inertia, cornering the market enough to make incumbents uneconomical, regulatory capture...
AI certainly will increase competition in some areas, but there are countless examples where being the best at something doesn't make you the leader.
JKCalhoun
The beginning of the AI Wars?
onlyrealcuzzo
> If no one has a job, who’s going to buy your stuff?
All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?
It's easy to imagine a world in which there are way less white collar workers and everything else is pretty much the same.
It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.
It's also easy to imagine a world in which you're able to cut more workers than everyone else, and on aggregate, barely anyone is impacted, but your margins go up.
There's tons of other scenarios, including the most cited one - that technology thus far has always led to more jobs, not less.
They're probably believing any combination of these concepts.
It's not guaranteed that if there's 5% less white-collar workers per year for a few decades that we're all going to starve to death.
In the future, if trends continue, there's going to be way less workers - since there's going to be a huge portion of the population that's old and retired.
You can lose x% of the work force every year and keep unemployment stable...
A large portion of the population wants a lot more people to be able to not work and get entitlements...
It's pretty easy to see how a lot of people can think this could lead to something good, even if you think all those things are bad.
Two people can see the same painting in a museum, one finds it beautiful, and the other finds it completely uninteresting.
It's almost like asking - how can someone want the Red team to win when I want the Blue team to win?
darth_avocado
> All the people employed by the government and blue collar workers
If people don’t have jobs, government doesn’t have taxes to employ other people. If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.
trealira
> If CEOs are salivating at the thought of replacing white collar workers, there is no reason to think next step of AI augmented with robotics won’t replace blue collar workers as well.
Robotics seems harder, though, and has been around for longer than LLMs. Robotic automation can replace blue collar factory workers, but I struggle to imagine it replacing a plumber who comes to your house and fixes your pipes, or a waiter serving food at a restaurant, or someone who restocks shelves at grocery stores, that kind of thing. Plus, in the case of service work like being a waiter, I imagine some customers will always be willing to pay for a human face.
JKCalhoun
Yeah, it's as though "middle class" was a brief miracle of our age. Serfs and nobility is the more probable human condition.
Hey, is there a good board game in there somewhere? Serfs and Nobles™
kevin_thibedeau
ML models don't make fully informed decisions and will not until AGI is created. They can make biased guesses at best and have no means of self-directed inquiry to integrate new information with an understanding of its meaning. People employed in a decision making capacity are safe, whether that's managing people or building a bridge from a collection of parts and construction equipment.
spamizbad
> All the people employed by the government and blue collar workers? All the entrepreneurs, gig workers, black market workers, etc?
I can tell you for many of those professions their customers are the same white collar workers. The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...
Jensson
> The blue collar economy isn't plumbers simply fixing the toilets of the HVAC guy, while the HVAC guy cools the home of the electrician, while...
That is exactly what the blue collar economy used to be though: people making and fixing stuff for each other. White collar jobs are a new thing.
munksbeer
>It's also easy to imagine a world in which you sell less stuff but your margins increase, and overall you're better off, even if everybody else has less widgets.
History seems to show this doesn't happen. The trend is not linear, but the trend is that we live better lives each century than the previous century, as our technology increases.
Maybe it will be different this time though.
ryandrake
"Technology increases" have not made my life better than my boomer parents' and they will probably not make the next generation's lives better than ours. Big things like housing costs, education costs, healthcare costs are not being driven down by technology, quite the opposite.
Yes, the lives of "people selling stuff" will likely get better and better in the future, through technology, but the wellbeing of normal people seems to have peaked at around the year 2000 or so.
carlosjobim
I think that's mostly myth, and a very very deeply ingrained myth. That's why probably hundreds of people already feel the rage boiling up inside of them right now after reading my first sentence.
But it is myth. It has always been in the interest of the rulers and the old to try to imprint on the serfs and on the young how much better they have it.
Many of us, maybe even most of us, would be able to have fulfilling lives in a different age. Of course, it depends on what you value in life. But the proof is in the pudding, humanity is rapidly being extinguished in industrial society right now all over the world.
neutronicus
There are also blue- and pink-collar industries that we all tacitly agree are crazy understaffed right now because of brutal work conditions and low pay (health care, child care, K-12, elder care), with low quality-of-service a concern across the board, and with many job functions that seem very difficult to replace with AI (assuming liability for preventing children and elderly adults from physically injuring themselves and others).
If you, a CEO, eliminate a bunch of white-collar workers, presumably you drive your former employees into all these jobs they weren't willing to do before, and hey, you make more profits, your kids and aging parents are better-taken-care-of.
Seems like winning in the fundamental game of society - maneuvering everyone else into being your domestic servants.
const_cast
Right, but the elephant in the room is that despite those industries being constantly understaffed and labor being in extreme demand, they're underpaid. It seems nobody gives a flying fuck about the free market when it comes to the labor market, which is arguably the most important market.
So, flooding those industries with more warm bodies probably won't help anything. I imagine it would make the already fucked labor relations even more fucked.
JKCalhoun
> All the people employed by the government and blue collar workers?
You forgot the born-wealthy.
I feel increasingly like a rube for having not made my little entrepreneurial side-gigs focused strictly on the ultra-wealthy. I used to sell tube amplifier kits, for example, so you and I could have a really high-end audio experience with a very modest outlay of cash (maybe $300). Instead I should have sold the same amps but completed for $10K. (There is no upper bounds for audio equipment though — I guess we all know.)
ryandrake
This is the real answer. Eventually, when 95% of us have no jobs because AI and robotics are doing everything, then the rich will just buy and sell from each other. The other 7 billion people are not economically relevant and will just barely participate in the economy. It'll be like the movie Elysium.
I briefly did a startup that was kind of a side-project of a guy whose main business was building yachts. Why was he OK with a market that just consisted of rich people? "Because rich people have the money!"
tim333
It's like all the farmers' soil-shoveling jobs were stolen by tractors. People moved on to more interesting things.
lowbloodsugar
In all previous such revolutions, humans were freed to do more productive work while the cost of goods came down. But that doesn’t mean the same is true this time. Now the revolution does not make physical tasks easier (like ploughing or spinning thread) but intellectual labor. This time, there are no jobs to go to, since those jobs are also done by AI.
FeteCommuniste
I guess the idea is that the people left working will be made so productive and wealthy thanks to the miracle of AI that they can more than make up the difference with extravagant consumption.
isoprophlex
I too plan to buy 100,000 liters of yogurt each day once AI has transported me into the socioeconomic strata of the 0.1%
FeteCommuniste
My many robots will be busy building glorious mansions out of yogurt cups.
darth_avocado
If you want to see what that looks like, just look at the economy of India. Do we really want that?
FeteCommuniste
Certainly not what I want, but it looks like we could be headed there. And the "industry leaders" seem cool with it, to judge by their politics.
munksbeer
The economy of India is trending in the opposite direction to this narrative. More and more people lifted out of poverty as they modernise.
JKCalhoun
I'd been thinking modern day Russia, but I admit to being ignorant of a lot of countries outside the U.S.
al_borland
A single rich person can only buy so much DoorDash. Scaling a customer base needs to be done horizontally.
bravesoul2
How to move to post-capitalism is the question. We need something money-like to motivate but something different.
CSMastermind
Huge amounts of white collar jobs have been automated since the advent of computers. If you look at the work performed by office workers in the 1960s and compared it to what people today do it'd be almost unrecognizable.
They spent huge amounts of time on things that software either does automatically or makes 1,000x faster. But by and large that actually created more white collar jobs because those capabilities meant more was getting done which meant new tasks needed to be performed.
janalsncm
I don’t like this argument because 1) it doesn’t address the social consequences of rapid onset and large scale unemployment and 2) there is no law of nature that a job lost here creates a new job there.
On the first point, unemployment during the Great Depression was “only” 30%. And those people were eventually able to find other jobs. Here, we are talking about permanent unemployment for even larger numbers of people.
The Luddites were right. Machines did take their jobs. Those individuals who invested significantly in their craft were permanently disadvantaged. And those who fought against it were executed.
And on point 2, to be precise, a lack of jobs doesn’t mean a lack of problems. There are a ton of things society needs to have accomplished, and in a perfect world the guy who was automated out of packing Amazon boxes could open a daycare for low income parents. We just don’t have economic models to enable most of those things, and that’s only going to get worse.
ccorcos
What makes you so concerned about rapid onset when we haven't seen any significant change in the (USA) unemployment rate?
And there are some laws of nature that are relevant such as supply-demand economics. Technology often makes things cheaper which unlocks more demand. For example, I’m sure many small businesses would love to build custom software to help them operate but it’s too expensive.
DenisM
It’s an interesting argument, thanks.
A good analogy would be web development transition from c to java to php to Wordpress. I feel like it did make web sites creation for small business more accessible. OTOH a parallel trend was also mass-scale production of industry-specific platforms, such as Yahoo Shopping.
It’s not clear to me which trend won in the end.
ryukoposting
I'll preface this by saying I agree with most of what you said.
It'll be a slow burn, though. The projection of rapid, sustained large-scale unemployment assumes that the technology rapidly ascends to replace a large portion of the population at once. AI is not currently on a path to replacing a generalized workforce. Call center agents, maybe.
Second, simply "being better at $THING" doesn't mean a technology will be adopted, let alone quickly. If that were the case, we'd all have Dvorak keyboards and commuter rail would be ubiquitous.
Third, the mass unemployment situation requires economic conditions where not leveraging a presumably exploitable underclass of unemployed persons is somehow the most profitable choice for the captains of industry. They are exploitable because this is not a welfare state, and our economic safety net is tissue-paper thin. We can, therefore, assume their labor can be had at far less than its real worth, and thus someone will find a way to turn a profit off it. Possibly the Silicon Valley douchebags who caused the problem in the first place.
t-writescode
> > it doesn’t address the social consequences of rapid onset and large scale unemployment
> It'll be a slow burn, though.
Have you been watching the current developer market?
It's really, really rough out here for unemployed software developers.
PeterHolzwarth
The classic example is the 50's/60's photograph of an entire floor of a tall office building replaced by a single spreadsheet. This passed without comment.
lambdasquirrel
Anecdotal, but AI was what enabled me to learn French, when I was doing that. Before LLMs, I would've had to pay a lot more money to get the class time I'd need, but the availability of Google Translate and DeepL meant that some meaningful, casual learning was within reach. I could reasonably study, try to figure things out, and have questions for the teachers the two or three times a week I had lessons.
Nowadays I'm learning my parents' tongue (Cantonese) and Mandarin. It's just comical how badly the LLMs do sometimes. I swear they roll a natural 1 on a d20 and then just randomly drop a phrase. Or at least that's my head canon. They're just playing DnD on the side.
anthomtb
> Huge amounts of white collar jobs have been automated since the advent of computers
One of which was the occupation of being a computer!
qgin
I often see people say “AI can’t do ALL of my job, so that means my job is safe.”
But what this means at scale, over time, is that if AI can do 80% of your job, AI will do 80% of your job. The remaining 20% human-work part will be consolidated and become the full time job of 20% of the original headcount while the remaining 80% of the people get fired.
AI does not need to do 100% of any job (as that job is defined today ) to still result in large scale labor reconfigurations. Jobs will be redefined and generally shrunk down to what still legitimately needs human work to get it done.
As an employee, any efficiency gains you get from AI belong to the company, not you.
sram1337
...or your job goes from commanding a $200k/yr salary to $60k/yr. Hopefully that's enough to pay your mortgage.
spcebar
Something is nagging me about the AI-human replacement conversation that I would love insight from people who know more about startup money than me. It seems like the AI revolution hit as interest rates went insane, and at the same time the AI that could write code was becoming available, the free VC money dried up, or at least changed. I feel like that's not usually a part of the conversation and I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero. I know next to nothing about this and would love to hear informed opinions.
sfRattan
> It seems like the AI revolution hit as interest rates went insane...
> ...I'm wondering if we would be having the same conversation if money for startups was thrown around (and more jobs were being created for SWEs) the way it was when interest rates were zero.
The end of free money probably has to do with why C-level types are salivating at AI tools as a cheaper potential replacement for some employees, but describing the interest rates returning to nonzero percentages as going insane is really kind of a... wild take?
The period of interest rates at or near zero was a historical anomaly [1]. And that policy clearly resulted in massive, systemic misallocation of investment at global scale.
You're describing it as if that was the "normal?"
[1]: https://www.macrotrends.net/2015/fed-funds-rate-historical-c...
swyx
its not part of the conversation because the influence here is tangential at best (1) and your sense of how much vc money is on the table at any given time is not good (2).
1a. most seed/A stage investing is acyclical because it is not really about timing for exits, people just always need dry powder
1b. tech advancement is definitely acyclical - alexnet, transformers, and gpt were all just done by very small teams without a lot of funding. gpt2->3 was funded by microsoft, not vc
2a. (i have advance knowledge of this bc i've previewed the keynote slides for ai.engineer) free vc money slowed in 2022-2023 but has not at all dried up and in fact reaccelerated in a very dramatic way. up 70% this yr
2b. "vc" is a tenuous term when all biglabs are >>10b valuation and raising from softbank or sovereign wealth. its no longer vc, its about reallocating capital from publics to privates because the only good ai co's are private
mjburgess
I'm not seeing how you're replying to this comment. I'm not sure you've understood their point.
The point is that there's a correlation between macroeconomic dynamics (ie., the price of credit increasing) and the "rise of AI". In ordinary times, absent AI, the macroeconomic dynamics would fully explain the economic shifts we're seeing.
So the question is why do we even need to mention AI in our explanation of recent economic shifts?
What phenomena, exactly, require positing AI disruption?
rglover
Social media. Especially in SV, the embarrassment of failing publicly having been given so much money is far too painful psychologically.
Spinning that to say you're a "visionary" for replacing expensive employees with AI (even when it's clear we're not there yet) is risky, but a good enough smoke screen to distract the average bear from poking holes in your financials.
munificent
> What phenomena, exactly, require positing AI disruption?
AI company CEOs trying to juice their stock evaluations?
I think the real white collar bloodbath is that the end of ZIRP was the end of infinite software job postings, and the start of layoffs. I think it's easy to now point to AI, but it seems like a canard for the huge thing that already happened.
just look at this:
https://fred.stlouisfed.org/graph/?g=1JmOr
In terms of magnitude the effect of this is just enormous and still being felt, and it never recovered to pre-2020 levels. It may never. (Pre-pandemic job postings indexed to 100; it's at 61 for software.)
Maybe AI is having an effect on IT jobs though, look at the unique inflection near the start of 2025: https://fred.stlouisfed.org/graph/?g=1JmOv
For another point of comparison, construction and nursing job postings are higher than they were pre-pandemic (about 120 and 116 respectively, where pre-pandemic was indexed to 100. Banking jobs still hover around 100.)
I feel like this is almost going to become lost history because the AI hype is so self-insistent. People a decade from now will think Elon slashed Twitter's employee count by 90% because of some AI initiative, and not because he simply thought he could run a lot leaner. We're on year 3-4 of a lot of other companies wondering the same thing. Maybe AI will play into that eventually. But so far companies have needed no such crutch for reducing headcount.