"AI Will Replace All the Jobs " Is Just Tech Execs Doing Marketing
281 comments
June 4, 2025
ednite
Not an expert here, just speaking from experience as a working dev. I don’t think AI is going to replace my job as a software developer anytime soon, but it’s definitely changing how we work (and in many cases, already has).
Personally, I use AI a lot. It’s great for boilerplate, getting unstuck, or even offering alternative solutions I wouldn’t have thought of. But where it still struggles sometimes is with the why behind the work. It doesn’t have that human curiosity, asking odd questions, pushing boundaries, or thinking creatively about tradeoffs.
What really makes me pause is when it gives back code that looks right, but I find myself thinking, “Wait… why did it do this?” Especially when security is involved. Even if I prompt with security as the top priority, I still need to carefully review the output.
One recent example that stuck with me: a friend of mine, an office manager with zero coding background, proudly showed off how he used AI to inject some VBA into his Excel report to do advanced filtering. My first reaction was: well, here it is, AI replacing my job. But what hit harder was my second thought: does he even know what he just copied and pasted into that sensitive report?
So yeah, for me AI isn’t a replacement. It’s a power tool, and eventually, maybe a great coding partner. But you still need to know what you’re doing, or at least understand enough to check its work.
tines
I would agree with you, but the people making the decision to fire or keep you don’t care about quality, nor do they care about understanding AI or its limitations. If AI mostly does kinda the right thing 70% of the time but saves the company $80k a year, that’s a no-brainer. We’re being Pollyanna-ish to think that anyone cares about the things we care about, that you mention in your post.
If firing you saves $1.50 a year, they’ll do it.
ednite
Fair enough, but what happens when those same companies start realizing it’s not just about reduced quality, but also security risks and costly errors? At some point, the savings get wiped out by the consequences.
Do they go back to hiring human expertise then?
I totally agree though, the business mindset of saving a buck often outweighs everything else. I’m actually going through something similar right now with a client being swayed by a so-called “AI expert” just to cut costs. But that’s a whole other story.
keiferski
They go bankrupt, get acquired, or just hope that no one notices their security mistakes.
Admitting mistakes and correcting them directly is not a common thing for CEOs to do.
tines
> security risks and costly errors?
I hope that you're right, but the problem is that the regulatory bodies are captured by the players that they are supposed to regulate. Can you name a time in recent history where a company had to pay a penalty for a harmful action, either intentional or neglectful, that exceeded the profit they gained from the action?
davidcbc
And eventually they learn the lesson that many companies learned in the early 00s with offshoring. You get what you pay for
kunzhi
I don't think any of those companies learned a lesson.
soraminazuki
Exactly, businesses will soon treat software engineering the way Google does "support." It'll all be just robots pissing everyone off. Well, everyone except the executives, who will receive a nice big paycheck.
paul7986
Indeed. As a UX Researcher, Designer and Front-End Dev, ChatGPT Plus does the same level of design and front-end development I do. It takes me hours, versus five minutes or less for ChatGPT Plus, to come up with professional logos and a web app or website design around the logo, and then it spits out the front-end code. Once I saw that last fall, I was like: yup, there it is, it can do good portions of my job. So at my job I've been doing a lot more Customer UX Support and Research, as well as telling my co-workers and client that if I could use ChatGPT at work I would be more effective (have it do the design and front-end development). I feel I need to jump ahead and show I embrace this change ASAP. AI cannot interface with clients, do UX research, or do anything else that involves human-to-human interaction, so to me CX/UX Research is safe, and now such workers can also do UX Design and Front-End Development using AI. Really, anyone can do those two things now, and quickly.
ednite
Great example, and to be honest, I’m guilty of the same. I used ChatGPT to come up with a logo for one of my projects, and it only took about five minutes. The kicker? The designer I would’ve usually handed it to actually approved it and liked it.
It kind of stings, though. That used to be someone’s craft, their livelihood. But like you said, the key now is finding ways to adapt, maybe by leaning more into the human side of the work: real collaboration, client interaction, deeper research.
Still, not everyone can or wants to adapt. What happens to the quiet designer who just wants to spend the day in their bubble, listening to music and creating? Not chasing clients, not pivoting constantly, just doing what they love. That’s the part that saddens me at times when I see AI in action. Thanks for sharing.
outside1234
But if the AI inserts things that constantly cost the company $400k per replaced head in lawsuits when private information is leaked, that equation flips.
tines
This kind of thing won't happen much in our future Technopoly. The regulators are captured by the regulated, and all that will happen when a mistake is made is a shuffling of the cards.
reverendsteveii
I feel like I ran into an example use case yesterday that really illustrates how I work with AI as a dev. It's a simple db entity, with a field called "parent" that is the id of another entity of the same type. Theoretically there can be any number of parents (though practically, with the data we're modeling, we don't ever expect more than 3 levels). Classic case for recursion, right? So I whip up a quick method that goes something like
  public void getEntityWithParents(List<DBEntity> entityList, String id) {
      DBEntity entity = dao.getById(id);
      entityList.add(entity);
      if (entity.getParent() != null) {
          // Recurse on the parent id; recursing on entity.getId() would loop forever.
          getEntityWithParents(entityList, entity.getParent());
      }
  }
Because I'm working from a make-it-work-first, then-make-it-efficient perspective, I realize that I'm making a lot of DB calls there. So I pop open Gemini, copy/paste that algorithm and ask it "How can I replace this method with a single call to the database, in postgresql?" and it gives me
  WITH RECURSIVE parent_chain AS (
      SELECT id, name, parent, 0 AS depth
      FROM fam.dx_concept
      WHERE id = :id
      UNION ALL
      SELECT i.id, i.name, i.parent, pc.depth + 1
      FROM fam.dx_concept i
      INNER JOIN parent_chain pc ON i.id = pc.parent
  )
  SELECT id, name, parent FROM parent_chain ORDER BY depth;
That might have been a day or two of research; instead it was 5 minutes to come up with a theory and an hour or so writing tests. Gemini saved the day there. But it wasn't able to determine what needed to be done (minimizing DB calls), only how to do it, and it wasn't able to verify correctness. That's where we'll fit in in all of this: figuring out what to do and then making sure we actually did it.
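For anyone curious what the Java side looks like once the CTE does the walking, here's a minimal sketch. My assumptions, not shown above: a JDBC Connection is at hand, DBEntity has an (id, name, parent) constructor, and java.sql.* / java.util.* are imported.

  public List<DBEntity> getEntityWithParents(Connection conn, String id) throws SQLException {
      // Same recursive CTE as above, parameterized on the starting id.
      String sql = """
          WITH RECURSIVE parent_chain AS (
              SELECT id, name, parent, 0 AS depth
              FROM fam.dx_concept WHERE id = ?
              UNION ALL
              SELECT i.id, i.name, i.parent, pc.depth + 1
              FROM fam.dx_concept i
              INNER JOIN parent_chain pc ON i.id = pc.parent)
          SELECT id, name, parent FROM parent_chain ORDER BY depth
          """;
      List<DBEntity> chain = new ArrayList<>();
      try (PreparedStatement ps = conn.prepareStatement(sql)) {
          ps.setString(1, id);
          try (ResultSet rs = ps.executeQuery()) {
              while (rs.next()) {
                  // Depth 0 is the entity you asked for; each later row is the next ancestor.
                  chain.add(new DBEntity(rs.getString("id"),
                                         rs.getString("name"),
                                         rs.getString("parent")));
              }
          }
      }
      return chain;
  }

One round trip regardless of chain depth, which is exactly the win over the recursive method.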
ednite
This is a perfect example of how AI can be a powerful partner in development when you already know what you’re trying to solve.
Feels like the real sweet spot right now is: humans define the goal and validate the work, AI just helps fill in the middle.
reverendsteveii
if they make an AI that lets you format code readably on HackerNews we are all sauteed
rubslopes
> Even if I prompt with security as the top priority...
A bit of a counterpoint: I've been programming for 12 years, but only recently started working in webdev. I have a general understanding of cybersecurity, but I never had to actively implement security measures in my code until now—and my boss is always pushing for speed.
AI has been incredible in that regard. I can ask it to review my code for vulnerabilities, explain what they are, and outline the pros and cons of each strategy. In just a month, I've learned a lot about JWT, CSRF, DDoS protection, and more.
In another world, my web apps would be far more vulnerable.
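To make that concrete, here's the flavor of fix such a review surfaces: a minimal synchronizer-token CSRF filter, as an illustrative sketch only (in Java since that's the language elsewhere in the thread). It assumes the token was issued into the session at login; the "csrfToken" attribute and "X-CSRF-Token" header names are arbitrary choices, not any framework's convention.

  import jakarta.servlet.*;
  import jakarta.servlet.http.*;
  import java.io.IOException;
  import java.security.MessageDigest;

  public class CsrfFilter implements Filter {
      @Override
      public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
              throws IOException, ServletException {
          HttpServletRequest request = (HttpServletRequest) req;
          HttpServletResponse response = (HttpServletResponse) res;
          String method = request.getMethod();
          // Only state-changing methods need the token check.
          if (!method.equals("GET") && !method.equals("HEAD") && !method.equals("OPTIONS")) {
              String expected = (String) request.getSession().getAttribute("csrfToken");
              String actual = request.getHeader("X-CSRF-Token");
              // Constant-time comparison, so the token can't be guessed byte by byte.
              boolean ok = expected != null && actual != null
                      && MessageDigest.isEqual(expected.getBytes(), actual.getBytes());
              if (!ok) {
                  response.sendError(HttpServletResponse.SC_FORBIDDEN, "CSRF token mismatch");
                  return;
              }
          }
          chain.doFilter(req, res);
      }
  }

The point isn't the specific filter; it's that the AI can explain why each line is there, which is how the learning happens.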
wyclif
In that sense, AI is a tremendous learning tool. You could have learned about those subjects before, but it would have taken exponentially longer to search, scan, appropriate, and integrate them together to form a potential solution to a real-world problem.
ChrisMarshallNY
I suspect that we will be seeing some very creative supply chain compromises.
If you can train AI to insert calls to your malware server, in whatever solutions it provides, that's a huge win.
ednite
Totally agree! That’s where I think cybersecurity experts and system administrators deserve a lot of credit too. AI might help automate some of the work, but it’s also conjuring up threats that are way more complex and sneaky.
Hopefully, countermeasures and AI-powered defense tools can keep up. It's going to be some kind of arms race, for sure.
lambdasquirrel
The AI I use at work can’t set up IAM correctly, didn’t even know it needed to, let alone associate said IAM principal with the correct k8s RBAC groups. I do appreciate that it ground through a lot of boilerplate, but I’m concerned that it’s like buying a new Sony or Leica camera as a beginner photographer. As with your non-coder VBA friend, it might make them think they’re a lot more skilled than they really are. And this is specifically why I’ve never touched VBA.
ednite
Well put. And funny enough, I’m actually a complete newbie in photography and just got one of those expensive Sony cameras, so I know exactly what you mean. It’s overkill for my current skill set.
But the key difference? I’m not planning to use it as the primary photographer at my cousin’s wedding next week.
As you said, the real danger isn’t just the tool, it’s the false confidence it gives. AI can make us feel a bit too capable and too fast, and that’s when things can go sideways.
FirmwareBurner
>The AI I use at work can’t set up IAM correctly
Can you? :D
Based on my favorite quote from the I, Robot (2004) movie with Will Smith, when he got roasted by a robot:
W.S.: "You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?"
Robot: "Can you?"
Which I think applies to a lot of anti-AI sentiments. Sure, the AI doesn't know how to do X, Y, Z, but then most people also don't know X, Y and Z. Sure, people can learn how to do X, Y and Z, but then so can an AI. ChatGPT couldn't initially draw anime; now they trained it to do that and it can. Similarly, it can also learn how to set up IAM correctly if they bother to train it well enough for that task. But for now, like you discovered, we're far away from an AI that's universally good at everything, though I expect specialisation will arrive sooner rather than later.
bigstrat2003
> Can you?
Yes, I can. In fact, the entire reason I think AI is not a useful tool (contrary to the hype) is that it can't do many things I find easy to do. I'm certainly not going to trust it with the things I find hard to do (and therefore can't check it effectively) in that case!
For example, a month or two ago I was trying to determine if there's a way to control the order in which CloudFormation creates+deletes objects when a replacement is needed. AI (including both ChatGPT and AWS' own purpose built AI) insisted yes, hallucinating configuration options that straight up don't exist if you try to use them. The ability to produce syntactically valid configuration files (not even necessarily correctly doing what you want them to) should be table stakes here, but I find that AI routinely can't do it.
diggan
> Even if I prompt with security as the top priority
LLMs do really poorly with general statements like that, so I'm not sure it's unexpected. If you put "Make sure to make it production worthy", you'll get as many different answers as there are programmers, because not even we human programmers agree on what that really means.
Same for "Make sure security is the top priority", most programmers would understand that differently. If you instead spell out exactly what behavior you expect from something like that (so "Make sure there are no XSS's", "Make sure users can't bypass authentication" and so on), you'll get a much higher probability it'll manage to follow those.
ednite
Totally agree. I was simplifying for discussion’s sake, but yeah, I learned the hard way that vague prompts will lead you down that rabbit hole. If you’re not crystal clear, you get everything but what you actually wanted.
These days, I make sure every “t” is crossed and every “i” dotted when giving instructions. Good point, definitely a lesson worth repeating.
kalleboo
Although the promise of the latest generation of "thinking" models is that they're supposed to do that themselves: in the internal chat thread of their "thought" process, they're supposed to go "what does secure mean for a web app", list those items (which they can already easily do if you prompt them separately), and then work through that list.
geoka9
> It's great for... getting unstuck
That one I never considered until it happened to me. It's funny that the AI-provided implementation was mostly off, but it was a start. "Blank canvas paralysis" is a thing.
socalgal2
> So yeah, for me AI isn’t a replacement.
I agree with everything you wrote. The question is, what about in 6 months? 2 years? 4 years? Just a year ago (not sure the exact timeline) we didn't have the systems we have today that will edit multiple files, iterate over compilation errors and test failures, etc.... So what's it tomorrow?
ednite
That’s the part that keeps me up at night at times. I’m already seeing my workflow shift fast, and the next 6–12 months feel even more unpredictable.
Staying adaptable feels like the only real option, but I get that even that might not be enough for everyone.
eranation
My personal thoughts on this are
- A good lawyer + AI will likely win in court against a non lawyer with AI who would likely win in court against just an AI
- A good software engineer + AI will ship features faster / safer vs a non engineer with AI, who will beat just AI
- A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI
As long as a human has a marginal boost to AI (either by needing to supervise it, regulation, or just AI is simply better with a human agency and intuition) - jobs won't be lost, but the paradox of "productivity increases, yet we end up working harder" will continue.
P.S. There is the classic example I'm sure we are all aware of: autopilot has been capable of taking off and landing since the '80s. I personally prefer to keep the pilots there, just in case.
bluefirebrand
> A good doctor + AI will save more lives than a non doctor + AI, who will perform better than just AI
Ok, what about an Average doctor with an AI? Or how about a Bad doctor with an AI?
AI-assisted medical care will be good if it catches some amount of misdiagnoses
AI will be terrible if it winds up reinforcing misdiagnoses
My suspicion is that AI will act as a force multiplier to some extent, more than a safety net
Yes, some top percentage of performers will get some percentage of performance gain out of AI
But it will not make average performers great or bad performers good. It will make bad performers worse
bee_rider
It could make good doctors faster rather than better. This could allow them to help people who wouldn’t be able to afford them otherwise.
bluefirebrand
Doctors already barely see patients for more than a couple of minutes, you want them to be faster?
vjvjvjvjghv
It's pretty much guaranteed that it will be used to increase profits. Caring for patients is secondary.
eranation
I believe that AI will help close the gap (e.g. a bad doctor with AI will be, on average, better than just a bad doctor)
bluefirebrand
Maybe one day, but not right now
My experience with the current stuff on the market is you get out what you put in
If you put in a very detailed and high quality, precisely defined question and also provide a framework for how you would like it to reason and execute a task, then you can get out a pretty good response
But the less effort you put in the less accurate the outcome is
If a bad doctor is someone who puts in less effort, is less precise, and less detail oriented, it's difficult to see how AI improves on the situation at all
Especially current iterations of AI that don't really prompt the users for more details or recognize when users need to be more precise
DSMan195276
IMO the problem is that, at least right now, the AI can't examine the patient itself; it has to be fed information by the doctor. This step means bad doctors are likely to provide the AI with bad information and reduce its effectiveness (or cause the AI to reinforce the biases of the doctor by only feeding it the information they see as relevant).
mandevil
Not sure what will happen with software engineers, lawyers, or doctors, but I do know how computer assistance worked out decades ago when it took over for retail clerks: the net effect was to de-skill the job and damage it as a career. By bringing everyone up to the same baseline, management lost interest in building skills above that baseline.
Until the 1970s, shop clerk was a medium-skill, medium-prestige job. Each clerk had to know the prices for all the items in your store because of the danger of price-tag switching (1). Clerks who knew all the prices were faster at checking out than clerks who had to look up the prices in their book, and reducing customer friction is hugely valuable for stores. So during this era store clerk was a reasonable career: you could have a middle-class lifestyle from working retail, there were people who went from clerk to CEO, and even those who weren't ambitious could find a stable path to support their family.
Then the UPC code, laser scanner, and product/price database came along in the 1970s. The UPC code is printed in a more permanent way, so switching tags is not as big a threat (2). Changing prices is just a database update, rather than printing new tags for every item and having the clerks memorize the new price. And there is a natural-language description of every item that the register can display, so you don't have to keep the clerk around to tell the difference between the expensive dress and the cheap dress: it will say the brand and description. This vastly improved the performance of a new clerk, but also decreased the value of the more experienced clerk. The result was a great hollowing-out of retail-sector employment, the so-called "McJob" of the 1990s.
But the result was things like Circuit City (in its death throes) firing all of its experienced retail employees (3) because management didn't think that experience was worth paying for. This is actually the same sort of process that Marx had noted about factory jobs in the 19th century, what he called the alienation of labor: capital investment replacing skilled labor, to the benefit of the owners of the investment. But since retail jobs largely code as female, no one really paid much attention to it. It never became a subject of national conversation.
1: This also created a limit on store size: you couldn't have something like a modern supercenter (e.g. Costco, Walmart, Target) because a single clerk couldn't know the prices for such a wide assortment of goods. In department stores in the pre-computer era every section had its own checkout area, you would buy the pots in the housewares section and then go to the women's clothes area and buy that separately, and they would use store credit to make the transaction as friction-less as possible.
2: Because in the old days a person with a price tag gun would come along and put the price directly onto each item when a price changed, so you'd have each orange with a "10p" sticker on it, and now it's a code entry and only the database entry needs to change, the UPC can be much more permanently printed.
3: https://abcnews.go.com/GMA/story?id=2994476 all employees paid above a certain amount were laid off, which pretty much meant they were the ones who had stuck around for a while and actually knew the business well and were good at their jobs.
vjvjvjvjghv
Considering how little interest doctors have taken in some of my medical problems I'll be happy to have AI help me to investigate things myself. And for a lot of people in the US it may make the difference between not being able to afford a doctor vs getting some advice.
touisteur
You (and I) prefer to keep the pilots there, but still, there's a push to need only one person and not two in that plane/cockpit. I have little to no doubt we'll have to relearn some hard lessons after we've AI'd up pilots.
dehrmann
I know airlines are a cutthroat business, but wouldn't the copilot add no more than $1 per passenger for the average flight?
ponector
Remember that success story when an airline removed one olive from salads served onboard?
$1 per passenger is huge! For Ryanair that's 200m annually.
cj
I wanted to say maybe the 2nd pilot could double as a flight attendant if they're not needed full time in the cockpit. Still retains redundancy while saving the airline money.
The problem with that is most skills need to be practiced. When you only need to use your skills unexpectedly in an emergency, that may not end well. The same applies to other fields where AI can do something 95% of the time, with human intervention required in the 5% case. Is it realistic to expect humans to continue to fill that 5% gap if we allow our skills to wane by outsourcing the easiest 95% of a job and keeping only the hardest 5% for ourselves?
HeyLaughingBoy
> maybe the 2nd pilot could double as a flight attendant
Have you ever managed people?
bigbuppo
And yet there's plenty of evidence that having three pilots in the cockpit is usually a better option when the inevitable happens.
touisteur
For those who can stomach it, reading aviation accident reports and listening to actual cockpit voice recordings, you very often encounter the cognitive load of a two-person team trying to get through a shitty moment.
Richard de Crespigny, who flew the Qantas A380 that blew one of its engines after departure from Changi, explains very clearly and in a gripping way the amount of stuff happening while trying to save an aircraft.
Lots of accidents already happen at the seams of automation. I don't think we're collectively ready for a world with much more automation, especially in the name of more shareholder value or a 4-dollar discount.
HenryBemis
You kinda said it, but you didn't hit the nail on the head. Yes, we need the pilots. But, to repeat my own example from my current mega-corp employer: I am about to develop a solution using an LLM (premium/enterprise) that will stop a category of employees from reaching 50; it will remain at 20 and, with organic wear & tear, will drop to 10, which will be the 'forever number' (until the next 'jump' in tech).
So yes, we keep pilots, but we keep _fewer_ pilots.
hollerith
It's unclear what your numbers refer to. If I had to guess, I'd say 50 means the number of employees in the category employed by your employer, but I'm not sure.
JSR_FDED
That’s all well and good for the humans with experience, for whom AI is a force multiplier.
My concern is for the juniors: there are going to be far fewer opportunities for them to get started in careers.
eranation
It’s all supply and demand.
When the market pool of seniors runs dry, and as long as hiring a junior + AI is better than a random person + AI, it will balance itself.
I do believe the “we have a tech talent shortage” was and is a lie; the shortage is of tech talent willing to work for less. Everyone was told to just learn to code and make 6 figures out of college. This drove over-supply.
There is still a shortage of very good software engineers, just not a shortage of people with a computer science degree.
esafak
How did commercial pilots solve the problem?
kgilpin
In the US, “Junior” pilots typically work as flight instructors until they have built up enough time to no longer be junior. 1500 flight hours is the essential requirement to be an airline pilot, and every hour spent giving instruction counts as a flight hour. It’s not the only way, but it’s the most common way. Airlines don’t fund this; pilots have to work their way up to this level themselves.
In Europe it’s different.
bluefirebrand
Accreditation, Licensing and Unions
Things that software developers are extremely allergic to
vjvjvjvjghv
Accepting that people need to be trained within a system. As of now it's easy enough for software devs to get started without formal training. I don't see that changing. Smart people will be able to jump directly to senior level with the help of AI.
dgfitz
Not all of them of course, but a lot of them are ex-military.
gh0stcat
My concern though is that over time, a "good ANYTHING" + AI will converge to just AI, as you continue to outsource your thinking processes to AI, it will create dependence like any tool. This is a problem for any individual's long term prospects as a source of expertise. How do you think one might combat this? It seems the skills are at odds, and you are in the best position at the very START of using AI, and then your growth likely slows or stops completely as you migrate to thinking via AI API calls.
programmertote
I generally agree with your thoughts.
I am also concerned about a couple of important things: human skill erosion (a lot of new devs who use AI might not bother to learn the basics that can make a difference in production/performance, security, etc.), and human laziness (and thus gradually growing the habit of trusting/relying on AI's output entirely).
qgin
When it's been studied so far, AI alone does better than AI + human doctor
>Surprisingly, in many cases, A.I. systems working independently performed better than when combined with physician input. This pattern emerged consistently across different medical tasks, from chest X-ray and mammography interpretation to clinical decision-making.
https://erictopol.substack.com/p/when-doctors-with-ai-are-ou...
jcfrei
The scenario you describe leads to a massive productivity boost for some engineers and no work left for the rest. Or in other words: The profit share of labour compared to capital becomes even smaller. Meaning an even more skewed income distribution, where a few make millions and the rest of the currently employed software engineers / lawyers, etc will become bartenders or greeters at walmart.
eranation
When backlogs run dry and users stop coming up with feature requests/bugs faster than humans + AI can tackle them? Yes.
Until then, adding one more engineer (with AI) will have a better ROI than firing one.
Engineers who are purists and refuse to use AI might end up with a wake-up call. But they are smart; they’ll adapt too.
norir
> A good software engineer + AI will ship features faster / safer vs a non engineer with AI, who will beat just AI
Safer is the crucial word here. If you remove it, I'd argue the ordering should be reversed.
I also will point out that you could replace ai with amphetamines and have close to the same meaning. (And like amphetamines an ai can only act through humans, never solely on its own.)
lenerdenator
It's also a great example of why tech executives shouldn't be trusted, at all.
"My thing will break our entire economy. I'm still gonna build it, though." - statements dreamed up by the utterly deranged
xeromal
You could say that about a number of things we've benefited from. The cotton gin, the plow, industrialization, the car, electricity, alarm clocks, etc.
roywiggins
Uh, not everyone benefited from the cotton gin, to put it mildly. Though I suppose it depends how tightly or loosely you define "we."
It probably wasn't even a net good for the South, being blamed for locking it into an agrarian plantation economy and stunting manufacturing in the states that depended on cotton.
mplanchard
This is a popular meme[0] about our industry, in fact:
> Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale
> Tech Company: At long last, we have created the Torment Nexus, from [the] classic sci-fi novel, Don’t Create the Torment Nexus
ergonaught
They shouldn't be trusted for any number of reasons, but the need for social systems to adapt to reality isn't their fault.
lenerdenator
It wouldn't be their fault if the economic class that they were a part of weren't actively opposed to changing those social systems.
I wouldn't care nearly as much about AI were there a stronger social safety net in the US. However, that's not going to happen anytime soon, because that requires taxes to pay for, and the very wealthy do not like paying those because it reduces their wealth.
const_cast
It kind of is their fault when they're simultaneously lobbying against those social systems and designing their platforms in a way to further align with that propaganda. It seems very intentional to me.
Ozarkian
You can't constrain an idea whose time has come. China will continue developing AI regardless of whether we will. We have to do it just to stay in the race.
It's the same thing with the atomic bomb. There wasn't really a choice not to do it. All the theoretical physicists at the time knew that it was possible to develop the thing. If the United States hadn't done it, someone else would have. Perhaps a few years or a decade later, but it would have happened somewhere.
bigstrat2003
> It's the same thing with the atomic bomb. There wasn't really a choice not to do it.
There is always a choice. "Someone else will do this if I don't" does not absolve one from moral responsibility. Even if it is inevitable (which things generally are not, claiming they are is a rationalization most of the time), you still are culpable if you're the one who pulls the trigger.
Imustaskforhelp
Sam Altman literally said something like this; I forget which YouTube video.
https://www.reddit.com/r/ChatGPT/comments/1axkvns/sam_altman...
It's crazy. I don't know what else to say, because my jaw drops every time I hear something like this. Humanity is a mess sometimes.
some_random
If you believe that breaking the economy is good you're obviously going to do it. If you believe that if you don't break the economy one of your many competitors will, you're obviously going to do it.
lenerdenator
If it's good, why's Altman (reportedly) bragging about the preps he's making for societal collapse?[0]
[0] https://futurism.com/the-byte/openai-ceo-survivalist-prepper
freedomben
If OpenAI shut theirs down tomorrow and Sam Altman became a travelling monk preaching against the development of AI, do you really believe it would stop the momentum?
I don't. The cat is out of the bag. The only thing that would accomplish is giving Google and others less competition. Personally I don't have much trust in any tech companies, including OpenAI, but I'd much rather there be a field of competition than one dominant and (unchecked) leader.
hooverd
At least I can rest assured the billionaires would probably kill each other in a mad scramble to be king of the ashes.
spacemadness
At this point it’s going to break the economy anyway: even if it doesn’t end up breaking the economy itself, investors are going to retreat and pop the bubble.
freejazz
If you're a tech CEO, then maybe, yeah... I've seen bankers who are more reflective about their actions than these tech leaders. Tech would kill the goose that lays the golden eggs just because they could find enough people to believe their BS marketing and get their VC-funded startup sold.
msgodel
I like to call most of this stuff "executive sounds." My favorite recent example is the Nvidia CEO talking about how they're going to use quantum computing for ML.
saubeidl
Capitalism is a doomsday cult and these people are its prophets.
Imustaskforhelp
I have no problem with capitalism. I have a problem with the fact that there are people with so much money they can't even spend it, yet most people live paycheck to paycheck.
Maybe the solution is socialism, except you can own money up to 10 million, I guess. But I'm not sure whether that would be effective. There are definitely loopholes. I don't know.
saubeidl
I think the phenomenon in your second sentence is a direct result of unbridled capitalism.
Maybe the solution is a simple as a social market economy, maybe it takes something a bit more radical - but the extreme techno capitalism that our industry's leaders are trying to advance is definitely a step in the wrong direction.
roywiggins
What if it just makes most jobs worse, or replaces good jobs with more, worse jobs? "Meat robot constantly monitored and directed by AI overlords" is technically a job.
lapcat
> What if it just makes most jobs worse, or replaces good jobs with more, worse jobs?
Right. Consider:
1) Senior engineer writing code
vs.
2) Senior engineer prompting and code reviewing LLM agent
The senior engineer is essential to the process in each case, because the LLM agent left to its own devices will produce nonfunctional crap. But think about the pleasantness of the job and the job satisfaction of the senior engineer? Speaking just for myself, I'd rather quit the industry than spend my career babysitting nonhuman A.I. That's not what I signed up for.
shafyy
Same. I actually like writing code, reviewing code that my colleagues have written, having interesting technical discusisons. I don't want to spend my days reviewing code that some AI has written.
But I guess if you don't like writing code, and you are "just doing it for the money", having an LLM write all the code for you is fine. As long as it passes some very low bar of quality, which, let's be honest, is enough for most companies (i.e. software factories) out there.
AstroBen
As of right now it's actually making my job much more enjoyable. It lets me focus on the things I enjoy thinking about - code design, architecture, and the higher level of how to solve problems
I haven't seen any evidence it's made progress on these, which is nice.
sasmithjr
I don't think it's an exclusive choice between the two, though. I think senior engineers will end up doing both. Looking at GitHub Copilot's agent, it can work asynchronously from the user, so a senior engineer can send it off to work on multiple issues at once while still working on tasks that aren't well suited for the agent.
And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.
lapcat
> And really, I think many senior engineers are already doing both in a lot of cases where they're helping guide and teach junior and early mid-level developers.
Babysitting and correcting automated tools is radically different from mentoring less experienced engineers. First, and most important IMO, there's no relationship. It's entirely impersonal. You become alienated from your fellow humans. I'm reminded of Mark Zuckerberg recently claiming that in the future, most of your "friends" will be A.I. That's not an ideal, it's a damn dystopia.
Moreover, you're not teaching the LLM anything. If the LLMs happen to become better in the future, that's not due to your mentoring. The time you spend reviewing the automatically generated code does not have any productive side effects, doesn't help to "level up" your coworkers/copilots.
Also, since LLMs aren't human, they don't make human mistakes. In some sense, reviewing a human engineer's code is an exercise in mind reading: you can guess what they were thinking, and where they might have overlooked something. But LLMs don't "think" in the same way, and they tend to produce bizarre results and mistakes that a human would never make. Reviewing their code can be a very different, and indeed unpleasant WTF experience.
bluefirebrand
Guiding and teaching developers is rewarding because human connections are important
I don't mentor juniors because it makes me more productive; I mentor juniors because I enjoy watching a human grow and develop and gain expertise
I am reminded of reports that Ian McKellen broke down crying on the set of one of The Hobbit movies because the joy of being an actor for him was nothing like acting on green screen sets delivering lines to a tennis ball on a stick
roywiggins
This is more or less what happened to artisans during the industrial revolution: sell your tools, become a widget hammerer on the assembly line. Lots of jobs created for widget hammerers. Not a great deal for a lot of people. Deskilling jobs demonstrably sucks for the people with skills!
HeyLaughingBoy
Same here. But for every one of us, there are probably 10 people out there who'd be more happy babysitting an LLM than actually writing code.
Mobius01
This reminded me of a short story where the AI disruption starts as work-management software directing workers in a burger shop.
qoez
I can't make up my mind if I'd prefer an AI boss or not. Human bosses can be quite terrible and not having to deal with an emotional being seems kinda nice.
monknomo
Your AI boss isn't going to bend rules for you, and isn't going to advocate for you. You can see how an AI boss would go by looking at Amazon warehouses and drivers, or call centers, and how those folks are managed. It's already by computer; they already use machine learning to detect when people are deviating from expected rails, and you can decide for yourself if that looks appealing.
johnpaulkiser
Let me help you. An AI boss would be 100x worse.
coaksford
Don’t worry, it wouldn’t replace your human boss, you’d just have both bosses.
roywiggins
Your human boss can't feasibly maintain a panopticon with only their human brain, AI arguably can. Every single word uttered or pixel emitted can be saved and analyzed, for relative pennies.
LPisGood
AI bosses can also be quite nice and having the benefits of reporting to an emotional being is kinda nice.
AstroBen
Don't forget that AI boss would be controlled by a human.. a human who has no idea how it works
wnc3141
I think we saw that with the "vibecession" from a year or so ago. People were technically employed through DoorDash and other dead-end jobs while overall economic agency shrank.
skwee357
The problem with “AI will replace all jobs” hype, is that it also comes with a flavor of “and we all will do creative work”, while in reality AI replaces all the creative work and people go back to collecting garbage or other physically demanding and mundane jobs.
bufferoverflow
Why would you think garbage collecting or other mundane jobs won't be automated when much more complex ones are?
If AI + robotization gets to the point where most jobs are automated, humans will get to do what they actually want to do. For some it's endless entertainment. For others it's science exploration, pollution cleanup, space colonization, curing disease. All of that with the help of the AIs.
skwee357
By the time robots are able to give personal training in, say, boxing, or fix people’s roofs, humanity will long be dead or turned into a power source for said robots.
mplanchard
Turns out a simulacrum of intelligence is much easier than dexterous robots. Robots are still nowhere near being able to fold laundry, as far as I know.
freedomben
Yep, that's going to be the main outcome I suspect. The bottom 50% or maybe even 80 to 90% of knowledge workers are going to have to go back to physical work. That too will eventually be automated, but I suspect things like construction work (including the many trades wrapped up therein) will be toward the end of that.
DebtDeflation
Maybe that's why the current administration is pushing so hard to bring back low end manufacturing.
If you're a SWE, Accountant, Marketer, HR person, etc. put out of work by AI, now you can screw together iPhones for just over minimum wage. And if we run out of those jobs, there's always picking vegetables now that all the migrants are getting deported.
It would not surprise me one bit if the Tech CEOs see things this way.
saubeidl
That's how you get a revolution.
theSherwood
The analogies to previous technologies always seem misguided to me. Maybe it allows us to make some predictions about the next few years, but not more than that. We do not know when/where we will hit the limits on AI capabilities. I think this is completely unlike any previous technology. AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens? 40,000 years ago? And this new thing has been in development for what, 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is. Analogies to industrial agriculture (a very big deal, historically) and other technologies completely miss the scope of what's happening.
Imustaskforhelp
Let me give my two cents. I remember when people used to think AI models were all the rage and one day we were going to get superintelligence.
I am not sure if we can call the current SOTA models that. Maybe, maybe not. But it's a little disappointing.
Now everyone's saying that AI agents are the hype and the productivity gains are in that; the recently released Darwin Gödel paper, for example.
On the same day (yesterday), the HN front page had an AI blog post from fly.io, and the top comment was worried about AI excelling, saying that as devs we should do something in case companies actually reach the intelligence they're hyping.
On the same day, builder.ai turned out to be actually Indians.
The companies are most likely giving us hype because we are giving them valuations. The hype seems not worth it. Everyone's saying that all models are really good and now all that matters are vibes.
So from all of this, here is what I've taken away.
Trust no one. Or at least don't take the claims of AI hype companies at face value. I genuinely believe that AI is going to reach a plateau of sorts at a moment like ours, and as someone who tinkers with it, I am genuinely happy with its current scale. I kind of don't want it to grow more, I guess, and I kind of think a plateau might come soon. But maybe not.
I don't think it's analogous to species, but maybe that's me being optimistic about the future. I genuinely don't want to think about it too much, as it stresses my brain and makes even my present... well, not a present (gift).
theSherwood
LLMs have only really been around a handful of years and what they are capable of is shocking. Maybe LLMs hit a wall and plateau. Maybe it's a few years before there's another breakthrough that results in another step-change in capabilities. Maybe not. We can focus on the hype and the fraud and the marketing and all the nonsense, but it's missing the forest for the trees.
We genuinely have seen a shocking increase in reasoning abilities over the course of only a decade from things that aren't human. There may be bumps in the road, but we have very little idea how long this trajectory of capability increases will continue. I don't see any reason to think humans are near the ceiling of what is possible. We are in uncharted territory.
Imustaskforhelp
I may be wrong, I usually am, but wasn't AI basically possible even in the 1970s? Back then there were of course no GPUs, and basically AlexNet showed that GPUs are really effective for AI, which is what started the AI snowball.
I am not sure, but in my opinion a hardware limitation might be real. These models are training on 100k GPUs and, like, the whole totality of the internet. I am not sure, but I wouldn't be too certain about AI.
Also, maybe I am biased. Is it wrong that I want AI to just stay here, at the moment it is right now? It's genuinely good, but anything more feels to me as if it might be terrifying (if the AI companies' hype genuinely comes true).
tptacek
I've got no dog in this hunt at all; the idea that any given AI company could be a house of cards is not only plausible but is the bet I would place every time. But the whole "builder.ai is all Indians" thing is something 'dang spent half an hour ruefully looking into yesterday, and it turned out not to be substantiated.
Imustaskforhelp
I am not sure, but I read the HN post a little and didn't see that part, I suppose.
But even then, people were defending it, saying "so what, they never said they weren't doing it" or something. So I of course assumed that people were defending what's true.
Maybe not, but such a rumour was quite a funny one to hear as an Indian myself.
kypro
While robotics are still relatively immature, I would think of AI as something akin to a remote worker.
Anything a human remote worker can do, a superhuman remote worker will be able to do better, faster and for a fraction of the cost – this includes work that humans currently do in offices but that could theoretically be done remotely.
We should therefore assume that if (when) AI broadly surpasses the capabilities of a human remote worker, it will no longer make economic sense to hire humans for these roles.
Should we assume this, then what is the human's role in the labour market? It won't be their physical abilities (the industrial revolution replaced the human's role here), and it won't be their reasoning abilities (AI will soon replace the human's role here), but perhaps in jobs which require both physical dexterity and human-level reasoning ability, humans might still retain an edge? Perhaps at least for now we can assume jobs like roofing, plumbing, and gardening will continue to exist, while jobs like coding, graphic design and copywriting will almost certainly be replaced.
I think the only long-term question at the moment is how long it will take for robotics to catch up and provide something akin to human-level dexterity with super-human intelligence. At that point I'm not sure why anyone would hire a human except for the novelty of it – perhaps like the novelty of riding a horse into town.
AI is so obviously not like other technologies. Past technologies effectively just found ways to automate low-intelligence tasks and augment human strength via machinery. Advanced robotics and AI is fundamentally different in their ability to cut into human labour, and combined it's hard to see any edge to a human labourer.
But either way, even if you subscribe to the notion that AI will not take all human jobs, it seems very likely that AI will displace many more jobs than the industrial revolution did, and at a much, much faster pace. Additionally, it will target those who are most educated, which isn't necessarily a bad thing; but unlike the working class, who are easy to ignore and tell to re-skill, my guess would be that demands will be made for UBI and large reorganisations of our existing economic and political systems. My point is, the likelihood any of this will end well is close to zero, even if you just believe AI will replace a bunch of inefficient jobs like software engineering.
theSherwood
This matches my expectations for the near term pretty closely.
awb
We’ve seen tech completely eliminate jobs like phone switch operators and lamp lighters.
And it’s decimated other professions like manual agriculture, assembly line jobs, etc.
It seems like people are debating whether the impact of AI on computer-based jobs will be elimination or decimation. But for the majority of people, what’s the difference?
yoyohello13
I think the comparisons to lamp lighters or whatever don't quite capture why this is so much worse. The training for those jobs was relatively low. You don't need a decade of school to become a lamp lighter.
So if the white-collar bloodbath is true, we have to tell a bunch of people, who have spent a significant portion of their lives training for specific jobs and may be in debt for that education, to go do manual labor or something. The potential civil unrest from this should really concern everyone.
ffsm8
You honestly think it's gonna take more than a few years for everything else to follow?
Seriously, once something is able to do 90% of a white-collar worker's job, general AI has gotten far enough for robotics to take over/decimate the other industries within the decade.
yoyohello13
Seems like that would make the civil unrest worse not better.
Peroni
>And it’s decimated other professions like manual agriculture, assembly line jobs, etc.
When Henry Ford introduced the moving assembly line, production went from hundreds of cars to thousands. It had a profoundly positive impact on the secondary market, leading to an overall increase in job creation.
I've yet to see any "AI is gonna take your job" articles that even attempt to consider the impact on the secondary market. It seems their argument is that it'll be AI all the way down which is utter nonsense.
naijaboiler
Human beings cannot run out of economically valuable things we can do for one another. Technology can profoundly change what those things are, though.
monknomo
What do you think the secondary market for knowledge work is?
cootsnuck
More knowledge work. It's disheartening for me to see so many people think so little about their own abilities.
There's a reason we can still spot the sterile whiff of AI written content. When you set coding aside, the evidence just hasn't shown up yet that AI agents can reliably replace anything more than the most formulaic and uninspired tasks. At least with how the tech is currently being implemented...
(There's a reason these big companies spend very very little time talking about the power of businesses using their own data to fine-tune or train their own models...)
ilaksh
The biggest reason there is such a difference of opinion on this is that people have fundamentally different worldviews. If you have bought into the singularity concept and exponential acceleration of computing performance, then you are likely to believe that we are right on track to shortly have smarter-than-human AI. This is also related to just having a technology-positive versus negative worldview. Many people like to blame technology for humanity's failings when in reality it's a neutral lever. But that comes down to the way people look at the world.
People who don't "believe" in the exponential of computing (even though I find the charts pretty convincing) seem to always assume that AI progress will stop near where it is. With that assumption, the skepticism is reasonable. But it's a poorly informed assumption.
https://en.wikipedia.org/wiki/Technological_singularity#Expo...
I think that some of that gets into somewhat religious territory, but the increasing power and efficiency of compute seems fairly objective. And also the intelligence of LLMs seems to track roughly with their size and amount of training. So this does look like it's about scale. And we continue to increase the scale with innovations and new paradigms. There will likely be a new memory-centric computing paradigm (or maybe multiple) within the next five years that increases efficiency by another two orders of magnitude.
Why can I just throw out a prediction about orders of magnitude? Because we have increased the efficiency and performance by orders of magnitude over and over again throughout the entire history of computing.
parineum
I think you're missing a third contingent that, I think, is actually the most influential.
It's not unlike the crypto space: you've got your true believers, your skeptics, and thirdly, your financially motivated hype men. The CEOs of these publicly traded companies, and of companies that want to be bought, are the latter, and they are the ones behind the "the AI lies so we don't turn it off!!!" stories that get spun into clickbait headlines.
ilaksh
I think those CEOs like Altman and Amodei do find it convenient to hype their products like that, but also they believe in computing exponentials and artificial superintelligence etc.
tzs
>> The automation of farm work is the most notable and most labor-impacting example we have from history, rapidly unemploying a huge portion of human beings in the developing economies of the late 19th and 20th centuries. And yet, at the conclusion of this era (~1940s/50s), the conclusion was that “technological unemployment is a myth,” because “technology has created so many new industries” and has expanded the market by “lowering the cost of production to make a price within reach of large masses of purchasers.” In short, technological advances had created more jobs overall
From the late 19th century to the 1940s/50s is half a century or more. It's not really reassuring to middle-aged workers who lose their jobs to new technology that 50 years later there will overall be more jobs available.
game_the0ry
I will likely be leaving tech bc of business execs getting horny and skeeting all over each other at the cost savings they perceive.
The flip side is that now I am using AI for my own entrepreneurial endeavors. Then I get to be the business exec, except my employees will be AI workflows.
And I never have to deal with a business exec ever again.
lddemi
Until you have to hire one :)
game_the0ry
True true. But I want to try to stay at the "solopreneurship" level for as long as I can pull it off. I would prefer not to have too much influence over other people's lives.
cootsnuck
Same here. I'm currently doing the soloist route as a consultant. Going well but I am reaching a point where I'm starting to need help.
Even if you do end up having the best kind of problem and have to scale your business, there are other ways to organize work besides the same ol' tired hierarchy.
Enspiral is one real-life example I can think of. They're an entrepreneurial collective in New Zealand that has figured out its own way of organizing collaboration without bosses/execs. Seems to be working fine for them. (Other types of worker cooperatives/collectives exist too; they're just a great example.)
I'd rather dare to try to make something perhaps more difficult at first but that allows me to avoid recreating the types of working conditions that pushed me to leave the rat race.
tines
People compare AI to the automation that happened in e.g. car factories. Lots of people were put out of jobs, and that’s just the way things go, they say.
But the difference is that automotive automation did create way more jobs than it destroyed. Programmers, designers, machine maintainers, computer engineers, mechanical engineers, materials scientists all have a part to play in making those machines. More people are employed by auto manufacturers than ever before, albeit different people.
AI isn’t the same really. It’s not a case of creating more-different jobs. It just substitutes people with a crappier replacement, puts all the power in the hands of the few companies that make it, and the pie is shrinking rather than growing.
We will all pay for the damage done in pure pursuit of profit by shoehorning this tech everywhere.
palmotea
I think one interesting way to frame AI is that it will "degrade" jobs: force speed-ups and remove much of the enjoyable and engaging aspects and replace them with drudgery.
tines
That’s a good way to think of it. I think it’s degrading every aspect of life. We’re so obsessed with metrics and efficiency that we don’t know how to live any more.
palmotea
Yeah, modern society (essentially neoliberal capitalism) does not prioritize quality of life. The apotheosis is maximum output (shareholder profits), even if that means the vast majority of people are miserable and unhappy (because the low quality of their work life is not compensated by the products and services they're given access to).
sct202
It's the uncertainty with the transition. Will I, in my mid-career, be able to get one of these new jobs that spawn out as a result of AI, or will I be displaced into something lower-paying as result. TFA kind of just glosses over the people who get displaced in each transition like a footnote.