Nvidia CEO criticizes Anthropic boss over his statements on AI
171 comments
June 15, 2025
kbos87
zozbot234
"Massive disruption" of what kind? Current AI abilities make white-collar work more productive and potentially higher-paid, not less.
kbos87
Why would my employer pay me more for using their AI? I am already massively more productive at work using AI. I'm not getting paid more, and I'm not working fewer hours. The road we are headed down is one where all of the economic benefits go straight to the owning class.
MostlyStable
For the same reasons that, on average and in general, increases in productivity have led to increases in wages across history. It's not universal and it's not instantaneous, but the base assumption should be that increases in productivity will lead to increases in wages in the long run, and we should need specific reasons to believe otherwise in a given case.
JumpCrisscross
> Why would my employer pay me more for using their AI?
We’re on HN. AI makes it easier for you to disrupt your employer.
jstummbillig
Owning what? Your employer (likely) owns little of value in a world of cheap AI software generation. Open-source models are already good enough to code with, and will obviously get better. We have to believe some really weird stuff for this vision to be coherent, for example an nvidia-everything-world, where they are not only the sole provider of hardware but also control access to all relevant models and all software products that any (non-tech) business needs to use AI.
If by "owning class" you actually mean "all people with agency" then, yeah, I agree.
charcircuit
You need to switch jobs in order to get paid at market rate. Companies have found that they do not need to keep up with the market rate to retain employees.
mjr00
Why would your employer pay you more for using Python/Java/JavaScript? You're massively more productive when using those languages instead of C for many common development tasks.
Did the introduction of Python drastically reduce software developer salaries?
Arainach
Productivity per capita is dramatically up since the 1970s. Wages are flat. Employers are greedy and short sighted.
Employers would rather pay more to hire someone new who doesn't know their business than give a raise to an existing employee who's doing well. They're not going to pay someone more because they're more productive, they'll pay them the same and punish anyone who can't meet the new quota.
MostlyStable
This is not true. Wages did stagnate from about the mid-70s until about the mid-90s, but median real wages have been increasing steadily since then [0]
tiahura
Now, imagine a 70s-to-present where the labor supply curve didn't massively shift because of women entering the workforce and foreign trade.
uhhhhhhh
Companies are actively not hiring, expecting AI to compensate while still delivering growth. I have seen these same companies giving smaller raises and fewer promotions, and eliminating junior positions.
The endgame isn't more employees or paying them more. It's paying fewer people, or no skilled people, when possible.
That's a fairly massive disruption.
seadan83
We're just 3 years past a giant hiring binge and a similar amount of time past zero interest rates; the US economy has been threatening recession for two years, economic uncertainty is very high, and post-Covid there was a glut of junior engineers coming onto the market. Between all of these plausible explanations for why hiring is way down, is there macroeconomic evidence it really is AI and not anything else?
shanemhansen
I don't believe them. I believe that as we exit zero interest rates, companies have to cut back, and "we are doing AI" is easier to sell to investors and even their own employees than "yeah, we want to spend less on people".
candiddevmike
All that I've seen with AI in the workplace is my coworkers becoming dumber. Asking them technical questions they should know the answer to has turned into "let me LLM that for you" and an overly broad response. It's infuriating. I've also, hilariously, seen this in meetings, where folks are asked something and awkwardly fill time while they wait for a response, which they then try not to read word for word.
knowitnone
So if they are more productive, does that not mean companies will need fewer staff? Why would they give you more pay when they can so easily replace you? Remember, you're not doing much of the work anymore, so expect lower pay.
jaredklewis
Salaries are determined by the replacement cost of the employee in question, not their productivity. How does AI increase wages?
ffsm8
You seem to have the same opinion as kbos87 then, because given your higher productivity, do you honestly think there will not be less job openings from your employer going forward?
What you just said as a rebuttal was pretty much his point; you just didn't internalize what the productivity gains mean at the macro level, looking only at the select few who will continue to have a job.
zozbot234
Were there "less job openings from our employers" when the software industry shifted en masse from coding in FORTRAN to C/C++, then to Java and later on to Python and JavaScript? Each of these choices came with a massive gain in productivity. Why then did the software sector grow and not shrink at the macro level?
VWWHFSfQ
The USA is presently in the midst of a massive offshoring of software jobs, which will only continue to accelerate as AI becomes better. These are "white collar" jobs that will never come back.
mjr00
The USA has "presently" been offshoring software development jobs since around 2001.
I remember, because the same type of people dooming about AI were also telling me, a university student at the time, that I shouldn't get into software development, because salaries would cap out at $50k/year due to competition with low-cost offshore developers in India and Bangladesh.
zozbot234
> a massive offshoring of software jobs
Where have I heard this before? The drawbacks of offshoring are well known by now and AI does not really mitigate them to any extent.
sbierwagen
>Who gets the nice car and the vacation home?
AI will crash the price of manufactured goods. Since all prices are relative, the price of rivalrous goods will rise. A car will be cheap. A lakeside cabin will be cheap. A cottage in the Hamptons will be expensive. Superbowl tickets will be a billion dollars each.
>meager universal basic income allotment
What does a middle class family spend its money on? You don't need a house within an easy commute of your job, because you won't have one. You don't need a house in a good school district, because there's no point in going to school. No need for the red queen's race of extracurriculars that look good on a college application, or to put money in a "college fund", because college won't exist either.
The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
bigbadfeline
> AI will crash the price of manufactured goods.
Quite the opposite: persistent inflation has been with us for a long time despite automation. It's not driven by labor cost (even mainstream econ knows it); it's driven by monopolization, which corporate AI facilitates and shifts into overdrive.
> The point of AI isn't that it's going to blow up some of the social order, but that it's going to blow up the whole thing.
AI will blow up only what its controllers tell it to; that control is the crux of the problem. AI-driven monopolization allows a few controllers to keep the multitudes in their crosshairs and do whatever they want, with whomever they want, and J. Huang will make sure they have the GPUs they need.
> You don't need a house within an easy commute of your job, because you won't have one.
Remote work has been a thing for quite some time, but remote housing is still rare anyway: a house provides access not only to jobs and school but also to medical care, supply lines, and social interaction. There are places in Montana and the Dakotas that see specialist doctors only once a week or month, because the doctors fly in weekly from places as far away as Florida.
> You don't need a house in a good school district, because there's no point in going to school... and college won't exist either.
What you're describing isn't a house, it's a barn! Can you lactate? Because if you can't, nobody is going to provide you with a stall in the glorious AI barn.
amazingman
The main flaw in your framing is that physical resources are still scarce. All prices are not relative in the sense you're building your projections on.
lend000
Friends and others who have described the details of their non-technical white collar work to me over the last 15 or so years have typically evoked the unspoken response... "Hmm, I could probably automate about 50-80% of your job in a couple weeks." That's pre-AI. And yet years later, they would still have similar jobs with repetitive computer work.
So I'm quite confident the future will be similar with AI. Yes, in theory, it could already replace perhaps 90% of the white collar work in the economy. But in practice? It will be a slow, decades-long transition as old-school / less tech savvy employers adopt the new processes and technologies.
Junior software engineers trying to break into high-paying tech jobs will be hit the hardest, IMO, since employers are tech savvy, the supply of junior developers is as high as ever, and juniors will simply take too long to add more value than using Claude, unless you have a lot of money to burn on training them.
brookst
I’m very skeptical of claims that all things will always do this and never do that, etc.
IMO Jensen and others don’t know where AI is going any more than the rest of us. Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.
danans
> Your imaginary dystopia is certainly possible, but I would warn against having complete conviction it is the only possible outcome.
Absent some form of meaningful redistribution of the economic and power gains that come from AI, the techno-feudalist dystopia becomes a more likely outcome (though not a certain outcome), based on a straightforward extrapolation of the last 40 years of increasing income and wealth inequality. That trend could be arrested (as it was just after WW2), but that probably won't happen by default.
kbos87
Fair point and I absolutely acknowledge that the future AI will usher in is still very much an unknown. I do think it's worth recognizing that there is one part of the story that is very predictable because it's happened over and over again - the part where some sort of innovation creates new efficiencies and advantages. I think it's fair to debate the extent to which AI will completely disrupt the white collar working class, but to whatever extent it does, I don't think there's much argument about where the benefit will accrue under our current economic system.
autobodie
[flagged]
hnlmorg
That backlash is already happening. Which is why we are seeing the rise in right wing extremism. People are voting for change. The problem is they’re also voting for the very establishment they’re protesting against.
willis936
Surveys aren't revealing that AI legislation is a top 3 issue for constituents on either side. It might as well be under the noise floor politically.
Aurornis
AI doesn’t really register on polls of voter priorities.
timewizard
> base their answers to any questions on economic risk on their own best interests and a pretty short view of history.
We used to just call that lying.
> When AI finally does cause massive disruption to white collar work
It has to exist first. Currently you have a chat bot that requires terabytes of copyrighted data to function and shows sublinear increases in performance for exponential increases in cost. These guys genuinely seem to be arguing over a dead end.
> what happens then?
What happened when gasoline engines removed the need to have large pools of farm labor? It turns out people are far more clever than a "chat bot" and entire new economies became invented.
> that we see some form of swift and punitive backlash, politically or otherwise.
Or people just move onto the next thing. It's hilarious how small imaginations become when "AI" is being discussed.
zer00eyz
> and a pretty short view of history
Great, let's see an example!
> To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't a speculation at this point - it's a bold faced lie.
Except that innovation has led to more jobs, new industries, more prosperity, and fewer working hours. The stark example of this: you aren't a farmer: https://modernsurvivalblog.com/systemic-risk/98-percent-of-a...
Your shirt isn't a week's or a month's income: https://www.bookandsword.com/2017/12/09/how-much-did-a-shirt...
Go back to the 1960s, when automation was new. It was an expensive, long-running failure for GM to put in those first robotic arms. Today there are people who have CNC shops in their garage, and the cost of starting that business is in the same price range as the pickup truck you might put in there. You no longer need accountants or payroll staff, and you're not spending as much time doing these things yourself; it's all software. You don't need a retail location or wholesale channels: build your website and app, leverage marketplaces and social media. The reality is that it is cheaper and easier than ever to be your own business... and lots of people are figuring this out and thriving.
> Do we really think that most of the American economy is just going to downshift
No I think my fellow Americans are going to scream and cry and hold on to dying ways of life -- See coal miners.
willis936
I struggle to see how AI innovation falls into the "automate creation of material goods" camp and not the "stratification of wealth" camp.
zer00eyz
This is a spurious argument at best:
There isn't a line of unemployed draftsmen out there begging for change because we invented AutoCAD: https://azon.com/2023/02/16/rare-historical-photos/
What happened to all the switchboard operators?
How about the computers, the people who used to do math, at desks, with slide rules, before we replaced them with machines?
These are all white collar jobs that we replaced with "automation".
Amazon existed before; it was called Sears. It was a catalog, so pictures, printing, and mailing in checks; we replaced all of that with a website and CC processing.
imperialdrive
Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead, at least for my daily flavor which is PowerShell. No way a double-digit amount of jobs aren't at stake. This stuff feels like it is really starting to take off. Incredible time to be in tech, but you gotta be clever and work hard every day to stay on the ride. Many folks got comfortable and/or lazy. AI may be a kick in the pants. It is for me anyway.
WXLCKNO
I've been trying every flavor of AI powered development and after trying Claude Code for two days with an API key, I upgraded to the full Max 20x plan.
Cursor, Windsurf, Roo Code / Cline, they're fine but nothing feels as thorough and useful to me as Claude Code.
The Codex CLI from OpenAI is not bad either; there's just something satisfying about the LLM straight up using the CLI.
dandaka
Claude Code works surprisingly well and is also cheaper, compared to Windsurf and Cline + Sonnet 4. The rate of errors dropped dramatically for my side projects, from "I have to check most changes" to "I have not written a line".
solumunus
It really is night and day. Most of them feel like cool toys; Claude Code is a genuine workhorse. It immediately became completely integral to my workflow. I own a small business, and I can say with absolute confidence this will reduce the number of devs I need to hire going forward.
mirkodrummer
I don't get claims like that. If AI lets me do more and be more productive with fewer people, I could also grow and scale more; that means I can also hire more and multiply growth again, because each dev will bring more and more. I'm skeptical because I don't see it happening. Actually, the contrary: more people doing more things, maybe, but not 10x nor 100x; otherwise we would see products that took 5 years to build coming out in literally 15 days.
wellthisisgreat
Hey, can you explain the appeal of Claude Code vs Cursor?
I know the context window part and Cursor RAG-ing it, but isn't IDE integration a true force multiplier?
Or does Claude Code do something similar with "send to chat" / smart (Cursor's TAB feature) autocomplete etc.?
I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?
I tried all the usual suspects in AI-assisted programming, and Cursor's TAB is too good to give up vs Roo / Cline.
I do agree Claude's the best for programming, so I would love to use its full-featured version.
JoeMattie
I would also like to know this. I've only very briefly looked into Claude Code, and I may just not understand how I'm supposed to be using it.
I currently use cursor with Claude 4 Sonnet (thinking) in agent mode and it is absolutely crushing it.
Last night I had it refactor some Django / React / Vite / Postgres code for me to speed up data loading over websocket, and it managed to:
- add binary websocket support via a custom hook
- add missing indexes to the model
- clean up the data structure of the payload
- add messagepack and gzip compression
- document everything it did
- add caching
- write tests
- write and use scripts while doing the optimizations to verify that the approaches it was attempting actually sped up the transfer
All entirely unattended. I just walked away for 10 minutes and had a sandwich.
The best part is that the code it wrote is concise, clean, and even stylistically similar to the existing codebase.
If Claude Code can improve on that, I would love to know what I am missing!
cybrjoe
I rewrote a code base that I've been tinkering on for the last 2 years or so this weekend: a complete replatform, new tech stack, UI, infra, the whole nine yards. The rewrite took exactly 3 days, referencing the old code base, online documentation, and GitHub issues, all without (mostly) ever leaving Claude.
It completely blew my mind. I wrote maybe 10 lines of code manually. It's going to eliminate jobs.
sorcerer-mar
> I fired it up but it seemed like just Claude in terminal with a lot more manual copy-pasting expected?
You should never have to copy/paste something from Claude Code...?
WXLCKNO
Claude Code has a VS Code (and therefore cursor / windsurf) extension so it will show you changes it wants to make directly in the IDE.
I still use the Cursor auto complete but the rest is all Claude Code.
Even without the extension Claude is directly modifying and creating files so you never have to copy paste.
GardenLetter27
I find it's good if you can get a really clean context, but on IRL problems with 100k+ lines of code that's extremely hard to manage.
It absolutely aced an old take-home test I had though - https://jamesmcm.github.io/blog/claude-data-engineer/
But note the problems it got wrong are troubling, especially the off-by-one error the first time as that's the sort of thing a human might not be able to validate easily.
Aurornis
> Finally gave Claude a go after trying OpenAI a while and feeling pretty _meh_ about the coding ability... Wow, it's a whole other level or two ahead,
I’ve been avoiding LLM-coding conversations on popular websites because so many people tried it a little bit 3-6 months ago, spotted something that didn’t work right, and then wrote it off completely.
Everyone who uses LLM tools knows they’re not perfect: they hallucinate sometimes, their solutions to some problems will be laughably bad, and all the other things that come with LLMs.
The difference is some people learn the limits and how to apply them effectively in their development loop. Other people go in looking for the first couple failures and then declare victory over the LLM.
There are also a lot of people frustrated with coworkers using LLMs to produce and submit junk, or angry about the vibe coding glorification they see on LinkedIn, or just feel that their careers are threatened. Taking the contrarian position that LLMs are entirely useless provides some comfort.
Then in the middle, there are those of us who realize their limits and use them to help here and there, but are neither vibe coding nor going full anti-LLM. I suspect that’s where most people will end up, but until then the public conversations on LLMs are rife with people either projecting doomsday scenarios or claiming LLMs are useless hype.
neilfrndes
Yup, Claude Code is the real deal. It's a massive force multiplier for me. I run a small SaaS startup. I've gotten more done in the last month than the previous 3 months or more combined. Not just code, but also emails, proposals, planning, legal, etc. I feel like I'm working in slo-mo when Claude is down (which unfortunately happens every couple of days). I believe that tools like Claude Code will help smaller companies disproportionately.
finlayson_point
How are you using Claude Code for emails? With an MCP connection, or just taking the output from the terminal?
unshavedyak
I purchased Max a week ago and have been using it a lot. A few experiences so far:
- It generates slop in high volume if not carefully managed. It's still working, tested code, but easily illogical. This tool scares me if put in the hands of someone who "just wants it to work".
- It has proven to be a great mental-block remover for me. A tactic I've often used in my career is to build the most obvious, worst implementation I can when I'm stuck, because I find it easier to find flaws in something and iterate than to build a perfect implementation right away. Claude makes it easy to straw-man a build and iterate on it.
- All the low stakes projects i want to work on but i'm too tired to after real work have gotten new life. It's updated library usage (Bevy updates were always a slog for me), cleaned up tooling and system configs, etc.
- It seems incapable of seeing the larger picture of why classes of bugs happen. E.g., on a project I'm Claude Code "vibing" on, it's made a handful of design mistakes that started to cause bugs. It will happily try to fix individual issues all day rather than re-architect toward a less error-prone API, despite being capable of actually fixing the API woes if prompted to. I'm still toying with the memory though, so perhaps I can get it to reconsider this behavior.
- Robust linting, formatting and testing tools for the language seem necessary. My pet peeve is how many spaces the LLM will add in. Thankfully cargo-fmt clears up most LLM gunk there.
levocardia
Nvidia is also very mad about Anthropic's advocacy for chip export controls, which is not mentioned in this article. Dario has an entire blog post explaining why preventing China from getting Nvidia's top of the line chips is a critical national security issue, and Jensen is, at least by his public statements, furious about the export controls. As it currently stands, Anthropic is winning in terms of what the actual US policy is, but it may not stay that way.
KerrAvon
Jensen is right, though. If we force China to develop their own technology they’ll do that! We don’t have a monopoly on talent or resources. The US can have a stake at the table or nothing at all. The time when we, the US, could do protectionism without shooting ourselves in the foot is well and truly over. The most we can do is inconvenience China in the short term.
orangecat
> The most we can do is inconvenience China in the short term.
If scaling holds up enough to make AGI possible in the next 5-10 years, slowing down China by even a few years is extremely valuable.
cedws
Nothing says we’re the good guys like “we’ll do whatever it takes to sandbag our competitors.” Of course, we’re the benevolent ones who will only use this tool for wealth and prosperity.
nickysielicki
> If we force China to develop their own technology they’ll do that!
They’re going to do that anyway. They already are. The reason that they want to buy these cards in the first place is because developing these accelerators takes time. A lot of time.
sorcerer-mar
Should we also give them the plans for all of our military equipment then, by the same logic?
Neither side is obviously right.
dsign
Why look at five years and say "everything is gonna be fine in five years, thus, everything is gonna be fine and we should keep this AI thing going"?
It's early days and nobody knows how things will go, but to me it looks like, in the next century or so, humans are going the way of the horse, at least when it comes to jobs. And if our society doesn't change radically, let's remember that the only way most people have of eating and clothing themselves is to sell their labor.
I'm an AI pessimist-pragmatist. If the thing with AI gets really bad for wage slaves like me, I would prefer to have enough savings to put AIs to work in some profitable business of mine, or to do my healthcare when disease strikes.
quonn
> It's early days and nobody knows how things will go, but to me it looks that in the next century or so
How is it early days? AI has been talked about since at least the 50s, neural networks have been a thing since the 80s.
If you are worried about how technology will be in a century, why stop right here? Why not take the state of computers in the 60s and stop there?
Chances are, if the current wave does not achieve strong AI, then there will be another AI winter, and what people will research in 30 or 40 or 100 years is not something that our current choices can affect.
Therefore the interesting question is what happens short-term not what happens long-term.
dsign
I said that one hundred years from now humans would have likely gone the way of the horse. It will be a finished business, not a thing starting. We may take it with some chill, depending on how we value our species and our descendants and the long human history and our legacy. It's a very individual thing. I'm not chill.
There's no comparing the AI we have today with what we had 5 years ago. There's a huge qualitative difference: the AI we had five years ago was reliable but uncreative; the one we have now is quite unreliable but creative at a level comparable to a person. To me, it's just a matter of time before we finish putting the two together, and we have already started. Another AI winter of the sort we had before seems highly unlikely to me.
quonn
I think you severely underestimate what the 8 billion human beings on this planet can and will do. They are not like horses at all. They will not allow themselves to be ruled by, like, 10 billionaires operating an AI and furthermore if all work vanishes then we will find other things to do. Just ask a beach bum or a monk or children in school or athletes or students or an artist or the filthy rich. There _are_ ways to spend your time.
You can't just judge humans in terms of economic value, given that the economy is something those humans made for themselves. It's not like there can be an "economy" without humankind.
The only problem is the current state where perhaps _some_ work disappears, creating serious problems for those holding those jobs.
As for being creative, we had GPT2 more than 5 years ago and it did produce stories.
And the current AI is nothing like a human being in terms of the quality of the output. Not even close. It's laughable, and to me it seems like ChatGPT specifically is getting worse and worse while they put more and more lipstick on the pig by making it appear more submissive and produce more emojis.
falcor84
> How is it early days?
When you have exponential growth, it's always early days.
Other than that I'm not clear on what you're saying. What is in your mind the difference between how we should plan for the societal impact of AI in the short vs the long term?
seadan83
Is it early days of exponential growth? The growth of AI to beat humans in chess and then Go took a long time. Appears to be step function growth. LLMs have limitations and can't double their growth for much longer. I'd argue they never did double, just a step function with a slow linear growth since.
The crowd claiming exponential growth have been at it for not quite a decade now. I have trouble separating fact from CEOs of AI companies shilling to attract that VC money. VCs desperately want to solve the expensive software engineer problem, you don't get that cash by claiming AI will be 3% better YoY
quonn
> When you have exponential growth, it's always early days.
Let's take the development of CPUs, where for 30-40 years observable performance actually did grow exponentially (unlike the current AI boom, where it does not).
Was it always early days? Was it early days for computers in 2005?
davemp
> …to me it looks that in the next century or so humans are going the way of the horse, at least when it comes to jobs.
I’m not sure. I think we can extrapolate that repetitive knowledge work will require much less labor. For actual AGI capable of applying rigor, I don’t think it's clear that the computational requirements are achievable without a massive breakthrough. Also, for general-purpose physical tasks, humans are still pretty dang efficient at ~100 watts, and self-maintaining.
fmbb
We have only been selling our labor for a couple of hundred years. Humanity has been around for hundreds of thousands of years.
We will manage. Hey, we can always eat the rich!
dsign
>> we can always eat the rich!
As long as they are not made out of silicon....
falcor84
And even then, we could perhaps genetically engineer ourselves to metabolize silicon.
pixl97
"Dinosaurs have been around 100 million years and they will be around 100 million more" --Dinosaurs 65.1 million years ago.
fmbb
How is that comparable?
I wrote that we have only sold our labor for a couple of hundred years. We had civilization for many thousands of years before.
Dinosaurs did not live before they lived. And they did not die because their mode of production changed. They did not have a mode of production.
Are you suggesting LLMs will exterminate humanity?
bearjaws
Pretty bad example, maybe something more like "Horses have been working for thousands of years -- horse in 1927".
jjfoooo4
The trend of AI executives predicting AI doomsday has been pretty tiresome, and I'm glad it's getting pushback. It's impossible to take it seriously given Anthropic's CEO's incentives: to thrill investors and to shape regulation of competitors.
The biggest long term competitor to Anthropic isn't OpenAI, or Google... it's open source. That's the real target of Amodei's call for regulation.
scuol
Just this morning, I had Claude come up with a C++ solution containing undefined behavior (it assumed iterator stability in a vector that was being modified) that even a mid-level C++ dev could have easily caught just by reading the code.
These AI solutions are great, but I have yet to see any solution that makes me fear for my career. It just seems pretty clear that no LLM actually has a "mental model" of how things work that can avoid the obvious pitfalls amongst the reams of buggy C++ code.
Maybe this is different for JS and Python code?
jsrozner
This is exactly right. LLMs do not build appropriate world models. And no...python and JS have similar failure cases.
Still, sometimes it can solve a problem like magic. But since it does not have a world model it is very unreliable, and you need to be able to fall back to real intelligence (i.e., yourself).
unshavedyak
I agree, but i think the thing we often miss in these discussions is how much LLMs have potential to be productivity multipliers.
Yea, they still need to improve a bit - but i suspect there will be a point at which individual devs could be getting 1.5x more work done in aggregate. So if everyone is doing that much more work, it has potential to "take the job" of someone else.
Yea, software is needed more and more, so perhaps it'll just make us that much more dependent on devs and software. But i do think it's important to remember that productivity gains always have the potential to replace devs, and LLMs imo have huge potential as productivity gains.
scuol
Oh I agree it can be a multiplier for sure. I think it's not "AI will take your job" but rather "someone who uses AI well will take your job if you don't learn it".
At least for C++, I've found it's pretty mediocre at suggesting project code (it has a tendency to drop subtle bugs all over the place, so you basically have to review its output as carefully as if you'd written it yourself). But for asking Copilot things like "Is there any UB in this file?" (not that it's perfect, but sometimes it points something out), and especially for writing tests, I absolutely love it.
unshavedyak
Yea i'm a big fan of using it in Rust for that same reason. I watch it work through compile errors constantly; i can't imagine what it would be like in JS or Python.
skerit
Sonnet or Opus? Well, I guess they both can still do that. But I just keep asking it to review all its code, to make sure it works. Eventually, it'll catch its errors.
Now this isn't a viable way of working if you're paying for it token-by-token, but with the Claude Code $200 plan ... this thing can work for the entire day, and you will get a benefit from it. But you will have to hold its hand quite a bit.
rangestransform
> assuming iterator stability in a vector that was being modified
This is the crux of an interview question I ask, and you’d be amazed how many experienced cpp devs require heavy hints to get it
phamilton
(not trolling) Would that undefined behavior have occurred in idiomatic rust?
Will the ability to use AI to write such a solution correctly be enough motivation to push C++ shops to adopt rust? (Or perhaps a new language that caters to the blindspots of AI somehow)
There will absolutely be a tipping point where the potential benefits outweigh the costs of such a migration.
ddaud
I agree. That mental model is precisely why I don’t use LLMs for programming.
pepinator
This is where one can notice that LLMs are, after all, just stochastic parrots. If we don't have a reliable way to systematically test their outputs, I don't see many jobs being replaced by AI either.
mistrial9
> just stochastic parrots
this is flatly false, for two reasons. One is that not all LLMs are equal; the models and capacities are quite different, by design. Second, a large share of standardized LLM testing probes sequential logic and other "reasoning" capacities. Repeating the "stochastic parrots" fallacy is basically proof of not having looked at the battery of standardized tests that are common in LLM development.
pepinator
Even if not all LLMs are equal, almost all of them are based on the same architecture: the transformer. So the general idea is always the same: predict the next token. It becomes more obvious when you try to use LLMs to solve things you can't find on the internet (even simple things).
And the testing does not always catch problems. You can only be sure the output is correct maybe 80% of the time, which forces you to check everything. Of course, LLMs make you faster at some tasks, and the fact that they can do so much is super impressive, but that's it.
mistrial9
A difference emerges when an agent can run code and examine the results. Most platforms are very cautious about this extension. The recent MCP spec does define toolsets and can enable these feedback loops in a way that markets and software ecosystems can adopt.
rectang
The Anthropic CEO wants companies to lay off workers and pay Anthropic to do the work instead. Is Anthropic capable enough to replace those workers, and will it actually happen? Such pronouncements should be treated with the skepticism you'd apply to any sales pitch.
leetrout
Anthropic warns that unemployment is a serious risk. Nvidia has an inflated stock price and knows how to play the game, so of course they deny any such thing, with a view not much past the next quarterly earnings call.
No surprises here.
Tepix
The reason AI companies are valued so highly is the inherent promise of replacing humans in the labor force.
akomtu
In the movie The Big Short, there is a scene where the old trader scolds two newbies: "Why are you so happy? Don't you understand that if your bet is right, it means the American economy is going to crash, that millions will lose their jobs?"
Right now we're betting on the S&P 500 going up, which is mostly backed by the belief that machines are going to replace us soon.
swalsh
Sonnet 4 changed my mind on AI safety. It can do a lot of work unattended, real work like configuring servers. If you give it a goal and a set of tools, it will get the job done. But I got freaked out the first time I used it, since I didn't realize just how good it was at pursuing its goal. I gave it a custom MCP server with limited bash commands, but one of those commands was python (I assumed Anthropic would have trained it not to be so relentless... I was wrong). With that, it gladly used python to build and execute any command I didn't give it direct access to. Sonnet 4 is scary smart and efficient.
The only hesitation I have is that it's messy. For example, since it does not have a memory (I'm using Claude Desktop), I've seen it duplicate installations/configurations of containers when it failed to find the original installation. The solution is to add language to the prompt instructing it to drop documentation, and to read documentation on everything it does.
tonyhart7
I don't care if AI eliminates jobs or not, tbh. Maybe, like the internet, new jobs will be created, but maybe not.
The only thing I'm certain of is that I'll take advantage of this so-called "AI revolution". Maybe, just maybe, humans will be replaced by humans + AI tools, for now at least.
jasonsb
So far they're both wrong. Jensen says AI technologies will open more career opportunities in the future with zero evidence to support his claim. Dario says unemployment will skyrocket, but we're not seeing a spike in unemployment yet. If Dario's claims were indeed valid, we should already be observing at least a slight spike in the unemployment data.
Companies like Nvidia and OpenAI base their answers to any questions on economic risk on their own best interests and a pretty short view of history. They are fighting like hell to make sure they are among a small set of winners while waving away the risk or claiming that there's some better future for the majority of people on the other side of all this.
To claim that all of the benefit isn't going to naturally accrue to a thin layer of people at the top isn't speculation at this point - it's a bald-faced lie.
When AI finally does cause massive disruption to white-collar work, what happens then? Do we really think that most of the American economy is just going to downshift into living off a meager universal basic income allotment (assuming we could ever muster the political will to create a social safety net)? Who gets the nice car and the vacation home?
Once people are robbed of what remaining opportunities they have to exercise agency and improve their life, it isn't hard to imagine that we see some form of swift and punitive backlash, politically or otherwise.