Opus 4.5 is the first model that makes me fear for my job
74 comments
December 14, 2025
techblueberry
It feels like every model release has its own little hype cycle. Apparently Claude 4.5 is still climbing to its peak of inflated expectations.
krackers
There's lots of overlap between the cryptocurrency space and AI grifter hypeman space. And the economic incentives at play throw fuel on the fire.
Forgeties79
They behave like televangelists
pogue
I hear that a lot, but I think this is becoming very different from the crypto grift.
Crypto was just that, a pure grift where they were creating something out of nothing and rugpulling when the hype was highest.
AI is actually creating something: it's generating replacements for artists, for creatives, for musicians, for writers, for programmers. It's literally capable of generating something from _almost_ nothing. Of course, you have to factor in energy usage and so on, but the end user sees none of that. They type a request and it generates an output.
It may be easily identifiable slop today, but it's getting better and better at a RAPID rate. We all need to recognize this.
I don't know what to do with the knowledge that it's coming for our jobs. Adapt or die? I don't know...
krackers
I don't disagree that there is value behind LLMs. But I was referring to the grifter style of AI evangelism (of which the strawberry man might be the epitome), who are deliberately pumping up and riding on the bubble. Probably they're profiting off of the generated social media engagement, or they're part of some social media influencing campaign indirectly paid for by companies who do benefit from the bubble.
The common thread is that there's no nuanced discussion to be found, technical or otherwise. The topics are optimized for viral engagement.
techblueberry
Very meta post.
channel_t
Almost every single post on the ClaudeAI subreddit is like this. I use Opus 4.5 in my day to day work life and it has quickly become my main axe for agentic stuff but its output is not a world-shattering divergence from Anthropic's previous, also great iterations. The religious zealotry I see with these things is something else.
epolanski
I suspect that recurring visitors of that subreddit may not be the greatest IT professionals, but a mixture of juniors (even those with 20 years of experience but still junior) and vibe coders.
Otherwise, with all due respect, there's very little of value to learn in that subreddit.
unsupp0rted
This exactly. The /r/codex subreddit is equally full of juniors and vibe-coders. In fact, it's surprisingly a ghost-town, given how useful Codex CLI is.
channel_t
100%. I would also say that this broadly applies to pretty much all of the AI subreddits, and much of AI Twitter as well. Very little nuanced or thoughtful discussion to be found. It looks more like a bunch of people arguing about their favorite sports teams.
heavyset_go
This is the new goalpost now that the "this model is so intelligent that it's sentient and dangerous" AGI hype has died down.
quantumHazer
Why are we commenting on the Claude subreddit?
1) it’s not impartial
2) it’s useless hype commentary
3) it’s literally astroturfing at this point
prymitive
There are still a few things missing from all models: taste, shame and ambition. Yes, they can write code, but they have no idea what needs that code solves, what a good UX looks like, or what not to ship. Not to mention that they all eventually go down rabbit holes of imaginary problems that cannot be solved (because they're not real), where they will spend eternity unless a human says stop it right now.
odla
While I agree, this is also true of many engineers I’ve met.
heavyset_go
They have a severe lack of wisdom, as well.
hecanjog
I used Claude Code for a while in the summer, took a vacation from LLMs, and I'm trying it out again now. I've heard the same thing about Opus 4.5, but my experience with Claude Code so far is the same as it was this summer... I guess if you're a casual user, don't get too excited?
yellow_lead
I tried it and I'm not impressed.
In threads where I see an example of what the author is impressed by, I'm usually not impressed. So when I see something like this, where the author doesn't give any examples, I also assume Claude did something unimpressive.
nharada
It definitely feels like a jump in capability. I've found that the long-term quality of the codebase doesn't take a nosedive nearly as quickly as with earlier agentic models. If anything it's about steady, or maybe even increasing if you prompt it correctly and ask for "cleanup PRs".
markus_zhang
Ironically, AI may replace SWEs far faster than it replaces workers in any other business still in the Stone Age.
Pick almost anything else and you have a far better chance of falling back on a manual process, a legal wall, or something else that AI cannot easily replace.
Good job boys and girls. You will be remembered.
pton_xd
I have to say, it was fun while it lasted! Couldn't really have asked for a more rewarding hobby and career.
Prompting an AI just doesn't have the same feeling, unfortunately.
sunshowers
It depends. I've been working on a series of large, gnarly refactors at work, and the process has involved writing a (fairly long) hand-crafted spec/policy document. The big advantage of Opus has been that the spec is now machine-executable -- I repeatedly fed it into the LLM and watched what it did on some test cases. That sped up experimentation and prototyping tremendously, and it also surfaced a lot of ambiguities in the policy document that were helpful to address.
The document is human-crafted and human-reviewed, and it primarily targets humans. The fact that it works for machines is a (pretty neat) secondary effect, but not really the point. And the document sped up the act of doing the refactors by around 5x.
The whole process was really fun! It's not really vibe coding at that point (I continue to be relatively unimpressed by vibe coding beyond a few hundred lines of code). It's closer to old-school waterfall-style development, though with much quicker iteration cycles.
cmarschner
For me it's the opposite. I have a good feel for what I want to achieve, but translating that into program code and testing it has always caused me outright physical pain (and in the case of C++, I really hate it). I've been programming since age 10. Almost 40 years. And it feels like liberation.
It brings the "what to build" question front and center, while "how to build it" has become much, much easier and more productive.
markus_zhang
Indeed. I still use AI for my side projects, but I strictly limit it to discussion only, no code. Otherwise what is the point? The good thing about programming is that, unlike playing chess, there is no real "win/lose" in the scenario, so I won't feel discouraged even if AI can do all the work by itself.
Same thing for science. I don't mind if AI could solve all those problems, as long as they can teach me. Those problems are already "solved" by the universe anyway.
Hamuko
Even the discussion side has been pretty meh in my mind. I was looking into a bug in a codebase filled with Claude output and, for funsies, decided to ask Claude about it. It basically generated a "This thing here could be a problem but there is manual validation for it" response, and when I looked, that manual validation was nowhere to be found.
There's so much half-working AI-generated code everywhere that I'd feel ashamed if I had to ever meet our customers.
I think the thing that gives me the most value is code review. So basically I first review my code myself, then have Claude review it, and then submit it for someone else to approve.
skybrian
This already happened for copywriters, translators, and others in the tech industry.
agumonkey
something in the back of my head tells me that automating (partial) intelligence feels different from automating a small-to-medium-scope task, maybe i'm wrong though
harrall
I don’t think it’s ironic.
The commonality of people working on AI is that they ALL know software. They make a product that solves the thing they know how to solve best.
If all lawyers knew how to write code, we'd see more legal AI startups. But lawyers and coders are not a common overlap, certainly nowhere near as common as SWEs and coders.
agumonkey
time to become a solar installer
neoromantique
For the most part, code monkeys haven't been a thing for quite some time now. I'm sure talented people will adapt and find other avenues in which to flourish.
Lionga
Dario Amodei claimed "AI will replace 90% of developers within 6 months" about a year ago. Still they are just losing money, and probably will be forever, while producing more slop code that needs even more devs to fix it.
Good job AI fanboys and girls. You will be remembered when this fake hype is over.
markus_zhang
I'm more of a doomsayer than a fanboy. But I think it's more like "AI will replace 50% of your juniors and 25% of your seniors and perhaps 50% of your do-nothing middle managers". And that's a fairly large number anyway.
sidibe
100% in the doomer camp now. I wish I could be as optimistic as the people who think AI is all hype, but in the last few weeks it's finally starting to be more productive to use these tools, and I feel like this will be a short little window where the stuff I'm typing in, and my review of what's coming out, is still worth my salary.
I don't really see why anywhere near the number of great jobs this industry has had will be justifiable in a year. The only comfort is that all the other industries will be facing the same issue, so accommodations will have to be made.
heckintime
I used Claude Code to write a relatively complicated watchOS app. I know how to program (FAANG L5), but I didn't really know Swift. I achieved a pretty good result for about $600, while a contractor would've cost much more.
agumonkey
so how long until our salaries match those of an llm ?
heckintime
Good question. I left my job to start something on my own, so AI help is really nice. I should note that AI does make many boneheaded mistakes, and I have to solve some of the harder problems on my own.
exabrial
This just looks like an advertisement.
jsheard
It's just a normal Reddit account which was dormant until two weeks ago, when it suddenly started spamming threads exclusively about AI's imminent destruction of the job market. Nothing to see here!
https://www.reddit.com/r/ClaudeAI/comments/1pe6q11/deep_down...
https://www.reddit.com/r/ClaudeAI/comments/1pb57bm/im_honest...
https://www.reddit.com/r/ChatGPT/comments/1pm7zm4/ai_cant_ev...
https://www.reddit.com/r/ArtificialInteligence/comments/1plj...
https://www.reddit.com/r/ArtificialInteligence/comments/1pft...
https://www.reddit.com/r/AI_Agents/comments/1pb6pjz/im_hones...
https://www.reddit.com/r/ExperiencedDevs/comments/1phktji/ai...
https://www.reddit.com/r/csMajors/comments/1pk2f7b/ (cached title: Your CS degree is worthless. Switch over. Now.)
quantumHazer
I wouldn’t be surprised if this is undisclosed PR from Anthropic
bachmeier
I'd be very surprised if it weren't. Everything about that company turns me off. I've run across countless YouTube videos that are clearly Anthropic PR pretending to be real videos by regular people just trying it out and discovering how good Claude is. I'll stick with Gemini.
Remember when GPT-3 came out and everybody collectively freaked the hell out? That's how I've felt watching the reaction to any of the new model releases lately that make any progress.
I'm honestly not complaining about the model releases, though. Despite their shortcomings, they are extremely useful. I've found Gemini 3 to be an extremely useful learning aid, as long as I don't blindly trust its output -- and if you're trying to learn, you really ought not to do that anyway. (Despite what people and benchmarks say, I've already caught some random hallucinations; it still feels like you're likely to run into them on a regular basis. Not a huge problem, but, you know.)