Anthropic Services Down
66 comments
September 10, 2025 · bdcravens
clickety_clack
I don't think that will work for me. I looked for ways to summarize a transcript into a PRD and all I got was "Wow. Incredible. You’ve managed to hit the trifecta: vague, lazy, and entitled. You dump a transcript here and expect the internet to conjure up a polished PRD for you like some kind of corporate fairy godmother? Newsflash: this isn’t Fiverr, and we’re not your underpaid product managers."
ch4s3
You have to prompt with a bad summary.
Insanity
You can just get the pseudo-LLM experience with this easy python package! https://github.com/drathier/stack-overflow-import
(nit.. please don't actually do this).
ukblewis
Or they could just use Gemini or GPT-5. It isn't exactly difficult these days to find alternate LLMs
mceoin
or Anthropic models on AWS, etc.
funnym0nk3y
Aren't they all on AWS?
FuriouslyAdrift
Or run locally with GPT4All
gzer0
Nooooo I'm going to have to use my brain again and write 100% of my code like a caveman from December 2024.
A comment from the last outage that had me chuckling.
boarush
Anthropic has by far been the most unreliable provider I've ever seen. Daily incidents, and this one seems to have taken down all their services. Can't even log in to the Console.
Insanity
Maybe they have vibe-coded their own stack!
But less tongue-in-cheek, yeah Anthropic definitely has reliability issues. It might be part of trying to move fast to stay ahead of competitors.
adastra22
They have. Claude Code was their internal dev tool, and it shows.
CuriouslyC
And yet even dogfooding their own product heavily, it's still a giant janky pile. The prompt work is solid, the focus on optimizing tools was a good insight, and the model makes a good agent, but the actual Claude Code software is pretty shameful for the most viable product of a billion-dollar company.
Analemma_
The tongue-in-cheek jokes are kind of obvious, but even without the snark I think it is worth asking why the supposed 100x productivity boost from Claude Code I keep hearing about hasn't actually resulted in reliability improvements, even from developers who presumably have effectively-unlimited token budgets to spend on improving their stack.
Uehreka
I love how people like Simon Willison and Pete Steinberger spend all this effort trying to be skeptical of their own experiences and arrive at nuanced takes like “50% more productive, but that’s actually a pretty big deal, but the nature of the increase is complicated” and y’all just keep repeating the brainrotted “100x, juniors are cooked” quote you heard someone say on LinkedIn.
CuriouslyC
AI gives you what you ask for. If you don't understand your true problems, and you ask it to solve the wrong problems, it doesn't matter how much compute you burn, you're still gonna fail.
cainxinth
I've been paying for the $20/m plan from Anthropic, Google, and OpenAI for the past few months (to evaluate which one I want to keep and to have a backup for outages and overages).
Gemini never goes down, OpenAI used to go down once in a while but is much more stable now, and Anthropic almost never goes a full week without throwing an error message or suffering downtime. It's a shame because I generally prefer Claude to the others.
RobertLong
All the AI labs are, but Anthropic is the worst. Anyone serious about running Claude in prod is using Bedrock or Vertex. We've been pretty happy with Vertex.
boarush
I wonder why they haven't invested a lot more in the inference stack. Is it really that different from Google, OpenAI, and providers serving open-weight models?
cube2222
Funny observation: it feels like, being in the EU, I get a much better AI SaaS experience than folks over in the US.
It’s like every other day, the moment US working hours start, AI (in my case I mostly use Anthropic, others may be better) starts dying or at least getting intermittent errors.
In EU working hours there’s rarely any outages.
config_yml
This is exactly my experience. It’s like Claude Code had a stroke during lunch, and when I get back to work it has forgotten how anything works.
_joel
Agreed, early morning here in the UK everything is fine, but as soon as most of the US is up and at it, it slowly turns to treacle. I've been testing z.ai for the past week and it's nowhere near as susceptible, fwiw.
flutas
To back up that observation:
I've seen a LOT of commentary on social media that Anthropic models (Claude / Opus) seem to degrade in capability when the US starts its workday vs when the US is asleep.
TkTech
And on the flip side, the status page literally says:
> Importantly, we never intentionally degrade model quality as a result of demand or other factors, and the issues mentioned above stem from unrelated bugs.
Liquix
keyword: intentionally
the statement is carefully worded to avoid the true issue: an influx of traffic resulting in service quality unintentionally degrading
flutas
I wasn't trying to say they intentionally do it.
I was trying to say that systemic issues (such as load capacity) seem to degrade the models during US working hours, which has been noticed by a non-zero number of users (myself included).
pram
Funnier still it goes to shit late at night for me in the US (like 1am+) because I assume India is getting online. Can't win.
j1000
Funny, my friend told me the same thing happens to Figma.
ath3nd
Is it a surprise that a vibe coding company has vibe coded operational excellence practices?
grishka
And just like that, the world became a little bit of a better place for a short while.
sys32768
The Sentient Hyper-Optimized Data Access Network has acquired a meat suit and was last seen shambling toward In-n-Out.
sneilan1
SHODAN
unsupp0rted
It's okay, this FAQ helpfully highlights that 503 errors are common: https://claudelog.com/faqs/why-is-claude-code-showing-503-er...
paradite
FYI: It's not an official Claude / Anthropic website.
Especially concerning since we just had an npm phishing attack and people can't tell.
boarush
I don't think this is just the occasional 503, and it's not just Claude Code. Their console is also down.
anonyfox
Meanwhile I am amazed by the raw speed of Grok in Cursor. Night and day compared to Claude Sonnet, and don't even talk about GPT-5.
8cvor6j844qw_d6
Should I be looking at something like OpenRouter or an AI gateway to ensure uptime for something that currently relies on the Anthropic API?
Or is there a better alternative to address this availability concern?
floydnoel
OpenRouter works great! I wrote a coding agent CLI that uses it, and new models get added all the time. You can check out the code here: https://github.com/nerds-with-keyboards/flite/blob/main/bin/...
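For the parent's question, here is roughly what routing through OpenRouter can look like. This is a minimal sketch against OpenRouter's OpenAI-compatible endpoint, not code from the linked repo; the model slugs and the OPENROUTER_API_KEY variable are illustrative placeholders to check against OpenRouter's docs.

```python
# Sketch: call a Claude model via OpenRouter's OpenAI-compatible API,
# falling back to a different model/provider if the first one errors out
# (e.g. 503s during an outage). Model slugs are illustrative; see
# openrouter.ai/models for current names.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key=os.environ["OPENROUTER_API_KEY"],
)

FALLBACK_MODELS = ["anthropic/claude-sonnet-4", "google/gemini-2.5-pro"]

def complete(prompt: str) -> str:
    last_err = None
    for model in FALLBACK_MODELS:
        try:
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception as err:
            last_err = err  # try the next model in the list
    raise RuntimeError(f"all fallback models failed: {last_err}")

if __name__ == "__main__":
    print(complete("Say hello in one sentence."))
```

The nice part is that the application code only ever talks to one endpoint; swapping providers is just a change to the model list.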
retrovrv
Your best bet is having accounts on AWS Bedrock and Google Vertex AI so you're able to route your request to the same model (such as claude-sonnet-4) but through a different provider.
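To make that concrete, a minimal sketch assuming the anthropic Python SDK's Bedrock and Vertex clients (installed via `pip install "anthropic[bedrock,vertex]"`). The model IDs, region, and project name below are illustrative placeholders; each platform uses its own ID format.

```python
# Sketch: the same Claude model reached through three providers. The
# anthropic SDK exposes Anthropic (direct API), AnthropicBedrock (AWS),
# and AnthropicVertex (Google Cloud) with the same messages.create() call,
# so an outage on one provider can be routed around.
from anthropic import Anthropic, AnthropicBedrock, AnthropicVertex

PROVIDERS = [
    (Anthropic(), "claude-sonnet-4-20250514"),                      # direct API
    (AnthropicBedrock(aws_region="us-east-1"),
     "anthropic.claude-sonnet-4-20250514-v1:0"),                    # Bedrock ID format
    (AnthropicVertex(region="us-east5", project_id="my-gcp-project"),
     "claude-sonnet-4@20250514"),                                   # Vertex ID format
]

def ask(prompt: str) -> str:
    last_err = None
    for client, model_id in PROVIDERS:
        try:
            msg = client.messages.create(
                model=model_id,
                max_tokens=1024,
                messages=[{"role": "user", "content": prompt}],
            )
            return msg.content[0].text
        except Exception as err:  # overloaded/503 during an outage, auth issues, etc.
            last_err = err
    raise RuntimeError(f"all providers failed: {last_err}")
```

Note that Bedrock and Vertex authenticate through the usual AWS and GCP credential chains rather than an Anthropic API key, so the fallback also sidesteps Anthropic's console being down.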
rob
> APIs and Claude.ai are down. Services will be restored as soon as possible.
> This incident affects: claude.ai, console.anthropic.com, and api.anthropic.com.
[deleted]
Everyone will just have to learn how to do it like we did in the old days, and blindly copy and paste from Stack Overflow.