How much Anthropic and Cursor spend on Amazon Web Services

dcre

Numbers are always interesting, assuming they're real, but I just want to comment on the Cursor thing: Zitron has been insisting for 6 months that Anthropic screwed Cursor somehow by raising prices on them but the claim has always been gibberish. It's not that it's false, it's that it's impossible to figure out what Zitron claims happened. He cannot describe (here or in https://www.wheresyoured.at/anthropic-and-openai-have-begun-...) what the bad change actually was. We know everyone moved to more usage-oriented pricing earlier this year. He cannot explain why this was a price increase for Cursor. He is unable to draw a distinction between a price increase for end users (it's not even clear that it was a price increase for the average end user) and a price increase for Cursor.

literatepeople

Ed has constantly done this, and it's a shame because it has taken the air out of the room for real AI criticism. Most of Ed's criticism comes from a place of giving a narrative to people who are wishing for a magic bullet that makes ChatGPT vanish tomorrow rather than actually pressuring companies about the harms this technology can cause. This in part is why his writing so often focuses on perceived financial issues (despite his lack of credentials in financial journalism) rather than the social harms the technologies cause today (slop, delusions, manipulated truth).

watwut

Zitron is too small a player to "suck the air out of other criticism of AI".

Claiming that a single journalist's blog has the power to stop others from criticizing AI for different reasons is kind of absurd.

floatrock

The buried lede:

Anthropic: "$2.66 billion on compute on an estimated $2.55 billion in revenue"

Cursor: "bills more than doubled from $6.2 million in May 2025 to $12.6 million in June 2025"

Clickthrough if you want the analysis and caveats
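For a quick sanity check, the two headline figures work out like this (a sketch in Python; the dollar amounts are the article's estimates, everything else is plain arithmetic):

```python
# Headline figures quoted in the article (estimates, not audited numbers).
anthropic_aws_spend = 2.66e9    # through September
anthropic_revenue_est = 2.55e9  # estimated revenue over the same period

cursor_aws_may = 6.2e6   # May 2025 AWS bill
cursor_aws_june = 12.6e6 # June 2025 AWS bill

# AWS spend as a share of estimated revenue (>100% means spend exceeds revenue).
spend_ratio = anthropic_aws_spend / anthropic_revenue_est
print(f"Anthropic AWS spend / revenue: {spend_ratio:.0%}")  # ~104%

# Cursor's month-over-month AWS bill growth.
mom_growth = cursor_aws_june / cursor_aws_may - 1
print(f"Cursor AWS bill MoM growth: {mom_growth:.0%}")      # ~103%
```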

Kuinox

I could find an ARR for Cursor of $500M. Why does this article say that Cursor is losing money, given this spending number?

LetsGetTechnicl

Ed's mentioned ARR in previous articles, and it's not a "generally accepted accounting principle". Companies cherry-pick the highest monthly revenue number and multiply it by 12, but that's not their actual annual revenue.

dcre

"Cherry pick the highest" is misleading. If your revenue is growing 10% a month for a year straight and is not seasonal, picking any other than the most recent month to annualize would make no sense.

infecto

Eh, when you have a company that’s growing, picking the highest and annualizing it is sensible. If we had a mature company with highly seasonal revenue it would be dishonest.
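The annualization both sides are describing is just "latest month x 12". A sketch with hypothetical numbers (10% monthly growth is assumed, not Cursor's actual figures), showing how far run-rate ARR can diverge from trailing-twelve-month revenue for a fast-growing company:

```python
# Hypothetical monthly revenue in $M, growing ~10%/month for a year.
monthly = [10.0 * 1.10**i for i in range(12)]

def run_rate(month_revenue: float) -> float:
    """Annualized run rate: a single month's revenue x 12."""
    return month_revenue * 12

actual_annual = sum(monthly)        # trailing-twelve-month revenue
arr_latest = run_rate(monthly[-1])  # annualizing the most recent month
arr_first = run_rate(monthly[0])    # annualizing an early month

print(f"actual annual revenue: ${actual_annual:.0f}M")  # ~$214M
print(f"ARR from latest month: ${arr_latest:.0f}M")     # ~$342M
print(f"ARR from first month:  ${arr_first:.0f}M")      # $120M
```

For a growing, non-seasonal business the latest month is the only sensible one to annualize, but the result still runs well ahead of what the company actually booked over the past year.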

dcre

The article Zitron links says Cursor has single-digit millions of cash burn with about $1B in the bank (as of August). Assuming that is true, they are losing money but have a long runway.

https://www.newcomer.co/p/cursors-popularity-has-come-at-a

omnicognate

Single-digit cash burn on AWS, which the article says is only a small part of its compute, with the majority coming from Anthropic.

neuronexmachina

For context, AWS's total revenue for 2024 was $107.6B: https://ir.aboutamazon.com/news-release/news-release-details...

spydum

Specific clarification: that was only Cursor's AWS bill. Whether they are using other providers wasn't clear.

omnicognate

TFA claims Cursor "obtains the majority of its compute from Anthropic — with AWS contributing a relatively small amount" and therefore only claims that for Cursor the AWS number indicates a "direction of travel" for compute costs. (Debatable whether it does indicate even that, ofc.)

rdtsc

> through September, Anthropic has spent more than 100% of its estimated revenue (based on reporting in the last year) on Amazon Web Services, spending $2.66 billion on compute on an estimated $2.55 billion in revenue.

Well I don't have to scratch my head any longer and wonder why Amazon hasn't jumped on the AI bandwagon with their own Gemini or whatever. They are sitting pretty and selling shovels and pickaxes to the AI fools. Not a bad strategy for them...

rs186

Amazon has trained its own models, like Nova, and has AI coding assistants like Amazon Q, but I don't know anyone outside Amazon who is using them.

JCM9

Yes, the Amazon AI stuff isn’t great. They don’t really have the right leadership or talent to do anything particularly competitive there.

oliwarner

Why do they need to compete? AI at Amazon should be laser-focussed on two things: selling compute time to AI Wannabees, and up-selling stuff to me in the shop.

Everything else is expense.

haberdasher

Amazon owns 15-19% of Anthropic. So yes and no.

candiddevmike

GenAI financing is a flat circle. The bubble bursting is going to have a huge blast radius.

lesuorac

They survived the dot-com bubble, I don't see the AI bubble taking out Amazon.

It might take out your 401k for a decade.

almostgotcaught

this is the latest mantra from people who have missed the boat. i'm like lol do you think that industry didn't learn anything (about financing structures) from the last one?

tinyhouse

No way their share is so high. Where did you get these numbers from?

swyx

It's a number two years out of date. They invested $8B, and Anthropic is now worth $183B.

wrren

They have their own models, the Nova series, although my experience has been pretty mixed with them.

jayd16

They actually do have Alexa.

rdtsc

They do and they've had her for a while, but from what I understand that's "so last Tuesday" in the AI race. It was ok in the race with "Ok Google" and "Siri" but not competing with OpenAI, Anthropic, Gemini.

dzonga

One day we westerners will learn why the Chinese are releasing models that are optimized for cost of training and yet good enough to run locally or cheaply.

When the music stops, suddenly a lot of people won't just sit on the ground but plunge into the depths of hell.

infecto

I keep hearing this but I don’t know of many folks utilizing Chinese models, even those hosted in an agreeable territory.

ojosilva

Yeah, I'm one of them, using Qwen 3 coder on Cerebras as a coding agent through CC. What I keep hearing is (very ballpark anecdata)...

50% of people into coding agents are quite concerned about that last mile of difference from frontier models that they "can't afford to lose". My experience tells me otherwise: the difference is negligible once you have a good setup going and know how to tune your model + agent.

The other 50% don't give a damn; they just landed, or got locked, into some deal for a coding agent and are happy with what they got, so why change? These deals arrived from the big model providers and resellers first, so the Chinese models arrived late, and with too little, to the party.

Running Chinese models (for coding) requires many things that you need to figure out yourself. Are you running the model on your hardware or through a provider? Are you paying by token or on a plan? Does the model pair well with your agent CLI/IDE of choice (Zed, Cline, Opencode, etc.)? Does it even work with your favorite tool? (Tool calling is very wobbly.) Is it fast (tps)? Is it reliable? How do you do "ultrathink" with a secondary model? How do you do "large context"? Does it include a cache, or are you going to eat through the plan in 1 hr/day? What context size are you getting? Does it include vision and web search, or do you have to get another provider/MCP for that? And, yeah, is it in a territory where you can send your client's code? A lot to grok.

Cerebras Coder Max is really cool if you want to hack your way through this, but they couldn't care less about your experience: no cache, no tool endpoint fine-tuning, no plans or roadmap on updating models, on increasing context windows, adding vision, or anything really. They just deleted some of the tools they were recommending out of the website (ie Cursor) as they got reports of things that stopped working.

almostgotcaught

i'm starting a new trend: ask every person that is so certain about the negative outlook how big their short position is. so how big is your short position? please let us know.

throwaway290

Here's another trend: every time a person is on this hype bandwagon, ask them how much they are invested in Nvidia/MS/OpenAI/etc.

I am not invested in anything except popcorn to watch it burst;)

swyx

> Based on discussions with sources with direct knowledge of their AWS billing, I am able to disclose the amounts that AI firms are spending, specifically Anthropic and AI coding company Cursor, its largest customer.

so he got a leaked copy of their AWS bills?

joshribakoff

Usually, the reporter inspects the document but does not get to take a copy

daft_pink

I think this is a minor speed bump and VC’s believe that cost of inference will decrease over time and this is a gold rush to grab market share while cost of inference declines.

I don't think they got it right: market share and usage grew faster than inference costs dropped. But inference costs will clearly drop, and these companies will eventually be very profitable.

The reality is that startups like this assume Moore's law will drop the cost over time and arrange their business around where they expect costs to be, not where costs currently are.

Frieren

> I think this is a minor speed bump and VC’s believe that cost of inference will decrease over time and this is a gold rush to grab market share while cost of inference declines.

It could also be that you give too much credit to the market. People follow trends because in most cases that makes money. There is no other, deeper thought involved. Look at the financial crisis: totally irrational.

rglover

This. Post-crypto, AI was the obvious next gambit for VC. Their money flows to hype, not product. The second that hype starts to fade and the money dries up, VCs will be running with their Harold Hill trunk full of cash toward the border. Just from the content they publish alone you can tell they're channeling their inner Barnum & Bailey in between Ayahuasca seizures.

onlyrealcuzzo

Isn't the consensus that the MoE architecture and other optimizations in the newest-gen models (GPT-5, Gemini 3.0 to come, etc.) will reduce inference costs by 50-75% already?

ACCount37

Kind of. Frontier LLMs aren't going to get cheaper, but that's because the frontier keeps advancing.

Price-performance though? The trend is clear: a given level of LLM capability keeps getting cheaper, and that trend is expected to hold. Improvements in architecture and training make LLMs more capability-dense, and advanced techniques make inference cheaper.
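A toy model of that price-performance claim (the half-life is an illustrative assumption, not a measured figure): the price of serving a *fixed* capability level decays over time, while the frontier keeps moving on to new, pricier levels, so "frontier price" can look flat even as price-performance improves.

```python
# Hypothetical: the cost of a fixed capability level halves roughly
# every 9 months as architectures and inference techniques improve.
HALF_LIFE_MONTHS = 9.0

def price_of_capability(launch_price: float, months_since_launch: float) -> float:
    """Price of serving a fixed capability level t months after launch."""
    return launch_price * 0.5 ** (months_since_launch / HALF_LIFE_MONTHS)

for months in (0, 9, 18, 27):
    print(f"t={months:2d}mo: ${price_of_capability(10.0, months):.2f} per 1M tokens")
# $10.00 -> $5.00 -> $2.50 -> $1.25
```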

yomismoaqui

Sounds interesting, do you have some links with more info about this?

Thanks!

xnx

> inference costs will clearly drop and these companies will eventually be very profitable.

Inference costs for old models will drop, but inference costs may stay the same if models continue to improve.

No guarantee that any wrapper for inference will be able to hold on to customers when they stop selling $1.00 for $0.50.

Analemma_

My own usage and the usage of pretty much everyone I know says that as inference costs drop, usage goes up in lockstep, and I’m still nowhere near the ceiling of how many tokens I could use if they were free.

I think if these companies are gambling their future on COGS going down, that’s a gamble they’re going to lose.

x0x0

> inference costs will clearly drop

They haven't, though, on two fronts. First, the SOTA models have been pretty constantly priced, and everyone wants the SOTA models. Likely the only way costs drop is the models get so good that people are like, hey, I'm fine with a less useful answer (which is still good enough), and that seems, right now, like a bad bet.

And second, we use a lot more tokens now. No more pasting Q&A into a site; now people hammer up chunks of their codebases and would love to push more. More context, more thinking, more everything.

ctoth

You're describing increased spending while calling it increased cost. These aren't the same thing. A task that cost me $5 to accomplish with GPT-4 last year might cost $1 with Sonnet today, even though I'm now spending $100/month total on AI instead of $20 because I'm doing 100x more tasks. The cost per task dropped 80%. My spending went up 5x. Both statements are true.

Here's an analogy you may understand:

https://crespo.business/posts/cost-of-inference/
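The arithmetic behind that distinction, as a sketch using the hypothetical numbers from the comment above:

```python
# Hypothetical figures from the comment: unit cost per task falls
# while total spend rises, because task volume grows much faster.
old_cost_per_task = 5.00   # $ per task with GPT-4 last year
new_cost_per_task = 1.00   # $ per task with Sonnet today

old_tasks_per_month = 4    # $20/month total
new_tasks_per_month = 100  # 25x the task volume

old_spend = old_cost_per_task * old_tasks_per_month   # $20
new_spend = new_cost_per_task * new_tasks_per_month   # $100

print(f"cost per task: ${old_cost_per_task:.2f} -> ${new_cost_per_task:.2f} (-80%)")
print(f"monthly spend: ${old_spend:.0f} -> ${new_spend:.0f} (5x)")
```

Cost per task dropped 80%; spending rose 5x. Both statements hold at once.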

KallDrexx

Fwiw that's not necessarily true, because if Sonnet ends up using reasoning, then you are using more tokens than GPT-4 would have used for the same task. Same with GPT-5 since it will decide (using an LLM) if it should use the thinking model for it (and you don't have as much control over it).

infecto

Anecdote of one: costs for OpenAI on a per-token basis have absolutely dropped, and that accounts for new SOTA models over time. I think by now we can all agree that inference costs from providers are largely at or above breakeven. So more tokens is a good problem to have.

dexwiz

Point 2 was the analysis I saw. Context size and token cost grow inversely at a rate that keeps prices constant, almost like supply and demand curves.

username223

Color me skeptical. We're running into the speed of light when it comes to transistor size, and the parallelism that made neural nets take off is running into power demands. Where do the exponential hardware gains come from? Optimizing the software by 2x or 4x happens only once. Then there's the other side: if Moore's Law works too well, local models will be good enough for most tasks, and these companies won't be able to do the SaaS thing.

It seems to me like models' capability scales logarithmically with size and wattage, making them the rare piece of software that can counteract Moore's Law. That doesn't seem like a way to make a trillion dollars.
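A toy version of that scaling claim, assuming capability grows with log10 of compute (the constant k is purely illustrative): each additional capability point then requires multiplying compute by a constant factor, which is exactly what cuts against Moore's-law-style cost declines.

```python
import math

def capability(compute: float, k: float = 1.0) -> float:
    """Toy model: capability scales logarithmically with compute."""
    return k * math.log10(compute)

def compute_needed(target_capability: float, k: float = 1.0) -> float:
    """Inverting the model: compute needed grows exponentially in capability."""
    return 10 ** (target_capability / k)

for cap in (1, 2, 3, 4):
    print(f"capability {cap}: {compute_needed(cap):,.0f} units of compute")
# 10, 100, 1,000, 10,000 -- 10x more compute per capability point
```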

throwaway290

One improvement is from scraping and stealing better quality IP to train on. And they can just ride Moore's law until they profit then lobby governments to require licenses for fast GPUs because of national security.

mannyv

Given how hard it is to understand AWS billing, especially if you have custom pricing, I doubt his numbers are anywhere near correct.

That said, I hope they're using their prime Visa card so they can get some cash back on that spend.

JCM9

“Spend” requires a grain of salt here. AWS “invests” in Anthropic and then Anthropic buys AWS. If you follow the money with marked bills, AWS is buying this compute from itself and then claiming revenue.

There’s a lot of that sort of thing going on at the moment in the AI bubble.

dumbmrblah

Is that just for inference or is that the cost of training the models as well?

dcre

There is no breakdown. Assuming they're real, they're just spending numbers.

VirusNewbie

> I have sat with these numbers for a great deal of time, and I can’t find any evidence that Anthropic has any path to profitability outside of aggressively increasing the prices on their customers to the point that its services will become untenable for consumers and enterprise customers alike.

This is where he misunderstands. Enterprise companies will absolutely pay 10x the cost for Claude. Meta and Apple are two large customers; you think they won't pay $500 a month per employee? $1,000 a month per employee? Neither of those is outrageous to imagine if it increases productivity 10%.

scottyah

Also, spend will drop dramatically if the models level out a bit more. The training is what's compute-heavy, and if you aren't having to retrain every month, but are able to use things like Skills to stay competitive, your costs will drop a lot.

I suppose that's the pessimistic-on-AI side. On the other hand, once you create God little things like money are meaningless.

isoprophlex

Ed Zitron's style isn't for everybody, I understand that. But if these numbers, and the direction they're going in, are correct... to me this points to a significant AI bubble deflation coming soon. It just isn't sustainable, it seems.

swyx

If you are worried about AWS spend, I have news for you about non-AWS spend.

"Coming soon" is also really oversimplistic. You would have missed some of the greatest tech companies of the past 20 years if you evaluated startups based on their early-year revenue vs. infra spend.

Like, sure, I have a dog in this fight, but I actually want the criticism to sharpen my thinking; unfortunately yours does not meet that bar.

isoprophlex

Are you glossing over the whole "the AWS bill isn't structural infra spend" thing?!

They spend 104% of revenue on ONE cloud provider, and costs scale linearly with revenue growth. Assume Zitron didn't pull these numbers out of his ass.

Educate me how this isn't selling $20 bills for $5. You're a smart dude; I myself ain't seeing the "sustainable business practices" here.

swyx

The entire point is that stressing sustainable business practices at this point in a startup's life is extremely short-sighted, and it's the kind of shitty analysis that gives HN a bad rep.

Pull up Uber's financials leading up to IPO. Unsustainable, and everyone knew it. They worked it out afterward because they burned money and eventually achieved a sustainable moat. This is why venture exists. HN doesn't like venture, which is, well, ironic given the domain we're on.

A better negative argument I'd rather see looks like this: "I've run these AWS numbers against the typical spend path of pre-IPO startups who later improved their cost baseline and margin profile, and even after accounting for all that, Anthropic ngmi." That's the kind of minimum sophistication you need to play armchair financial analyst. Ed Zitron, and everyone involved in this entire thread, including myself, have not done that, because we are lazy and ignorant and don't actually care enough about seeking the truth here. We are as unprepared to analyze this AWS spend as we are to understand their $1B -> $10B revenue ramp in 2025. You haven't done the work, and yet you sit here and judge it unsustainable based off some shitty "leaks". Don't pretend that Ed's analysis is at all meaningful, particularly because he conveniently stops where it supports his known negative bias.

spyckie2

AI can both be a bubble and also the greatest economic value-add of this generation at the same time. It doesn't have to be either/or.

All bubbles (dot-com, housing, tech, crypto, etc.) have a lot of losers and a few big winners.

That is less a reflection on the market of the bubble and more a reflection of the number, skill, and risk-taking of the prospectors.