Anthropic raises $13B Series F
469 comments
September 2, 2025 · llamasushi
AlexandrB
The whole LLM era is horrible. All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents, so you know the monetization is going to be awful. Since the models are expensive to run it's all subscription priced and has to run in the cloud where the user has no control. The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground" and decisions around which AI to use are often being made on the basis of existing vendor relationships. Basically it's the culmination of all the worst tech trends of the last 10 years.
dpe82
In a previous generation, the enabler of all our computer tech innovation was the incredible pace of compute growth due to Moore's Law, which was also "top-down" from very well-funded companies since designing and building cutting edge chips was (and still is) very, very expensive. The hype was insane, and decisions about what chip features to build were made largely on the basis of existing vendor relationships. Those companies benefited, but so did the rest of us. History rhymes.
JohnMakin
Should probably change this to "was the appearance of an incredible pace of compute growth due to Moore's Law," because even my basic CS classes from 15 years ago were teaching that it was drastically slowing down, and it isn't really a "law" so much as an observational trend that lasted a few decades. There are limits to how small you can make transistors, and we're not far from them, at least not far enough to keep yielding the results of that curve.
dmschulman
Eh, if this were true then IBM and Intel would still be the kings of the hill. Plenty of companies came from the bottom up out of nothing during the 90s and 2000s to build multi-billion dollar companies that still dominate the market today. Many of those companies struggled for investment and grew over a long timeframe.
The argument is that something like that is not really possible anymore, given the absurd upfront investments we're seeing existing AI companies need in order to further their offerings.
simianwords
This is a very pessimistic take. Where else do you think the innovation would come from? Take cloud, for example: where did the innovation come from? It was from the top. I have no idea how you came to the conclusion that this implies monetization is going to be awful.
How do you know models are expensive to run? They have gone down in price repeatedly in the last 2 years. Why do you assume it has to run in the cloud when open source models can perform well?
> The hype is insane, and so usage is being pushed by C-suite folks who have no idea whether it's actually benefiting someone "on the ground" and decisions around which AI to use are often being made on the basis of existing vendor relationships
There are hundreds of millions of ChatGPT users weekly. They didn't need a C-suite to push the usage.
AlexandrB
> I have no idea how you came to the conclusion that this implies monetization is going to be awful.
Because cloud monetization was awful. It's either endless subscription pricing or ads (or both). Cloud is a terrible counter-example because it started many awful trends that strip consumer rights. For example "forever" plans that get yoinked when the vendor decides they don't like their old business model and want to charge more.
acdha
> Take cloud for example - where did the innovation come from? It was from the top.
Definitely not. That came years later but in the late 2000s to mid-2010s it was often engineers pushing for cloud services over the executives’ preferred in-house services because it turned a bunch of helpdesk tickets and weeks to months of delays into an AWS API call. Pretty soon CTOs were backing it because those teams shipped faster.
The consultants picked it up, yes, but they push a lot of things and usually it’s only the ones which actual users want which succeed.
HarHarVeryFunny
The C-suite is pushing business adoption, and those are the GenAI projects of which 95% are failing.
awongh
> All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents
What I always thought was exceptional is that it turns out it wasn't the incumbents who have the obvious advantage.
Set aside the fact that everyone involved is already in the top 0.00001% echelon of the space (Sam Altman and everyone involved in the creation of OpenAI): if you had asked me 10 years ago who would have the leg up creating advanced AI, I would have said all the big companies hoarding data.
Turns out just having that data wasn't a starting requirement for the generation of models we have now.
A lot of the top players in the space are not the giant companies with unlimited resources.
Of course this isn't the web or web 2.0 era where to start something huge the starting capital was comparatively tiny, but it's interesting to see that the space allows for brand new companies to come out and be competitive against Google and Meta.
crawshaw
> All the innovation is coming "top-down" from very well funded companies - many of them tech incumbents
The model leaders here are OpenAI and Anthropic, two new companies. In the programming space, the next leaders are Qwen and DeepSeek. The one incumbent is Google who trails all four for my workloads.
In the DevTools space, a new startup, Cursor, has muscled in on Microsoft's space.
This is all capital heavy, yes, because models are capital heavy to build. But the Innovator's Dilemma persists. Startups lead the way.
nightski
At what point is OpenAI not considered new? It's a few months from being a decade old with 3,000 employees and $60B in funding.
lexandstuff
And all of those companies except for Google are entirely dependent on NVIDIA, who are the real winners here.
tedivm
This is only if you ignore the growing open source models. I'm running Qwen3-30B at home and it works great for most of the use cases I have. I think we're going to find that the optimizations coming from companies out of China are going to continue making local LLMs easier for folks to run.
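For anyone wanting to reproduce this kind of setup, here's a minimal sketch using the Ollama Python client. The model tag and local setup are assumptions, not something from the thread; check what `ollama list` shows on your machine.

    # Minimal local-inference sketch. Assumes `ollama serve` is running and the
    # model has already been pulled, e.g. `ollama pull qwen3:30b` (tag may differ).
    import ollama

    response = ollama.chat(
        model="qwen3:30b",  # assumed tag; substitute your local model name
        messages=[{"role": "user", "content": "Summarize the tradeoffs of local vs. cloud inference."}],
    )
    print(response["message"]["content"])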
hintymad
> The whole LLM era is horrible. All the innovation is coming "top-down" from very well funded companies
Wouldn't it be the same for the hardware companies? Not everyone could build CPUs as Intel/Motorola/IBM did, not everyone could build mainframes like IBM did, and not everyone could build smartphones like Apple or Samsung did. I'd assume it boils down to the value of the LLMs rather than who has the moat. Of course, personally I really wish everyone could participate in the innovation as in the internet era, training and serving large models on a laptop. I guess that day will come, like PCs over mainframes, but just not now.
atleastoptimal
Nevertheless, prices for LLMs at any given level of performance have gone down precipitously over the past few years. However bad the decision-making may seem, it is both making an extreme amount of money for those inside the AI companies and providing extremely cheap, high-quality intelligence for those using their offerings.
pimlottc
Remember when you could get an Uber ride all the way across town for $5? It is way too early to know what these services will actually cost.
duxup
It's not clear to me that each new generation of models is going to be "that" much better vs cost.
Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...
Video seems to be moving forward fast from what I can tell, but it sounds like the back-end cost of compute there is skyrocketing with it, raising other questions.
renegade-otter
We do seem to be hitting the top of the curve of diminishing returns. Forget AGI - they need a performance breakthrough in order to stop shoveling money into this cash furnace.
reissbaker
According to Dario, each model line has generally been profitable: i.e., $200MM to train a model that makes $1B in profit over its lifetime. But since each model has been more and more expensive to train, they keep needing to raise more money for the next generation, and the company balance sheet looks negative: they spent more this year than last (since the training cost for model N+1 is higher), and this year's model made less money than they spent this year (even though the model generation itself was profitable, model N isn't profitable enough to fund training model N+1 without raising, and spending, more money).
That's still a pretty good deal for an investor: if I give you $15B, you will probably make a lot more than $15B with it. But it does raise questions about when it will simply become infeasible to train the subsequent model generation due to the costs going up so much (even if, in all likelihood, that model would eventually turn a profit).
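A quick back-of-envelope makes the dynamic concrete, using the illustrative numbers from Dario's hypothetical (quoted further down the thread), not actual financials:

    # Each tuple is one model "company": (year trained, training cost, revenue
    # earned the following year), all in $B. Numbers are Dario's hypothetical.
    models = [
        (2023, 0.1, 0.2),
        (2024, 1.0, 2.0),
        (2025, 10.0, None),  # revenue not yet realized
    ]

    for year in (2023, 2024, 2025):
        spent = sum(cost for y, cost, _ in models if y == year)
        earned = sum(rev or 0 for y, _, rev in models if y + 1 == year)
        print(f"{year}: trained for ${spent}B, earned ${earned}B, net ${earned - spent}B")

    # 2023: net -$0.1B; 2024: net -$0.8B; 2025: net -$8B. Each model is
    # individually profitable, yet the yearly P&L looks worse and worse.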
mikestorrent
Inference performance per watt is continuing to improve, so even if we hit the peak of what LLM technology can scale to, we'll see tokens per second, per dollar, and per watt continue to improve for a long time yet.
I don't think we're hitting peak of what LLMs can do, at all, yet. Raw performance for one-shot responses, maybe; but there's a ton of room to improve "frameworks of thought", which are what agents and other LLM based workflows are best conceptualized as.
The real question in my mind is whether we will continue to see really good open-source model releases for people to run on their own hardware, or if the companies will become increasingly proprietary as their revenue becomes more clearly tied up in selling inference as a service vs. raising massive amounts of money to pursue AGI.
duxup
>cash furnace
They don't even burn it on AI all the time either: https://openai.com/sam-and-jony/
jayde2767
"cash furnace", so aptly put.
general1465
Yep, we do. There's a year-old video on YouTube that describes this limitation, called the "efficient compute frontier": https://www.youtube.com/watch?v=5eqRuVp65eY
fredoliveira
I think that the performance unlock from ramping up RL (RLVR specifically) is not fully priced into the current generation yet. Could be wrong, and people closer to the metal will know better, but people I talk to still feel optimistic about the next couple of years.
derefr
> Anecdotally moving from model to model I'm not seeing huge changes in many use cases.
Probably because you're doing things that are hitting mostly the "well-established" behaviors of these models — the ones that have been stable for at least a full model-generation now, that the AI bigcorps are currently happy keeping stable (since they achieved 100% on some previous benchmark for those behaviors, and changing them now would be a regression per those benchmarks.)
Meanwhile, the AI bigcorps are focusing on extending these models' capabilities at the edge/frontier, to get them to do things they can't currently do. (Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model": ever-better domain-specific analysis capabilities, to "logic out" whether training data belongs in the training corpus for some fine-tune; and domain-specific synthesis capabilities, to procedurally generate unbounded amounts of useful fine-tuning corpus for specific tasks, ala AlphaZero playing unbounded amounts of Go games against itself to learn on.)
This means that the models are getting constantly bigger, and this is unsustainable. So, obviously, the goal here is to go through this as a transitional bootstrap phase, to reach some goal that allows the size of the models to be reduced.
IMHO these models will mostly stay stable-looking for their established consumer-facing use-cases, while slowly expanding TAM "in the background" into new domain-specific use-cases (e.g. constructing novel math proofs in iterative cooperation with a prover) — until eventually, the sum of those added domain-specific capabilities will turn out to have all along doubled as a toolkit these companies were slowly building to "use models to analyze models" — allowing the AI bigcorps to apply models to the task of optimizing models down to something that run with positive-margin OpEx on whatever hardware that would be available at that time 5+ years down the line.
And then we'll see them turn to genuinely improving the model behavior for consumer use-cases again; because only at that point will they genuinely be making money by scaling consumer usage — rather than treating consumer usage purely as a marketing loss-leader paid for by the professional usage + ongoing capital investment that that consumer usage inspires.
Workaccount2
>Mostly this is inside-baseball stuff to "make the model better as a tool for enhancing the model"
Last week I put GPT-5 and Gemini 2.5 in a conversation with each other about a topic of GPT-5's choosing. What did it pick?
Improving LLMs.
The conversation was far over my head, but the two seemed to be readily able to get deep into the weeds on it.
I took it as a pretty strong signal that they have an extensive training set of transformer/LLM tech.
kdmtctl
You have just described a singularity point for this line of business. Which could happen. Or not.
ACCount37
The raw model scale is not increasing by much lately. AI companies are constrained by what fits in this generation of hardware and are waiting for the next generation to become available. Models much larger than the current frontier are still too expensive to train, and far too expensive to serve en masse.
In the meanwhile, "better data", "better training methods" and "more training compute" are the main ways you can squeeze out more performance juice without increasing the scale. And there are obvious gains to be had there.
robwwilliams
The jump to a 1-million-token context for Sonnet 4, plus access to the internet, has been a game-changer for me. And somebody should remind Anthropic leadership to at least mirror Wikipedia; better yet, support Wikipedia actively.
All of the big AI players have profited from Wikipedia, but have they given anything back, or are they just parasites on FOSS and free data?
xnx
> AI companies are constrained by what fits in this generation of hardware, and waiting for the next generation to become available.
Does this apply to Google, which uses custom-built TPUs while everyone else uses stock Nvidia?
gmadsen
It's not clear to me that it needs to. If, at the margins, it can still provide an advantage in the market or national defense, then the spice must flow.
duxup
I suspect it needs to if it is going to cover the costs of training.
darepublic
I hope you're right.
dvfjsdhgfv
> I can just pick an older model and often I can't tell the difference...
Or, as in the case of a leading North American LLM provider, I would love to be able to choose an older model, but it chooses for me instead.
wslh
> Anecdotally moving from model to model I'm not seeing huge changes in many use cases. I can just pick an older model and often I can't tell the difference...
Model specialization. For example a model with legal knowledge based on [private] sources not used until now.
yieldcrv
Locally run video models that are just as good as today’s closed models are going to be the watershed moment
The companies doing foundational video models have stakeholders that don’t want to be associated with what people really want to generate
But they are pushing the space forward and the uncensored and unrestricted video model is coming
xenobeb
The problem is that the video models are only impressive in news stories about them. When you actually try to use them, you can see how the marketing plays to people's imagination, because they are such a massive disappointment.
giancarlostoro
Nobody wants to make a commercial NSFW model that then suffers a jailbreak... for what is the most illegal NSFW content.
lynx97
Maybe. The question is, will legislation be fast enough? Maybe, if people keep going for politician porn: https://www.theguardian.com/world/2025/aug/28/outrage-in-ita...
andrewgleave
> “There's kind of like two different ways you could describe what's happening in the model business right now. So, let's say in 2023, you train a model that costs 100 million dollars. And then you deploy it in 2024, and it makes $200 million of revenue. Meanwhile, because of the scaling laws, in 2024, you also train a model that costs a billion dollars. And then in 2025, you get $2 billion of revenue from that $1 billion, and you spend $10 billion to train the model.
>
> So, if you look in a conventional way at the profit and loss of the company, you've lost $100 million the first year, you've lost $800 million the second year, and you've lost $8 billion in the third year. So, it looks like it's getting worse and worse. If you consider each model to be a company, the model that was trained in 2023 was profitable.”
>
> ...
>
> “So, if every model was a company, the model in this example is actually profitable. What's going on is that at the same time as you're reaping the benefits from one company, you're founding another company that's much more expensive and requires much more upfront R&D investment. And so, the way that it's going to shake out is this will keep going up until the numbers go very large, the models can't get larger, and then it will be a large, very profitable business; or at some point, the models will stop getting better. The march to AGI will be halted for some reason, and then perhaps there will be some overhang, so there will be a one-time, oh man, we spent a lot of money and we didn't get anything for it, and then the business returns to whatever scale it was at.”
>
> ...
>
> “The only relevant questions are: at how large a scale do we reach equilibrium, and is there ever an overshoot?”
From Dario’s interview on Cheeky Pint: https://podcasts.apple.com/gb/podcast/cheeky-pint/id18210553...
DebtDeflation
The wildest part is that the frontier models have a lifespan of 6 months or so. I don't see how it's sustainable to keep throwing this kind of money at training new models that will be obsolete in the blink of an eye. Unless you believe that AGI is truly just a few model generations away and once achieved it's game over for everyone but the winner. I don't.
jononor
It is being played like a winner-takes-all market right now (it may or may not actually be one). So it is a game of being the one left standing once the others fall off. In this kind of game, spending more is a strategy to increase the chances that competitors run out of cash or otherwise hit a wall. Sustainability is the opposite of the goal being pursued... Whether one reaches "AGI" is not considered important either, as long as one can starve out most competitors.
And for the newcomers, the scale needs to be bigger than what the incumbents (Google and Microsoft) have as discretionary spending, which is at least a few billion per year, because at that rate those companies can sustain it forever and would be default winners. So I think yearly expenditure is going to be $20B+ and climbing.
sdesol
> So it is a game of being the one that is left standing
Or the last investor. When this type of money is raised, you can be sure the earlier investors are looking for ways to have a soft landing.
leptons
It's the Uber business plan - losing money until the competition loses more and goes out of business. So far Lyft seems to be doing okay, which proves the business plan doesn't really work.
solomonb
They are only getting deprecated this fast because the cost of training is in some sense sustainable. Once it is not, then they will no longer be deprecated so fast.
protocolture
>The compute moat is getting absolutely insane.
Is it?
Seems like there's a tiny performance gain between "this runs fine on my laptop" and "this required a $10B data centre."
I don't see any moat, just crazy investment hoping to crack the next thing and moat that.
docdeek
> The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.
For what it is worth, $13 billion is about the GDP of Somalia (about 150th in nominal GDP), a country with a population of 15 million people.
Aeolun
As a fun comparison, because I saw the population is more or less the same: the GDP of the Netherlands is about $1.2 trillion with a population of 18 million people.
I understand that that's not quite what's meant by "small country," but in both population and size the comparison doesn't necessarily seem accurate.
Aurornis
Country scale is weird because it has such a large range.
California (where Anthropic is headquartered) has over twice as many people as all of Somalia.
The state of California has a GDP of $4.1 Trillion. $13 billion is a rounding error at that scale.
Even the San Francisco Bay Area alone has around half as many people as Somalia.
nradov
That's why wealthy investors connected to the AI industry are also throwing a lot of money into power generation startups, particularly fusion power. I doubt that any of them will actually deliver commercially viable fusion reactors but hope springs eternal.
mapt
Continuing to carve out economies of scale in battery + photovoltaic for another ten doublings has plenty of positive externalities.
The problem is that in the meantime, they're going to nuke our existing power grid, built from the 1920s to the 1950s to serve our population as it was in the 1970s, and for the most part not expanded since. All of the delta comes from price-mediated "demand reduction" of existing users.
UltraSane
A lot of the biggest data centers being built are also adding behind-the-meter generation dedicated to them.
vrt_
Imagine solving energy as a side effect of this compute race. There's finally a reason for big money to be invested into energy infrastructure and innovation to solve a problem that can't be solved with traditional approaches.
bobsmooth
I would trade the destruction of trustworthy information and images on the internet for clean fusion power. It's a steep cost but I think it's worth it.
powerapple
Also, not all of that compute was necessary for the final model; a large chunk of it goes to trial-and-error research. In theory, for the $1B you spent training the latest model, a competitor will be able to do the same six months later for $100M.
SchemaLoad
Not only are the actual models rapidly devaluing, the hardware is too. Spend $1B on GPUs and next year there's a much better model out that massively devalues your existing datacenter. These companies are building mountains of quicksand and have to constantly pour more cash on top, or else they'll rapidly be left with no advantage.
m101
This round started with a $5bn target and ended at $13bn. When this sort of thing happens, it's normally because the company 1) wants to hit the "hot" market, and 2) has uncertainty about its ability to raise at higher valuations in the future.
Whatever the reason, the signal Anthropic insiders are sending is negative for AI investors.
Other comments having read a few hundred comments here:
- there is so much confusion, uncertainty, and fanciful thinking that it reminds me of the other bubbles that existed when people had to stretch their imaginations to justify valuations
- there is increasing spend on training models, and decreasing improvements in new models. This does not bode well
- wealth is an extremely difficult thing to define; it arises vaguely through things like cooperation and trade. Ultimately these LLMs actually do need to create "wealth" to justify the massive investments made. If they don't do this fast, this house of cards is going to fall, fast.
- having worked in finance and spoken to finance types for a long time: they are not geniuses. They are far from it. Most people went into finance because of an interest in money. Just because these people have $13bn of other people's money at their disposal doesn't mean they are any smarter than people orders of magnitude poorer. Don't assume they know what they are doing.
masterjack
I might agree if it were a 20% dilution round, but not if they are increasing from 3% to 7% dilution. Being so massively oversubscribed is a bullish sign; bad companies would be struggling to fill out their round.
utyop22
Lol yeah I generally read most comments on here with one eye closed. This is one of the good ones though.
code4tee
Impressive round, but it seems unlikely this game can go on much longer before something implodes. Given the amount of cash you need to set on fire to stay relevant, it's becoming nearly impossible for all but a few players to stay competitive, and those players have yet to demonstrate a viable business model.
With all these models converging, the big players aren’t demonstrating a real technical innovation moat. Everyone knows how to build these models now, it just takes a ton of cash to do it.
This whole thing is turning into an expensive race to the bottom. Cool tech, but bad business. A lot of VC folks gonna lose their shirt in this space.
rsanek
I was convinced of this line of thinking for a while too but lately I'm not so sure. In software in particular, I think it's actually quite relevant what you can do in-house with a SOTA model (especially in the tool calling / fine tuning phase) that you just don't get with the same model via API. Think Cursor vs. Claude Code -- you can use the same model in Cursor, but the experience with CC is far and away better.
I think of it a bit like the Windows vs. macOS comparison. Obviously there will be many players that will build their own scaffolding around open or API-based models. But there is still a significant benefit to a single company being able to build both the model itself as well as the scaffolding and offering it as a unit.
mritchie712
CC being better than Cursor didn't make sense to me until I realized Anthropic trains[0] its models to use its own built-in tools[1].
0 - https://x.com/thisritchie/status/1944038132665454841
1- https://docs.anthropic.com/en/docs/agents-and-tools/tool-use...
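For context on what "built-in tools" means mechanically, here's a rough sketch of tool use with the anthropic Python SDK. The get_weather tool is a made-up example, and the model name is an assumption; check the current docs for specifics.

    # Sketch of Claude tool use via the Messages API (anthropic Python SDK).
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model id; may differ
        max_tokens=1024,
        tools=[{
            "name": "get_weather",  # hypothetical tool for illustration
            "description": "Get the current weather for a city.",
            "input_schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }],
        messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    )

    # If the model opts to call the tool, the response includes a tool_use block;
    # you run the tool yourself and send back a tool_result on the next turn.
    for block in message.content:
        if block.type == "tool_use":
            print(block.name, block.input)

The point upthread is that a model trained heavily against its own tool schemas, as Claude reportedly is for Claude Code's built-ins, will call them more reliably than a model glued onto third-party scaffolding.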
criemen
I'm not so confident in that yet. If you look at the inference prices Anthropic charges on the API, it's not a race to the bottom: they are asking for what I feel is a lot of money, yet people keep paying it.
worldsayshi
Yeah, a collapse should only mean that training larger models becomes nonviable, right? Selling inference alone should still deliver profit.
xpe
> Everyone knows how to build these models now, it just takes a ton of cash to do it.
This ignores differential quality, efficiency, partnerships, and lots more.
1oooqooq
you say it can't go much longer, yet herbalife is still listed.
dcchambers
And unfortunately, the amount of money being thrown around means that when the bottom falls out and it's revealed that the emperor has no clothes, the implosion is going to impact all of us.
It's going to rock the market like we've never seen before.
jononor
Hope it stays long enough to build up serious electricity generation, storage, and distribution. That has a lot of productive uses and has historically been underdeveloped (in favor of fossil fuels). Though there will likely be a squeeze before we get there...
axus
The electricians in data center country report they are earning a lot of money.
nathan_douglas
It'd be an interesting time for China to invade Taiwan.
m101
Why is this downvoted when it's spot on? If reality falls short of expectations, a lot of money ends up sitting on rapidly depreciating assets. It will be bad. The risk is to the downside.
dcchambers
Being critical of AI companies on Hacker News is pretty tough these days. Either the majority of people are all-in and want to bury their heads in the sand about the real dangers and risks (economic, psychological, etc.), or there's just a lot of astroturfing going on.
ijidak
I think we underestimate the insane amount of idle cash the rich have. We know that the top 1% owns something like 80% of all resources, so they don't need that money.
They can afford to burn a good chunk of global wealth so that they can have even more global wealth.
Even at the current rates of insanity, the wealthy have spent a tiny fraction of their wealth on AI.
Bezos could put up this $13 billion himself and remain a top five richest man in the world.
(Remember Elon cost himself $40 billion because of a tweet and still was fine!)
This is a technology that could replace a sizable fraction of humankind as a labor input.
I'm sure the rich can dig much deeper than this.
not_the_fda
"This is a technology that could replace a sizable fraction of humamkind as a labor input."
And if it does? What happens when a sizable fraction of humamkind is hungry and can't find work? It usually doesn't turn out so well for the rich.
dweekly
I don't think most folks think very hard about where most wealth comes from but imagine it just sort of exists in a fixed quantity or is pulled from the ground like coal or diamonds - there's a fixed amount of it, and if there are very rich people, it must be because they took the coal/diamonds away from other people who need it. This leads to catchy slogans.
But it's pretty obvious wealth can be created and destroyed. The creation of wealth comes from trade, which generally comes from a vibrant middle class which not only earns a fair bit but also spends it. Wars and revolutions are effective at destroying wealth and (sometimes) equitably redistributing what's left.
Both the modern left and modern right seem to have arrived at a consensus that trade frictions are a good way to generate (or at least preserve) wealth, while the history of economics indicates quite the contrary. This was recently best pilloried by a comic that showed a town under siege and the besieging army commenting that this was likely to make the city residents wealthy by encouraging self-reliance.
We need abundant education and broad prosperity for stability - even (and maybe especially) for the ultra wealthy. Most things we enjoy require absolute and not relative wealth. Would you rather be the richest person in a poor country or the poorest of the upper class in a developed economy?
harmmonica
The big question is whether it replaces jobs without creating new opportunity to make up for those casualties. I'm not sold on this, but part of me actually believes LLMs, or perhaps AI more broadly, will enable vast numbers of people to do things that were formerly impossible for them because the cost was too great or the prospect too complex. Now those same things are not only accessible but easy to access. I made a comment earlier today in the thread about Google's antitrust "win": things I formerly couldn't have done without sizable and costly third-party professional help are now possible at near-zero cost and in near-zero time. It really can radically empower folks. I'm not sure that will make up for all the job loss, but there is the possibility of real empowerment.
xpe
Remember the YouTube acquisition? Many probably don’t since it was 2006. $1.65B. To many, it seemed bonkers.
Narrow point: In general, one person’s impression of what is crazy does not fare well against market-generated information.
Broader point: If you think you know more than the market, all other things equal, you’re probably wrong.
Lesson: Only searching for reasons why you are right is a fishing expedition.
If the investment levels are irrational, to what degree are they? How and why? How will it play out specifically? Predicting these accurately is hard.
slashdave
> Only searching for reasons why you are right is a fishing expedition.
Not to be mean, but aren't you being a little hypocritical here, bringing up your bespoke example of YouTube?
pnt12
I mean, this sounds like survivor bias in action?
Google also bought Motorola for 12 billion and Microsoft bought Nokia for 7 billion. Those weren't success cases.
Or, more similarly, WeWork got $12B from investors and isn't doing well (hell, bankrupt, according to Wikipedia).
tick_tock_tick
> Google also bought Motorola for 12 billion and Microsoft bought Nokia for 7 billion. Those weren't success cases.
A lot of that was patent acquisition rather than trying to run those businesses, so it's hard to call them successes or failures.
nikanj
$183B makes sense because 20 years ago something else was valued at $1.65 billion and money has decreased in value 100-fold?
xenobeb
You are just making up nonsense.
xyst
Somebody didn’t get the memo from MIT…
fancyfredbot
So many negative comments here! The fact that one of the top players in a new market segment with significant growth potential can raise $13B at a 20x revenue valuation is not the bubble indicator you think it is.
It's at least possible that the investment pays off. These investors almost certainly aren't insane or stupid.
We may still be in a bubble, but before you declare money doesn't mean anything any more and start buying put options I'd probably look for more compelling evidence than this.
slashdave
> can raise $13B at a 20x revenue valuation is not the bubble indicator you think it is.
Wait a minute. Isn't this the very definition of a bubble?
utyop22
Remind me what happened re. SoftBank + WeWork.
mateus1
> These investors almost certainly aren't insane or stupid.
I'm sure this exact sentence was said before every bubble burst.
sothatsit
Most investors I've heard talk about the AI bubble have mentioned exactly that they know it is a bubble. They are just playing the game, because there is money to be made before that bubble bursts. And additionally, there is real value in these companies.
I would assume the majority of investors in AI are playing a game of estimating how much more these AI valuations can run before crashing, and whether that crash will matter in the long-run if the growth of these companies lives up to their estimates.
fancyfredbot
That sounds very cynical and knowing, which is obviously great, but not super interesting. Do you think the investors are insane or stupid? Do you think this is a bubble and that it's about to burst? I'm interested to know why.
kittikitti
These are the same investors who got scammed by SBF who didn't even have a basic spreadsheet that explained the finances.
fancyfredbot
I see two of the nineteen investors were also invested in FTX (Insight and Ontario Teachers'). With hindsight that was a bad investment, although they probably recovered their money, so it likely wasn't their worst. Does this actually tell you they are stupid or insane?
I think that's one possible interpretation but another is that these funds choose to allocate a controlled portion of their capital toward high risk investments with the expectation that many will fail but some will pay off. It's far from clear that they are crazy or stupid.
utyop22
They recovered their money, but what about the opportunity cost? It's actually an economic loss. In retrospect, given the risk, it was a pretty terrible investment.
Wojtkie
... or really any SoftBank Vision Fund backed startup
ankit219
Their projections put ARR at a high of $9B by the end of this year[1], with reported gross margins of 60% (negative 30% on usage through the cloud-provider partnerships). All things considered, if this pans out, it's a 20x multiple. High, yes, but not that crazy, especially considering their growth rate, and at a decent margin at the gross-margin level.
[1]: It was $3B at the end of May (so likely $250M in May alone), and $5B at the end of July (so roughly $400M that month).
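Spelling out the arithmetic on those figures (the $183B post-money valuation is from the round's coverage; everything here is press-reported or estimated, not audited):

    # Back-of-envelope from reported numbers, not audited financials.
    valuation = 183e9            # reported post-money valuation
    projected_arr = 9e9          # high end of projected year-end ARR
    may_rate, july_rate = 0.25e9, 0.4e9  # implied monthly revenue run rates

    print(f"Forward multiple: {valuation / projected_arr:.1f}x")          # ~20.3x
    print(f"May -> July monthly growth: {july_rate / may_rate - 1:.0%}")  # ~60%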
1oooqooq
Exactly. What are people who make these investments even betting on? It certainly is not revenue or dividends, so it can only be a bet that the stock will go up faster than other, less risky stocks.
And we continue to pretend that the market generates any semblance of value.
jedberg
> what are people who make these investments even betting on?
That they achieve AGI, or a close approximation, and end up wealthier than God.
That's basically the bet here. Invest in OpenAI and Anthropic, and hope one of them reaches near-AGI.
utyop22
But if you're an investor who doesn't care about the long-term value of the firm, all you care about is maximizing your return on future sales of the shares of stock.
Doing proper intrinsic valuation with technology firms is nigh-on impossible to do.
bradley13
Throwing money and compute at AI strikes me as a very short-term solution. In the end, the human brain does not run off a nuclear power plant, not even when we are learning.
I expect the next breakthroughs to be all about efficiency. Granted, that could be tomorrow or in 5 years, and the AI companies have to stay all in in the meantime.
sixdimensional
I'm not sure quantum computing is the solution, but it strikes me that a completely new compute paradigm like quantum computing is probably what is necessary: something orders of magnitude more efficient and powerful than today's binary compute.
ryukoposting
This is roughly where I am on the matter. If the energy costs stay massive, your investment in AI is really just a bet that energy production will get cheaper. If the energy costs fall, so does the moat that keeps valuations like this one afloat.
If there's a step-function breakthrough in efficiency, it's far more likely to be on the model side than on the semiconductor side. Even then, investing in the model companies only makes sense if you think one of them is going to be able to keep that innovation within their walls. Otherwise, you run into the same moat-draining problem.
Davidzheng
The human brain can't run off a nuclear power plant because that was too hard for evolution to figure out, but we figured it out. There's no reason running on a nuclear power plant won't give much higher intelligence.
d_burfoot
There's a big issue with a lot of thinking about these valuations, which is that LLM inference does not need the 5-nines of uptime guarantees that cloud datacenters provide. You are going to see small business investors around the world pursue the following model:
- Buy an old warehouse and a bunch of GPUs
- Hire your local tech dude to set up the machines and install some open-source LLMs
- Connect your machines to a routing service that matches customers who want LLM inference with providers (a toy sketch follows below)
If the service goes down for a day, the owner just loses a day's worth of income, nobody else cares (it's not like customers are going to be screaming at you to find their data). This kind of passive, turn-key business is a dream for many investors. Comparable passive investments like car washes, real estate, laundromats, self-storage, etc are messier.
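To make the routing-service piece concrete, here's a toy sketch; all the names, prices, and the scoring rule are invented for illustration:

    # Toy matcher: send a request to the cheapest online provider that serves
    # the requested model. Purely illustrative, not a real service.
    from dataclasses import dataclass

    @dataclass
    class Provider:
        name: str
        models: set[str]
        price_per_mtok: float  # dollars per million output tokens
        online: bool

    def route(model: str, providers: list[Provider]) -> Provider | None:
        live = [p for p in providers if p.online and model in p.models]
        return min(live, key=lambda p: p.price_per_mtok, default=None)

    providers = [
        Provider("warehouse-a", {"qwen3-30b"}, 0.60, online=True),
        Provider("warehouse-b", {"qwen3-30b"}, 0.45, online=False),  # down for a day
    ]
    print(route("qwen3-30b", providers).name)  # warehouse-a picks up the traffic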
matt3D
I use OpenAI's batch mode for about 80% of my AI work at the moment, and one of the upsides is it reduces the frantic side of my AI work. When the response is immediate I feel like I can't catch a break.
I think once the sheen of Microsoft Copilot and the like wear off and people realise LLMs are really good at creating deterministic tools but not very good at being one, not only will the volume of LLM usage decline, but the urgency will too.
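For reference, "batch mode" here is OpenAI's Batch API: you upload a JSONL file of requests and collect the results within a 24-hour window at a discount. A minimal sketch, with the file contents and model as placeholders:

    # Minimal Batch API sketch (openai Python SDK). requests.jsonl holds one
    # request per line, e.g.:
    # {"custom_id": "job-1", "method": "POST", "url": "/v1/chat/completions",
    #  "body": {"model": "gpt-4o-mini", "messages": [...]}}
    from openai import OpenAI

    client = OpenAI()

    batch_file = client.files.create(file=open("requests.jsonl", "rb"), purpose="batch")
    batch = client.batches.create(
        input_file_id=batch_file.id,
        endpoint="/v1/chat/completions",
        completion_window="24h",
    )
    print(batch.id, batch.status)  # poll client.batches.retrieve(batch.id) until done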
utyop22
Yeah, these things take time to play out. So I always just say: the large populace will finally realise that fantasy and reality have to converge at some point.
j7ake
Wait did I see “ Ontario Teachers' Pension Plan” as an investor?
Are they putting Canadian public funds into Anthropic?
noleary
Ontario Teachers' is a pretty active principal in venture/growth financings and a major LP to a bunch of funds. That said, venture/growth is a pretty small percentage of their holdings.
---
[1] https://www.crunchbase.com/organization/ontario-teachers-pen...
[2] https://www.otpp.com/en-ca/investments/our-investments/teach...
datadrivenangel
That's how investments this big get made: pension funds and other similar trusts need returns, and at a certain point, if SoftBank says they have a way to deploy billions of dollars, you don't have better options...
xp84
To me, 'public pension monies' (more or less, retirement savings from citizens who happen to work for the government) and 'public funds' don't seem like the exact same thing. To me, public funds implies money from the government budget or sovereign wealth funds.
Although I admit that the government may be on the hook to replenish any spectacular failures in such a pension plan so in that way, it is somewhat fair -- though I doubt any one investment is weighted so heavily in any pension fund as to precipitate such an event.
j7ake
Government workers are funded by public money, so public pension monies are funded by public money ultimately.
IshKebab
They're a huge VC. Paid my wages for a few years.
sebzim4500
Weren't they also a significant investor in FTX?
arduanika
Yes. So they indirectly owned some Anthropic through the FTX bankruptcy. I kinda wonder whether they somehow opted to keep their Anthropic stake when the FTX estate sold. Or maybe they bought it at some other time.
sidewndr46
Was FTX actually liquidated? Last I read, the lawyers were just busy paying themselves $500,000 a day.
OhMeadhbh
Yes.
VirgilShelton
My contrarian take on the astronomical costs needed to scale LLM infrastructure: precisely because it costs so much, innovation in the grid and in power plants/renewables will also see massive gains, and that may ultimately save our planet.
tpurves
And 75% of that just gets shipped right over to NVIDIA as pure profit. The mind boggles at the macroeconomic inefficiency of that situation.
unsupp0rted
Hopefully this'll give them another 3 months of runway, so they can go back to letting me use Claude Sonnet for 5 hours out of the 5-hour limit, rather than the 2.5 hours I'm getting now.
($100-plan, no agents, no mcp, one session at a time)
1oooqooq
ewww. paying for AI is worse than paying for porn.
llamasushi
The compute moat is getting absolutely insane. We're basically at the point where you need a small country's GDP just to stay in the game for one more generation of models.
What gets me is that this isn't even a software moat anymore - it's literally just whoever can get their hands on enough GPUs and power infrastructure. TSMC and the power companies are the real kingmakers here. You can have all the talent in the world but if you can't get 100k H100s and a dedicated power plant, you're out.
Wonder how much of this $13B is just prepaying for compute vs actual opex. If it's mostly compute, we're watching something weird happen - like the privatization of Manhattan Project-scale infrastructure. Except instead of enriching uranium we're computing gradient descents lol
The wildest part is we might look back at this as cheap. GPT-4 training was what, $100M? GPT-5/Opus-4 class probably $1B+? At this rate GPT-7 will need its own sovereign wealth fund
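For a rough sense of where numbers like "$100M to train GPT-4" come from, the standard back-of-envelope is ~6 FLOPs per parameter per training token. Every input below is an illustrative assumption, not a disclosed figure:

    # Training-cost estimate via the ~6*N*D FLOPs rule of thumb.
    # All inputs are assumptions for illustration, not known figures.
    params = 1e12            # hypothetical 1T-parameter model
    tokens = 15e12           # hypothetical 15T-token training run
    flops = 6 * params * tokens                    # ~9e25 FLOPs

    gpu_peak = 1e15          # ~1 PFLOP/s per H100-class GPU at low precision
    utilization = 0.4        # sustained fraction of peak, optimistically
    cost_per_gpu_hour = 3.0  # rough rented-compute price, dollars

    gpu_hours = flops / (gpu_peak * utilization) / 3600
    print(f"{gpu_hours / 1e6:.1f}M GPU-hours, ~${gpu_hours * cost_per_gpu_hour / 1e6:,.0f}M")
    # ~62.5M GPU-hours, roughly $188M at these assumptions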