LLMs are cheap
179 comments · June 9, 2025 · xxbondsxx
Palmik
> API that is likely a loss-leader to grab market share (hosted LLM cloud models).
I don't think so, not anymore.
If you look at API providers that host open-source models, you will see that they have a very healthy margin between their API prices and their inference hardware costs (which are, of course, not their only costs) [1]. And that does not take into account any proprietary inference optimizations they have.
As for closed-model API providers like OpenAI and Anthropic, you can make an educated guess based on the not-so-secret information about their model sizes. As far as I know, Anthropic has an extremely good margin between API prices and inference hardware costs.
[1]: This is something you can verify yourself if you know what it costs to run those models in production at scale, hardware wise. Even assuming use of off-the-shelf software, they are doing well.
Xmd5a
I use Whisper to transcribe long conversations, and deploying the model myself on Vast.ai is ten times cheaper than OpenAI's API offering.
noodletheworld
I don’t completely disagree, but “assertion one” [1] is an enormously weak argument.
[1] ~ you can obviously verify this yourself by doing it yourself and seeing how expensive it is.
You suppose. You guess. We guess.
Let’s be honest, you can just stop at:
> I don’t think so.
Fair. I don’t either; but that’s about all we can really get at the moment afaik.
JimDabell
> you also don't have any evidence that they are profitable.
Sure we do. Go to AWS or any other hosting provider and pay them for inference. You think AWS are going to subsidise your usage of somebody else’s models indefinitely?
> All the data points we have today show that companies are spending an insane amount of capex on gaining AI dominance without the revenue to achieve profitability yet.
Yes, capex not opex. The cost of running inference is opex.
bee_rider
> Yes, capex not opex. The cost of running inference is opex.
This seems sort of interesting, maybe (I don’t know business, though). I agree that the cost of running inference is part of the opex, but saying that doesn’t rule out putting other stuff in the opex bucket.
Currently these LLM companies train models on rented Azure nodes in an attempt to stay at the head of the pack, to be well positioned for when LLMs become really useful in a “take many white collar jobs” sense, right?
So, is it really obvious what’s capex and what’s opex? In particular:
* The nodes used for training are rented, so that’s opex, right?
* The models are in some sense consumable? Or at least temporary. I mean, they aren’t cutting edge anymore after a year or so, and the open weights models are always sneaking up on them, so at least they aren’t a durable investment.
JimDabell
> The nodes used for training are rented, so that’s opex, right?
It’s capex. They are putting money in, and getting an asset out (the weights).
> The models are in some sense consumable?
Assets depreciate.
dragontamer
Purchasing new GPUs is capex but depreciation of GPUs is opex.
There's still a cost; it's just pushed into the future.
antman
No we don't. MS used their OpenAI position as a strategy to increase Azure adoption. I am surprised AWS didn't give models away for free.
rco8786
AWS isn’t doing the training on those models.
JimDabell
OpenAI spends less on training than inference, so the worst case scenario is less than double the cost after factoring in training. Inference is still cheap.
ceejayoz
> You think AWS are going to subsidise your usage of somebody else’s models indefinitely?
As with Costco's giant $5 roasted chickens, this is not solid evidence they're profitable. Loss-leaders exist.
lhl
Rather than speculating, another option is to just measure things. I churned through billions of tokens for evals and synthetic data earlier this year, so I did some of that. On an H100 node, Llama 3 70B at FP8 with concurrency=128 generated at about 0.4 J/token (estimated from node power consumption, multiplied by a generous PUE of 1.2X or so). That is still 120X cheaper than the 48 J/token estimates of the cost to run the 175B GPT-3 on 2021-era Microsoft DC1 hardware (Li et al. 2023), and 10X cheaper than the 3-4 J/token empirical measurements of running LLaMA-65B on V100/A100 HPC nodes (Samsi et al. 2023).
Anyway, 0.4 J/token at a cost of 5 cents/kWh works out to about 0.5 cents per million tokens. Even at 50% utilization you're only up to 1.1 cents/M tokens. Artificial Analysis reports the current average price of Llama 3.3 70B to be about $0.65/M tokens. I'd assume most of what you're paying for is the depreciation schedule of the hardware.
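(A quick sanity check of that arithmetic, if you want to rerun it with your own assumptions:)
```python
# Energy cost per token -> cost per million tokens, using the same
# assumptions as above (0.4 J/token measured, $0.05/kWh electricity).
JOULES_PER_KWH = 3.6e6

j_per_token = 0.4
usd_per_kwh = 0.05

kwh_per_m_tok = 1e6 * j_per_token / JOULES_PER_KWH    # ~0.11 kWh
cents_per_m_tok = kwh_per_m_tok * usd_per_kwh * 100   # ~0.56 cents

print(f"{kwh_per_m_tok:.3f} kWh per 1M tokens")
print(f"{cents_per_m_tok:.2f} cents per 1M tokens at full load")
print(f"{cents_per_m_tok / 0.5:.2f} cents per 1M tokens at 50% utilization")
```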
Note that, of course, modern-day 7B-class models stomp on both of those older models, so you could throw in another 10X cost reduction if you quality-adjust. Also, I did minimal perf tuning: I used FP8, and W8A8-INT8 is both faster and slightly better quality (in my functional evals). I also used -tp 8 for my system; with -tp 4 plus model parallelism and cache-aware routing, you should be able to increase throughput a fair amount. Speculative decoding with a basic draft model would give you another boost. And this was tested at the beginning of the year, so on vLLM 0.6.x or so; the vLLM 1.0 engine is faster (better graph building, compilation, scheduling). I'd guess that if you were conscientious about optimizing, you could probably get at least another 2X perf free, with basically just "config".
xxbondsxx
For example, Perplexity has reportedly been fudging its accounting, shifting COGS into R&D to make its margins appear healthy: https://thedeepdive.ca/did-perplexity-fudge-its-numbers/
jstummbillig
There are also a lot of different models at a lot of different price points (and LLMs are fairly hard to compare to begin with). In this theory of a likely loss-leader, must we assume that all of them, from all companies, are priced below cost...? If so, that seems like a fairly wild claim. What's Step 2 for all of these companies to get ahead of this, given how model development currently works?
I think the far more reasonable assumption is: It's profitable enough to not get super nervous about the existence of your company. You have to build very costly models and build insanely costly infrastructure. Running all of that at a loss without an obvious next step, because ALL of them are pricing to not even make money at inference, seems to require a lot of weird ideas about how companies are run.
otterley
We’ve seen this pattern before. This happened in the 1990s during the original dot-com boom. Investors gamble, everything is subsidized, most companies fail, and the ones left standing then raise prices.
dietr1ch
I don't think it's that wild. Hardware and performance will keep improving, but once the market stops expanding and user behaviour stagnates, market shares will solidify. So you'd better aim for a large share now, so that scale, together with those improvements, helps you reach profitability.
raincole
> an API that is likely a loss-leader to grab market share (hosted LLM cloud models)
Everyone just repeats this but I never buy it.
There is literally a service that allows you to switch models and providers seamlessly (OpenRouter). There is just no lock-in. It doesn't make any financial sense to "grab market share".
If you sell something with UI, like ChatGPT (the web interface) or Cursor, sure. But selling API at a loss is peak stupidity and even VCs can see that.
mupuff1234
Except they most likely do have a plan to make it harder to switch.
raincole
Yeah, sure, please elaborate on how providers such as Fireworks, DeepInfra, Chutes are going to "make it harder to switch."
DarmokJalad1701
Who is "they"? It makes no sense for Openrouter to allow providers that do not conform to the API. They profit from the commission from the fees and not providing inference.
pama
Please read DeepSeek's analysis of their API service (linked in this article): they report a 500% profit margin, and they are cheaper than any of the US companies serving the same model. It is conceivable that the API services of OpenAI and Anthropic have even higher profit margins.
(GPUs are generally much more cost effective and energy efficient than CPUs if the solution maps to both architectures. Anthropic certainly caches the KV-cache of their 24k-token system prompt.)
SEGyges
Every LLM provider caches KV values; it's a publicly documented technique (go stuff that KV in Redis after each request, basically) and a good engineering team could set it up in a month.
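A minimal sketch of what that looks like, assuming a hypothetical compute_kv that stands in for the model's prefill pass (this is not any particular library's API):
```python
import hashlib
import pickle

import redis

r = redis.Redis()

def kv_for_prefix(prefix: str, compute_kv):
    """Return the KV tensors for a prompt prefix, caching them in Redis."""
    key = "kv:" + hashlib.sha256(prefix.encode()).hexdigest()
    cached = r.get(key)
    if cached is not None:
        return pickle.loads(cached)        # hit: skip the prefill entirely
    kv = compute_kv(prefix)                # miss: run the prefill once
    r.set(key, pickle.dumps(kv), ex=3600)  # keep it around for an hour
    return kv

# e.g. every request shares the big system prompt as its prefix:
#   kv = kv_for_prefix(SYSTEM_PROMPT, compute_kv)
```
(In practice the tensors are large, so real deployments keep them in GPU or host memory with smarter eviction, as vLLM's prefix caching does, but the shape of the idea is the same.)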
iamnotagenius
With all due respect to DeepSeek, I would take their numbers with a grain of salt, as they might well be politically motivated.
jarym
Any more politically motivated than a model from anywhere else?
WithinReason
is that better or worse than commercially motivated?
int_19h
The problem with this theory in general is that, given the sheer number of cloud inference providers (most of which are hosting third party models), it would be exceedingly strange if not only all of them are engaging in this same tactic, but apparently all of them have the same financial capacity to do so.
ddp26
I analyzed OpenAI API profitability in summer 2024 and found inference for gpt-4 class models likely pretty profitable, ~50% gross margins (ignoring capex for training models): https://futuresearch.ai/openai-api-profit
otterley
That’s a little like saying you can compute the profitability of the energy market by looking only at the margins of gas stations. You can’t exclude all the outlays on actually acquiring the product to sell.
paxys
The entire comparison hinges on people only making simple factual searches ("what is the capital of USA") on both search engines and LLMs. I'm going to say that's far enough from the standard use case for both these sets of APIs to be entirely meaningless.
- If I'm using a search engine, I want to search the web. Yes these engines are increasingly providing answers rather than just search results, but that's a UI/product feature rather than an API one. If I'm paying Google $$ for access to their index, I'm interested in the index.
- If I'm using an LLM, it is for parsing large amounts of input data, image recognition, complex analysis, deep thinking/reasoning, coding. All of these result in significantly more token usage than a 2-line "the answer to your question is xyz" response.
The author is basically saying: a Honda Civic is cheap because it costs about the same per pound as Honeycrisp apples.
dale_glass
I think the issue is that the classical search engine model has increasingly become less useful.
There are fewer experts using search engines. Normal people treat search engines less like an index search and more like a person. Asking an old-school search engine "What is the capital of USA" is actually not quite right, because the "what is" is superfluous, and you're counting on finding some sort of educational website with the answer. In fact, phrasing it as "the capital of the USA is" is probably a better fit for a search engine, since that's the sort of sentence that would contain what you want to know.
Also with the plague of "SEO", there's a million sites trying to convince Google that their site is relevant even when it's not.
So LLMs are more and more relevant for informally phrased queries that don't actually contain the relevant keywords, and they're also much more useful in that they bypass a lot of pointless verbiage, spam, ads, and requests to subscribe.
agentultra
Most search engines will parse the query sentence much more intelligently than that. It's not literally matching every word and hasn't for decades. I just tried a handful of popular search engines, they all return the appropriate responses and links.
dale_glass
They're not that literal anymore, of course, but they still don't compare to an LLM. In the end it's still mostly searching for keywords, albeit with a few tweaks here and there, and the ability to answer vague questions mostly works by finding forums and Reddit posts where people asked that specific question and hopefully got an answer.
When you're asking a standard question like the capital of whatever, that works great.
When you have one of those weird issues, it often lands you in a thread somewhere in the Ubuntu forums where people tried to help this person, nothing worked, and the thread died 3 years ago.
Just the fact that LLMs can translate between languages already adds an amazing amount of usefulness that search engines can't have. There seems to be a fair amount of obscure technical info that's only available in Russian for some reason.
atrettel
This is a great point. I'll add that search engines are also unclear about what kind of output they give. As you point out, search engines accept both questions and key words as queries. Arguably you'd want completely different searches/answers for those. Moreover, search engines no longer just output web sites with the key words but also give an "AI overview" in an attempt to keep you on their site, which is contrary to what search engines have traditionally done. Previously search engines were something you pass through but they now try to position themselves as destinations instead.
I'd argue that search engines should stick to just outputting relevant websites and let LLMs give you an overview. Both technologies are complementary and fulfill different roles.
phillipcarter
> If I'm using an LLM, it is for parsing large amounts of input data, image recognition, complex analysis, deep thinking/reasoning, coding. All of these result in significantly more token usage than a 2-line "the answer to your question is xyz" response.
Correct, but you're also not the median user. You're a power user.
xpe
> The entire comparison hinges on people only making simple factual searches ... on both search engines and LLMs.
I disagree, but I can see why someone might say this, because the article's author writes:
> So let's compare LLMs to web search. I'm choosing search as the comparison since it's in the same vicinity and since it's something everyone uses and nobody pays for, not because I'm suggesting that ungrounded generative AI is a good substitute for search.
Still, the article's analysis of "is an LLM API subsidized or not?" does not _rely_ on a comparison with search engines. The fundamental analysis is straightforward: comparing price versus cost per unit (of something). The goal is to figure out the marginal gain/loss per unit. For an LLM, the unit is often a token or an API call.
Summary: the comparison against search engine costs is not required to assess if an LLM APIs is subsidized or not.
og_kalu
>The entire comparison hinges on people only making simple factual searches
You have a point, but no, it doesn't. The article already kind of addresses it, but OpenAI had a pretty low loss in 2024 for the volume of usage they get. $5B seems like a lot until you realize that chatgpt.com alone, even in 2024, was one of the most visited sites on the planet each month, with the vast majority of those visits being entirely free users (no ads, nothing). OpenAI said in December last year that ChatGPT handles over a billion messages per day.
So even if you look at what people do with the service as a whole in general, inference really doesn't seem that costly.
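For a crude upper bound, take those two figures at face value:
```python
# Blame the entire reported 2024 loss on serving free chat messages
# (it obviously also covers training, salaries, etc., so this is a ceiling).
annual_loss_usd = 5e9       # reported ~$5B loss in 2024
messages_per_day = 1e9      # "over a billion messages per day"

loss_per_message = annual_loss_usd / (messages_per_day * 365)
print(f"${loss_per_message:.3f} per message")   # ~$0.014, i.e. ~1.4 cents
```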
disgruntledphd2
I'll definitely buy that argument for OpenAI, but then why are Anthropic/XAI etc losing money? They don't have the same generous free tiers as OpenAI and yet they keep raising absurd amounts of money.
llm_nerd
The comparison is quite literally predicated on seeking an answer via both mechanisms. And the simple truth is that for an enormous percentage of users, that is indeed precisely how they use both search engines and LLMs: They want an answer to a question, maybe with some follow-up links so if that isn't satisfactory they can use heuristics to dig deeper.
Which is precisely why Google started adding their AI "answers". The web has kind of become a cancer -- the sites that game SEO the most seem to have the trashiest, most user-hostile behaviour, so search became unpleasant for most -- so Google just replaces the outbound visit conceptually.
fkyoureadthedoc
Anecdotally, I'm a paying user and do a lot of super basic queries. What is this bug, rewrite this drivel into an email to my HOA, turn me into a gnome, what is the worst state and why is it West Virginia.
This would probably increase 10x if one of the providers sold a family plan and my kids got paid access.
Most of my heavy lifting is work related and goes through my employer's pockets.
paxys
None of those are "basic queries", in the sense that you will not be able to solve them using the Google/Bing search API.
sdenton4
Careful there: Once the machine turns you into a gnome, the price to turn back is quite hefty. A friend of mine gave up an eye, I only lost my most cherished memory. And most people ask the wrong question entirely and are never heard from again.
johnisgood
I love your prompts. :D
WhyIsItAlwaysHN
There's something I don't get in this analysis.
The queries for the LLM which were used to estimate costs don't make a lot of sense for LLMs.
You would not ask an LLM to tell you the baggage size for a flight because there might be a rule added a week ago that changes this or the LLM might hallucinate the numbers.
You would ask an LLM with web search included, so it can find sources and ground the answer. This applies to any question where you need factual data; otherwise it's like asking a random stranger on the street about things that can cost you money. Then the token count balloons, because the LLM needs to add entire websites to its context.
If you are not looking for a grounded answer, you might be doing something more creative, like writing a text. In that case, you might be iterating on the text, with the entire discussion re-sent as context each turn. There may be caching/batching etc., but the tokens required still grow very fast.
In summary, I think the token estimates are likely quite off. But not to be all critical, I think it was a very informative post and in the end without real world consumption data, it's hard to estimate these things.
barrkel
Oh contraire, I ask questions about recent things all the time, because the LLM will do a web search and read the web page - multiple pages - for me, and summarize it all.
4o will always do a web search for a pointedly current question and give references in the reply that can be checked, and if it doesn't, you can tell it to search.
o3 meanwhile will do many searches and look at the thing from multiple angles.
zambal
But in that case it's hard to argue that LLMs are cheap in comparison to search (the premise of the article).
pzo
But that's from the user's perspective. Check Google's or OpenAI's pricing if you want grounded results via their APIs: Google asks $45 per 1k grounded searches, on top of tokens. If your business model is based on ads, you're unlikely to see a $45 CPM. Same if you want to offer a free version of your product; it gets expensive.
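For reference, the per-query arithmetic (using the price quoted above; check current pricing):
```python
grounding_usd_per_1k = 45.0                   # quoted grounded-search price
usd_per_query = grounding_usd_per_1k / 1000   # $0.045 per grounded query

print(f"${usd_per_query:.3f} per grounded query")
# Covering that with ads alone would require a $45 effective CPM just for
# grounding, before tokens or infrastructure; typical display CPMs are
# single-digit dollars.
```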
skywhopper
Yeah, the point is that this behavior uses a lot more tokens than the OP says is a “typical” LLM query.
WhyIsItAlwaysHN
But that was my point: then you need to include entire websites in the context, and it won't be 506 tokens per question. It will be thousands.
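As a rough sketch of how fast that grows, using the common ~4-characters-per-token heuristic (the URL is just a placeholder, and a real pipeline would strip boilerplate first):
```python
import urllib.request

# One page the model would have to read to ground its answer.
html = urllib.request.urlopen("https://example.com/").read()

page_tokens = len(html) / 4   # crude chars -> tokens estimate
base_tokens = 506             # the article's per-question figure

print(f"~{page_tokens:.0f} page tokens vs {base_tokens} base tokens")
# A real airline page is often 100+ KB of HTML, i.e. tens of thousands of
# tokens before any cleanup; far beyond 506.
```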
harperlee
Nitpick: Au contraire
brookst
Just tried asking “what is the maximum carryon size for an American Airlines flight DFW-CDG” and it did a web search, provided the correct answer, and provided links to both the airline and FAA sites.
Why wouldn’t I use it like this?
ceejayoz
That search query brings up https://www.aa.com/i18n/travel-info/baggage/carry-on-baggage... for the first result, which says "The total size of your carry-on, including the handles and wheels, cannot exceed 22 x 14 x 9 inches (56 x 36 x 23 cm) and must fit in the sizer at the airport."
What benefit did the LLM add here, if you still had to vet the sources?
SoftTalker
> What benefit did the LLM add here
Its answer was not buried in ads for suitcases, hotels, car rentals, and restaurants.
WhyIsItAlwaysHN
What I was saying is that you wouldn't use a raw LLM (so 506 tokens to get an answer). You would use it with web search so you can get the links.
The LLM has to read the websites to answer you so that significantly increases the token count, since it has to include them in its input.
adrian_b
I do not see the added benefit of the LLM in such cases, instead of doing that web search yourself, for free.
JimDabell
I just tried that search on Google.
The first thing I saw was the AI summary. Underneath that was a third-party site. Underneath that was “People also ask” with five different questions. And then underneath that was the link to the American Airlines site.
I followed the link to the official site. I was presented with a “We care about your privacy” consent screen, with four categories.
The first category, “Strictly necessary”, told me it was necessary for them to share info with eleven entities, such as Vimeo and LinkedIn, because it was “essential to our site operation”.
The remaining categories added up to 59 different entities that American Airlines would like to share my browsing data with while respecting my privacy.
Once I dismissed the consent screen, I was then able to get the information.
Then I tried the question on ChatGPT. It said “Searching the web”, paused for a second, and then it told me.
Then I tried it on Claude. It paused for a second, said “Searching the web”, and then it told me.
Then I tried it on Qwen. It paused for a second, then told me.
Then I tried it on DeepSeek. It paused for a second, said “Searching the web”, and then it told me.
All of the LLMs gave me the information more quickly, got the answer right, and linked to the official source.
Yes, Google’s AI answer did too… but that’s just Google’s LLM.
Websites have been choosing shitty UX for decades at this point. The web is so polluted with crap and obstacles it’s ridiculous. Nobody seems to care any more. Now LLMs have come along that will just give you the info straight away without any fuss, so of course people are going to prefer them.
pmdr
I really doubt that, in an industry where chips are so hard to come by, draw so much power and are so terribly expensive, big players could at any time flip a switch and become profitable.
They burn through insane amounts of cash and are, for some reason, still called startups. Sure, they'll be around for a long time until they figure something out, but unless hardware prices and power consumption go down, they won't be turning a profit anytime soon.
Just look at YouTube: in business for 20 years, but it's still unclear whether it's profitable or not, as Alphabet chooses not to disclose YT's net income. I'd imagine any public company would disclose those numbers, unless they're in the red.
patapong
Sure, but Alphabet is insanely profitable, based on having grabbed a lot of market share in the search market and showing people ads. The AI companies are betting that AI will be similarly important to people, and that there is at least some stickiness to the product, meaning that market share can eventually be converted to revenue. I think both of these are relatively likely.
dist-epoch
The stock price going up is another way a company is “profitable”. The Amazon playbook for 10+ years.
otterley
Stock prices are (at least in theory, discounting speculation) a consequence of profits; they are not profits in and of themselves. Profits are at the bottom of the income statement.
andrew_lettuce
Amazon made huge amounts of money as it captured more and more of the market, and didn't return any of it. The company literally became worth more each year. OpenAI continues to hemorrhage money.
otterley
Amazon hemorrhaged money for the first decade of its life. It was founded in 1994 and didn't turn its first annual profit until 2003.
wrsh07
I'm confused by this claim - OpenAI has pretty meaningful revenue.
If they monetized free users, they would have even better revenue. The linked post estimates eg $1 per user per month would flip them to profitable.
bfrog
It's another Uber moment for VC. The bullshit ends as soon as becoming a functioning business suddenly takes precedence, and the real costs start to come out.
Etheryte
> OpenAI reportedly made a loss of $5B in 2024. They also reportedly have 500M MAUs. To reach break-even, they'd just need to monetize those free users for an average of $10/year, or $1/month. A $1 ARPU for a service like this would be pitifully low.
This is a tangent to the rest of the article, but this "just" is doing more heavy lifting than Atlas holding up the skies. Taking a user from $0 to $1 is immeasurably harder than taking a user from $1 to $2, and the vast majority of those active users would drop as soon as you put a real price tag on it, no matter the actual number.
jsnell
Ok, I clearly should have made the wording more explict since this is the second comment I got in the same vein. I'm not saying you'd convert users to $1/month subscriptions. That would indeed be an absurd idea.
I'm saying that good-enough LLMs are so cheap that they could easily be monetized with ads, and it's not even close. If you look at other companies with similar sized consumer-facing services monetized with ads, their ARPU is far higher than $1.
A lot of people have this mental model of LLMs being so expensive that they can’t possibly be ad-supported, leaving subscriptions as the only consumer option. That might have been true two years ago, but I don't think it's true now.
andrew_lettuce
There are some big problems with this, mostly that OpenAI doesn't want to just break even or be profitable; their entire setup is based on being wildly so. Building a Google-sized business on ads is incredibly difficult. They need to be so much better than the competition that we have no choice but to use them, and that's not the case any more. More minor but still a major issue is the underlying IP rights: as users mature they will increasingly look for citations from LLMs, and if OpenAI is monetizing in this vein, everyone is going to come for a piece.
NewsaHackO
> mostly that OpenAI doesn't want to just break even or be profitable; their entire setup is based on being wildly so.
I’m sure you are going to provide some sort of evidence for this otherwise ridiculous claim, correct?
netdevphoenix
>This is a tangent to the rest of the article, but this "just" is doing more heavy lifting than Atlas holding up the skies. Taking a user from $0 to $1 is immeasurably harder than taking a user from $1 to $2, and the vast majority of those active users would drop as soon as you put a real price tag on it, no matter the actual number.
Hard indeed, but they don't need everyone to pay, just enough people to effectively subsidise the free users.
brookst
Agreed. Not to mention that having 500m paid users would dramatically change usage and drive up costs.
Better math would be converting 1% of those users, but then each would need to pay $1,000/year.
chaz6
I thought that services like these were run at a loss because the data that users provide is often worth more than the price of a subscription.
AndrewDucker
Only if you can find a way of monetising that data or selling it on.
So, basically, ads.
jeffbee
To make a billion dollars, I would simply sell a Coke to everyone in China. I have been giving away Coke in China and it is very popular, so I am sure this will work.
barrkel
You joke, but for food and beverages, a stand in the supermarket giving the stuff away for free is a really common (and thus successful) tactic.
otterley
It’s successful for some, but not for everyone. People play roulette all the time but that doesn’t mean everyone other than the house is making a profit. (BTW supermarkets charge for promotional space.)
paxys
It's easy. All OpenAI has to do to break even is *checks notes* replicate Google's multi-trillion-dollar advertising engine and network that has been in operation for 2+ decades.
andrew_lettuce
Of their 500M users, a very small number are already paying, so it's not zero-to-one for all of them; another option is to monetize the existing payers more and take $10 a month to $100. It's unclear whether this is easier or harder than what you presented, but both are hard.
eptcyka
500M MAU also implies that some are already paying. They need to extract $1 more on average, not just get all of them to pay $1 per month. This, I imagine, is harder than if there were 500M users that pay nothing today.
theOGognf
Some anecdotal data, but we recently estimated the cost of running an LLM at $WORK by looking at power usage over a bursty period of requests from our internal users, and it was on the order of $10s per million tokens. And we aren't a big place, nor were our servers at max load, so I can see the cost being much lower at scale.
exceptione
This is only the power usage?
theOGognf
Right, this is only power usage. Factoring in labor and all that would make it more expensive for sure. However, it's not like it's a complex system to maintain. We use a popular inference server and just run it with some modest rate limits. It's been hands-off for close to a year at this point.
exceptione
Ok! What hardware do you run? I had thought that would be the most expensive part.
dist-epoch
Hardware spend also needs to be amortized (over 1 year? 2 years?), unless you rent from the cloud.
jenny91
5 year amortization is pretty realistic I'd say. A100s (came out 2020Q1) are still in heavy use. (I think V100s from 2017Q3 are starting to be phased out a fair bit.)
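Illustratively (the node price is a made-up placeholder, not a quote):
```python
node_price_usd = 250_000   # assumed price of one GPU node
years = 5                  # the amortization schedule suggested above
hours = years * 365 * 24   # 43,800 hours

usd_per_hour = node_price_usd / hours
print(f"${usd_per_hour:.2f}/hour of depreciation")   # ~$5.71/hour

# At an assumed 10M tokens/hour of useful output, that is ~$0.57 per 1M
# tokens of depreciation alone, dwarfing the energy-only estimates upthread.
tokens_per_hour = 10e6
print(f"${usd_per_hour / (tokens_per_hour / 1e6):.2f} per 1M tokens")
```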
theOGognf
That is true too
qoez
So far. Give it a few years when the core players have spent their way to market dominance and regulation kicks in and you'll see the price hikes investors have been promised behind closed doors.
hackyhacky
Or maybe they'll just use ads.
Whatever question you ask, the response will recommend a cool, refreshing Coca Cola soft drink.
Your AI coding project will automatically display ads collecting revenue for Anthropic, not for you.
Every tenth email sent by your AI agent will encourage the recipient to consider switching to Geico.
The opportunities are endless.
JackSlateur
Yes
LLMs and the like are the ultimate propaganda machine: a machine able to masquerade as anything, generating endless lies in a coherent manner.
sameermanek
There is a problem with these LLMs, though: the companies will have to keep spending massive amounts of money on research unless they solve the major issues with these models. Models are inherently depreciating assets, and they depreciate almost fully within months, as soon as either they or a competitor come out with a new model.
For example, Claude was undoubtedly the best model for software devs until Gemini 2.5 was released, and now I see people divided, with the majority leaning towards Gemini.
And there is very little room for mistakes, as we have seen with how Llama became completely irrelevant in a matter of months.
So while inference in itself can be profitable (again, that's a big asterisk), these companies will have to keep fighting for what looks like decades, unless one of them actually solves hallucinations and reconstructs computer interfacing at a global scale!
tiagod
10 years ago, we had nearly free ride-sharing and delivery. When a new company entered my market, I could usually get stuff cheaper through it than by walking to the shop they were picking it up from.
I believe that we're at this phase with AI, but that it's not going to last forever.
prmoustache
LLMs aren't cheap if you consider the impact on the climate and the cost that comes from it.
PickledChris
I will preface this by saying that I care a lot about climate change and carbon usage. AI usage is not a big issue; it is in fact a distraction from where we should be focusing our efforts.
https://www.sustainabilitybynumbers.com/p/carbon-footprint-c...
kgwgk
Watching TV isn’t cheap if you consider the impact on the climate and the cost that comes from it.
worldsayshi
I would like to understand if this still has truth to it.
fastball
I didn't realize Large Language Models have a direct impact on the climate.
tecleandor
Well, running them does. And, from what I get from the article, that's what they're trying to do: either running them or having someone do it for them as a service.
How big is that impact? Well, that's a complicated issue.
fastball
Running LLMs does not have any intrinsic impact on the climate.
If you want to talk about the impact of different power generation methods on climate change, fair enough, but I don't think this thread is the place for it. Unless of course the idea is to talk about climate change in every single thread centered on "things that consumes energy", which is approximately all of them.
johnisgood
How about indirect? At any rate, something is going on, because our summers are hotter and hotter, and there is no snow during our winters. We are all noticing it, but it gets shrugged off as "misremembering". I am not attributing it to running LLMs alone, however; climate change seems real enough to me, I experience it. It is barely July and I am dying! We used to have more tolerable weather around this time of year, for a long time.
fastball
Yes, but what does climate change have to do specifically with LLMs? How are they different from any other use of energy? As far as I can tell they are better than most uses, given that (as software) they run entirely with electricity, which of course can be generated with near-zero CO2 emissions.
Given that, this interjection about climate change seems like a complete non-sequitur to the topic at hand.
mmcnl
I think this is a good analysis, but it falls a little short. Sure, the price of inference is not high, but what about the cost? To be fair, the author already tries to answer this, but you could look at the question more critically. Something like: taking into account the insane amount of capital being spent by and injected into AI companies, what is the strategy to break even in a reasonable amount of time? What would be the implications for prices over time? That's an interesting thought experiment that, at least in my head, raises the question of whether the price we're paying for inference today is actually fair.
bfrog
Cheap by what measure? Surely not by the carbon footprint of the large, capital-intensive datacenters going up in droves to support them? Surely not judging by the revenue currently being generated by a certain silicon design company?
I think this article is measuring all the wrong things and therefore comes to the wrong conclusion.
You can't compare an API that is profitable (search) to an API that is likely a loss-leader to grab market share (hosted LLM cloud models).
Sure, there might not be any analysis that proves they are subsidized, but you also don't have any evidence that they are profitable. All the data points we have today show that companies are spending an insane amount of capex on gaining AI dominance without the revenue to achieve profitability yet.
You're also comparing two products at very different points in the maturity lifecycle. There's no way to justify losing money on a decades-old product that's likely declining in overall usage -- ask any MBA (as much as engineers don't like business perspectives).
(Also you can reasonably serve search queries off of CPUs with high rates of caching between queries. LLM inference essentially requires GPUs and is much harder to cache between users since any one token could make a huge difference in the output)