Mistral reports on the environmental impact of LLMs
41 comments
July 22, 2025 · greyadept
kingstnap
You can assume the API price is roughly proportional to electricity usage.
If you buy $10 in tokens, that probably translates to ~$3 to $5 of electricity.
Which would be around 30 to 90 kWh of electricity.
Depending on the source, the emissions could be anywhere from ~500 g/kWh (natural gas) down to ~24 g/kWh (hydroelectric).
It's a really wide spread, but I'd say for $10 in tokens you'd probably be in the neighbourhood of 1 kg to 40 kg of CO2 emissions.
The good thing is that a lot of the spread comes from the electricity source. So if we can get all of these datacenters onto clean energy, it could cut emissions by over an order of magnitude compared to gas turbines (like xAI uses).
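Rough back-of-envelope version of that, treating the prices and emission factors above as assumptions rather than measured values:

    # Sketch of the arithmetic above; all inputs are assumptions from this
    # comment, not figures from Mistral's report.
    electricity_usd = (3.0, 5.0)      # assumed share of a $10 token purchase going to power
    price_per_kwh = (0.055, 0.10)     # assumed electricity price range, USD/kWh
    emission_factor = {"hydro": 0.024, "natural gas": 0.50}  # kg CO2e per kWh

    kwh_low = electricity_usd[0] / price_per_kwh[1]   # least power: cheap tokens, pricey electricity
    kwh_high = electricity_usd[1] / price_per_kwh[0]  # most power: pricey tokens, cheap electricity
    print(f"~{kwh_low:.0f} to {kwh_high:.0f} kWh per $10 of tokens")

    best = kwh_low * emission_factor["hydro"]          # everything on hydro
    worst = kwh_high * emission_factor["natural gas"]  # everything on gas
    print(f"~{best:.1f} to {worst:.0f} kg CO2e per $10 of tokens")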
chessgecko
If you bought an Nvidia H100 at wholesale prices (around $25k) and ran it 24/7 at commercial electricity rates (let's say $0.10 per kWh), it would take you over 40 years to spend the purchase price of the GPU on electricity. Maybe bump it down to 20 years to account for data center cooling.
I don't think the cost of AI is close to converging on the price of power yet. Right now it's mostly the price of hardware and data center space, minus subsidies.
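Quick sanity check of that payback math (the ~700 W board power is an assumption, not stated above):

    # Payback arithmetic from the comment above; figures are assumptions,
    # not vendor numbers.
    gpu_price_usd = 25_000      # assumed H100 wholesale price
    gpu_power_kw = 0.7          # assumed ~700 W board power
    price_per_kwh = 0.10        # assumed commercial electricity rate

    annual_kwh = gpu_power_kw * 24 * 365
    years_to_match = gpu_price_usd / (annual_kwh * price_per_kwh)
    print(f"{years_to_match:.0f} years of 24/7 power to equal the GPU price")
    print(f"{years_to_match / 2:.0f} years if cooling roughly doubles the draw")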
dijit
I don’t think that you can make this assumption.
People are selling AI at a loss right now.
preciz
And each toilet flush you make should also come with a CO2 calculation that counts against your daily carbon allowance.
evrimoztamur
Spending drinking water on toilet flushes is indeed a problem. Perhaps not a CO2 measurement directly, but informing people of how much high-quality water is wasted on flushes alone will hopefully build momentum for more efficient flushing mechanisms and for introducing grey-water systems in new and old buildings alike. Good idea!
aziaziazi
I don’t flush my toilet, I Kildwick [0], but j-pb has a more interesting comparison.
j-pb
People downvote your sarcasm, but if you do the calculations you're kinda right.
1 kg of beef costs:
- The energy equivalent of 60,000 ChatGPT queries.
- The water equivalent of 50,000,000 ChatGPT queries.
Applied to their metrics, Mistral Large 2 used:
- The water equivalent of 18.8 tons of beef.
- The CO2 equivalent of 204 tons of beef.
France produces 3,836 tons of beef per day, and one large LLM every 6 months.
So yeah, maybe use ChatGPT to ask for vegan recipes.
People will try to blame everything else they can get hold of before changing the stuff that really has an impact, if it means touching their lifestyle.
The LLMs are not the problem here.
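A quick ratio using only the numbers in this comment, treating "tons of beef" as the common unit:

    # How does one large model's training compare to France's beef output,
    # using only the figures quoted above?
    llm_co2_in_beef_tons = 204        # CO2 of training, expressed as tons of beef
    france_beef_tons_per_day = 3836

    days = llm_co2_in_beef_tons / france_beef_tons_per_day
    print(f"One training run ~= {days:.2f} days ({days * 24:.1f} hours) of French beef production")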
plants
Those are incredible stats. As a vegan who uses LLMs at work frequently, I would love to have the source as well :)
bluefirebrand
The difference is that food is important and life-giving, while LLMs are a very fancy magic 8-ball.
stonogo
Toilets are already labeled with their usage rate.
jrflowers
This is a good point because being curious about energy usage is the same thing as advocating for an imaginary rule about energy usage
jiehong
Let’s call it GreenOps
jeffbee
That would be ... thousands of times less useful than giving you the same information at the motor fuel pump. Unfortunately this isn't one of those situations where every little bit counts. There are 2 or 3 things you can do to reduce your environmental impact, and not using chatbots isn't one of them.
jiehong
So, using the smallest model for the task would help, as expected.
A very small model could run on-device to automatically switch and choose the right model based on the request. It would also help navigate the confusing model naming across vendors.
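A minimal sketch of that routing idea, with a deliberately naive heuristic standing in for the small on-device router and placeholder model names:

    # Naive placeholder router; a real one would be a small classifier model.
    def route(prompt: str) -> str:
        hard_markers = ("prove", "refactor", "analyze", "step by step")
        if len(prompt) > 500 or any(m in prompt.lower() for m in hard_markers):
            return "large-remote-model"   # placeholder name
        return "small-local-model"        # placeholder name

    print(route("What's the capital of France?"))                        # -> small-local-model
    print(route("Refactor this module and prove the invariant holds."))  # -> large-remote-model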
potatolicious
> "A very small model could run on-device to automatically switch and choose the right model based on the request."
This is harder than it looks. A “router” model often has to be quite large to maintain routing accuracy, especially if you’re trying to understand regular user requests.
Small on-device models gating more powerful models most likely just lead to mis-routes.
evrimoztamur
What is the levelised cost per token, by analogy with how we calculate the levelised cost of energy?
If we take the total training footprint and divide that by the number of tokens the model is expected to produce over its lifetime, how does that compare to the marginal operational footprint?
My napkin math says the levelised per-token water and material footprints come out 6-600% and 4-400% higher than the marginal ones, respectively, for lifetime token counts on the order of 40B down to 400M.
I don't have a good baseline on how many tokens Mistral Large 2 will infer over the course of its lifetime, however. Any ideas?
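The shape of that levelised calculation, with purely hypothetical inputs (not estimates for Mistral Large 2):

    # Amortise the one-off training footprint over lifetime tokens, then add
    # the marginal per-token footprint. Inputs below are hypothetical.
    def levelised_per_token(training_footprint, lifetime_tokens, marginal_per_token):
        """Amortised training footprint per token plus the marginal per-token cost."""
        return training_footprint / lifetime_tokens + marginal_per_token

    # e.g. hypothetical: 1,000 t CO2e to train, 1e12 lifetime tokens, 1 mg CO2e/token marginal
    print(levelised_per_token(1_000_000_000, 1e12, 0.001))  # grams CO2e per token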
kurthr
Within marginal error, dollars=destruction.
Even if the company is "green" they make money, they pay employees/stockholders, those people use the money to buy more things and go on vacations in airplanes. Worse, they invest the money to make more money and consume more goods.
Even your grains and vegetables are shipped in to feed you, even if you walk to the grocery store. You pay rent/mortgage for a house built with concrete and steel. The highest-priced items you pay for are also likely the most energy- and environmentally costly. They create GDP.
It's a little weird with LLMs right now, because everything is subsidized by VC, Ads, BigCo investment so you can't see real costs. They're probably higher than the $30-200/mo you pay, but they're not 10x the price like your rent, car payment, food, vacation, investment/pension are.
dr_kretyn
This is a fantastic report. As someone tasked with getting the most out of AI at our company, I'm frequently getting questions in conversations about its environmental impact. Great to have a reference.
djoldman
They report that the emissions of 400 output tokens, "one page of text," equate to 10 seconds of online video streaming in the USA.
So I guess one saves a lot of emissions if one stops tiktok-ing, hulu-ing, instagram reel-ing, etc.
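Taking the report's ratio at face value, a tiny conversion:

    # 400 output tokens ("one page") ~= 10 s of streaming, per the report.
    tokens_per_streaming_second = 400 / 10
    pages_per_streaming_hour = 3600 * tokens_per_streaming_second / 400
    print(f"One hour of streaming ~= {pages_per_streaming_hour:.0f} pages of LLM output")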
wmf
It's sad to see the French of all people fall for guilt-trip austerity thinking. Just decarbonize the grid and move on. Energy is good.
jeffbee
These conclusions are broadly compatible with "The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink" or, as I prefer, the PDF metadata title that they left in there, "Revamped Happy CO2e Paper".
Despite the incredible focus by the press on this topic, Mistral's lifecycle emissions in 18 months were less than the typical annual emissions of a single A320neo in commercial service.
ACCount36
The press focus is a mix of the usual "new thing BAD", and the much more insidious PR work by fossil fuel megacorps.
Fossil fuel companies are damn good at PR, and they know well that they simply can't make themselves look good. The next best thing? Make someone else look worse.
If an Average Joe hears "a company that hurts the environment" and thinks OpenAI and not British Petroleum, that's a PR win.
jeffbee
I suspect the press is also aligned against machine learning because they are still Big Mad® that the internet destroyed their revenue model (charging individuals $50 to advertise used cars, for example).
austinjp
This is interesting but I'd love it if they'd split training and inference. Training might be highly expensive and conducted once, while inference might be less expensive but conducted many, many times.
I would really like it if an LLM tool would show me the power consumption and environmental impact of each request I’ve submitted.
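A sketch of the kind of per-request readout that would enable; the emission and water factors below are placeholders to be filled in from a report like Mistral's, not official figures:

    # Hypothetical per-request footprint display. Factors are placeholders.
    CO2_G_PER_1K_TOKENS = 3.0        # placeholder emission factor
    WATER_ML_PER_1K_TOKENS = 110.0   # placeholder water factor

    def request_footprint(output_tokens: int) -> str:
        co2 = output_tokens / 1000 * CO2_G_PER_1K_TOKENS
        water = output_tokens / 1000 * WATER_ML_PER_1K_TOKENS
        return f"~{co2:.2f} g CO2e, ~{water:.0f} mL water for {output_tokens} tokens"

    print(request_footprint(400))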