Will Supercapacitors Come to AI's Rescue?
May 6, 2025

blt
What is causing demand bursts in AI workloads? I would have expected that AI training is almost the exact opposite. Load a minibatch, take a gradient step, repeat forever. But the article claims that "each step of the computation corresponds to a massive energy spike."
wmf
If the cores go idle (or just much less loaded) in between steps because they're waiting for network communication that would cause the problem.
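wmf's point can be sketched with a toy model of a synchronous training loop's power profile. All numbers below (step durations, kW figures) are made up for illustration:

```python
# Toy model of a synchronous training step: a compute burst at high draw,
# then an idle gap while the GPUs wait on network communication (all-reduce).
# Durations and power figures are illustrative, not measured.

def step_power_profile(n_steps, compute_ms=80, comm_ms=40,
                       busy_kw=900.0, idle_kw=300.0):
    """Return a list of (duration_ms, kw) segments for n_steps."""
    profile = []
    for _ in range(n_steps):
        profile.append((compute_ms, busy_kw))  # GPUs busy: high draw
        profile.append((comm_ms, idle_kw))     # waiting on network: low draw
    return profile

profile = step_power_profile(3)
peak = max(kw for _, kw in profile)
avg = sum(ms * kw for ms, kw in profile) / sum(ms for ms, _ in profile)
print(peak, round(avg))  # 900.0 700 -- a 600 kW swing, many times per second
```

With thousands of synchronized GPUs, every step produces this same swing at the same instant, which is the "massive energy spike" the article describes.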
0cf8612b2e1e
> One solution is to rely on backup power supplies and batteries to charge and discharge, providing extra power quickly. However, much like a phone battery degrades after multiple recharge cycles, lithium-ion batteries degrade quickly when charging and discharging at this high rate.
Is this really a problem for an industrial installation? I would imagine that a properly sized facility would have adequate cooling + capacity to only run the batteries within optimal spec. Solar plants are already charging/discharging their batteries daily.

pixl97
Eh, I think part of the problem here is the speed of load switching. From the article it looks like the loads could generate dozens to hundreds of demand spikes per minute. With most battery operated loads that I've ever messed with we're not switching loads like that. It's typically 'oh a fault, switch to battery' then some time later you check the power circuit to see if it's up and switch back.
This looks a whole lot more like high frequency load smoothing. Really it seems to me like a continuation of a motherboard. Even if you have a battery backup on your PC you still have capacitors on the board for voltage fluctuations.
jeffbee
In addition to what you said, nothing is forcing or even encouraging anyone to use lithium-ion batteries in fixed service, such as a rack full of computers.
lstodd
in a properly designed install you can actually use the compressors and fans for smoothing load spikes. won't be much, but why not.
edit: otherwise I'm not getting what the entire article is about. it's as contrary to what I know about datacenter design as it can get.
it's.. just wrong.
sonium
Or you simply use the pytorch.powerplant_no_blow_up operator [1]
janalsncm
Pretty much. From the article:
> Another solution is dummy calculations, which run while there are no spikes, to smooth out demand.
Animats
Is that kind of load variation from large data centers really a problem to the power grid? There are much worse intermittent loads, such as an electric furnace or a rolling mill.
toast0
I suspect it's more of a problem for the data center's energy bill. My understanding is that large electric customers pay a demand charge in addition to the volumetric charge for the kWh's use at whatever rates given time of use / wholesale rates. The demand charge is based on the maximum kW used (or sometimes just the connection size) and may also have a penalty rate if the power factor is poor. Smoothing over small duration surges probably makes a lot of things nicer for the rate payer, including helping manage fluctuations from the utility.
There's probably something that could be done on the individual systems so that they don't modulate power use quite so fast, too; at some latency cost, of course. If you go all the way to the extremes, you might add a zero crossing detector and use it to time clock speed increases.
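The demand-charge point above can be made concrete with a back-of-envelope bill. The tariff structure (volumetric kWh rate plus a charge on peak kW) is standard for large customers, but every rate below is made up:

```python
# Illustrative large-customer bill: a volumetric charge on energy (kWh)
# plus a demand charge on the month's peak draw (kW).
# Rates are invented for the example; real tariffs vary by utility.

def monthly_bill(energy_kwh, peak_kw, kwh_rate=0.08, demand_rate=15.0):
    """Energy charge plus demand charge, in dollars."""
    return energy_kwh * kwh_rate + peak_kw * demand_rate

energy = 20_000 * 730                            # ~20 MW average for a month
spiky = monthly_bill(energy, peak_kw=30_000)     # unsmoothed load spikes
smooth = monthly_bill(energy, peak_kw=21_000)    # peaks shaved by storage
print(spiky - smooth)  # 135000.0 -- same energy used, smaller bill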
hinkley
Large customers pay not by wattage but by… I’m spacing on the word but essentially how much their power draw fucks up the sine waves for voltage and current in the power grid.
I imagine common power rail systems in hyperscaler equipment help a bit with this, but for sure switching PSUs chop up the input voltage and smooth it out. And that leads to very strange power draws.
murderfs
You're probably thinking of power factor, which is usually not a big deal for datacenters. All of your power supplies are going to have active PFC, and anything behind a double conversion UPS is going to get PFC from the UPS. The biggest contributors are probably going to be the fans in the air conditioning units.
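For reference, power factor is just the ratio of real to apparent power; the sketch below shows why a PFC-corrected datacenter barely burdens the utility with reactive load (the kW figures are illustrative):

```python
# Power factor = real power / apparent power. Active PFC in server PSUs keeps
# PF near 1.0, so the utility supplies little current beyond the real load.

def apparent_kva(real_kw, power_factor):
    """Apparent power (kVA) the utility must deliver for a given real load."""
    return real_kw / power_factor

pfc_load = apparent_kva(1000, 0.99)     # PFC-corrected servers: ~1010 kVA
uncorrected = apparent_kva(1000, 0.80)  # a poor-PF industrial load: 1250 kVA
print(round(pfc_load), round(uncorrected))  # 1010 1250
```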
oakwhiz
There is often a demand flux surcharge as well. Not just demand but delta in demand over some time period.
timewizard
If you have a working thermometer you can predict when furnaces are going to run.
If you want to smooth out data centers then you need hourly pricing to force them to manage their demand into periods where excess grid capacity is not being used to serve residential loads.
changoplatanero
Yes, it's a problem for the grid, and the power companies don't allow large clusters to oscillate their power like this. The workaround during big training runs is to fill the idle time on the GPUs with dummy operations to keep the power load constant. Capacitors would let you skip the dummy work and save that energy.
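The dummy-work trick mentioned here and in the article can be sketched as a simple per-time-slice simulation. This is pure Python with invented numbers; a real cluster would issue throwaway matmuls on the GPUs themselves:

```python
# Sketch of the "dummy calculations" trick: whenever the real workload would
# leave the GPUs idle, substitute throwaway work so the grid-visible draw
# stays nearly flat. All kW figures are illustrative.

def smoothed_draw(real_busy, busy_kw=900.0, idle_kw=300.0, floor_kw=850.0):
    """real_busy: one boolean per time slice (True = real compute).
    Idle slices are filled with dummy work that draws floor_kw."""
    return [busy_kw if busy else max(idle_kw, floor_kw) for busy in real_busy]

slices = [True, False, True, True, False, False, True]
draw = smoothed_draw(slices)
print(max(draw) - min(draw))  # 50.0 -- swing shrinks from 600 kW to 50 kW
```

The cost is exactly what the article notes: the energy spent on the dummy slices does no useful work, which is the waste supercapacitors could eliminate.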
nancyminusone
Inb4 a startup is created to sell power load idle cycle compute time in AI training data centers.
mystified5016
Those loads aren't nearly as intermittent. Your furnace likely runs for tens of minutes at a time. These datacenters are looking at second-to-second loads.
Drawing high intermittent loads at high frequency likely makes the utility upset and leads to over-building supply to the customer to cope with peak load. If you can shave down those peaks, you can use a smaller(cheaper) supply connection. A smoother load will also make the utility happy.
Remember that electricity generation cannot ramp up and down quickly. Big transient loads can cause a lot of problems through the whole network.
paulkrush
Edit: It's interesting that the GPUs are causing issues on the grid before they cause issues with the data center's own power.
mystified5016
Read the article.
janalsncm
I am curious about what the load curves look like in these clusters. If the “networking gap” is long enough you might just be able to have a secondary workload that trains intermittently.
Slightly related, you can actually hear this effect depending on your GPU. It’s called coil whine. When your GPU is doing calculations, it draws more power and whines. Depending on your training setup, you can hear when it’s working. In other words, you want it whining all the time.
paulkrush
"Thousands of GPUs all linked together turning on and off at the same time." So supercapacitors allow for simpler software? Reduced latency? At low cost?
mjevans
They provide 'spot demand moderation' as an extension of UPS and power smoothing. In this case it's flattening out spikes into smooth slopes.
amelius
Maybe a superconducting superinductor would be a better fit.
lstodd
that would be a black hole bomb.
hulitu
> Will Supercapacitors Come to AI's Rescue?
Yes, just like the octopussies. /s
> Another solution is dummy calculations, which run while there are no spikes, to smooth out demand. This makes the grid see a consistent load, but it also wastes energy doing unnecessary work.
Oh god... I can see it now. Someone will try to capitalize on the hype of LLMs and the hype of cryptocurrency and build a combined LLM-training and cryptocurrency-mining facility that runs the mining between training spikes.