Show HN: Terminal dashboard that throttles my PC during peak electricity rates
60 comments
· April 1, 2025
naveen_k
Quick update: Definitely wasn't expecting this to end up on the front page. I was more focused on publishing the dashboard than the power optimizer service I'm running. I'll take all the feedback into account and will open source an improved version of it soon. Appreciate all the comments!
vondur
That's quite a beefy workstation you got there!
PeterStuer
Had a quick look through the code but I can't find where he actually throttles the PC. Can anyone point me to it?
dartos
Yeah I don’t see anything that even suggests it throttles the PC.
Looks like it’s just a display.
naveen_k
Sorry, I only open sourced the dashboard part, as mentioned at the bottom of the blog post. Still working on improving the 'Power optimizer' service, so I'll open source that soon as well.
PeterStuer
If it were up to me I would go for switching complete performance profiles through something like tuned-adm rather than trying to change just CPU frequencies. There are too many interlinked things that can have an effect on throughput efficiency.
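For illustration, a minimal sketch of what that could look like from a Python service (assuming tuned is installed and the process runs as root; available profile names vary by distro):

    import subprocess

    def set_profile(profile: str) -> None:
        # Switch the whole tuned profile (CPU governor, disk, network knobs, ...)
        # instead of touching cpufreq limits directly.
        subprocess.run(["tuned-adm", "profile", profile], check=True)

    set_profile("powersave")                # entering a peak-rate window
    set_profile("throughput-performance")   # leaving it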
naveen_k
Thanks, I'll check it out.
russdill
If your computer is still doing bursty jobs during that period, it will use less power but still as much energy. Sure, you can reduce the power but if you aren't also reducing what you ask it to do, it'll just use that max amount of allowed power for a longer period of time.
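Illustrative arithmetic, assuming the capped run simply takes twice as long:

    # Same job under two power caps -- the energy billed is identical:
    energy_fast_wh = 200 * 1.0   # 200 W for 1 hour  -> 200 Wh
    energy_slow_wh = 100 * 2.0   # 100 W for 2 hours -> 200 Wh
    # The bill only changes if the longer run spills into a cheaper rate window,
    # or if less work actually gets done.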
PaulKeeble
All modern CPUs will boost to high clock speeds and voltages to get work done quicker, but at considerably higher power draw per operation. On that side of the equation it's clear that boosting uses more energy. The problem is that the entire CPU package is powered on for longer if you don't boost, and that costs power too, so it's a trade-off between the two. Generally we consider there isn't much difference between the two approaches, but I'm not so sure: having seen the insanity of 13th and 14th gen Intel chips consuming 250W when 120W gets about 95% of the performance, I think it's very likely that moving down to power save and avoiding that level of boosting saves a small amount of energy.
delusional
This is some pretty old analysis, but I remember when smartphones came out and people were thinking about throttling their applications to lower power consumption, the general advice was to just "race to idle".
The consensus was that spending more time in low-power sleep states (where you use ~0W) was much more efficient than spending a longer amount of time at the CPU's sweet spot with all sorts of peripherals online that you didn't need anyway.
I remember when Google made a big deal out of "bundling" idle CPU and network requests, since bursting them out was more efficient than having the radio and CPU trotting along at low bandwidth.
wongarsu
However there are two factors that might make "race to idle" more valid on phones than on most other platforms:
Smartphone chips are designed to much stricter thermal and power limits. There is only so much power you can get out of the tiny battery, and only so much heat you can get rid of without making the phone uncomfortably hot. Even in a burst that puts a limit on the wastefulness. Desktop CPUs are very different: If you can get 10% more performance while doubling power draw, people will just buy bigger coolers and bigger power supplies to go along with the CPU. Notebook CPUs are somewhere in the middle: limited, but with much better cooling and much more powerful batteries than phones.
The other thing is the operating system: "race to idle" makes sense in phones because the OS will actually put the CPU into sleep states if there's nothing to do, and puts active effort into not waking the CPU up unnecessarily and cramming work into the time slots when the CPU is active anyways. Desktop operating systems just don't do that to the same degree. You might race to idle, but the rest of the system will then just waste power with background work once it's idle.
bee_rider
Race to idle probably makes more sense in the context of smartphones where there’s at least some chance that “idle” means the screen might be turned off.
For a desktop, the usage… I mean, it is sort of different really. If I’m writing a TeX file for example, slower compiles mean I’ll get… fewer previews. The screen is still on. More previews is vaguely useful, but probably doesn’t substantially speed up the rate at which I write—the main bottleneck is somewhere between my hat and my hands, I think.
sockbot
Technology Connections just did a timely video on the very topic of power vs energy.
toast0
As with everything, it depends. If you are going to do the same jobs regardless of the amount of time it takes, then yeah, dropping the max power probably just spreads the energy use over time. That doesn't usually help you save money, unless you have a very interesting residential plan.
OTOH, if it's something like realtime game rendering without a frame limiter, throttling would reduce the frame rate, reducing the total amount of work done, and most likely the total energy expended.
KennyBlanken
It is well known in the PC hardware enthusiast community that the last few percent of performance come at enormous increases in power consumption, as voltages are raised to prevent errors as clock speeds go up.
Manufacturers chase benchmark results by youtubers and magazines. Even a few percent difference in framerate means the difference between everyone telling each other to buy a particular motherboard, processor, or graphics card over another.
Amusingly, you often get better performance by undervolting and lowering the processor's power limits. This keeps temperatures low and thus you don't end up with the PC equivalent of the "toyota supra horsepower chart" meme.
1400W for a desktop PC is...crazy. That's a Threadripper processor plus a bleeding-edge, top-of-the-line GPU, assuming that's not just them reading off the max power draw on the nameplate of the PSU.
If their PC is actually using that much power, they could save far more money, CO2, etc by undervolting both the CPU and GPU.
PeterStuer
I myself massively overspec my PSUs for my builds as I want to keep them in the optimal efficiency range rather than pushing their limits. For a typical 800W budget I usually go with a tier-1 1200W offering.
creaturemachine
1400 is definitely the sticker on the side of the PSU. There is some theory behind keeping your PSU at 30-50% load for optimal efficiency, but considering the cost of these 1kW+ units you're probably better off right-sizing it.
naveen_k
It's a 1600W PSU (Coolmaster 1600 V2 Platinum)
naveen_k
I'm actually using a 1600W PSU. 1400W is my target max draw. This is a dual EPYC (64 core CPU each) system btw. The max draw by the CPU+MB+Drives running at peak 3700MHz without the GPU is 495W! Adding 4x 4090 (underclocked) will quickly get you to 1400W+.
gorbypark
Pretty neat! I’m currently working on a project that uses an ESP-C6 that just exposes a “switch” over Matter/Thread that's based off the results from the Spanish electricity prices API. The idea is to have the switch on when it’s one of the cheapest hours of the day, and off otherwise. Then other automations can be based on it. This was pretty trivial to do in Home Assistant, but I want something that’s ultra low power and can be completely independent of anything for less technical users. My end goal is to have a small battery-powered device that wakes up from deep sleep once a day to check the day-ahead prices via WiFi. The C6 might be overkill for this, but once I have a proof of concept working I’ll try and pick something that’s super ultra low power. Something that needs charging once or twice a year would be ideal.
The ideal form factor might be a smart plug itself, but I can’t find any with hackable firmware and also matter/thread/wifi.
naveen_k
That's actually pretty cool. ESPs are awesome little things.
Symbiote
Within the next year or two, I'm going to look at implementing something similar at my work.
We don't pay for electricity directly (it's included in the rackspace rental), but we could reduce our carbon footprint by adjusting the timing of batch processing, perhaps based on the carbon intensity APIs from https://app.electricitymaps.com/
Though, the first step will be to quantify the savings. I have the impression from being in the datacentre while batch jobs have started that they cause a significant increase in power use, but no numbers.
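A rough sketch of what that gating could look like, assuming the Electricity Maps v3 API (check their docs for the exact endpoint, auth header, field names, and zone codes):

    import requests

    resp = requests.get(
        "https://api.electricitymap.org/v3/carbon-intensity/latest",
        params={"zone": "DK-DK1"},              # placeholder zone
        headers={"auth-token": "YOUR_API_TOKEN"},
        timeout=10,
    )
    intensity = resp.json()["carbonIntensity"]  # gCO2eq/kWh
    if intensity < 150:                         # arbitrary "clean enough" threshold
        print("Low carbon intensity, starting batch jobs")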
reaperman
You’re probably already on top of it but if your company doesn't operate the datacenter you’ll also want to estimate the carbon cost of cooling in addition to the electricity that the machines consume.
dehrmann
Can you run the batch processing on other machines at off-peak hours?
PeterStuer
Nice project, but would it not be more rational to have your system running underclocked/undervolted at the optimal perf/watt at all times, with an optional boost to max performance for a time critical task? Running it away from the optimum might save on instant consumption but increase your aggregate consumption.
blitzar
Bring back the "turbo" button on the front of the PC.
naveen_k
Thanks! That's an excellent point. You're right that there's likely a sweet spot that would be more efficient overall than aggressive throttling.
The current implementation uniformly sets max frequency for all 128 cores, but I'm working on per-core frequency control that would allow much more granular optimization. I'll definitely measure aggregate consumption with your suggestion versus my current implementation to see the difference.
schiffern
Zooming out, 80-90% of a computer's lifecycle energy use is during manufacturing, not pulled from the wall during operation.[1] To optimize lifetime energy efficiency, this probably pushes toward extending hardware longevity (within reason, until breakeven) and maximizing compute utilization.
Ideally these goals are balanced (in some 'efficient' way) against matching electricity prices. It's not either/or, you want to do both.
Besides better amortizing the embodied energy, improving compute utilization could also mean increasing the quality of the compute workloads, ie doing tasks with high external benefits.
Love this project! Thanks for sharing.
[1] https://forums.anandtech.com/threads/embodied-energy-in-comp...
KennyBlanken
Please go learn about modern Ryzen power and performance management, namely Precision Boost Overdrive and Curve Optimizer - and how to undervolt an AM4/AM5 processor.
The stuff the chip and motherboard do, completely built-in, is light-years ahead of what you're doing. Your power-saving techniques (capping max frequency) are more than a decade out of date.
You'll get better performance and power savings to boot.
naveen_k
Thanks for the suggestion! I'm actually using dual EPYC server processors in this workstation, not Ryzen. I'm not sure EPYC supports PBO/Curve Optimizer functionality that's available in AM4/AM5 platforms.
That said, I'm definitely interested in learning more about processor-specific optimizations for EPYC. If there are server-focused equivalents to what you've mentioned that would work better than frequency capping, I'd love to explore them!
ac29
For people with Intel processors, check out raplcap: https://github.com/powercap/raplcap
It lets you set specific power consumption limits in W instead of attempting to do the same by restricting maximum core frequencies (which could also be useful in addition to overall power limits).
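For reference, the same idea can also be reached through the kernel's powercap/RAPL sysfs interface that raplcap builds on (a rough sketch, assuming root and an Intel package domain at intel-rapl:0):

    PKG = "/sys/class/powercap/intel-rapl:0"

    def set_package_power_limit(watts: float) -> None:
        # constraint_0 is the long-term package power limit, in microwatts
        with open(f"{PKG}/constraint_0_power_limit_uw", "w") as f:
            f.write(str(int(watts * 1_000_000)))

    set_package_power_limit(95.0)  # cap package 0 at ~95 W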
csdvrx
Another suggestion: when you want to save power, use irq affinity with /proc/irq/$irq/smp_affinity_list to put them all on one core.
This core will get to sleep less than the others.
You can also use the CPU "geometry" (which cores share cache) to raise the max frequency on neighboring cores first, before recruiting the other cores.
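A minimal sketch of the IRQ part (needs root; some IRQs refuse reassignment):

    import glob

    # Pin every movable IRQ to core 0 so the remaining cores can stay in
    # deep sleep states longer.
    for path in glob.glob("/proc/irq/*/smp_affinity_list"):
        try:
            with open(path, "w") as f:
                f.write("0")
        except OSError:
            pass  # per-CPU or otherwise unmovable IRQs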
naveen_k
Thanks for the suggestion. Will check it out.
throwaway3231
It's well established that completing the same task more slowly at a lower clock rate is actually less energy-efficient.
yjftsjthsd-h
Right, "race to idle"
nottorp
How is it with modern overclocked-by-default CPUs? If you cut power use by 50%, do you still get 80% of the performance?
throwaway3231
It's usually more energy-efficient to finish a task quickly with a higher power draw, also known as race-to-idle.
naveen_k
Good point. I'm often running multiple parallel jobs with varying priorities where uniform throttling actually makes sense. Many LLM inference tasks are long-running but not fully utilizing the hardware (often waiting on I/O or running at partial capacity).
The dual Epyc CPUs (128 cores) in my setup have a relatively high idle power draw compared to consumer chips. Even when "idle" they're consuming significant power maintaining all those cores and I/O capabilities. By implementing uniform throttling when utilization is low, the automation actually reduces the baseline power consumption by a decent amount without much performance hit.
foobarian
It seems it should be relatively easy to take a few representative tasks and actually measure the soup-to-nuts energy consumed at the plug. Would be very interesting to see in tandem with the power optimizations!
naveen_k
That's exactly what I did first! I ran a CPU torture test at full clock speed and measured the power draw at the plug, then repeated the same test with the lowest clock speed setting. For the Epyc system, there was about 225W lower power draw at the reduced clock speed. Even at idle, capping the max frequency reduced the power draw by about 20+%.
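For anyone wanting to reproduce that kind of measurement, a sketch of reading the plug with the python-kasa package (assuming a Kasa plug with energy monitoring; the library's API has shifted between versions):

    import asyncio
    from kasa import SmartPlug

    async def read_watts(ip: str) -> float:
        plug = SmartPlug(ip)
        await plug.update()
        return plug.emeter_realtime.power   # instantaneous draw in watts

    print(asyncio.run(read_watts("192.168.1.50")))  # placeholder IP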
gitpusher
People have made valid criticisms about the basic effectiveness of your strategy. But in any case, this is a pretty awesome hacker project - nicely done! Love the appearance of your CLI tool. I am definitely bookmarking for future inspo
naveen_k
Thanks! I initially just wanted to build a dashboard, with the power optimization part being a later addition. Based on the HN response, it seems that's the feature that resonated most with people. I'll be making improvements to the optimization component in the coming days and will publish what I have.
Havoc
From what I’ve seen, price per token makes home generation uncompetitive in most countries. And that’s just on electricity - never mind the cost of gear.
Only really makes sense for learning or super confidential info
gtirloni
Could you share how much you have saved in $?
naveen_k
The power optimizer daemon has only been running for a few days, so it's hard to measure in $ value, but based on my peak pricing I would estimate the savings at a few dollars so far.
whalesalad
Wonder if a big UPS/power bank would be better? Charge it during periods where power is cheaper, and utilize it when power is more expensive. Then again if you do not need full performance all the time - this is a cool solution.
naveen_k
Definitely, I've been contemplating getting a 5-10kWh LFP battery backup with <10ms UPS switchover to run the workstation and home backup. This is an intermediate solution until then.
ajsnigrutin
Why all this instead of a simple cronjob switching from performance to powersave profiles depending on the current time (=electricity price)?
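For anyone who does want the minimal version, a sketch of the script cron would call (needs root; available governor names depend on the cpufreq driver):

    import glob
    import sys

    # Invoked from crontab, e.g. with "powersave" at 11:00 and "performance" at 19:00.
    governor = sys.argv[1]
    for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
        with open(path, "w") as f:
            f.write(governor)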
naveen_k
A cronjob would definitely work in most cases if the goal is just to auto-change frequency profiles during set ToU periods. I just wanted something more flexible that can change profiles based on actual utilization, so demanding tasks aren't slowed down.
pests
I'm on a time-of-use rate plan, most expensive from 11am-7pm. However, they also have "Critical Peak Events" which increase the rate about 10x, to over $1/kWh, and last up to 4 hours. Just saying it would be a bit more complex than just checking the time.
ajsnigrutin
So how do you get that data (status) now (if(is-critical-peak-event){})? Do the smartplugs gather some smartgrid-style data?
joshvm
It depends on your supplier, because they set the pricing and that information gets displayed on your meter. Octopus (UK) has a dynamically-priced service called Agile where you can query the API as a user; in some cases the API doesn't even need a login for regional pricing. You would have to build some logic on top for most smartplugs through something like HomeAssistant. There are storage batteries which can react to pricing, and some which will also work in concert with current solar power or other renewables.
https://octopus.energy/blog/agile-smart-home-diy/
https://www.zerofy.net/2024/03/26/meter-data.html for some more European info on meters, though mostly focused on accessing your own usage data.
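A rough sketch of pulling the Agile half-hourly rates without a login; the product and tariff codes below are placeholders, so look up the current ones for your region:

    import requests

    url = (
        "https://api.octopus.energy/v1/products/AGILE-24-10-01/"
        "electricity-tariffs/E-1R-AGILE-24-10-01-C/standard-unit-rates/"
    )
    rates = requests.get(url, timeout=10).json()["results"]
    for r in rates[:4]:
        print(r["valid_from"], r["value_inc_vat"], "p/kWh")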
pests
I do believe I've seen some plugs that connect to some data source to do it automatically, but I'd rather not give plugs access to the internet haha. Our provider also gives advance notice via text or email so it's totally possible to connect it up yourself.
WattWise is a CLI tool that monitors my workstation’s power draw using a smart plug and automatically throttles the CPU & GPUs during expensive Time-of-Use electricity periods. Built with Python, uses PID controllers for smooth transitions between power states. Works with TP-Link Kasa plugs and Home Assistant.
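Not the actual WattWise code, but a minimal sketch of the PID idea: nudge the CPU frequency cap each tick until the wall power measured at the plug settles at the target for the current rate period (gains and numbers are made up):

    class PID:
        def __init__(self, kp: float, ki: float, kd: float):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def step(self, target: float, measured: float, dt: float) -> float:
            error = target - measured
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=2.0, ki=0.1, kd=0.0)
    freq_mhz = 3000.0
    # One control tick: the output is negative when measured power exceeds the
    # target, which lowers the frequency cap for the next interval.
    freq_mhz += pid.step(target=800.0, measured=950.0, dt=5.0)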