Reworking 30 lines of Linux code could cut power use by up to 30 percent
52 comments
April 21, 2025 · ryao
fnordpiglet
It’s a lot more nuanced than that. Being in a data center doesn’t imply heavy network utilization. The caveat outlined in the article is clearly workload-based, not deployment-based. If you have a home machine doing network routing, it would absolutely benefit from this. In fact, I would say the vast majority of Linux installs are probably home network devices; people just don’t know it. Embedded Linux machines doing network routing, switching, IPS, NAS, or whatever would benefit a lot from this. “Energy savings” can be seen as a greenwashing statement, but on embedded power budgets it’s a primary requirement.
taeric
I was curious how much this would be applicable to home routers.
I confess I'm dubious about major savings for most home users, though. At least at an absolute level, 30% of less than five percent is still not that big of a deal. No reason not to do it, but don't expect to really see the results there.
tuetuopay
Practically none of it would be applicable (if using a commercial router). They all use hardware offloading, and traffic seldom touches the CPU. Only "logical" tasks are raised to the CPU, such as ARP resolution (what’s called "trap to CPU").
If you’re doing custom routing with a NUC or a basic Linux box, however, this would gain massive power savings because that box pretty much only does networking.
fnordpiglet
For embedded it’s not about saving cost, it’s about saving limited onboard power. The lower the power demand of the device, the smaller it can be and the more power you can dedicate to other things.
jeffbee
What home router would be busy polling?
sandworm101
Someone who decided to turn their old PC into a router because a YouTube video said "recycling" an old PC is more green, despite the massive power suck.
jes5199
you don’t have to say “in datacenters” when talking about linux, that is the obvious context in the vast majority of cases
panzi
I'm reading this on an Android phone. The phone OS that has over 70% market share.
matkoniecz
Why do you think so? Especially for HN readers, a decent chunk is using Linux.
(I typed this on Linux PC).
devsda
Agree. On reading the headline, I was hoping it would extend my laptop battery life.
ajross
This is a naming mistake. You're taking "Linux" in the sense of "Linux Distribution", where the various competitors have seen most of their success in the datacenter.
This is specifically a change to the Linux kernel, which is much, much more broadly successful.
ptero
IME majority of linux systems are laptops and desktops.
jolmg
Without looking at stats, I would think android phones.
homarp
at least inside docker or wsl
Tireings
Indeed he has to.
teeray
IoT devices would like a word
corbet
For a more detailed look at this change: https://lwn.net/Articles/1008399/
linsomniac
Does this mean that "adaptive interrupt mitigation" is no longer a thing in the kernel? I haven't really messed with it in ~15+ years, but it used to be that the kernel would adapt, if network rate was low it would use interrupts, but then above a certain point it would switch to turning off interrupts and using polling instead.
The issue I was trying to resolve was sudden, dramatic changes in traffic. Think: a loop being introduced in the switching, and the associated packet storm. In that case, interrupts could start coming in so fast that the system couldn't get enough non-interrupted time to disable the interrupts, UNLESS you have more CPUs than busy networking interfaces. So my solution then was to make sure that the Linux routers had more cores than network interfaces.
queuebert
This is really cool. As a high-performance computing professional, I've often wondered how much energy is wasted due to inefficient code and how much that is a problem as planetary compute scales up.
For me, it feels like a moral imperative to make my code as efficient as possible, especially when a job will take months to run on hundreds of CPUs.
toomuchtodo
My experience with HPC is only tangential, as a sysadmin doing data taking and cluster management for a high energy physics project; I am interested in your thoughts on using generative AI to search codebases for potentially power-inefficient code paths worth improving.
jeffbee
Don't send an LLM to do a profiler's job.
rvz
Absolutely.
It is unfortunate that many software engineers continue to dismiss this as "premature optimization".
But as soon as I see resource or server costs gradually rising every month (even at idle usage) into the tens of thousands, which is a common occurrence as the system scales, it becomes unacceptable to ignore.
samspot
When you achieve expertise you know when to break the rules. Until then it is wise to avoid premature optimization. In many cases understandable code is far more important.
I was working with a peer on a click handler for a web button. The code ran in 5-10ms. You have nearly 200ms budget before a user notices sluggishness. My peer "optimized" the 10ms click handler to the point of absolute illegibility. It was doubtful the new implementation was faster.
didgetmaster
This brings back memories.
https://didgets.substack.com/p/finding-and-fixing-a-billion-...
wrsh07
The flip side of this is Meta having a hack that keeps their GPUs busy so that the power draw is more stable during LLM training (e.g. you don't want a huge power drop when synchronizing batches).
jack_riminton
I thought of this too. IIRC it was a bigger problem because surging spikes in power and cooling were harder and more costly to account for.
I'm not au fait with network data centres though, how similar are they in terms of their demands?
Jeaye
Niiice. What if we reworked 100 lines?
endorphine
Off topic: glad to read about Joe Damato again — such a blast from the past. I haven't read anything from him since I first read James Gollick's posts on tcmalloc and then learned about packagecloud.io, which eventually led me to Joe's amazing posts.
secondcoming
I thought that Intel added the ‘pause’ instruction to make busy spinning more power friendly
mintflow
Oh, as a guy who was using DPDK-like technology that does busy-polling and bypasses the kernel to process network packets, I must say much power may have been wasted...
nly
Standard practice in all trading applications is to busy poll the NIC using a kernel bypass technology.
Typically saves 2-3 microseconds going through the kernel network stack.
infogulch
Wow busy waiting is more resource intensive than I realized.
Linux added a busy polling feature for high performance networking. Most Linux software does not use it, but software used in datacenters (e.g. by CDNs) that does use it makes the system very energy inefficient when things are not busy. This patch gives the kernel the ability to turn that off when not busy, to regain energy efficiency until things become busy again.
The article name is somewhat misleading, since it makes it sound like this would also apply to desktop workloads. The article says it is for datacenters and that is true, but it would have been better had the title ended with the words “in datacenters” to avoid confusion.