Why is my CPU usage always 100%?

149 comments · January 9, 2025

WediBlino

An old manager of mine once spent the day trying to kill a process that was running at 99% on a Windows box.

When I finally got round to seeing what he was doing, I was disappointed to find he was attempting to kill the 'System Idle' process.

Twirrim

Years ago I worked for a company that provided managed hosting services. That included some level of alarm watching for customers.

We used to rotate the "person of contact" (POC) each shift, and they were responsible for reaching out to customers, and doing initial ticket triage.

One customer kept having a CPU usage alarm go off on their Windows instances not long after midnight. The overnight POC reached out to the customer to let them know that they had investigated and noticed that "system idle processes" were taking up 99% of CPU time and the customer should probably investigate, and then closed the ticket.

I saw the ticket within a minute or two of it reopening as the customer responded with a barely diplomatic message to the tune of "WTF". I picked up that ticket, and within 2 minutes had figured out the high CPU alarm was being caused by the backup service we provided, apologised to the customer and had that ticket closed... but not before someone not in the team saw the ticket and started sharing it around.

I would love to say that particular support staffer never lived that incident down, but sadly that incident was par for the course with them, and the team spent an inordinate amount of time doing damage control with customers.

panarky

In the 90s I worked for a retail chain where the CIO proposed to spend millions to upgrade the point-of-sale hardware. The old hardware was only a year old, but the CPU was pegged at 100% on every device and scanning barcodes was very sluggish.

He justified the capex by saying if cashiers could scan products faster, customers would spend less time in line and sales would go up.

A little digging showed that the CIO wrote the point-of-sale software himself in an ancient version of Visual Basic.

I didn't know VB, but it didn't take long to find the loops that did nothing except count to large numbers to soak up CPU cycles, since VB didn't have a sleep() function.
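The shape of it, translated to C for illustration (the original was VB, and the constant here is invented):

    /* A busy-wait "sleep": burns CPU cycles instead of yielding.
       volatile stops the compiler from deleting the empty loop. */
    void fake_sleep(void)
    {
        volatile long i;
        for (i = 0; i < 50000000L; i++)
            ;  /* pegs the CPU at 100% for the entire "delay" */
    }

A real sleep() yields the CPU back to the scheduler; this keeps it pegged the whole time.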

jimt1234

That's hilarious. I had a similar situation, also back in the 90s, when a developer shipped some code that kept pegging the CPU on a production server. He insisted it was the server, and the company should spend $$$ on a new one to fix the problem. We went back-and-forth for a while: his code was crap versus the server hardware was inadequate, and I was losing the battle, because I was just a lowly sysadmin, while he was a great software engineer. Also, it was Java code, and back then, Java was kinda new, and everyone thought it could do no wrong. I wasn't a developer at all back then, but I decided to take a quick look at his code. It was basically this:

1. take input from a web form

2. do an expensive database lookup

3. do an expensive network request, wait for response

4. do another expensive network request, wait for response

5. and, of course, another expensive network request, wait for response

6. fuck it, another expensive network request, wait for response

7. a couple more database lookups for customer data

8. store the data in a table

9. store the same data in another table. and, of course, another one.

10. now, check to see if the form was submitted with valid data. if not, repeat all steps above to back-out the data from where it was written.

11. finally, check to see if the customer is a valid/paying customer. if not, once again, repeat all the steps above to back-out the data.

I looked at the logs, and something like 90% of the requests were invalid data from the web form or invalid/non-paying customers (this service was provided only to paying customers).

I was so upset by this dude convincing management that my server was the problem that I sent an email to pretty much everyone that said, basically, "This code sucks. Here's the problem: check for invalid data/customers first.", and I included a snippet from the code. The dude replied-to-all immediately, claiming I didn't know anything about Java code and I should stay in my lane. Well, throughout the day, other emails started to trickle in, saying, "Yeah, the code is the problem here. Please fix it ASAP." The dude was so upset that he just left; he went completely AWOL and didn't show up to work for a week or so. We were all worried, like he'd jumped off a bridge or something. It turned into an HR incident. When he finally returned, he complained to HR that I'd stabbed him in the back, and that he couldn't work with me because I was so rude. I didn't really care; I was a kid. Oh yeah, his nickname became AWOL Wang. LOL
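A sketch of that reordering, with hypothetical types and helpers (not the actual code, just the shape of the fix):

    struct form;      /* opaque, hypothetical */
    struct customer;  /* opaque, hypothetical */
    int form_is_valid(const struct form *f);
    int customer_is_paying(const struct customer *c);

    int handle_request(const struct form *f, const struct customer *c)
    {
        if (!form_is_valid(f))        /* cheap check; stops ~90% of traffic */
            return -1;
        if (!customer_is_paying(c))   /* also cheap; nothing to roll back yet */
            return -1;
        /* ...only now the expensive lookups, network requests, and writes... */
        return 0;
    }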

liontwist

I am a little confused. He was intentionally sabotaging performance?

m463

That's what managers do.

Silly idle process.

If you've got time for leanin', you've got time for cleanin'

cassepipe

I abandoned Windows 8 for Linux because of a bug (?) where my HDD showed as 99% busy all the time. I had removed every startup program that could be removed and analysed thoroughly for viruses, to no avail. I had no debugging skills at the time and wasn't sure the hardware could stand Windows 10. That's how Linux got me.

ryandrake

Recent Linux distributions are quickly catching up to Windows and macOS. Do a fresh install of your favorite distribution and then use 'ps' to look at what's running. Dozens of processes doing who knows what? They're probably not pegging your CPU at 100%, which is good, but it seems that gone are the days when you could turn on your computer and it was truly idle until you commanded it to actually do something. That's a special use case now, I suppose.

ndriscoll

IME on Linux the only things that use random CPU while idle are web browsers. Otherwise, there's dbus and NetworkManager and bluez and oomd and stuff, but most processes have a fraction of a second used CPU over months. If they're not using CPU, they'll presumably swap out if needed, so they're using ~nothing.

craftkiller

This is one of the reasons I love FreeBSD. You boot up a fresh install and there are only a couple of processes running, and I know what each of them does and why it's there.

m3047

At least under some circumstances Linux shows (schedulable) threads as separate processes. Just be aware of that.

johnmaguire

this is why I use arch btw

ciupicri

I recommend using systemd-cgls to get a better idea of what's going on.

margana

Why is this such a huge issue if it merely shows the disk is busy, while actual performance indicates it isn't? Switching to Linux can be a good choice for a lot of people; the reason here just seems a bit odd. Maybe it was simply the straw that broke the camel's back.

RHSeeger

1. I expect that an HDD that is actually doing things 100% of the time is going to have its lifespan significantly reduced, and

2. If it isn't doing anything and is just lying to you... then when there IS a problem, your tools to diagnose it are limited, because you can't trust what they're telling you.

ddingus

Over the years I have used top and friends to profile machines and identify expensive bottlenecks. Once one comes to count on those tools, the idea of one being wrong -- and actually really wrong! -- is just a bad rub.

Fixing it would be gratifying and reassuring too.

saintfire

I had this happen with an nvme drive. Tried changing just about every setting that affected the slot.

Everything worked fine on my Linux install ootb

BizarroLand

Windows 8/8.1/10 had an issue for a while where, when run on a spinning-rust HDD, it would peg the disk and slow the system to a crawl.

The only solution was to swap over to a SSD.

nullhole

To be fair, it is a really poorly named "process". The computer equivalent of the "everything's ok" alarm.

chowells

Long enough ago (Win95 era), sleeping the CPU when there was no work to be done wasn't part of Windows; it always assigned some task to the CPU. The System Idle process was a way to do this that played nicely with all of the other process management systems. I don't remember when they finally added CPU power management. SP3? Win98? Win98SE? Eh, it was somewhere in there.
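For flavor, what a CPU-halting idle loop boils down to (x86 and ring 0 assumed; an illustration, not Windows' actual code):

    /* Halt until the next interrupt arrives, then go right back to sleep. */
    static void idle_loop(void)
    {
        for (;;)
            __asm__ volatile ("hlt");  /* core sleeps until an interrupt fires */
    }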

drsopp

I remember listening on FM radio to my 100MHz computer running FreeBSD, which sounded like calm rain, and to Windows 95, which sounded like a screaming monster.

eggsome

There were a number of hacks to deal with this. RAIN was very popular back in the day, but AMNHLTM appears to have better compatibility with modern CPUs.

Agentus

Reminds me of when I was a kid and noticed a virus had taken over the registry. From that point forward I attempted to delete every single registry file, not quite understanding. Between that and excessive bad-website viewing, I dunno how I ever managed not to brick my operating system, unlike my grandma, who seemed to brick her desktop in a timely fashion before each of the many monthly visits to her place.

bornfreddy

The things grandmas do to see their grandsons regularly. Smart. :-)

jsight

I worked at a government site with a government machine at one time. I had an issue, so I took it to the IT desk. They were able to get that sorted, but then said I had another issue. "Your CPU is running at 100% all the time, because some sort of unkillable process is consuming all your cpu".

Yep, that was "System Idle" that was doing it. They had the best people.

belter

Did he have pointy hair?

mrmuagi

I wonder if, by making a process with 'idle' in its name, you could go down the reverse track, where users ignore it. Is there anything preventing an executable from being named 'System Idle'?

veltas

It doesn't feel like reading 4 times is necessarily a portable solution if there will be more versions at different speeds and with different I/O architectures; nor is it clear how this will work under more load, or whether the original change was made to fix some other performance problem OP is not aware of. But I'm not sure what else can be done. Unfortunately, many vendors like Marvell seriously under-document crucial features like this. If anything, it would be good to put some of this info in the code comment itself. Not very elegant, but how else are we practically meant to keep track of this? Is the mailing list part of the documentation?

Doesn't look like there's a lot of discussion on the mailing list, but I don't know if I'm reading the thread view correctly.

adrian_b

This is a workaround for a hardware bug of a certain CPU.

Therefore it cannot really be portable, because other timers in other devices will have different memory maps and different commands for reading.

The fault is with the designers of these timers, who have failed to provide a reliable way to read their value.

It is hard to believe that this still happens in this century, because reading correct values despite the timer being incremented or decremented continuously is an essential goal in the design of any readable timer, and how to do it has been well known for more than three quarters of a century.

The only way to make such a workaround somewhat portable is to parametrize it, e.g. with the number of retries for direct reading or with the delay time when reading the auxiliary register. This may be portable between different revisions of the same buggy timer, but the buggy timers in other unrelated CPU designs will need different workarounds anyway.
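A sketch of that parametrization (register name and layout hypothetical, not the PXA168's):

    #include <stdint.h>

    /* Per-SoC quirk data: how many dummy reads to issue after a capture
       command before the captured value can be trusted. */
    struct timer_quirks {
        unsigned capture_reads;
    };

    static uint32_t quirky_timer_read(volatile uint32_t *cvwr,
                                      const struct timer_quirks *q)
    {
        uint32_t v = 0;
        unsigned i;

        *cvwr = 1;                            /* trigger the capture */
        for (i = 0; i < q->capture_reads; i++)
            v = *cvwr;                        /* each read burns one bus cycle */
        return v;                             /* the last read is the value */
    }

A new revision of the same buggy timer then needs only a different quirks entry, not a different function.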

stkdump

> how to do it has been well known for more than three quarters of a century

Don't leave me hanging! How to do it?

adrian_b

Direct reading without the risk of incorrect values is possible only when the timer is implemented with a synchronous counter instead of an asynchronous counter; the counter must be fast enough to ensure a stable, correct value by the time it is read, and the read signal must be synchronized with the timer clock signal.

Synchronous counters are more expensive in die area than asynchronous counters, especially at high clock frequencies. Moreover, it may be difficult to also synchronize the reading signal with the timer clock. Therefore the second solution may be preferable, which uses a separate capture register for reading the timer value.

This was implemented in the timer described in TFA, but it was done in the wrong way.

The capture register must either ensure that the capture is already complete by the time when it is possible to read its value after giving a capture command, or it must have some extra bit that indicates when its value is valid.

In this case, one can read the capture register until the valid bit is on, with complete certainty that the final value is correct.

When adding some arbitrary delay between the capture command and reading the capture register, you can never be certain that the delay value is good.

Even when the chosen delay is 100% effective during testing, it can result in failures on other computers or when the ambient temperature is different.
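For contrast, a sketch of the valid-bit scheme described above, with a hypothetical register layout:

    #include <stdint.h>

    #define CAPTURE_VALID (1u << 31)  /* hypothetical "capture complete" bit */

    /* cmd: capture-command register; cap: capture register with a valid bit. */
    static uint32_t capture_read(volatile uint32_t *cmd, volatile uint32_t *cap)
    {
        uint32_t v;

        *cmd = 1;                        /* request a capture */
        do {
            v = *cap;
        } while (!(v & CAPTURE_VALID));  /* spin until hardware says it's done */
        return v & ~CAPTURE_VALID;       /* strip the status bit */
    }

There is no guessed delay anywhere: the loop ends exactly when the hardware reports a valid value.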

veltas

> This is a workaround for a hardware bug of a certain CPU.

What about different variants, revisions, and speeds of this CPU?

Karliss

The related part of the doc has one more note: "This request requires up to three timer clock cycles. If the selected timer is working at slow clock, the request could take longer." From the way the doc is formatted, it's not fully clear what "this request" refers to. It might explain where the 3-5 attempts come from, and suggest they were not pulled completely out of thin air. But the part about taking up to three clock cycles, yet sometimes more, makes it impossible to have a "proper" solution without guesswork or further clarification from the vendor.

"working at slow clock" part, might explain why some other implementations had different code path for 32.768 KHz clocks. According to docs there are two available clock sources "Fast clock" and "32768 Hz" which could mean that "slow clock" refers to specific hardware functionality is not just a vague phrase.

As for the portability concerns: this is already low-level, hardware-specific register access. If Marvell releases a new SoC, not only is there no assurance it will require the same timing, it might as well have a different set of registers requiring a completely different read and setup procedure, not just different timing.

One thing that slightly confuses me: the old implementation had 100 cycles of "cpu_relax()", which is unrelated to the specific timer clock, but neither is reading the TMR_CVWR register. Since 3-5 cycles of that worked better than 100 cycles of cpu_relax, it clearly takes more time, unless the cpu_relax part got completely optimized out. At least I didn't find any references mentioning that the timer clock affects the read time of TMR_CVWR.

veltas

It sounds like this is an old CPU(?), so no need to worry about the future here.

> I didn't find any references mentioning that the timer clock affects the read time of TMR_CVWR.

Reading the register might be related to the timer's internal clock, as it would have to wait for the timer's bus to respond. This is essentially implied if Marvell recommend re-reading this register, or if their reference implementation did so. My main complaint is it's all guesswork, because Marvell's docs aren't that good.

MBCook

The Chumby hardware I’m thinking of is from 2010 or so. So if that’s it, it would certainly be old. And it would explain a possible relation with the OLPC having a similar chip.

https://en.wikipedia.org/wiki/Chumby

_nalply

I also wondered about this, but there's a crucial difference, no idea if it matters: in that loop it reads the register, so the register is read at least 4 times.

rbanffy

In the late 1990s I worked at a company that had a couple of mainframes in its fleet. Once, I looked at a resource-usage screen (Omegamon, perhaps? Is it that old?) and noticed the CPU was pegged at 100%. I asked the operator if that was normal. His answer was "Of course. We paid for that CPU, might as well use it". Funnily enough, mainframes are designed for that - most, if not all, non-application work is offloaded to other processors in the system so that the CPU can run applications as fast as it can.

defrost

Having a number of running processes take CPU usage to 100% is one thing; having an under-utilised CPU with almost no processes running report 100% usage is another, and that is the subject of the article here.

rbanffy

I didn't intend this as an example of the issue the article mentions (a misreporting of usage because of a hardware design issue). It was just a fun example of how different hardware behaves differently.

One could also say Omegamon (or whatever tool it was) was misreporting, because it didn't account for the processor time of the various supporting systems that handled peripheral operations. After all, they also paid for the disk controllers, disks, tape drives, terminal controllers and so on, so they might want to drive those close to 100% as well.

defrost

Sure, no drama - I came across as a little dry and clipped as I was clarifying on the fly as it were.

I had my time squeezing the last cycle possible from a Cyber 205 waaaay back in the day.

datadrivenangel

Some mainframes have the ability to lock clock speed and always run at exactly 100%, so you can often have hard guarantees about program latency and performance.

sneela

This is a wonderful write-up and a very enjoyable read. Although my knowledge about systems programming on ARM is limited, I know that it isn't easy to read hardware-based time counters; at the very least, it's not as simple as the x86 rdtsc [1]. This is probably why the author writes:

> This code is more complicated than what I expected to see. I was thinking it would just be a simple register read. Instead, it has to write a 1 to the register, and then delay for a while, and then read back the same register. There was also a very noticeable FIXME in the comment for the function, which definitely raised a red flag in my mind.

Regardless, this was a very nice read and I'm glad they got to the bottom of the issue and fixed the problem.

[1]: https://www.felixcloutier.com/x86/rdtsc

pm215

Bear in mind that the blog post is about a 32 bit SoC that's over a decade old, and the timer it is reading is specific to that CPU implementation. In the intervening time both timers and performance counters have been architecturally standardised, so on a modern CPU there is a register roughly equivalent to the one x86 rdtsc uses and which you can just read; and kernels can use the generic timer code for timers and don't need to have board specific functions to do it.
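For example, on AArch64 the architected counter can be read with a single instruction, much like rdtsc (assuming the kernel has enabled userspace access to it):

    #include <stdint.h>

    /* Read the ARMv8 generic timer's virtual count register. */
    static inline uint64_t read_arch_counter(void)
    {
        uint64_t v;
        __asm__ volatile ("mrs %0, cntvct_el0" : "=r"(v));
        return v;
    }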

But yeah, nice writeup of the kinds of problem you can run into in embedded systems programming.

dmitrygr

Curiously, the "read reg twice, until the same result is obtained" approach is ignored in favor of "set capture reg, wait for clock edge, read". This is strange, as the former is usually much faster: reading a 3.25 MHz counter at 200 MHz+ twice is very likely to see the same value twice. For a 32 kHz counter, it is basically guaranteed.

   u32 val;

   /* Re-read until two consecutive reads agree; a counter this slow
      almost never ticks between back-to-back reads. */
   do {
       val = readl(...);
   } while (val != readl(...));

   return val;
compiles to a nice 6-instr little function on arm/thumb too, with no delays

   readclock:
     LDR  R2, =...
   1:
     LDR  R0, [R2]
     LDR  R1, [R2]
     CMP  R0, R1
     BNE  1b
     BX   LR

askvictor

My recurring issue (on a variety of laptops, both Linux and Windows): the fans will start going full-blast, everything slows down, then as soon as I open a task manager CPU usage drops from 100% to something negligible.

crazydoggers

You, my friend, most likely have mining malware on your systems. They'll shut down when they detect the task manager is open, so you don't notice them.

michaelcampbell

That was my thought too; one way to get another data point is to run the task manager as soon as you boot and leave it open. If the fan behavior NEVER comes back while doing that, that's another point in favor of the "mining malware" theory (though of course, not definitive).

Though he did say a VARIETY of laptops, both Windows and Linux. Can someone be _that_ unlucky?

askvictor

If it was malware I'd expect it to happen more often; it's usually when I do have a lot of things going on (browser tabs, VSCode sessions), so I spark up the task manager to work out the problem process, but CPU usage drops before I can investigate.

Plus I'd be surprised if I got the same thing on both linux and windows

RicardoLuis0

or possibly the malware has spread to multiple of their devices?

steventhedev

Aside from the technical beauty of this post, what is the practical impact of this?

Fan speeds should ideally be driven by temperature sensors, and CPU idling is working, albeit with interrupt waits, as pointed out here. The only impact seems to be the surprise that the CPU appears to be working harder than it really is when looking at this number.

It's far better to look at the system load (which was 0.0 - already a strong hint this system is working below capacity). It has a formal definition (average waiting CPU task queue depth over 1, 5, and 15 minutes) and succinctly captures the concept of "this machine is under load".

Many years ago, a coworker deployed a bad auditd config. CPU usage was below 10%, but system load was 20x the number of cores. We moved all our alerts to system load and used that instead.
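For reference, Linux exposes those three numbers in /proc/loadavg; a minimal reader looks like this:

    /* Print the 1-, 5- and 15-minute load averages (Linux only). */
    #include <stdio.h>

    int main(void)
    {
        double l1, l5, l15;
        FILE *f = fopen("/proc/loadavg", "r");

        if (!f || fscanf(f, "%lf %lf %lf", &l1, &l5, &l15) != 3) {
            perror("/proc/loadavg");
            return 1;
        }
        fclose(f);
        printf("load averages: %.2f %.2f %.2f\n", l1, l5, l15);
        return 0;
    }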

thrdbndndn

I don't get the fix.

Why does reading it multiple times fix the issue?

Is it just because reading takes time, so reading multiple times lets the needed time between the write and the read pass? If so, it sounds like a worse solution than just extending the waiting delay, as the author did initially.

If not, then I would like to know the reason.

(Needless to say, a great article!)

adrian_b

The article says that the buggy timer has 2 different methods for reading.

When reading directly, the value may be completely wrong, because the timer is incremented continuously and the updating of its bits is not synchronous with the reading signal. Therefore any bit in the value that is read may be wrong, because it has been read exactly during a transition between valid values.

The workaround in this case is to read multiple times and accept as good a value that is approximately the same for multiple reads. The more significant bits of the timer value change much less frequently than the least significant bits, so on most read attempts only a few bits can be wrong. Only seldom is the read value complete garbage, and comparing it with the other read values will reject it.

The second reading method was to use a separate capture register. After giving a timer capture command, reading an unchanging value from the capture register should have caused no problems. Except that in this buggy timer, it is unpredictable when the capture is actually completed. This requires the insertion of an empirically determined delay time before reading the capture register, hopefully allowing enough time for the capture to be complete.

Dylan16807

> The workaround in this case is to read multiple times and accept as good a value that is approximately the same for multiple reads.

It's only incrementing at 3.25MHz, right? Shouldn't you be able to get exactly the same value for multiple reads? That seems both simpler and faster than using this very slow capture register, but maybe I'm missing something.

adrian_b

In this specific case, yes: if neither of two successive readings is corrupted and you did not straddle a transition, they should be the same.

In general, when reading a timer that increments faster, you may want to mask some of the least significant bits, to ensure that you can have the same values on successive readings.
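A sketch of that masked read-until-stable loop (the mask width is a tuning choice, not something from the article):

    #include <stdint.h>

    /* Read a free-running counter until two consecutive masked reads agree;
       the mask drops low bits that may legitimately tick between reads. */
    static uint32_t stable_read(volatile uint32_t *reg, uint32_t mask)
    {
        uint32_t a, b;

        do {
            a = *reg & mask;
            b = *reg & mask;
        } while (a != b);
        return a;
    }

For example, stable_read(reg, ~0x7u) tolerates the bottom three bits ticking during the read.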

dougg3

Author here. Thanks! I believe the register reads are just extending the delay, although the new approach does have a side effect of reading from the hardware multiple times. I don't think the multiple reads really matter though.

I went with the multiple reads because that's what Marvell's own kernel fork does. My reasoning was that people have been using their fork, not only on the PXA168, but on the newer PXAxxxx series, so it would be best to retain Marvell's approach. I could have just increased the delay loop, but I didn't have any way of knowing if the delay I chose would be correct on newer PXAxxx models as well, like the chip used in the OLPC. Really wish they had more/better documentation!

rep_lodsb

It's possible that actually reading the register takes (significantly) more time than an empty countdown loop. A somewhat extreme example of that would be on x86, where accessing legacy I/O ports for e.g. the timer goes through a much lower-clocked emulated ISA bus.

However, a more likely explanation is the use of "volatile" (which only appears in the working version of the code). Without it, the compiler might even have completely removed the loop?

deng

> However, a more likely explanation is the use of "volatile" (which only appears in the working version of the code). Without it, the compiler might even have completely removed the loop?

No, because the loop calls cpu_relax(), which is a compiler barrier. It cannot be optimized away.

And yes, reading via the memory bus is much, much slower than a barrier. It's absolutely likely that reading 4 times from main memory on such an old embedded system takes several hundred cycles.
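(For the curious: the kernel's barrier() is an empty asm statement with a memory clobber, roughly:

    /* Compiler barrier: emits no instruction, but forbids the compiler from
       caching memory values across it or deleting a loop that contains it. */
    #define barrier() __asm__ volatile ("" ::: "memory")

so a cpu_relax() loop survives optimization, yet its duration still scales with CPU speed rather than bus speed.)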

Karliss

From what I understand, the timer registers should be on the APB(1) bus, which operates at a fixed 26 MHz clock. That should be much closer to the scale of the fast timer clocks, compared to cpu_relax() and the main CPU clock running somewhere in the range of 0.5-1 GHz and potentially doing some dynamic frequency scaling for power-saving purposes.

The silliest part of this mess is that the 26 MHz clock for the APB1 bus is derived from the same source as the 13 MHz, 6.5 MHz, 3.25 MHz and 1 MHz clocks usable by the fast timers.

rep_lodsb

You're right, didn't account for that. Though even when declared volatile, the counter variable would be on the stack, and thus already in the CPU cache (at least 32K according to the datasheet)?

Looking at the assembly code for both versions of this delay loop might clear it up.

mastax

Karliss above found docs which mention:

> This request requires up to three timer clock cycles. If the selected timer is working at slow clock, the request could take longer.

Let's ignore the weirdly ambiguous second sentence and say, for pedagogical purposes, that it takes up to three timer clock cycles, full stop. Timer clock cycles aren't CPU clock cycles, so we can't just do `nop; nop; nop;`. How do we wait three timer clock cycles? Well, a timer register read is handled by the timer peripheral, which runs at the timer clock, so reading (or writing) a timer register will take at least until the end of the next timer clock cycle.

This is a very common pattern when dealing with memory mapped peripheral registers.

---

I'm making some reasonable assumptions about how the clock peripheral works. I haven't actually dug into the Marvell documentation.
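A sketch of the CPU-cycles vs. timer-cycles contrast above (register pointer hypothetical):

    #include <stdint.h>

    /* Waits in CPU cycles: the duration shrinks as the CPU clock rises. */
    static void wait_cpu_side(unsigned n)
    {
        while (n--)
            __asm__ volatile ("nop");
    }

    /* Waits in peripheral-bus cycles: each volatile read must complete on the
       timer's bus, so the wait tracks the timer clock, not the CPU clock. */
    static void wait_bus_side(volatile uint32_t *reg, unsigned n)
    {
        while (n--)
            (void)*reg;
    }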

deng

> Is it just because reading takes time, so reading multiple times lets the needed time between the write and the read pass?

Yes.

> If so, it sounds like a worse solution than just extending the waiting delay, as the author did initially.

Yeah, it's a judgement call. Previously, the code called cpu_relax() for waiting, which also depends on how that is defined (it can be simply a NOP or barrier(), for instance). Reading the timer register may have the advantage that it depends on the actual memory-bus speed, but I wouldn't know for sure. Hardware at that level is just messy, and niche platforms especially have their fair share of bugs where you need to do ugly workarounds like these.

What I'm rather wondering is why they didn't try the other solution mentioned by the manufacturer: reading the timer directly two times and comparing, until you get a stable output.

evanjrowley

This headline reminded me of Mumptris, an implementation of Tetris in the old mainframe-oriented language MUMPS, which by design, uses 100% CPU to reduce latency: https://news.ycombinator.com/item?id=4085593

a1o

This was very well written; I somehow read every single line and didn't skip to the end. Great work too!

RajT88

TIL there are still Chumbys alive in the wild. My Insignia Chumby 8 didn't last.

rbohac

This was a well written article! It was nice to read the process of troubleshooting with the rabbit holes included. Glad you stuck it out!