
Linux Performance Analysis (2015)


39 comments

July 29, 2025

janvdberg

My first command is always 'w'. And I always urge young engineers to do the same.

There is no shorter command to show uptime, load averages (1/5/15 minutes), and logged-in users. Essential for quick system health checks!
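As a quick sketch, the header line `w` prints can also be pieced together from the kernel directly:

```shell
# `w` header line: current time, uptime, user count, load averages
command -v w >/dev/null && w | head -1
# The raw load averages live in /proc/loadavg:
# 1-min 5-min 15-min running/total-tasks last-pid
cat /proc/loadavg
```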

mmh0000

It should also be mentioned that the Linux load average is a complex beast[1]. However, a general rule of thumb that works for most environments is:

You always want the load average to be less than the total number of CPU cores. If it is higher, you're likely experiencing a lot of waits and context switching.

[1] https://www.brendangregg.com/blog/2017-08-08/linux-load-aver...
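That rule of thumb is easy to check on the spot; a minimal sketch comparing the 1-minute load average against the core count:

```shell
# Rule of thumb only: compare the 1-minute load average to the core count
cores=$(nproc)
load1=$(cut -d' ' -f1 /proc/loadavg)
if awk -v l="$load1" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
  echo "load $load1 exceeds $cores cores -- check run queue and I/O waits"
else
  echo "load $load1 is within $cores cores"
fi
```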

tanelpoder

On Linux this is not true. On an I/O-heavy system - with lots of synchronous I/Os done concurrently by many threads - your load average may be well over the number of CPUs without there being a CPU shortage. Say you have 16 CPUs and the load average is 20, but on average only 10 of the 20 threads are in runnable (R) state and the other 10 are in uninterruptible sleep (D) state. You don't have a CPU shortage in this case.

Note that synchronous I/O completion checks for previously submitted asynchronous I/Os (both with libaio and io_uring) do not contribute to system load as they sleep in the interruptible sleep (S) mode.

That's why I tend to break down the system load (demand) by the sleep type, system call and wchan/kernel stack location when possible. I've written about the techniques and one extreme scenario ("system load in thousands, little CPU usage") here:

https://tanelpoder.com/posts/high-system-load-low-cpu-utiliz...
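One quick way to see that breakdown (a sketch; the `wchan` column width may need tuning): count tasks by state, since on Linux both R (runnable) and D (uninterruptible sleep) contribute to the load average:

```shell
# Count processes by state; R and D both feed the Linux load average
ps -eo state= | sort | uniq -c | sort -rn
# For D-state tasks, show what kernel function they are waiting in (wchan)
ps -eo state,pid,wchan:32,comm | awk 'NR == 1 || $1 == "D"'
```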


lotharcable

The proper way is to have an idea of what it normally is before you need to troubleshoot issues.

What counts as a 'good' load depends on the application and how it works. On some servers, something close to 0 is a good thing. On others, a load of 10 or lower means something is seriously wrong.

Of course, if you don't know what a 'good' number is, or you are trying to optimize an application and looking for bottlenecks, then it is time to reach for different tools.

chasil

Glances is nice. I think it is a clone of HP-UX Glance.

https://nicolargo.github.io/glances/

I have also hacked basic top to add database login details to server processes.

Propelloni

Me too! So much so that I add it to my .bashrc everywhere.

__turbobrew__

If you like this post, I would recommend “BPF Performance Tools” and “Systems Performance: Enterprise and the Cloud” by Brendan Gregg.

I have pulled out a few miracles using these tools (identifying kernel bottlenecks or profiling programs using ebpf) and it has been well worth the investment to read through the books.

yankcrime

Agreed, highly recommended reading. A slightly more up-to-date post of his which recommends tools in such situations is: https://www.brendangregg.com/blog/2024-03-24/linux-crisis-to...

wcunning

Literally did miracles at my last job with the first book, and that got me my current job, where I again did some impressive work proving which libraries had what performance... Seriously valuable stuff.

__turbobrew__

Yea, it is kind of cheating. I was helping someone debug why their workload was soft-locking. I ran the profiling tools and found that cgroup accounting for the workload was taking nearly all the CPU time on locks. From searches through the Linux git logs I found that cgroup accounting in older kernels used global locks. I saw that newer kernels didn't have this, so we moved to a newer kernel and all the issues went away.

People thought I was a wizard lol.

ch33zer

Almost all of these have been replaced for me with below: https://developers.facebook.com/blog/post/2021/09/21/below-t...

It is excellent and contains most things you could need. The downside is that it isn't yet a standard tool, so you need to get it installed across your fleet.

benreesman

Oh man nostalgia city. I vividly remember meeting atop time travel debugging at 3am in Menlo Park in 2012, wild times.

tomhow

Previously:

Linux Performance Analysis in 60,000 Milliseconds - https://news.ycombinator.com/item?id=10652076 - Nov 2015 (11 comments)

Linux Performance Analysis - https://news.ycombinator.com/item?id=10654681 - Dec 2015 (82 comments)

Linux Performance Analysis in 60k Milliseconds (2015) [pdf] - https://news.ycombinator.com/item?id=44070741 - May 2025 (1 comment)

mortar

danieldk

Yeah, I skipped the date and then saw Linux 3.13 in the examples.

5pl1n73r

After this article was written, `free -m` on many systems started to have an "available" column that shows the sum of reclaimable and free memory. It's nicer than the "-/+" section shown in this old article.

  $ free -m
                 total        used        free      shared  buff/cache   available
  Mem:            3915        2116        1288          41         769        1799
  Swap:            974           0         974
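The "available" column is derived from the kernel's MemAvailable estimate (added in Linux 3.14), which can be read directly:

```shell
# free(1)'s "available" column comes from the kernel's MemAvailable estimate
grep -E '^(MemTotal|MemFree|MemAvailable)' /proc/meminfo
```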

fduran

shameless plug: you can practice this in a free VM https://docs.sadservers.com/docs/scenario-guides/practical-l... (there's a typo there to keep you on your feet)

CodeCompost

> At Netflix we have a massive EC2 Linux cloud

Wait a minute. I thought Netflix famously ran FreeBSD.

craftkiller

My understanding was their CDN ran on FreeBSD, but not their API servers. But I don't work for Netflix.

diab0lic

Your understanding is correct.

achierius

Why did they not choose to use it for both (or neither)? I.e., what reasons for using FreeBSD on CDN servers would not also apply to using them for API servers?

drewg123

The CDN runs FreeBSD. Linux is used for nearly everything else.


louwrentius

The iostat command has always been important for observing HDD/SSD latency numbers.

SSDs especially are treated like magic storage devices with infinite IOPS at Planck-scale latency.

Until you discover that SSDs that can do 10 GB/s don't do nearly so well (not even close) when you access them from a single thread with random I/O at a queue depth of 1.
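That worst case is easy to measure with fio; a sketch of a job file (assuming fio is installed; the file names are arbitrary) for single-threaded, queue-depth-1, 4 KiB random reads:

```shell
# Write a fio job file: one thread, psync engine (inherently QD1), 4 KiB random reads
cat > /tmp/qd1.fio <<'EOF'
[qd1-randread]
filename=/tmp/fio.testfile
size=256M
rw=randread
bs=4k
ioengine=psync
iodepth=1
numjobs=1
direct=1
runtime=10
time_based
EOF
# Then run it and compare against the drive's headline numbers:
# fio /tmp/qd1.fio
```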

wcunning

That's where you start down the eBPF rabbit hole with bcc/biolatency and other block-device histogram tools. Further, the cache hit rate and block-size behavior of the SSD/NVMe drive can really affect things if, say, your autonomous-vehicle logging service uses MCAP with a chunk size much smaller than a drive block... Ask me how I know.

rkachowski

It's 10 years later - what's the 60-second equivalent in 2025?

wcunning

BlackLotus89

PSI (pressure stall information) is missing.

I always use a configured htop (F2 opens the setup screen; htop isn't mentioned either). Always enable the PSI meters in htop (some Red Hat systems I work with still don't offer them...).

If you have ZFS, enable those meters as well. htop also has an I/O tab - use it!
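PSI is exposed under /proc/pressure on kernels 4.20+ built with CONFIG_PSI; a quick sketch of reading it without htop:

```shell
# Each file reports the share of time tasks were stalled on that resource,
# averaged over 10s/60s/300s windows ("some" = at least one task stalled,
# "full" = all non-idle tasks stalled at once).
for r in cpu memory io; do
  echo "== $r =="
  cat /proc/pressure/$r 2>/dev/null || echo "PSI not enabled on this kernel"
done
```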

whalesalad

I quite like `iotop` as an alternative to iostat. https://linux.die.net/man/1/iotop

emmelaich

Nice list. sar/sysstat is underrated imho.

mmh0000

Oh man. There's a blast from the past.

Today, you'd want something like:

Prometheus + Node Exporter [1]

[1] https://github.com/prometheus/node_exporter
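A minimal scrape config for node_exporter (which listens on port 9100 by default; the file path here is illustrative):

```shell
# Point Prometheus at a locally running node_exporter
cat > /tmp/prometheus.yml <<'EOF'
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
EOF
```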