
Understanding the Go Scheduler

14 comments · May 18, 2025

__turbobrew__

Make sure you set GOMAXPROCS when the runtime is cgroup limited.

I once profiled a slow Go program running on a node with 168 cores, but cpu.max was 2 cores for the cgroup. The runtime defaults GOMAXPROCS to the number of visible cores, which was 168 in this case. Over half the runtime was the scheduler bouncing goroutines between 168 processors despite cpu.max allowing only 2 CPUs.
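For anyone hitting the same thing, a minimal sketch of the manual fix (the hard-coded 2 is just a stand-in for whatever your cgroup's cpu.max works out to):

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Without intervention, GOMAXPROCS defaults to runtime.NumCPU(),
        // i.e. every core the kernel exposes (168 in the story above).
        fmt.Println("default GOMAXPROCS:", runtime.GOMAXPROCS(0))

        // Cap it at the cgroup quota instead; calling with 0 only queries.
        runtime.GOMAXPROCS(2)
        fmt.Println("adjusted GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }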

The JRE is smart enough to figure out if it is running in a resource-limited cgroup and make sane decisions based on that, but Go has no such thing.

xyzzy_plugh

Relevant proposal to make GOMAXPROCS cgroup-aware: https://github.com/golang/go/issues/73193

yencabulator

This should be automatic these days (for the basic scenarios).

https://github.com/golang/go/blob/a1a151496503cafa5e4c672e0e...

formerly_proven

This is probably going to save quadrillions of CPU cycles by making an untold number of deployed Go applications a bit more CPU efficient. Since Go is the "lingua franca" of containers, many ops people assume the Go runtime is container-aware - it's not (well not in any released version, yet).

If they'd now also make the GC respect memory cgroup limits (i.e. automatic GOMEMLIMIT), we'd probably be freeing up a couple petabytes of memory across the globe.

Java has been doing these things for a while; even OpenJDK 8 has had those patches since probably before COVID.

mappu

GOMEMLIMIT is not as easy; you may have other processes in the same container/cgroup also using memory.
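For now you have to pick the number yourself; a minimal sketch, assuming you've already budgeted headroom for the other processes in the cgroup (the 1.5 GiB figure is made up for illustration):

    package main

    import "runtime/debug"

    func main() {
        // Soft memory limit for the Go runtime, in bytes. It has to leave
        // room for whatever else shares the cgroup.
        debug.SetMemoryLimit(1536 << 20) // 1.5 GiB

        // Equivalent without a code change: GOMEMLIMIT=1536MiB ./your-binary
    }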

jasonthorsness

uh isn't that change 3 hours old?

yencabulator

Oh heh yes it is. I just remembered the original discussion from 2019 (https://github.com/golang/go/issues/33803) and grepped the source tree for cgroup to see if that got done or not, but didn't check when it got done.

As said in 2019, import https://github.com/uber-go/automaxprocs to get the functionality ASAP.
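If it helps anyone, wiring it in is just a blank import in package main (a sketch following the package's documented usage):

    package main

    import (
        "fmt"
        "runtime"

        _ "go.uber.org/automaxprocs" // sets GOMAXPROCS from the cgroup CPU quota at init time
    )

    func main() {
        fmt.Println("GOMAXPROCS after automaxprocs:", runtime.GOMAXPROCS(0))
    }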

jasonthorsness

It's always a sign of good design when something as complex as the scheduler described here "just works" behind the simple abstraction of the goroutine. What a great article.

"1/61 of the time, check the global run queue." Stuff like this is a little odd; I would have thought this would be a variable dependent on the number of physical cores.

kortex

Fantastic writeup! The visualizations are great, and it's thorough but readable.

90s_dev

I heard that the scheduler is a huge obstacle to many potential optimizations. Is that true?

NAHWheatCracker

In some ways, yes. If you want to optimize at that level you ought to use another language.

I'm not a low-level optimization guy, but I've had occasions where I wanted control over which threads my goroutines run on, or to prioritize important goroutines. It's a trade-off for making things less complex, which is standard for Go.

I suppose there's always hope that the Go developers can change things.

silisili

You can kinda work around this, though. The runtime package has LockOSThread, which pins a goroutine to its current OS thread and prevents other goroutines from using it.

If you model it so that you have one goroutine per OS thread that receives and does the work (roughly the sketch below), it gets you close. But in many cases that means re-architecting the entire code base, as it's not a style I typically reach for.
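A minimal sketch of that pattern; the worker/jobs names are just illustrative:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    // worker pins itself to its OS thread and drains jobs from a channel.
    func worker(id int, jobs <-chan int, wg *sync.WaitGroup) {
        defer wg.Done()

        // Pin this goroutine to its current OS thread; the scheduler won't
        // run other goroutines on that thread until it unlocks or exits.
        runtime.LockOSThread()
        defer runtime.UnlockOSThread()

        for j := range jobs {
            fmt.Printf("worker %d handled job %d\n", id, j)
        }
    }

    func main() {
        jobs := make(chan int)
        var wg sync.WaitGroup

        // One pinned worker per logical CPU, each fed over a channel.
        for i := 0; i < runtime.NumCPU(); i++ {
            wg.Add(1)
            go worker(i, jobs, &wg)
        }

        for j := 0; j < 10; j++ {
            jobs <- j
        }
        close(jobs)
        wg.Wait()
    }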