Optimizing Heap Allocations in Go: A Case Study
April 18, 2025 · athorax
7 comments
nu11ptr
In general, it is a win, since it lets you code faster, and 80-90% of the time the performance doesn't matter. Over time, you learn what generally leads to heap allocations and what doesn't. In the rare hot spot, building with `-gcflags=-m` will show you the allocations and you can optimize.
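For instance, here is a minimal sketch (the types and names are hypothetical, not from the article) of the kind of program you can inspect with `go build -gcflags=-m`, which prints the compiler's escape analysis decisions line by line:

```go
package main

import "fmt"

// Hypothetical types and functions, only for demonstrating the
// escape analysis report printed by: go build -gcflags=-m

type buffer struct {
	data []byte
}

// global holds a reference for the lifetime of the program, so any
// buffer stored here cannot live on a stack frame.
var global *buffer

// keep publishes its argument to a package-level variable, which
// forces the pointed-to buffer onto the heap.
func keep(b *buffer) {
	global = b
}

// local never lets a reference leave the function, so the buffer
// (and its small, fixed-size backing array) can stay on the stack.
func local() int {
	b := buffer{data: make([]byte, 8)}
	return len(b.data)
}

func main() {
	keep(&buffer{data: make([]byte, 8)}) // reported as escaping to the heap
	fmt.Println(local())                 // no heap allocation needed
}
```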
athorax
I would generally agree. The performance is good enough for most applications. For those where it isn't fast enough (even with optimizations like these), Go still allows for rapid prototyping to arrive at that conclusion.
Thaxll
At least you have the tools to understand where things get allocated.
Ygg2
I think the same applies to any GC language. The ride is fun until the GC starts taking too much time, too much memory, or too much CPU.
returningfory2
> It's possible that this is a compiler bug. It's also possible that there's some fringe case where the reference actually can escape via that method call, and the compiler doesn't have enough context to rule it out.
Here's an example, I think: suppose the method spawns a new goroutine that contains a reference to `chunkStore`. This goroutine can outlive the `ReadBytes` function call, and thus Go has to heap allocate the thing being referenced.
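A rough sketch of that scenario (the shape of `chunkStore` and the method are assumptions here; only the idea of a goroutine capturing the reference comes from the comment):

```go
package main

import "time"

// Assumed shape of the store; not the article's actual definition.
type chunkStore struct {
	chunks [][]byte
}

// record hands the receiver to a new goroutine. That goroutine may
// still be running after the caller returns, so the compiler must
// place any chunkStore used this way on the heap.
func (c *chunkStore) record(chunk []byte) {
	go func() {
		c.chunks = append(c.chunks, chunk) // c outlives the caller here
	}()
}

func main() {
	var cs chunkStore // moved to the heap: &cs escapes via record
	cs.record([]byte("abc"))
	time.Sleep(10 * time.Millisecond) // give the goroutine time to run
}
```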
In general, this kind of example makes me suspect that Go's escape analysis algorithm treats any method call as a black box and heap allocates anything being passed to it by reference.
38
Instead of this: `t.Buf = []byte{}`
you can just do: `t.Buf = nil`
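For context, a small sketch (the struct `T` is a hypothetical stand-in) showing that both forms leave a zero-length buffer and behave the same under `append`, while `nil` is simply the field's zero value:

```go
package main

import "fmt"

// T is a hypothetical stand-in for the struct being reset.
type T struct {
	Buf []byte
}

func main() {
	t := T{Buf: make([]byte, 4)}

	t.Buf = []byte{}                      // non-nil slice of length 0
	fmt.Println(t.Buf == nil, len(t.Buf)) // false 0

	t.Buf = nil                           // the zero value: also length 0, nothing constructed
	fmt.Println(t.Buf == nil, len(t.Buf)) // true 0

	t.Buf = append(t.Buf, 'x') // appending to a nil slice works as usual
	fmt.Println(string(t.Buf)) // x
}
```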