Branch prediction: Why CPUs can't wait
August 16, 2025
Izmaki
My favourite explanation of how Branch Prediction works: https://stackoverflow.com/a/11227902/1150676
zenolijo
I do wonder how branch prediction actually works in the CPU; predicting which branch to take also seems like it should be expensive, but I guess something clever is going on.
I've also found G_LIKELY and G_UNLIKELY in glib to be useful when writing some types of performance-critical code. Would be a fun experiment to compare the assembly when using it and not using it.
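For reference, G_LIKELY and G_UNLIKELY just expand to GCC/Clang's __builtin_expect, so they're hints to the compiler's code layout rather than instructions to the hardware predictor. A minimal sketch of the kind of usage I mean (the function itself is made up):

    #include <glib.h>

    /* G_UNLIKELY marks the error check as cold, so the compiler keeps the
       common path as fall-through, straight-line code and can push the
       error handling out of the way. */
    static int
    sum_bytes (const guint8 *buf, gsize len)
    {
        if (G_UNLIKELY (buf == NULL))
            return -1;                      /* rare error path */

        int sum = 0;
        for (gsize i = 0; i < len; i++)
            if (G_LIKELY (buf[i] != 0))     /* expect mostly non-zero bytes */
                sum += buf[i];
        return sum;
    }

Comparing the output of gcc -O2 -S with and without the macros is exactly the kind of diff I had in mind.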
checker659
There are two things to predict: whether there will be a branch, and if so, to where.
hansvm
Semantically it's just a table from instruction location to branch probability. Some nuances exist in:
- Table overflow mitigation: multi-leveled tables, not wasting space on 100% predicted branches, etc
- Table eviction: Rolling counts are actually impossible without extra space consumption; do you waste space, flush periodically, keep exponential moving averages, etc.?
- Table initialization: When do you start caring about a branch (and wasting table space), how conservative are the initial parameters, etc
- Table overflow: What do you do when a branch doesn't fit in the table but should
As a rule of thumb, no extra information/context is used for branch prediction. If a program over the course of a few thousand instructions hits a branch X% of the time, then X will be the branch prediction. If you have context you want to use to influence the prediction, you need to manifest that context as additional lines of assembly the predictor can use in its lookup table.
As another rule of thumb, if the hot path has more than a few thousand branches (on modern architectures, often just a few thousand branches predicted at less than 100%), you'll hit slow paths: multi-level searches, mispredicted branches, etc. You also want the assembly to emit the jump-if-not-equal in the right direction for that architecture, or a perfectly biased branch becomes a 100% misprediction rate instead.
It's reasonably interesting, and given that it's hardware it's definitely clever, but it's not _that_ clever from a software perspective. Is there anything in particular you're curious about?
NobodyNada
> If a program over the course of a few thousand instructions hits a branch X% of the time, then X will be the branch prediction.
This is not completely true - modern branch predictors can recognize patterns such as "this branch is taken every other time", or "every 5th time", etc. They also can, in some cases, recognize correlations between nearby branches.
However, they won't use factors like register or memory contents to predict branches, because that would require waiting for that data to be available to make the prediction -- which of course defeats the point of branch prediction.
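A toy model of the two-level idea that catches those patterns (purely illustrative; real predictors track per-branch local histories or a shared global history, and the sizes here are invented): the counters are keyed on the branch's recent outcome pattern, so a strictly alternating branch becomes almost perfectly predictable once the table warms up.

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy two-level predictor for a single branch: a history register of
       the last HIST_BITS outcomes selects one of 2^HIST_BITS two-bit
       counters.  Sizes and layout are illustrative, not any real CPU's. */
    #define HIST_BITS 8
    #define PHT_SIZE  (1u << HIST_BITS)

    static uint8_t history;            /* shift register of recent outcomes */
    static uint8_t pht[PHT_SIZE];      /* 2-bit counters, one per pattern   */

    static bool predict(void)
    {
        return pht[history] >= 2;      /* 2 or 3 => predict taken */
    }

    static void update(bool taken)
    {
        uint8_t *c = &pht[history];
        if (taken  && *c < 3) (*c)++;
        if (!taken && *c > 0) (*c)--;
        history = (uint8_t)((history << 1) | taken);   /* record the outcome */
    }

For a branch that alternates taken/not-taken, the 0101... and 1010... patterns each end up with a saturated counter, so the prediction is right nearly every time after warm-up.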
moregrist
There’s ample information out there. There are quite a few textbooks, blogs, and YouTube videos covering computer architecture, including branch prediction.
For example:
- Dan Luu has a nice write-up: https://danluu.com/branch-prediction/
- Wikipedia’s page is decent: https://en.m.wikipedia.org/wiki/Branch_predictor
> I've also found G_LIKELY and G_UNLIKELY in glib to be useful when writing some types of performance-critical code.
A lot of the time this is a hint to the compiler on what the expected paths are so it can keep those paths linear. IIRC, this mainly helps instruction cache locality.
_chris_
> A lot of the time this is a hint to the compiler on what the expected paths are so it can keep those paths linear. IIRC, this mainly helps instruction cache locality.
The real value is that the easiest branch to predict is a never-taken branch. So if the compiler can turn a branch into a never-taken branch with the common path being straight line code, then you win big.
And it takes no space or effort to predict never taken branches.
o11c
> And it takes no space or effort to predict never taken branches.
Is that actually true, given that branch history is stored lossily? What if other branches that have the same hash are all always taken?
ActorNightly
Branch prediction is probably the main reason CPUs got fast in the past two decades. As Jim Keller described, modern BPs look very much like neural networks.
delta_p_delta_x
> I do wonder how branch prediction actually works in the CPU, predicting which branch to take also seems like it should be expensive
There are a few hardware algorithms that are vendor-dependent. The earliest branch predictors were two-bit saturating counters that moved between four states of 'strongly taken', 'weakly taken', 'weakly not taken', 'strongly not taken', and the state change depended on the eventual computed result of the branch.
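In code form, that state machine is roughly (a model of the textbook scheme, not any particular CPU):

    #include <stdbool.h>

    /* Classic two-bit saturating counter.  Two bits mean a single odd
       outcome (say, the final iteration of a loop) only drops a "strong"
       state to "weak"; the prediction itself doesn't flip on one-offs. */
    enum { STRONG_NOT_TAKEN, WEAK_NOT_TAKEN, WEAK_TAKEN, STRONG_TAKEN };

    static unsigned state = WEAK_TAKEN;

    static bool predict(void)
    {
        return state >= WEAK_TAKEN;              /* both "taken" states predict taken */
    }

    static void update(bool taken)               /* run once the branch resolves */
    {
        if (taken  && state < STRONG_TAKEN)      state++;
        if (!taken && state > STRONG_NOT_TAKEN)  state--;
    }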
Newer branch predictors are stuff like two-level adaptive branch predictors that are a hardware `std::unordered_map` of branch instruction addresses to the above-mentioned saturating counters; this remembers the result of the last n (where n is the size of the map) branch instructions.
Ryzen CPUs contain perceptron branch predictors that are basically hardware neural networks—not far from LLMs.
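A rough software sketch of the perceptron scheme from the Jiménez & Lin paper those designs are reported to build on (table size, history length, threshold, and indexing here are invented for illustration; AMD's actual predictor isn't documented at this level): each branch gets a vector of small signed weights, the prediction is the sign of a dot product with recent branch outcomes, and training nudges the weights only on mispredictions or low-confidence outputs.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Sketch of a perceptron predictor in the style of Jimenez & Lin,
       "Dynamic Branch Prediction with Perceptrons".  All sizes are
       illustrative only. */
    #define HIST_LEN  16                               /* global history length */
    #define N_ROWS    1024                             /* perceptrons, selected by PC */
    #define THRESHOLD ((int)(1.93 * HIST_LEN + 14))    /* training threshold from the paper */

    static int8_t weights[N_ROWS][HIST_LEN + 1];       /* [0] is the bias weight */
    static int    ghist[HIST_LEN];                     /* +1 = taken, -1 = not taken, 0 = unknown */

    static void bump(int8_t *w, int delta)             /* saturating weight update */
    {
        int v = *w + delta;
        if (v >  63) v =  63;
        if (v < -64) v = -64;
        *w = (int8_t)v;
    }

    static int output(uint64_t pc)
    {
        int8_t *w = weights[pc % N_ROWS];
        int y = w[0];                                  /* bias */
        for (int i = 0; i < HIST_LEN; i++)
            y += w[i + 1] * ghist[i];                  /* dot product with history */
        return y;
    }

    static bool predict(uint64_t pc)
    {
        return output(pc) >= 0;
    }

    static void update(uint64_t pc, bool taken)
    {
        int8_t *w = weights[pc % N_ROWS];
        int y = output(pc);
        int t = taken ? 1 : -1;

        /* Train on a misprediction, or when the output wasn't confident. */
        if ((y >= 0) != taken || abs(y) <= THRESHOLD) {
            bump(&w[0], t);
            for (int i = 0; i < HIST_LEN; i++)
                bump(&w[i + 1], t * ghist[i]);
        }

        /* Shift the new outcome into the global history. */
        for (int i = HIST_LEN - 1; i > 0; i--)
            ghist[i] = ghist[i - 1];
        ghist[0] = t;
    }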
pkaye
Here are some examples of the different branch prediction algorithms.
https://enesharman.medium.com/branch-prediction-algorithms-a...
bee_rider
Modern branch predictors are pretty sophisticated. But it's also worth keeping in mind that you can do pretty well, for a lot of code, by predicting simple things like “backwards jumps will probably be taken.” A backwards jump is probably a loop, so jumping backwards is by far the most likely thing to do (because most loops go through more than one iteration).
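That heuristic even has a name, BTFN (“backward taken, forward not taken”), and the whole thing fits in a couple of lines (a sketch of the idea, not any particular CPU's fallback path):

    #include <stdbool.h>
    #include <stdint.h>

    /* Static BTFN: a backward branch is usually the bottom of a loop, so
       guess "taken"; a forward branch is often a rarely-taken check, so
       guess "not taken".  No tables, no history, just an address compare. */
    static bool predict_static(uint64_t branch_pc, uint64_t target_pc)
    {
        return target_pc < branch_pc;    /* backward => probably a loop => taken */
    }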
And a lot of programmers are willing to conspire with the hardware folks, to make sure their heuristics work out. Poor branches, never had any chances.
rayiner
It’s fairly expensive but well suited to pipelined implementations in hardware circuits: https://medium.com/@himanshu0525125/global-history-branch-pr.... Modern CPU branch predictors can deliver multiple predictions per clock cycle.
whitten
I know branch prediction is essential if you have instruction pipelining in actual CPU hardware.
It is an interesting thought experiment re instruction pipelining in a virtual machine or interpreter design. What would you change in a design to allow it? Would an asynchronous architecture be necessary? How would you merge control flow together efficiently to take advantage of it?
cogman10
With the way architectures have gone, I think you'd end up recreating VLIW. The thing holding back VLIW was that compilers were too dumb and computers too slow to really take advantage of it; you ended up with a lot of NOPs in the output as a result. VLIW is essentially how modern GPUs operate.
The main benefit of VLIW is that it simplifies the processor design by moving the complicated tasks/circuitry into the compiler. Theoretically, the compiler has more information about the intent of the program which allows it to better optimize things.
It would also be somewhat of a security boon. VLIW moves branch prediction (and the rewinding after a misprediction) out of the processor. With exploits like Spectre, pulling that out would make it easier to integrate compiler hints on security-sensitive code: "hey, don't spec ex here".
_chris_
> The thing holding back VLIW was compilers were too dumb
That’s not really the problem.
The real issue is that VLIW requires branches to be strongly biased, statically, so a compiler can exploit them.
But in fact branches are highly dynamic yet trivially predicted by branch predictors, so branch predictors win.
Not to mention that even VLIW cores use branch predictors, because the branch resolution latency is too long to wait for the outcome to be known.
addaon
> I know branch prediction is essential if you have instruction pipelining in actual CPU hardware.
With sufficiently slow memory, relative to the pipeline speed. A microcontroller executing out of TCM doesn’t gain anything from prediction, since instruction fetches can keep up with the pipeline.
The MMIX instruction set specifies the branch prediction explicitly.
If you also had "branch always" and "branch never" instructions, and the compiler could generate code that modifies those instructions during program initialization, then for programs where some branch directions are known at startup, the code could be patched once at initialization, before it is executed.