Caches: LRU vs. Random
10 comments · July 31, 2025 · bob1029
nielsole
It's also similar to load balancing: least requests vs. best of two. The benefit being that you never serve the most loaded backend. I guess the feared failure mode of least requests and LRU is similar: picking the obvious choice might be the worst choice in certain scenarios (fast failures and cache churning, respectively).
pvillano
The idea of using randomness to extend cliffs really tickles my brain.
Consider repeatedly looping through n+1 objects when only n fit in cache. In that case LRU misses/evicts on every lookup! Your cache is useless and performance falls off a cliff! 2-random turns that performance cliff into a gentle slope with a long tail(?)
I bet this effect happens when people try to be smart and loop through n items, but have too much additional data to fit in registers.
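For concreteness, here is a rough simulation of that looping pathology (my own sketch, not from the article; the cache size, pass count, and function names are invented): scan n+1 keys through an n-entry cache and compare strict LRU eviction against evicting the less recently used of two randomly sampled entries.

```python
import random

CACHE_SIZE = 100           # n
KEYS = CACHE_SIZE + 1      # loop over n+1 keys, one more than fits
PASSES = 50

def simulate(two_random_eviction):
    cache = {}   # key -> last-use timestamp
    time, misses = 0, 0
    for _ in range(PASSES):
        for key in range(KEYS):
            time += 1
            if key in cache:
                cache[key] = time               # hit: refresh recency
                continue
            misses += 1
            if len(cache) >= CACHE_SIZE:        # full: evict something
                if two_random_eviction:
                    a, b = random.sample(list(cache), 2)
                    victim = a if cache[a] <= cache[b] else b   # older of the two
                else:
                    victim = min(cache, key=cache.get)          # true LRU
                del cache[victim]
            cache[key] = time
    return misses / (PASSES * KEYS)

print("LRU miss rate:     ", simulate(two_random_eviction=False))
print("2-random miss rate:", simulate(two_random_eviction=True))
```

LRU misses on essentially every access once the loop wraps around, while 2-random keeps most of the working set resident, which is the cliff-into-slope behavior described above.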
hinkley
I have never been able to wrap my head around why 2 random works better in load balancing than leastconn. At least in caching it makes sense why it would work better than another heuristic.
yuliyp
There are a few reasons (a small sketch of point 3 follows the list):
1. Random is the one algorithm that can't be fooled. So even if connection count is a misleading load metric in some scenario, not relying on that metric alone dampens the problem.
2. There is a lag between selecting a backend and actually incrementing its load metric for the next request, so using the load metric alone is prone to oscillation.
3. A machine that's broken (immediately errors all requests) can capture almost all requests, while with 2-random its damage is limited to about 2x its weight fraction.
4. For requests which are a mix of CPU and IO work, reducing convoying (i.e. many requests in similar phases) helps keep CPU scheduling delay down. You want some requests to be in CPU-heavy phases while others are in IO-heavy phases, not bunched together.
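A toy illustration of point 3 (my own sketch, not from the thread; the backend count, request count, and instant-failure model are all invented): a fast-failing backend never accumulates connections, so pure least-connections keeps feeding it, while best-of-two-random only exposes it to the pairs it happens to be sampled into, roughly 2/N of the traffic.

```python
import random

N = 10            # backends; backend 0 is the broken, fast-failing one
REQUESTS = 10_000
conns = [0] * N   # in-flight connection counts

def finish_request(i):
    # The broken backend errors instantly, so its connection count drops
    # right back; healthy backends (crudely) hold theirs for the whole run.
    if i == 0:
        conns[i] -= 1

def route_least_conn():
    # Strict least-connections routing.
    return min(range(N), key=lambda i: conns[i])

def route_two_random():
    # Sample two backends, route to the one with fewer connections.
    a, b = random.sample(range(N), 2)
    return a if conns[a] <= conns[b] else b

for router in (route_least_conn, route_two_random):
    conns = [0] * N
    served = [0] * N
    for _ in range(REQUESTS):
        i = router()
        conns[i] += 1
        served[i] += 1
        finish_request(i)
    print(router.__name__, "share of traffic on broken backend:",
          served[0] / REQUESTS)
```

With least-connections the broken backend black-holes essentially all of the traffic; with 2-random its share stays near 2/N (about 0.2 here).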
hinkley
I’m fine with the random part. What I don’t get is why 2 works just as well as four, or square root of n. It seems like 3 should do much, much better and it doesn’t.
It’s one of those things I put in the “unreasonably effective” category.
contravariant
Technically it doesn't, it's just really hard to implement leastconn correctly.
If you had perfect information and could just pick whichever was provably lowest, that would probably work. However, keeping that information up to date also takes effort. And if your information is outdated, it's easy to overload a server that you think doesn't have much to do, or underload one that's long since finished with its tasks. Picking between 2 random servers introduces some randomness without allowing the spread to become huge.
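To make the staleness point concrete, here is a toy sketch (my own, with invented numbers, not from the article or Brooker's post): a burst of requests is routed against a load snapshot that never gets refreshed during the burst.

```python
import random

NUM_SERVERS = 10
BURST = 100  # requests that arrive before the load snapshot is refreshed

def least_conn(stale):
    # Always route to the server that *looked* least loaded at the last refresh.
    return min(range(len(stale)), key=lambda i: stale[i])

def two_random(stale):
    # Sample two servers and route to whichever looked less loaded.
    a, b = random.sample(range(len(stale)), 2)
    return a if stale[a] <= stale[b] else b

stale_view = [random.randint(0, 5) for _ in range(NUM_SERVERS)]  # outdated snapshot
for policy in (least_conn, two_random):
    true_load = [0] * NUM_SERVERS
    for _ in range(BURST):
        true_load[policy(stale_view)] += 1  # real load moves, the snapshot doesn't
    print(policy.__name__, "busiest server took", max(true_load), "of", BURST)
```

least_conn dumps the whole burst on the one server that looked idle, while two_random spreads it across everything that looked lightly loaded, which is exactly the "spread doesn't become huge" behavior described above.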
hinkley
When the cost of different requests varies widely it’s difficult to get it right. When we rolled out Docker I saw a regression in p95 time. I countered this by doubling our instance size and halving the count, which made the number of processes per machine slightly more than, rather than way less than, the number of machines. I reasoned that the local load balancing would be a bit fairer, and that proved out in the results.
beoberha
Try giving Marc Brooker’s blog on this a read: https://brooker.co.za/blog/2012/01/17/two-random.html
2-random is only better than leastconn when you have stale information to base your decision on. If you have perfect, live information, always picking the best backend is optimal.
kgeist
By the time you decide to route to a particular node, conditions on that node might have already changed. So, from what I understand, there can be worst-case scenarios in usage patterns where the same nodes keep getting stressed due to repeatedly stale data in the load balancer. Randomization helps ensure the load is spread out more uniformly.
> But what if we take two random choices (2-random) and just use LRU between those two choices?
> Also, we can see that pseudo 3-random is substantially better than pseudo 2-random, which indicates that k-random is probably an improvement over 2-random for the right k. Some k-random policy might be an improvement over DIP.
This sounds very similar to tournament selection schemes in evolutionary algorithms. You can control the amount of selective pressure by adjusting the tournament size.
I think the biggest advantage here is performance. A 1v1 tournament is extremely cheap to run. You don't need to maintain a total global ordering of anything.
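As a minimal sketch of that parallel (my own illustration, with made-up names, not code from the article): k-random eviction is just a k-way tournament over per-entry recency timestamps, where k = 1 is pure random, k = 2 is the quoted 2-random policy, and larger k approaches true LRU without ever maintaining a global recency ordering.

```python
import random

def pick_victim(last_used, k=2):
    """Pick an eviction victim from a {key: last_used_timestamp} map.

    Runs a k-way tournament: sample k resident keys and evict the least
    recently used of the sample. Only k lookups per eviction, and no
    globally ordered recency structure has to be maintained.
    """
    candidates = random.sample(list(last_used), k)
    return min(candidates, key=last_used.get)

# Example with hypothetical timestamps: only two entries are compared.
last_used = {"a": 10, "b": 3, "c": 7, "d": 1}
print(pick_victim(last_used, k=2))
```

Raising k increases the selective pressure toward evicting the true LRU entry, mirroring how tournament size works in evolutionary algorithms.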