How much slower is random access, really?
17 comments · June 23, 2025
andersa
Note this is not true random access in the manner it occurs in most programs. By having a contiguous array of indices to look at, that array can be prefetched as it goes, and speculative execution will take care of loading many upcoming indices of the target array in parallel.
A more interesting example might be if each slot in the target array has the next index to go to in addition to the value, then you will introduce a dependency chain preventing this from happening.
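Rough sketch of what I mean (illustrative C++, not the code from the article): each slot carries the index of the next slot, so the address of every load depends on the previous load's result.

    #include <cstdint>
    #include <vector>

    struct Slot {
        uint32_t next;   // index of the next slot to visit
        uint32_t value;  // payload
    };

    // The load of slots[i] must complete before the next address is even
    // known, so cache misses serialize instead of overlapping.
    uint64_t chase(const std::vector<Slot>& slots, uint32_t start, size_t steps) {
        uint64_t sum = 0;
        uint32_t i = start;
        for (size_t n = 0; n < steps; ++n) {
            sum += slots[i].value;
            i = slots[i].next;
        }
        return sum;
    }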
wtallis
> A more interesting example might be if each slot in the target array has the next index to go to in addition to the value, then you will introduce a dependency chain preventing this from happening.
However, on some processors there's a data-dependent prefetcher that will notice the pointer-like value and start prefetching that address before the CPU requests it.
hansvm
Fun fact: that's part of why parsing protobuf is so slow.
jiggawatts
This is why array random access and linked-list random access have wildly different performance characteristics.
Another thing I noticed is that the spike on the left-hand side of his graphs is the overhead of file access.
Without that overhead, small-array random access should have a much better per-element cost.
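To make the difference concrete, here's a minimal sketch (illustrative names, not from the post):

    #include <cstdint>
    #include <vector>

    struct Node {
        Node* next;
        uint64_t value;
    };

    // Array random access: all indices are known up front, so the CPU can
    // keep many independent loads into values[] in flight at once.
    uint64_t sum_array(const std::vector<uint64_t>& values,
                       const std::vector<uint32_t>& indices) {
        uint64_t sum = 0;
        for (uint32_t i : indices) sum += values[i];
        return sum;
    }

    // Linked-list traversal: each node's address comes out of the previous
    // load, so every cache miss waits on the one before it.
    uint64_t sum_list(const Node* head) {
        uint64_t sum = 0;
        for (const Node* n = head; n != nullptr; n = n->next) sum += n->value;
        return sum;
    }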
porcoda
The RandomAccess (or GUPS) benchmark (see: https://ieeexplore.ieee.org/document/4100365) was designed to measure machines on exactly this kind of workload. In high-performance computing it was important for graph calculations, and it was one of the things the Cray (formerly Tera) MTA machine was particularly good at. I suppose this benchmark isn't very widely known outside HPC circles.
jandrewrogers
I worked on the MTA architectures for years among several other HPC systems but I don’t remember this particular benchmark. I suspect it was replaced by the Graph500 benchmark. Graph500 measures something similar and was introduced only a few years after GUPS.
porcoda
The HPCS benchmarks predated Graph500. They were talked about at SC for a few years in the early 2000s but mostly faded into the background. It’s hard to dig up the numbers for the MTA on RandomAccess, but the Eldorado paper from ‘05 by Feo and friends (https://dl.acm.org/doi/10.1145/1062261.1062268) mentions it and you can see the MTA beating the other popular architectures of the time in one of the tables.
Adhyyan1252
Love this analysis! Was expecting random to be much slower. 4x is not bad at all
FpUser
I did another type of experiment, evaluating the benefit of branch prediction on an AMD 9950X with a contiguous array of 1,000,000 elements: compute a sum, adding each element only if it is bigger than 125 (50% of the 0-255 range). The difference between random and sorted input was about 10x. I guess branch prediction plays a huge role as well.
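The experiment looks roughly like this (a reconstruction, not my exact code; note the optimizer may turn the branch into branchless code at high optimization levels, so compile accordingly):

    #include <algorithm>
    #include <cstdint>
    #include <random>
    #include <vector>

    // Sum every element greater than 125. On random bytes the branch is
    // taken ~50% of the time, the worst case for the branch predictor.
    uint64_t sum_if_big(const std::vector<uint8_t>& data) {
        uint64_t sum = 0;
        for (uint8_t v : data)
            if (v > 125) sum += v;
        return sum;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<int> dist(0, 255);
        std::vector<uint8_t> data(1'000'000);
        for (auto& v : data) v = static_cast<uint8_t>(dist(rng));

        uint64_t a = sum_if_big(data);    // unpredictable branch
        std::sort(data.begin(), data.end());
        uint64_t b = sum_if_big(data);    // predictable branch
        return a == b ? 0 : 1;            // same answer; time each call to see the gap
    }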
Andys
Thanks for sharing that.
Presumably if you split the elements into 16 shares (one for each CPU core), summed them with 16 threads, and then summed the partial results at the end, random would be faster than sorted?
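Something like this, presumably (untested sketch; the shard count and chunking are guesses):

    #include <cstdint>
    #include <thread>
    #include <vector>

    uint64_t threaded_sum_if_big(const std::vector<uint8_t>& data,
                                 unsigned shards = 16) {
        std::vector<uint64_t> partial(shards, 0);
        std::vector<std::thread> workers;
        const size_t chunk = data.size() / shards;
        for (unsigned t = 0; t < shards; ++t) {
            const size_t begin = t * chunk;
            const size_t end = (t + 1 == shards) ? data.size() : begin + chunk;
            workers.emplace_back([&data, &partial, t, begin, end] {
                uint64_t sum = 0;
                for (size_t i = begin; i < end; ++i)
                    if (data[i] > 125) sum += data[i];
                partial[t] = sum;   // one write per thread, at the very end
            });
        }
        for (auto& w : workers) w.join();
        uint64_t total = 0;
        for (uint64_t p : partial) total += p;
        return total;
    }

(Each thread handles its own contiguous shard and publishes a single partial sum at the end, avoiding contention.)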
forrestthewoods
Here’s an older blog post of mine on roughly the same topic:
https://www.forrestthewoods.com/blog/memory-bandwidth-napkin...
I’m not sure I agree with the data presentation format. “time per element” doesn’t seem like the right metric.
klank
What are your qualms with time per element? I liked it as a metric because it kept the total deviation of results to less than 32 across the entire result set.
Using something like the overall run length would have such large variations that only the shape of the graph would be particularly useful (to me), not so much the values themselves.
If I were showing a chart like this to "leadership" I'd show the overall run length, as I'd care more about them realizing the "real world" impact than the per-unit impact. But this is written for engineers, so I'd expect a blog like this to focus on per-unit impacts.
However, having said all that, I'd love to hear what your reservations are using it as a metric.
alain94040
From your blog post:
> Random access from the cache is remarkably quick. It's comparable to sequential RAM performance
That's actually expected once you think about it: it's a natural consequence of prefetching.
petermcneeley
What's most misleading is the data for the smaller sizes (1k).