Adjacency Matrix and std::mdspan, C++23
10 comments
· September 8, 2025 · michelpp
Dylan16807
This is very much a nitpick but a million million bits is 116GB and you can squeeze 192GB of RAM into a desktop these days, let alone a workstation or a server. (Even if mdspan forces bytes, you can fit a million^2 elements into a server.)
michelpp
Fair enough, showing my age with "impossible".
But it's still true that dense storage grows not linearly but quadratically with the number of nodes.
munk-a
These approaches may be nice for demonstrating the concept in brief, but I'm a bit sad the article didn't take the opportunity to go into a design that stores only the triangular data, since it's pretty trivial to overload operators in C++. If this is meant to be a demonstration of the performance advantage of mdspan over nested vector creation (which is certainly the case for large multidimensional arrays), it'd be good to dial that up.
Night_Thastus
I can see a use for this. It would be nice to not have to write the typical indexing boilerplate when dealing with multidimensional data. One less area to make a mistake. Feels less kludgy.
I wonder if this has any benefit for row- vs. column-major memory access, which I always forget to bother with until suddenly my performance crawls.
bee_rider
Are the adjacency matrices in graph theory really usually dense?
contravariant
Technically the article is saying the graphs are dense. Which might make sense, but using sparse matrices to represent sparse graphs is not unusual.
michelpp
For a powerful sparse adjacency matrix C library, check out SuiteSparse:GraphBLAS; there are bindings for Python, Julia, and Postgres.
Malipeddi
Same thing caught my eye. They are usually sparse.
michelpp
Yep, for any decent-sized graph, sparse is an absolute necessity, since a dense matrix grows with the square of the number of nodes. Sparse matrices and sparse matrix multiplication are complex, and there are multiple kernel approaches depending on density and other factors. SuiteSparse [1] handles these cases, has a kernel JIT compiler for different scenarios and graph operations, and supports CUDA as well. Worth checking out if you're into algebraic graph theory.
Using SuiteSparse and the standard GAP benchmarks, I've loaded graphs with 6 billion edges into 256GB of RAM, and can BFS that graph in under a second. [2]
It's an interesting exploration of ideas, but there are some issues with this article. Worth noting that it does describe its approach as "simple and naive", so take my comments below as corrections and/or pointers into the practical and complex issues on this topic.
- The article says adjacency matrices are "usually dense", but that's not true at all; most graphs are sparse to very sparse. In a social network with billions of people, the average out-degree might be 100. The internet is another example of a very sparse graph: billions of nodes, but most nodes have at most one or maybe two direct connections.
- Storing a dense matrix means it can only work with very small graphs, a graph with one million nodes would require one-million-squared memory elements, not possible.
- Most of the elements in the matrix would be "zero", but you're still storing them, and when you do matrix multiplication (one step in a BFS across the graph) you're still wasting energy moving, caching, and multiplying/adding mostly zeros. It's very inefficient.
- Minor nit: it says the diagonal is empty because nodes are already connected to themselves. That isn't correct in theory; self-edges are definitely a thing. There's a reason the main diagonal is called "the identity".
- Not every graph algebra uses the numeric "zero" to mean zero; for tropical (min/max) algebras the additive identity is positive or negative infinity, and zero is a valid value in those algebras.
I don't mean to diss the idea; it's a good way to dip a toe into the math and computer science behind algebraic graph theory. But in production, or for anything but the smallest (and densest) graphs, a sparse graph algebra library like SuiteSparse would be the most appropriate.
SuiteSparse is used in MATLAB (A .* B calls SuiteSparse), FalkorDB, python-graphblas, OneSparse (a Postgres extension), and many other libraries. Its author, Tim Davis of TAMU, is a leading expert in this field of research.
(I'm a GraphBLAS contributor and author of OneSparse)