Sorting algorithms with CUDA
43 comments · March 11, 2025
raphlinus
gregw2
Onesweep paper (Nvidia 2022): https://research.nvidia.com/publication/2022-06_onesweep-fas...
Onesweep GitHub repo: https://github.com/b0nes164/GPUSorting
raphlinus
The second one is Thomas Smith's independent reimplementation of Onesweep. For the official version, see https://github.com/NVIDIA/cccl . The Onesweep implementation is in cub/cub/agent/agent_radix_sort_onesweep.cuh .
taeric
To be fair, I took this far more as an exploration on writing CUDA than I did an attempt at the best sorting method.
raphlinus
Yup, nothing wrong with clear exposition about simpler algorithms, there's definitely a place for that. I just thought HN readers should have some more context on whether we were looking at a programming exercise or state of the art algorithms.
xyzsparetimexyz
Sure but it's still incredibly misleading.
diggan
> it's still incredibly misleading
What, exactly, is misleading? The title of the blog post is "Sorting Algorithms with CUDA", and I didn't get the feeling that the author was touting their "Bottom-up iterative merge sort" as the fastest possible way of sorting with CUDA. There is even a "Future Work" section at the end, implying even the author knows it can be done better.
spenczar5
As the other posts have said, this isn't the right algorithm. Onesweep and its kin are cool but intimidating. The magic is easier to understand if one looks into the core algorithm: radix sort.
A very readable explanation is here: https://gpuopen.com/download/publications/Introduction_to_GP...
It turns out that radix sort can be implemented in a way that is very straightforward to parallelize. It's a beautiful and elegant approach and worth knowing about!
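For intuition, here is a minimal CPU sketch in Python (not the GPUOpen implementation, just an illustration, assuming non-negative integer keys): each LSD radix pass is a histogram, an exclusive prefix sum, and a stable scatter, and those three steps are exactly what gets parallelized on a GPU.

```python
def radix_sort(keys, bits=32, radix_bits=8):
    """LSD radix sort on non-negative integers < 2**bits."""
    buckets = 1 << radix_bits
    for shift in range(0, bits, radix_bits):
        # Step 1: histogram of the current digit
        # (on a GPU, each thread block histograms its own tile)
        hist = [0] * buckets
        for k in keys:
            hist[(k >> shift) & (buckets - 1)] += 1
        # Step 2: exclusive prefix sum -> starting offset per bucket
        # (on a GPU this becomes a parallel scan)
        offsets, total = [], 0
        for count in hist:
            offsets.append(total)
            total += count
        # Step 3: stable scatter into the output array
        out = [0] * len(keys)
        for k in keys:
            digit = (k >> shift) & (buckets - 1)
            out[offsets[digit]] = k
            offsets[digit] += 1
        keys = out
    return keys
```

This histogram/scan/scatter skeleton is roughly what fancier GPU sorts like Onesweep refine with more sophisticated scan strategies.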
torginus
Yeah, but radix sort requires an ordered integer key, rather than just two objects being comparable, as with most general sorting techniques.
anonymoushn
I always ask in these threads: what sort of data do you have that cannot be divided into several integers, such that sorting by those integers according to some priority implements your comparator correctly, while the comparator is also not so expensive as to be impractical? On one occasion someone replied "sorting Unicode strings in collation order", but it doesn't seem like recomputing the collation inside each comparison is practical, and if you precompute all the collation keys (spending several times the memory occupied by the input), the cache behavior of comparison sorts will be quite bad.
Structs containing strings and doubles, for example, are well suited to radix sort.
shiandow
Best I've been able to come up with is fractions. Though true fractions are rare (and can be converted to a sequence of integers that can be sorted lexicographically, see continued fractions).
Technically floats qualify, but not in an interesting way since you basically just need to take care of the sign bit and a few special cases.
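That sign-bit handling can be sketched as an order-preserving map from doubles to unsigned 64-bit keys (a Python illustration, assuming IEEE-754 doubles and no NaNs in the data):

```python
import struct

def float_key(x: float) -> int:
    """Map a double to a 64-bit unsigned key whose integer order
    matches the float order. Negative floats have the sign bit set
    and compare 'backwards' as raw bits, so we flip all their bits;
    non-negative floats just get the sign bit flipped."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    if bits >> 63:                       # negative: flip everything
        return bits ^ 0xFFFFFFFFFFFFFFFF
    return bits | (1 << 63)             # non-negative: flip sign bit
```

With keys like these, floats can go straight through an integer radix sort.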
Really most things can be sorted with radix sort, though I wouldn't want to be the one having to implement it for unicode strings (sorting text in general is one of those problems you'd preferably let other people solve for you).
yorwba
A sequence of integers such that sorting by those integers implements the comparator correctly is precisely what the Unicode Collation Algorithm produces as its sort key. https://www.unicode.org/reports/tr10/#Scope
suresk
Kind of a fun toy problem to play around with. I noticed you had thread coarsening as an option - there is often some gain to be had there. This is also a fun thing to explore with Nsight - the things that impact your performance aren't always obvious, and it's a pretty good profiler, so it might be worth spending some time with. (I wrote about a fun thing I found with thread coarsening and automatic loop unrolling with Nsight here: https://www.spenceruresk.com/loop-unrolling-gone-bad-e81f66f...)
You may also want to look at other sorting algorithms - common CPU sorting algorithms make it hard to fully utilize GPU hardware. A network sort like bitonic sort does more total work (and you have to pad the input to a power of two), but often runs much faster on parallel hardware.
I had a fairly naive implementation that would sort 10M elements in around 10ms on an H100. I'm sure it could get quite a bit faster with more work, but the inputs need to be fairly big to make up for the kernel launch overhead.
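A compact CPU model of the bitonic network (Python, illustrative only, not the H100 code): every compare-exchange in the innermost loop is independent of the others, which is why the network maps so well onto GPU threads (roughly one thread per pair), at the cost of O(n log² n) total work.

```python
def bitonic_sort(a):
    """In-place bitonic sort. len(a) must be a power of two;
    pad with +inf otherwise. Within each (k, j) phase, all
    compare-exchanges are independent and can run in parallel."""
    n = len(a)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    k = 2
    while k <= n:                 # size of bitonic sequences being merged
        j = k // 2
        while j > 0:              # compare-exchange distance
            for i in range(n):    # each iteration independent -> one GPU thread
                partner = i ^ j
                if partner > i:
                    ascending = (i & k) == 0
                    if (ascending and a[i] > a[partner]) or \
                       (not ascending and a[i] < a[partner]):
                        a[i], a[partner] = a[partner], a[i]
            j //= 2
        k *= 2
    return a
```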
giovannibonetti
For a more convenient way to use GPUs for algorithms like this, the Futhark language [1] can be very valuable. It is a very high-level language that compiles to GPU code and can be called from Python as a library. On their website there is an example of a merge sort implementation [2].
[1] https://futhark-lang.org/ [2] https://futhark-lang.org/examples/merge-sort.html
almostgotcaught
Do you use futhark in prod? Do you use it at all actually?
rowanG077
I use it in prod, and we're currently expanding our use of it. The main selling point for us was AD (automatic differentiation).
giovannibonetti
I don't use it at all currently because the problems it solves do not come up at my job. But if they did, I would gladly use it.
If your job involves a lot of heavy number crunching it might be useful.
almostgotcaught
Lololol then why are you recommending it like you know anything about it? I will never understand this kind of comment on HN - do you not get why hyping something up that you don't actually understand is bad?
winwang
I don't use it because it didn't seem stable for CUDA when I tried it out.
As for number crunching, I'd probably use CuPy (outside of the typical ML stuff).
ltbarcly3
Let me save you the time: Someone wrote a sorting algorithm on a GPU. It was slow. It's not state of the art and they aren't an expert. It's not clear that they know how to use a GPU effectively. This is just someone's personal playing with GPU programming. (Not judging, but there's literally nothing interesting here at all to most people who would be attracted by the title. It's just a personal blog post.)
musicale
Not judging, but I am having trouble seeing how
> there's literally nothing interesting here at all to most people who would be attracted by the title
is not some kind of judgment.
ltbarcly3
I mean, every opinion is a judgement. I'm saying I think it was perfectly worthwhile for them to do the project, it's perfectly valid for them to post their work on their blog, and it might even be interesting to someone somewhere. However, if you have an interest in sorting algorithms, or an interest in GPU programming, there is nothing of value for you here.
jedbrooke
nice! this reminds me of a small project I did in college implementing bitonic sort[0] in CUDA for a GPU-accelerated Burrows-Wheeler transform
https://github.com/jedbrooke/cuda_bwt
I believe I got the implementation for bitonic sort here: https://gist.github.com/mre/1392067
winwang
Love your notes explaining some of the GPU thread-indexing concepts!
Shameless plug for my own little post going a bit into the performance benefits of "vectorized" sorting, even vs. programmable L1 cache: https://winwang.blog/posts/bitonic-sort/
m-schuetz
I'm a fan of this blazingly fast radix sort implementation: https://github.com/b0nes164/GPUSorting
It's great since it works easily with the CUDA driver API, unlike CUB, which is mostly exclusive to the runtime API. It also has Onesweep, but I haven't been able to make that one work.
animal531
I've looked at using it before with Unity, but I couldn't get past the bottleneck of having to get data to it and then back again. There's an overhead with using e.g. compute shaders as well but not nearly as much.
yimby2001
Surprised that GPU-accelerated databases are not a bigger thing. For example, I've never even heard of PG-Strom in the wild, nor does it implement a sorting algorithm like Onesweep, IIRC.
DeathArrow
I think the arrays need to be huge for sorting them on the GPU to be worth it. Copying data from RAM to the GPU and back takes time.
m-schuetz
Some rendering algorithms require sorting, like the currently hugely popular Gaussian splats. Those sort something like 5 to 20 million items per frame, in real time.
david-gpu
They may already be in GPU memory. Sorting is rarely all the processing that you need to do on your data.
This is not a fast way to sort on GPU. The fastest known sorting algorithm on CUDA is Onesweep, which uses a lot of sophisticated techniques to take advantage of GPU-style parallelism and work around its limitations.
Linebender is working (slowly) on adapting these ideas to GPUs more portably. There's a wiki page here with some resources:
https://linebender.org/wiki/gpu/sorting/