OpenBSD is so fast, I had to modify the program slightly to measure itself
65 comments
·August 15, 2025
Const-me
Not sure if that’s relevant, but when I do micro-benchmarks like that, measuring time intervals way smaller than 1 second, I use the __rdtsc() compiler intrinsic instead of standard library functions.
On all modern processors, that instruction measures wall-clock time with a counter which increments at the base frequency of the CPU, unaffected by dynamic frequency scaling.
Apart from the great resolution, that time measuring method has the upside of being very cheap, a couple of orders of magnitude faster than an OS kernel call.
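(A minimal sketch of this technique, assuming x86-64 and a GCC/Clang toolchain, where __rdtsc() lives in x86intrin.h; the loop is just a stand-in workload.)

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>  /* __rdtsc() on GCC/Clang */

    int main(void) {
        uint64_t start = __rdtsc();
        volatile int sink = 0;                      /* stand-in workload */
        for (int i = 0; i < 1000000; i++) sink += i;
        uint64_t end = __rdtsc();
        /* Ticks advance at the TSC base frequency, not the current core clock. */
        printf("elapsed ticks: %llu\n", (unsigned long long)(end - start));
        return 0;
    }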
sa46
Isn't gettimeofday implemented with vDSO to avoid kernel context switching (and therefore, most of the overhead)?
My understanding is that using tsc directly is tricky. The rate might not be constant, and the rate differs across cores. [1]
[1]: https://www.pingcap.com/blog/how-we-trace-a-kv-database-with...
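(For reference, a minimal sketch of the vDSO-backed path mentioned above; on Linux these clock_gettime calls typically complete in userspace with no context switch. Plain C, nothing assumed beyond a POSIX system.)

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);  /* vDSO fast path on Linux: no syscall */
        /* ... workload under test ... */
        clock_gettime(CLOCK_MONOTONIC, &b);
        double ns = (b.tv_sec - a.tv_sec) * 1e9 + (b.tv_nsec - a.tv_nsec);
        printf("elapsed: %.0f ns\n", ns);
        return 0;
    }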
toast0
I think most current systems have an invariant TSC. I skimmed your article and was surprised to see an offset (though not totally shocked), but the rate looked the same.
You could cpu pin the thread that's reading the tsc, except you can't pin threads in OpenBSD :p
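(If one did want to read the TSC by hand, pinning the reader thread on Linux looks roughly like this; a sketch only, and as noted above OpenBSD has no equivalent API.)

    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        /* Pin the calling thread to CPU 0 so all TSC reads come from one core. */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(0, &set);
        int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
        if (rc != 0) {
            fprintf(stderr, "pthread_setaffinity_np: %d\n", rc);
            return 1;
        }
        puts("pinned to CPU 0");
        return 0;
    }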
wahern
But just to be clear (for others), you don't need to do that because using RDTSC/RDTSCP is exactly how gettimeofday and clock_gettime work these days, even on OpenBSD. Where using the TSC is practical and reliable, the optimization is already there.
OpenBSD actually only implemented this optimization relatively recently. Though most TSCs will be invariant, they still need to be trained across cores, and there are other minutiae (sleeping states?) that made it a PITA to implement in a reliable way, and OpenBSD doesn't have as much manpower as Linux. Some of those non-obvious issues would be relevant to someone trying to do this manually, unless they could rely on their specific hardware behavior.
Dylan16807
If you have something newer than a Pentium 4, the rate will be constant.
I'm not sure of the details for when cores end up with different numbers.
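(The constant-rate property is advertised by CPUID; a small check, assuming x86 and GCC/Clang's cpuid.h.)

    #include <cpuid.h>
    #include <stdio.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        /* CPUID leaf 0x80000007, EDX bit 8: "Invariant TSC" (Intel and AMD). */
        if (__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx))
            printf("invariant TSC: %s\n", (edx & (1u << 8)) ? "yes" : "no");
        else
            puts("extended CPUID leaf not available");
        return 0;
    }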
quotemstr
Wizardly workarounds for broken APIs persist long after those APIs are fixed. People still avoid things like flock(2) because at one time NFS didn't handle file locking well. CLOCK_MONOTONIC_RAW is fine these days with the vDSO.
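(For the record, flock(2) itself is a one-liner to use; a sketch below, with a hypothetical lock-file path.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    int main(void) {
        /* "/tmp/example.lock" is a hypothetical path for illustration. */
        int fd = open("/tmp/example.lock", O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("open"); return 1; }
        if (flock(fd, LOCK_EX) == 0) {   /* advisory exclusive lock */
            /* ... critical section ... */
            flock(fd, LOCK_UN);
        }
        close(fd);
        return 0;
    }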
sweetjuly
A better title: a pathological test program meant for Linux does not trigger pathological behavior on OpenBSD
apgwoz
Surely you must be new to tedu posts…
ameliaquining
Still worth avoiding having the HN thread be about whether OpenBSD is in general faster than Linux. This is a thing I've seen a bunch of times recently, where someone gives an attention-grabbing headline to a post that's actually about a narrower and more interesting technical topic, but then in the comments everyone ignores the content and argues about the headline.
st_goliath
> This is a thing I've seen a bunch of times recently ...
> ... in the comments everyone ignores the content and argues about the headline.
Surely you must be new to Hacker News…
rurban
No, generally Linux is at least 3x faster than OpenBSD, because the OpenBSD developers don't care much about optimizations.
farhaven
OpenBSD is a lot faster in some specialized areas though. Random number generation from `/dev/urandom`, for example. When I was at university (in 2010 or so), it was faster to read `/dev/urandom` on my OpenBSD laptop and pipe it over ethernet to a friend's Linux laptop than running `cat /dev/urandom > /dev/sda` directly on his.
Not by just a bit, but it was a difference between 10MB/s and 100MB/s.
sillystuff
I think you meant to say /dev/random, not /dev/urandom.
/dev/random on Linux used to stall waiting for entropy from sources of randomness like network jitter, mouse movement, and keyboard typing. /dev/urandom has always been fast on Linux.
Today, Linux's /dev/random mainly uses an RNG after initial seeding. The BSDs always did this. On my laptop, I get over 500MB/s (kernel 6.12).
IIRC, on modern Linux kernels, /dev/urandom is now just an alias for /dev/random, kept for backward compatibility.
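(A rough way to reproduce that throughput number; a sketch rather than a rigorous benchmark. It reads 512 MiB from /dev/random and divides by wall-clock time.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        char buf[1 << 16];
        int fd = open("/dev/random", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        struct timespec a, b;
        clock_gettime(CLOCK_MONOTONIC, &a);
        long long total = 0;
        while (total < (512LL << 20)) {           /* read 512 MiB */
            ssize_t n = read(fd, buf, sizeof(buf));
            if (n <= 0) break;
            total += n;
        }
        clock_gettime(CLOCK_MONOTONIC, &b);
        double s = (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
        printf("%.1f MB/s\n", total / 1e6 / s);
        close(fd);
        return 0;
    }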
tptacek
There's no reason for normal userland code not part of the distribution itself ever to use /dev/random, and getrandom(2) with GRND_RANDOM unset is probably the right answer for everything.
Both Linux and BSD use a CSPRNG to satisfy /dev/{urandom,random} and getrandom, and, for future-secrecy/compromise-protection continually update their entropy pools with hashed high-entropy events (there's ~essentially no practical cryptographic reason a "seeded" CSPRNG ever needs to be rekeyed, but there are practical systems security reasons to do it).
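(Using getrandom(2) with GRND_RANDOM unset is about this simple; needs glibc 2.25+ on Linux.)

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/random.h>

    int main(void) {
        unsigned char buf[32];
        /* Flags 0, i.e. GRND_RANDOM unset: read from the kernel CSPRNG;
           blocks only until the pool is initially seeded. */
        ssize_t n = getrandom(buf, sizeof(buf), 0);
        if (n < 0) { perror("getrandom"); return 1; }
        for (ssize_t i = 0; i < n; i++) printf("%02x", buf[i]);
        putchar('\n');
        return 0;
    }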
sgarland
OpenBSD switched their PRNG to arc4random in 2012 (and then ChaCha20 in 2014); depending on how accurate your time estimate is, that could well have been the cause. Linux switched to ChaCha20 in 2016.
Related, I stumbled down a rabbit hole of PRNGs last year when I discovered [0] that my Mac was way faster at generating UUIDs than my Linux server, even taking architecture and clock speed into account. Turns out glibc didn’t get arc4random until 2.36, and the version of Debian I had at the time didn’t have 2.36. In contrast, since MacOS is BSD-based, it’s had it for quite some time.
[0]: https://gist.github.com/stephanGarland/f6b7a13585c0caf9eb64b...
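(A sketch of the arc4random-based version-4 UUID generation being compared; arc4random_buf is in stdlib.h on the BSDs and macOS, and in glibc since 2.36. The version/variant bit-twiddling follows RFC 4122.)

    #include <stdio.h>
    #include <stdlib.h>  /* arc4random_buf */

    int main(void) {
        unsigned char uuid[16];
        arc4random_buf(uuid, sizeof(uuid));
        /* Set the version (4) and variant bits per RFC 4122. */
        uuid[6] = (uuid[6] & 0x0f) | 0x40;
        uuid[8] = (uuid[8] & 0x3f) | 0x80;
        printf("%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-",
               uuid[0], uuid[1], uuid[2], uuid[3], uuid[4], uuid[5],
               uuid[6], uuid[7], uuid[8], uuid[9]);
        for (int i = 10; i < 16; i++) printf("%02x", uuid[i]);
        putchar('\n');
        return 0;
    }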
cogman10
/dev/urandom isn't a great test, IMO, simply because there are reasonable tradeoffs in security vs. speed.
For all I know BSD could be doing 31*last or something similar.
The algorithm is also free to change.
chowells
Um... This conversation is about OpenBSD, making that objection incredibly funny. OpenBSD has a mostly-deserved reputation for doing the correct security thing first, in all cases.
But that's also why the rng stuff was so much faster. There was a long period of time where the Linux dev in charge of randomness believed a lot of voodoo instead of actual security practices, and chose nonsense slow systems instead of well-researched fast ones. Linux has finally moved into the modern era, but there was a long period where the randomness features were far inferior to systems built by people with a security background.
somat
At one point, probably 10 years ago, I had Linux VM guests refuse to generate GPG keys; gpg insisted it needed the stupid blocking random device, and because the VM guest was not getting any "entropy" the process went nowhere. As an OpenBSD user I was naturally disgusted. There are many sane solutions to this problem, but I used none of them. Instead I found rngd, a service that accepts "entropy" from a network source, and blasted it with the /dev/random from a fresh OpenBSD guest on the same VM host. Mainly out of spite. "Look here you little shit, this is how you generate random numbers."
shmerl
It's talking about a specific case: https://infosec.exchange/@jann/115022521451508325
kleiba
Pretty broad statement(s).
cout
Interesting. I tried to follow the discussion in the linked thread, and the only takeaway I got was "something to do with RCU". What is the simplified explanation?
bobby_big_balls
In Linux, the file descriptor table (fdtable) of a process starts with a minimum of 256 slots. Two threads creating 256 sockets each, which uses 512 fds on top of the three already present (for stdin, stdout and stderr), requires that the fdtable be expanded about halfway through when the capacity is doubled from 256 to 512, and again near the end when resizing from 512 to 1024.
This is done by expand_fdtable() in the kernel. It contains the following code:
    if (atomic_read(&files->count) > 1)
        synchronize_rcu();
The field files->count is a reference counter. As there are two threads, which share a set of open files between them, the value of this is 2, meaning that synchronize_rcu() is called here during fdtable expansion. This waits until a full RCU grace period has elapsed, causing a delay in acquiring a new fd for the socket currently being created.
If the fdtable is expanded prior to creating a new thread, as the test program optionally will do by calling dup2(0, 666) if supplied a command line argument, this avoids the synchronize_rcu() call because at this point files->count == 1. Therefore, if this is done, there will be no delay later on when creating all the sockets, as the fdtable will have sufficient capacity.
By contrast, the OpenBSD kernel doesn't have anything like RCU and just uses a rwlock when the file descriptor table of the process is being modified, avoiding the long delay during expansion that may be observed in Linux.
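(A reconstruction of the test program from the description above, not the article's exact code: two threads sharing one fd table each create 256 sockets, and an optional dup2(0, 666) pre-grows the table past 512 entries. Compile with -pthread.)

    #include <pthread.h>
    #include <sys/socket.h>
    #include <unistd.h>

    static void *make_sockets(void *arg) {
        (void)arg;
        for (int i = 0; i < 256; i++)
            socket(AF_INET, SOCK_DGRAM, 0);   /* each call allocates a new fd */
        return NULL;
    }

    int main(int argc, char **argv) {
        (void)argv;
        /* With any argument, pre-grow the fd table while files->count == 1,
           i.e. before the second thread exists; no expansion, and thus no
           synchronize_rcu() on Linux, happens during socket creation. */
        if (argc > 1)
            dup2(0, 666);
        pthread_t t1, t2;
        pthread_create(&t1, NULL, make_sockets, NULL);
        pthread_create(&t2, NULL, make_sockets, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }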
tptacek
RCUs are super interesting; here's (I think I've got the right link) a good talk on how they work and why they work that way:
dekhn
Thanks for the explanation. I confirmed the performance timing difference by enabling the dup2 call.
I guess my question is why would synchronize_rcu take many milliseconds (20+) to run. I would expect that to be in the very low milliseconds or less.
altairprime
> allocating kernel objects from proper virtual memory makes this easier. Linux currently just allocates kernel objects straight out of the linear mapping of all physical memory
I found this to be a key takeaway of reading the full thread: this is, in part, a benchmark of kernel memory allocation approaches, that surfaces an unforeseen difference in FD performance at a mere 256 x 2 allocs. Presumably we’re seeing a test case distilled down from a real world scenario where this slowdown was traced for some reason?
saagarjha
That’s how they’re designed; they are intended to complete at some point that’s not soon. There’s an “expedited RCU” which to my understanding tries to get everyone past the barrier as fast as possible by yelling at them but I don’t know if that would be appropriate here.
viraptor
When 2 threads are allocating sockets sequentially, they fight for the locks. If you preallocate a bigger table by creating fd 666 first, the lock contention goes away.
JdeBP
It's something that has always been interesting about Windows NT, which has a multi-level object handle table, and does not have the rule about re-using the lowest numbered available table index. There's scope for reducing contention amongst threads in such an architecture.
Although:
1. back in application-mode code the language runtime libraries make things look like a POSIX API and maintain their own table mapping object handles to POSIX-like file descriptors, where there is the old contention over the lowest free entries; and
2. in practice the object handle table seems to mostly append, so multiple object-opening threads all contend over the end of the table.
saagarjha
RCU is very explicitly a lockless synchronization strategy.
themafia
Yea, well, I had to modify your website to make it readable. Why do people do this?
GTP
By leaving my finger on the screen, I accidentally triggered an easter egg of two "cannons" shooting squares. Did anyone else notice it?
pan69
Happened for me on my normal desktop browser, cute but distracting. It also made my mouse cursor disappear. I had to move my mouse outside the browser window to make it visible again.
evanjrowley
I also saw it, and it happened on a non-touch computer screen.
haunter
In my mind, faster = the same game with the same graphics settings has more FPS.
(I don’t even know whether you could actually start mainstream games on BSD or not.)
nine_k
Isn't it mostly limited by GPU hardware, and by binary blobs that are largely independent from the host platform?
haunter
Games run better under Linux (even if they are not-native but with Proton/Wine) than on Windows 11 so the platform does matter
extraisland
It annoys me when people claim this. It depends on the game, the distro, the Proton version, the desktop environment, plus a lot of other things I have forgotten about.
Also, latency is frequently worse on Linux. I play a lot of quick-twitch games on both Linux and Windows, and while fps and frame times are generally in the same ballpark, latency is far higher.
Another problem is that Proton compatibility is all over the place. Some of the games Valve said were certified don't actually work well, mods can be problematic, and generally you end up faffing with custom launch options to get things working well.
dang
The article title is too baity to fit HN's guidelines (https://news.ycombinator.com/newsguidelines.html) so I replaced it with a phrase from the article that's hopefully just baity enough.
JdeBP
I was just about to point Ian Betteridge at the original title. (-:
dang
Just don't joke that he has retired! https://news.ycombinator.com/item?id=10393754
agambrahma
So ... essentially testing file descriptor allocation overhead
asveikau
My guess is it has something to do with the file descriptor table having a lot of empty entries (the dup2(0, 666) line).
Now time to read the actual linked discussion.
wahern
I think dup2 is the hint, but in the example case the dup2 path isn't invoked--it's conditioned on passing an argument, but the test runs are just `./a.out`. IIUC, the issue is growing the file descriptor table. The dup2 is a workaround that preallocates a larger table (666 > 256 * 2)[1], to avoid the pathological case when a multi-threaded process grows the table. From the linked infosec.exchange discussion it seems the RCU-based approach Linux is using can result in some significant latency, resulting in much worse performance in simple cases like this compared to a simple mutex[2].
[1] Off-by-one. To be more precise, the state established by the dup2 is (667 > 256 * 2), or rather (667 > 3 + 256 * 2).
[2] Presumably what OpenBSD is using. I'd be surprised if they've already imported and adopted FreeBSD's approach mentioned in the linked discussion, notwithstanding that OpenBSD has been on an MP scalability tear the past few years.
uwagar
the bsd people seem to enjoy measuring and logging a lot.
jedberg
"It depends"
Faster is all relative. What are you doing? Is it networking? Then BSD is probably faster than Linux. Is it something Linux is optimized for? Then probably Linux.
A general benchmark? Who knows, but does it really matter?
At the end of the day, you should benchmark your own workload, but also it's important to realize that in this day and age, it's almost never the OS that is the bottleneck. It's almost always a remote network call.