Nubmq: A high-performance key-value store engine built from first principles
7 comments · March 15, 2025
avinassh
This looks great, how are you doing the benchmarks? It claims to be way faster than Redis. Can you also measure it against Microsoft Garnet? What's the secret sauce for beating it in latency?
Redis:
  Write Latency: ~1.1ms
  Read Latency: ~700µs
  Max Throughput: ~85,000 ops/sec

Nubmq:
  Write Latency: 900µs
  Read Latency: 500µs
  Max Throughput: 115,809 ops/sec
Also, 700µs for Redis reads sounds high to me. Running against memtier_benchmark would also be great - https://github.com/RedisLabs/memtier_benchmark

nubskr
The benchmarks were done on an M2 MacBook Air (8-core) with 21M requests distributed across 100 concurrent clients sending requests as fast as possible (the test script is also in the GitHub repo).
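In rough sketch form, the load pattern is just a fixed pool of goroutines hammering the store as fast as they can (simplified Go sketch, not the real test script in the repo; `doRequest` is a hypothetical stand-in for one client round-trip):

```go
// Sketch of the benchmark load shape: 100 clients, 21M total requests.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

const (
	totalRequests = 21_000_000
	numClients    = 100
)

// doRequest is a placeholder for one SET/GET round-trip against the engine.
func doRequest(i int) {
	_ = fmt.Sprintf("key-%d", i)
}

func main() {
	var completed int64
	start := time.Now()

	var wg sync.WaitGroup
	perClient := totalRequests / numClients
	for c := 0; c < numClients; c++ {
		wg.Add(1)
		go func(c int) {
			defer wg.Done()
			for i := 0; i < perClient; i++ {
				doRequest(c*perClient + i)
				atomic.AddInt64(&completed, 1)
			}
		}(c)
	}
	wg.Wait()

	elapsed := time.Since(start)
	fmt.Printf("%d ops in %s (%.0f ops/sec)\n",
		completed, elapsed, float64(completed)/elapsed.Seconds())
}
```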
A few reasons for being fast that come to mind right now:
1. Reads are direct lookups, so distributing them across goroutines makes them faster.
2. Set requests are where it gets complicated. If we're simply updating an existing key's value, the cost is essentially negligible, but creating a new key-value pair increases per-shard load under scale, which can trigger a store resize. To avoid just stopping everything when that happens, the engine recognises when the per-shard load starts getting too high, creates a bigger store in the background, and then switches writes from the old engine to the new (bigger) one while the old one keeps serving reads. The old engine migrates its keys to the new one in the background, and once it's done, we just dereference the old engine so the GC can collect it :) This makes sure incoming requests keep getting served the whole time (oh shit, I spilled the secret sauce). A rough sketch of that handoff is below.
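Roughly, the handoff looks like this (a simplified Go sketch of the idea, not the exact code in the repo; the `store`/`Engine` names and locking are just for illustration):

```go
package store

import "sync"

// store is a simplified fixed-size map; only the handoff logic is sketched here.
type store struct {
	mu   sync.RWMutex
	data map[string]string
}

func newStore(sizeHint int) *store {
	return &store{data: make(map[string]string, sizeHint)}
}

// Engine keeps serving requests while a resize is in progress.
type Engine struct {
	mu  sync.RWMutex
	cur *store // writes always go here
	old *store // non-nil only while a migration is draining
}

// Set always writes to the current (possibly new, bigger) store.
func (e *Engine) Set(k, v string) {
	e.mu.RLock()
	cur := e.cur
	e.mu.RUnlock()
	cur.mu.Lock()
	cur.data[k] = v
	cur.mu.Unlock()
}

// Get checks the new store first, then falls back to the old one mid-migration.
func (e *Engine) Get(k string) (string, bool) {
	e.mu.RLock()
	cur, old := e.cur, e.old
	e.mu.RUnlock()
	cur.mu.RLock()
	v, ok := cur.data[k]
	cur.mu.RUnlock()
	if ok || old == nil {
		return v, ok
	}
	old.mu.RLock()
	v, ok = old.data[k]
	old.mu.RUnlock()
	return v, ok
}

// resize swaps in a bigger store, migrates keys in the background,
// then drops the old store so the GC can reclaim it.
// (Deletes during migration are not handled in this sketch.)
func (e *Engine) resize(newSize int) {
	bigger := newStore(newSize)
	e.mu.Lock()
	e.old, e.cur = e.cur, bigger
	old := e.old
	e.mu.Unlock()

	go func() {
		old.mu.RLock()
		for k, v := range old.data {
			bigger.mu.Lock()
			// Don't clobber keys already rewritten in the new store.
			if _, exists := bigger.data[k]; !exists {
				bigger.data[k] = v
			}
			bigger.mu.Unlock()
		}
		old.mu.RUnlock()

		e.mu.Lock()
		e.old = nil // dereference; GC collects the old store
		e.mu.Unlock()
	}()
}
```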
At least on my machine, with the default settings, under that concurrent load, Redis starts slowing down due to single-threaded execution and Lua overhead.
ForTheKidz
The image here appears to be upside down (or rather rotated 180): https://github.com/nubskr/nubmq/blob/master/assets/architect...
It's not clear from the readme physically where the data is stored, nor where in the storage process the "congestion" is coming from.
I'm surprised there's no range scan. Range scans enable a whole swathe of functionality that make kv stores punch above their weight. I suppose that's more rocksdb/dynamo/bigtable/cassandra than redis/memcached, though.
nubskr
> *"The image here appears to be upside down (or rather rotated 180)"*
Yeah, something weird happened with the image rendering; I'll fix that.
> *"It's not clear from the readme physically where the data is stored, nor where in the storage process the 'congestion' is coming from."*
The data is *fully in-memory*, distributed across dynamically growing shards (think of an adaptive hashtable that resizes itself). There’s no external storage layer like RocksDB or disk persistence—this is meant to be *pure cache-speed KV storage.*
Congestion happens when a shard starts getting too many keys relative to the rest of the system. The engine constantly tracks *contention per shard*, and when it crosses a threshold, we trigger an upgrade (new shards added, old ones redistributed). Migration is *zero-downtime*, but at very high write rates, there’s a brief moment where some writes are directed to the old store while the new one warms up.
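In sketch form, that per-shard tracking can be as simple as this (illustrative Go only; the `upgradeThreshold` value and the exact heuristic here are made up, not the real ones):

```go
package store

import "sync"

// upgradeThreshold is an illustrative load factor, not the real heuristic.
const upgradeThreshold = 8 // avg keys per shard before we grow

type shard struct {
	mu   sync.RWMutex
	keys map[string]string
}

type ShardedStore struct {
	shards []*shard
}

func (s *ShardedStore) shardFor(key string) *shard {
	h := fnv32(key)
	return s.shards[h%uint32(len(s.shards))]
}

// Set writes the key and reports whether the store should be upgraded.
func (s *ShardedStore) Set(key, val string) (needsUpgrade bool) {
	sh := s.shardFor(key)
	sh.mu.Lock()
	_, existed := sh.keys[key]
	sh.keys[key] = val
	load := len(sh.keys)
	sh.mu.Unlock()

	// Only brand-new keys add load; plain updates are essentially negligible.
	return !existed && load > upgradeThreshold
}

// fnv32 is a tiny FNV-1a hash for shard selection.
func fnv32(s string) uint32 {
	h := uint32(2166136261)
	for i := 0; i < len(s); i++ {
		h ^= uint32(s[i])
		h *= 16777619
	}
	return h
}
```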
> *"I'm surprised there's no range scan."*
Yeah, that's an intentional design choice: this is meant to be a *high-speed cache*, closer to Redis than a full database like RocksDB or BigTable. Range queries would need an ordered structure (e.g., skip lists or B-trees), which would add overhead. But I'm definitely considering implementing *prefix scans* (e.g., `SCAN user:*` style queries) since that'd be useful for a lot of real-world use cases.
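A first cut of prefix scans over unordered shards would just walk every shard and filter, something like this (purely illustrative, nothing like it is implemented yet; `ShardedStore` here is the same simplified shape as the sketch above):

```go
package store

import (
	"strings"
	"sync"
)

// Same simplified shapes as the earlier sketch.
type shard struct {
	mu   sync.RWMutex
	keys map[string]string
}

type ShardedStore struct {
	shards []*shard
}

// PrefixScan returns every key starting with prefix (e.g. "user:").
// It's O(total keys) because shards are unordered hash maps;
// doing better would need an ordered structure.
func (s *ShardedStore) PrefixScan(prefix string) map[string]string {
	out := make(map[string]string)
	for _, sh := range s.shards {
		sh.mu.RLock()
		for k, v := range sh.keys {
			if strings.HasPrefix(k, prefix) {
				out[k] = v
			}
		}
		sh.mu.RUnlock()
	}
	return out
}
```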
andsoitis
are you using it in production yet?
nubskr
It's still early, but it's been stress-tested pretty aggressively with high write loads and contention scenarios. Right now, it's feature-complete as a high-performance in-memory KV store with dynamic scaling. Clustering and multi-node setups are on the roadmap before any production rollout, but for single-node workloads, it’s already showing strong results (115K+ ops/sec on an M2 MacBook Air).
Are you thinking about a specific use case?
keven-fr
[dead]
Not able to post the link in the URL for some reason; here it is: https://github.com/nubskr/nubmq