bdcravens
Is there a single sentence anywhere that describes what it actually is?
DrammBA
I've seen this more and more with software landing pages: the team is so deep into development and marketing that they totally forget to say what the thing actually is or does. That's why you show it to family and friends first, to get some fresh eyes on it before publishing the site.
lucianbr
In a similar vein, lots of software is Mac-only but omits to say so anywhere. You just get to the downloads page and see that there are only Mac packages.
As if nobody ever uses anything else.
johnisgood
Looks like a Redis clone. The benchmarks compare it to Redis.
Description from GitHub:
> DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware. Commonly used as a cache, it offers a familiar interface while enabling real-time data updates through query subscriptions. It delivers higher throughput and lower median latencies, making it ideal for modern workloads.
arpitbbhayani
Arpit here.
DiceDB is an in-memory database that is also reactive. So, instead of polling the database for changes, the database pushes the result set to you if you subscribe to it.
We have a similar set of commands to Redis, but we are not Redis-compliant.
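Roughly, the difference looks like this from a client's point of view. The Client interface and method names below are hypothetical, just to illustrate the polling-vs-subscription shape, not our actual SDK:

```go
// Sketch of polling vs. subscribing, using a hypothetical client
// interface; Get/WatchGet are illustrative names, not a real SDK surface.
package example

import (
	"context"
	"time"
)

type Client interface {
	Get(ctx context.Context, key string) (string, error)
	// WatchGet subscribes to a key and returns a channel that receives
	// the fresh result every time the key changes.
	WatchGet(ctx context.Context, key string) (<-chan string, error)
}

// Polling: the application keeps re-asking and mostly gets unchanged data.
func pollLoop(ctx context.Context, c Client, render func(string)) {
	for ctx.Err() == nil {
		if val, err := c.Get(ctx, "user:42:cart"); err == nil {
			render(val)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

// Reactive: subscribe once; the database pushes each new result set.
func watchLoop(ctx context.Context, c Client, render func(string)) error {
	updates, err := c.WatchGet(ctx, "user:42:cart")
	if err != nil {
		return err
	}
	for val := range updates {
		render(val)
	}
	return nil
}
```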
nebulous1
Would "key-value" not have a place in the description?
This application may be very capable, but I agree with the person saying that its use case isn't clear on the home page; you have to go deeper into the docs. "Smarter than a database" also seems kind of debatable.
remram
This is a lot clearer than any information I found anywhere else. There wasn't any room on your website, README, or docs for this summary?
arpitbbhayani
It is right there on the landing page. But, let me highlight it a bit.
ofrzeta
So like RethinkDB? https://rethinkdb.com/
dkh
Not a month goes by where I don’t remember it at least once and realize that I still miss it.
This seems more like Redis though
rvnx
A Redis-inspired server in Go
adhamsalama
Can't wait to feel the impact of garbage collection in my fast cache!
arpitbbhayani
We had a similar thought, but it is not as bad as we feared.
We have the benchmarks, and we will be sharing the numbers in subsequent releases.
But there is still a chance that it may come back to bite us and limit us to a smaller scale, and we are ready for that.
arpitbbhayani
Nope. It started as a Redis clone. We are on a different trajectory now. Chasing different goals.
bob1029
> Chasing different goals.
What are those goals? I was struggling to interpret a meaningful roadmap from the issue & commit history.
remram
Secret goals are no selling point.
lucianbr
No. I had the exact same problem.
Feels arrogant. "Of course you already know what this is, how could you not?"
remram
The docs do, the site is useless.
> DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware.
A Redis-like database with a Redis-like interface. No info about drop-in compatibility, so I assume not.
bdcravens
Even after clicking through to the GitHub and reading the "What is DiceDB?" section, I'm still not very clear. It feels more like marketing than information.
"What is DiceDB? DiceDB is an open-source, fast, reactive, in-memory database optimized for modern hardware. Commonly used as a cache, it offers a familiar interface while enabling real-time data updates through query subscriptions. It delivers higher throughput and lower median latencies, making it ideal for modern workloads."
alexey-salmin
| Metric | DiceDB | Redis |
| -------------------- | -------- | -------- |
| Throughput (ops/sec) | 15655 | 12267 |
| GET p50 (ms) | 0.227327 | 0.270335 |
| GET p90 (ms) | 0.337919 | 0.329727 |
| SET p50 (ms) | 0.230399 | 0.272383 |
| SET p90 (ms) | 0.339967 | 0.331775 |
UPD: Never mind, I didn't have my eyes open. Sorry for the confusion.
Something I still fail to understand is where you can actually spend 20 ms answering a GET request in a RAM key-value store (unless you implement it in Java).
I never gained much experience with existing open-source implementations, but when I was building proprietary solutions at my previous workplace, in-memory response times were measured in tens to hundreds of microseconds. The lower bound on latency is mostly set by syscalls, so using io_uring should in theory give even better timings, even though I never got to try it in production.
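(For a rough sense of that syscall-bound floor, here is a minimal loopback round-trip timer in Go. It is purely illustrative: it measures syscall plus loopback overhead, not any particular database, but it shows the ballpark a request/response protocol is working against.)

```go
// Minimal loopback round-trip timer: a trivial TCP echo server plus a
// client that measures request/response latency over 127.0.0.1.
package main

import (
	"fmt"
	"log"
	"net"
	"sort"
	"time"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		conn, err := ln.Accept()
		if err != nil {
			return
		}
		buf := make([]byte, 64)
		for {
			n, err := conn.Read(buf)
			if err != nil {
				return
			}
			conn.Write(buf[:n]) // echo back
		}
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	msg := []byte("GET key")
	buf := make([]byte, 64)

	const iters = 10000
	lat := make([]time.Duration, 0, iters)
	for i := 0; i < iters; i++ {
		start := time.Now()
		conn.Write(msg)
		conn.Read(buf) // small payload, so a single read suffices here
		lat = append(lat, time.Since(start))
	}

	sort.Slice(lat, func(i, j int) bool { return lat[i] < lat[j] })
	fmt.Println("p50:", lat[iters/2], "p90:", lat[iters*9/10])
}
```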
If you read from NVMe AND also do erasure recovery across 6 nodes (LRC-12-2-2), then yes, you get into tens of milliseconds. But seeing these numbers for a single-node RAM DB just doesn't make sense, and I'm surprised everyone treats them as normal.
Does anyone have experience with low-latency, high-throughput open-source key-value stores? Any specific implementation to recommend?
davekeck
> Something I still fail to understand is where you can actually spend 20ms
Aren’t these numbers 0.2 ms, i.e. 200 microseconds?
esafak
They also sounded fishy to me. I'd expect closer to 10x as much throughput with Redis: https://redis.io/docs/latest/operate/oss_and_stack/managemen...
bitlad
I think it is fishy based on this - https://dzone.com/articles/performance-and-scalability-analy...
Kerbonut
Looks like your units are in ms, so 0.20 ms.
alexey-salmin
oh thank you, it's just me being blind
losvedir
I didn't see it in the docs, but I'd want to know the delivery semantics of the pubsub before using this in production. I assume best effort / at most once? Any retries? In what scenarios will the messages be delivered or fail to be delivered?
schmookeeg
Using an instrument of chance to name a data store technology is pretty amusing to me.
bufferoverflow
No chance if we live in a deterministic universe.
huntaub
What are some example use cases where having the ability for the database to push updates to an application would be helpful (vs. the traditional polling approach)?
zupa-hu
One example is when you want to display live data on a website. Could be a dashboard, a chat, or really the whole site. Polling is both slower and more resource hungry.
If it is built into your language/framework, you can completely ignore the problem of updating the client, as it happens automatically.
Hope that makes sense.
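A rough sketch of the wiring, assuming the database hands you a channel of updates (the subscription mechanism here is a stand-in, not any specific API): each update is forwarded to the browser over Server-Sent Events, so the page itself never polls.

```go
// Minimal sketch: forward pushed database updates to a browser over
// Server-Sent Events. The updates channel stands in for whatever
// subscription mechanism the database provides; one channel per client
// is assumed here to keep the example small.
package main

import (
	"fmt"
	"net/http"
)

func sseHandler(updates <-chan string) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		w.Header().Set("Cache-Control", "no-cache")
		flusher, ok := w.(http.Flusher)
		if !ok {
			http.Error(w, "streaming unsupported", http.StatusInternalServerError)
			return
		}
		for {
			select {
			case <-r.Context().Done():
				return // browser tab closed or navigated away
			case val := <-updates:
				fmt.Fprintf(w, "data: %s\n\n", val) // push straight to the page
				flusher.Flush()
			}
		}
	}
}

func main() {
	updates := make(chan string) // would be fed by the DB subscription
	http.HandleFunc("/events", sseHandler(updates))
	http.ListenAndServe(":8080", nil)
}
```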
huntaub
Interesting -- is that normally done with database updates + polling vs. something purpose-built?
bitlad
I think the performance benchmark you have done for DiceDB is fake.
These are the real numbers - https://dzone.com/articles/performance-and-scalability-analy...
They do not match your benchmarks.
arpitbbhayani
The benchmark tool is different; I mention that on my benchmark page.
We had to write a small benchmark utility (membench) ourselves because the long-term metrics that we are optimizing for need to be evaluated in a different way.
Also, the scripts, utilities, and infra configurations are all listed there. Feel free to run them.
remram
This seems orders of magnitude slower than Nubmq which was posted yesterday: https://news.ycombinator.com/item?id=43371097
arpitbbhayani
Different tool. The metrics I am optimizing for are different, hence I wrote a separate utility. It may not be the most optimized one, but I am using it to measure all things DiceDB and will use it to optimize DiceDB further.
nylonstrung
Who is this for? Can you help me understand why and when I'd want to use this in place of Redis/Dragonfly?
ac130kz
Any reason to use this over Valkey, which is now faster than Redis and community driven? Genuinely interested.
hp77
DragonflyDB is also in that race, isn't it?
ac130kz
From what I looked at in the past, they seemed better on paper by comparing themselves to a very old version of Redis in a rigged scenario (no clustering or multithreading applied despite Dragonfly having multithreading enabled), and they are a lot worse in terms of code updates. Maybe that's different today, but I'm more keen on using Valkey.
hp77
Does Redis support multithreading? Doesn't it use a single-threaded event loop, while DragonflyDB's basic version has multithreading enabled and a shared-nothing architecture? Also, I found this recent comparison between Valkey and DragonflyDB: https://www.dragonflydb.io/blog/dragonfly-vs-valkey-benchmar...
DrammBA
I love the "Follow on twitter" link with the old logo and everything, they probably used a template that hasn't been updated recently but I'm choosing to believe it's actually a subtle sign of protest or resistance.
arpitbbhayani
I prefer that over the X icon.
spiderfarmer
Just use Bluesky. It’s the better middle finger.
datadeft
Does this suffer from the same problems as Redis when trying to scale horizontally?
Looking at the DiceDB code base, I have a few questions regarding its design. I'm asking to understand the project's goals and design rationale; anyone, feel free to help me understand this.
I could be wrong, but the primary in-memory storage appears to be a standard Go map with locking. Is this a temporary choice for iterative development, and is there a longer-term plan to adopt a more optimized or custom data structure?
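(For context on what I mean by "more optimized": one common pattern is to shard the keyspace so writers contend on smaller locks, roughly like the sketch below. This is purely illustrative, not a claim about DiceDB's current or planned implementation.)

```go
// Rough sketch of a sharded map as an alternative to a single
// mutex-guarded map: keys are hashed to one of N shards, each with its
// own lock, which reduces contention under concurrent writes.
package example

import (
	"hash/fnv"
	"sync"
)

const numShards = 256

type shard struct {
	mu   sync.RWMutex
	data map[string]string
}

type ShardedMap struct {
	shards [numShards]*shard
}

func NewShardedMap() *ShardedMap {
	m := &ShardedMap{}
	for i := range m.shards {
		m.shards[i] = &shard{data: make(map[string]string)}
	}
	return m
}

func (m *ShardedMap) shardFor(key string) *shard {
	h := fnv.New32a()
	h.Write([]byte(key))
	return m.shards[h.Sum32()%numShards]
}

func (m *ShardedMap) Get(key string) (string, bool) {
	s := m.shardFor(key)
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[key]
	return v, ok
}

func (m *ShardedMap) Set(key, value string) {
	s := m.shardFor(key)
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = value
}
```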
I find DiceDB's reactivity mechanism very intriguing, particularly the "re-execution" of the entire watch command (i.e. re-running GET.WATCH mykey on key modification). It's an interesting design choice.
From what I understand, the Eval func executes client-side commands; this seems to be laying the foundation for more complex watch commands that can be evaluated before sending notifications to clients.
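To make sure I'm describing the contrast correctly, here is roughly how I picture the two approaches (purely illustrative Go, not DiceDB's actual code):

```go
// Two ways a server can react when a watched key changes.
// Purely illustrative; not DiceDB's implementation.
package example

type subscriber struct {
	cmd  func(store map[string]string) string // the original watch command, e.g. a GET
	send func(payload string)                 // push to this client's connection
}

// Approach A: notify only. The server tells each watcher that the key
// changed and the client decides whether and how to re-read
// (Redis pub/sub or keyspace-notification style).
func notifyOnly(key string, subs []subscriber) {
	for _, s := range subs {
		s.send("changed: " + key)
	}
}

// Approach B: re-execute. The server re-runs each watcher's original
// command against the store and pushes the fresh result set, so the
// client never issues a follow-up read. The cost grows with the number
// of watchers and the complexity of the watched command.
func reExecute(store map[string]string, subs []subscriber) {
	for _, s := range subs {
		s.send(s.cmd(store))
	}
}
```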
But I have the following questions.
What is the primary motivation behind re-executing the entire command, as opposed to simply notifying clients of a key change (as in Redis Pub/Sub or streams)? Is the intent to simplify client-side logic by handling complex key dependencies on the server?
Given that re-execution seems computationally expensive, especially with multiple watchers or more complex (hypothetical) watch commands, how are potential performance bottlenecks addressed?
How does this "re-execution" approach compare in terms of scalability and consistency to more established methods like server-side logic (e.g., Lua scripts in Redis) or change data capture (CDC) ?
Are there plans to support more complex watch commands beyond GET.WATCH (e.g. JSON.GET.WATCH), and how would re-execution scale in those cases?
I'm curious about the trade-offs considered in choosing this design and how it aligns with the project's overall goals. Any insights into these design decisions would help me understand its use-cases.
Thanks