Ask HN: What's your go-to message queue in 2025?
98 comments
·May 15, 2025
speedgoose
I played with most message queues and I go with RabbitMQ in production.
Mostly because it has been very reliable for years in production at a previous company, and doesn’t require babysitting. Its recent versions also have new features that make it a decent alternative to Kafka if you don’t need to scale to the moon.
And the logo is a rabbit.
swyx
Datadog too. i often wonder how come more companies don't pick cute mascots. gives a logo, makes everyone have warm fuzzies immediately, creates pun opportunities.
inb4 "oh but you won't be taken seriously" well... datadog.
DonsDiscountGas
Hugging face clearly shares the same philosophy
taskforcegemini
but usually I only see the name "huggingface" written, and I think of headcrabs from Half-Life instead
aitchnyu
Just used it as a Celery (job queue) backend. How is it a Kafka alternative?
speedgoose
RabbitMQ streams: https://www.rabbitmq.com/docs/streams
KingOfCoders
NATS.io because I'm using Go, and I can just embed it for one server [0], one binary to deploy with Systemd, but able to split it out when scaling the MVP.
adamcharnock
I would highlight a distinction between Queues and Streams, as I think this is an important factor in making this choice.
In the case of a queue, you put an item in the queue, and then something removes it later. There is a single flow of items. They are put in. They are taken out.
In the case of a stream, you put an item in the queue, then it can be removed multiple times by any other process that cares to do so. This may be called 'fan out'.
This is an important distinction and really affects how one designs software that uses these systems. Queues work just fine for, say, background jobs. A user signs up, and you put a task in the 'send_registration_email' queue.[1]
However, what if some _other_ system then cares about user sign ups? Well, you have to add another queue, and the user sign-up code needs to be aware of it. For example, a 'add_user_to_crm' queue.
The result here is that choosing a queue early on leads to a tight-coupling of services down the road.
The alternative is to choose streams. In this case, instead of saying what _should_ happen, you say what _did_ happen (past tense). Here you replace 'send_registration_email' and 'add_user_to_crm' with a single stream called 'user_registered'. Each service that cares about this fact is then free to subscribe to that stream and get its own copy of the events (it does so via a 'consumer group', or something of a similar name).
This results in a more loosely coupled system, where you potentially also have access to an event history should you need it (if you configure your broker to keep the events around).
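The fan-out pattern described above can be sketched with Redis Streams (Python, using the redis-py client). The stream name, group names, and event shape here are invented for illustration, not taken from the comment:

```python
# Sketch of the stream/fan-out pattern: one producer writes an event once,
# and each interested service reads it via its own consumer group.
import json


def make_event(event_type: str, payload: dict) -> dict:
    """Serialize an event as flat string fields, as Redis Streams expects."""
    return {"type": event_type, "payload": json.dumps(payload)}


def handle(event_fields: dict) -> dict:
    """Decode an event read back from the stream."""
    return {"type": event_fields["type"],
            "payload": json.loads(event_fields["payload"])}


if __name__ == "__main__":
    import redis  # pip install redis; assumes a local Redis server

    r = redis.Redis(decode_responses=True)

    # The producer states what *did* happen, once:
    r.xadd("user_registered", make_event("user_registered", {"user_id": 42}))

    # Each interested service gets its own consumer group, i.e. its own
    # cursor over the same stream -- this is the fan-out.
    for group in ("emailer", "crm_sync"):
        try:
            r.xgroup_create("user_registered", group, id="0")
        except redis.ResponseError:
            pass  # group already exists
        msgs = r.xreadgroup(group, "worker-1", {"user_registered": ">"}, count=10)
        for _stream, entries in msgs:
            for msg_id, fields in entries:
                print(group, handle(fields))
                r.xack("user_registered", group, msg_id)
```

Both groups receive the same event independently; acking in one group does not affect the other's cursor.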
--
This is where Postgresql and SQS tend to fall down. I've yet to hear of an implementation of streams in Postgresql[2]. And SQS is inherently a queue.
I therefore normally reach for Redis Streams, but mostly because it is what I am familiar with.
Note: This line of thinking leads into Domain Driven Design, CQRS, and Event Sourcing. Each of which is interesting and certainly has useful things to offer, although I would advise against simply consuming any of them wholesale.
[1] Although this is my go-to example, I'm actually unconvinced that email sending should be done via a queue. Email is just a sequence of queues anyway.
[2] If you know of one please tell me!
thruflo
There are lots of options to stream data out of Postgres, including:
- https://electric-sql.com (disclaimer: co-founder) - https://feldera.com - https://materialize.com - https://powersync.com - https://sequinstream.com - https://supabase.com/docs/guides/realtime/broadcast - https://zero.rocicorp.dev
Etc.
adamcharnock
I think these all relate to streaming data. Not streams in the sense of the data-structure for message passing (a la Kafka, Redis Streams, etc)
empthought
The logical replication of the transaction log is basically a stream of data change events, so the difference between those senses isn’t very big.
j45
While someone’s use case would have to be verified, the below is to show that there are streaming options in Postgres.
Would be interesting to get your take on queues vs streams on the below.
I consider myself a little late to the Postgres party after time with other NoSQL and RDBMS options, but it seems more and more like an OK place to start from.
For Streaming…
Supabase has some Kafka stream type examples that covers change data capture: https://supabase.com/blog/postgres-wal-logical-replication
Tables can also do some amount of stream-like behaviour with visibility and timeout behaviours:
- pg-boss — durable job queues with visibility timeouts and retries.
- Zilla — supports Postgres as a source, using CDC to act as a stream.
- ElectricSQL — uses Postgres replication and CRDTs for reactive sync (great for frontend state as a stream).
Streaming inside Postgres also has some attention from:
- Postgres as Event Store: https://eventmodeling.org. This can combine event sourcing with Postgres for stream modeling.
- pgmq — from Tembo, this is a minimal message queue built on Postgres using an append-only design. Effectively works as a persistent stream with ordered delivery.
adamcharnock
I suspect this comment is LLM generated. There is a 404-ing URL, discussion of queues, and some discussion of Postgres CDC which I believe is Postgres logical replication. Neither of which are a streams implementation on Postgres.
vlvdus
What makes Postgres (or any decent relational DB) fall down in this case?
adamcharnock
It is simply that I’m unaware of a streams implementation for postgresql. Although another comment is mentioning them, so I’ll read that in some more detail shortly.
I’ve always felt that streams should be implementable via stored procedures, and that it would be a fun project. I’ve just never quite had the driving force to do it.
ryandvm
Great comment. I'm disappointed that I had to scroll this far down to see someone pointing out that queues and streams ARE NOT THE SAME.
bilinguliar
I am using Beanstalkd, it is small and fast and you just apt-get it on Debian.
However, I have noticed that oftentimes devs are using queues where Workflow Engines would be a better fit.
If your message processing time is in tens of seconds – talk to your local Workflow Engine professional (:
janstice
In that case, any suggestions if the answer was looking for workflow engines? Ideally something that will work for no-person-in-the-middle workloads in the tens of seconds range as well as person-making-a-decision workflows that can live for anywhere between minutes and months?
bilinguliar
Temporal if you do not want vendor locks.
AWS Step Functions or GCP Workflows if you are on the cloud.
mdaniel
https://github.com/temporalio/temporal/tree/v1.27.2 (MIT)
It has been submitted quite a few times but I don't readily see any experiences (pro or con) https://news.ycombinator.com/from?site=github.com/temporalio
dkh
A classic. Not something I personally use these days, but I think just as a piece of software it is an eternally good example of something simple, powerful, well-engineered, pleasant to use, and widely-compatible, all at the same time
wordofx
Postgres. Doing ~ 70k messages/second average. Nothing huge but don’t need anything dedicated yet.
lawn
I'm curious on how people use Postgres as a message queue. Do you rely on libraries or do you run a custom implementation?
ericaska
We also use Postgres but we don't have many jobs. It's usually 10-20 schedules that create hourly-to-monthly jobs, and they are mostly independent. Currently a custom-made solution, but we are going to update it to use SKIP LOCKED and NOTIFY/LISTEN plus an interval to handle jobs. There is a really good video about it on YouTube called "Queues in PostgreSQL" from Citus Con.
padjo
You can go an awfully long way with just SELECT … FOR UPDATE … SKIP LOCKED
Spivak
I've never found a satisfying way to not hold the lock for the full duration of the task that is resilient to workers potentially dying. And postgres isn't happy holding a bunch of locks like that. You end up having to register and track workers with health checks and a cleanup job to prune old workers so you can give jobs exclusivity for a time.
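One common answer (not claiming it solves the parent's objection, just naming the pattern) is to not hold the lock for the task's duration at all: claiming a job stamps it with a lease deadline, and a dead worker's job simply becomes claimable again once the lease expires. A toy in-memory sketch of the idea:

```python
# Toy in-memory model of lease-based job claiming: a claimed job becomes
# visible again if the worker doesn't ack before its lease expires.
import time


class LeaseQueue:
    def __init__(self, lease_seconds: float):
        self.lease_seconds = lease_seconds
        self.jobs = {}  # job_id -> lease deadline (0.0 = unclaimed)

    def put(self, job_id):
        self.jobs[job_id] = 0.0

    def claim(self, now=None):
        """Return a job whose lease has expired (or was never claimed)."""
        now = time.monotonic() if now is None else now
        for job_id, deadline in self.jobs.items():
            if deadline <= now:  # unclaimed, or the previous worker died
                self.jobs[job_id] = now + self.lease_seconds
                return job_id
        return None

    def ack(self, job_id):
        del self.jobs[job_id]  # job finished; remove it for good
```

In Postgres the same idea is a `locked_until` timestamp column: the claim query sets `locked_until = now() + interval '...'` on rows where `locked_until < now()`, so no transaction-lifetime lock is held and no worker registry or cleanup job is needed; the tradeoff is at-least-once delivery, since a slow-but-alive worker can have its job re-delivered.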
j45
Built right in using a group of pg functions, or also with a library, or also with a python based tool that happens to use pg for the queue.
wordofx
Just select for update skip locked. Table is partitioned to keep the unprocessed set small.
iamcalledrob
Curious what kind of hardware you're using for that 70K/s?
wordofx
It’s an r8g instance in aws. Can’t remember the size but I think it’s over provisioned because it’s at like 20% utilisation and only spikes to 50.
aynyc
What’s your batch size?
lmm
SQS is great if you're already on AWS - it works and gets out of your way.
Kafka is a great tool with lots of very useful properties (not just queues, it can be your primary datastore), but it's not operationally simple. If you're going to use it you should fully commit to building your whole system on it and accept that you will need to invest in ops at least a little. It's not a good fit for a "side" feature on the edge of your system.
mstaoru
Redis Streams is a "go-to" for me, mostly because of operational simplicity and performance. It's also dead simple to write consumers in any language. If I had more stringent durability requirements, I would probably pick Redpanda, but Kafka-esque (!) processing semantics can be daunting sometimes.
I didn't have anything but bad experiences with RabbitMQ, maybe I cannot "cook" it, but it would always go split-brain, or last issue I had, a part of clients connected to certain clustered nodes just stopped receiving messages. Cluster restart helped, but all logs and all metrics were green and clean. I try to avoid it if I can.
ZeroMQ is more like a building block for your applications. If you need something very special, it could be a good fit, but for a typical EDA-ish bus architecture Redis or Kafka/Redpanda are both very good.
jolux
Kafka is fairly different from the rest of these — it’s persistent and designed for high read throughput to multiple simultaneous clients at the same time, as some other commenters have pointed out.
We wanted replayability and multiple clients on the same topic, so we evaluated Kafka, but we determined it was too operationally complex for our needs. Persistence was also unnecessary as the data stream already had a separate archiving system and existing clients only needed about 24hr max of context. AWS Kinesis ended up being simpler for our needs and I have nothing but good things to say about it for the most part. Streaming client support in Elixir was not as good as Kafka but writing our own adapter wasn’t too hard.
AznHisoka
Sidekiq, Sidekiq, Sidekiq (or just Postgres if Im dealing with something trivial)
vanbashan
I prefer pulsar. Elegant modular design and fully open source ecosystem.
Performance is at least as good as Kafka.
For simpler workloads, beanstalkd could be a good fit as well.
atombender
Pulsar's feature set is amazing, but it looks like a beast to operate? Especially compared to lighter-weight systems like NATS or Redpanda.
You need both Bookkeeper and Pulsar, which are both stateful, and both require ZooKeeper. (You can apparently configure Bookkeeper to use Etcd, not sure about Pulsar.) So three applications, each of which has several types of processes that probably demand a dedicated operator if running on Kubernetes.
crmd
The US Federal Reserve uses IBM MQ for the FedNow interbank settlement service that went live last year.
Architecture info: https://explore.fednow.org/resources/technical-overview-guid...
j45
After using more than a few, in 2025 I have been trying to start with Postgres for everything, to minimize so many things.
Database functions can remain independent of stack or programming changes.
Complexity comes on its own; there's often little need to pile it in from the start and tie one's hands early for relatively simple solutions.
The space is confusing to say the least.
Message queues are usually a core part of any distributed architecture, and the options are endless: Kafka, RabbitMQ, NATS, Redis Streams, SQS, ZeroMQ... and then there's the “just use Postgres” camp for simpler use cases.
I’m trying to make sense of the tradeoffs between:
- async fire-and-forget pub/sub vs. sync RPC-like point-to-point communication
- simple FIFO vs. priority queues and delay queues
- intelligent brokers (e.g. RabbitMQ, NATS with filters) vs. minimal brokers (e.g. Kafka’s client-driven model)
There's also a fair amount of ideology/emotional attachment - some folks root for underdogs written in their favorite programming language, others reflexively dismiss anything that's not "enterprise-grade". And of course, vendors are always in the mix trying to steer the conversation toward their own solution.
If you’ve built a production system in the last few years:
1. What queue did you choose?
2. What didn't work out?
3. Where did you regret adding complexity?
4. And if you stuck with a DB-based queue — did it scale?
I’d love to hear war stories, regrets, and opinions.