Kafka at the low end: how bad can it get?

121 comments · February 18, 2025

NovemberWhiskey

Kafka for small message volumes is one of those distinct resume-padding architectural vibes.

hiAndrewQuinn

Apt time to mention the classic "Command-line Tools can be 235x Faster than your Hadoop Cluster", for those who may have not yet read it.

https://adamdrake.com/command-line-tools-can-be-235x-faster-...

kvakerok

You haven't seen the worst of it. We had to implement a whole Kafka module for a SCADA system because Target already had unrelated Kafka infrastructure. Instead of a REST API or anything else sane (which was available), ultra-low-volume messaging is now done by JSON objects wrapped in Kafka. Peak incompetence.

kevinherron

> for a SCADA system

for Ignition?

SteveNuts

Probably Wonderware

Joel_Mckay

We did something similar using RabbitMQ with bson over AMQP, and static message routing. Anecdotally, the design has been very reliable for over 6 years with very little maintenance on that part of the system, handles high-latency connection outage reconciliation, and new instances are cycled into service all the time.

Mostly people that ruminate on naive choices like REST/HTTP2/MQTT will have zero clue how the problems of multiple distributed telemetry sources scale. These kids are generally at another firm by the time their designs hit the service capacity of a few hundred concurrent streams per node, and their fragile reverse-proxy load-balancer CISCO rhetoric starts to catch fire.

Note, I've seen AMQP nodes hit well over 14000 concurrent users per IP without issue, as RabbitMQ/OTP acts like a traffic shock-absorber at the cost of latency. Some engineers get pissy when they can't hammer these systems back into the monad laden state-machines they were trained on, but those people tend to get fired eventually.

Note SCADA systems were mostly designed by engineers, and are about as robust as a vehicular bridge built by a JavaScript programmer.

Anecdotally, I think of Java as being a deprecated student language (one reason to avoid Kafka in new stacks), but it is still a solid choice in many use-cases. Sounds like you might be too smart to work with any team. =3

vips7L

> Anecdotally, I think of Java as being a deprecated student language (one reason to avoid Kafka in new stacks), but it is still a solid choice in many use-cases. Sounds like you might be too smart to work with any team. =3

Honestly from reading this it seems like you’re the one who is too smart to work with any team.

javaunsafe2019

I don’t know why, but I could swear you are German (and old)

atmosx

Oh no!

Let’s be real: teams come to the infra team asking for a queue system. They give their requirements, and you—like a responsible engineer—suggest a more capable queue to handle their needs more efficiently. But no, they want Kafka. Kafka, Kafka, Kafka. Fine. You (meaning an entire team) set up Kafka clusters across three environments, define SLIs, enforce SLOs, make sure everything is production-grade.

Then you look at the actual traffic: 300kb/s in production. And right next to it? A RabbitMQ instance happily chugging along at 200kb/s.

You sit there, questioning every decision that led you to this moment. But infra isn’t the decision-maker. Sometimes, adding unnecessary complexity just makes everyone happier. And no, it’s not just resume-padding… probably.

kyawzazaw

We have way way way less than that in my team. But they don't support anything else.

InDubioProRubio

Then all the guys who requested that stuff quit

dude187

Well duh! They got a kafkaesque promotion using their upgraded resume!

FearNotDaniel

That’s almost certainly true, but at least part of the problem (not just Kafka but RDD tech in general) is that project home pages, comments like this and “Learn X in 24 hours” books/courses rarely spell out how to clearly determine if you have an appropriate use case at an appropriate scale. “Use this because all the cool kids are using it” affects non-tech managers and investors just as much as developers with no architectural nous, and everyone with a SQL connection and an API can believe they have “big data” if they don’t have a clear definition of what big data actually is.

evantbyrne

It really is a red flag dependency. Some orgs need it... Everyone else is just blowing out their development and infrastructure budgets.

bassp

I use Kafka for a low-message-volume use case because it lets my downstream consumers replay messages… but yeah, in most cases it’s overkill

ofrzeta

That was also a use case for me. However at some point I replaced Kafka with Redpanda.

drinker

Isn't redpanda built for the same scale requirements as Kafka?

cheema33

I needed to synchronize some tables between MS SQL Server and PostgreSQL. In the future we will need to add ClickHouse database to the mix. When I last looked, the recommended way to do this was to use Debezium w/Kafka. So that is why we use it. Data volume is low.

If anybody knows of a simpler way to accomplish this, please do let me know.

lijok

We used a binlog reader library for Python, wrapped it in some 50 loc of rudimentary integration code and hosted it on some container somewhere.

Data volume was low though.
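
For reference, a rough sketch of that kind of setup, assuming MySQL and the python-mysql-replication package (the commenter doesn't name a library, and the sync function here is hypothetical):

  from pymysqlreplication import BinLogStreamReader
  from pymysqlreplication.row_event import (
      DeleteRowsEvent, UpdateRowsEvent, WriteRowsEvent,
  )

  stream = BinLogStreamReader(
      connection_settings={"host": "mysql", "port": 3306, "user": "repl", "passwd": "secret"},
      server_id=100,                 # must be unique among replicas
      only_events=[WriteRowsEvent, UpdateRowsEvent, DeleteRowsEvent],
      blocking=True,                 # keep tailing the binlog
      resume_stream=True,
  )

  for event in stream:
      for row in event.rows:
          # inserts carry row["values"]; updates carry
          # row["before_values"] / row["after_values"]
          apply_to_target_db(event.table, row)   # hypothetical ~50 LOC of glue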

tstrimple

Or, as mentioned in the article, you've already got Kafka in place handling a lot of other things but need a small queue as well and were hoping to avoid adding a new technology stack into the mix.

rockwotj

The Kafka protocol is a distributed write-ahead log. If you want a job queue you need to build something on top of that; it’s a pretty low-level primitive.

mumrah

Not for long. An early access version of KIP-932 Queues for Kafka will be released in 4.0 in a few weeks.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-932%3A...

atmosx

Why does everybody keep missing this point? I don’t know.

nobleach

There's a wonderful Kafka Children's book that I always suggest every team I work with read: https://www.gentlydownthe.stream/

The way I describe Kafka is, "an event has transpired... sometimes you care, and choose to take an action based on that event"

The way I describe RabbitMQ is, "there's a new ticket in the lineup... it needs to be grabbed for action or left in the lineup... or discarded"

Definitely not perfect analogies. But they get the point across that Kafka is designed to be reactive and message queues/job queues are meant to be more imperative.

stickfigure

Your two-sentence description is excellent. That book, not so much.

jszymborski

What do people recommend?

Especially for low levels of load, something that doesn't require the dispatcher and consumer to be written in the same language.

stickfigure

Until you hit scale, the database you're already using is fine. If that's Postgres, look up SELECT FOR UPDATE SKIP LOCKED. The major convenience here - aside from operational simplicity - is transactional task enqueueing.
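
A minimal sketch of that pattern, assuming Python with psycopg2 and a hypothetical jobs table (names are illustrative, not from the comment):

  import psycopg2

  conn = psycopg2.connect("dbname=app")

  def work_one_job(handle) -> bool:
      """Claim and process a single job; returns False if the queue is empty."""
      with conn:                                  # one transaction per job
          with conn.cursor() as cur:
              cur.execute(
                  """
                  SELECT id, payload FROM jobs
                  WHERE status = 'queued'
                  ORDER BY submitted_at
                  LIMIT 1
                  FOR UPDATE SKIP LOCKED  -- concurrent workers never block or double-claim
                  """
              )
              row = cur.fetchone()
              if row is None:
                  return False
              job_id, payload = row
              handle(payload)                     # a crash here rolls back; the job stays queued
              cur.execute("UPDATE jobs SET status = 'done' WHERE id = %s", (job_id,))
              return True

The transactional-enqueueing part is the point: if the producer INSERTs into jobs in the same transaction as its business writes, a job can never exist without its data, or vice versa.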

For hosted, SQS or Google Cloud Tasks. Google's approach is push-based (as opposed to pull-based) and is far and above easier to use than any other queueing system.

vrosas

Cloud Tasks is one of the most undervalued tools in the GCP ecosystem, but mostly because PubSub gets all the attention. I've been using it since it was baked into App Engine and love it for 1-to-1 queues or delayed job handling.

kamikaz1k

how do you recommend working with Cloud Tasks?

raw dogging gcloud? Terraform? or something more manageable?

I've been curious for one of my smaller projects, but I am worried about adopting more GCPisms.

boruto

How could I solve the problem of in-order processing based on a key using SKIP LOCKED? Basically, all records with the same key need to be processed one after another.

wmfiv

Work jobs in the order they were submitted within a partition key. The query below selects the next partition key that isn't locked. You could make it smarter by selecting only partition keys where all of the rows are still unlocked.

  -- The subquery picks the lowest partition_key whose row isn't locked by
  -- another worker; the outer query then locks and works that partition's
  -- jobs in submission order.
  SELECT *
  FROM jobs
  WHERE partition_key = (
    SELECT partition_key
    FROM jobs
    ORDER BY partition_key
    LIMIT 1
    FOR UPDATE SKIP LOCKED
  )
  ORDER BY submitted_at
  FOR UPDATE SKIP LOCKED;

crabbone

I'm probably biased, but in the number of cases where I had to work with Kafka, I'd really prefer to simply have an SQL database. In all of those cases I struggled to understand why developers wanted Kafka and what problem it was solving better than the database they already had, and for the life of me, there just wasn't one.

I'm not saying that configuring and deploying databases is easy, but it's probably going to happen anyway. Deploying and configuring Kafka is a huge headache: bad documentation, no testing tools, no way to really understand performance in the light of durability guarantees (which are also obscured by the poor quality documentation). It's just an honestly bad product (from the infra perspective): poor UX, poor design... and worst of all, it's kind of useless from the developer standpoint. Not 100% useless, but whatever it offers can be replaced by other existing tools with a tiny bit of work.

monksy

Famous last words. There are plenty of "database as a queue" antipattern warnings about this.

srhtftw

> Famous last words.

These weren't his last words, but Jim Gray had this to say about this so-called "antipattern".

Queues Are Databases (1995)

Message-oriented-middleware (MOM) has become a small industry. MOM offers queued transaction processing as an advance over pure client-server transaction processing. This note makes four points:

1. Queued transaction processing is less general than direct transaction processing. Queued systems are built on top of direct systems. You cannot build a direct system atop a queued system. It is difficult to build direct, conversational, or distributed transactions atop a queued system.

2. Queues are interesting databases with interesting concurrency control. It is best to build these mechanisms into a standard database system so other applications can use these interesting features.

3. Queue systems need DBMS functionality. Queues need security, configuration, performance monitoring, recovery, and reorganization utilities. Database systems already have these features. A full-function MOM system duplicates these database features.

4. Queue managers are simple TP-monitors managing server pools driven by queues. Database systems are encompassing many server pool features as they evolve to TP-lite systems.

https://arxiv.org/abs/cs/0701158

reval

Why is that an anti-pattern? Databases have added `SKIP LOCKED` and `SELECT FOR UPDATE` to handle these use cases. What are the downsides?

makeitdouble

I suppose you are referring to this:

https://mikehadlow.blogspot.com/2012/04/database-as-queue-an...

The main complaint seems to be that it's not optimal...but then, the frame of the discussion was "Until you hit scale", so IMHO convenience and simpler infra trumps having the absolute most efficient tool at that stage.

golergka

Can you elaborate? I guess it has to do with connection pooling?

ozarker

SQS, Azure Service Bus, RabbitMQ, ActiveMQ, QPID, etc… any message broker that provides the competing consumer pattern. though I’ll say having managed many of these message brokers myself, it’s definitely better paying for a managed service. They’re a nightmare when you start running into problems.

sanex

If you're using .NET I have to plug NServiceBus from Particular (https://particular.net/). It's great at abstracting away the underlying message broker and provides an opinionated way to build a distributed system.

stackskipton

.Net SRE here, please no. Take 5 minutes to learn your messaging bus SDK and messaging system instead of yoloing some library that you don't understand. It's really not that hard.

Also, ServiceControl, ServiceInsight and ServicePulse are inventions of developers who are clearly WinAdmins who don't know what modern DevOps is. If you want to use that, you are bad and should feel bad.

(Sorry, I have absolute rage around this topic)

EDIT: If you insist, use MassTransit (https://masstransit.io/)

rogerthis

I'd wish NATS were more popular. It feels it lacks some real big sponsors $$$.

hhh

Scaling with NATS seems weird. I like what I've seen with others using it, though.

MuffinFlavored

NATS/WebSockets are good for 1 publisher -> many consumer (pubsub)

RabbitMQ is good for 1 producer -> 1 consumer with ack/nack

Right?

esafak

NATS does many-to-many.

Joel_Mckay

Actually, I used RabbitMQ static routes to feed per-CPU-core, single-thread-bound consumers that restart their process every k transactions, or on a watchdog timeout after w seconds. This prevents cross-contamination of memory spaces, and slow fragmentation when the parsers get hammered hard.

RabbitMQ/Erlang on OTP is probably one of the most solid solutions I've deployed over the years (low service-cycle demands). Highly recommended with the AMQP SSL credential certs, and the GUID approach to application-layer load-balancing. Cut our operational costs by a factor of roughly 37 compared to traditional load-balancer approaches. =3

gnfargbl

Pulsar. Works extremely well as both a job queue and a data bus.

We have been using it in this application for half a decade now with no serious issues. I don't understand why it doesn't get more popular attention.

VenturingVole

Pulsar vs Kafka was a significant lesson to me: The "best" technology isn't always the winner.

I put it in quotes because I'm a massive fan of Pulsar and addressing the shortcomings of Kafka. However, with regards to some choices at a former workplace: The broader existing support/integration ecosystem along with Confluent's commercial capabilities won out with regards to technology choices and I was forced to acquiesce.

A bit like Betamax vs VHS, albeit that one pre-dates me significantly.

akshayshah

Even StreamNative is effectively abandoning Pulsar and going all-in on the Kafka protocol. I can see the theoretical benefits of Pulsar, but it just doesn’t seem to have the ecosystem momentum to compete with the Kafka juggernaut.

gnfargbl

The advantages of Pulsar are very much practical, at least for us. Without it we would have to manage two separate messaging systems.

I don't see any evidence of StreamNative abandoning Pulsar at this point. I do see a compatibility layer for the Kafka protocol. That's fine.

kgeist

We use RabbitMQ, and workers simply pull whatever is next in the queue after they finish processing their previous jobs. I’ve never witnessed jobs piling up for a single consumer.

sc68cal

* Redis pub/sub

* Redis streams

* Redis lists (this is what Celery uses when Redis backend is configured)

* RabbitMQ

* ZeroMQ

est

This. If you have really small volume like this article describes, just use Redis.

kitd

Kafka with a different partitioner would have worked fine. The problem was that the web workers loaded up the same partition. Randomising the chosen partition would have removed, or at least alleviated, the stated problem.

jiaaro

Random and round robin partitioning are the configurations being discussed.

The main point of the article is that low message volumes mean you can get unlucky and end up with idle workers when there is still work to be done

voodooEntity

We built an infrastructure with about 6 microservices and Kafka as the main message queue (job queue).

The problem the author describes is 100% true, and if you are scaled with enough workers this can turn out really bad.

While not being the only issue we faced (others are more environment/project-language specific), we got to a point where we decided to switch from Kafka to RabbitMQ.

enether

thankfully early access for KIP-932 is coming in 1-3 weeks as the 4.0.0 release gets published

film42

First time I've heard of KIP-932 and it looks very good. The two biggest issues IMO are finding a good Kafka client in the language you need (even for ruby this is a challenge) and easy at-least-once workers.

You can over-partition and make at-least-once workers happen (if you have a good Kafka client), or you can use an HTTP gateway and give up safe at-least-once. Hopefully this will make it easier to build an at-least-once style gateway that works well across a variety of languages. I know many have tried in the past, but not dropping messages is hard to do right.

akshayshah

Couldn’t agree more - the most exciting thing about KIP-932 is how much easier it’ll become to build a good HTTP push gateway.

Uber wrote a Kafka push gateway years ago, when it was considerably harder to do well: https://www.uber.com/blog/kafka-async-queuing-with-consumer-...

PhilippGille

TFA mentions it in the third paragraph:

> Note: when Queues for Kafka (KIP-932) becomes a thing, a lot of these concerns go away. I look forward to it!

brunoborges

For a small-load queueing system, I had great success with Apache ActiveMQ back in the day. I designed and implemented a system with the goal of triggering SMS for paid content. This was in 2012.

Ultimately, the system was fast enough that the telco company emailed us and asked to slow down our requests because their API was not keeping up.

In short: we had two Apache Camel based apps: one to look at the database for paid content schedule, and queue up the messages (phone number and content). Then, another for triggering the telco company API.

xyst

> Each of these Web workers puts those 4 records onto 4 of the topic’s partitions in a round-robin fashion. And, because they do not coordinate this, they might choose the same 4 partitions, which happen to all land on a single consumer

Then choose a different partitioning strategy. Often key-based partitioning can solve this issue. Worst case, you use a custom partitioning strategy.
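
A minimal sketch of what key-based partitioning looks like on the producer side, assuming Python with the confluent-kafka client (the commenter doesn't specify a client; topic and key names are illustrative):

  from confluent_kafka import Producer

  producer = Producer({"bootstrap.servers": "localhost:9092"})

  def enqueue(job_key: str, payload: bytes) -> None:
      # Messages with the same key hash to the same partition, so per-key
      # ordering is preserved; distinct keys spread load across partitions
      # (and therefore across the consumers in the group).
      producer.produce("jobs", key=job_key, value=payload)

  enqueue("user-42", b'{"action": "send_email"}')
  producer.flush()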

Additionally, why can’t you match the number of consumers in the consumer group to the number of partitions?

The KIP mentioned seems interesting, though. The Kafka folks are trying to make a play towards replacing all of the distributed messaging systems out there. But it does seem a bit complex on the consumer side, and there are probably a few footguns here for newbies to Kafka. [1]

[1] https://cwiki.apache.org/confluence/plugins/servlet/mobile?c...

techcode

What that post describes (all work going to one/few workers) in practice doesn't really happen if you properly randomize the ID of the item/task (e.g. just use a random UUID) when inserting it into Kafka.

With that (and sharding based on that ID/value), all your consumers/workers will get an equal amount of messages/tasks.

Both the post and the general theme of the comments here are trashing the choice of Kafka for low volume.

Interestingly, both ignore other valid reasons/requirements that make Kafka a perfectly good choice despite low volume - e.g.:

- multiple different consumers/workers consuming same messages at their own pace

- needing to rewind/replay messages

- guarantee that all messages related to specific user (think bank transactions in book example of CQRS) will be handled by one pod/consumer, and in consistent order

- needing to chain async processing

And I'm probably forgetting bunch of other use cases.

And yes, even with good sharding, a mix of small/quick tasks and big/long tasks can still lead to non-optimal situations where small/quick work waits for a bigger job to be done.

However - if you have other valid reasons to use Kafka, and it's just this mix of small and big tasks that's making you hesitant... IMHO it's still worth trying Kafka.

Between using bigger buckets (fetching more than one message at a time and handling the work with async/threads/etc.), and Kafka automatically redistributing partitions if some workers are slow... you might be surprised how often it just works.

And sure - you might need to create more than one topic (e.g. light, medium, heavy) so your light work doesn't need to wait for the heavier one.

Finally - I still haven't seen anyone mention the actual deal breakers for Kafka.

Off the top of my head, a big one is that there is no guarantee an item/message is processed only once - even without you manually rewinding/reprocessing it.

It's possible (and common enough) to have situations where a worker picks up a message from Kafka, processes it (writes/materializes/updates something), and only when it's about to commit the Kafka offset (effectively marking it as really done) realizes that Kafka has already rebalanced the partitions and another pod now owns that particular partition.

So if you can't model the items/messages or the rest of the system in a way that can handle such things - say, with versioning you might be able to just ignore/skip work if the underlying materialized data/storage already incorporates it, or maybe the whole thing is fine with INSERT ON DUPLICATE KEY UPDATE - then Kafka is probably not the right solution.
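
A sketch of the kind of versioned, idempotent write being described, assuming Python with psycopg2 against Postgres (table and column names are illustrative):

  UPSERT = """
      INSERT INTO balances (account_id, balance, version)
      VALUES (%(account_id)s, %(balance)s, %(version)s)
      ON CONFLICT (account_id) DO UPDATE
      SET balance = EXCLUDED.balance,
          version = EXCLUDED.version
      WHERE balances.version < EXCLUDED.version   -- stale re-deliveries become no-ops
  """

  def apply_event(cur, event: dict) -> None:
      # Safe to run more than once: a duplicate delivery after a rebalance
      # either fails the version check or inserts nothing new, so double
      # processing does no harm.
      cur.execute(UPSERT, event)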

alexwebr

(Author here)

You say:

> What that post describes (all work going to one/few workers) in practice doesn't really happen if you properly randomize the ID of the item/task (e.g. just use a random UUID) when inserting it into Kafka.

I would love to be wrong about this, but I don't _think_ this changes things. When you have few enough messages, you can still get unlucky and randomly choose the "wrong" partitions. To me, it's a fundamental probability thing - if you roll the dice enough times, it all evens out (high enough message volume), but this article is about what happens when you _don't_ roll the dice enough times.

kod

If it's a fundamental probability thing with randomized partition selection, put the actual probability of what you're describing in the article.

.25^20 is not a "somewhat unlucky sequence of events"

alexwebr

(Author here)

Fair enough. I agree .25^20 is basically infinitesimal, and even with a smaller exponent (like .25^3) the odds are not great, so I appreciate you calling this out.

Flipping this around, though: if you have 4 workers total and 3 are busy with jobs (1 idle), your next job has only a 25% chance of hitting the idle worker. This is what I see the most in practice: not all workers are busy even though there is a backlog.
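
To put a rough number on that, a quick back-of-the-envelope simulation (my own illustration of the low-volume effect, not something from the article):

  import random

  def p_idle_worker(workers=4, jobs=8, trials=100_000):
      """Estimate how often at least one worker gets nothing at all, even
      though there are more jobs than workers (each job lands on a
      uniformly random partition, one partition per worker)."""
      hits = 0
      for _ in range(trials):
          chosen = {random.randrange(workers) for _ in range(jobs)}
          if len(chosen) < workers:   # someone is idle while others queue
              hits += 1
      return hits / trials

  print(p_idle_worker())  # ~0.38 - common, not a freak event

With enough jobs the same probability drops towards zero, which is the "roll the dice enough times" point above.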

techcode

The other thing that's a PITA with Kafka is fail/retry.

If you want to continue processing other/newer messages (and usually you do), you need to commit the Kafka topic offset - leaving you to figure out what to do with the failed message.

One simple thing is just re-inserting it into the same topic (at the end). If it was a transient error, that could be enough.

Instead of the same topic, you can also insert it into a separate failedX Kafka topic (and have that topic processed by a cron-like scheduled task).

And if you need things like progressive back-off before attempting reprocessing - you likely want to push failed items into something else.

While that could be another task system/setup where you can specify how many reprocessing attempts to make, how much time to wait before the next attempt, etc., often it's enough to have a simple DB table.
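
A rough sketch of the "commit and re-route failures" pattern, assuming Python with confluent-kafka (topic names and the handler are illustrative):

  from confluent_kafka import Consumer, Producer

  consumer = Consumer({
      "bootstrap.servers": "localhost:9092",
      "group.id": "workers",
      "enable.auto.commit": False,
      "auto.offset.reset": "earliest",
  })
  producer = Producer({"bootstrap.servers": "localhost:9092"})
  consumer.subscribe(["jobs"])

  while True:
      msg = consumer.poll(1.0)
      if msg is None or msg.error():
          continue
      try:
          handle(msg.value())                    # hypothetical job handler
      except Exception:
          # Don't block the partition: park the failure on a retry topic
          # that a scheduled task (or second consumer) reprocesses later.
          producer.produce("jobs.failed", key=msg.key(), value=msg.value())
          producer.flush()
      consumer.commit(message=msg)               # commit either way so newer messages keep flowing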

araes

Having never actually used this platform before, does anybody know why they named it Kafka, with all the horrible meanings?

Per Wiktionary, Kafkaesque: [1]

1. "Marked by a senseless, disorienting, often menacing complexity."

2. "Marked by surreal distortion and often a sense of looming danger."

3. "In the manner of something written by Franz Kafka." (like the software language was written by Franz Kafka)

Example: Metamorphosis Intro: "One morning, when Gregor Samsa woke from troubled dreams, he found himself transformed in his bed into a horrible vermin. He lay on his armour-like back, and if he lifted his head a little he could see his brown belly, slightly domed and divided by arches into stiff sections. The bedding was hardly able to cover it and seemed ready to slide off any moment. His many legs, pitifully thin compared with the size of the rest of him, waved about helplessly as he looked." [2]

[1] Wiktionary, Kafkaesque: https://en.wiktionary.org/wiki/Kafkaesque

[2] Gutenberg, Metamorphosis: https://www.gutenberg.org/cache/epub/5200/pg5200.txt

snotrockets

It was named based on the idea that, like the author (after whom the term "Kafkaesque" is coined), Apache Kafka is a prolific writer.

kod

Kafka wrote a lot, and destroyed most of what he wrote.

Seems like a good name for a high-volume distributed log that deletes based on retention, not after consumption.

op00to

Jay Kreps liked Kafka’s writing.

denkmoon

Nominative determinism.