
Show HN: DuckDB for Kafka Stream Processing


8 comments

· December 8, 2025

Hello Everyone! We built SQLFlow as a lightweight stream processing engine.

We leverage DuckDB as the stream processing engine, which gives SQLFlow the ability to process tens of thousands of messages per second using ~250 MiB of memory!

DuckDB also supports a rich ecosystem of sinks and connectors!

https://sql-flow.com/docs/category/tutorials/

https://github.com/turbolytics/sql-flow
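The core pattern is simple: consume a micro-batch of messages from Kafka, run SQL over it in DuckDB, and write the results to a sink. Here's a rough sketch of that idea in Python (illustrative only, not SQLFlow's actual internals; the broker, topic, and field names are made up):

    import json

    import duckdb
    import pandas as pd
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",  # assumption: local broker
        "group.id": "duckdb-stream-demo",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["input-events"])  # hypothetical topic name

    con = duckdb.connect()  # in-memory DuckDB

    def process_batch(messages):
        # Run a SQL aggregation over one micro-batch of JSON messages.
        rows = [json.loads(m.value()) for m in messages if m.value()]
        if not rows:
            return None
        con.register("batch", pd.DataFrame(rows))
        # Hypothetical event schema: each message has `user_id` and `amount`.
        return con.execute(
            "SELECT user_id, count(*) AS events, sum(amount) AS total "
            "FROM batch GROUP BY user_id"
        ).fetchall()

    while True:
        msgs = consumer.consume(num_messages=500, timeout=1.0)
        good = [m for m in msgs if m.error() is None]
        result = process_batch(good)
        if result:
            print(result)  # a real pipeline would write to a sink here
        if good:
            consumer.commit(asynchronous=False)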

We were tired of running JVMs for simple stream processing, and also of writing bespoke one-off stream processors.

I would love your feedback, criticisms and/or experiences!

Thank you

itsfseven

It would be great if this supported Pulsar too!

srameshc

This looks brilliant, thank you. I love DuckDB and use it for a lot of local data processing jobs. We have a data stream, though not at the size where we need to push to BigQuery or elsewhere. I was thinking of trying something like sql-flow, and I'm glad it makes the job this easy now.

mihevc

How does this compare to https://github.com/Query-farm/tributary ?

rustyconover

The next major release of Tributary will support Avro, Protobuf, and JSON along with the Schema Registry. It will also bring the ability to write to Kafka with transactions.

But really you should get excited for DuckDB Labs to build out materialized views: materialized views where you can ingest more streaming data to incrementally update aggregates. That way you could just keep pushing rows from Kafka through your aggregates.
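Until materialized views land, one way to approximate the idea is to keep a plain aggregate table and upsert each micro-batch into it with DuckDB's ON CONFLICT. A rough sketch (hypothetical table and column names):

    import duckdb

    con = duckdb.connect("aggregates.db")  # hypothetical on-disk database

    con.execute("""
        CREATE TABLE IF NOT EXISTS user_totals (
            user_id BIGINT PRIMARY KEY,
            events  BIGINT,
            total   DOUBLE
        )
    """)

    def merge_batch(con, batch_table):
        # Fold one micro-batch (already loaded into DuckDB) into the running
        # aggregates, instead of recomputing everything from scratch.
        con.execute(f"""
            INSERT INTO user_totals
            SELECT user_id, count(*) AS events, sum(amount) AS total
            FROM {batch_table}
            GROUP BY user_id
            ON CONFLICT (user_id) DO UPDATE SET
                events = user_totals.events + excluded.events,
                total  = user_totals.total  + excluded.total
        """)

    # Demo: a stand-in micro-batch (would normally come from Kafka).
    con.execute("CREATE TABLE batch (user_id BIGINT, amount DOUBLE)")
    con.execute("INSERT INTO batch VALUES (1, 5.0), (1, 2.5), (2, 7.0)")
    merge_batch(con, "batch")
    merge_batch(con, "batch")  # running it again keeps folding new rows in
    print(con.execute("SELECT * FROM user_totals ORDER BY user_id").fetchall())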

It is going to be a POWERHOUSE for streaming analytics.

Contact DuckDB Labs if you want to sponsor the work on materialized views: https://duckdb.org/roadmap

dm03514

Oh yes!! I've seen this a couple of times. I am far from an expert in Tributary, so please take this with a grain of salt.

Based on the Tributary documentation, I understand that Tributary embeds Kafka consumers into DuckDB. This makes DuckDB the main process that you run to perform consumption. I think this makes creating stream processing POCs very accessible; it looks like it is quite easy to start streaming data into DuckDB. What I don't see is a full story around DevOps, operations, testing, configuration as code, etc.

SQLFlow is a service that embeds DuckDB as the storage and processing brains. Because of this, we're able to offer metrics, testing utilities, pipelines as code, and all the other DevOps utilities that are necessary to run a huge number of streaming instances 24x7. I have almost 20 years of experience running high-throughput distributed systems with high uptime, and SQLFlow is built as a tool that I'm comfortable running in production in high-availability contexts :)

mihevc

Nice! Thanks for the context, it's great to know!

mbay

I see an example with what looks like a lookup-type join against a Postgres DB. Are stream/stream joins supported, though?

The DLQ and Prometheus integration out of the box are nice.
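(By lookup-type join I mean enriching each streaming row with a dimension row read from Postgres, roughly like this sketch using DuckDB's postgres extension — the connection string and table names are made up:)

    import duckdb

    con = duckdb.connect()

    # Streaming side: a stand-in for one micro-batch of Kafka events.
    con.execute("CREATE TABLE events (event_id INT, customer_id INT, amount DOUBLE)")
    con.execute("INSERT INTO events VALUES (1, 10, 9.5), (2, 11, 3.0)")

    # Lookup side: attach Postgres through DuckDB's postgres extension.
    con.execute("INSTALL postgres")
    con.execute("LOAD postgres")
    con.execute("ATTACH 'dbname=shop host=localhost user=app' AS pg (TYPE POSTGRES)")

    # Enrich each event with a row from the Postgres `customers` table.
    enriched = con.execute("""
        SELECT e.event_id, e.amount, c.name
        FROM events e
        JOIN pg.public.customers c ON c.id = e.customer_id
    """).fetchall()
    print(enriched)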

dm03514

Stream to stream joins are NOT currently supported. This is a regularly requested feature, and I'll look at prioritizing it.

SQLFlow uses DuckDB internally for windowing and stream state storage :), and I'll look at extending it to support stream/stream joins.
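For a concrete picture, the windowing piece essentially boils down to grouping on a truncated timestamp inside DuckDB, something like this (illustrative sketch, not the exact internal SQL; table and column names are made up):

    import duckdb

    con = duckdb.connect()
    con.execute("CREATE TABLE events (user_id INT, amount DOUBLE, event_time TIMESTAMP)")
    con.execute("""
        INSERT INTO events VALUES
            (1, 5.0, '2025-12-08 10:00:12'),
            (1, 2.5, '2025-12-08 10:00:48'),
            (2, 7.0, '2025-12-08 10:01:05')
    """)

    # One-minute tumbling window: truncate the timestamp and aggregate per window.
    windows = con.execute("""
        SELECT date_trunc('minute', event_time) AS window_start,
               user_id,
               count(*)    AS events,
               sum(amount) AS total
        FROM events
        GROUP BY window_start, user_id
        ORDER BY window_start, user_id
    """).fetchall()
    print(windows)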

Could you describe your use case a bit more? I'd really appreciate it if you could create an issue in the repo describing your use case and desired functionality!

https://github.com/turbolytics/sql-flow/issues

We were looking at solving some of the simpler use cases first before branching out into these more complicated ones :)