“Streaming vs. Batch” Is a Wrong Dichotomy, and I Think It's Confusing
44 comments · May 14, 2025
fifilura
gopher_space
Or, "How I saved millions a year by introducing one 500ms animation".
monksy
Kafka integrates with AWS Lambdas very easily.
fifilura
You can't do group-bys or joins with Lambdas.
You can only really take one garden gnome, put some paint on it and forward the same gnome.
perrygeo
I think I get the analogy, something like: you can append to the record but everything is still record-based? And what do garden gnomes have to do with it :-)
franktankbank
I'm curious if it's web scale.
layer8
Is that a pro or a con? ;)
efitz
Batch processes IRL tend to “fetch” data, which is nothing like streaming.
For example, I’ve worked with “batch” systems that periodically go do fetches from databases or S3 buckets and then do lots of crunching, before storing the results.
Sometimes batch systems have separate fetchers and only operate against a local store; they’re still batch.
Streaming systems may have local aggregation or clumping in the arriving information; that doesn’t make it a “batch” system. Likewise streaming systems may process more than one work item simultaneously; still not a “batch”.
I associate “batch” more with “schedule” or “periodic” and “fetch”; I associate “stream” with “continuous” and “receiver”.
10000truths
"latency" and "throughput" are only mentioned in passing in the article, but that is really the crux of the whole "streaming vs. batch" thing. You can implement a stream-like thing with small, frequent batches of data, and you can implement a batch-like thing with a large-buffered stream that is infrequently flushed. What matters is how much you prioritize latency over throughput, or vice versa. More importantly, this can be quantified - multiply latency and throughput, and you get buffer/batch size. Congratulations, you've stumbled across Little's Law, one of the fundamental tenets of queuing theory!
binoct
The comments here are really interesting to read since there are so many strongly stated, differing definitions. It’s obvious “streaming” and “batch” have different implications and even meanings in different contexts. Depending on the type of work being done and the system it’s being done with, batch and streaming can be interpreted differently, so what's going on feels like a semantic argument lacking specificity. It’s important to have common and clear terminology, and across the industry these words (like so many in computer science) are not always as clear as we might assume. Part of what makes naming things so difficult.
It does seem to me that push vs. pull are slightly more standardized in usage, which might be what the author is getting at. But even then, depending on what level of abstraction in the system you are concerned with, the concepts can flip.
floating-io
From skimming the article, it seems that this is a munging of the terms in directions that just aren't meaningful.
I've had the following view from the beginning:
- Batches are groups of data with a finite size, delivered at whatever interval you desire (this can be seconds, minutes, hours, days, or years between batches).
- Streaming is when you deliver the data "live", meaning immediately upon generation of that data. There is no defined start or end. There is no buffering or grouping at the transmitter of that data. It's constant. What you do with that data after you receive it (buffering, batching it up, ...) is irrelevant.
JMHO.
bjornsing
The lines blur though when you start keeping state between batches, and a lot of batch processing ends up requiring that (joins, deduplication, etc).
floating-io
No, it really doesn't. The definition of "streaming", to me, can be boiled down to "you send individual data as soon as it's available, without collecting into groups."
Batching is, by definition, the gathering of data records into a collection before you send it. Streaming does not do that, which is the entire point. What happens after transmission occurs, on reception, is entirely irrelevant to whether the data transfer mode is "streaming."
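As a toy sketch of that sender-side distinction (the `send` callable is a hypothetical stand-in for whatever transport is involved):

    def stream_records(records, send):
        # Streaming per the definition above: each record goes out the moment it exists,
        # with no grouping at the transmitter.
        for record in records:
            send(record)

    def batch_records(records, send, batch_size=100):
        # Batching: gather records into a collection first, then send the collection.
        buffer = []
        for record in records:
            buffer.append(record)
            if len(buffer) >= batch_size:
                send(buffer)
                buffer = []
        if buffer:
            send(buffer)  # flush the final, partial group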
leni536
Most streaming does some batching. If you stream audio from a live source, you batch at least into "frames", and you batch into network packets. On top of that you might batch further depending on your requirements, yet I would still count most of it as "streaming".
bjornsing
Isn’t that pretty much exactly what the OP is saying? He just calls it “push” and “pull” instead. Different words, same concepts.
brudgers
Streams have unknown size and may be infinite.
Batches have a known size and are not infinite.
fjdjshsh
Maybe I'm using the wrong definitions, but I think that's backwards.
Say you are receiving records from users at different intervals and you want to eventually store them in a different format in a database.
Streaming to me means you're "pushing" to the database according to some rule. For example, wait and accumulate 10 records to push. This could happen in 1 minute or in 10 hours. You know the size of the dataset (exactly 10 records). (You could also add some max time, and then you'd be combining batching with streaming.)
Batching to me means you're pulling from the database. For example, you pull once every hour. In that hour, you get 0 records or 1000 records. You don't know the size, and it's potentially infinite.
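A rough sketch of those two patterns, where `incoming_records`, `store_batch`, and `fetch_new_records` are hypothetical stand-ins:

    import time

    def push_when_full(incoming_records, store_batch, threshold=10):
        # "Streaming" as described above: push as soon as ten records have
        # accumulated, whether that takes one minute or ten hours.
        buffer = []
        for record in incoming_records:
            buffer.append(record)
            if len(buffer) >= threshold:
                store_batch(buffer)  # size is known: exactly `threshold` records
                buffer = []

    def pull_every_hour(fetch_new_records, store_batch):
        # "Batching" as described above: pull on a schedule and take whatever
        # has accumulated -- 0 records or 1000.
        while True:
            store_batch(fetch_new_records())
            time.sleep(3600)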
setr
It’s because you’re looking at it from opposing ends.
From the perspective of the data source, in a streaming context, the size is finite — it’s whatever you’re sending. From the data sink’s perspective, it’s unknown how many records are going to get sent in total.
Vice versa, in a batch context, the data source has no idea how many records will eventually be requested, but the data sink knows exactly the size of the request.
That is, whoever is initiating the job knows what’s up, and whoever is targeted just has to deal with it.
But generally I believe the norm is to discuss from the sink's perspective, because the main interesting problem is when the sink has to deal with infinity (streaming). When the source deals with infinity (batch), it's fairly straightforward to manage: refuse requests that are too large and move on. The data isn't going anywhere, so the sink can fix itself and re-request. You do that with streaming and data starts getting lost.
brudgers
In part I think that is because the sink can run out of memory, whereas the store has already allocated enough memory.
numbsafari
I work with batch-oriented store-and-forward systems and they definitely push data in batches.
simlevesque
Isn't everything batched? I've built live streaming video and IoT, and it's batches all the way down.
fragmede
Technically yes, at the lowest levels (polled interrupts, anybody?), but there's a material difference (or not, as this blog argues) depending on how they're processed. At one end of the spectrum you have bank records being reconciled at the end of the day. At the other extreme, you're reading individual chunks of video data off disk, not saving them, and chucking them onto the Internet via UDP as fast as the client can handle, where they can be dropped on the floor as necessary; that doesn't really require the same kind of assurances as a day's worth of bank records.
cgio
I know many push batch systems, e.g. all the CSV-type files pushed onto S3 and processed in an event-based pipeline. Even for non-event-based ones, the fact that I schedule a batch does not make a pipeline pull. Pull is when I control the timing AND the query. In my view the dichotomy of stream vs. batch is meaningful. The fact that there are also combinations where a stream is supported by batches does not invalidate the differences.
davery22
I think the article was getting at this at the end - different use cases naturally call for either a point-in-time snapshot (optimally serviced by pull) or a live-updating view (optimally serviced by push). If I am gauging the health of a system, I'll probably want a live view. If I am comparing historical financial reports, snapshot. Note that these are both "read-only" use cases. If I am preparing updates to a dataset, I may well want to work off a snapshot (and when it comes time to commit the changes, compare-and-swap if possible, else pull the latest snapshot and reconcile conflicts). If I am adjusting my trades for market changes, live view again.
If I try to service a snapshot with a push system, I'll have to either buffer an unbounded number of events, discard events, or back-pressure up the source to prevent events from being created. And with push alone, my snapshot would still only be ephemeral; once I open the floodgates and start processing more events, the snapshot is gone.
If I try to service a live view with a pull system, I'll have to either pull infrequently and sacrifice freshness, or pull more frequently and waste time and bandwidth reprocessing unchanged data. And with pull alone, I would still only be chasing freshness; every halving of refresh interval doubles the resource cost, until the system can't keep up.
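A back-of-envelope sketch of that pull-side cost scaling (the cost and interval figures are invented):

    # Each pull reprocesses the full dataset whether or not anything changed,
    # so resource cost scales inversely with the refresh interval.
    cost_per_pull = 1.0  # arbitrary units: one full reprocessing pass

    def hourly_cost(interval_seconds, cost=cost_per_pull):
        return (3600 / interval_seconds) * cost

    print(hourly_cost(60))  # 60.0 units/hour at one-minute freshness
    print(hourly_cost(30))  # 120.0 -- halving the interval doubles the cost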
The complicating real-world factor that this article alludes to is that, historically, push systems lacked the expressiveness to model complex data transformations. (And to be fair, they're up against physical limitations: Many transformations simply require storing the full intermediate dataset in order to compute an incremental update.) So the solution was to either switch wholesale to pull at some point in the pipeline (and try to use caching, change detection, etc to reduce the resource cost and enable more frequent pulling), or, introduce a pulling segment in the pipeline ("windowing" joins, aggregations, etc) and switch back to push after.
It's pretty recent that push systems are attempting to match the expressiveness of pull systems (e.g. Materialize, Readyset), but people are still so used to assuming pull-based compromises, asking questions like "How fresh does this data feed really _need_ to be?". It's analogous to asking "How long does this snapshot really _need_ to last?" - a relevant question to be sure, but maybe doesn't need to be the basis for massive architectural lifts.
kazinator
The opposite of "batch" is "interactive".
A classic "batch job" is one that can be executed without input from a keyboard or output to a display, and therefore can be queues in a batch with other such jobs (perhaps from other programmers).
There is a connection with scripting; batch job control was done with command languages. This is where DOS/Windows "batch files" get their name, and the .BAT suffix.
Grouping transmitted items together (such as bytes into a datagram) is better called aggregation, not to confuse it with "batch job" batching.
Nearly all streaming uses aggregation, other than at the lowest data link and physical layers.
briankelly
> Often times, "Stream vs. Batch" is discussed as if it’s one or the other, but to me this does not make that much sense really.
Just seems like a flawed premise to me since lambda architecture is the context in which streaming for data processing is frequently introduced. The batch vs stream discussion is more about the implementation side - tools or techniques best used for one aren’t best suited for the other since batch processing is usually optimized for throughput and streaming is usually optimized for latency. For example vectorization is useful for the former and code generation is useful for the latter.
mannyv
'Pull' and 'push' make even less sense than 'stream' and 'batch.'
In the old days batch was not realtime and took a while. Imagine printing bank statements, or calculating interest on your accounts at the end of the day. You literally process them all later.
Streaming is processing the records as they arrive, continuously.
IRL you can stream then batch...but normally batch runs at a specific time and chows everything.
"Try it yourself" "very quickly wanted to get real-time streaming for more"
My experience is the opposite.
You think you need streaming, so you "try it out" and build something incredibly complex with Kafka that needs 24/7 maintenance to monitor congestion in every pipeline.
And 10x more expensive because your servers are always up.
And some clever (expensive) engineers who figure out how watermarks, out-of-orderness, and streaming joins really work, and how you can implement them in a parallel way without SQL.
And of course a Renovate bot to upgrade your fancy (but half-baked) framework (Flink) to the latest version.
And you want to tune your logic? Luckily the last 3 hours of data is stored in Kafka, so all you have to do is reset all consumer offsets, clean your pipelines, and restart your job, and the input data will hopefully be almost the same as the last time you ran it. (Compared to changing a parameter and re-running that SQL query.)
When all your business case really needed was a monthly report. And that you can achieve with pub/sub and an SQL query.
In my experience the need for live data rarely comes from a business case, but from a want to see your data live.
And if it indeed comes from a business case, you are still better off prototyping with something simple and see if it really flies before you "try it out".