A lost decade chasing distributed architectures for data analytics?
116 comments · May 19, 2025
rr808
Ugh, I have joined a big data team. 99% of the feeds are less than a few GB, yet we have to use Scala and Spark. It's so slow to develop and slow to run.
threeseed
a) Scala, being a JVM language, is one of the fastest around. Much faster than, say, Python.
b) How large are the remaining 1% of feeds, and what is the size of the total joined datasets? Because ultimately that is what you build platforms for, not the simple use cases.
rr808
1) Yes, Scala and the JVM are fast. If we could just use that to clean up a feed on a single box, that would be great. The problem is that calling the Spark API creates a lot of complexity for developers and for the runtime platform, which is super slow. 2) Yes, for the few feeds that are a TB we need Spark. The platform really just loads from Hadoop, transforms, then saves back again.
threeseed
a) You can easily run Spark jobs on a single box. Just set executors = 1, or run in local mode (see the sketch below).
b) The reason centralised clusters exist is because you can't have dozens/hundreds of data engineers/scientists all copying company data onto their laptop, causing support headaches because they can't install X library and making productionising impossible. There are bigger concerns than your personal productivity.
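To make point a) concrete, here is a minimal sketch (not the commenters' actual setup; the file names are hypothetical) of PySpark running entirely on one machine via the local[*] master, with no cluster involved:

    # Single-box Spark: driver and executors all live in one local JVM.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master("local[*]")                  # use all cores of the one machine
        .appName("single-box-feed-cleanup")
        .getOrCreate()
    )

    # Hypothetical small feed: read, clean, write back out.
    df = spark.read.csv("feed.csv", header=True, inferSchema=True)
    cleaned = df.dropDuplicates().na.drop()
    cleaned.write.mode("overwrite").parquet("feed_cleaned.parquet")

    spark.stop()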
Larrikin
But can you justify Scala existing at all in 2025? I think it pushed boundaries but ultimately failed as a language worth adopting anymore.
threeseed
Absolutely.
a) It is one of the only languages in which you can write your entire app, i.e. it supports compiling to JavaScript, the JVM and LLVM.
b) It has the only formally proven type system of any language.
c) It is the innovation language. Many of the concepts that are now standard in other languages had their implementation borrowed from Scala. And it is continuing to innovate with libraries like Gears (https://github.com/lampepfl/gears), which does async without function colouring, and compiler additions like resource capabilities.
tsss
Scala is still one of the most powerful languages out there.
tomrod
PySpark is a wrapper, so Scala is unnecessary and boggy.
spark1377485
PySpark is great, except for UDF performance. This gap means that Scala is helpful for some Spark edge cases, like column-level encryption/decryption with UDFs.
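To make the gap concrete, here is a hedged sketch (the column name and masking logic are made up): the row-at-a-time Python UDF ships every value through a Python worker process, which is the overhead that pushes people towards Scala UDFs, while the vectorized pandas UDF is the usual Python-side mitigation:

    # Sketch only: row-at-a-time Python UDF vs. vectorized pandas UDF.
    import pandas as pd
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf, pandas_udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.master("local[*]").getOrCreate()
    df = spark.createDataFrame([("123-45-6789",), ("987-65-4321",)], ["ssn"])

    # Row-at-a-time: each value is serialized to a Python worker and back.
    @udf(returnType=StringType())
    def mask_plain(value):
        return "***-**-" + value[-4:]

    # Vectorized: whole Arrow batches cross the JVM/Python boundary at once.
    @pandas_udf(StringType())
    def mask_vectorized(values: pd.Series) -> pd.Series:
        return "***-**-" + values.str[-4:]

    df.withColumn("masked", mask_vectorized("ssn")).show()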
specialist
After a year, I managed to convince a team to migrate our hottest data set (~1.5m records) from DynamoDB to Redis.
Hot damn, we collectively spent so much time mitigating our misuse & abuse of DynamoDB.
mountainriver
I can’t believe anyone would write Scala at this point
icedchai
I worked at a Scala shop about 15 years ago. It was terrible. Everyone had their own "dialect" of features they used (kinda like C++). The tooling was the worst (Eclipse's Scala plugin was especially awful; IntelliJ's was okay). The compiler was slow.
I'm assuming it's better now?
mountainriver
Yeah, I had a similar experience; it was the worst dev shop I've ever been a part of. I tried it again 5 or so years back. It was a bit better, but there was still a lot of SBT strangeness and unneeded abstractions.
Mortiffer
The R community has been hard at work on small data. I still highly prefer working on in-memory data in R; dplyr and data.table are elegant and fast.
The CRAN packages are all high quality: if the maintainer stops responding to emails for 2 months, your package is automatically removed. Most packages come from university profs who have been doing this their whole career.
wodenokoto
A really big part of an in-memory, dataframe-centric workflow is how easy it is to do one step at a time and inspect the result.
With a database it is difficult to run a query, look at the result and then run a query on the result. To me, that is what is missing in replacing pandas/dplyr/polars with DuckDB.
IanCal
I'm not sure I really follow. You can create new tables for any step if you want to do it entirely within the DB, but you can also just run DuckDB against your dataframes in memory.
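For illustration, a small sketch (made-up table and column names) of the run-a-query-then-query-the-result workflow with DuckDB over an in-memory pandas DataFrame, relying on DuckDB picking up local variables by name:

    # DuckDB querying data that already sits in memory, one inspectable step at a time.
    import duckdb
    import pandas as pd

    events = pd.DataFrame({"customer": ["a", "a", "b"], "amount": [10, 20, 5]})

    step1 = duckdb.sql("SELECT customer, SUM(amount) AS total FROM events GROUP BY customer")
    print(step1)                                               # inspect the intermediate result

    step2 = duckdb.sql("SELECT * FROM step1 WHERE total > 10")  # query the previous result directly
    print(step2.df())                                          # back to pandas when needed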
jgalt212
In R, data sources, intermediate results, and final results are all dataframes (slight simplification). With DuckDB, to have the same consistency you need every layer and step to be a database table, not a data frame, which is awkward for the standard R user and use case.
wodenokoto
You can, but then every step starts with a drop table if exists; insert into …
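If the intermediate steps do need to live inside DuckDB, CREATE OR REPLACE TABLE ... AS collapses the drop-then-insert boilerplate into a single statement. A small sketch with made-up table names:

    # Each step is one idempotent statement instead of a drop + insert pair.
    import duckdb

    con = duckdb.connect()   # in-memory database
    con.sql("CREATE OR REPLACE TABLE raw AS SELECT * FROM range(10) t(i)")
    con.sql("CREATE OR REPLACE TABLE step1 AS SELECT i, i * i AS sq FROM raw")
    print(con.sql("SELECT * FROM step1 WHERE sq > 25"))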
zkmon
A database is not only about disk size and query performance. A database reflects the company's culture, processes, workflows, collaboration, etc. It has an entire ecosystem around it: master data, business processes, transactions, distributed applications, regulatory requirements, resiliency, Ops, reports, tooling, etc.
The role of a database is not just to deliver query performance. It needs to fit into the ecosystem, serve the overall role on multiple facets, deliver on a wide range of expectations - tech and non-tech.
While the useful dataset itself may not outpace the hardware advancements, the ecosystem complexity will definitely outpace any hardware or AI advancements. Overall adaptation to the ecosystem will dictate the database choice, not query performance. Technologies will not operate in isolation.
willvarfar
And it's very much the tech culture at large that influences the company's tech choices. Those techies chasing shiny things and trying to shoehorn them into their job (perhaps cynically to pad their CVs, or perhaps generously thinking it will actually be the right thing to do) have an outsized say in how tech teams think about tech and what they imagine their job is.
Back in 2012 we were just recovering from the everything-is-XML craze and in the middle of the NoSQL craze, and everything was web-scale, distribute-first, microservices, etc.
And now, after all that mess, we have learned to love what came before: namely, please please please just give me SQL! :D
threeseed
Why don't you just quietly use SQL instead of condescendingly lecturing others about how compromised their tech choices are?
NoSQL (e.g. Cassandra, MongoDB) and microservices were invented to solve real-world problems, which is why they are still so heavily used today. And the criticism of them is exactly the same as what was levelled at SQL back in the day.
It's all just tools at the end of the day and there isn't one that works for all use cases.
kukkeliskuu
Around 20 years ago I was working for a database company. During that time, I attended SIGMOD, which is the top conference for databases.
The keynote speaker for the conference was Stonebraker, who started Postgres, among other things. He talked about the history of relational databases.
At that time, XML databases were all the rage -- now nobody remembers them. Stonebraker explained that there was nothing new in hierarchical databases. There was a significant battle in SIGMOD, I think somewhere in the 1980s (I forget the exact time frame), between network databases and relational databases.
The relational databases won that battle, as they have won against each competing hierarchical database technology since.
The reason is that relational databases are based on relational algebra. This has very practical consequences; for example, you can query the data more flexibly.
When you use JSON storage such as MongoDB, once you decide on your root entities you are stuck with that decision. I see very often in practice that new requirements come up that you did not foresee and then need to work around.
I don't care what other people use, however.
hobs
Every person I know who has ever used Cassandra in prod has cursed its name. Mongo lost data for close to a decade, and microservices are mostly NOT used to solve real-world problems but instead as an organizational or technical hammer for which everything is a nail. Hell, there are entire books written on how you should cut people off from each other so they can "naturally" write microservices and hyperscale your company!!
zwnow
No, a database reflects what you make of it. Reports are just queries, after all. I don't know what all the other stuff you named has to do with the database directly. The only purpose of databases is to store and read data; that's what it comes down to. So query performance IS one of the most important metrics.
DonHopkins
You can always make your data bigger without increasing disk space or decreasing performance by making the font size larger!
braza
> History is full of “what if”s, what if something like DuckDB had existed in 2012? The main ingredients were there, vectorized query processing had already been invented in 2005. Would the now somewhat-silly-looking move to distributed systems for data analysis have ever happened?
I like the gist of the article, but the conclusion sounds like 20/20 hindsight.
All the elements were there, and the author nails it, but maybe the right incentive structure wasn't there to create the conditions for it to happen.
Between 2010 and 2015, there was a genuine feeling across almost all of industry that we would converge on massive amounts of data, because until then the industry had never faced such an abundance of data in terms of data capture and the ease of placing sensors everywhere.
The natural step in that scenario is, most of the time, not something like "let's find efficient ways to do it with the same capacity" but instead "let's invest to be able to process this in a distributed manner, independent of the volume we might have."
It's the same thing between OpenAI/ChatGPT and DeepSeek, where one can say that the math was always there, but the first runner was OpenAI with something less efficient but with a different set of incentive structures.
mamcx
It would not have happened. The problem is that people believe their app will be web-scale pretty soon, so they need to solve the problem ASAP.
It's only after being burned many, many times that the need for simplicity arises.
It's the same with NoSQL. Only after suffering through it do you appreciate going back.
i.e.: tools like this circle back only after the pain of a bubble. It can't be done inside it.
gopher_space
> The problem is that people believe their app will be web-scale pretty soon, so they need to solve the problem ASAP.
Investors really wanted to hear about your scaling capabilities, even when it didn't make sense. But the burn rate at places that didn't let a spreadsheet determine scale was insane.
After years working on microservices, I now start planning/discovery with "why isn't this running on a box in the closet" and only accept numerical explanations. Putting a dollar value on excess capacity and labeling it "ad spend" changes perspectives.
twic
This feels like a companion to the classic 2015 paper "Scalability! But at what COST?":
https://www.usenix.org/system/files/conference/hotos15/hotos...
willvarfar
I only retired my 2014 MBP ... last week! It started transiently not booting and then, after just a few weeks, it switched to only transiently booting. Figured it was time. My new laptop is actually a very budget buy, and not a Mac, and in many things a bit slower than the old MBP.
Anyway, the old laptop is about on par with the 'big' VMs that I use for work to analyse really big BQ datasets. My current flow is to do the kind of 0.001% queries that don't fit on a box in BigQuery, massaging things with just enough prepping to make the intermediate result fit on a box. Then I extract that to parquet stored on the VM and do the analysis on the VM using DuckDB from Python notebooks.
DuckDB has revolutionised not what I can do but how I can do it. All the ingredients were around before, but DuckDB brings it together and makes the ergonomics completely different. Life is so much easier with joins and things than trying to do the same in, say, pandas.
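As an illustration of that flow (the file, column, and table names here are invented, not the parent's actual data), DuckDB in a notebook can join the extracted parquet files directly and hand the result back to pandas:

    # Query parquet files on the VM directly, no load step, then drop to pandas.
    import duckdb

    con = duckdb.connect()   # purely in-memory; nothing to administer on the VM
    result = con.sql("""
        SELECT e.user_id, u.country, COUNT(*) AS events
        FROM 'extract/events_*.parquet' e
        JOIN 'extract/users.parquet' u USING (user_id)
        GROUP BY e.user_id, u.country
        ORDER BY events DESC
    """).df()   # hand the result to pandas/matplotlib in the notebook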
Cthulhu_
I still have mine, but it's languishing. I don't know what to do with it or how to get rid of it; it doesn't feel like trash. The Apple stores do returns, but for this one you get nothing; they're just like "yeah, we'll take care of it".
The screen started to delaminate at the edges, and the screen on its follow-up (an MBP with the Touch Bar) is completely broken (probably just the connector cable).
I don't have a use for it, but it feels wasteful just to throw it away.
compiler-devel
I have the same machine and installed Fedora 41 on it. Everything works out of the box, including WiFi and sound.
HPsquared
eBay is pretty active for that kind of thing. Spares/repair.
culebron21
A tangential story. I remember, back in 2010, contemplating the idea of completely distributed DBs inspired by the then-popular torrent technology. In this scheme, a client would not be different from a server, except in the amount of data it holds. And it would probably receive the data in torrent fashion.
What puzzled me was that a client would want others to execute its queries, but not want to load all the data and make queries for the others. And how to prevent conflicting update queries sent to different seeds.
I also thought that Crockford's distributed web idea (where every page is hosted like on torrents) was a good one, even though I didn't think deeply about it.
That was until I saw the discussion on web3, where someone pointed out that uploading any data to one server would make a lot of hosts do the job of hosting a part of it, and every small change would cause tremendous amounts of work for the entire web.
bhouston
I have a large analytics dataset in BigQuery, and I wrote an interactive exploratory UI on top of it; any query I ran generally finished in 2s or less. This led to a very simple app with infinite analytics refinement that was also fast.
I would definitely not trade that for a pre-computed analytics approach. The freedom to explore in real time is enlightening and freeing.
I think you have restricted yourself to precomputed, fixed analytics, but real-time interactive analytics is also an interesting area.
roenxi
> As recently shown, the median scan in Amazon Redshift and Snowflake reads a doable 100 MB of data, and the 99.9-percentile reads less than 300 GB. So the singularity might be closer than we think.
This isn't really saying much. It is a bit like saying the 1-in-1000-year storm levee is overbuilt for 99.9% of storms. They aren't the storms the levee was built for, y'know. It wasn't set up with them close to the top of mind. The database might do 1,000 queries in a day.
The focus for design purposes is really the queries that live out on the tail: can they be done on a smaller database? How much value do they add? What capabilities does the database need to handle them? Etc. That is what should justify a Redshift database. Or you can provision one to hold your 1 TB of data because red things go fast and we all know it :/
capitol_
If you only have 1 TB of data, then you can hold it in RAM on a modern server.
steveBK123
AND even if you have 10 TB of data, NVMe storage is ridiculously fast compared to what disk used to look like (or S3...)
xyzzy_plugh
In the last few years, sure, but certainly not in 2012.
steveBK123
1 TB memory servers weren't THAT exotic even in, say, the 2014-2018 era either; I know because I had a few at work.
Not cheap, but these were at companies with 100s of SWEs / billions in revenue / would eventually have multi-million dollar cloud bills for what little they migrated there.
PaulHoule
You can take a different approach to the 1-in-1000 jobs. Like don't do them, or approximate them. I remember the time I wrote a program that would have taken a century to finish and then developed an approximation that got it done in about 20 minutes.
benterix
> This isn't really saying much.
On the contrary, it's saying a lot, just about sheer data size, that's all. The things you mention may be crucial reasons why Redshift and co. have been chosen (or not: in my org Redshift was used as the standard, so even small datasets were put into it because management wanted to standardize, for better or worse), but the fact remains that if you deal with smaller datasets all of the time, you may want to reconsider the solutions you use.
simlevesque
I'm working on a big research project that uses DuckDB. I need a lot of compute resources to develop my idea, but I don't have a lot of money.
I'm throwing a bottle into the ocean: if anyone has spare compute with good specs that they could lend me for a non-commercial project, it would help me a lot.
My email is in my profile. Thank you.
fulafel
Related in the big-data-benchmarks-on-old-laptop department: https://www.frankmcsherry.org/graph/scalability/cost/2015/01...
This has the same energy as the article "Command-line Tools can be 235x Faster than your Hadoop Cluster" [1]
[1] - https://adamdrake.com/command-line-tools-can-be-235x-faster-...