What Killed Innovation?
113 comments · March 25, 2025 · janalsncm
marginalia_nu
I'd also argue that even if all else is equal, a flashy visualization is worse than a conventional one, as you generally do not want to draw attention to the presentation if your aim is to convey information.
datadrivenangel
Fun counterpoint: a flashier visualization may be 'better' than one optimized for accurate and efficient information conveyance IF it causes people to spend more time on the visualization. A flashy infographic conveys information less efficiently than a good chart, but if it causes people to engage with it more, you may end up communicating your information to more people.
smrtinsert
I don't have time for cool or innovative visualization. Show me the bar chart and tell me whether higher or lower is better.
Gormo
It's strange to expect continuous innovation in anything where outputs are measurable against clear, stable targets. Once you've achieved the target, what further effort is necessary?
It's like asking what killed innovation in the shape of wheels. The answer is that we're able to make nearly perfect circular wheels, and there is no more optimal shape for a wheel than a circle, so no further gains for new innovations to capture.
NooneAtAll3
I thought the youtube link would be this https://www.youtube.com/watch?v=SwIyd_gsGWA
borgdefenser
I love data visualization, but it very much reminds me of shred guitar playing, something I also used to love very much.
Which non-guitar players are complaining about the lack of innovation in shred guitar playing? It is just not something that non-guitar players really care much about. Good shred vs bad shred is all going to sound the same to the non-guitarist anyway.
0xbadcafebee
Innovation is never constantly increasing. It usually appears in bursts, and stops around the point that humans don't need it as much, or development hits a ceiling of effort. But it's always slowly simmering. Usually it's research or yak-shaving that, after years, suddenly appears as if out of nowhere as a useful product.
I am hopeful that in my lifetime, the web will die. It's such an insanely stupid application platform. An OS on an OS, in a document reader (which, due to humans' ability to go to any lengths to avoid hard work, literally all new network protocols have to be built on top of).
You want cool visualizations? Maybe don't lock yourself into using a goddamn networked document viewer. Native apps can do literally anything. But here we are, the most advanced lifeforms on the planet, trapped in a cage of our own making.
ryandrake
> I am hopeful that in my lifetime, the web will die.
I'd like to see the www go back to its roots as a way to share and browse documents, hyperlinked together. The web worked when it was just documents to render and click on links. It is terrible as an application platform.
It's been 30 years since JavaScript was invented. Imagine what we'd have today, if instead of making the WWW into this half-assed application platform, those 30 years of collective brainpower were instead spent on making a great cross-platform native application development and delivery system!
sunrunner
The web as it was originally conceived - readable (but not interactive) content with linked resources - feels a far cry from the web of today, a platform for interactive applications that seems to grow asymptotically towards feature-parity with native applications (UI, input handling, data processing, hardware access) while never quite getting there, encompassing the fundamental things that make 'applications' work.
If the modern web _did_ reach feature parity in some way, the real question would then be 'What makes it different?'. Since linked resources don't seem like a particularly strong unique feature today, the only other things I can think of are the simpler cross-platform experience and the ease of distribution.
So then the questions are 'What would make for a better cross-platform development experience?' (Chromium embedded framework not included) and 'How do we make app distribution seamless?' Is it feasible or sensible to have users expect to access every application just by visiting a named page and getting the latest version blasted at their browser?
And I guess that's how we got Chrome OS.
0xbadcafebee
> 'What would make for a better cross-platform development experience?'
Back in the day, people balked at using Java as a universal portable application platform, because ironically, everyone wanted their own language and their own platform semantics. Yet everyone on the planet has already unanimously agreed to a cross-platform development platform with a strict set of languages:
- HTML/CSS for presenting text/style/layouts
- Javascript for crafting and organizing the UI
- WebAssembly for compiled code run in a VM
- HTTP for IPC
So right there you have prescribed languages, formats, protocols. We gave up a single simple prescribed system for a more complicated one. However, if you wanted to, you could get rid of the browser, and still support those same things - plus everything else a native app can do.
Separate those components into libraries, then link those libraries into your app, have your app load the libraries & components you want, and configure how the library should handle the presentation, UI, and business logic. You're then basically shipping a custom, streamlined browser - except it doesn't have to obey browser rules! It can run any code you want, because it's your app. It can present any way you want, because it's your app.
But it's also portable! The UI, the business logic, presentation, would all be using libraries ported to multiple platforms. Just recompile your app on the target platform. Avoid native code for an instant recompile, or add native code and deal with the portability tax.
It's sort of like compiling one browser on multiple platforms, except 1) it's broken down into components, 2) you can interchange those components with ones from different vendors that follow the same standards, and 3) you are not limited to these libraries - you can add your own native code.
In terms of interoperability with other apps, use the same network protocols, the same URIs. You can add new ones too, if you prefer, just for your own custom backend for your app... but probably everyone will stick to HTTP, because it's a universal way to access everyone's backend APIs or other native apps across sandboxes.
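As a degenerate little sketch of the "native presentation plus the same universal protocol" part: a plain Python/Tk window that just speaks HTTP to a backend, no browser in the loop (the endpoint URL is a placeholder). The actual proposal above would swap Tk for portable HTML/CSS/JS/Wasm component libraries, but the plumbing is the same shape:

```python
import json
import tkinter as tk
from tkinter import ttk
from urllib.request import urlopen

API_URL = "https://example.com/api/status"  # placeholder backend endpoint

def fetch_status() -> str:
    # Same universal protocol (HTTP) a browser would use, but no browser needed.
    with urlopen(API_URL, timeout=5) as resp:
        return json.load(resp).get("status", "unknown")

root = tk.Tk()
root.title("Native client, HTTP backend")

label = ttk.Label(root, text="No data yet")
label.pack(padx=20, pady=10)

def refresh() -> None:
    # Presentation and UI come from a portable native toolkit; only the data
    # exchange goes over the network.
    try:
        label.config(text=f"Backend says: {fetch_status()}")
    except (OSError, ValueError) as exc:
        label.config(text=f"Request failed: {exc}")

ttk.Button(root, text="Refresh", command=refresh).pack(padx=20, pady=(0, 10))
root.mainloop()
```

Recompile (or here, just re-run) on each target platform and the same code works everywhere the toolkit has been ported.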
bobthepanda
The web gained traction as a development platform because for the most part, it broadly works the same on every device due to the web standards, and so it's very easy to develop something that works consistently on all the different devices. Purists may bemoan that things no longer respect the "native look and feel" but that is a feature, not a bug, for the vast majority of users and developers. As an example, I absolutely hate that my work email on Outlook does not have the same feature set on Windows vs Mac vs whatever, and even in scenarios where application developers want to deliver the same features everywhere the minutiae of the native development patterns make it like herding cats.
It is basically the electrical plug of our era: a means to an end, never mind whether 110V 60Hz is the most efficient way to deliver power to homes in North America.
BrenBarn
> So then the questions are 'What would make for a better cross-platform development experience?' (Chromium embedded framework not included) and 'How do we make app distribution seamless?'
The question for me isn't what would make for a better developer experience, it's what would make for a better user experience. And, personally, as a user, what makes my experience better is when I get to decide how an app looks and works, and the developer's say in that is more limited. That is the big flaw with web apps: too many app authors want to be artists and want to create custom interfaces and lots of shiny gizmos. I want apps to generally be using standardized widgets like menus and buttons whose look and feel is determined by the user's platform preferences and are not a bespoke creation of the app author.
MaxBarraclough
We have JavaFX and Qt, and they're both better than ever, but they don't see much use. With JavaFX you can build and distribute a portable .jar file, and I think it can be used with JNLP/Java Web Start for distribution if you prefer that approach. With Qt, you're likely to be delivering self-contained native application packages in the target platform's native form.
(JavaFX has been carved out of the core JVM, which is annoying, but if the target machine has a JVM installed that bundles JavaFX, you're all set.)
marmarama
I'm not sure you can say that Qt doesn't see much use, when there are hundreds, probably thousands, of well-known commercial apps using it, and it's in millions upon millions of embedded systems. And obviously KDE as well.
It's just that it's largely invisible. It works, it does exactly the job it's meant to do, and it does it well.
FrankDelporte
Indeed JavaFX is better than ever ;-) See https://www.jfx-central.com/ for many example applications, libraries, tutorials, etc.
pjmlp
Because most folks nowadays would rather ship Chrome with their application, and then they complain that Google has taken over the Web.
scarface_74
Native for Windows, Macs, Linux, iPhones and Android devices?
Now imagine trying to update all of those native apps across a large enterprise or multiple large enterprises.
Since I use multiple devices, I also appreciate that when everything is on the web you don't have to worry about syncing or conflict resolution like you do in semi-connected scenarios.
ryandrake
> Native for Windows, Macs, Linux, iPhones and Android devices?
> Now imagine trying to update all of those native apps across a large enterprise or multiple large enterprises.
With the tools we have now, it would absolutely not work. In my post I was imagining a parallel alternate universe where native development tools got all the brainpower and innovation over the last 30 years, instead of the web tools getting it.
feoren
> An OS on an OS, in a document reader
Versus an interpreted language executed in a runtime running on a virtual thread of an OS running on top of a BIOS over a glorified calculator!? Insanity! Whatever happened to good old-fashioned pen and paper!?
There's nothing wrong with the model of delivering your software as a small program that runs in a sandboxed browser environment. WASM, canvas, WebGL -- you can do nearly as much on the web as natively nowadays, with a dead-simple deployment model. One of the only types of programs that's much harder to make as a web application is malware. Calling a modern browser a "networked document reader" is as silly as calling a modern computer a calculator.
collingreen
The DOM seems fair to call a networked document reader. You've suggested a different build target for what would have been native apps - I think you and OP meet in the middle a bit; you get the power of non-html app development. OP laments the overhead of having to shove that into the existing web model designed for documents; you appreciate the sandboxing.
I think you have similar opinions that mostly overlap, regardless of insults about statements being silly.
dullcrisp
You’re assuming that teaching fleshy monkeys to smear their gunk on glorified bathroom tissue was ever a good idea.
billyp-rva
> Native apps can do literally anything.
That's just as much a downside as an upside. You're putting a lot of trust in a native app that you aren't putting in a website.
nullpoint420
What about sandboxed native apps? If the browser can do it, why can't native apps do it as well?
mike_hearn
It's much harder than it looks. I've investigated all this very deeply and should really write a blog post about it.
treyd
We have sandboxing technology on every modern operating system.
mjevans
In HN spirit / guidelines, I'm going to presume the best.
Did you mean "the web (as an application platform) will die" - that we'll once again swing back from the mainframe/thin-client model to powerful local computing platforms?
In the spirit of empowering the user, I too hope the average user once again owns their destiny: the storage, computation, and control of their data. Though I think the web as a publishing medium does empower that user, provided there are open platforms that preserve the ability to choose any fulfillment partner they desire.
ericmcer
The web is just a convention that gained rapid adoption, so now browsers dominate software. As far as conventions go, it is not bad compared to some of the stuff humans have landed on. Better than paving over everything so we can drive and park cars all over, better than everything being single-use and disposable. The web has its ups and downs, but it is decent by our track record.
fumar
I am exploring an alternative browser-like platform concept that would allow for near-native performance. However, established web protocols are hard to overcome.
pphysch
Stagnation in viz design has pretty much nothing to do with the shrinking native<->web capability gap, and the web is here to stay.
pjmlp
Same here. Although most of my UI work is on the Web nowadays, I miss the days when the browser was only for hypertext documents and everything else was done natively with networking protocols and standards.
Hence my hobby coding has nothing to do with Web technologies: the Web pays the bills, other stuff is for fun.
praptak
Even a small amount of data literacy makes you aware that visualizations can deceive. Pie charts make humans overestimate large percentages, a nonzero axis is borderline fraud, and the choice of colors can totally warp color scales.
I think that in this context it is expected for data literacy to make people suspicious of complex visualizations.
garciasn
Data literacy should come down to the data itself, not only the visualization of those data. Sure, pie charts are the bane of Tufte's existence, but even the best visualization of a particular segment of data can be misleading if the underlying data are misrepresented anywhere from collection to analysis.
People should be far more skeptical of what they are fed. Data narratives are often misleading through manipulation of the data, its aggregation, its visualization, and especially its interpretation in context. Data literacy needs to address all of these, not simply how the data are visualized; that's merely the final step in the entire data and information lifecycle.
I’m not saying “do your own research;” instead, folks should think critically about what they’re seeing and attempt to understand what’s presented and put it inside the appropriate context before taking anything at face value that they’re shown, by any organization.
e: just formatting
Analemma_
> nonzero axis is borderline fraud
This is an outrageously reductive meme that has long outstripped its actual usefulness and needs to die. The axis and scale should represent the useful range of values. For example, if your body temperature in Fahrenheit moves more than 5 degrees in either direction, you're having a medical emergency, but on a graph that starts from zero, this would barely be visible. Plotting body temperature from zero would conceal much more than it reveals, which is the opposite of what dataviz is supposed to do.
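A quick matplotlib sketch with made-up readings shows the difference:

```python
import matplotlib.pyplot as plt

hours = list(range(12))
# Hypothetical body-temperature readings in Fahrenheit (made-up data).
temps = [98.6, 98.7, 99.1, 100.2, 101.3, 101.8,
         101.1, 100.4, 99.6, 99.0, 98.8, 98.6]

fig, (ax_zero, ax_useful) = plt.subplots(1, 2, figsize=(9, 3))

ax_zero.plot(hours, temps)
ax_zero.set_ylim(0, 105)        # axis from zero: the fever is barely visible
ax_zero.set_title("Axis from 0°F")

ax_useful.plot(hours, temps)
ax_useful.set_ylim(96, 104)     # clinically useful range: the fever is obvious
ax_useful.set_title("Axis over the useful range")

for ax in (ax_zero, ax_useful):
    ax.set_xlabel("Hour")
    ax.set_ylabel("Body temperature (°F)")

plt.tight_layout()
plt.show()
```

Same data, same honesty; only the right-hand version actually tells you the patient spiked a fever.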
teddyh
The only reasonable zero-value for temperature is 0K, which unfortunately leads to unreadable graphs. (All other temperature scales are completely arbitrary.) So for the specific case of temperatures, it is in fact completely reasonable to have a nonzero axis. But most graphs are not temperatures.
matkoniecz
This is a very rare case where a nonzero axis is justifiable.
Nevertheless, in >99% of the cases where I encounter a nonzero axis, it is misleading.
> The axis and scale should represent the useful range of values
This should not be confused with "the range of values present in the data".
Often the genuinely useful visualization would show that the value barely changed - but that makes for more truthful and boring news, so it is avoided.
Gormo
Any visualization that represents variation around a baseline value should use the baseline value as its axis, whether the baseline is zero or not.
roenxi
Pie charts are just as unreadable for medium and small percentages. They encode values as angles. Human perception is not suited to estimating angles relative to each other.
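A tiny sketch with made-up shares makes the point - the same five numbers as a pie and as a bar chart:

```python
import matplotlib.pyplot as plt

# Made-up shares that differ by only a few points each; which is largest?
labels = ["A", "B", "C", "D", "E"]
shares = [22, 19, 21, 18, 20]

fig, (ax_pie, ax_bar) = plt.subplots(1, 2, figsize=(8, 3))
ax_pie.pie(shares, labels=labels)   # angles: hard to rank by eye
ax_bar.bar(labels, shares)          # lengths on a common baseline: easy to rank
ax_bar.set_ylabel("Share (%)")
plt.tight_layout()
plt.show()
```

Ranking the slices takes real effort; ranking the bars is instant.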
damnitbuilds
Correct title: What Killed Innovation in the Pretty Diagram field?
I keep seeing books with interesting titles like "The evolution of clothing" and then see a subtitle like "In Wisconsin. Between 1985 and 1986."
badc0ffee
"From jean vests to jean jackets"
ralferoo
Just looking at that "512 Paths to the White House" graphic, I'd argue it's more confusing than useful. Why is Florida at the top? Consider the point where it's "Obama has 255 ways" and "Romney has 1 way". What's the point of the massive arrow to Florida and then taking a very specific route to success? This would only make sense if there were a pre-determined order in which the results must come.
The way it's been done in the past in the UK, for instance, is "A needs X more seats to win, B needs Y more seats to win, Z more seats remain". Simple, clear, and no flashy graphics required.
I know the situation in the US is a bit more complicated with different numbers of representatives per state, but it's still not especially useful to prioritise one state over another in the graphic, because what's important is the relative difference between the totals so far received.
I get that there could be some more presentation towards uncalled results and the expected outcome, but it doesn't look like that graph gives that, which would be far more useful than this thing with arrows.
LegionMammal978
> Why is Florida at the top?
As you mention, the number of electors per state varies by quite a bit. E.g., in the 2012 election covered by the chart, Florida had 29 electors, Ohio had 18 electors, and North Carolina had 15 electors, which is why those three states appear at the top.
The most important effect is that (with only some small exceptions) if a candidate wins a simple majority of the votes in a state, then they receive all of that state's electors. E.g., if a candidate wins 50.01% of the Florida vote, they get 29 electors, but if they win 49.99% of the vote, they get 0 electors. See: the 2000 election, where the overall outcome depended on a few hundred votes in this way.
This means there's a lot of focus on 'flipping' states one way or the other, since their electoral votes all come in blocks. What the chart is showing is that if Romney won Florida, he could afford to lose a few other contested states and still win the national election. But if Obama won Florida (as he in fact did), then Romney would need every other state to go his way (very unlikely!) if he still wanted to have a chance.
That is to say, Florida really was extremely important, given the structure of U.S. presidential elections: it would make or break a candidate's whole campaign, regardless of what happened in the rest of the country. And similarly, the remaining states are ordered by decreasing importance.
Of course, while results are being counted, you also see simpler diagrams of the current situation. The classic format is a map of the country with each state colored red or blue depending on which way it flips. This is often accompanied by a horizontal line with a red bar growing from one side, a blue bar growing from the other side, and a line in the middle. But people are interested in which states are more important than others, which creates the imagery of 'paths to win'.
ralferoo
Except, in the extreme example I cited from the article: "Obama has 255 ways" and "Romney has 1 way"
At that point, Romney had to win every remaining state to win. Florida was no more important than any other state that still hadn't declared a result. Whatever one came next would determine the result.
I'd also argue that the point you're making is obscured by this image. There's no way of determining from that image how many electors each state contributes, just lots of arrows and a red/blue outcome. IMHO, how it was actually shown on news programs today is much clearer than what the article is proposing.
LegionMammal978
> At that point, Romney had to win every remaining state to win. Florida was no more important than any other state that still hadn't declared a result. Whatever one came next would determine the result.
When that diagram was created, the election hadn't even started yet, so they didn't know who would actually win Florida. The 1/255 number was contingent on Romney losing Florida (i.e., it was a hypothetical outcome the user could click through). But when they didn't know yet who would win Florida, it was still 76 ways for Romney and 431 ways for Obama.
Anyway, Florida was very important for the result regardless of the chronological order. Suppose that Florida's result was called first, in favor of Obama. Then one more state for Obama would seal the election in his favor, and everyone could call it a day and go home.
On the other end, suppose that Florida's result was called last, and Obama won at least one state beforehand, but not too many. Then everyone would have to wait for Florida in order to know the overall outcome: its result would be absolutely necessary.
> There's no way of determining from that image how many electors each state contributes, just lots of arrows and a red/blue outcome.
Well, if people really wanted to work out the math by hand, they could look it up. But states are called one at a time, so people are naturally interested in 'subsets of states' that can singlehandedly seal the election one way or the other.
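If you want to see where the 512 comes from, here's a rough sketch of the counting. FL/OH/NC match the elector counts above; the other six swing states' numbers are from memory, so treat them as approximate:

```python
from itertools import product

# The nine 2012 swing states from the "512 Paths to the White House" chart.
swing = {"FL": 29, "OH": 18, "NC": 15, "VA": 13, "WI": 10,
         "CO": 9, "IA": 6, "NV": 6, "NH": 4}
OBAMA_BASE, ROMNEY_BASE = 237, 191   # electors from states treated as safe
TOTAL_SWING = sum(swing.values())    # 110; 237 + 191 + 110 = 538

obama_paths = romney_paths = ties = 0
for outcome in product([False, True], repeat=len(swing)):  # 2^9 = 512 paths
    obama_swing = sum(ev for ev, won in zip(swing.values(), outcome) if won)
    obama = OBAMA_BASE + obama_swing
    romney = ROMNEY_BASE + TOTAL_SWING - obama_swing
    if obama >= 270:
        obama_paths += 1
    elif romney >= 270:
        romney_paths += 1
    else:
        ties += 1                    # 269-269

print(obama_paths, romney_paths, ties)
```

Assuming I've remembered the elector counts correctly, this lands on the 431/76 split (plus a handful of 269-269 ties) mentioned above; conditioning the loop on who wins Florida shows why that one state shifts the counts so dramatically.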
phendrenad2
The US news covers the US elections from a really strange angle. They act as though, even as the votes are coming in and there is nothing more the candidates can do to change the outcome, the candidates are still "looking for a path to victory", and they list all of the "paths to victory" that could be possible. As though we're watching them stumble through a dark forest.
lxgr
I had the exact same thought here: https://news.ycombinator.com/item?id=43473149
Really bewildering from an epistemic point of view, even if it's "just a metaphor". (And do people really generally understand it to be just that?)
Ragnarork
I'm not sure about this. Why do we constantly need new ways of presenting data?
My main concern is that data should eventually become easy to read and interpret, especially for people who are not used to it, who are less data- or science-savvy. That it's accessible.
It's good to try to find better ways to present certain cases, but only as far as it's useful. Otherwise I feel consistency is far better than churning out new ways of looking at data that require effort on the consumer's part (no matter how beautiful or well presented they are) to figure out what they want to know.
Innovation for the sake of usefulness is good. Innovation for the sake of innovation feels... definitely not as good (although I wouldn't discard it completely).
nine_k
Have we already achieved the absolute optimal ways to visualize data? Maybe in some simple cases, yes, but not necessarily in all practical cases.
Should new and better ways to visualize data look drastically different from what we're used to? Maybe, but likely not very often. Revolutionary changes are rare, and incremental improvements are important.
Animats
> That was the year I realized I was experiencing scrollytelling fatigue.
She nailed it.
The people who really, really have to look at graphs of numbers all day have a Bloomberg terminal. The graphics are visually unexciting but useful.
Avshalom
What goes unremarked is that while those examples are visually impressive, they're also unhelpful.
rqtwteye
Exactly. I see a lot of graphs and animations that look cool, but when you take a closer look they don't convey much information.
Gormo
Every example in the article suffers from excessive visual complexity and a lack of clarity as to what's being quantified and how values relate to each other.
The best one is the "four ways to slice the budget" visualization, but even that would just have been better as four separate, straightforward charts.
I guess what killed innovation in data visualization is that the innovations were hindering, rather than helping, the purpose of building data visualizations.
fullshark
The economics don't support innovative web visualizations; a slight engagement boost for a day is the entire return on investment. If you're lucky it goes viral on social media, but there are far cheaper ways to accomplish that (e.g. inflammatory rhetoric).
qoez
There was probably a core of 50 people mainly responsible for these (with hundreds of thousands looking on in awed aspiration/inspiration) who have since retired, moved on to other interests, got distracted by politics after 2016, or something similar. It was probably Mike Bostock's departure from the scene in 2017 that was the core catalyst.
tech_ken
The point of data presentation is to distill the most salient trends; an interactive chart where you can zoom in to the lowest granularity of the data basically defeats the purpose of the plot in the first place. Similarly, most animation in charts doesn't really add any meaningful visual information; it's just distracting. I think most consumers of data journalism got pretty bored of scrolling through some massive viz after only a few minutes, and why would they not? People read the news to have the critical points surfaced for them. They don't want to dig through the data themselves (and if they do, they're not going to be satisfied with the prebuilt animation). These kinds of things are, IMO, more fun and interesting to build than to actually learn something from.
jeffreyrogers
When a new technology comes along no one knows what ideas are good and what ideas are bad, so people try a bunch of things and most of them aren't very useful and the few that are become standardized. In the case of UX stuff like visualizations users also learn the grammar of the technology and get used to seeing things done in certain ways, which makes it harder to do things differently.
So basically there's less innovation in data visualization because we mostly figured out how to solve our data visualization problems. If you look at the history of printed visualizations I think you'd find a similar pattern. The only somewhat recent innovation I can think of there is the violin plot, which became possible due to advances in statistics that led to probability distributions becoming more important.
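For anyone who hasn't run into one: a violin plot is essentially a box plot with the box replaced by a smoothed density estimate. A minimal matplotlib sketch with made-up samples:

```python
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
# Two made-up samples with the same mean but very different shapes.
unimodal = rng.normal(loc=0.0, scale=1.0, size=500)
bimodal = np.concatenate([rng.normal(-2, 0.5, 250), rng.normal(2, 0.5, 250)])

fig, ax = plt.subplots(figsize=(5, 3))
ax.violinplot([unimodal, bimodal], showmedians=True)
ax.set_xticks([1, 2])
ax.set_xticklabels(["unimodal", "bimodal"])
ax.set_ylabel("Value")
plt.show()
```

A box plot would show the difference in spread but not in shape; the violin makes the bimodality obvious.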
ARandumGuy
Sometimes something is a "solved" problem. There hasn't been a lot of innovation in, say, firearms, because we pretty much figured out the best way to make a gun ~100 years ago and there isn't much left to improve.
Not everything needs innovation, and trying to innovate anyway just creates a solution in search of a problem.
garciasn
They said the same thing about hash tables. Innovation from a single individual blew away (no pun intended) all prior expectations and established an entirely new baseline understanding.
Just because we THINK we’ve solved the problem doesn’t mean coming at it from an entirely different angle and redefining the entire paradigm won’t pay dividends.
Gormo
Sure, and no one is saying that people should stop experimenting and testing out alternative approaches. But we wouldn't expect to see experimental approaches displacing established conventions in mature use cases unless they actually are major breakthroughs that unambiguously improve the status quo. And in those situations, we'd expect the new innovations to propagate rapidly and quickly integrate into the generally accepted conventions.
But there's obviously going to be something analogous to declining marginal utility when trying to innovate in mature problem spaces. The remaining uncaptured value in the problem space will shrink incrementally with each successive innovation that does solve more of the problem. So the rate at which new innovations propagate into the mainstream will naturally tend to slow, at least until some fundamental change suddenly comes along and modifies the constraints or attainable utility in the context, and the process starts over again.
ARandumGuy
That's true enough. We don't know what we don't know, and there's always the potential for some groundbreaking idea to shake things up. That's why it's important to fund research, even if that research doesn't have obvious practical applications.
But this sort of innovation comes from having an actual solution that makes tangible improvements. It does not come from someone saying "this technology hasn't changed in years, we need to find some way to innovate!" That sort of thinking is how you get stuff like Hyperloop or other boondoggles that suck up a lot of investments without solving any problems.
dredmorbius
What's the history here?
kusokurae
My irrational side really laments where many parts of modern life are in this process and how...standardised things have become. When I look at e.g. old camera designs, they are so much more exciting to see evolve, and offer so many cool variations on "box with a hole and a light sensitive surface in it". Seeing how they experimented and worked out different ways to make an image-making machine with that requirement, I feel like I'm missing out on a period of discovery and interesting development that now is at a well-optimised but comparatively homogenous dead end.
Innovation in data visualization? From a purely utilitarian view, the purpose of data visualization is to present data in a way people can understand it. If you’re constantly changing the method of visualizing the same thing it’s harder to do that. Sometimes a bar chart is best.
As far as cool visualizations go (ones that genuinely are better served by a nonstandard presentation), two recent examples come to mind:
https://youtu.be/TkwXa7Cvfr8 (Especially around 16:56)
https://bbycroft.net/llm