This is not the future
December 16, 2025 · nostrademons · 273 comments
rmah
The statement "Game theory is inevitable. Because game theory is just math, the study of how independent actors react to incentives." implies that the "actors" are humans. But that's not what game theory assumes.
Game theory just provides a mathematical framework to analyze outcomes of decisions when parts of the system have different goals. Game theory does not claim to predict human behavior (humans make mistakes, are driven by emotion and often have goals outside the "game" in question). Thus game theory is NOT inevitable.
hyperadvanced
Yes, game theory is not a predictive model but an explanatory/general one. Additionally not everything is a game, as in statistics, not everything has a probability curve. They can be applied speculatively to great effect, but they are ultimately abstract models.
CGMthrowaway
A practical formula:
1) Identify coordination failures that lock us into bad equilibria, e.g. it's impossible to defect from the online ads model without losing access to a valuable social graph
2) Look for leverage that rewrites the payoffs for a coalition rather than for one individual: right-to-repair laws, open protocols, interoperable standards, fiduciary duty, reputation systems, etc.
3) Accept that heroic non-participation is not enough. You must engineer a new Schelling point[1] that makes a better alternative the obvious move for a self-interested majority
TLDR, think in terms of the algebra of incentives, not in terms of squeaky wheelism and moral exhortation
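As a toy sketch of the payoff-rewriting idea in point 2 (the numbers, the `interoperable` flag, and the `payoff` function below are illustrative assumptions, not anything from the comment): a lone defector from the ad-funded platform loses the network, so nobody moves, but a rule that rewrites payoffs for everyone at once makes the alternative the obvious move even for a self-interested first mover.

```python
# Toy coordination game with made-up payoff numbers.
def payoff(choice, share_on_alternative, interoperable=False):
    # Value of the network you can still reach, given your choice.
    network = share_on_alternative if choice == "alt" else 1 - share_on_alternative
    if interoperable:
        network = 1.0  # interoperability: the social graph follows you either way
    privacy_bonus = 0.3 if choice == "alt" else 0.0  # assumed benefit of leaving ads behind
    return network + privacy_bonus

# Heroic non-participation today: the lone defector is strictly worse off.
print(payoff("alt", share_on_alternative=0.01))                      # ~0.31
print(payoff("ads", share_on_alternative=0.01))                      # ~0.99
# After the payoffs are rewritten for the whole coalition, defecting dominates.
print(payoff("alt", share_on_alternative=0.01, interoperable=True))  # 1.30
```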
unformedelta
Perhaps you don't intend this, but I read you as implying that game theory's inevitability leads to the inevitability of many of the things the author claims aren't inevitable.
To me, this inevitability only is guaranteed if we assume a framing of non-cooperative game theory with idealized self-interested actors. I think cooperative game theory[1] better models the dynamics of the real world. More important than thinking on the level of individual humans is thinking about the coalitions that have a common interest to resist abusive technology.
jasode
>I think cooperative game theory[1] better models the dynamics of the real world.
If cooperative coalitions to resist undesirable abusive technology models the real world better, why is the world getting more ads? (E.g. One of the author's bullet points was, "Ads are not inevitable.")
Currently in the real world...
- Ad frequency goes up: more ad interruptions in tv shows, native ads embedded in podcasts, sponsor segments in Youtube vids, etc
- Ad spaces go up: ads on refrigerator screens, gas pump touch screens, car infotainment systems, smart TVs, Google Search results, ChatGPT UI, computer-generated virtual ads in sports broadcasts overlaid on courts and stadiums, etc
What is the cooperative coalition that makes "ads not inevitable"?
ToucanLoucan
I'll try and tackle this one. I think the world is getting more ads because Silicon Valley and its Anxiety Economy are putting a thumb on the scale.
For the entirety of the 2010's we had SaaS startups invading every space of software, for a healthy mix of better and worse, and all of them (and a number even today) are running the exact same playbook, boiled down to broad terms: burn investor money to build a massive network-effected platform, and then monetize via attention (some combo of ads, user data, audience reach/targeting). The problem is thus: despite all these firms collecting all this data (and tanking their public trust by both abusing it and leaking it constantly) for years and years, we really still only have ads. We have specifically targeted ads, down to downright abusive metrics if you're inclined and lack a soul or sense of ethics, but they are and remain ads. And each time we get a better targeted ad, the ones that are less targeted go down in value. And on and on it has gone.
Now, don't misunderstand, a bunch of these platforms are still perfectly fine business-wise because they simply show an inexpressible, unimaginable number of ads, and even if they earn shit on each one, if you earn a shit amount of money a trillion times, you'll have billions of dollars. However it has meant that the Internet has calcified into those monolith platforms that can operate that way (Facebook, Instagram, Google, the usuals) and everyone else either gets bought by them or they die. There's no middle-ground.
All of that to say: yes, on balance, we have more ads. However the advertising industry in itself has never been in worse shape. It's now dominated by those massive tech companies to an insane degree. Billboards and other such ads, which were once commonplace are now solely the domain of ambulance chasing lawyers and car dealerships. TV ads are no better, production value has tanked, they look cheaper and shittier than ever, and the products are solely geared to the boomers because they're the only ones still watching broadcast TV. Hell many are straight up shitty VHS replays of ads I saw in the fucking 90's, it's wild. We're now seeing AI video and audio dominate there too.
And going back to tech, the platforms stuff more ads into their products than ever and yet, they're less effective than ever. A lot of younger folks I know don't even bother with an ad-blocker, not because they like them, but simply because they've been scrolling past ads since they were shitting in diapers. It's just the background wallpaper of the Internet to them, and that sounds (and is) dystopian, but the problem is nobody notices the background wallpaper, which means despite the saturation, ads get less attention than ever before. And worse still, the folks who don't block cost those ad companies impressions and resources to serve those ads that are being ignored.
So, to bring this back around: the coalition that makes ads "inevitable" isn’t consumers or creators, it's investors and platforms locked into the same anxiety‑economy business model. Cooperative resistance exists (ad‑blockers, subscription models, cultural fatigue), but it’s dwarfed by the sheer scale of capital propping up attention‑monetization. That’s why we see more ads even as they get less effective.
groby_b
I'll just take the very first example on the list, Internet-enabled beds.
Absolutely a cooperative game - nobody was forced to build them, nobody was forced to finance them, nobody was forced to buy them. These were all willing choices, all going in the same direction. (Same goes for many of the other examples)
_heimdall
Game theory is not inevitable, neither is math. Both are attempts to understand the world around us and predict what is likely to happen next given a certain context.
Weather predictions are just math, for example, and they are always wrong to some degree.
empressplay
Because the models aren't sophisticated enough (yet). There's no voodoo here.
I'm always surprised how many 'logical' tech people shy away from simple determinism, given how obvious a deterministic universe becomes the more time you spend in computer science, and seem to insist there's some sort of metaphysical influence out there somewhere we'll never understand. There's not.
Math is almost the definition of inevitability. Logic doubly so.
Once there's a sophisticated enough human model to decipher our myriad of idiosyncrasies, we will all be relentlessly manipulated, because it is human nature to manipulate others. That future is absolutely inevitable.
Might as well fall into the abyss with open arms and a smile.
CGMthrowaway
>Because the models aren't sophisticated enough (yet). There's no voodoo here.
Idk if that's true.
Navier–Stokes may yet be proven Turing-undecidable, meaning fluid dynamics are chaotic enough that we can never completely forecast them no matter how good our measurement is.
Inside the model, the Navier–Stokes equations have at least one positive Lyapunov exponent. No quantum computer can out-run an exponential once the exponent is positive.
And even if we could measure every molecule with infinitesimal resolution, the atmosphere is an open system injecting randomness faster than we can assimilate it. Probability densities shred into fractal filaments (the butterfly effect), making pointwise prediction meaningless beyond the Lyapunov horizon.
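As a toy illustration of that sensitivity (using the logistic map, a standard chaotic toy system, rather than Navier–Stokes itself):

```python
# Two nearly identical initial conditions diverge exponentially when the
# Lyapunov exponent is positive, so finite measurement error eventually
# swamps the forecast.
x, y = 0.400000, 0.400001  # states differing by one part in a million
r = 3.9                    # logistic-map parameter in its chaotic regime

for step in range(1, 41):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.6f}")
# The separation grows from ~1e-6 to order 1 within a few dozen steps;
# that horizon is where pointwise prediction stops meaning anything.
```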
chunky1994
But the world is not deterministic, inherently so. We know it's probabilistic at least at small enough scales. Most hidden variable theories have been disproven, and to the best of our current understanding the laws of the physical universe are probabilistic in nature (i.e. the Standard Model). So while we can probably come up with a very good probabilistic model of things that can happen, there is no perfect prediction, or rather, there cannot be one.
rando77
There is strong reason to expect evolution to have produced a control system that is complex and ever-changing, for this very reason: so it can't get easily gamed (and eaten).
mirrir
There is voodoo in begging the question though.
nathan_compton
I think it's hubris to believe that you can formulate the correct game-theoretic model to make significant statements about what is and is not inevitable.
Aperocky
I don't think OP claimed that. There is no conflict between not being able to formulate the true model and the existence of that actual model.
Just because we haven't been able to discover all of the laws of physics doesn't mean they don't apply to us.
DarkNova6
Game theory is only as good as the model you are using.
Now couple the fact that most people are terrible at modeling with the fact that they tend to ignore implicit constraints… the result is something resembling religion more than science.
lesuorac
I think you've missed the point.
The concept of Game Theory is inevitable because it's studying an existing phenomenon. Whether or not the researchers of Game Theory correctly model that is irrelevant to whether the phenomenon exists or not.
The models such as Prisoner's Dilemma are not inevitable though. Just because you have two people doesn't mean they're in a dilemma.
---
To rephrase this, Technology is inevitable. A specific instance of it (ex. Generative AI) is not.
wongarsu
In a world ruled by game theory alone marketing is pointless. Everyone already makes the most rational choice and has all the information, so why appeal to their emotions, build brand awareness or even tell them about your products? Yet companies spend a lot of money on marketing, and game theory tells us that they wouldn't do that without reason.
Game theory makes a lot of simplifying assumptions. In the real world most decisions are made under constraints, and you typically lack a lot of information and can't dedicate enough resources to each question to find the optimal choice given the information you have. Game theory is incredibly useful, especially when talking about big, carefully thought out decisions, but it's far from a perfect description of reality
empressplay
> Game theory makes a lot of simplifying assumptions.
It does because it's trying to get across the point that although the world seems impossibly complex it's not. Of course it is in fact _almost_ impossibly complex.
This doesn't mean that it's redundant for more complex situations, it only means that to increase its accuracy you have to deepen the model.
throwaway20174
When they coined the term "influencer" it was a perfect definition for what that is.
It's not a question about any one person who cannot influence the 7.9B. The question is disparity, which is like income disparity.
What happens when fewer and fewer people influence greater and greater numbers? Understanding the risk in that is the point.
nrclark
Game theory is a model that's sometimes accurate. Game theorists often forget that humans are bags of thinking meat, and that our thinking is accomplished by goopy electrochemical processes
Brains can and do make straight-up mistakes all the time. Like "there was a transmission error"-type mistakes. They can't be modeled or predicted, and so humans can never truly be rational actors.
Humans also make irrational decisions all the time based on gut feeling and instinct. Sometimes with reasons that a brain backfills, sometimes not.
People can and do act against their own self-interest all the time, and not for "oh, but they actually thought X" reasons. Brains make unexplainable mistakes. Have you ever walked into a room and forgotten what you went in there to do? That state isn't modelable with game theory, and it generalizes to every aspect of human behavior.
chaseadam17
Agree with OP. This reminds me of fast food in the 90s. Executives rationalized selling poison as "if I don't, someone else will" and they were right until they weren't.
Society develops antibodies to harmful technology but it happens generationally. We're already starting to view TikTok the way we view McDonalds.
But don't throw the baby out with the bath water. Most food innovation is net positive but fast food took it too far. Similarly, most software is net positive, but some apps take it too far.
Perhaps a good indicator of which companies history will view negatively are the ones where there's a high concentration of executives rationalizing their behavior as "it's inevitable."
mat_b
Agree and disagree. It is also possible to take a step back and look at the very large picture and see that these things actually are somewhat inevitable. We do exist in a system where "if I don't do it first, someone else will, and then they will have an advantage" is very real and very powerful. It shapes our world immensely. So, while I understand what the OP is saying, in some ways it's like looking at a river of water and complaining that the water particles are moving in a direction that the levees pushed them. The levees are actually the bigger problem.
chaseadam17
We are the levees in your metaphor and we have agency. The problem is not that one founder does something before another and gains an advantage. The problem is the millions of people who buy or use the harmful thing they create - and that we all have control over. If we continue down this path we'll end up at free will vs determinism and I choose to believe the future is not inevitable.
mat_b
We aren't the real levees though. The system we live in is. Yes, a few people will push back and try to change the momentum to a different direction but that's painful and we have enough going on each day that most people don't have time for that (let alone agree on the direction). Structural change is the only real way to guide the river.
opminion
For any pleasurable activity, there's always somebody taking it too far.
lekevicius
I do disagree that some of these were not inevitable. Let me deconstruct a couple:
> Tiktok is not inevitable.
TikTok the app and company, not inevitable. Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable. Short form video follows gradual escalation of most engaging content formats, with legacy stretching from short-form-text in Twitter, short-form-photo in Instagram and Snapchat. Global content discovery is a natural next experiment after extended follow graph.
> NFTs were not inevitable.
Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
I could deconstruct more, but the broader point is: coordination is hard. All these can be done by anyone: anyone could have invented Ethereum-like system; anyone could have built a non-fungible standard over that. Inevitability comes from the lack of coordination: when anyone can push whatever future they want, a LOT of things become inevitable.
tinco
The author doesn't mean that the technologies weren't inevitable in the absolute sense. They mean that it was not inevitable that anyone should use those technologies. It's not inevitable that they will use Tiktok, and it is not inevitable for anyone; I've never used Tiktok, so the author is right in that regard.
If you disavow short form video as a medium altogether, something I'm strongly considering, then you can. It does mean you have to make sacrifices, for example Youtube doesn't let you disable their short form video feature, so it is inevitable for people who decide they don't want to drop Youtube. That is still a choice though, so it is not truly inevitable.
The larger point is that there are always people pushing some sort of future, sketching it as inevitable. But the reality is that there always remains a choice, even if that choice means you have to make sacrifices.
The author is annoyed at people throwing in the towel and declaring AI is inevitable, when the author apparently still sees a path to not tolerating AI. Unfortunately the author doesn't really constructively show that path, so the whole article is basically a luddite complaint.
Zigurd
This appears to be overthinking it: sure it's inevitable that when zero trust systems are shown to be practicable, they will be explored. But, like a million other ideas that nobody needed to spend time on, selling NFTs should've been relegated to obscurity far earlier than what actually happened.
simgt
Re Tiktok, what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
dj_gitmo
> what is definitely not inevitable is the monetization of human attention. It's only a matter of policy. Without it the incentives to make Tiktok would have been greatly reduced, if even economically possible at all.
This is not a new thing. TV monetizes human attention. Tiktok is just an evolution of TV. And Tiktok comes from China which has a very different society. If short-form algo slop video can thrive in both liberal democracies and a heavily censored society like China, then it's probably somewhat inevitable.
JeremyNT
> Perhaps Bitcoin as proof-of-work productization was not inevitable (for a while), but once we got there, a lot of things were very much inevitable. Explosion of alternatives like with Litecoin, explosion of expressive features, reaching Turing-completeness with Ethereum, "tokens" once we got to Turing-completeness, and then "unique tokens" aka NFTs (but also colored coins in Bitcoin parlance before that). The cultural influence was less inevitable, massive scam and hype was also not inevitable... but to be fair, likely.
The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers. These things don't have any utility otherwise and no reason to exist outside of those.
Scams and fraud are such potent drivers that perhaps it was inevitable, but one could imagine a more competent regulatory regime that nipped this stuff in the bud.
nb: avoiding financial regulations and money laundering are forms of fraud
pbmonster
> The only way I can get to the "crypto is inevitable" take relies on the scams and fraud as the fundamental drivers.
The idea of a cheap, universal, anonymous digital currency itself is old (e.g. eCash and Neuromancer in the '80s, Snow Crash and Cryptonomicon in the '90s).
It was inevitable that someone would try implementing it once the internet was widespread - especially as long as most banks are rent-seeking actors exploiting those relying on currency exchanges, as long as many national currencies are directly tied to failing political and economic systems, and as long as the un-banking and financial persecution of undesirables is a threat.
Doing it so extremely decentralized and with the whole proof-of-work shtick tacked on top was not inevitable and arguably not a good way to do it, nor was the cancer that has grown on top of it all...
epidemiology
I think you could say it's inevitable because of the size of both the good AND bad opportunities. Agree with you and the original point of the article that there COULD be a better way. We are reaping tons of bad outcomes across social media, crypto, AI, due to poor leadership(from every side really).
Imagine new coordination technology X. We can remove any specific tech reference to remove prior biases. Say it is a neutral technology that could enable new types of positive coordination as well as negative.
3 camps exist.
A: The grifters. They see the opportunity to exploit and individually gain.
B: The haters. They see the grifters and denigrate the technology entirely. Leaving no nuance or possibility for understanding the positive potential.
C: The believers. They see the grift and the positive opportunity. They try and steer the technology towards the positive and away from the negative.
The basic formula for where the technology ends up is -2(A)-(B) +C. It's a bit of a broad strokes brush but you can probably guess where to bin our current political parties into these negative categories. We need leadership which can identify and understand the positive outcomes and push us towards those directions. I see very little strength anywhere from the tech leaders to politicians to the social media mob to get us there. For that, we all suffer.
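Taken literally, the comment's back-of-the-envelope formula can be read as a toy score (the headcounts below are made up purely for illustration):

```python
# Grifters (A) weigh doubly against the outcome, haters (B) once against,
# believers (C) once in favor.
def tech_outcome(grifters, haters, believers):
    return -2 * grifters - haters + believers

print(tech_outcome(grifters=10, haters=30, believers=40))  # -10: net drag
print(tech_outcome(grifters=5,  haters=20, believers=70))  #  40: net push toward the positive
```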
jobs_throwaway
> These things don't have any utility otherwise and no reason to exist outside of those.
Lol. Permissionless payments certainly have utility. Making it harder for governments to freeze/seize your assets has utility. Buying stuff the government disallows, often illegitimately, has value. Currency that can't be inflated has value.
And outside of pure utility, they have tons of ideological reasons to exist outside scams and fraud. Your inability to imagine those, or your dismissal of them, is telling as to your close-mindedness.
empressplay
It was all inevitable, by definition, as we live in a deterministic universe (shock, I know!)
But further, the human condition has been developing for tens of thousands of years, and efforts to exploit the human condition for a couple of thousand (at least) and so we expect that a technology around for a fraction of that would escape all of the inevitable 'abuses' of it?
What we need to focus on is mitigation, not lament that people do what people do.
lern_too_spel
The point is that regulation could have made Bitcoin and NFTs never cause the harm they have inflicted and will inflict, but the political will is not there.
croes
> Short form video as the medium, and algorithm that samples entire catalog (vs just followers) were inevitable.
I doubt that. There is a reason the videos get longer again.
So people could have ignored the short form from the beginning. And wasn't the matching algorithm the real killer feature that amazed people, not the length of the videos?
nostrademons
I've got a hypothesis that the reason short-form video like TikTok became dominant is because of the decline in reading instruction (eg. usage of whole-word instruction over phonics) that started in 1998-2000. The timing largely lines up: the rise of video content started around 2013, just as these kids were entering their teenage years. Media has significant economies of scale and network effects (i.e. it is much more profitable to target the lowest common denominator than any niche group), and so if you get a large number of teenagers who have difficulty with reading, media will adjust to provide them content that they can consume effortlessly.
Anecdotally, I hear lots of people talking about the short attention span of Zoomers and Gen Alpha (which they define as 2012+; I'd actually shift the generation boundary to 2017+ for the reasons I'm about to mention). I don't see that with my kid's 2nd-grade classmates: many of them walk around with their nose in a book and will finish whole novels. They're the first class after phonics was reintroduced in the 2023-2024 kindergarten year; every single kid knew how to read by the end of kindergarten. Basic fluency in skills like reading and math matters.
nasmorn
My counter argument is that this did not happen in the Austrian school system, and people consume short form video just the same.
asveikau
I recognize this is very anecdotal (your observation and mine), but my gen alpha daughter approaching the teenage phase always has her head in a book. She also has a very short attention span.
Sharlin
That’s ridiculously US-centric. TikTok is a global phenomenon initiated by a Chinese company. Nothing would be different in the grand scale if there were zero American TikTok users.
lekevicius
Even if that's true, that sub-minute videos are not the apex content, that only goes to prove inevitability. Every idea will be tested and measured; the best-performing ones will survive. There can't be any coordination or consensus like "we shouldn't have that" - the only signal is, "is this still the most performant medium + algorithm mix?"
dripdry45
I feel that the argument here hinges on “performant”
The regulatory, cultural, social, even educational factors surrounding these ideas are what could have made these not inevitable. But changes weren’t made, as there was no power strong enough to enact something meaningful.
vhcr
Have you seen YouTube's front-page? It's pretty much 20-40min videos full of fluff.
BloondAndDoom
I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously.
However, tech people who think AI is bad, or not inevitable, are really hard to understand. It's almost like Bill Gates saying "we are not interested in the internet". This is pretty much being against the internet, industrialization, the printing press or mobile phones. The idea that AI is anything less than paradigm shifting, or even revolutionary, is weird to me. I can only say that being against this is either self-interest or not being able to grasp it.
So if I produce something art, product, game, book and if it’s good, and if it’s useful to you, fun to you, beautiful to you and you cannot really determine whether it’s AI. Does it matter? Like how does it matter? Is it because they “stole” all the art in the world. But somehow if a person “influenced” by people, ideas, art in less efficient way almost we applaud that because what else, invent the wheel again forever?
dbspin
> I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously.
Running this paragraph through Gemini returns a list of the fallacies employed, including Attacking the Motive: "Even if the artists are motivated by self-interest, this does not automatically make their arguments about AI's negative impacts factually incorrect or 'bad'."
Just as a poor person is more aware, through direct observation and experience, of the consequences of corporate capitalism and financialisation; an artist at the coal face of the restructuring of the creative economy by massive 'IP owners' and IP Pirates (i.e.: the companies training on their creative work without permission) is likely far more in touch with the consequences of actually existing AI than a tech worker who is massively financially incentivised to view them benignly.
> The idea that AI is anything less than paradigm shifting, or even revolutionary is weird to me.
This is a strange kind of anti-naturalistic fallacy. A paradigm shift is not in itself a good thing. One paradigm shift that has occurred recently in geopolitics, for example, is the normalisation of state murder - i.e.: extrajudicial assassination in the drone war or the current US govt's use of missile attacks on alleged drug traffickers. One can generate countless other negative paradigm shifts.
> if I produce something art, product, game, book and if it’s good, and if it’s useful to you, fun to you, beautiful to you and you cannot really determine whether it’s AI. Does it matter?
1) You haven't produced it.
2) Such a thing - a beautiful product of AI that is not identifiably artificial - does not yet, and may never exist.
3) Scare quotes around intellectual property theft aren't an argument. We can abandon IP rights - in which case hurrah, tech companies have none, or respect them. Anything else is legally and morally incoherent self justification.
4) Do you actually know anything about the history of art, any genre of it whatsoever? Because suggesting originality is impossible and 'efficiency' of production is the only form of artistic progress suggests otherwise.
rng-concern
Apologies, but I'm copy/pasting a previous reply of mine to a similar sentiment:
Art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back me and my friends played a lot of starcraft. We only played cooperatively against the AI. Until one day me and a friend decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said "man, that was intense!".
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
fennecfoxy
>Art is an expression of human emotion
Yet humans are the ones enacting an AI for art (of some kind). Is it therefore not art because, even though a human initiated the process, the machine completed it?
If you argue that, then what about kinetic sculptures, what about pendulum painting, etc? The artist sets them in motion but the rest of the actions are carried out by something nonhuman.
And even in a fully autonomous sense; who are we to define art as being artefacts of human emotion? How typically human (tribalism). What's to say that an alien species doesn't exist, somewhere...out there. If that species produces something akin to art, but they never evolved the chemical reactions that we call emotions...I suppose it's not art by your definition?
And what if that alien species is not carbon based? Is it therefore much of a stretch to call what an eventual AGI produces art?
My definition of art is a superposition where everything and nothing is art at the same time, because art is in the eye of the beholder. When I look up at the night sky, that's art, but no human emotion produced that.
mghackerlady
With a kinetic sculpture, someone went through the effort to design it to do that. With AI art, sure, you ask it to do something, but a human isn't involved in the creative process in any capacity beyond that.
javier123454321
Sure, but in the case of AI it resembles the relationship of a patron to an art director. We generally don't assign artistry to the person hiring an art director to create artistic output, even if it requires heavy prompting and back and forth. I am not bold enough to try to encompass something as large and fundamental as art into a definition, though I suppose that art does carry something about the craft of using the medium.
At any rate, though there is some aversion to AI art for art's sake, the real aversion to AI art is that it squeezes one of the last viable options for people to become 'working artists' and funnels that extremely hard-earned profit into the hands of the conglomerates that have enough compute to train generative models. Is making a living through your art something that we would like to value and maintain as a society? I'd say so.
BloondAndDoom
No doubt, but if your Starcraft experience against AI was "somehow" exactly the same, gave you the same joy, and you could not even say whether it was AI or another player, does that matter? I get this is kind of a Truman Show-ish scenario, but does it really matter? If the end results are the same, does it still matter? If it does, why? I get the emotional aspect of it, but in practice you wouldn't even know. Now, is AI at that point for any of these? Possibly not. We can tell it's AI right now in many interactions and art forms, because it's hollow, and it's just "perfectly mediocre".
It's kind of the sci-fi cliche, can you have feelings for an AI robot? If you can what does that mean.
herpdyderp
On the other hand, I prefer playing video games against AI because human skill disparity almost always ruins PvP. Though really I simply prefer co-op.
hklgny
I actually think this is the same point as the one the person you're responding to is making. If the human vs AI factor didn't matter, you wouldn't care whether it was a human or an AI in your co-op. The differences are subtle but meaningful and will always play a role in how we choose experiences.
imperio59
So are photos that are edited via Photoshop not art? Are they not art if they were taken on a digital camera? What about electronic music?
You could argue all these things are not art because they used technology, just like AI music or images... no? Where does the spectrum of "true art" begin and end?
mghackerlady
They aren't arguing against technology, they're saying that a person didn't really make anything. With photoshop, those are tools that can aid in art. With AI, there isn't any creative process beyond thinking up a concept and having it appear. We don't call people who commission art artists, because they asked someone else to use their creativity to realise an idea. Even there, the artist still put in creative effort into the composition, the elements, the things you study in art appreciation classes. Art isn't just aesthetically pleasing things, it has meaning and effort put into it
shadowgovt
I think your view makes sense. On the other hand, Flash revolutionized animation online by allowing artists to express their ideas without having to exhaustively render every single frame, thanks to algorithmic tweening. And yeah, the resulting quality was lower than what Disney or Dreamworks could do. But the ten thousand flowers that bloomed because a wall came down for people with ideas but not time utterly redefined huge swaths of the cultural zeitgeist in a few short years.
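As a rough sketch of what that kind of algorithmic tweening does (the `tween` function and the coordinates below are illustrative, not Flash's actual API):

```python
# The artist supplies two keyframes and the program fills in every frame
# between them, so nothing has to be drawn by hand frame-by-frame.
def tween(start, end, frames):
    """Yield linearly interpolated (x, y) positions between two keyframes."""
    (sx, sy), (ex, ey) = start, end
    for i in range(frames + 1):
        t = i / frames  # normalized time from 0.0 to 1.0
        yield (sx + (ex - sx) * t, sy + (ey - sy) * t)

# A shape sliding from (0, 0) to (120, 60) over 24 frames of animation:
for x, y in tween((0, 0), (120, 60), 24):
    pass  # hand each interpolated position to the renderer
```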
I strongly suspect automatic content synthesis will have a similar effect as people get their legs under how to use it, because I strongly suspect there are even more people out there with more ideas than time.
I hear the complaints about AI being "weird" or "gross" now and I think about the complaints about Newgrounds content back in the day.
acedTrex
> Does it matter? Like how does it matter?
It matters because the amount of influence something has on you is directly attributable to the amount of human effort put into it. When that effort is removed, so too is the influence. Influence does not exist independently of effort.
All the people yapping about LLMs keep fundamentally not grasping that concept. They think that output exists in a pure functional vacuum.
shayway
I see what you're getting at, but I think a better framing would be: there's an implicit understanding amongst humans that, in the case of things ostensibly human-created, a human found it worth creating. If someone put in the effort to write something, it's because they believed it worth reading. It's part of the social contract that makes it seem worth reading a book or listening to a lecture even if you don't receive any value from the first word.
LLMs and AI art flip this around because potentially very little effort went into making things that potentially take lots of effort to experience and digest. That doesn't inherently mean they're not valuable, but it does mean there's no guarantee that at least one other person out there found it valuable. Even pre-AI it wasn't an iron-clad guarantee of course -- copy-writing, blogspam, and astroturfing existed long before LLMs. But everyone hates those because they prey on the same social contract that LLMs do, just at a smaller scale, and with a lower effort-in:effort-out ratio.
IMO though, while AI enables malicious / selfish / otherwise anti-social behavior at an unprecedented scale, it also enables some pretty cool stuff and new creative potential. Focusing on the tech rather than those using it to harm others is barking up the wrong tree. It's looking for a technical solution to a social problem.
acedTrex
> there's an implicit understanding amongst humans that, in the case of things ostensibly human-created, a human found it worth creating
Yep, this is the current understanding that is being hard challenged by LLMs.
Cthulhu_
I don't know if I'm misinterpreting the word "influence", but low-effort internet memes have a lot more cultural impact than a lot of high-effort art. Also there's botnets, which influence political voting behaviour.
javier123454321
> low-effort internet memes have a lot more cultural impact than a lot of high-effort art.
Memes only have impact in aggregate due to emergent properties in a McLuhanian sense. An individual meme has little to no impact compared to (some) works of art.
mirekrusin
Maybe they're just trying to say that robo taxis look like the future, not bike taxis?
blastro
the words themselves have influence, regardless of who spoke them.
ariedro
Well, the LLMs were trained with data that required human effort to write, it's not just random noise. So the result they can give is, indirectly and probabilistically regurgitated, human effort.
entaloneralie
Paying infrastructure costs for our little art community, having a chatbot crawl our servers while ignoring robots.txt, mining the work of our users so it can make copies, and being told that I just don't get it because this is such a paradigm shift, is pretty great.
aljgz
Big tech senior software engineer working on a major AI product speaking:
I totally agree with the message in the original post. Yes, AI is going to be everywhere, and it's going to create amazing value and serious challenges, but it's essential to make it optional.
This is not only for the sake of users' freedom. This is essential for companies creating products.
This is minority report, until it is not.
AI has many modes of failure, exploitability, and unpredictability. Some are known and many are not. We have fixes for some, and band-aids for some others, but many are not even known yet.
It is essential to make AI optional, to have a "dumb" alternative to everything delegated to a Gen AI.
These options should be given to users, but also, and maybe even more importantly, be baked into the product as an actively maintained and tested plan-b.
The general trend of cost cutting will not be aligned with this. Many products will remove, intentionally or not, the non-AI paths, and when the AI fails (not if), they will regret this decision.
This is not a criticism of AI or of the shift in trends toward it; it's a warning for anyone who does not take seriously the fundamental unpredictability of generative AI.
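A minimal sketch of that "actively maintained plan B" pattern (the function names and the summarization example are hypothetical, not from any real product):

```python
# The generative path is optional, and a dumb deterministic path is always
# present and always exercised.
import logging

def summarize_with_llm(text: str) -> str:
    # Placeholder for a generative call that can time out, fail, or hallucinate.
    raise TimeoutError("model unavailable")

def summarize_dumb(text: str, max_chars: int = 200) -> str:
    # Deterministic fallback: the first sentence, hard-truncated.
    return text.split(". ")[0][:max_chars]

def summarize(text: str, use_ai: bool = True) -> str:
    if use_ai:  # the user-facing opt-out argued for above
        try:
            return summarize_with_llm(text)
        except Exception as exc:
            logging.warning("generative path failed, falling back: %s", exc)
    return summarize_dumb(text)

print(summarize("The plan B path must be tested continuously. It gets used the day the AI fails."))
```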
bigbuppo
When people talk about AI, they aren't talking about the algorithms and models. They're talking about the business. If you can honestly stand up and look at the way the AI companies and related businesses are operating and not feel at least a little unease, you're probably Sam Altman.
wasteofelectron
> I understand artists etc. Talking about AI in a negative sense, because they don’t really get it completely, or just it’s against their self interest which means they find bad arguments to support their own interest subconsciously.
This is an extremely crude characterisation of what many people feel. Plenty of artists oppose copyright-ignoring generative AI and "get" it perfectly, even use it in art, but in ways that avoid the lazy gold-rush mentality we're seeing now.
phreezie
> Does it matter? Like how does it matter?
Yes, it matters to me because art is something deeply human, and I don't want to consume art made by a machine.
It doesn't matter if it's fun and beautiful, it's just that I don't want to. It's like other things in life I try to avoid, like buying sneakers made by children, or signing up for anything Meta-owned.
wccrawford
That's pretty much what they said about photographs at first. I don't think you'll find a lot of people who argue that there's no art in photography now.
Asking a machine to draw a picture and then making no changes? It's still art. There was a human designing the original input. There was human intention.
And that's before they continue to use the AI tools to modify the art to better match their intention and vision.
jgalecki
This post rhymes with a great quote from Joseph Weizenbaum:
"The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it. But, in fact, there are actors!"
mason_mpls
Perhaps we need more collective action & coordination?
I don’t see how we could politically undermine these systems, but we could all do more to contribute to open source workarounds.
We could contribute more to smart tv/e-reader/phone & tablet jailbreak ecosystems. We could contribute more to the fediverse projects. We could all contribute more to make Linux more user friendly.
__MatrixMan__
I admire volunteer work, but I don't think we should focus too hard on paths forward that summarize to "the volunteers need to work harder". If we like what they're doing we should find ways to make it more likely to happen.
For instance, we could forbid taxpayer money from being spent on proprietary software and on hardware that is insufficiently respectful of its user, and we could require that 50% of the money saved on the now-forbidden software be spent on sponsorships of open source contributors whose work is likely to improve the quality of whatever open alternatives are relevant.
Getting Microsoft and Google out of education would be huge re: denormalizing accepting eulas and letting strangers host things you rely on.
France and Germany are investing in open source (https://chipp.in/news/france-and-germany-launch-docs-an-open...), though perhaps not as aggressively as I've proposed. Let's join them.
gwbas1c
I do think AI involvement in programming is inevitable; but at this time a lot of the resistance is because AI programming currently is not the best tool for many jobs.
To better the analogy: I have a wood stove in my living room, and when it's exceptionally cold, I enjoy using it. I don't "enjoy" stacking wood in the fall, but I'm a lazy nerd, so I appreciate the exercise. That being said, my house has central heating via a modern heat pump, and I won't go back to using wood as my primary heat source. Burning wood is purely for pleasure, and an insurance policy in case of a power outage or malfunction.
What does this have to do with AI programming? I like to think that early central heating systems were unreliable, and often it was just easier to light a fire. But, it hasn't been like that in most of our lifetimes. I suspect that within a decade, AI programming will be "good enough" for most of what we do, and programming without it will be like burning wood: Something we do for pleasure, and something that we need to do for the occasional cases where AI doesn't work.
delichon
For you it's "purely for pleasure," for me it's for money, health and fire protection. I heat my home with my wood stove to bypass about $1,500/year in propane costs, to get exercise (and pleasure) out of cutting and splitting the wood, and to reduce the fuel load around my home. If those reasons went away I'd stop.
That's a good metaphor for the rapid growth of AI. It is driven by real needs from multiple directions. For it to become evitable, it would take coercion or the removal of multiple genuine motivators. People who think we can just say no must be getting a lot less value from it than me day to day.
jasonshen
You may be saving money but wood smoke is very much harmful to your lungs and heart according to the American Lung and American Heart Associations + the EPA. There's a good reason why we've adopted modern heating technologies. They may have other problems but particulate pollution is not one of them.
> For people with underlying heart disease, a 2017 study in the journal Environmental Research linked increased particulate air pollution from wood smoke and other sources to inflammation and clotting, which can predict heart attacks and other heart problems.
> A 2013 study in the journal Particle and Fibre Toxicology found exposure to wood smoke causes the arteries to become stiffer, which raises the risk of dangerous cardiac events. For pregnant women, a 2019 study in Environmental Research connected wood smoke exposure to a higher risk of hypertensive disorders of pregnancy, which include preeclampsia and gestational high blood pressure.
https://www.heart.org/en/news/2019/12/13/lovely-but-dangerou...
delichon
I acknowledge that risk. But I think it is outweighed by the savings, exercise and reduced fire danger. And I shouldn't discount the value to me of living in light clothing in winter when I burn wood, but heavily dressed to save money when burning propane. To stop me you'd have to compel me.
This is not a small thing for me. By burning wood instead of gas I gain a full week of groceries per month all year!
I acknowledge the risk of AI too, including human extinction. Weighing that, I still use it heavily. To stop me you'd have to compel me.
xandrius
I'm already seeing this and it's only a few years old.
I like the metaphor of burning wood, I also think it's going to be left for fun.
smileson2
Where is that mentioned in the article?
Juliate
If it were this inevitable, would AI be pushed down our throats, against our will, against our own laws, even, so hard?
max51
Because the ones pushing it down your throats are trying to capture the entire market and get you to adopt their AI instead of a competitor.
ben_w
The industrial revolution was pushed down the throats of a lot of people who were sufficiently upset by the developments that they invented communism, universal suffrage*, modern* policing, health and safety laws, trade unions, recognisably modern* state pensions, the motor car (because otherwise we'd be knee-deep in horse manure), zoning laws, passports, and industrial-scale sewage pumping.
I do wonder who the AI era's version of Marx will be, what their version of the Communist Manifesto will say. IIRC, previous times this has been said on HN, someone pointed out Ted Kaczynski's manifesto.
* Policing and some pensions and democracy did exist in various fashions before the industrial revolution, but few today would recognise their earlier forms as good enough to deserve those names today.
lioeters
"Arguably we would have been better off knee-deep in horse manure." -- MarxGPT
delichon
> I do wonder who the AI era's version of Marx will be
Serena Butler.
tolerance
This is a high quality blog post!
I’m all for a good argument that appears to challenge the notion of technological determinism.
> Every choice is both a political statement and a tradeoff based on the energy we can spend on the consequences of that choice.
Frequently I’ve been opposed to this sort of sentiment. Maybe it’s me, the author’s argument, or a combination of both, but I’m beginning to better understand how this idea works. I think that the problem is that there are too many political statements to compare your own against these days and many of them are made implicit except among the most vocal and ostensibly informed.
pjc50
> Every choice is both a political statement
I think this is a variant of "every action is normative of itself". Using AI states that use of AI is normal and acceptable. In the same way that for any X doing X states that X is normal and acceptable - even if accompanied by a counterstatement that this is an exception and should not set a precedent.
jjice
I really don't like the "everything is political" sentiment. Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
To clarify, I don't think pushing an ideology you believe in by posting a blog post is a bad thing. That's your right! I just think I have to read posts that feel like they have a very strong message with more caution. Maybe they have a strong message because they have a very good point - that's very possible! But often times, I see people using this as a way to say "if you're not with me, you're against me".
My problem here is that this idea that "everything is political" leaves no room for a middle ground. Is my choice to write some boiler plate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
All that to say, maybe I'm totally wrong, I don't know. I'm open to an argument against mine, because there's a very good chance I'm missing the point.
polshaw
Your introductory paragraph comes across very much like "people who want to change the status quo are political and people who want to maintain it are not"; which is clearly nonsense. "how things are is how they should be" is as much of an ideology, just a less conspicuous one given the existing norms.
>Is my choice to write some boiler plate code using gen AI truly political?
I am much closer to agreeing with your take here, but as you recognise, there are lots of political aspects to your actions, even if they are not conscious. Not intentionally being political doesn't mean you are not making political choices; there are many more that your AI choice touches upon: privacy issues, wealth distribution, centralisation, etc etc. Of course these choices become limited by practicalities but they still exist.
tolerance
> Your introductory paragraph comes across very much like "people who want to change the status quo are political and people who want to maintain it are not"; which is clearly nonsense. "how things are is how they should be" is as much of an ideology, just a less conspicuous one given the existing norms.
With respect, I’m curious how you read all of that out of what they said...and whether it actually proves their remarks correct.
madmountaingoat
I don't think you're wrong so much as you've tread into some semantic muddy water. What did the OP mean by 'inevitable', 'political' or 'everything'? A lot hangs on the meaning. A lot of words could be written defending one interpretation over another, and the chance of changing anyone's mind on the topic seems slim.
jjice
Very good point. At that point though, I think it becomes hard to read the post and take anything specific from it. Not all writing has to be specific, but now I'm just a bit confused as to what was actually being said by the author.
But you do make a good point that those words are all potentially very loaded.
tolerance
> Sure, lots of things are or can be, but whenever I see this idea, it usually comes from people who have a very specific mindset that's leaning further in one direction on a political spectrum and is pushing their ideology.
This is also my core reservation against the idea.
I think that the belief only holds weight in a society that is rife with opposing interpretations about how it ought to be managed. The claim itself feels like an attempt to force someone toward the interests of the one issuing it.
> Is my choice to write some boiler plate code using gen AI truly political? Is it political because of power usage and ongoing investment in gen AI?
Apparently yes it is. This is all determined by your impressions on generative AI and its environmental and economic impact. The problem is that most blog posts are signaling toward a predefined in-group either through familiarity with the author or by a preconceived belief about the subject where it’s assumed that you should already know and agree with the author about these issues. And if you don’t you’re against them.
For example—I don’t agree that everything is inevitable. But I as I read the blog post in question I surmised that it’s an argument against the idea that human beings are not at the absolute will of technological progress. And I can agree with that much. So this influences how I interpret the claim “nothing is inevitable” in addition to the title of the post and in conjunction with the rest of the article (and this all is additionally informed by all the stuff I’m trying to express to you that surrounds this very paragraph).
I think that this speaks to the present problem of how “politics” is conflated to additionally refer to one's worldview, culture, etc., in and of itself, instead of something distinct from, though not necessarily separable from, these things.
Politics ought to indicate toward a more comprehensive way of seeing the world but this isn’t the case for most people today and I suspect that many people who claim to have comprehensive convictions are only 'virtue signaling’.
A person with comprehensive convictions about the world and how humans ought to function in it can better delineate the differences and necessary overlap between politics and other concepts that run downstream from their beliefs. But what do people actually believe in these days? Something they can summarize in a sentence or two, that can objectively/authoritatively delineate an “in-group” from an “out-group”, and that informs all of their cultural, political, environmental and economic considerations, and so on...
Online discourse is being cleaved into two sides vying for digital capital over hot air. The worst position you can take is a critical one that satisfies neither opponent.
You should keep reading all blog posts with a critical eye toward the appeals embedded within the medium. Or don't read them at all. Or read them less than you read material that affords you a greater context than the emotional state that the author was in when they wrote the post before they go back to releasing software communiques.
nancyminusone
>Garbage companies using refurbished plane engines to power their data centers is not inevitable
Was wondering what the beef with this was until I realized author meant "companies that are garbage" and not "landfill operators using gas turbines to make power". The latter is something you probably would want.
dale_glass
Individual specific things are not inevitable. But many generic concepts are because of various market, social and other forces.
There's such a thing as "multiple invention", precisely because of this. Because we all live in the same world, we have similar needs and we have similar tools available. So different people in different places keep trying to solve the same problems, build the same grounding for future inventions. Many people want to do stuff at night, so many people push at the problem of lighting. Edison's particular light bulb wasn't inevitable, but electric lighting was inevitable in some form.
So with regards to generative AI, many people worked in this field for a long time. I played with fractals and texture generators as a kid. Many people want it, for many reasons. Artwork is expensive. Artwork is sometimes too big. Or too fixed, maybe we want variation. There's many reasons to push at the problem, and it's not coordinated. I had a period where I was fiddling around with generating assets for Second Life way back because I found that personally interesting. And I'm sure I was not the only one by any means.
That's what I understand by "inevitable", that without any central planning or coordination many roads are being built to the same destination and eventually one will get there. If not one then one of the others.
cons0le
>Your computer changing where things are on every update is not inevitable.
This a million times. I honestly hate interacting with all software and 90% of the internet now. I don't care about your "U""X" front end garbage. I highly prefer text based sites like this
alnwlsn
As my family's computer guy, my dad complains to me about this. And there's no satisfactory answer I can give him. My mom told me last year she is "done learning new technology" which seems like a fair goal but maybe not a choice one can make.
You ever see those "dementia simulator" videos where the camera spins around and suddenly all the grocery store aisles are different? That's what it must be like to be less tech literate.
titzer
It's been driving me nuts for at least a decade. I can't remember which MacOS update it was, but when they reorganized the settings to better align with iOS, it absolutely infuriated me. Nothing will hit my thunder button like taking my skills and knowledge away. I thought I might swear off Mac forever; I've been avoiding upgrading past macOS 13. In the past couple of updates, the settings for displays have become completely different for no reason. That's a dialog one doesn't use very often, except, for example, when giving a presentation. It's pretty jarring to plug in on stage in front of dozens or even hundreds of people and suddenly have to figure out a completely unfamiliar and unintuitive way of setting up mirroring.
I blame GUIs. They disempower users and put them at mercy of UX "experts" who just rearrange the deck chairs when they get bored and then tell themselves how important they are.
dpedu
The MacOs settings redesign really bothered me too. Maybe it's the 20+ years of muscle memory, or maybe the new one really is that bad, but I find myself clicking around excessively and eventually giving up and using search. I'm with you here.
the_other
It's got a bunch of problems.
- some options have moved to menus where they make no sense at all (e.g. all the toggles for whether a panel's menu bar icon appears in the menu bar have moved off the panel for that feature and onto the Control Centre panel; but Control Centre doesn't have any options of its own, so the entire panel is a waste of time and has created a confusing UX where previously there was a sensible one)
- loads of useful stuff I do all the time has moved a layer deeper. e.g. there used to be a top-level item called "sharing" for file/internet/printer sharing settings. It's moved one level deeper, below "General". Admittedly, "the average user" who doesn't use sharing features much, let alone wanting to toggle and control them, probably prefers this, but I find it annoying as heck
- following on from that, and also exhibited across the whole settings UI, UI patterns are now inconsistent across panels; this seems to be because the whole thing is a bunch of web views, presumably each controlled by a different team, so they can create whatever UI they like with whatever tools make sense. Before, I assume, there was more consistency because panels seemed to reuse the same default controls. I'm talking about use of tabs, or drop-downs, or expanders, or modal overlays... every top-level panel has some of these, and they use them all differently: some panels expand a list to reach sub-controls, some add a modal, some just have piles of controls in lozenges
- it renders much slower. On my M3 and M4 MBPs you can still see lag. It's utterly insane that on these basically cutting-edge processors with heaps of RAM, spare CPU cores, >10 GPU cores, etc., the system control panel still lags
- they've fallen into the trap of representing "features" as horizontal bars with a button or toggle on the right edge. This pattern is found in Google's Material UI as well. It _kinda_ makes sense on a phone, and _almost_ makes sense on a tablet. But on a desktop, where most windows could be any width, it introduces a bunch of readability problems. When the window's wide, it's very easy for the eye to lose the horizontal connection between a label and its toggle/button/etc. To get around this, Apple have locked the width of the Settings app... which also seems a bit weird.
- don't get me started on what "liquid glass" has done to the look & feel
eszed
I personally agree with everything you say, and am equally frustrated with (years later) not being able to find MacOS settings quickly - though part of that's due to searching within settings being terrible. Screen mirroring is the worst offender for me, too.
However, I support ~80 non-technical users for whom that update was a huge benefit. They're familiar with iOS on their phones, so the new interface is (whaddya know) intuitive for them. (I get fewer support calls, so it's of indirect benefit to me, too.) I try to let go of my frustration by reminding myself that learning new technology is (literally) part of my job description, but it's not theirs.
That doesn't excuse all the "moving the deck chairs" changes - Tahoe re-design: why? - but I think Apple's philosophy of ignoring power users like us and aligning the settings interface with iOS was broadly correct.
Funny story: when my family first got a Windows computer (3.1, so... 1992 or '93?) my first reaction was "this sucks. Why can't I just tell the computer what to do anymore?" But, obviously, GUIs are the only way the vast majority will ever be able to interact with a device - and, you know, there are lots of tasks for which a visual interface is objectively better. I'd appreciate better CLI access to MacOS settings: a one-liner that mirrors to the most recently-connected display would save me so much fumbling. Maybe that's AppleScript-able? If I can figure it out I'll share here.
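For what it's worth, a rough starting point that doesn't need AppleScript: a minimal, untested Swift sketch against the CoreGraphics display-configuration calls (CGGetActiveDisplayList, CGBeginDisplayConfiguration, CGConfigureDisplayMirrorOfDisplay, CGCompleteDisplayConfiguration). It mirrors every secondary display onto the main one rather than specifically the most recently connected one, since as far as I know CoreGraphics doesn't expose connection order directly. The file name and display count are just placeholders.

    // mirror.swift -- a minimal, untested sketch; compile with: swiftc mirror.swift -o mirror
    // Mirrors every secondary display onto the main display via CoreGraphics.
    import CoreGraphics

    let maxDisplays: UInt32 = 16
    var displays = [CGDirectDisplayID](repeating: 0, count: Int(maxDisplays))
    var displayCount: UInt32 = 0

    // Enumerate the displays that are currently active.
    guard CGGetActiveDisplayList(maxDisplays, &displays, &displayCount) == .success else {
        fatalError("could not enumerate displays")
    }

    let main = CGMainDisplayID()
    var config: CGDisplayConfigRef?

    // Batch the changes into one configuration transaction.
    guard CGBeginDisplayConfiguration(&config) == .success else {
        fatalError("could not begin a display configuration")
    }

    // Ask each non-main display to mirror the main one.
    for display in displays.prefix(Int(displayCount)) where display != main {
        _ = CGConfigureDisplayMirrorOfDisplay(config, display, main)
    }

    // .forSession keeps the change until logout; .permanently would persist it.
    _ = CGCompleteDisplayConfiguration(config, .forSession)

Run it after plugging in, or bind the binary to a hotkey; treat it as a sketch to adapt, not a finished tool.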
vegadw
... Did you just complain about modern technology taking power away from users only to post an AI-generated song about it? You know, the thing taking away power from musicians and filling up all modern digital music libraries with garbage?
There's some cognitive dissonance on display there that I'm finding hard to wrap my head around.
titzer
> Did you just complain...only to post an AI generated song about it?
Yeah, I absolutely did. Only, I wrote the lyrics and AI augmented my skills by giving it a voice. I actually put significant effort into that one; I spent a couple of hours tweaking it and increasing its cohesion and punchiness, iterating with ideas and feedback from various tools.
I used the computer like a bicycle for my mind, the way it was intended.
1970-01-01
Ads are inevitable assuming a capitalism-fueled Internet continues. If you can remember the early WWW, you know ads weren't there, but e-commerce existed in a rudimentary fashion. You couldn't buy anything by clicking a mouse, but you could go and download a catalog, or some shareware that showed a phone number during installation so the sales team could give you a license key.
I think a more accurate and more useful framing is:
Game theory is inevitable.
Because game theory is just math, the study of how independent actors react to incentives.
The specific examples called out here may or may not be inevitable. It's true that the future is unknowable, but it's also true that the future is made up of 8B+ independent actors and that they're going to react to incentives. It's also true that you, personally, are just one of those 8B+ people and your influence on the remaining 7.999999999B people, most of whom don't know you exist, is fairly limited.
If you think carefully about those incentives, you actually do have a number of significant leverage points with which to change the future. Many of those incentives are crafted out of information and trust, people's beliefs about what their own lives are going to look like in the future if they take certain actions, and if you can shape those beliefs and that information flow, you alter the incentives. But you need to think very carefully, on the level of individual humans and how they'll respond to changes, to get the outcomes you want.
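To make the "alter the incentives, alter the outcome" idea concrete, here is a toy best-response sketch. The names and payoff numbers are invented for illustration; they don't come from the article or any real market.

    // incentives.swift -- a toy, invented example.
    // Two actors each choose "cooperate" or "defect"; each plays the best
    // response to the other's move. Changing one payoff flips the outcome.

    typealias Payoffs = [String: [String: (a: Int, b: Int)]]   // payoffs[aMove][bMove]

    // Actor A's best reply to a fixed move by B: the row with the higher A-payoff.
    func bestResponse(in payoffs: Payoffs, toB bMove: String) -> String {
        payoffs.max { $0.value[bMove]!.a < $1.value[bMove]!.a }!.key
    }

    // A prisoner's-dilemma-shaped table: defecting dominates for A.
    var payoffs: Payoffs = [
        "cooperate": ["cooperate": (a: 3, b: 3), "defect": (a: 0, b: 5)],
        "defect":    ["cooperate": (a: 5, b: 0), "defect": (a: 1, b: 1)],
    ]

    print(bestResponse(in: payoffs, toB: "cooperate"))   // defect
    print(bestResponse(in: payoffs, toB: "defect"))      // defect

    // "Rewrite the payoffs": suppose reputation, fines, or norms make defecting
    // against a cooperator pay less than cooperating does.
    payoffs["defect"]!["cooperate"] = (a: 2, b: 0)

    print(bestResponse(in: payoffs, toB: "cooperate"))   // cooperate

Nobody's preferences changed in that toy; one number in the table did, and the best response flipped with it. That is the narrow sense in which shaping information, trust, and beliefs changes what self-interested actors end up doing.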