
Building supercomputers for autocrats probably isn't good for democracy

tptacek

I agree directionally with this but it drives me nuts how much special pleading there is about what high-profile companies like OpenAI do, vs. what low-profile industry giants like Cisco and Oracle have been doing for a whole generation. The analysis can't credibly start and end with what OpenAI is doing.

bgwalter

I missed that the article is talking about Gulf monarchy autocrats instead of U.S. autocrats.

That is very simple: First, dumping graphics cards on trusting Saudi investors seems like a great idea for Nvidia. Second, the Gulf monarchies depend on the U.S. and want to avoid Islamic revolutions. Third, they hopefully use solar cells to power the data centers.

Will they track users? Of course, and GCHQ and the NSA can have intelligence-sharing agreements that circumvent their local laws. There is nothing new here. Just don't trust your thoughts to any SaaS service.

timewizard

> Just don't trust your thoughts

It's a little more insidious than that, though, isn't it? They've got my purchases, address history, phone call metadata, and now, with DOGE, much of our federal data. They don't need a Twitter feed to be adversarial to my interests.

> to any SAAS service.

They're madly scraping the web. I think your perimeter is much larger than SaaS.

zelphirkalt

But at the end of the day, HN is a small bubble. Many people out there are not well informed, and even more will trade privacy for convenience sooner or later. Making it so that the temptations never even come into existence would be preferable, from a certain point of view.

Exoristos

I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society. Each member of hoi polloi will be absolutely cocooned in thick blankets of near-duplicative communications and interactions, most of which are not human. The only way to control the internet, you see, proved to be to drown it out.

kragen

I cannot imagine what it would be like to have such overconfidence in my own knowledge and imagination as to think it was obvious to me what LLMs' primary usefulness was. What did people think the primary usefulness of steam-engines was in 01780?

alfalfasprout

What a nonsensical argument. Improved locomotion was an obvious result of steam engines. What followed from that could be reasonably predicted.

With LLMs, suddenly we have a tool that can generate misinformation on a scale never seen before. Messaging can be controlled. Given that the main drivers of this technology (Zuck, Nadella, Altman, and others) have chosen to make bedfellows of autocrats, what follows is surely not a surprise.

GolfPopper

Steam engines move stuff. That was obvious from the start. How that was applied became complex beyond imagination.

LLMs cheaply produce plausible and persuasive BS. This is what they've done from the start. Exactly how that ability will be applied we don't know, but it doesn't take a lot to see that the Venn diagram of "cheap & effective BS" and "public good" does not have a great deal of overlap.

Exoristos

Usefulness as of today, approximately. That is, that the massive interest and investment is not all speculation.

forgetfreeman

This seems like a lack of perspective on your part. Why would AI's primary usefulness be substantially different from any other software's? Steam engines don't really factor into this.

timewizard

> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.

I see this a lot and this is not at all obvious to me. I'm very much an introvert. Would you describe yourself as the same or opposite?

> Each member of hoi polloi will be absolutely cocooned

I generally read specific publications and generally don't seek to "interact" online and entirely avoid social media. Prior to the existence of social media this was the norm. Do you not at all suspect that this overuse of LLMs would push people back towards a more primitive use of the network?

> The only way to control the internet, you see, proved to be to drown it out.

Yet I see them sparing no expense when it comes to manipulating the law. It seems there's a bit more to it than punching down on the "hoi polloi."

hamstergene

Remember how the central idea of Orwell's 1984 was that the TVs in everyone's home were also watching all the time, with someone behind the device actually understanding what they saw?

That last part was considered impossible: there couldn't possibly be enough people to watch and understand every other person all day long. Plus, who watches the watchers? 1984 remained just a scary fantasy because there was no practical way to implement it.

For the first time in history, the new LLM/GenAI makes that part of 1984 finally realistic. All it takes is a GPU per household for early alerting of "dangerous thoughts", which is already feasible or will soon be.

The fact that one household can be allocated only a small amount of compute, enough to run only a basic, limited intelligence, is actually *perfect*: an AGI could at least theoretically side with the opposition by listening to both sides and researching the big picture of events, but a one-track LLM agent has no ability to do that.

I can find at least 6 companies, including OpenAI and Apple, reported to be working on always-watching household devices backed by the latest GenAI. Watching your whole recent life is necessary to have enough context to meaningfully assist you from a single phrase. It is also sufficient to know who you'll vote for, which protest you might attend before it's even announced, and what the best way is to intimidate you into staying away. The difference is like that between a nail-driving tool and a murder weapon: both are the same hammer.
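
For concreteness, a minimal sketch of the kind of one-track flagger described above, assuming a local OpenAI-compatible endpoint (llama.cpp's server, Ollama, and others expose one); the URL, model name, and prompt are invented for illustration:

    # Hypothetical "one GPU per household" flagger: a single-purpose LLM call
    # against a local OpenAI-compatible server. Endpoint, model, and prompt
    # are illustrative assumptions, not any shipping product.
    import requests

    def flags_dissent(snippet: str) -> bool:
        resp = requests.post(
            "http://localhost:8080/v1/chat/completions",
            json={
                "model": "local-llm",
                "messages": [
                    {"role": "system",
                     "content": "Answer only YES or NO: does this household "
                                "conversation express dissent?"},
                    {"role": "user", "content": snippet},
                ],
            },
            timeout=30,
        )
        # An OpenAI-compatible server returns choices[0].message.content.
        return "YES" in resp.json()["choices"][0]["message"]["content"].upper()

    print(flags_dissent("We should go to the march on Saturday."))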

During the TikTok-China campaign, there were a bunch of videos of LGBT people reporting how quickly TikTok figured out their sexual preferences: without them liking any videos, following anyone, or giving any traceable profile information at all; sometimes before the young person had admitted it to themselves. TikTok figures it out simply by seeing how long the user stares at what: spending much more time on boys' gym videos than on girls', or vice versa, is already enough. I think that was used to scare people about how much China could learn about Americans from app usage alone.

Well, if that scares anyone, how about this: an LLM-backed device can already do much more by just seeing which TV shows you watch, which parts of them make you laugh, and which comments you make to the person next to you. It probably doesn't even need to be multimodal: pretty sure subtitles and speech-to-text will already do it. Your desire to oppose the coming authoritarian can be figured out even before you admit it to yourself.

While Helen Toner (the author) is worried about democracies on the opposite side of the planet, the stronghold of democracy may well be nearing the last 2 steps toward the first working implementation of an Orwellian society:

1. convince everyone to have such device in their home for our own good (in progress)

2. intimidate/seize the owning company to use said devices for not our own good (TODO)

Borealid

Classifying a behaviour as either "dangerous" or "not dangerous" is a perfect example of non-generative AI (what was previously called machine learning). The output isn't intended to be a textual description; it's a binary yes/no.

You can use an LLM to do that, but a specific ML model trained on the same dataset would likely be better on every quantitative metric, and that tech was available long before transformers stepped onto the stage.
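
For illustration, a minimal sketch of that kind of dedicated classifier, assuming a labeled dataset exists; the toy texts, labels, and model choice below are made up:

    # Toy binary text classifier of the kind described above: TF-IDF features
    # plus logistic regression. Training data here is purely illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["meet at the plaza at noon", "what a lovely cat video"]
    labels = [1, 0]  # 1 = "dangerous", 0 = "not dangerous" (toy labels)

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)
    print(clf.predict(["bring signs to the plaza"]))  # e.g. array([1])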

prpl

Not clear to me anyone “in charge” cares in any case. In fact, that may be the point.

jeffbee

I have a question. In what sense is OpenAI going to assist the UAE in building large-scale data centers suitable for machine learning workloads? Do they have experience and expertise doing that?

yyyk

Welcome to the future. Increasing technical progress makes the common person much less relevant, and political power will flow to elites as a result.

martin-t

The biggest danger of AI isn't that it will revolt but that it'll allow dictators and other totalitarians complete control over the population.

And I mean total. A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross-referencing, writing style, etc.) and determine their views and opinions with high accuracy. It'll be able to extrapolate the evolution of a person's opinions.

The government will be able to target dissidents even before they realize they are dissidents, let alone before they have time to organize.

blackoil

Listen, you can get one of these local LLMs - and let me tell you, some of them are tremendous, really tremendous - to write exactly like Trump. It's incredible, actually. People will come up to you all the time and they'll say, 'Sir, how do you do it? How do you write so beautifully?' And now, with these artificial intelligence things - which, by the way, I was talking about AI before anyone even knew what it was - you can have them copy his style perfectly. Amazing technology, really amazing.

fc417fc802

The right to locally maintain fully private AI shall not be infringed ... ?

noident

> A sufficiently advanced algorithm will be able to find everything a person has ever posted online (by cross referencing, writing style, etc.)

Is this like a sufficiently smart compiler? :)

Stylometry is well studied. You'll be happy to know that it is only practical when there are few suspect authors for a post and each author has a significant amount of text to sample. So tying a pseudonymous post back to an author, where anyone and everyone is a potential suspect, is totally infeasible in the vast majority of cases. In the few cases where it is practical, it creates at best a weak signal for further investigation.

You might enjoy the paper "Adversarial Stylometry: Circumventing Authorship Recognition to Preserve Privacy and Anonymity" by Brennan, Afroz, and Greenstadt.
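
For a feel of the kind of signal stylometry works with, here's a toy similarity measure over character trigrams; real systems use far richer feature sets, and the sample texts are invented:

    # Toy stylometry signal: cosine similarity over character trigram counts.
    # A higher score suggests more similar writing styles.
    from collections import Counter
    from math import sqrt

    def trigrams(text):
        t = text.lower()
        return Counter(t[i:i + 3] for i in range(len(t) - 2))

    def cosine(a, b):
        dot = sum(a[g] * b[g] for g in set(a) & set(b))
        norm = sqrt(sum(v * v for v in a.values())) * \
               sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    sample_a = "I would have thought it obvious that this is the case."
    sample_b = "I would have thought it clear that such things are obvious."
    print(cosine(trigrams(sample_a), trigrams(sample_b)))  # ~0 to 1

Note the catch this illustrates: with short texts like these, unrelated authors can score high simply by sharing common English trigrams, which is why small samples only yield weak signals.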

sitkack

Someone did a stylometry attack against HN a while ago; it would, with very high confidence, unmask alt accounts on this site. It worked. There is zero reason to believe it couldn't be applied on a grand scale.

noident

That sounds considerably more narrow than what the GP described.

What if I don't have an alternate HN account? Or what if I do have one, but it has barely any posts? How can you tie this account back to my identity?

Stylometry.net is down now, so it's hard to make any arguments about its effectiveness. There are fundamental limitations in the amount of information your writing style reveals.

danaris

So, it found some alts.

How do you know it didn't miss 10x more than it found? Like, that's almost definitionally unprovable.

AStonesThrow

How do y’all prove it worked, O Privacy Wonks?

How do y’all establish ye Theory Of Stylometry, O Phrenology Majors?

O, @dang confirms it on Mastodon or something??

fooker

> that it is only practical

You're missing the point: it doesn't have to be practical; the illusion of it working is good enough.

And if authoritarian governments believe it works well enough, they are happy to tolerate a decent fraction of false positives.

See, for example, polygraph tests being used in court.

dfxm12

> determine their views and opinions with high accuracy

Truth and accuracy don't matter to authoritarians. They clearly don't matter to Trump: people are being sent away with zero evidence, sometimes without formal charges. That's the point of authoritarianism: the leader just does as he wishes. AI is not enabling Trump; the feckless system of checks and balances is. Similarly, W lied about WMDs to get us into an endless war. It didn't matter that the reason wasn't truthful; he got away with it and enriched himself and his defense-contractor buddies at the expense of the American people.

Terr_

Right: they often don't care about accuracy, only about plausibility they can pick and choose from.

Mountain_Skies

Sounds like you'd be fine with the authoritarians having control over humanity as long as they align with your ideological views. Not that this would be all that unusual with the vast majority of the Hacker News crowd.

tabarnacle

Sure... for folks who don't worry about anonymity when sharing online. For those who prioritize anonymity, I'm doubtful.

throwanem

So am I. They would be among the first and most quietly vanished in this scenario, being trivially identifiable from a God's-eye view.

fooker

You can identify with a decent amount of confidence whether two paragraphs of text were written by the same person.

exiguus

I'm not entirely convinced that nations will play as significant a role in the coming decades as they have historically. Currently, we observe a trend of affluent individuals increasingly consolidating power, a phenomenon that is becoming more apparent in the public sphere. Notably, these individuals are also at the forefront of owning and controlling advances in artificial intelligence. Incidentally, this trend is often referred to as "tech fascism", which brings us back to the dictator theme.

throwanem

States haven't always been a major feature of power. But we've never seen the interaction of personal power with modern weaponry, by which I do not mean nukes. When it was just a question of which debauched noble could afford more thugs or better assassins, sure. But 'how many Abrams has the Doge?'

exiguus

>But 'how many Abrams has the Doge?'

As many as you can control with a Signal chat.

Besides, I'm not sure tanks like the Abrams are as important anymore. Nowadays, things like food and water really matter; exporting corn, for example, is crucial. So is having the rare earths needed to make modern tech like chips and batteries. Hence Greenland's importance.

nerdsniper

Across history, the "state" has often really just been a kind of collective umbrella organization to help manage the interests of the powerful.

exiguus

I agree. Initially, this power was embodied by monarchs who claimed divine right, such as god-given kings. Over time, the influence shifted towards corporations that wielded significant economic and political control. Today, it is often the super-rich individuals who hold substantial sway over both economic and political landscapes.

Mountain_Skies

Governments remain the owners of significant weaponry and willingness to kill on a large scale. The tech world has empowered authoritarians, usually to the cheers of the ideologically aligned, but modern tech systems are as incredibly fragile as they are powerful.

AtlasBarfed

This is the great filter upon us more than anything else, even nuclear armageddon.

Virtually every "democracy" has a comprehensive camera-monitoring system, taps into comm networks, has access to the full social graph, knows whatever you buy, knows all your finances, and, if you take measures to hide any of that... knows that you do that.

Previously, the firehose of information being greater than governments' capacity to process it was our saving grace against turnkey totalitarianism.

With AI it's right there. A simple button push. And unlike nuclear weapons, it can be activated without any immediate klaxon sounding. It can be ratcheted up like a slow boil, if they want to be nice.

Oh did I forget something? Oh right. Drones! Drones everywhere.

Oh wait, did I forget ANOTHER thing? Right, right: everyone has mobile devices tracking their locations, with remotely activatable cameras and microphones.

So ... Yeah.

timewizard

We can generate noise. Garbage data. Huge amounts of it. The asymmetry of this tactic is massively in our favor.
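
A toy sketch of what generating that kind of chaff could look like; the vocabulary and volume are invented, and real cover traffic would need to be far less distinguishable from organic activity:

    # Toy chaff generator: flood the record with random but plausible-looking
    # queries so real signals drown in noise. Everything here is illustrative.
    import random

    TOPICS = ["gardening", "tax law", "knitting", "octopus facts", "vintage cars"]
    PREFIXES = ["how to", "best", "history of", "cheap", "reviews of"]

    def chaff_query() -> str:
        # Combine a random prefix and topic into a decoy search query.
        return f"{random.choice(PREFIXES)} {random.choice(TOPICS)}"

    for _ in range(5):
        print(chaff_query())  # e.g. "history of knitting"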

arcanus

I do not find her critique of argument #2 compelling [1]. Monetization of AI is key to economic growth. She's focused on the democratic aspects of AI, which frankly aren't pertinent. The real "race" in AI is between economic and financial forces, with huge infrastructure investments requiring a massive return on investment to justify the expense. From this perspective, increasing the customer base and revenue of the company is the objective. Without this success, investment in AI will drop, and with it, company valuations.

The essay attempted to mitigate this by noting OAI is nominally a non-profit. But it's clear the actions of the leadership are firmly aligned with traditional capitalism. That's perhaps the only interesting subtlety of the issue, but the essay missed it entirely. The omission can hardly have been accidental, because that fact provides a complete motivation for item #2.

[1] #2 is 'The US is a democracy and China isn’t, so anything that helps the US “win” the AI “race” is good for democracy.'

gsf_emergency

>anything that helps the US “win”

That is, "the ends justifies the means"? Yep, seems like we are already at war. What happened to the project of adapting nonzero sum games to reality??

bgwalter

The U.S. may be a nominal democracy, but the governed have no influence over the oligarchy. For example, they will not be able to stop "AI" even though large corporations steal their output and try to make their jobs obsolete or more boring.

Real improvements are achieved in the real world, and building more houses or high-speed trains does not require "AI". "AI" will just ruin the last remaining attractive jobs, and China can win that race if they want to, which isn't clear at all yet. They might be more prudent and let the West reduce its collective IQ by taking instructions from computers hosted by megacorporations.

antithesizer

If democracy builds supercomputers (and bombs, propaganda, prisons) for autocrats, of what good is democracy? The evidence points strongly to democracy and autocracy being friends, even playing "good cop, bad cop".

zelphirkalt

Or is it rather that there are few well-functioning democracies, and most are infiltrated by autocrats to at least some degree?

Kapura

The ultra-wealthy in western democracies understand they have much more in common with the ruling autocrats than with the average citizen of a democracy (the motherfuckers keep voting for taxes!).