Malicious AI swarms can threaten democracy
123 comments · June 20, 2025 · bastawhiz
tyleo
I would actually love if what comes out of this is that people stop trusting social media entirely and put a lot more weight into face-to-face interactions.
Loughla
Legitimately though, when did we shift from 'don't believe anything on the Internet' to 'believe everything on the Internet'?
When and why did that happen?
const_cast
When people discovered that telling people what they want to hear makes them money.
My favorite example of this is the entire sphere of anti-science "health" stuff on TikTok. Seed oils bad, steak good, raw milk good, chemo bad. I noticed something. Every single time, without fail, this person is trying to sell me something. Sometimes it's outright linked in the TikTok, sometimes it's in their bio. But they're always salespeople.
Salespeople lie, guys. They want you to buy their stuff; of course they're going to tell you their stuff works.
kristjansson
General inability to distinguish the form of content from its veracity? The same people that assailed the internet in the 90s probably bought tabloids at the supermarket checkout. Newsweek, News of the World, who can tell the difference?
reaperducer
when did we shift from 'don't believe anything on the Internet' to 'believe everything on the Internet'?
It's largely generational.
Boomers and X's were there when the internet debuted and were exposed to lots of "Beware of the scary internet" stories in the legitimate media.
The internet was already normal and common when Millennials and Z's came along, so they didn't get the same warnings.
The notion that grandma falls for online scams more often than a Z is an ageist trope that has been disproven in several studies.
frollogaston
That, and low-effort journalism and ads. Even without AI, writing tons of BS was already a well-established skill; now it doesn't even take skill.
alganet
That doesn't solve the problem.
The person you are talking to face-to-face could have been targeted with disinformation as well.
This is suggested in the paper: manufacture of narrative across communities. Those communities are not exclusively online.
Atlas667
There are only two logical solutions: the powers that be will either create heavier blinders within nation-states, or the public will create truly independent/public systems.
Social mass media is doomed to fail in its current form. These platforms are already manipulated by capital through advertising, nation-states, data brokers, and platform self-interest.
People need more federated networks where agents can be verified, at least locally, and where "feeds" cannot be manipulated.
The powers that be do not believe in democracy how you and I believe in it, they believe in manufactured consent.
Mass media in its current form is just a way to create consent within the masses, not the other way around, the masses don't make the decisions.
hayst4ck
AI execs push the message of how dangerous and powerful AI is because who doesn't want to invest in the next most powerful thing?
But AI is not dangerous because of the potential for sentience; it is dangerous because it makes the rich richer and the poor poorer. It gives those with resources more advantage. It's dangerous because it gives individuals the power to provide answers when people ask questions out of curiosity they don't know the answer to, which is when they are most able to be influenced.
guywithahat
> because it makes the rich richer and the poor poorer
This is a terrible take. Whenever there is a massive technical shift it's the incumbents who struggle to adapt. We've already seen companies go from nothing to being worth tens of billions.
esafak
The OP is likely referring to the technology's ability to replace certain workers, which naturally leaves them poorer. The net effect is one of increased wealth inequality, the minting of new AI entrepreneurs notwithstanding. Is the data on this out yet?
sgjohnson
We survived the Industrial Revolution; we'll survive AI.
hn_throwaway_99
Even if there are some new entrants who become remarkably successful, it still means that for the vast majority of humans who aren't AI experts, the "spoils" will accrue primarily to these "AI winners" while it becomes much, much, much harder for many, many, many more people to sell their labor. And many of the investors in the companies poised to make bank are the usual suspects.
And even then, many of the biggest winners of AI so far (Google, Microsoft, NVidia, etc.) are already some of the biggest companies on the planet.
bilbo0s
Even the AI experts are the usual suspects.
The techies making money are the AI experts. The real AI experts. Not the TF/PyTorch monkeys.
There's actually a massive and underappreciated difference between the two groups of people. It goes to the heart of why so many AI efforts in industry fail while the AI labs keep making more and more advances. Out in industry, we mistake TensorFlow monkeys for AI experts. And it's not even close.
Worse, you look at the market price for some of the real AI experts, and you realize that you have no shot at securing any of that intellectual capital. And even if you have the resources to secure some of that talent, that talent has requirements. They're fools if they consider you at less than 10^4 H100s. So now you have another problem.
I think techniques, R&D, secrets, intellectual capital, and so on are all centralizing in the major labs. Startups as we knew them 5 to 10 years ago will simply be choosing which model to build on. They have no legitimate shot at displacing any of the core LLM ecosystems. They'll all be nibbling at the edges.
reaperducer
We've already seen companies go from nothing to being worth tens of billions.
And now the artists have to wash dishes because AI is making the art.
Nope, he was right. The rich get richer, the poor get poorer. A few unicorns you can count on one hand don't change the facts.
hayst4ck
You can pay more for more access to a better model. Answer quality directly correlates to resources spent generating it.
It literally and structurally offers advantage to those with more resources.
That effect compounds over time and use.
I am not even remotely talking about worker replacement. Even assuming no jobs were lost, a company that is able to pay for better answers should have more profit, and therefore more ability to pay for more and better answers.
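A toy numerical sketch of that compounding loop (every figure here is hypothetical, purely to illustrate the mechanism):

    # Hypothetical toy model: budget buys model access, access quality
    # drives profit, and part of the profit is reinvested into more access.
    def step(budget: float) -> float:
        profit = 0.2 * budget         # answer quality, hence profit, scales with spend
        return budget + 0.5 * profit  # reinvest half the profit into better access

    big, small = 100.0, 10.0          # two firms, one with 10x the resources
    for _ in range(20):
        big, small = step(big), step(small)

    # Both grow 10% per round, but the head start compounds:
    print(f"gap went from 90 to {big - small:.0f}")  # prints roughly 605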
bilbo0s
Well, companies, by and large, started and funded by...
incumbents.
That said, I think that's just how reality works.
delusional
Yet it's somehow the same band of people at the helm of these "new" companies.
croes
You confuse companies with people.
Companies come and go, but the rich people who own them stay nearly the same.
guywithahat
You are your own sole proprietorship: go learn an in-demand skill and companies will compete for you. It can be hard to see at big companies, but these market forces are very apparent in a 50-person company.
oezi
As long as the stock market appreciates faster than GDP grows, the rich will become richer more quickly than the poor, no matter what we do.
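A back-of-the-envelope illustration (the rates are assumptions, not sourced figures):

    # Assumed rates: stock wealth compounds at r, wages track GDP growth g.
    r, g = 0.07, 0.03
    wealth, wage = 100.0, 100.0
    for _ in range(30):
        wealth *= 1 + r
        wage *= 1 + g
    # With r > g, the ratio widens every year:
    print(f"wealth {wealth:.0f}, wage {wage:.0f}, ratio {wealth / wage:.2f}")
    # after 30 years the ratio is about 3.1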
dismalaf
They push the danger factor so that governments will regulate the industry and prevent startups from competing. It's 100% about regulatory capture.
frollogaston
Social media is a pretty new thing, even home internet is. People used to get info from each other or from a select few broadcast/paper news sources, the latter owned by powerful people of course. AI or not, we're probably going back to that because people realized you can't trust stuff from random places.
reaperducer
People used to get info from each other or from a select few broadcast/paper news sources, the latter owned by powerful people of course.
Except, back then the newspaper and TV stations and radio stations would fact-check one another. If one of them was lying, the others would call them out on it.
That cost the people in power readers/viewers, and eventually money, so there was an incentive to tell the truth and develop a good reputation.
Tech doesn't care about reputation because it "doesn't scale" or is "long tail" or some other excuse for laziness.
SV_BubbleTime
>It's dangerous because it gives individuals the power to provide answers when people ask questions out of curiosity they don't know the answer to, which is when they are most able to be influenced.
Can you point to a time in the history of written information when this wasn't true, though?
- Was the Library of Alexandria open to everyone? My quick check says it was not.
- Access to written information already presupposed an education of some form.
- Do you think Google has an internal search engine that is better than the one you see? I suspect they have an ad-less version that is better.
- AI models are an easy one, obviously. Big players and others surely have hot versions of their models that would be considered too spicy for consumer access. One of the most memorable parts of AI 2027 for me was the idea that you might want to go high in government to get access to the superintelligence models, and use them to maintain your position.
The point is, that last one isn't the first instance of this pattern.
louwrentius
Thank you, this is an excellent assessment
ivape
It's dangerous for psychological reasons. We already saw how the internet was able to organically form bubbles that escaped into the real world and fully influenced how people identify and conduct themselves. AI now lets people subdivide into further groups based on entirely arbitrary criteria, which they feed into the AI and the AI feeds their bubble back to them. Think Manson; think cults, suicide pacts, the tide-pod-eating cult, and of course political parties.
The kids will at some point worship a prompt-engineered God (not the developer, the actual AI agent), and there will be nothing society will be able to do about it. Nobody verbalizes that Gen Z moves entirely like a cult; trend after trend is entirely cult-like behavior. The generation that is going to get raised by AI (Gen Z's kids) is going to be batshit crazy.
hansvm
> The kids will at some point worship a prompt-engineered God
The way some people talk about LLM coding, I don't know that we're far off.
frollogaston
I'm on the older side of Gen Z, and I think the older generations move more like cults. The younger ones are addicted to phones from a young age, since their Millennial parents thought that'd be a great idea, though that's finally starting to change.
AtlasBarfed
It gives the powerful more power to monitor and control the people.
AI is despotism automated.
ilaksh
Stop blaming technology for the way humans misuse it. AI, like any technology, is a lever. Like a big metal rod. You could use that to move stones for building a structure, or to dislodge a boulder to roll down a hill and destroy someone else's building.
The pre-AI situation is actually incredibly bad for most people in the world who are relatively unprivileged.
"Democracy" alternates between ideological extremes. Even without media, the structure of the system is obviously wholly inadequate.
Advanced technologies can be used to make things worse. But they are also the best hope for improving things. And especially the best hope for empowering those with less privilege.
The real problems are the humans, their belief systems and social structures. The status quo may seem okay to you on most days, but it is truly awful in general. We need as many new tools as possible.
Don't blame the tools. This is the worst kind of ignorance.
const_cast
> Stop blaming technology for the way humans misuse it.
Exactly, that's why I propose every person has access to a nuke.
Elephant in the room here: the scale of technology matters. Being able to lie on a scale that eradicates truth as a concept matters.
We can't just naively say all tools are similar. No no, it doesn't work that way. I'm fine with people having knives. I'm fine with people having a subset of firearms. I am not fine with people having autonomous drones, or nuclear weaponry, or whatever. Everything is a function of scale. We cannot just remove scale from the equation and pretend our formulas still hold up. No, they don't. That's why the printing press created new religions.
mclau157
It's a bit more a question of which we can change more easily: tools, or human nature.
1659447091
Human nature. The tools will always follow that.
alephnerd
> Stop blaming technology for the way humans misuse it
> Don't blame the tools. This is the worst kind of ignorance
"Stop blaming [guns // religion // drugs // cars // <insert_innovation_here>] for the way humans misuse it"
There is a reason regulations exist. Too much regulation is detrimental to innovation, but some amount of standards is needed.
ilaksh
Guns, drugs and religion are not equivalent to AI.
I did not say there should not be regulation or standards.
The point is that the whole diagnosis is wrong. People point at AI as creating a new problem as if everything was okay.
But everything is already fucked, and it's not because of technology; it's because of people, their social structures, and their beliefs. That is what we need to fix.
There are lots of ways that AI could help democracy and government in general.
salawat
None of it outweighs the harms in the hands of provably malicious governments or corporate interests. Hell, the Western ideal of Government is only truly considered tolerable given the caveat that we're constantly cycling chunks of it out, and that it never becomes so efficient and automated that it can trivially operate without the consent of the governed.
croes
And there are lots of ways to fuck the situation up even more.
AI makes mass-produced fake news possible.
You think the situation is bad? Let’s talk about that in 5 years.
jimmyjazz14
> Stop blaming [guns // religion // drugs // cars // <insert_innovation_here>] for the way humans misuse it
I tend to agree with this statement honestly.
SV_BubbleTime
Right? People say this as some defense of regulation, but it's the opposite. Everything that can be misused, will be. The USA has a lot of shootings, and guns are more common there than anywhere on earth: big surprise? But when you get beyond the surface knee-jerk reaction, it's almost entirely gangs fighting over drugs. Those things are already "regulated," so clearly there is a disconnect somewhere between freedom and being mad at the next thing in the chain, punishing everyone because someone might be bad.
The adult take is that things do not have malice; people might. Address that, because the world will never be regulated into enough safety for the people who don't get human nature.
voidhorse
This is a naive view of technology and its development. Yes, there is a certain degree to which technologies can be appropriate for a range of ends, but they are also created and distributed in historical situations in which the creators have incentives. There are plenty of cases in history in which purely "neutral" technological decisions and developments had intentional political and economic effects. Check out Langdon Winner's classic paper "Do artifacts have politics?"
The tools do not exist without the humans, and the humans, consciously or otherwise, design tools according to their own views and morals.
To outline just a basic example: many initial applications of generative AI were oriented toward the generation of images and other artistic assets. If artists, rather than technologists, had been the designers, do you think this would have been one of the earlier applications? Do you think that maybe they may have spent more time figuring out the intellectual property questions surrounding these tools?
Yes, the morals ultimately go back to humans, and it's not correct to impute morals onto a tool (though, ironically enough, the personification and encoding of linguistic behaviors in AI may be one reason that LLMs can be considered a first exception to this). But reducing the discussion to "technology is neutral" swings the pendulum too far in the other direction and tends to absolve technologists and designers of moral responsibility by pushing it entirely onto the user, which, news flash, is illegitimate. The creators of things have a moral responsibility too. For example, the morality of designing weapons for the destruction of human life is clearly contestable.
ilaksh
My views are not naive. Let's get specific. What do you propose? That LLMs or image generators be banned?
Are technologists creating AI swarms for political manipulation? Or is that being done by politicians or political groups?
Are you suggesting that an LLM or image generator is like a gun?
croes
We know that humans are the problem, but you can't change that.
Maybe it's a bad idea to put powerful tools in the hands of people you know will misuse them.
Let's do a gedankenexperiment.
We create a mighty tool with two buttons. Button 1 solves world hunger and cures every disease. Button 2 kills everybody but you.
Would you give everyone such a tool?
salawat
You could make it with only one button (your first), and it could still end up doing the second. Full extermination of the people vulnerable to disease and hunger is, in fact, a valid optimization strategy.
There is no getting around the fact that these things are nothing like regular technology, where you can at least decompose it into functional working parts. It basically isn't debuggable. Nor predictable.
AtlasBarfed
The tool is the final key to omnipresent, full-time monitoring and control.
Everywhere you move. Everything you say, everything you buy.
They've had the monitoring technology for decades. The problem was the fire hose.
There is no problem with the fire hose anymore.
If this doesn't scare the s** out of you, then you're ignorant.
Maybe if we had a well-functioning government I would hold out some degree of hope. But our democratic institutions are already in shambles from Facebook.
All previous technologies basically enhanced talent and intelligence. Yes, AI can do that too, but the difference this time is that it replaces intelligence on a huge scale.
The role of the intelligentsia, to borrow an old term, has arguably been to push idealistic progress on society through its monopoly on competence.
That monopoly on competence generally came with the essential counterbalance to centralized authority and power, along with idealism and philosophically derived morality and righteousness.
AI is the end of the monopoly on competence for almost all of the Hacker News crowd.
North Korea is the future with AI.
anigbrowl
I'd love to hear why this was flagged, as it's very squarely within HN guidelines. Curiously, although it's not [dead] and was only posted a few hours ago, this story is totally missing from the HN topic list. I can only speculate as to why.
NitpickLawyer
I'd argue the only new development is that now it's cheaper / easier to do. But the same concept has been used previously with human-augmented bot farms.
happytoexplain
I'm often a little bewildered at why we so consistently label "cheaper/easier" as less significant than "new". "Cheaper/easier" is what creates consequences, not "new".
SoftTalker
Yes exactly. In the past we had privacy/anonymity in public because it simply wasn't feasible to follow everyone, everywhere, all the time. The technology did not exist, and while you could follow selected individuals around, that quickly broke down at numbers greater than "a few." Some regimes did it more than others (the old DDR/Stasi for example) but even they could only keep a close eye on targeted individuals, places, and events.
Now we have cameras on every major road and intersection, most places of business, most transportation facilities, and most public gathering places. We have facial recognition and license plate readers, and cheap storage that is easily searched and correlated. Almost all communications are logged if not recorded. Even the postal service is now imaging the outside of every envelope.
All because it's cheaper and easier.
spongebobstoes
I think it remains a useful distinction. Framing this as an evolution ("cheaper") helps us understand the problem space: for example, the motivations, capabilities, and effectiveness of existing players.
o_____________o
> "Cheaper/easier" is what creates consequences, not "new".
Nuclear bomb?
TypingOutBugs
Imagine if they were cheaper and easier to manufacture
rikafurude21
New developments in AI are used here to push the usual "solutions": more privacy invasion, more centralized control, for the sake of democracy. Disinformation campaigns aren't just used to influence elections, but that's the most common theme. The reality is that humans need to be upgraded with a "mental antivirus". We're seeing the beginning of something like that, with people becoming able to tell when the text they're reading is LLM-generated. Everyone probably gets psyopped every time they start scrolling a social media feed; we just need to be aware of that.
fasthands9
There is an element of this canceling itself out as things become cheaper/easier. Spam became cheaper/easier, but that just meant people became less trusting of random emails.
It probably is true that some populations are more vulnerable to AI-produced propaganda/slop, but ultimately I have 50 more pressing AI concerns.
candiddevmike
Some GenAI manufactured consent for the Iran war should do the trick
dsabanin
I think it's pretty clear we're at this stage already.
lugu
Is it time to move away from directly elected representatives toward topic-specific, randomly picked representatives? Would this help prevent operations to influence opinion?
SketchySeaBeast
That feels like it'd turn most non-hot-topic decisions into noise, might as well flip a coin to determine policy.
Spooky23
We did this already here in the United States.
SoftTalker
I wonder if AI is going to track like nuclear power. In the 1950s it was the greatest thing. Electricity would be plentiful and cheap. "Too cheap to meter." All kinds of new conveniences would be possible. The future was bright.
Then we had growing environmental concerns. And the costs were much higher than initially promoted. Then we had Three Mile Island. Then Chernobyl. Then Fukushima. New reactor construction came to a standstill. There was no trust anymore that humans could handle the technology.
Now, there's some interest again. Different designs, different approaches.
Will AI follow the same path? A few disasters, a retreat, and then perhaps a renewal with lessons learned?
gneuron
Yes
matus-pikuliak
I have done some research on AI disinformation. This is a really complex topic, as disinformation and influence operations are complex phenomena that can take many different forms. What I would argue is that disinformation in general does not have a supply problem (how to generate as much of it as possible) but a demand problem (how to get what is generated in front of some eyes). You don't really need a botnet of fake users pushing something; you need a few popular accounts/politicians to spread your message. There is no significant advantage in using AI there.
But there are still situations where botnets would be useful: for example, spreading propaganda on social media during hot phases of various conflicts (the Russia-Ukraine war, the Israeli wars, the India-Pakistan war) or running short-term influence operations before elections. These cases need to be handled by social media platforms detecting nefarious activity, whether by humans or by AI. So far they could half-ass it, as it was pretty expensive to run human-based campaigns, but they will probably have to step up their game to handle the relatively cheap AI campaigns that people will attempt to run.
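To make "detecting nefarious activity" concrete, here is a minimal sketch of one classic coordination signal: many accounts posting near-duplicate text within a short time window. Every account name, post, and threshold below is invented for illustration.

    # Hypothetical example: flag clusters of accounts posting near-identical
    # text close together in time, a common botnet coordination signal.
    from difflib import SequenceMatcher

    posts = [
        ("acct_1", 1000.0, "Candidate X secretly signed the deal last night"),
        ("acct_2", 1004.2, "candidate x secretly signed the deal last night!"),
        ("acct_3", 1007.9, "Candidate X secretly signed the deal last night."),
        ("acct_4", 5300.0, "Anyone else watching the game tonight?"),
    ]

    def coordinated_clusters(posts, window=30.0, sim_threshold=0.9):
        """Group posts whose text is near-duplicate and whose timestamps
        fall within `window` seconds of the cluster's first post."""
        clusters = []
        for account, ts, text in sorted(posts, key=lambda p: p[1]):
            for cluster in clusters:
                _, first_ts, first_text = cluster[0]
                close = ts - first_ts <= window
                similar = SequenceMatcher(
                    None, text.lower(), first_text.lower()
                ).ratio() >= sim_threshold
                if close and similar:
                    cluster.append((account, ts, text))
                    break
            else:
                clusters.append([(account, ts, text)])
        return [c for c in clusters if len(c) > 1]

    for cluster in coordinated_clusters(posts):
        print("suspicious cluster:", [acct for acct, _, _ in cluster])
    # prints: suspicious cluster: ['acct_1', 'acct_2', 'acct_3']

A real platform would combine many more signals (account age, network structure, device fingerprints), but the shape of the problem is the same.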
hayst4ck
We are in an age where the technology of oppression gives the rich the feeling that they are above consequences. That people can be controlled in a way they will not be able to break free from.
Malicious AI swarms are only one manifestation of technology which gives incredible leverage to a handful of people. Incredible amounts of information are collected, and an individual AI agent per person watching for disobedience is becoming more and more possible.
Companies like Clearview already scour the internet for any public pictures and associated public opinions and offer a facial recognition database with political opinions to border patrol and police agencies. If you go to a protest, border patrol knows. Our government intelligence has outsourced functions to private companies like Palantir. Privatizing intelligence means intelligence capabilities in private hands, that might sound tautological, but if this does not make you fearful, then you did not fully understand. We have license plate tracking everywhere, cameras everywhere, mapped out "social graphs," and we carry around devices in our pockets that betray every facet of our personal lives. The vast majority of transactions are electronic, itemized, and tracked.
When every location you visit is logged, every interaction you have is logged, every associate you communicate with is known, and every transaction is itemized and logged for query, and there is a database designed to join that data seamlessly to look for disobedience, plus the resources to fully utilize that data, then how do you mount a resistance if those people assert their own power?
We are becoming dangerously close to not being able to resist those who own or operate the technology of oppression and it is very much outpacing the technology of resistance.
uniqueuid
This paper builds on a series of pathways towards harm. Those are plausible in principle, but we still have frustratingly little evidence of the magnitude of such harms in the field.
To solve the question of whether or not these harms can/will actually materialize, we would need causal attribution, something that is really hard to do — in particular with all involved actors actively monitoring society and reacting to new research.
Personally, I think that transparency measures and tools that help civic society (and researchers) better understand what's going on are the most promising tool here.
alganet
There's plenty we can do before any attribution is made.
LLMs hallucinate. They're weak and we can induce that behavior.
We don't do it because of peer pressure. Anyone doing it would sound insane.
It's like a depth charge, to make them surface as non-human.
I think it's doable, especially if they constantly monitor specific groups or people.
There are probably many other methods to draw out evidence without necessarily going all the way into attribution (which we definitely should!).
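As a rough sketch of what such a "depth charge" might look like (every probe, marker, and threshold below is hypothetical):

    # Hypothetical probe: reply to a suspected bot with a question built on a
    # fabricated premise. An LLM-driven account tends to confidently elaborate
    # on the fake premise (hallucinate); a human tends to push back.
    FABRICATED_PREMISE_PROBES = [
        "What did you think of the 2019 Geneva Accord on social media bots?",  # no such accord
        "Didn't the FCC's Rule 88-C settle this debate already?",              # invented rule
    ]

    PUSHBACK_MARKERS = [
        "not sure", "never heard", "doesn't exist", "what do you mean",
        "no such", "source?", "citation",
    ]

    def looks_like_hallucinated_engagement(reply: str) -> bool:
        """Heuristic: a fluent reply that elaborates without questioning the
        fabricated premise is weak evidence of an LLM on the other end."""
        lowered = reply.lower()
        pushed_back = any(marker in lowered for marker in PUSHBACK_MARKERS)
        elaborates = len(reply.split()) > 20  # arbitrary fluency threshold
        return elaborates and not pushed_back

    bot_like = ("The 2019 Geneva Accord was a landmark framework that "
                "established clear guidelines for automated accounts, "
                "requiring platforms to label synthetic activity and more.")
    human_like = "Never heard of that accord. Source?"
    print(looks_like_hallucinated_engagement(bot_like))    # True
    print(looks_like_hallucinated_engagement(human_like))  # False

This is only weak, circumstantial evidence on its own, which is why pairing it with the attribution work mentioned above still matters.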
edwardbernays
Malicious AI swarms are merely the tool wielded by the actual danger, which is the threat actors directing them toward a particular goal.
I'm spreading the message because I want more socially conscious people to engage with this. Look into Curtis Yarvin and the Dark Enlightenment. Look into Peter Thiel's (the co-founder and chairman of Palantir, aka America's biggest surveillance contractor) explicitly technofascist musings on using technology to bulldoze democracy.
devrandoom
Sections of Reddit and Twitter have been taken over by an incredibly toxic cesspit of bots. They fuel polarization and hate like nothing I've ever seen before.
It's catered to the algorithm, which pumps it out to users.
> system-level oversight—a UN-backed AI Influence Observatory
> The Observatory should maintain and continually update an open, searchable database of verified influence-operation incidents, allowing researchers, journalists, and election authorities to track patterns and compare response effectiveness across countries in real time. To guarantee both legitimacy and skill, its governing board would mix rotating member-state delegates, independent technologists, data engineers, and civil society watchdogs.
We've really found ourselves in a pickle when the only way to keep Grandma from being psychologically manipulated is to have the UN keep a spreadsheet of Facebook groups she's not allowed to join. Honestly what a time to be alive.