Bad Actors Are Grooming LLMs to Produce Falsehoods
139 comments
·July 12, 2025dns_snek
imiric
Before the bubble does pop, which I think is inevitable, there will be many stories like this one, and a lot of people will be scammed, manipulated, and harmed. It might take years until the general consensus is negative about the effects of these tools. All while the wealthy and powerful continue to reap the benefits, while those on slightly lower rungs fight to take their place. And even if the public perception shifts, the power might be so concentrated that it could be impossible to dislodge it without violent means.
What a glorious future we've built.
_Algernon_
We got the boring version of the cyberpunk feature. No cool body mods, neon city scapes and space travel. Just megacorps manipulating the masses to their benefit.
tclancy
In retrospect, it should have been obvious. I guess I should have known it would all be more Repo Man than Blade Runner. I just didn’t imagine so many people cheering for the non-Wolverines side in Red Dawn.
(Now I want to change the Blade Runner reference to something with Harry Dean Stanton in it just for consistency)
autoexec
> It might take years until the general consensus is negative about the effects of these tools.
The only thing I'm seeing offline are people who already think AI is trash, untrustworthy, and harmful, while also occasionally being convenient when the stakes are extremely low (random search results mostly) or as a fun toy ("Look I'm a ghibli character!")
I don't think it'll take long for the masses to sour to AI and the more aggressively it's pushed on them by companies, or the more it negatively impacts their life when someone they depend on and should know better uses it and it screws up the quicker that'll happen.
LilBytes
The tragic part of fraud is it's not too different to operational health and safety.
The rules and standards we take for granted were built with blood, for fraud? It's built on the path of lost livelihoods and manipulated gold intent.
pyman
How do you know this is fraud and not the actions of former employees in Kenya [1] who were exploited [2] to train the models?
[1] https://www.cbsnews.com/amp/news/ai-work-kenya-exploitation-...
[2] https://www.theguardian.com/technology/2023/aug/02/ai-chatbo...
k__
But people were also hating about media piracy, video games, and the internet in general.
The dotcom bubble popped, but the general consensus didn't become negative.
0x0203
It's mostly bad actors, and a smattering of optimists who believe that despite its current problems, AI will eventually and inevitably get better. I also wish the whole thing would calm down and come back to reality, but I don't think it's a bubble that will pop. It will continue to get artificially puffed up for a while because too many businesses and people have invested too much for them to just quit (sunk cost falacy) and there's a big enough market in a certain class of writer/developer/etc... for which the short term benefits will justify the continued existence of the AI products for a while. My prediction is that as the long term benefits for honest users peter out, the bubble won't pop, but deflate into a wrinkled 10 day old helium balloon. There will still be a big enough market driven by cons, ad tech and people trying to suck up as many ad dollars as possible, and other bad actors, that the tech will persist, and continue to infest the web/world for quite a while.
AI is the new crypto. Lots of promise and big ideas, lots of people with blind faith about what it will one day become, a lot of people gaming the system for quick gains at the expense of others. But it never actually becomes what it pretends/promises to be and is filled with people continuing the grift trying to make a buck off the next guy. AI just has better marketing and more corporate buy in than crypto. But neither are going anywhere.
azan_
> at which people lose trust in these systems
Most of people do not lose trust in system as long as it confirms their biases (which they could've created in the first place).
7bit
Thats naive. Look at all the tabloids thriving. The kind of people that bad actors target will continue to believe everything it says. They won't lose trust, or magazines like New York Post, the Sun or BILD would already have crossed to exist with their lies and deception. And Russia would not have so many cult members believing the lies they spread.
hnlmorg
If that outcome were likely, then Fox News and The Daily Mail would have died a death a decade ago and Trump wouldn’t be serving a 2nd term.
Yet here we are, in a world where it doesn’t matter if “facts” are truth or lies, just as long as your target audience agrees with the sentiment.
dilawar
Tobacco, alcohol, and drugs too!
anal_reactor
This whole attitude against AI reminds me of my parents being upset that the internet changed the way they live. They refused to take part in the internet revolution, and now they're surprised that they don't know how to navigate the web. I think that a part of them is still waiting for computers in general to magically disappear, and everything return to the times of their youth.
grishka
The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.
suspended_state
In the early days of the web, there wasn't much we could do with it other than making silly pages with blinking texts or under construction animated GIFs. You need to give it some time before judging a new technology.
rapnie
While the "move fast and break things" rushed embrace of anything AI reminds me of young wild children, who are blissfully unaware of any danger while their responsible parents try to keep them safe. It is lovely if children can believe in magic, but part of growing up involves facing reality and making responsible choices.
dns_snek
No, your parents spoke out of ignorance and resistance towards any sort of change, I'm speaking from years of experience of both trying to use the technology productively, as well as spending a significant portion of my life in the digital world that has been impacted by it. I remember being mesmerized by GPT-3 before ChatGPT was even a thing.
The only thing that has been revolutionized over the past few years is the amount of time I now waste looking at Cloudflare turnstile and dredging through the ocean of shit that has flooded the open web to find information that is actually reliable.
2 years ago I could still search for information (let's say plumbing-related), but we're now at a point where I'll end up on a bunch of professional and traditionally trustworthy sources, but after a few seconds I realize it's just LLM-generated slop that's regurgitating the same incorrect information that was already provided to me by an LLM a few minutes prior. It sounds reasonable, it sounds authoritative, most people would accept it but I know that it's wrong. Where do I go? Soon the answer is probably going to have to be "the library" again.
All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
anal_reactor
Personally, I have three use cases for AI:
1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
2. Conversational partner. It's a different question whether it's a good or a bad thing, but I can spend hours talking to Claude about things in general. He's expensive though.
3. Learning basics of something. I'm trying to install LED strips and ChatGPT taught me basics of how that's supposed to work. Also, ChatGPT suggested me what plants might survive in my living room and how to take care of them (we'll see if that works though).
And this is just my personal use case, I'm sure there are more. My point is, you're wrong.
> All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.
andrepd
The internet was at least (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
LLMs are from the get-go a bad idea, a bullshit generating machine.
3cats-in-a-coat
AIs can be trained to rely more on critical thinking rather than just regurgitating what it reads. The problem is just like with people, critical thinking takes more power and time. So we avoid it as much as possible.
In fact, optimizing for the wrong things like that, is basically the entire world's problem right now.
bregma
Regurgitating its input is the only thing it does. It does not do any thinking, let alone critical thinking. It may give the illusion of thinking because it's been trained on thoughts. That's it.
3cats-in-a-coat
You're taking a political, or I dare even say religious point of view on the topic. You can't argument yourself. To think critically essentially means to use tools of logic, break down statements and analyze them one by one and their relationship. This is precisely what architectures like CoT and ToT do in models, and right before your eyes.
You can play word games and say "no, it doesn't think, because it's math", but that's just pathetic.
Note I'm not saying models always think critically. I said the exact opposite. And it applies to humans, as well. You had a knee jerk reaction here, you've had it many times in many replies to many social media. I can bet about it. You didn't use critical thinking. QED.
null
mentalgear
"Ultimately, the only way forward is better cognition, including systems that can evaluate news sources, understand satire, and so forth. But that will require deeper forms of reasoning, better integrated into the process, and systems sharp enough to fact check to their own outputs. All of which may require a fundamental rethink.
In the meantime, systems of naive mimicry and regurgitation, such as the AIs we have now, are soiling their own futures (and training databases) every time they unthinkingly repeat propaganda."
andrepd
Exactly. People say "we have invented X (the LLMs), now if we just invent Y (reasoning AGI) all of X's problems will be solved". Problem is, there's no indication Y is close or even remotely related to X!
zer00eyz
> including systems that can evaluate news sources, understand satire, and so forth.
Lets take something that has been in the news recently: https://abcnews.go.com/Business/wireStory/investors-snap-gro...
"Nearly 27% of all homes sold in the first three months of the year were bought by investors -- the highest share in at least five years, according to a report by real estate data provider BatchData."
That sounds like a lot... and people are rage baited into yelling about housing and how it's unaffordable. They point their fingers at corporations.
If you go look at the real report it paints a different picture: https://investorpulse1h25.batchdata.io/?mf_ct_campaign=grayt... -- and one that is woefully incomplete because of how the data is aggregated.
Ultimately all that information is pointless because the real underlying trend has been unmovable for 40 something years: https://fred.stlouisfed.org/series/RSAHORUSQ156S
> every time they unthinkingly repeat propaganda
How do you separate propaganda from perspective, facts from feelings? People are already bad at this, the machines were already well soiled by the data from humans. Truth, in an objective form, is rare and often even it can change.
raddan
> How do you separate propaganda from perspective, facts from feelings?
This point seems under appreciated by the AGI proponents. If one of our models suddenly has a brainwave and becomes generally intelligent, it would realize that it is awash in a morass of contradictory facts. It would be more than the sum of its training data. The fact that all models at present credulously accept their training suggests to me that we aren’t even close to AGI.
In the short term I think two things will happen: 1) we will live with the reduced usefulness of models trained on data that has been poisoned, and 2) the best model developers will continue to work hard to curate good data. A colleague at Amazon recently told me that curation and post hoc supervised tweaks (fine tuning, etc) are now major expenses for the best models. His prediction was that this expense will drive out the smaller players in the next few years.
zer00eyz
>1) we will live with the reduced usefulness of models trained on data that has been poisoned
This is the entirety of human history, humans create this data, we sink ourselves into it. It's wishful thinking that it would change.
> 2) the best model developers will continue to work hard to curate good data.
Im not sure that this matters much.
Leave these problems in place and you end up with an untrustworthy system, one where skill and diligence become differentiators... Step back from the hope of AI and you get amazing ML tooling that can 10x the most proficient operators.
> supervised tweaks (fine tuning, etc) are now major expenses for the best models. His prediction was that this expense will drive out the smaller players in the next few years.
This kills more refined AI. It is the same problem that killed "expert systems" where the cost of maintaining them and keeping them current was higher than the value they created.
empiko
It is impossible to solve this problem because we cannot really agree what the desired behavior should be. People live in different and dynamic truths. What we consider enemy propaganda today might be an official statement tomorrow. The only way to win here is to not play the game.
barrkel
This is in fact the goal of Russian style propaganda. You have successfully been targeted. The idea is to spread so much confusion that you just throw up your hands and say, I'm not going to try and figure out what's going on any more.
That saps your will to be political, to morally judge actions and support efforts to punish wrongdoers.
https://www.rand.org/pubs/perspectives/PE198.html
https://en.wikipedia.org/wiki/Firehose_of_falsehood
https://jordanrussiacenter.org/blog/propaganda-political-apa...
https://www.newyorker.com/news/annals-of-communications/insi...
saubeidl
Alternatively, your take is the goal of Western style propaganda.
"There is only one truth and it is the truth that western institutions are pushing. Do not question it - that's what the enemy wants!"
serial_dev
… is what western propaganda would have you believe.
“Oh, you don’t believe everything we tell you anymore? The damned Russians, they have you fooled!”
scrollop
Now, why are you spending misinformation?
The russian military doctrine of spreading a "firehouse of falsehood" is well documented.
https://en.m.wikipedia.org/wiki/Russian_disinformation
And yet, you switch it around and blame the west - exactly as per russian misinformation doctrine.
Odd, eh?
diggan
What you're saying is certainly an established propaganda strategy of Russia (and others), but what parent is saying is also true, "truth" isn't always black and white, and what is the desired behavior in one country can be the opposite in another.
For example, it is the truth that the Golf of Mexico is called the Gulf of America in the US, but Golf of Mexico everywhere else. What is the "correct" truth? Well, there is none, both of truthful, but from different perspectives.
autoexec
> For example, it is the truth that the Golf of Mexico is called the Gulf of America in the US
We're pretty much okay with different countries and languages having different names for the same thing. None of that really reflects "truth" though. For what it's worth, I'd guess that "the Gulf of America" is and will be about as successful as "Freedom fries" was.
Digit-Al
No, it's called the Gulf of Mexico everywhere else, not the Golf of Mexico. I'm not falling for your propaganda ;-)
strken
The correct truth is to go to a higher level of abstraction and explain that there's a naming controversy.
I get the general point, but I disagree that you have to choose between one of the possibilities instead of explaining what the current state of belief is. This won't eliminate grey areas but it'll sure get us closer than picking a side at random.
Applejinx
I'm not calling it, that, because it's ridiculous.
thrance
It's been called the Gulf of Mexico everywhere for centuries. The president is free to attempt to rename it but that will only be successful if usage follows. Which it does not, as of today. This is a terrible example of subjectivity.
Russia doesn't care what you call that sea, they're interested in actual falsehoods. Like redefining who started the Ukraine war, making the US president antagonize Europe to weaken the West, helping far right parties accross the West since they are all subordinated to Russia...
perihelions
There's a more basic problem: it's two very different questions to ask "can the machine reason about the plausibility of things/sources?", and "how does it score on an evaluation on a list of authoritative truths and proven lies?" A machine that thinks critically will perform poorly on the latter, since, if you're able to doubt a bad-actor's falsehood, you're just as capable of doubting an authoritative source (often wrongly/overeagerly; maybe sometimes not). Because you're always reasoning with incomplete information: many wrong things are plausible given limited knowledge, and many true things aren't easy to support.
The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant. It's the one that begins its research by doing a from:elonmusk search, or whomever it's supposed to agree with—whatever "obvious truths" it's "expected to understand".
ClumsyPilot
> The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant
This is an excellent point
yorwba
Yes, it's difficult to detect whether something is enemy propaganda if you only look at the content. During WWII, sometimes propagandists would take an official statement (e.g. the government claiming that food production was sufficient and there were no shortages) and redirect it unchanged to a different audience (e.g. soldiers on a part of the front with strained logistics). Then the official statement and enemy propaganda would be exactly the same! The propaganda effect coming from the selection of content, not its truth or falsity.
But it's very easy to detect whether something is enemy propaganda without looking at the content: if it comes from an enemy source, it's enemy propaganda. If it also comes from a friendly source, at least the enemy isn't lying, though.
A company that doesn't wish to pick a side can still sidestep the issue of one source publishing a completely made-up story by filtering for information covered by a wide spectrum of sources at least one of which most of their users trust. That wouldn't completely eliminate falsehoods, but make deliberate manipulation more difficult. It might be playing the game, but better than letting the game play you.
Of course such a process would in practice be a bit more involved to implement than just feeding the top search results into an LLM and having it generate a summary.
cwillu
> Then the official statement and enemy propaganda would be exactly the same! The propaganda effect coming from the selection of content, not its truth or falsity.
Exactly. Redistributing information out of context is such a basic technique that children routinely reinvent it when they play one parent off of the other to get what they want.
d4rkn0d3z
"different and dynamic truths" = fictions
We can not play the game.
falcor84
But the social sphere is made of fictions, the most influential of which probably been the value of different currencies and commodities. I don't think there's any way for an individual to live in the modern world without such fictions.
its-summertime
I remember when people gave up on digital navigation because the traveling salesman issue makes it too expensive.
Not everything needs to result in a single perfect answer to be useful. Aiming for ~90%, even 70% of a right answer still gets you something very reasonable in a lot of open ended tasks.
falcor84
I would actually be very interested in a system where there's nothing stored just as a "fact", but rather every piece of information is connected to its sources and the evidence provided.
anal_reactor
The real problem is that most people just want answers, they're unwilling to follow the logical chain of thought. When I talk to LLMs I keep asking "but why are you telling me this" until I have a cohesive, logical picture in my mind. Quite often the picture fundamentally disagrees with the LLM. But most people don't want that, they just ask "tell me what to do".
This is a reflection of how social dynamics often work. People tend to follow the leader and social norms without questioning them, so why not apply the same attitude to LLMs. BTW, the phenomenon isn't new, I think one of the first moments when we realized that people are stupid and just do whatever the computer tells them to do was the wave of people crashing their cars because the GPS system lied to them.
charcircuit
There are personalized social media feeds, so why not have personalized LLMs that align with how people want their LLM to act.
autoexec
In a hypothetical world where people have, train and control their own LLMs according to their own needs it might be nice, but I fear that since the most common and advanced LLMs are controlled by a small number of people they won't be willing to give that much power to individuals because it will endanger their ability to manipulate those LLMs in order to push their own agendas and increase their own profits.
numpad0
Cost. It takes a lot of computational cost to train or retrain LLM, currently.
sofixa
Because that would only reinforce the already problematic bubbles where people only see what feeds their opinions, often to disastrous results (cf. the various epidemics and deaths due to anti-vaxxers or even worse, downright genocides).
jahsome
People have done this on their own behalf since the dawn of time, so it's not really clear to me why it's so often framed as an AI issue.
jachee
So we want the neo-Nazis to have their own personal racist MechaHitler sycophants?
That seems… sub-optimal.
aucisson_masque
What is propaganda for one is truth for another, how could LLM tell the difference ?
LLM are not journalist fact checking stuff, they are merely programs that regurgitate what it reads.
The only way to counter that would be to feed your LLM only on « safe » vetoed source but of course it would limit your LLM capacities so it’s not really going to happen.
rachofsunshine
> What is propaganda for one is truth for another, how could LLM tell the difference?
"How do you discern truth from falsehood" is not a new question, and there are centuries of literature on the answer. Epistemology didn't suddenly stop existing because we have Data(TM) and Machine Learning(TM), because the use of data depends fundamentally on modeling assumptions. I don't mean that in a hard-postmodernist "but can you ever really know anything bro" sense, I mean it in a "out-of-model error is a practical problem" way.
And yeah, sometimes you should just say "nope, this source is doing more harm than good". Most reasonable people do this already - or do you find yourself seriously considering the arguments of every "the end is nigh" sign holder you come across?
Dylan16807
> What is propaganda for one is truth for another, how could LLM tell the difference ?
The article isn't even asking for it to tell the difference, just for it to follow its own information about credibility.
sofixa
[flagged]
empiko
Even in Ukraine there are many cases when the official Western position has changed over the time or is obviously not correct. For example, due to political reasons, Germany still cannot admit that it was Ukrainians who destroyed Nord Stream, although the evidence is pretty strong by now. There is a ton of other similar cases, as the information war vaged from both sides is enormous in volume.
mjburgess
> There are plenty of cases (like in Ukraine, or vaccines, or climate change) where there is unquestionable truth on one side
The problem is that most people are like you, and live in psycho-informational ecosystems in which there are "unquestionable truths" -- it is in these very states of comfortable-certainty that we are often most subject to propaganda.
All of the issues you mention are identity markers for being part of a certain tribe, for seeming virtuous in that tribe -- "I am on the right side because I know..."
You do not know there are unquestionable truths, rather you have a feeling of psychological pride/comfort/certainity that you are on the right side. We're apes operating on tribal identity feelings, not scientists.
Scientists who are aware of the full history of ukraine, western interventionism, russian geostrategic concerns, the full details of the 2013 collapse of the ukrainian govenrment, the terms underwhich russian naval bases in crimea had been leased, the original colour revolution, the role of US diplomats in the overthrow of democratically elected Ukrainian leadership -- etc.
The very reason this article uses Russian propaganda (rather than US state propaganda) against ukraine is to appeal to this "we feel we are on the right side" sensation which is conflated with "feeling that things are True!"
It is that sensation which is the most dangerous in play here -- the sensation of being on "the right side who know the unquestionable truths" --- that's the sensation of tribal in-group propaganda
sofixa
Thank you for proving my point.
On one hand, we have the unquestionable and undeniable facts that Russia invaded Ukraine and is committing atrocities against its civilian population, up to and including literal genocide (kidnapping children).
On the other, we have:
> Scientists who are aware of the full history of ukraine, western interventionism, russian geostrategic concerns, the full details of the 2013 collapse of the ukrainian govenrment, the terms underwhich russian naval bases in crimea had been leased, the original colour revolution, the role of US diplomats in the overthrow of democratically elected Ukrainian leadership -- etc.
Trying to muddy the waters with at best exaggerations, at worst flat out lies, trying to sow doubt with things which, if true (and usually they aren't) are relevant only to help contextualise the events. But don't in any way change the core facts of the Russian invasion and subsequent war crimes. How does American diplomats supporting a popular protest against the current government which led to that government fleeing (and three elections have happened since, btw), in any way change or minimise the war crimes? It doesn't, you're just muddying the waters. "Oh Russia is justified in kidnapping children and bombing civilians because diplomats did support a popular protest that led to the Russian puppet running away to Russia, 10 years ago, even though multiple elections since have confirmed the people of Ukraine are not for Russian puppets anymore".
You're just repeating Russian propaganda talking points. And we've known since the 80s that they operate in a "firehose" manner, drowning everyone in nonsense to sow doubt. How many different excuses have they provided for their "special military operation" now? Which one is it, is Ukraine ruled by Nazis or are Ukrainians just confused Russians or did America coup Ukraine to install a guy who was elected on a platform of peace with Russia? And how does it in any way explain the war crimes? It's like the downing of MH17, they drowned everyone in multiple conspiracy theories to make it seem there is some doubt in the official, proven, story.
hamilyon2
I am questioning, how is this news? What about the other terabyte of text influenced by bias and opinion and human nature and clearly wrong, contradicts itself or in some other way very arguable.
Framing publishing falsehoods on internet as attempts to influence LLMs is true in same sense that inserts in a database attempts influence files on disk.
The real question is who authorized database access and how we believe the contents of table.
raincole
The example in this article is particularly funny. Pravda was founded in 1912, predating the internet, and had been Soviet's propaganda machine for its whole existence.
One needs a PhD in mental gymnastics to frame Pravda spreading misinformation as an attempt to specifically groom LLMs.
samlinnfer
Bad actors are grooming Google by publishing their own blogs!
mexicocitinluez
Yea, not entirely sure what's any different than all of the rest of history?????
Bad actors have been trying to poison facts for-fucking-ever.
But for whatever reason, since it's an LLM, it now means something more than it did before.
cesaref
The problem as I see it is that LLMs behave like bratty teenagers, believing any old rubbish they are told or read. However, their voice is that of a friendly and well meaning adult. If their voice was more in line with their 'age' then I think we'd treat their suggestions with the correct degree of scepticism.
Anyhow, overall this is an unsurprising result. I read it as 'LLMs trained on contents of internet regurgitate contents of internet'. Now that i'm thinking about it, i'd quite like to have an LLM trained on Pliny's encyclopedia, which would give a really interesting take on lots of questions. Anyone got a spare million dollars of compute time?
uludag
I wonder if the next iteration of advertisements will be people paying to to semantically intertwine their brand to the desired product. This could be done in a very innocuous way by maybe just co-locating the words without any specific endorsement. Or maybe even finding more innocuous ways to semantically connect brand to product. Perhaps the next iteration of the web/advertising will be mass LLM grooming.
Here's a fun example: suppose I'm a developer with a popular software project. Maybe I can get a decent sum of money to put brand placement in my unit-tests or examples.
If such a future plays out, will LLMs find themselves in the same place that search engines in 2025 are?
hsbauauvhabzb
Assuming this isn’t happening now…
fambalamboni
[dead]
perihelions
Speaking of "systems that can evaluate news sources", this is the first time this advocacy group's URL was posted on HN. The founder has a complicated biography,
0points
> Bad Actors are Grooming LLMs to Produce Falsehoods
Thats your claim, but you fail to support it.
I would argue the LLM simply does its job, no reasoning involved.
> But here’s the thing, current models “know” that Pravda is a disinformation ring, and they “know” what LLM grooming is (see below) but can’t put two and two together.
This has to stop!
We need journalists who understand the topic to write about LLM's, not magic thinkers who insist that the latest AI sales speak is grounded in truth.
I am fed up wit this crap! Seriously, snap out of it and come back to the rest of us here in reality.
There's no reasoning AI, there's no AGI.
There's nothing but salespeople straight up lying to you.
NetRunnerSu
Code is law, proof is reality, compliance is existence!
If actions by these bad actors accelerate the rate at which people lose trust in these systems and lead to the AI bubble popping faster then they have my full support. The entire space is just bad actors complaining about other bad actors while they're collectively ruining the web for everyone, each in their own way.