Weaponizing Ads: How Google and Facebook Ads Are Used to Wage Propaganda Wars
136 comments
September 9, 2025 · jpadkins
const_cast
You're correct; the solution, then, is to just block ads altogether.
The reality is that ads are the primary vehicle for malicious content, whether it be malware, scams, or deception, on the web.
Google, as well as Meta, has demonstrated that it does not take adequate measures to block said malicious content. This can lead to tangible real-world harms, such as getting scammed and losing your life savings.
Therefore, every web user should use a strict ad blocker per FBI recommendations. This is no longer a business question or a free-speech question, it is a computer system security question.
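For what it's worth, the bluntest form of this is DNS-level blocking via a hosts file; a browser extension such as uBlock Origin filters far more precisely, but the idea can be sketched like this (illustrative entries only):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
# Resolve known ad-serving domains to a non-routable address
# so requests to them fail before any ad is fetched.
0.0.0.0 doubleclick.net
0.0.0.0 googlesyndication.com
```

A hosts file can only block whole domains, so most people pair it with (or replace it by) an in-browser blocker that can match individual URLs and page elements.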
pyrale
Are you saying Google does not apply editorial oversight to the ads they run? To the best of my knowledge, Google does restrict who can advertise with them, and their decisions are final and not subject to judicial oversight.
In that context, what Google chooses to allow and what they ban is newsworthy. In this specific case, even more so, since the ads violate Google's own rules.
AnthonyMouse
Don't fall into the trap of "everything not mandatory is prohibited".
Google doesn't really want scam ads. It doesn't make a lot of sense to penalize them for removing some of them just because their process isn't perfect; removing them doesn't have to be banned.
But if you make not removing them mandatory, you're replacing the justice system with a private corporation, which is pretty crazy. If the police accuse you of a crime, they have to prove it to a judge and jury. You can appeal to a higher court. Google doesn't have that. And if you add liability for not removing something, they have to err on the side of removing things they ought not to, with no recourse for the victims. Competitor wants you out of the search results? Report it to Google and you're out, because they get a billion complaints and removing them by default is safer than getting prosecuted for missing a real one.
The correct solution is to let Google remove things that are bad without punishing them for not being perfect -- maybe even err on the side of imposing liability for removing things they shouldn't instead of not removing things they should -- and rely on the criminal justice system for going after the criminals.
jpadkins
Ad networks apply editorial oversight with respect to their own published policies.* I am not aware of any ad network policies that approach the subject of "what is true" or "what is propaganda". They also apply restrictions based on what they are legally liable for, which is fairly narrow today (i.e., child porn, substances harmful to minors, etc.)
Forcing ad networks to be the main arbiters of what is true vs. propaganda is a huge step towards an Orwellian society.
* Some policies related to the concept of truth are the ones dealing with scams or fraudsters. Even then, the scope is only "does this advertiser actually provide the service they claim to provide", which is far more objective than anything related to war, religion, or the Middle East.
specproc
The article does not document isolated cases of individual free speech, but a coordinated campaign of government-run propaganda.
conover
The government (and/or society?) has already deputized private organizations to enforce various types of controls, either implicitly or explicitly. Banks (AML) and payment processors (the recent Steam content-removal news) come to mind. Irrespective of whether it's a good or bad thing, it already exists.
dghlsakjg
Just because something already exists doesn't mean that we want more of it.
jpadkins
The banks don't determine whether you are a terrorist or whatnot. They comply when a judge gives them a lawful order to freeze accounts. I think it's okay to deputize corporations for the execution of the law in the digital world. You really, really don't want the federal government or mega corps determining what is "truth" vs. "propaganda". That's way too much power for society to centralize. Decisions on these nuanced issues need due process and need to be decentralized.
some_random
It's not just lawful court orders, over the years many explicit and implicit "suggestions" about "risk" have been issued to banks to discourage activity deemed undesirable. https://en.wikipedia.org/wiki/Operation_Choke_Point
xalava
That is not how AML-CFT works. Banks calculate your level of risk. When in doubt, they will cut you off or block individual transactions, unless the benefits outweigh the risks.
gruez
>The Banks don't determine if you are a terrorist or what not. They comply with the order when a judge gives them a lawful order to freeze accounts.
How do you think this works in reality when the people getting sanctioned try to bypass the sanctions by creating shell companies and false identities? You either have a totally ineffective sanctions regime, because it can be trivially bypassed by setting up new shell companies, or a vaguely effective one, because banks are deputized to figure out whether their customers are sanctioned. Luckily we have the latter.
xalava
Your idea is that the U.S. government lawfully prosecutes foreign governments, including hostile ones?
adrr
Just repeal Section 230 and we can make the court system the arbiter of truth. Meta/Google don't care what ads they run because they have no incentive to stop misinformation; in fact, they make money off of it.
toast0
If someone is putting up illegal ads, shouldn't you file suit against the advertiser?
Advertising is a commercial activity, so it should be reasonable to follow the money and find the advertiser. If necessary, add more requirements for advertisers to be identified/identifiable so that suits can be served.
bjourne
Google and Meta penalized and curbed Covid vaccine misinformation. No reason they cannot do the same with state-sponsored Zionazi propaganda.
albulab
As I said, it's terribly amusing that you paint yourself as the victim of some Zionist conspiracy, while this response is itself a Muslim propaganda plot.
adhamsalama
Are you OK with spreading genocide-denial propaganda?
albulab
This article brings up important questions about digital influence in wartime, but it's hard to ignore how one-sided the framing is when it comes to Israel.
There's barely a mention of the October 7 massacre, in which over 1,200 Israelis were murdered and hundreds taken hostage, some of them children. That's the context behind Israel's messaging. Leaving that out gives a very distorted picture of why these campaigns exist in the first place.
The article criticizes Israel for running ads that target UNRWA, but completely skips the fact that more than a dozen UNRWA staff were accused of actively participating in the massacre and holding hostages. That allegation was serious enough for countries like the US, Germany, the UK, and Australia to suspend their funding. That's not "disinformation"; that's a real international response.
There’s also zero mention of Hamas’s own propaganda operations. No discussion of how they use Telegram, TikTok, or social platforms to push graphic and often fake content to manipulate global opinion. If we're talking about the weaponization of information, how is that not relevant?
Instead, the article spends thousands of words dissecting Israel’s side while ignoring everything else. It presents only one narrative and wraps it in a moral argument that conveniently excludes key facts and context.
A fair critique would examine how all sides are using digital tools in modern conflicts, not just the one the author disagrees with politically. Otherwise, it’s not an analysis. It’s just a well-written piece of propaganda in itself.
ehnto
It is a disservice to yourself and your family not to block ads. You shouldn't let these companies have a conduit into your life.
I don't think it's amoral to use a service whose ads you block, either. If they don't like it, they can try a new business model. They don't protect you; why should you protect them?
philipallstar
> If they don't like it, they can try a new business model
I agree with this logic. I own multiple Porsches because I don't think it's amoral to steal them from dealerships. If they don't like it, they can try a new business model.
const_cast
Stealing is illegal, blocking ads isn't. The FBI recommends you use an ad blocker.
If anything, ads steal from YOU. They take your time and attempt to get you to part with your money.
You're not obligated to support a business model based on theft, if you want to consider it that. You're not obligated to support any business model.
If it's allowed, then go for it. They can always switch to another business model.
snapcaster
Okay, continue to let your mind get polluted out of some bizarre sense of moral duty to a faceless corporation
_Algernon_
Stealing a physical thing = the previous owner can't use it.
Copying a thing or accessing a platform = the previous owner can still use or sell it.
Even if you consider it unethical access, the comparison to stealing really misses the mark.
philipallstar
> the comparison to stealing really misses the mark
I know this always triggers a hard-coded response based on regex, but the comparison doesn't rely on the specifics of stealing, so it's not a valid criticism. The logic is: people offer things in exchange for a price. You can take the things in exchange for the price, or you can leave the things. You shouldn't take the things without paying the price.
jpadkins
OK, call it theft of services. You used the service but blocked how the creator makes money on it. Is it really different from someone who runs out of a barbershop or restaurant without paying?
gchamonlive
At this point, whoever opposes big tech regulation is in favor of these kinds of abuses happening. Here in Brazil there is a big discussion around this regulation but for other reasons, like the social media algorithms pushing child abuse content to potential pedophiles.
The criticisms of these regulations are all valid and need to be discussed, because we also don't want to create these mechanisms at the government level only for the next authoritarian president to use them for their own personal agenda. But all this discussion should be about how these companies are going to be regulated, not how they aren't.
gruez
Can you imagine this logic being applied to any other topic?
>At this point, whoever opposes [CSAM scanning/encryption backdoors] is in favor of [child abuse/criminal activity] ...
gchamonlive
This is a strawman designed to misdirect the discussion. How about we keep discussing regulation in the context of ad abuse?
A valid criticism would be the implied false dichotomy in my original comment (either regulation or rampant corporate abuse). My idea is for us to discuss this. Is regulation not the right way? What's the alternative? Not "oh, if it doesn't work for the entire universe of applicable solutions, it doesn't deserve merit".
gruez
>This is a stawman designed to misdirect the discussion. How about we keep discussing regulation in the context of ad abuse?
I can't see how my comment is a "strawman" in any meaningful sense.
>A valid criticism would be an implied false dichotomy in my original comment
That's exactly my point. Adopting a "you're either with us or against us" attitude is totally toxic, and shouldn't be accepted just because it's for a cause you happen to agree with.
>My idea is for us to discuss this. Is regulation not the right way? What's the alternative? Not, "oh if that doesn't work for all possible universe of applicable solutions, it doesn't deserve merit"
If you wanted an intelligent discussion of what regulation should consist of, what's the point of starting off with such an absolutist remark? What does it add compared to something like "what's the right form of regulation to address this?"
red_trumpet
You are comparing the regulation of business practices to the breach of human rights. Do you also think your water company should be allowed to poison the water coming from your tap?
gruez
>You are comparing the regulation of business practices to the breach of human rights.
So "you're either with us or on the side of the bad guys" is a valid form of argument, but only when the bad guys are evil corporations? More to the point, many of the proposed "regulations" do end up infringing on human rights. For instance, regulations forcing social media companies to remove "disinformation" or "content causing hatred/discomfort" necessarily limit others' freedom of speech.
buellerbueller
The discussion, however, is about this topic, which is meaningfully different due to the sheer scale of the abuses occurring. Entire populations are being subjected to propagandistic brainwashing. That scale is not happening in your example.
gjm11
> like the social media algorithms pushing child abuse content to potential pedophiles
There's something weird about this complaint, isn't there? I mean, it's horrifying if social media algorithms are pushing child abuse content to anyone, but so far as I can see it isn't worse if the people they're showing it to are paedophiles. Maybe it's even a bit less bad since they're less likely to be distressed by it.
I think there's something deformed about a lot of the moral discourse around this stuff -- as if what matters is making sure that Those Awful People don't get anything they want rather than making sure bad things don't happen. (Far and away the most important bad thing associated with child abuse is the actual child abuse but somehow that's not where everyone's attention goes.)
armchairhacker
What kind of regulation do you have in mind?
The government controls the algorithm? Then the government pushes propaganda.
The algorithm is public? Then what kind of public algorithm? "Sort by recency", "sort by popularity", etc. will be gamed by propaganda-pushers. "Sort by closest friends" is better, but I suspect even it will be gamed by adversaries who initially push genuine interesting content and encourage you to befriend them, then shift to propaganda.
Sorry to be cynical, but I doubt you can prevent people from being attracted to and influenced by propaganda; if necessary, well-funded organizations will hire paid actors to meet people in person. You must narrow the goal: e.g., you can hinder foreign propaganda by down-weighting accounts from foreign IP addresses, detecting and down-weighting foreign accounts that use residential VPNs, and perhaps detecting and down-weighting domestic people who are so influenced by foreign propaganda that they're probably being funded (but you don't know, so then you get controversy and ambiguity...)
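The down-weighting scheme described above can be sketched as a toy scoring function (all names and weight values here are hypothetical, purely to make the idea concrete):

```python
def account_weight(origin_country: str,
                   home_country: str = "US",
                   uses_residential_vpn: bool = False,
                   foreign_weight: float = 0.2,
                   vpn_penalty: float = 0.5) -> float:
    """Toy amplification weight for an account's posts.

    Foreign-origin accounts start from a reduced base weight, and
    detected residential-VPN use (a common way to mask origin)
    halves the weight again. The numbers are illustrative, not a
    recommendation.
    """
    weight = 1.0 if origin_country == home_country else foreign_weight
    if uses_residential_vpn:
        weight *= vpn_penalty
    return weight
```

A real ranking pipeline would multiply each post's base score by such a weight; the hard part, as the comment notes, is that every detection signal (IP geolocation, VPN fingerprinting) is noisy, so false positives are unavoidable.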
gchamonlive
I'm no political scientist, but I believe in checks and balances. That translates roughly into costly bureaucracy, but if the next president or Congress will face significant pushback, either from each other or from the judiciary, and if the democratic institutions are strong, then we can trust that a reasonably well-structured law will by itself prevent abuse.
The law is abused in the US because they have a tradition of keeping the constitution to a bare minimum and governing by precedent and common sense, which, as we can see, isn't very productive.
So yeah, I guess I'm advocating for bureaucracy for now, at least until someone comes up with a better idea. I'd take bureaucracy many times over corporate abuse.
EDIT: now I see I haven't addressed the main question. I believe that society needs a mechanism to hold big tech platforms accountable for abuse. The speed with which big tech can push certain kinds of information through their services is such that due process, when it works, is only effective after the damage is done, and by then different accounts and different outlets are already pushing the same kind of disinformation ads. Therefore preemptive removal of this content is necessary. The problem then becomes how to ensure that the universe of content eligible for preemptive removal can't be abused by the current administration. How can we make it so that the Israeli misinformation machine can't overshadow other institutions, while at the same time guaranteeing that the next political party in power can't abuse this system to suppress valid propaganda from the opposition?
nradov
Your comment makes no sense. Laws and regulations aren't intended to be "productive", so that's a total non sequitur. The US Constitution has some flaws, but it's still the closest anyone has come to perfection in the governance of human society.
_Algernon_
Reverse chronological + subscription (i.e. the user must actively choose to follow a channel or creator to get them in their feed). This is how most platforms started, and while there were still issues (e.g. rewarding frequent posting), they seemed a lot less problematic than what we have today.
The main issue isn't the misinformation or disinformation; it is how quickly content can be amplified to reach millions. Reverse chronological + follows based on active user choice would largely address that issue.
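The proposal above fits in a few lines, which is part of its appeal; here is a minimal sketch (hypothetical `Post` structure and data, just to illustrate the ranking rule):

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int  # e.g. seconds since epoch
    body: str

def build_feed(posts: list[Post], follows: set[str]) -> list[Post]:
    """Reverse-chronological subscription feed: only posts from
    accounts the user explicitly follows, newest first. There is
    no engagement-based ranking, so a post's reach is bounded by
    the author's follower graph rather than algorithmic boosting."""
    return sorted(
        (p for p in posts if p.author in follows),
        key=lambda p: p.timestamp,
        reverse=True,
    )

posts = [
    Post("alice", 100, "lunch pic"),
    Post("botfarm", 300, "outrage bait"),
    Post("bob", 200, "graduation!"),
]
feed = build_feed(posts, follows={"alice", "bob"})  # botfarm is filtered out
```

The unfollowed "botfarm" post never appears no matter how engaging it is, which is exactly the amplification cap the comment is arguing for.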
nradov
People think that's what they want, but they really don't. Most regular social media users who haven't checked their feed recently would rather see major life events (birth, death, marriage, graduation) prioritized first instead of a picture of someone's lunch.
cjs_ac
> we also don't want to create these mechanisms at the government level only so the next authoritarian president can use them for their own personal agenda
There's nothing stopping this hypothetical authoritarian president from creating these mechanisms after they come to power.
molszanski
It's much easier to abuse an existing oppression machine than to build one from scratch.
gchamonlive
Which is why the democratic system relies on checks and balances. If the democratic institutions are strong, an authoritarian governor will at worst face enormous pushback from the judiciary if Congress and the executive are taken over. If one of the three powers remains independent, there is hope of recovering democratic stability without a violent revolution.
_Algernon_
It requires less political capital to repurpose an existing system than to introduce a new system for a specific purpose. See for instance the number of times Chat Control has failed to become law.
chpatrick
The Orbán government here in Hungary is one of the biggest ad spenders in the EU. You literally can't open a YouTube video without seeing propaganda with just plain lies, increasingly with AI-generated video. I find it really hypocritical that these allegedly progressive companies are willing to sell millions of dollars of brainwashing to the most hateful toxic regime in the EU.
ceejayoz
> allegedly progressive companies…
If the last ~9 months have demonstrated anything, it's that this was never the case.
chpatrick
Hence "allegedly", but I think it's pretty telling that the people who work at Google and Meta are okay with this.
arethuza
I'd imagine those big salary cheques buy a lot of acceptance of what these companies are actually doing.
helqn
Should everybody who works at Google and Meta be progressive?
tensor
Ads have long been weapons. It's time the West woke up to the threat of propaganda. And no, addressing this is not incompatible with "free speech." Propaganda is not free speech. The person paying for the propaganda has a voice they can use; that is their free speech.
When did it become a "right" to be able to pay for large-scale propaganda? Oddly enough, this right is not afforded to those without the funds to pay for it.
axegon_
That's (initially a smallish) part of the reason I've gone from "no ad blockers" to "block absolutely everything". That said, that is only part of the problem. Take TikTok, for instance, which is a clean-cut, self-installed direct link to the CCP. And not just TikTok: why do you think products such as this [1] exist? And they are dirt cheap, too. If you think these aren't selling like mad, boy are you in for a shock. While grifters do exist, the dictators of the world absolutely love these opportunities.
[1] https://www.alibaba.com/product-detail/Android-Phone-Farm-Se...
pen2l
Reddit is one of the most potent places where opinion-shaping has been happening. I've been getting ads for Reddit everywhere recently (even though I've been a Reddit user for about 20 years).
r/worldnews is pretty tightly controlled. It's a default subreddit, meaning 50+ million people see the posts submitted there and, most critically, the ensuing conversation in the comments, which goes in only one direction. Frankly, I'm impressed this was all pulled off so seamlessly.
jimbohn
And a lot of lead-generating subreddits are gatekept by admins/mods, sometimes for money. Also, there are Russians offering services to promote your product (spam upvotes and fake comments), and for some of our competitors it's very obvious when that happens; somehow Reddit doesn't notice. Same with Twitter: super-tight checks for normal users, while some spam somehow goes unfiltered. Social media is a destructive force.
adhamsalama
That sub is pretty much controlled by Zionists. If you criticize Israel in any way, shape, or form, you'll get a permanent ban.
albulab
"Controlled by Zionists": interestingly, every place Qatar doesn't pour billions into spreading propaganda sewage suddenly becomes "controlled by Zionists."
bhouston
This post from today on r/worldnews was hilarious -- all the top comments were deleted (and their authors probably permanently banned) because they didn't hold the party line:
https://www.reddit.com/r/worldnews/comments/1nc65sx/israel_i...
mediumsmart
Everyone here, including the author and me, has been raised on propaganda and the mantra of the true alcoholic: this time it's going to be different.
Regulating the corporations, or their shadow, the government, is both fine, I guess. And with that out of the way: let's discuss this!
daveguy
Don't forget about the entirety of regular social media -- Facebook, TikTok, Twitter, Instagram, etc. Planting follow bait and then switching to propaganda memes is a pervasive tactic. At this point, propaganda mongers only need to invest in processing, not in direct ad buys.
j45
It's profitable to let two sides of a topic run ads.
After the pressure resulting from election influencing, new rules seemed to come into place.
For other topics? Not so sure. Maybe it's something to look at before it provokes an election-style response.
I suspect there are parallels to this in other industries affecting the world.
lyxsus
UNRWA is compromised as hell, ofc it must go, so it's a pretty good use case for ethical ad usage. Definitely not a reason to enforce additional censorship.
> Google did not pro-actively vet the truth of Israeli government claims
It is really scary that people are pushing for Google and Meta to be the arbiters of truth. I don't think people realize what they are asking for. Western civilizations have a tradition of liberal free speech, and of allowing the courts to sort out the specifics of what speech causes harm to which parties (libel, etc.).
There are already laws on the books against false advertising. In the US, it is the FTC that prosecutes under those laws, not Google or Meta!
full disclosure: I work on Ads at Google. You really don't want to privatize the prosecution, judgement, jury, and execution of speech laws to mega corps (and I am usually pro-privatization on most topics).