
How do we stop AI-generated 'poverty porn' fake images?

droptablemain

You can't police people into not being racist. People have always been racist/xenophobic to some extent and always will be. It's cultural conflict and tribal in nature.


krapp

You can police the execution of people's racist intent, and we often do. Freedom of speech and freedom of association mean racists aren't guaranteed a platform. Many countries (not the US, notably) police "hate speech" on the premise that such speech inevitably leads to hateful actions.

Arguing from human nature isn't compelling. Rape and murder are part of human nature as well, and people have always done both, yet it isn't controversial to police such behaviors. Racism is no different. We aren't mere animals entirely beholden to our base instincts, after all.

droptablemain

I would much rather live in a society that tolerates and shakes off a bit of racism than one that jails people for offensive memes.

krapp

Of course I wasn't talking about or advocating jailing people for offensive memes, but I understand this is one of those subjects Hacker News can't approach in good faith and I take the downvotes and shit-eating snark in stride.

GLdRH

The practice shows that hate speech is just speech they hate.

krapp

Most reasonable people do hate racism, yes.

redasadki

Researchers like Arsenii Alenichev are correctly identifying a new wave of “poverty porn 2.0,” where artificial intelligence is used to generate stereotypical, racialized images of suffering—the very tropes many of us have worked for decades to banish.

The alarms are valid.

The images are harmful.

But I am deeply concerned that in our rush to condemn the new technology, we are misdiagnosing the cause.

The problem is not the tool.

The problem is the user.

PaulHoule

redasadki

Yeah, of course. But that's the imperfect best we've been able to do as societies to respond to the needs of the most vulnerable. Unless you think we should just let people die when there is a disaster or a catastrophe that is overwhelming?

PaulHoule

We've seen a hollowing out of the state in the core under neoliberalism, which on one hand is out-and-out austerity, and on the other the inability to execute that Ezra Klein talks about.

In the same time period we've seen donor organizations like the Gates Foundation pursue a model where NGOs pick and choose a few state functions that they'd like to take over in the periphery. This bypassing of the state gets things done in the short term but in the long term it doesn't help countries develop the state capacity to do things themselves.

My radical proposal is that third world countries develop and tax their economies to provide the services that their people want, and that those governments should be accountable to those people. However, the NGO-industrial complex is part of the same tendency that erodes state capacity in both the core and periphery.

Structurally the problem at hand won't go away unless NGOs get past the model of showing people poverty porn to make them donate or believe in the legitimacy of the NGO. In the end they could send a photographer out to a refugee camp to make very similar images that are real, and if you think the fake images are harmful, the real images are too.

psunavy03

[flagged]

Retric

The problem is the tool.

To suggest otherwise is to suggest anyone should be able to buy nuclear weapons, which on their own do nothing.

Bad actors can only leverage what exists. All the benefits and harms come from the existence of those tools, so it's a good idea to consider whether making such things makes the world better or worse.

redasadki

This assumes 'we' (i.e. societies) are in a position to stop it - whether that's nuclear weapons or AI. If we are not, then what can be usefully done is going to shift… by a lot.

Retric

> This assumes 'we' (ie societies) are in a position to stop it

There are major advantages to understanding the world as it is, independent of anything else. People make tradeoffs around harm all the time; pretending the harm doesn't exist is pointless.

We can mitigate harm from earthquakes and blizzards independently of our ability to prevent such events. That comes from understanding such events as more than just acts of gods who would happily use other means should we try to mitigate the harm from earthquakes, etc.

jmull

We might want to treat two things differently when for one of them, its only function is unimaginably massive destruction and for the other it’s to produce words and images.

Retric

Treating them differently based on the harm they cause is still judging them based on the harm they cause, rather than treating them as neutral entities.

ohyoutravel

I don’t know much about this subject. Would have been nice for the author to include some representative imagery.

redasadki

The blog post has a link to the Guardian article that includes some images - but I don't think I'm allowed to use them

constantcrying

Who cares? This is not even close to the actual harmful use of AI.

Zero people are harmed by this. The idea that this is more worthy of attention, than e.g. the creation of humiliating or pornographic images of real people is absurd.

I get that this is "whataboutism", but to be honest this seems to be such a petty and minor complaint that it feels absurd to even consider this a real problem.

And on a final note. These images are used to invoke a sense of guilt in prospective donors, so that people donate to charities. I agree that these images are distasteful and should not be shown, real or fake.

gjsman-1000

HN, and tech libertarians in general, have been trying to square multiple values together:

1. Decentralized

2. Anonymous

3. Trustworthy

4. Immune (from bad actors)

History has shown that these values are incompatible. Not a little incompatible - completely incompatible. We only have three decks of cards open to us:

1. Open and anonymous (and hopelessly corrupted by bad actors to the point of uselessness - phase 1 - Google only got popular in the first place because people couldn't dig through the manure)

2. Closed (held hostage by Big Tech curation - you are here - phase 2 - but corruption causes government intervention)

3. Open and accountable (identities tied to the real world, with real world accountability - incoming phase - but at least you don’t need to worry about DDoS as much)

There is no other option that works, any more than 1 + 1 + 1 = 4, no matter how badly we wish it existed.

bodiekane

> with real world accountability

So the government can neutralize every journalist, political opponent, whistleblower or whoever by whatever means are most effective in their particular jurisdiction (China: disappearances; Russia: jail; US: arrests, lawsuits, firing).

We need better tools for filtering out the bad actors, not just throwing the baby out with the bath water and accepting totalitarian control from a handful of dictators and wannabe-dictators.

gjsman-1000

> We need better tools for filtering out the bad actors

We've spent 30 years trying, and still haven't figured it out. This is the mice all agreeing that they should put a bell on the cat, but they can't agree on how, because there's no actual way to accomplish that goal (and even if they did, there's always another cat, or an adult who takes off the bell). Saying "we're so close" or "we just need better tools" in 2025 is like saying "we've almost invented the perpetual motion machine, we just need to get rid of that last 2% of energy loss! It's just 2%!"

Everybody is focused on trying to make it work, but nobody sat down and thought: Is a system that is decentralized, anonymous, permissionless, censorship-resistant, privacy-preserving, bad-actor-resistant, misinformation-proof, CSAM-proof, all the mandatory requirements, and a place people want to be, even possible on paper? (It isn't. This is easily demonstrated by thinking even a little adversarially against all proposed solutions.)

Unusable hellscape of spam; centralized corporate walled gardens; or identity-verified government-level walled garden. We can only pick one. And the first option was already tried, fell flat on its face, and built Google (which curated the open web).

kridsdale1

> 1 + 1 + 1 = 4

If we’re talking about representing the vector glyph of 4 with 3 popsicle sticks, this equation is true.

redasadki

What is this, Reddit?

bigyabai

Yeah, stop trying to rationalize their bad-faith argument!

ares623

I think phase 3 is going to be “closed and accountable”