
Appeals court rules that Constitution protects possession of AI-generated CSAM

johnnyanmac

In the sense that no real subject is being harmed, this makes sense. Same reason why animated depictions of such content are protected in the US.

Seems like a microscopically thin line, though. If an LLM gets too close to recreating a recognizable subject, it's all hell from there on where to draw the line.


slg

>In the sense that no real subject is being harmed, this makes sense.

I don't like this logic for the legality of AI images because we wouldn't allow it with non-AI images. For example, should CSAM be legal if the person in the image and all their family is dead? There is "no real subject" around to be harmed anymore, so by this logic it should be legal. But we don't make this stuff illegal just because of the people harmed in creation or distribution. We make it illegal because most of us don't want to live in a society that tolerates it.

waterhouse

I think people make this stuff illegal because a lot of people really hate pedophiles and anything associated with them, and logical thinking ("Does this policy actually increase or decrease the number of children abused?") and other considerations ("Will this policy be abused to suppress legitimate speech? Can you get a website you hate taken down by uploading CSAM and reporting it? Will teenagers get arrested for sexting each other? Will parents who send a photo of their child's genitals to a doctor, to diagnose a medical problem, get into trouble?") go out the window. Anyone who expresses opposition to these policies can expect to get some of that hatred directed at them, and lots of people simply don't want to deal with that, so the policies stand.

RajT88

You have nailed it. Americans have that puritan history in our cultural consciousness so we do not even feel comfortable having rational discussions about it. Politicians use that to their advantage to get onerous bills passed "for the children".

To use an innocuous example nobody here is likely to get riled up about... The film "Let the Right One In" had a scene where the eternally 11-year-old girl vampire lifts her skirt to reveal... Nothing. No genitals at all. It was shocking and disturbing, and was supposed to be, on a few levels. Genius filmmaking. Back when IMDB had forums there were pages of people saying it was a pedo's dream and the film should be banned. (I must have missed some scholarly article where pedophiles were into children with no genitals)

I do wonder from time to time about French filmmakers if I am honest...

dragonwriter

> I don't like this logic for the legality of AI images because we wouldn't allow it with non-AI images.

We absolutely would, and, in fact, the rule articulated in this ruling does.

> For example, should CSAM be legal if the person in the image and all their family is dead?

That's still actual CSAM, not obscenity unconnected to any actual abuse, so the existing rule that places actual CSAM outside the protection the Supreme Court has found against criminalizing mere possession of obscene material continues to apply to it under this ruling. Non-AI imagery that would be legally situated similarly to the AI imagery here is, e.g., drawn imagery.

> But we don't make this stuff illegal just because of the people harmed in creation. We make it illegal because most of us don't want to live in a society that tolerates it.

That's true of most of the things that would be banned but for the protections of the First Amendment, and is not, itself, a good argument against First Amendment protection.

rockemsockem

I feel like a closer comparison would be a very talented artist drawing their own obscene content and keeping it to themselves. This seems like it should obviously be legal, gross as it is.

In your example I would think that harm being rendered against the now dead is still harm that occurred against a person who was living at the time??

slg

>I would think that harm being rendered against the now dead is still harm that occurred against a person who was living at the time??

Dead people generally lose most protections from stuff like invasions of privacy and defamation (obviously not every jurisdiction handles this the same way).

whatshisface

CSAM is illegal because it creates a market for/related to human trafficking no matter how far after the fact.

newAccount2025

This is a very interesting point. In the era of AI, does it begin to break down? Like, if the AI systems can eventually generate piles of these materials without human victims, will it actually reduce the incentive for “real” CSAM?

stogot

It’s illegal for more than that

danaris

If it is AI-generated, it may be "child porn", but it is, by definition, not "CSAM"—Child Sexual Abuse Materials—because no real, live child was harmed in its creation.

Words have meaning, and while language does drift over time, it's important to maintain the meanings of certain kinds of words and phrases that draw important distinctions.

RajT88

In the US at least. Last I heard, in the UK you could still get sent to prison for manga.

https://en.m.wikipedia.org/wiki/Legal_status_of_fictional_po...

_--__--__

Not the whole US; the Texas state government just passed a ban on virtual and non-photorealistic 'obscene' depictions of minors.

oyashirochama

It likely doesn't pass constitutional muster, or will at least have to be subject to the Miller test. Hence it doesn't really change much beyond what the federal law already did.

seanmcdirmid

Yes, but not for CSAM; rather, they have a different obscenity law for that. It is kind of in the article.

> Anderegg moved to dismiss each of the four counts. In an opinion last month, the court largely rejected the motions. However, the court did dismiss the possession charge, holding that Section 1466A is unconstitutional as applied to Anderegg’s private possession of obscene “virtual” CSAM.

> The Supreme Court has held that the First Amendment protects the right to possess obscene material in one’s own home, Stanley v. Georgia, 394 U.S. 557 (1969), so long as it’s not actual CSAM, Osborne v. Ohio, 495 U.S. 103 (1990).

Probably they can still charge him with distribution (to the teenage boy), but not possession. Probably production is also not dropped?

basisword

>> no real, live child was harmed in its creation

That’s a big claim. If the model was trained on CSAM then I would argue that any image generation that comes from it is harmful to the original children, in the same way that copies of already existing content continue to harm the victim.

washadjeffmad

You're right, but your "if" is doing some heavy lifting. The fact that a result can be generated does not imply that the dataset contained content resembling it.

These models are wildly intuitive and can assemble impossible ideas, even when datasets are purged of certain tags. I discovered this early on when I couldn't diagnose why most subjects came out as weirdly child-like elderly people.

It turns out "mature" is not synonymous with "nsfw" when negative prompting, and a person absent of maturity, while not explicitly a child, doesn't quite look like a normal adult human, either.

I don't think the model was trained on a large corpus of very old babies, yet it was able to imagine them. As long as a model knows what the pieces are or are not, it can be prompted to assemble them.

newAccount2025

How does it harm the children whose images it was trained on?

Maybe if it substantially recreates the input images it is equivalent to plain copying and redistribution.

Maybe it creates an economic incentive to abuse more children to create more training data?

like_any_other

> If the model was trained on CSAM

And if the used car I buy was stolen, I have contributed to car theft. That's not an argument against used car sales, especially from salesmen that try to make sure their inventory is legitimate: https://arstechnica.com/tech-policy/2024/08/nonprofit-scrubs...

chongli

Sure, though if you buy a stolen car in a private sale you have no recourse when the police show up to return it to the original owner. If you bought it from a registered dealership then you can be compensated for it.

Brian_K_White

English can be parsed multiple ways, and CSAM can just as validly refer to a depiction or topic or subject, like "flying alien dogs material", as to some actual act. Unless there is some legal definition on the books that explicitly makes that distinction.

anigbrowl

This seems like pointless hair splitting. AI-generated images could certainly depict child sexual abuse. I opted to stick with the headline as written rather than recast it to 'child porn', because I felt the phrase 'AI-generated' provided sufficient context for people to understand what the article was about.

danaris

It is far from pointless.

Let's put it in a very HN-centric way—and please try to suppress your inherent disgust reaction:

There is a certain amount of demand for images of children in sexual positions.

In the past, the only way to satisfy this demand was to put actual children in sexual positions, or to draw such images.

Now, there may be a way to satisfy this demand with highly-realistic images without harming any real children at all.

If we can reduce or eliminate the demand for actual children to be abused, that seems to me like an unequivocal good in the world.

So long as no actual living children are involved, some people getting off to images that look like they're of children, in the privacy of their own homes, all alone, doesn't harm me or anyone else.

anigbrowl

None of this is responsive to what I wrote. Please read the bits where I talked about how the term could equally well apply to an artificial depiction, and how there was additional specific context in the headline. You brought this up as a semantic thing (talking about how words have particular meanings) and I gave you a reply in that semantic context.

Now you've switched to talking about it as a policy issue, explaining it as if the concept wasn't immediately obvious to everyone else. If that's what you wanted to talk about you should have said so in the first place.

david38

This is bad. The mere-exposure effect can increase the number of victims, as AI CSAM can act as a gateway drug, so to speak.

This is the justification for keeping the ban on ivory from legally culled elephants. Yes, a percentage of elephants are legally hunted to control population and fund national parks, but you cannot import that ivory because the thinking is it will create a market, and that market will quickly turn to poaching.

throwaway749372

The analogy here would be that poaching happens even though it's illegal, so say we found a way to create ivory that is indistinguishable from the real thing, without any harm to animals.

You can flood the market with it, curbing the demand, and still continue to hunt down poachers.

There's an argument to be made that they might have an easier time claiming their ivory is actually fake, but the real strategy is to make poaching just not worth the risk by impacting the reward.

charcircuit

>This is bad

It's a trade-off: choosing to favor free speech because that is something American society values, even if it may come with downsides.

hahajk

You described two different things: a psychological effect first, and then a market. The elephant example isn't mere-exposure.

Also, I'm not an expert on this, but do we see more senseless killing by kids playing violent video games? This is usually the counterexample to the exposure argument, and I haven't seen it properly argued against.

roenxi

In the US there's also been an interesting rise in sexlessness and male virginity recently [0] which is a fun one to compare with exposure to pornography. There is evidence that real-world conditions have a lot more to do with real-world action than pictures on the internet.

[0] https://ifstudies.org/in-the-news/young-adult-sexlessness-sk...

milesrout

Misleading headline. This appears to be a district court decision, not an appeal court decision. The first four words are "A US district court" and the very end is about leave to appeal being granted by an appellate court - but the appeal hasn't been heard yet.

Hizonner

Another important thing is that the guy was charged with production, possession, and distribution... and the court only dismissed the possession charge.

That fundamentally leaves the material illegal, since you couldn't possess it without either producing it, or having somebody distribute it to you.

treetalker

Came here to say this. The article refers to a decision from the United States District Court for the Western District of Wisconsin (of all places). HN post title should be changed.

anigbrowl

Yes, that's a transcription mistake on my part. Unfortunately the edit window has closed already. Sorry for the confusion.

userbinator

AFAIK, the possession of AI-generated (and non-AI-generated) content depicting various other illegal things is legal, so this doesn't seem too surprising. After all, mass media constantly contains fictional murder, drug use, etc.

synapsomorphy

There's a pretty big difference: a picture of a murder might be used as evidence, but is not itself illegal.

3827HJahg

The courts' submissiveness to AI is concerning. $500 billion in promised investments is hard to argue against. Is this the new killer app for the broligarchs? Disgusting.

johnnyanmac

I don't think this has that much to do with AI interests outside of determining that AI generated images fall closer to "animated" laws than "real subject" ones.

anigbrowl

I don't know about that. The legal questions here are similar to those presented by human drawings/paintings, i.e. are such materials contraband even if they are fictional rather than documentary in nature. Of course, AI makes it vastly easier to produce because it requires little time or talent to operate, but you can imagine a similar case involving a finely-rendered pencil drawing.

metalman

The whole sordid thing is an intellectual exercise that is questionable in and of itself. But in the interest of moving forward, I would suggest that sure, ok, BUT, with a proviso: a law requiring the summary execution of anyone who, having child abuse images, also causes ANY real harm to a child, or expresses the intent to harm a child, or had previously harmed a child. A girlfriend who had been abused, when asked what would be the perfect punishment for abusers, suggested "the pit from hell": a vast excavation with towering smooth vertical walls, and an impossible-to-climb slide they are thrown down. One way, no appeals, no doors, no cameras, no nothing, except for whatever redemption is available in hell.

Hizonner

I propose that you continue to apply the legal rules that forbid summary executions (which are part of the US Constitution, but are older than the US itself).

Except that you should add an exception allowing the summary execution of anybody who calls for summary execution for anything else. Such people are obviously opposed to the goals of the government and legal system, dangerous to the rule of law, and unfit to exist in society.

sparky_

Not a lawyer, but this sure seems to open a legal and ethical can of worms.

Image generation models capable of generating this type of content would necessarily need to be trained on the real thing, the possession of which is inarguably illegal and immoral.

So how could the model be legally or ethically trained? And if they _cannot_ be legally or ethically trained, then how can the _use_ of those models be okay?

What will be the implications of this in cases where _real_ CSAM was produced or possessed? Certainly this opens the door to a whole plethora of new "it's AI art, I swear!" defenses. After all, how can one definitely prove that CSAM is authentic or not, unless the chain of production is verified?

From the article: > ...If purely private possession of AI-CSAM is constitutionally protected under current caselaw but production is not, then using AI models (even locally-hosted ones) to generate child obscenity in one’s own home is not wholly insulated from criminal prosecution. Subsequently transmitting it to someone else, especially someone underage, is also grounds for liability...

Can of worms, ye be released!

braiamp

This is a common misunderstanding. The thing knows how a naked woman looks, and it also knows how a child looks; it puts two and two together and voila. It doesn't need to be trained on the real thing to be able to generate it.

wavemode

Well, maybe. Maybe not. See "Why can't ChatGPT draw a full glass of wine?"

https://youtu.be/160F8F8mXlo

Diffusion models do possess some capability to synthesize ideas, but that capability does not necessarily generalize to every possible use case. So it's impossible to say for certain that that is what is happening.

Dylan16807

We can get more certainty by testing combinations of those concepts with a whole bunch of other ones. Naked skateboarder. Child construction worker. It has a lot more variety for both of those concepts than with wine glasses.

We can also check models that have very highly vetted input sets.

Calavar

That video makes some good observations, but it's also hilarious that he tried to "retrain" ChatGPT by asking it in the chat to remove some items from its dataset.


delecti

Does that necessarily follow? Wouldn't that be prone to outputting small naked adult women, and/or naked children with boobs?

0cf8612b2e1e

Why not? I am pretty sure there is no training data of “whales playing the guitar”, but if you ask a model to draw one, it will do a respectable job of imagining that scenario.

dragonwriter

If it is trained on well-tagged images of adult men and women both clothed and unclothed, and clothed children (not that all pictures of unclothed children are CSAM to start with), understanding the relation of clothed to unclothed appearance could allow a model to reasonably generalize unclothed child bodies.

Further, models that are otherwise well trained with a mix of photographic and drawn content can often generalize specific concepts for which their training only includes examples from drawn content to photorealistic imagery involving that concept.

LinuxBender

I don't believe that is true. A woman and a child have distinct characteristics that are not interchangeable. A child, for example, can be detected by the shape of the nose and nostrils, as just one data point. There are many more data points that psychoanalysts use to determine if a person is attracted to children. AI would have to understand quite a bit of biology and how humans develop to get this right.

Sharlin

Image generation models are perfectly capable of mixing different concepts to create images of things they're incredibly unlikely to have seen during training.

charcircuit

You can have an artist draw art with the differences, and then you can get it via style transfer. But there are already many images of children with noses.

jliptzin

This is not a matter of opinion. Go to any AI image generator and tell it to generate whatever you want.


anigbrowl

Statistically, you'd expect this to result in depictions of children with pubic hair - some adults opt to get rid of theirs, but most have it. Are you sure you're not projecting your prior knowledge about human biology onto an image transformer model?

roenxi

> Image generation models capable of generating this type of content would necessarily need to be trained on the real thing,

I doubt that is so. In practice they might be trained on the real thing, but models generalise pretty well. It is going to be technically possible to train a model on other material (children, nudity and non-CSAM abuse scenes or maybe not even that) and have it generate CSAM.

But even if it were true, that would only make training the model illegal and ethically dubious. We use a tonne of technologies where the creator was legally and morally dubious. It's never been an ongoing issue before. So once the model is created, there isn't a good reason to encumber it by how it was created.

davisp

I’m gonna give this a very charitable read by saying that while I find the means by which the treatment of burn victims was advanced abhorrent, we as a society have still benefited from those means.

> So once the model is created there isn't a good reason to encumber it by how it was created.

I am trying to be very specific here. I assume no untoward motivations from the parent commenter. I am not intending to cast aspersions. Whoever wrote this, I feel no ill will for you and this is not meant as a personal slight.

And I will be very clear, this statement as written could probably be defended because of the “by how it was created” clause.

However, “So once the model is created there isn’t a good reason to encumber it” is so… fucking I don’t even know, because what the actual fuck?

I apologize for the profanity, I really do. But, really? Are you fucking kidding me?

These models should not exist. Ever. By any means. Do not pass Go. Go directly to jail.

I understand the engineering brain enough to contemplate abstract concepts with detachment. That’s all I think happened here. But holy fuck, please pause and consider things a bit.

throwaway7313

Outlawing the use of existing material is vital market protection for producers. Denouncing these models may not actually be a good way to reduce harm.

throwaway749372

> These models should not exist. Ever. By any means. Do not pass Go. Go directly to jail.

If it's possible to produce CSAM that doesn't involve actual children and has a measurable impact on the profitability of and demand for the real thing, leading to a net reduction in the harm done to children, wouldn't you be on the wrong side of the argument you think you're making?

> I understand the engineering brain enough to contemplate abstract concepts with detachment.

I would argue it's a rational take.

Can we agree that the goal of reducing harm to children is good? Or only if the solution is comfortable for you?

rdtsc

> These models should not exist. Ever. By any means. Do not pass Go. Go directly to jail.

Exactly. It's disturbing that this needs to be explained to people.

RajT88

> Certainly this opens the door to a whole plethora of new "it's AI art, I swear!" defenses

You are probably right, given what we saw with all the porn popup adware back in the 90's and 2000's. A friend of mine was a malware analyst for the FBI for a while.

In all the CSAM possession cases she heard about, the defense was "malware did it". In nearly all of those cases the jury convicted them. 100% of her cases, for sure.

At some point using the defense everyone else uses and fails with is probably going to become a liability. Shit I am sure people are already trying to use this defense and failing!

grepfru_it

It only went to court because they had enough evidence to prove it was not malware. You have excluded all of the possible cases that used the malware defense and pled out or never went to trial.

Similarly, I think using the AI art excuse may be an uphill battle but not one that is impossible to defend

anigbrowl

This isn't how the legal system works. Most CP cases get prosecuted because the defendant solicits or shares CP with a minor or some other CP collectors, but let's imagine a situation where someone gets busted for something else and then investigators discover they have CP (real-world non-AI CP in this example).

Having it at all is a strict liability crime. If the defendant says malware put it on their computer and they don't know how or when it got there, that's called an affirmative defense - it's admissible as a claim, but the burden of proof for the claim is on the defendant. Otherwise you could just claim it was planted on your computer by ghosts or demons or space aliens. If the machine was infested with malware to the point of the browser being nearly inoperable and all sorts of other junk being present, a jury might buy it. But the defendant has to make some sort of showing to back up the claim. The whole thing about 'reasonable doubt' in criminal cases is not that something sounds possible or even plausible, but that you can support the claim with some mix of logic and empirical evidence, like any other reasoned argument.

https://www.law.cornell.edu/wex/affirmative_defense

JKCalhoun

While disgusting, I'm thinking that if the courts insist on allowing this, I can try and comfort myself with this thought: I don't doubt that if you find one "AI CSAM" image on someone's drive and keep digging, you'll find illegal stuff too.

Sick people will still go to jail.

themaninthedark

Usually when someone is arrested for this kind of thing it's not one image but a couple thousand, so you are right.

Edit add:

I have mixed feelings about this... Let me preface by saying that child abuse is abhorrent.

One of my former coworkers was arrested for possession of CSAM... he was never charged with or accused of any abuse. I wasn't close to him; he was quiet and went out of his way to help. He (probably) killed himself (young, died suddenly) last week ahead of his trial.

So I have to wonder: would having something like this help him and protect kids as well? Or does possession lead to abuse?

I don't know...people have problems and are sick. At what point do we write them off as irredeemable?

dragonwriter

> Image generation models capable of generating this type of content would necessarily need to be trained on the real thing

This is absolutely not true. Generalization is a key capability of image generation models.

> Certainly this opens the door to a whole plethora of new "it's AI art, I swear!" defenses.

The worst justification for a criminal prohibition that I can think of is that it provides a convenient out for the difficulty of proving another, more clearly warranted, crime.

> After all, how can one definitely prove that CSAM is authentic or not, unless the chain of production is verified?

"Beyond a reasonable doubt" is not, and never had been, "definite".