Proposal: AI Content Disclosure Header
17 comments
· August 26, 2025
AKSF_Ackermann
It feels like a header is the wrong tool for this. Even if you hypothetically wanted to disclose that, would you expect a blog CMS to offer the feature? Or a web browser to surface it?
throwaway13337
Can we have a disclosure header for sponsored content instead?
I'd love to browse without that.
It does not bother me that someone used a tool to help them write if the content is not meant to manipulate me.
Let's solve the actual problem.
patrickhogan1
The bigger challenge here is that we already struggle with basic metadata integrity. Sites routinely manipulate creation dates for SEO - I regularly see 5-year-old content timestamped as "published yesterday" to game Google's freshness signals.
While this doesn't invalidate the proposal, it does suggest we'd see similar abuse patterns emerge once this header becomes a ranking factor.
xgulfie
This is like asking the fox to announce itself before entering the henhouse
layer8
Why only for HTTP? This would be appropriate for MIME multipart/mixed part headers as well. ;)
Maybe better define an RDF vocabulary for that instead, so that individual DIVs and IMGs can be correctly annotated in HTML. ;)
grumbel
Completely the wrong way around. We are heading into a future where everything will be touched by AI in some way, be it things like Photoshop Generative Fill, spell check, subtitles, face filters, upscaling, translation or just good old algorithmic recommendations. Even many smartphones already run AI over every photo they take.
Doing it in an HTTP header is furthermore extremely lossy: files get copied around and that header ain't coming with them. It's not a practical place to put that info, especially when we have EXIF inside the images themselves.
The proper way to handle this is to mark authentic content and keep a trail of how it was edited, since that's the rare thing you might want to highlight in a sea of slop. https://contentauthenticity.org/ is trying to do that.
rossant
Interesting initiative, but I wonder whether the mode provides sufficient granularity. For example, what about an original human-written text that is entirely translated by an AI?
dijksterhuis
> what about an original human-generated text that is entirely translated by an AI?
probably ai-modified -- the core content was first created by humans, then modified (translated into another language). translating back would hopefully return you the original human generated content (or at least something as close as possible to the original).
| class | author | modifier/reviewer |
| ----------------- | ------ | ----------------- |
| none | human | human/none |
| ai-modified | human | ai | <--*
| ai-originated | ai | human |
| machine-generated | ai | ai/none |
shortrounddev2
Years ago people were arguing that fashion magazines should have to disclose if they photoshopped pictures of women to make them look skinnier. France implemented this law, and I believe other countries have as well. I believe that we should have similar laws for AI generated content.
vntok
This feels like the Security Flag proposal (https://www.ietf.org/rfc/rfc3514.txt)
gruez
or end up like california prop 65 warnings: https://en.wikipedia.org/wiki/1986_California_Proposition_65
ugh123
Hoping I don't need to click on something, or have something obstructing my view.
odie5533
The cookie banner just got 200px taller.
GuinansEyebrows
Maybe an ignorant question but at the dictionary level, how would one indicate that multiple providers/models went into the resulting work (based on the example given)? Is there a standard for nested lists?
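One existing answer to the "multiple providers" question would be HTTP Structured Fields (RFC 8941), where a header value can be a List whose members each carry their own Parameters. A minimal sketch, assuming a hypothetical header value in that style (the class name, `provider`, and `model` keys are assumptions, not part of any spec, and the parser is deliberately naive):

```python
# Hypothetical header value listing two models, RFC 8941 List style:
# one member per provider/model, each with its own parameters.
header = (
    'ai-modified; provider="openai"; model="gpt-4o", '
    'ai-modified; provider="anthropic"; model="claude-3"'
)

def parse_disclosure(value: str) -> list[dict]:
    """Naive parse of a structured-field-style list (illustrative only;
    a real parser would handle quoting and whitespace per RFC 8941)."""
    entries = []
    for item in value.split(", "):
        parts = item.split("; ")
        entry = {"class": parts[0]}
        for param in parts[1:]:
            key, val = param.split("=", 1)
            entry[key] = val.strip('"')
        entries.append(entry)
    return entries
```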
Maybe we should avoid training AI with AI-generated content: that's a use case I would defend.
Still, I believe MIME would be the right place to say something about the media, rather than the transport protocol.
On a lighter note, we should consider second-order consequences. The EU Commission will demand its own EU-AI-Disclosure header be sent to EU citizens, and will require consent from the user before showing them AI-generated stuff. The UK will require age validation before showing AI stuff to protect children's brains. France will use the header to compute a new tax on AI-generated content, due by all online platforms that want to show AI-generated content to French citizens.
That's a Pandora's box I wouldn't even talk about, much less open...