Meta says it won't sign Europe AI agreement, calling it growth-stunting overreach
64 comments · July 18, 2025
vanderZwan
mhitza
There seem to be 3 chapters of this "AI Code of Practice" https://digital-strategy.ec.europa.eu/en/policies/contents-c... and its drafting history https://digital-strategy.ec.europa.eu/en/policies/ai-code-pr...
I haven't read it yet; I'm only familiar with the previous AI Act https://artificialintelligenceact.eu/ .
If I were to guess, Meta is going to have a problem with chapter 2 of the "AI Code of Practice" because it deals with copyright law, and probably conflicts with their (and others') approach of ripping text out of copyrighted material (is it clear yet if it can be called fair use?)
jahewson
> is it clear yet if it can be called fair use?
Yes.
https://www.publishersweekly.com/pw/by-topic/digital/copyrig...
Though the EU has its own courts and laws.
dmbche
That was a district judge's pretrial ruling on June 25th; I'd be surprised if this doesn't get challenged soon in higher courts.
And acquiring the copyrighted materials is still illegal - this is not blanket protection for all AI training on copyrighted materials.
baxtr
Does anyone know if the data privacy regulations of the past had any effect on Meta whatsoever (other than being a cost)? Their profits seem higher than ever…
sorokod
Presumably it is Meta's growth they have in mind.
Edit: from the LinkedIn post, Meta is concerned about the growth of European companies:
"We share concerns raised by these businesses that this over-reach will throttle the development and deployment of frontier AI models in Europe, and stunt European companies looking to build businesses on top of them."
isodev
Of course. Skimming the AI Code of Practice, there is nothing particularly unexpected or anything that qualifies as "overreach". To be compliant, though, model providers can't be shady, which perhaps conflicts with Meta's general way of working.
ankit219
Not just Meta: 40 EU companies urged the EU to postpone the rollout of the AI Act by two years due to its unclear nature. This code of practice is voluntary and goes beyond what is in the act itself. The EU published it with the implication that there would be less scrutiny if you voluntarily sign up for the code of practice. Meta would face scrutiny on all fronts anyway, so there doesn't seem to be a plausible case for signing something voluntary.
One of the key aspects of the act is that a model provider is responsible if downstream partners misuse the model in any way. For open source, that's a very hard requirement[1].
> GPAI model providers need to establish reasonable copyright measures to mitigate the risk that a downstream system or application into which a model is integrated generates copyright-infringing outputs, including through avoiding overfitting of their GPAI model. Where a GPAI model is provided to another entity, providers are encouraged to make the conclusion or validity of the contractual provision of the model dependent upon a promise of that entity to take appropriate measures to avoid the repeated generation of output that is identical or recognisably similar to protected works.
[1] https://www.lw.com/en/insights/2024/11/european-commission-r...
jahewson
There’s a summary of the guidelines here for anyone who is wondering:
https://artificialintelligenceact.eu/introduction-to-code-of...
It’s certainly onerous. I don’t see how it helps anyone except for big copyright holders, lawyers and bureaucrats.
cm2012
[flagged]
Atotalnoob
This all seems fine.
Most of these items should be implemented by major providers…
techjamie
The problem is that this severely harms the ability to release open-weights models, and only leaves the average person with options that aren't good for privacy.
isoprophlex
I don't care about your overly verbose, blandly written slop. If I wanted an LLM summary, I would ask an LLM myself.
This really is the 2025 equivalent of posting links to a Google results page, imo.
rokkamokka
It is... helpful though. More so than your reply
marcellus23
More verbose than the source text? And who cares about bland writing when you're summarizing a legal text?
JonChesterfield
Nope. This text is embedded in HN and will survive rather better than the prompt or the search result, both of which are non-reproducible. It may bear no relation to reality but at least it won't abruptly disappear.
rchaud
Kaplan's LinkedIn post says absolutely nothing about what is objectionable about the policy. I'm inclined to think "growth-stunting" could mean anything as tame as mandating user opt-in for new features as opposed to the "opt-out" that's popular among US companies.
chvid
Why does Meta need to sign anything? I thought the EU made laws that anyone operating in the EU, including Meta, had to comply with.
AIPedant
It's not a law, it's a voluntary code of conduct given heft by EU endorsement.
FirmwareBurner
> it's a voluntary code of conduct
So then it's something completely worthless in the globally competitive, cutthroat business world - something that even the companies who signed it won't follow; they just signed it for virtue signaling.
If you want companies to actually follow a rule, you make it a law and you send their CEOs to jail when they break it.
"Voluntary codes of conduct" have less value in the business world than toilet paper. Zuck was just tired of this performative bullshit and said the quiet part out loud.
AIPedant
No, it's a voluntary code of conduct so AI providers can start implementing changes before the conduct becomes a legal requirement, and so the code itself can be updated in the face of reality before legislators have to finalize anything. The EU does not have foresight into what reasonable laws should look like; they are nervous about unintended consequences and do not want to drive good-faith organizations away. They are trying to do this correctly.
This cynical take seems wise and world-weary but it is just plain ignorant, please read the link.
paulddraper
Interesting, because OpenAI committed to signing.
nozzlegear
The biggest player in the industry welcomes regulation, in hopes it’ll pull the ladder up behind them that much further. A tale as old as red tape.
MPSFounder
LMAO. Facebook is not big? Its founder is literally the sleaziest CEO out there. Cambridge Analytica, Myanmar, restrictions on Palestine, etc. Let's not fool ourselves. There are those online who seek to defend a master that couldn't care less about them. Fascinating. My opinion on this: Europe lags behind in this field, and thus can enact regulations that benefit the consumer. We need more of those in the US.
zamadatix
Meta isn't actually an AI company, as much as they'd like you to think they are now. They don't mind if nobody comes out as the big central leader in the space; they even release the weights for their models.
Ask Meta to sign something about voluntarily restricting ad data or something and you'll get your same result there.
decremental
[dead]
jahewson
Sam has been very pro-regulation for a while now. Remember his “please regulate me” world tour?
nkmnz
OpenAI does direct business with government bodies. Not sure about Meta.
somenameforme
About 2 weeks ago OpenAI won a $200 million contract with the Defense Department. That's after partnering with Anduril for, quote, "national security missions." And all of that is after the military enlisted OpenAI's "Chief Product Officer" and commissioned him directly as a Lt. Colonel to work in a collaborative role with the military.
And that's the sort of stuff that's not classified. There's, with 100% certainty, plenty that is.
paul7986
The US, China, and others are sprinting, and thus spiraling, toward destitution for the majority of society unless we force these billionaires' hands and figure out how we will eat and sustain our economies when one person now does a white- or blue-collar (Amazon warehouse robots) job that ten used to do.
lvl155
I have a strong aversion to Meta and Zuck, but the EU is pretty tone-deaf. Everything they do reeks of political, anti-American-tech undertones.
zeptonix
They're career regulators
vicnov
Just like GDPR, it will tremendously benefit big corporations (even if Meta is resistant) and those who are happy NOT to follow regulations (which includes a lot of Chinese startups).
And consumers will bear the brunt.
null
I admit that I am biased enough to immediately expect the AI agreement to be exactly what we need right now if this is how Meta reacts to it. Which I know is stupid because I genuinely have no idea what is in it.