AI systems with 'unacceptable risk' are now banned in the EU
136 comments · February 3, 2025 · DoingIsLearning
spacemanspiff01
From the law's text:
For the purposes of this Regulation, the following definitions apply:
(1) ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments; Related: Recital 12
https://artificialintelligenceact.eu/article/3/ https://artificialintelligenceact.eu/recital/12/ So it seems like yes, software would qualify if it is non-deterministic enough. My impression is that software that simply says "if your income is below this threshold, we deny you a credit card" would be fine, but somewhere along the line, when your decision tree grows large enough, that probably changes.
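A minimal sketch of where that line might sit, with a made-up income threshold and toy training data (illustrative only, not legal advice):

    # Illustrative only: hypothetical threshold and toy data.
    # Recital 12 excludes rules "defined solely by natural persons".
    def deny_credit(income: float) -> bool:
        """A fixed, human-authored rule: likely not an 'AI system'."""
        return income < 30_000  # threshold chosen by a person

    # By contrast, a model that *infers* its decision boundary from data
    # looks a lot like Article 3's "infers, from the input it receives,
    # how to generate outputs such as ... decisions".
    from sklearn.tree import DecisionTreeClassifier

    X = [[25_000], [28_000], [55_000], [90_000]]  # toy applicant incomes
    y = [0, 0, 1, 1]                              # 0 = deny, 1 = approve
    model = DecisionTreeClassifier().fit(X, y)
    print(model.predict([[40_000]]))              # learned, not hand-written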
btown
Notably, Recital 12 says the definition "should not cover systems that are based on the rules defined solely by natural persons to automatically execute operations."
https://uk.practicallaw.thomsonreuters.com/Glossary/UKPracti... describes a bit of how recitals interact with the operating law; they're explicitly used for disambiguation.
So your hip new AI startup that's actually just hand-written regexes under the hood is likely safe for now!
(Not a lawyer, this is neither legal advice nor startup advice.)
abdullahkhalids
Seems very reasonable. Not all software has the same risk profile, and autonomous, adaptive software certainly has a more dangerous profile than simpler software; it should be regulated differently.
johndhi
What? Why? Shouldn't those same use cases all be banned regardless of what tech is used to build them?
uniqueuid
Unfortunately yes, the article is a simplification, in part because the AI Act delegates some regulation to other existing acts. So to know the full picture of AI regulation, one needs to look at the combination of multiple texts.
The precise language on high risk is here [1], but some enumerations are placed in the annex, which (!!!) can be amended by the Commission, if I am not completely mistaken. So this is very much a dynamic regulation.
impossiblefork
I wouldn't be surprised if it does cover all software. After all, chess solvers are AI.
belter
Yes, it's simplifying. There are more details here: https://news.ycombinator.com/item?id=42916414
teekert
Have been having a lot of laughs about all the things we call AI nowadays. Now it’s becoming less funny.
To me it's just generative AI: LLMs, media generation. But I see the CNN folks suddenly getting "AI" attention. Anything deep learning, really. It's pretty weird. Even our old batch-processing, SLURM-based clusters with GPU nodes are now "AI Factories".
layer8
Yes, you are better off reading the actual act, like the linked article 5: https://artificialintelligenceact.eu/article/5/
This is not about data collection (GDPR already takes care of that), but about AI-based categorization and identification.
"AI system" and other terms are defined in article 3: https://artificialintelligenceact.eu/article/3/
theptip
Seems like a mostly reasonable list of things to not let AI do without better safety evals.
> AI that tries to infer people’s emotions at work or school
I wonder how broadly this will be construed. For example, if an agent uses CoT and needs emotional state as part of that, can it be used in a work or school setting at all?
layer8
This quote is inaccurate. The actual wording is: "the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;" and it links to https://artificialintelligenceact.eu/recital/44/ for rationale.
So, this targets the use case of a third party using AI to detect the emotional state of a person.
unification_fan
We need to profile your every thought and emotion. Don't worry though, it's for medical or safety reasons only. Same for your internet history... You know, terrorism and all. Can't have that.
nielsole
I can definitely see use cases where cameras that detect distressed people could help prevent harm to themselves and others.
dmix
Is this just based on a hypothetical scenario they sat in a room coming up with, or has such a thing been tried and harmed people?
nielsole
While this is not primarily emotion, I hope it would squarely be covered by it: https://bigthink.com/the-present/attention-headbands/
stavros
The EU generally (so far) has passed reasonable legislation about these things. I'd be surprised if it was taken more broadly than the point where a reasonable person would feel comfortable with it.
Zenst
I would imagine that such a tool to infer emotional states would be most useful for autistic people, who are, as I can attest, somewhat handicapped on that front. Maybe that will get challenged as disability discrimination by some autistic group, which would be interesting. As with most things, there are rules, and exceptions to those rules; no shoe fits everyone, though forcing people to wear the wrong shoe size can do more harm than good.
danielheath
> I would imagine that such a tool to infer emotional states would be most useful for autistic people, who are, as I can attest, somewhat handicapped on that front.
It might well be a useful tool to point at yourself.
It's an entirely inappropriate one to point at someone else. If you can't imagine having someone estimate your emotional state (usually incorrectly), and use that as a basis to disregard your opinion, you've lived a very different life to mine. Don't let them hide behind "the AI agreed with my assessment".
cwillu
On the other hand, as someone whose emotional state is routinely incorrectly assessed by people, I can't imagine a worse hell than having that misassessment codified into an AI that I am required to interact with.
hnburnsy
If AI is outlawed then only outlaws will have AI.
hcfman
I would like to see a new law under which any member of government found obstructing justice is put in jail.
Except that the person responsible for the travesty of justice of framing 9 innocent people in this Dutch series is currently the president of the court of Maastricht.
https://npo.nl/start/serie/de-villamoord
Remember: the courts have the say as to who wins and loses under these new, vague laws. The ones running the courts have to not be corrupt, but the case above shows that this is in fact not so.
Havoc
For once, that doesn't seem overly broad. I pretty much agree with the whole list.
johndhi
The "high risk" list is where the breadth comes in
hcfman
Laws that are open to interpretation, with drastic consequences if they're interpreted against your favour, pose unacceptable risk to business investors and stifle innovation.
jeffdotdev
There is no law that isn't open to interpretation. There is a reason for the judicial branch of government.
dns_snek
People said that about GDPR. Laws that don't leave any room for interpretation are bound to have loopholes that pose unacceptable risk to the population.
daedrdev
I think it's quite clear GDPR has indeed led to lower investment and delayed or cancelled products in Europe.
Spooky23
Speed isn't always ideal. My favorite example, though it's getting dated, is hotel WiFi.
Early adopters signed contracts with companies that provided shitty WiFi at high prices for a long time. A $500 hotel would have $30/night connections that were slow, while the Courtyard Marriott had it for free.
jeffgreco
Yet better privacy protections than we in the States enjoy.
sporkydistance
GDPR is one of the best pieces of legislation to come out of the EU this century.
It is the utter bane of "move fast and break things", and I'm so glad to have it.
I will never understand the submissive disposition of Americans toward billionaires who sell them out. They are all about being rugged cowboys while smashing the systems that foster their own well-being. It's like their pathological independence makes them shoot at their own feet. Utterly baffling.
_heimdall
What I don't see here is how the EU is actually defining what is and is not considered AI.
> AI that manipulates a person’s decisions subliminally or deceptively.
That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Or is this limited specifically to LLMs, since OpenAI has so successfully convinced us that LLMs really are AI and previous ML tools weren't?
vitehozonage
Exactly what I thought too.
For at least 10 years now, with targeted advertising, it has been completely normalised and typical to use machine learning to intentionally, subliminally manipulate people. I was taught less than 10 years ago at a top university that machine learning was classified as AI.
It raises many questions. Is it covered by this legislation? Other comments make it sound like they created an exception, so it is not. But then I have to ask: why make such an exception? What is the spirit and intention of the law? How does it make sense to create such an exception? Isn't the truth that the current behaviour of the advertising industry is unacceptable, but it's too inconvenient to try to deal with that problem?
Placing the line between acceptable tech and "AI" is going to be completely arbitrary and industry will intentionally make their tech tread on that line.
dijksterhuis
the actual text in the ~act~ guidance states:
> Exploitation of vulnerabilities of persons, manipulation and use of subliminal techniques
techcrunch simplified it.
from my reading, it counts if you are intentionally setting out to build a system to manipulate or deceive people.
edit — here's the actual text from the act, which makes it clearer that it's about whether the deception is purposefully intended for malicious reasons:
> the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that causes or is reasonably likely to cause that person, another person or group of persons significant harm
Bjartr
Seems like even a rudimentary ML model powering ad placements would run afoul of this.
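To make "rudimentary" concrete, here's a minimal sketch of the kind of model I mean; the features and data are invented:

    # A deliberately rudimentary ad-placement model: logistic regression
    # predicting clicks from user features. Invented toy data, purely to
    # illustrate what even "basic" ML ad targeting involves.
    from sklearn.linear_model import LogisticRegression

    # [age, past_clicks] per user; label = clicked the ad or not
    X = [[25, 3], [40, 0], [31, 5], [52, 1]]
    y = [1, 0, 1, 0]
    ctr_model = LogisticRegression().fit(X, y)

    # serve the ad variant with the highest predicted click probability
    print(ctr_model.predict_proba([[29, 4]])[0][1])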
dijksterhuis
> In addition, common and legitimate commercial practices, for example in the field of advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful manipulative AI-enabled practices.
anticensor
That is by design.
blackeyeblitzar
Even ads without ML would run afoul of this
dist-epoch
so "sex sells" kind of ads are now illegal?
troupo
> What I don't see here is how the EU is actually defining what is and is not considered AI.
Because instead of reading the source, you're reading a sensationalist article.
> That can be a hugely broad category that covers any algorithmic feed or advertising platform.
Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
----
We're going to get a repeat of GDPR aren't we? Where 8 years in people arguing about it have never read anything beyond twitter hot takes and sensationalist articles?
_heimdall
Sure, I get that reading the act is more important than the article.
And in reading the act, I didn't see any clear definitions. There are broad references to what reads much like any ML algorithm, with carve-outs for areas where manipulating or influencing is expected (like advertising).
Where in the act does it actually define the bar for a technology to be considered AI? A link or a quote would be really helpful here, I didn't see such a description but it is easy to miss in legal texts.
robertlagrant
The briefing on the Act talks about the risk of overly broad definitions. Why don't you just engage in good faith? What's the point of all this performative "oh this is making me so tired"?
pessimizer
> Again, read the EU AI Act. It's not like it's hidden, or hasn't been available for several years already.
You could point out a specific section or page number, instead of wasting everyone's time. The vast majority of people who have an interest in this subject do not have a strong enough interest to do what you claim to have done.
You could have shared, right here, the knowledge that came from that reading. At least a hundred interested people who would have come across that clear definition in your comment will now instead continue ignorantly making decisions you disagree with. Victory?
scarface_74
Maybe if the GDPR was a simple law, instead of 11 chapters and 99 articles, and all anyone got as a benefit from it is cookie banners, it would be different.
HeatrayEnjoyer
GDPR doesn't benefit anyone? Is that a joke?
troupo
> Maybe if the GDPR was a simple law
It is a simple law. You can read it in an afternoon. If you still don't understand it 8 years later, it's not the fault of the law.
> instead of 11 chapters and 99 articles
News flash: humans and their affairs are complicated
> all anyone got as a benefit from it is cookie banners
Please show me where GDPR requires cookie banners.
Bonus points: who is responsible for the cookie banners.
Double bonus points: why does HN hail Apple for implementing "ask apps not to track", boo Facebook and others for invasive tracking, ... and boo GDPR, which literally tells companies not to track users?
kazinator
OK, so certain documents are allowed to exist out there; they are not banned. But if you train a mathematical function to provide a very clever form of access to those documents, that is banned.
That is similar to, say, some substance being banned above a certain concentration.
Information from AI is like moonshine. Too concentrated; too dangerous. There could be methyl alcohol in there that will make you go blind. Must control.
mhitza
> AI that attempts to predict people committing crimes based on their appearance.
Should have been
> AI that attempts to predict people committing crimes
hcfman
Except government systems for the same. In the Netherlands we had the benefits affair: a system that attempted to predict people committing benefits fraud. It destroyed the lives of more than 25,000 people before anyone intervened.
Do you think they are going to fine their own initiatives out of existence? I don't think so.
However, they also have a completely extrajudicial approach to fighting organised crime, guaranteed to be using AI approaches on the banned list. But you won't get any freedom of information request granted to investigate anything like that.
For example, any kind of investigation would often involve knowing which person filled a particular role. They won't grant such requests, claiming that because it involves a person, it's personal data. They won't tell you.
Let's have a few more new laws that protect the citizens, please, not handles for government SLAPPs.
HeatrayEnjoyer
Why?
troupo
Instead of relying on Techcrunch and speculating, you could read sections (33), (42), and (59) of the EU AI Act yourself.
mhitza
Article 59 seems relevant; the other two, on a quick skim, don't seem to relate to the subject.
> 2. For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security, under the control and responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.
https://artificialintelligenceact.eu/article/59/
Seems like it pretty easily allows member states to add national laws that let them skirt around the restrictions.
hcfman
Yep, they want to be able to continue to violate human rights and do the dirty.
cccbbbaaa
There is the same kind of language in GDPR (see article 2(2)D), but it still did not prevent this decision: https://www.laquadrature.net/en/2025/01/31/justice-finally-f...
dijksterhuis
link to the actual act: https://artificialintelligenceact.eu/article/5/
link to the q&a: https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
(both linked in the article)
rustc
Does this affect open weights AI releases? Or is the ban only on the actual use for the listed cases? Because you can use open weights Mistral models to implement probably everything on that list.
ben_w
Use and development.
I know how to make chemical weapons in two distinct ways using only items found in a perfectly normal domestic kitchen; that doesn't change the fact that chemical weapons are banned.
"""The legal framework will apply to both public and private actors inside and outside the EU as long as the AI system is placed on the Union market, or its use has an impact on people located in the EU.
The obligations can affect both providers (e.g. a developer of a CV-screening tool) and deployers of AI systems (e.g. a bank buying this screening tool). There are certain exemptions to the regulation. Research, development and prototyping activities that take place before an AI system is released on the market are not subject to these regulations. Additionally, AI systems that are exclusively designed for military, defense or national security purposes, are also exempt, regardless of the type of entity carrying out those activities.""" - https://ec.europa.eu/commission/presscorner/detail/en/qanda_...
ItsBob
> AI that attempts to predict people committing crimes based on their appearance.
FTFY: AI that attempts to predict people committing crimes.
By "appearance" are they talking about a guy wearing a hoodie must be a hacker or are we talking about race/colour/religious garb etc?
I'd rather they just didn't use it for any kind of criminal application at all if I have a say in it!
Just my $0.02
layer8
The actual wording is: "based solely on the profiling of a natural person or on assessing their personality traits and characteristics".
The Techcrunch article oversimplifies and is borderline misleading.
stared
There was a joke:
- Could you tell from an image if a man is gay?
- Depending on what he is doing.
troupo
> I'd rather they just didn't use it for any kind of criminal application at all if I have a say in it!
Instead of relying on Techcrunch and speculating, you could read sections (33), (42), and (59) of the EU AI Act yourself.
I am not an expert, but there seems to be an overlap in the article between 'AI' and, well... just software, or signal processing:
- AI that collects “real time” biometric data in public places for the purposes of law enforcement.
- AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
- AI that uses biometrics to infer a person's characteristics.
All of the above can be achieved with plain software, statistics, and old ML techniques, i.e. 'non-hype' AI.
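As a minimal sketch: classic face detection, well over a decade old, with no deep learning in sight (the input filename is hypothetical):

    # Classic pre-deep-learning face detection (Viola-Jones, 2001) via
    # OpenCV. Sketch only: "street_scene.jpg" is a hypothetical input.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    img = cv2.imread("street_scene.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print(f"detected {len(faces)} faces")  # (x, y, w, h) boxes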
I am not familiar with the details of the EU AI Act, but it seems like the article is simplifying important details.
I assume the ban is on the purpose/usage rather than whatever technology is used under the hood, right?