
Tough news for our UK users

179 comments

July 20, 2025

zkmon

I personally know how this works in Europe & the UK. Not only government; this applies to big companies such as large banks as well. They recruit two kinds of staff: one that works to progress some work, and one that puts as many hurdles in the way as possible and calls it risk management, compliance, security, regulatory, etc. (RCSR). They hire approximately 3 times more people into these RCSR positions than into the technical and real-work positions. These RCSR guys dump thousands of pages of guidelines, making it impossible for any meaningful work to progress. My technical team has been running around for 4 months for approvals to test a database upgrade.

Top management can never go against the RCSR guys, who are like the priests of the church in medieval ages. And the RCSR guys have no goals linked to the progress of the real work. They don't like anything that moves. It's a risk.

Management thinks that RCSR helps with controls around the work. But what happens is that when you put more people into building controls, they deliver fortress walls around your garbage bins.

landl0rd

America has a similar (if less severe) version of this problem, where nobody can contradict any compliance-adjacent function. Because if you get sued, someone will ask you "why did you ignore the guidance of your compliance team??" and might even try to use that to justify piercing the corporate veil. Of course, compliance types have no incentive to let business happen, just as business types have limited incentive to operate in a compliant fashion, but lawsuits favor compliance always taking precedence with a hyper-cautious approach.

flappyeagle

And yet Uber and Airbnb and Polymarket and…

runlevel1

In theory, when there's viable competition, a competitor will take advantage of their competitor's overly-cautious interpretation.*

But if the regulation is indeed oppressive or byzantine, everybody hurts and only the biggest survive.

*Social contagion effects on risk perception can be a confounding factor here, though.

Havoc

The act does seem poorly thought out in practice, & I really dislike the UK's overall mindset toward online safety. The laws consistently feel like they were written by someone who prints out emails to read...

That said, the thinking that smaller platforms should equal exemptions seems a touch flawed too, given the topic. If you're setting out to protect a child from content that, say, promotes suicide, the size of the platform isn't a relevant metric. If anything, the smaller, less visible corners of the internet (like the various chan sites) may even be higher risk.

crote

Smaller entities are rarely looking for a full exemption; they just want the regulations to be implementable without being a megacorp.

Take something like a plastic packaging tax, for example. A company like Amazon won't have too much trouble setting up a team to take care of this, and they can be taxed by the gram and by the material. But expecting the same from a mom-and-pop store is unreasonable - the fee isn't the problem, but the administrative overhead can be enough to kill them. Offering an alternative fixed-fee structure for companies below certain revenue thresholds would solve that problem, while still remaining true to the original intention.

Havoc

I get the impossible bind this puts small companies in, & having people resort to IP-blocking the entire country is clearly a sign of a broken setup.

But playing devil's advocate a bit here: if the risk profile to the kid is the same on big and small platforms, then there isn't any ethical room for a lighter regime. Never mind full exemption, any exemption. The whole line of reasoning that "you can't afford it, therefore more kids potentially getting hurt on your platform is more acceptable" just doesn't play. And similarly, if you do provide a lighter-touch regime, then the big players will rightly say: well, if that is adequate to ensure safety, then why exactly can't we do that too?

Platform size just isn't a relevant metric on some topics - child safety being one of them. Ethically whether a child is exposed to harm on a small or big website is the same thing.

Not that I think this act will do much of anything for child safety. Which is why I think this needs to go back to the drawing board entirely. Because if we're not effectively protecting children, yet we're killing businesses (and freedoms), then wtf are we doing?

areoform

> But playing devils advocate a bit here if the risk profile to the kid is the same on big and small platforms then there isn't any ethical room a lighter regime

What risk profile to which kids?

Genuine question, whose kids? For what? Why? Where? When?

Please feel free to correct me, but from my reading, the harms these hypothetical kids face are fairly nebulous. The examples are usually troubled children from troubled homes or environments who did things that young people in distress do. Is there any data to contradict this?

Who is this all for?

nine_k

The question is whether the laws are efficient. Imagine that, as protection from the occasional meteorite, all buildings were mandated to upgrade their roofs to 1 meter of solid concrete. We cannot allow another random space rock to kill another innocent inhabitant.

This, of course, would disproportionately burden smaller buildings, while some larger buildings would have little trouble complying. Guess who would complain more often. The mandate, while outwardly insane, would clear small huts off the market, while the owners of large reinforced buildings would be able to reclaim the land, as if by an unintended consequence.

Driving the risk tolerance of a society lower and lower interestingly dovetails with the ease of regulatory capture by large incumbent players, as if by coincidence.

pinoy420

[dead]

corford

So they're whining that the UK has laws with teeth, which makes it hard for them to offer AI sex bots without investing in adequate protections for minors?

Ralfp

It may be so, but as somebody else mentioned, every site that lets people register an account and share stuff is affected, because having legal age verification on a site requires you to pay a third-party provider, and writing paperwork for the government is too much for your site about pet hamsters.

beejiu

I have no idea what this service is, but clicking around on their site they mention it's 18+, that they don't allow "Child pornography, Sexualized depictions of minors, Heavy gore, Bestiality, Sexual violence".

I don't agree with everything in the Online Safety Act, but if anything needed a risk assessment, it's surely this?

gh2k

Agreed. I read the article and I was thinking "this sounds pretty bad", but after clicking through to the main site my experience was "there's no explanation of what this is, but it looks like it probably falls within the need for some form of regulation".

Are there services which offer a less... risky... service that are similarly affected here?

skissane

From what I understand, it is just giving people access to AI models with minimal censorship - so illegal content [0] is still disallowed, but otherwise you can do what you want. And I’m sure a lot of that will be sexual material, but that’s more about the nature of the market demand for uncensored AI than anything inherent to the offering in itself

[0] “law” here isn’t just laws made by governments, but also regulations made by e.g. Visa and Mastercard

strken

They appear to be objecting to the scope and extreme cost of the risk assessment rather than its existence.

beejiu

They're in scope because they provide a pornographic service; I don't see how that's arguable. If you don't have the competence in-house to follow the guidelines and need to hire expensive lawyers, then yes, it's an "extreme cost", but that's not true of all businesses.

NitpickLawyer

The way these laws and regs don't even consider the provider's size is aggravating. Doubly so because they always use the "big bad provider" and "think of the kids" as populist support-gaining strategies, but in the end the same big providers benefit. They have the billions to spend on everything from lawyers to fiscal optimisation, and they rake in the entire market since they're the only ones left to serve it.

That's happening with the AI Act here as well. Almost no one wants to even touch the EU shitshow, and they're still going forward with it. Even Mistral was trying to petition them, but the latest news suggests it had no effect. Fuck us, I guess, right? Both consumers and SMBs will lose if this passes as is.

skippyboxedhero

They did consider the provider size; it is probably the main element of this law. The problem is that the consideration was to assume that you are always dealing with mega-large companies with teams of lawyers... because these companies have been lobbying regulators and civil servants (not Parliament so much; they don't matter anymore) for years. This is extremely common in the UK (very low corruption by historical measures, but when decisions are actually made, there is corruption almost everywhere). The provider size was an active choice.

It isn't populist either; no one supports this. The UK has media campaigns run by newspapers; no one reads the papers but politicians, so these campaigns start to influence politicians. Always the same: a spontaneous media campaign across multiple newspapers (low impact on other kinds of media), a child as a figurehead, and a law with significant implications that have nothing to do with the publicly stated aim.

Democracy has very little to do with it. Elections happen in the UK but policies don't change, it is obvious why.

hackerjewsss

>it is obvious why

jews?

edelsohn

Regulatory capture. The impact on small providers was intended.

librasteve

lobby fodder

nocoiner

“if people find other methods to access the site, that is entirely on them - there are no legal consequences for users.”

For a site operator who seems really concerned about potential liability under this law, I sure wouldn’t have put this in writing. Feels like it really undermines the rest of the post and the compliance measures being taken.

ffsm8

FWIW, he very explicitly states that the end goal is to return to the UK market and thus be compliant. They just misjudged the scope of the regulation, forcing him to ban the UK, at least temporarily until a solution is found.

PhoenixReborn

It's basically impossible to prevent people from using VPNs without some serious governmental control over every telco - which of course may be the case in the UK, but I don't think a site operator can be held liable for that in any sane way.

bink

It's one thing to not try to prevent people from circumventing the law, it's quite another to encourage them to do so.

iLoveOncall

As the website says, it's not illegal for users in the UK to circumvent the restrictions using a VPN, so they're not recommending anything illegal.

harvey9

China's Great Firewall is reported to take resources the UK just couldn't muster. The UK is still at the level of storing highly classified information in Excel and sending it by email.

koakuma-chan

What's wrong with email? I keep seeing "email is not a secure means of communication", but doesn't email use TLS?

foldr

> which of course may be the case in the UK

People aren’t prevented from using VPNs in the UK, in case anyone is unclear on this.

arrowsmith

Many such cases:

https://www.thehamsterforum.com/threads/big-sad-forum-news-o...

(Yes, this is a forum for people with pet hamsters.)

glaucon

The unfortunate, but understandable, fallback suggestion from thehamsterforum of everyone moving over to Instagram shows why large corps _love_ laws like this. More laws just raise the barrier to entry until only those with entire office blocks of lawyers can afford to participate.

sswaner

Makes Fleabag’s cafe more normal (Guinea Pigs are not Hamsters, I know).

neilellis

If you are UK citizen please sign petition: https://petition.parliament.uk/petitions/722903

phtrivier

My cursory understanding of the ruling is that it applies if you have several million users in the UK... [1]

Is that the case here (and it just happens that I have no clue what this particular site is about) ?

Or am I grossly misunderstanding the act (very likely I guess since IANAL) ?

[1] https://www.onlinesafetyact.net/analysis/categorisation-of-s...

-------------------------------------------

Ofcom’s advice to the Secretary of State

Ofcom submitted their advice – and the underpinning research that had informed it – to the Secretary of State on 29 February 2024 and published it on 25 March. In summary, its advice is as follows:

Category 1

Condition 1:

    Use a content recommender system; and
    Have more than 34m UK users on the U2U part of the service

Condition 2:

    Allow users to forward or reshare UGC; and
    Use a content recommender system; and
    Have more than 7m UK users on the U2U part of the service

Ofcom estimates that there are 9 services captured by condition 1 and 12-16 likely to be captured by condition 2. There is one small reference in the annex that the 7m+ monthly users threshold corresponds to the DSA (A6.15).

Category 2a (search)

    Not a vertical search service; and
    Have more than 7m UK users

Ofcom estimates that there are just 2 search services that currently sit (a long way) above this threshold, but that it is justified to put it at this level to catch emerging services.

Category 2b (children)

    Allow users to send direct messages; and
    Have more than 3m UK users on the U2U part of the service

Ofcom estimates that there are “approximately 25-40 services” that may meet this threshold.

-------------------------------------------

Hizonner

Those are thresholds for extra requirements.

https://www.ofcom.org.uk/siteassets/resources/documents/cons...

Everybody (who's not specifically exempted by Schedule 1, which has nothing to do with what you linked to) gets a "duty of care". Everybody has to do a crapton of specific, stupid (and expensive) administrative stuff. Oh, and by the way, you'd better pay a lawyer to make sure that any Schedule 1 (or other) exemption you're relying on actually applies to you. Which they may not even be able to say, because of general vagueness and corner cases that the drafters didn't think of.

Also, it's not a "ruling". It's a law with some implementing regulations.

rafram

Ofcom’s “Does the Online Safety Act apply to your service?” questionnaire [1] doesn’t use those thresholds, and it makes it sound like the law would apply to any site with paying customers in the UK.

[1]: https://ofcomlive.my.salesforce-sites.com/formentry/Regulati...

magicalhippo

Just ran through the questionnaire, and it's crystal clear that anything resembling a typical web forum will need to follow this law. No thresholds, as you mention. It doesn't need paying customers as far as I could see; it's enough that you have UK visitors.

landl0rd

I just basically struggle with the concept of "x people from our country chose to talk to your web server (hosted elsewhere, responds to anybody), so we now claim jurisdiction (with possible criminal penalties) over that server (hosted elsewhere) and you (who live elsewhere)."

pseudo0

Realistically they can't, unless the service owner lives in a country with an extradition treaty with the UK, and that country has an equivalent law in place. But most service operators don't want to deal with that stress, so they will just IP block the UK and Brits who know how to use a VPN will just keep using the service.

I wouldn't be surprised if this ends up being a topic in trade negotiations with the US in the future, though, since it is a trade barrier that imposes significant regulatory cost on US companies for content that is legal in the US. E.g. the proscribed categories of illegal content include knives and firearms, hate, etc.

Vespasian

That particular approach is actually pretty sensible if you (as a law maker) want to get any results.

Otherwise everyone from small sites to Facebook would just shop around jurisdictions and formally operate their websites from wherever fits them best.

And if a service is used by citizens of your country it makes sense to scale requirements by the impact it is having on them.

This particular law may not be great overall, but I've got no issues with this method. As a site provider outside the UK, it's trivially easy to avoid liability (by blocking people).
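The "blocking people" step several commenters mention is straightforward in practice. A minimal sketch, assuming the site sits behind a CDN or proxy that attaches an ISO 3166-1 country-code header to each request (Cloudflare's `CF-IPCountry` is used here as an example); the handler shape is illustrative, not any particular site's actual code:

```python
# Hypothetical country-level block, assuming a trusted upstream proxy sets a
# country-code header (e.g. Cloudflare's CF-IPCountry). Illustrative only.

BLOCKED_COUNTRIES = {"GB"}  # ISO 3166-1 alpha-2 code for the United Kingdom


def is_blocked(headers: dict) -> bool:
    """Return True if the request originates from a blocked country."""
    country = headers.get("CF-IPCountry", "").upper()
    return country in BLOCKED_COUNTRIES


def handle_request(headers: dict) -> tuple[int, str]:
    """Serve the request, refusing blocked regions with HTTP 451."""
    if is_blocked(headers):
        # 451 Unavailable For Legal Reasons (RFC 7725) fits this case
        return 451, "This service is not available in your region."
    return 200, "Welcome!"
```

In a real deployment the block would normally live at the CDN or firewall edge rather than in application code, and the header is only trustworthy if the proxy strips client-supplied copies; VPN users, as the thread notes, bypass it entirely.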

speerer

Your source is a 2024 piece about recommendations that had been made, not about how the law turned out.

boznz

A 250+ page law will have so many edge cases that I doubt you would want to test it, especially in a country with a government that has recently cracked down and arrested people for online "crimes". It's sad the UK government has descended to this level of stupidity.

semiquaver

> the ruling

what ruling are you referring to? This is about the Online Safety Act, an act of parliament.

x0x0

That's an incorrect understanding. It creates a range of requirements for essentially any service with users in the UK if there is UGC or messaging.

Then there are additional requirements applied to 3 classes of services: Category 1, 2A, 2B. The latter have the thresholds as discussed above.

But, as usual, it's poorly written. E.g. a "content recommender system": if you choose, via any method, to show content to users, you have built a recommendation system. See e.g. Wikimedia's concern that showing a picture of the day on a homepage is a bona fide content recommendation system.

The definition:

> (2)In paragraph (1), a “content recommender system” means a system, used by the provider of a regulated user-to-user service in respect of the user-to-user part of that service, that uses algorithms which by means of machine learning or other techniques determines, or otherwise affects, the way in which regulated user-generated content of a user, whether alone or with other content, may be encountered by other users of the service.

https://www.legislation.gov.uk/uksi/2025/226/regulation/3/ma...

If you display UGC in any way, it's essentially impossible not to do that, because you pick which UGC to display somehow.

dsign

There's an explainer of the act here:

https://www.gov.uk/government/publications/online-safety-act... .

From what I'm reading, Amazon will have to implement age checks over 8/10 of its book inventory, with the other 2/10 opening the company to liability around the very broad definition of "age-appropriate experiences for children online." And yes, janitorai is correct that the act applies to them and the content they create, and a blanket ban on UK users seems the most appropriate course of action.

For what it's worth, the act does not seem to apply to first-party websites, as long as visitors of that website are not allowed to interact with each other. So, say, a blog without a comment feed should be okay.

card_zero

Possibly a blog with comments would be OK:

https://onlinesafetyact.co.uk/ra_blog_with_comments/

pacifika

That’s not what Ofcom clarified previously: see the end of https://www.theregister.com/2025/02/06/uk_online_safety_act_...

IlikeKitties

How can they even enforce this? What happens if you run a platform in, let's say, Germany and just tell them to fuck off, UK law is of no interest to me?

crazygringo

Generally, if they identify you and you decide to visit London as a tourist for a week, you could be arrested at the airport, if they wanted to enforce this. So obviously it's of interest if you ever want to take a trip to the UK.

alexpc201

It’s a damn liability. You take a plane to New York, but for some reason, it gets emergency diverted to Heathrow, and you end up arrested.

amelius

Then you might get blocked, I suppose.

For an extreme case, ask Julian Assange what might happen if a country doesn't like what you put on the internet.

aosaigh

I don’t agree with the legislation, but I assume it's the same way they can go after you for any crime they perceive as being committed in the UK: extradition. It’s obviously incredibly unlikely.

landl0rd

Something generally has to be a criminal offense in both nations for one to extradite to the other.

bpodgursky

Most sane countries will only extradite if users have broken a law that also exists in the country where they actually live.

louthy

Or you travel to a 3rd country that has the same law and an extradition treaty with the UK.

Or, you travel to the UK! It’s a pretty popular destination and Heathrow is a major European hub. It would be easy to get caught out.

The law may well be onerous and misguided. But looking at that site, it seems they reeeeally should do their due diligence. Not just to avoid the long arm of UK justice, but other territories too. It looks extremely dubious.

Their mitigation doesn’t make sense either. If they don’t shut down UK accounts, and those accounts use UK credit cards and continue to use the service via a VPN, it could reasonably be argued that they know they’re providing a service to a UK resident. So they really need to do their homework.

What they’re actually complaining about is the cost of doing business. It sounds pretty amateurish.

daveoc64

Payments to your service from users in the UK could be blocked.

It's also possible for the owners or employees of the company to be held liable if they ever visit the UK.

jgilias

UK law may be of no interest to you. But they can still press criminal charges, and Germany _will_ extradite you.

prmoustache

Only if you are not a German national. Germany notified the UK a certain period after Brexit that it will not extradite its own citizens; it will only do that to other EU countries.

Having said that, it can greatly limit one's travel possibilities.

IlikeKitties

I'm a German National, so no, they won't extradite me.

nocoiner

I’m sure they won’t, but it might be annoying to never be able to take a flight that connects in London for the rest of your life.

I have no idea if this is a likely or even possible consequence, but that’s one way lots of people have gotten ensnared by the long arm of the law, even when jurisdiction is otherwise normally lacking.

kaashif

To be clear - German nationals can be extradited to other EU countries under German law, but not third countries like the UK.

If the UK had remained in the EU, then extradition might be possible (depending on whether courts approve it) but right now it's pretty unambiguously impossible.

zb3

Funny how it doesn't work this way if the country is the US...

throw_m239339

> How can they even enforce this? What happens if you run a plattform in let's say Germany and just tell them to fuck off, UK Law is of no interest to me.

If you are found criminally guilty in the UK, sure, you can avoid visiting the UK, but of course potential business partners in Germany will see your guilty record in the UK as a liability. It might impact your capacity to travel to other countries as well, if you have any sort of criminal record anywhere else, like Canada, Australia, New Zealand or the US...

There are also extradition treaties between the UK and Europe...

arbuge

This kind of thing makes me wonder if the internet as a whole is heading for a Kessler Syndrome kind of situation.

Another example: here in the USA we have 50 states each vying to regulate AI; my understanding is that the plan was to have the OBBB put them off from doing this for at least 10 years, but that effort failed, leaving them free to each do their own thing.

Complying with all rules and regulations in all jurisdictions to which a website/service could be exposed (i.e. worldwide, by default) seems like it's becoming a well nigh impossible task these days.