
Streaming platform Twitch added to Australia's teen social media ban

gnarlouse

This doesn’t strike me as “bad”. Seeing the content on Twitch and how parasocial it is, it doesn’t seem healthy for kids under 18, tbh. Like, Facebook was the craze when I was exiting high school, and then it was Instagram/Snapchat/Twitter through college. “Quitting” social media was one of the healthiest adult choices I ever made; comparison is the thief of joy, blah blah blah.

johnisgood

The problem is that it sets a precedent, and next they will come for other websites whose bans will strike you as "bad".

Edit: I can definitely see them banning anything related to Linux and OS resources because of how processes are handled, e.g. "kill parent", "kill child", and so on. The term "kill" already has to be censored on many websites. Of course context matters, but people really have difficulty with that these days.
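To be clear why the word is unavoidable: "kill" is the literal name of the standard call for signalling processes, parents and children included. A minimal Python sketch using only the standard library (Unix-only, purely to show the vocabulary):

    import os
    import signal
    import subprocess

    # Spawn a child process, then "kill the child": this is the standard,
    # unavoidable terminology for sending it a termination signal.
    child = subprocess.Popen(["sleep", "60"])
    os.kill(child.pid, signal.SIGTERM)  # ask the child to terminate
    child.wait()

    # The parent is addressed the same way ("kill the parent"):
    print("parent pid:", os.getppid())  # signalling it would also use os.kill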

lm28469

You can use this train of thought to argue against laws in general; it doesn't sound like a very strong argument.

johnisgood

Within this context, how? I do not think it can be used to argue against laws in general. Plus, we have plenty of experience by now with this kind of ban setting a precedent and them coming for your beloved websites. It is not even debatable today.

vkou

While your slippery slope argument can be applied to literally anything that children are restricted from, it consistently fails to materialize.

Perhaps its predictive power is not as expansive as you think.

nottorp

They won't ban Linux for containing "kill" but because it teaches kids to "hack" :)

threatofrain

I think there is such a thing as a moat on legislative and cultural movement, whether that moat is good or not. So rather than a "slippery slope", I think of it more as reducing or building moats.

larodi

It’s actually a good example for other nations to follow. The ban will still be circumvented and the content sought after, which is also a good thing in its own way, and that’d be OK. Meanwhile the general uninformed public, which is oblivious to the dangers, will be spared.

throwaway290

The Twitch problem is not just parasocial relationships... it's also possible to get stalked. Sometimes a whole chat full of kids is prompted into disclosing their ages/locations, and they do it because their guard is down.

nottorp

What does this ban actually mean?

As I understand it, it bans kids from creating an account. They can still doomscroll or waste their life watching reels without a login, can't they?

This may push social media back to making their content accessible without an account :)

rhcom2

At least for Twitch that means they can't give money to streamers making a small country's GDP a month.

throwaway48476

Child social media bans are just an excuse to deanonymize the internet so politicians can send the police after their critics.

https://anzsog.edu.au/research-insights-and-resources/resear...

energy123

The ban is supported by a large majority of Australia according to polling, so this is democracy working as intended.

squigz

Possibly because they've been misled into thinking it will be effective.

phatskat

On the one hand I’m glad HN doesn’t do embedded images, on the other I’d really like to see this thread just be popcorn eating GIFs.

It’ll certainly be interesting to see how this plays out - I feel like Twitch reaches such a large and diverse demographic that the response will be palpable.

I haven’t looked, but I’m assuming this ban already applies to YouTube, right?

michaelt

> I haven’t looked, but I’m assuming this ban already applies to YouTube, right?

Apparently someone from the government looks at the site and assesses how 'core' the online social interaction is to the site.

Banned: Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Kick and Twitch

Not banned (yet): Pinterest, YouTube Kids, Google Classroom, WhatsApp, Discord, online games like Roblox, AI chat.

https://www.bbc.co.uk/news/articles/cwyp9d3ddqyo

viktorcode

This list indeed seems to be a random selection

ares623

As a parent myself, I don't find the list random. The ones on the banned list deserve to be there. I would argue YouTube Kids needs to be banned too.

dyauspitr

Seems reasonable to me.

nunobrito

That wouldn't be totally bad. Besides reducing the power of Google and its algorithms over people, it would give other platforms an advantage and room to grow.

But of course those alternatives would also be banned at some point in time.

akimbostrawman

Banned for the wrong reason. It's a cam girl site targeted at kids.

0dayz

This is IMO a much-needed move, given how absurdly socially manipulative Twitch and other socially oriented platforms have become.

A lot of the top streamers especially employ cult-like social manipulation to ensure they stay relevant and continue to earn a boatload from exploiting their fans. Obviously it didn't use to be this bad, and there are still streamers not doing this, but the general trend is downwards, towards enabling and normalizing antisocial behavior.

WastedCucumber

Are they going to do age verification? And how?

The only way I can think of would effectively require identity verification as well.

mulquin

They will most likely utilise some sort of system where a photo or short video is uploaded and an AI makes a determination of age. It's not going to be accurate, but it will probably be compliant enough.
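For what it's worth, a minimal sketch (Python) of how such a gate tends to be wired up. estimate_age_from_image is a stand-in for whichever vendor model a service licenses; the grey-zone and escalation logic are illustrative, not taken from the rules (16 is Australia's minimum age under the ban):

    # Hypothetical photo-based age gate; names and handling are
    # illustrative, not prescribed by the Australian rules.
    MIN_AGE = 16       # Australia's minimum age under the ban
    GREY_ZONE = 2      # estimates this close to the line get escalated

    def estimate_age_from_image(image_bytes: bytes) -> float:
        """Placeholder for a licensed third-party estimation model."""
        raise NotImplementedError("plug the vendor's model in here")

    def check_age(image_bytes: bytes) -> str:
        estimated = estimate_age_from_image(image_bytes)
        if estimated >= MIN_AGE + GREY_ZONE:
            return "allow"     # confidently over the line
        if estimated <= MIN_AGE - GREY_ZONE:
            return "deny"      # confidently under the line
        return "escalate"      # borderline: fall back to ID checks etc.

The "escalate" branch is where the ID-upload fallback, and the resulting data-breach surface people worry about, comes in.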

throwaway48476

Then they will say it's broken and demand digital ID to 'fix' it.

dyauspitr

This doesn’t go far enough. Kids under 18 shouldn’t be allowed to use smartphones period.

ares623

I wish they would’ve just banned smartphones and tablets for kids, the same way alcohol is banned. Sure, parents will still buy them for their kids, but at least for a few hours a day they’ll need to leave the devices at home.

Springtime

Yet another article on age verification that only focuses on the underage angle and not the implications for adults, who will be subject to the same assessment mechanics.

I was curious about how they'd approach it. The guidelines state that, to assess the age of an arbitrary user, a service should ideally passively combine multiple analyses of existing user content (linguistic/video/photo/audio) along with other indicators of age (account age, self-declared age, etc.). No one metric is considered good enough on its own, despite the guidelines also saying there is no minimum accuracy requirement. Stronger indicators like on-demand biometric checks and government ID are to be used only if other approaches fail.

They also encourage using device, browser and network fingerprinting to prevent suspected underage users from creating new accounts. Any existing account suspected of being underage from combinations of metrics (types of users associated with, type of content viewed, self-declared age, etc.) will have to be deactivated/removed.
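As a rough sketch of what "combine several weak signals, none decisive on its own" could look like in code; the signal names, weights and cutoff below are hypothetical, since the guidelines describe the approach but prescribe no algorithm:

    # Hypothetical combination of weak age signals; each value is a
    # probability-of-being-underage in [0, 1]. Weights are illustrative.
    SIGNAL_WEIGHTS = {
        "self_declared_age": 0.2,    # easily faked, so low weight
        "account_age": 0.2,
        "linguistic_estimate": 0.3,  # e.g. text-style classifier output
        "content_estimate": 0.3,     # e.g. viewed-content classifier output
    }

    def likely_underage(signals: dict[str, float], cutoff: float = 0.5) -> bool:
        score = sum(SIGNAL_WEIGHTS[name] * value
                    for name, value in signals.items()
                    if name in SIGNAL_WEIGHTS)
        return score >= cutoff

    # A user who self-declares as an adult can still be flagged if the
    # behavioural signals point the other way:
    print(likely_underage({
        "self_declared_age": 0.0,
        "account_age": 0.8,
        "linguistic_estimate": 0.7,
        "content_estimate": 0.6,
    }))  # True (0.55 >= 0.5)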

The definition of 'social media' itself was made incredibly broad: any service providing 'social interaction' between two or more users that allows sharing material. Arguably even email fits this, and they evidently realized it, since email was later named as an exemption in reporting I saw, along with some other service types (such as messaging) for the time being.

Large tech sites can afford all this (many do it already), but the actual go-to approach in the UK and Australian cases has been for services to just use third-party facial capture to verify age (e.g. Discord, despite being known to already store analysis of its users), or to block the region outright (e.g. Imgur).

Part of this is probably because, if you don't have an existing account, there's no existing data to analyze, so services want a single approach that still passes the requirements.

One result of this, linked recently, was false positives for Australian Discord users[1] who wound up having to contact customer support with government IDs when the biometrics failed to accurately judge their age, before having their customer support data breached.

The guidelines even acknowledge that scams, privacy complaints and data breaches may increase due to such age assessments, yet this only matters if a 'substantial' number of adults receive false positive age assessments for a service, in which case whatever approach caused them is considered 'unreasonable'.

[1] https://ia.acs.org.au/article/2025/discord-breach-hits-68-00...

nunobrito

...and nothing of value was lost.