Social AI companions pose unacceptable risks to teens and children under 18

disambiguation

I want to know if anyone has answered the question: what does a healthy relationship with this thing look like?

All kids grow up and are eventually exposed to sex, drugs, and rock 'n' roll. These things are part of our world; you have to coexist with them. The problem with video games, social media, AI, and all things tech is that they're so new and evolving so fast that no one really knows what a healthy relationship looks like. Though awareness is growing, and we've started asking questions like: how much screen time is OK? At what age do I allow my kid to make a social media account? Should we be using our phones last thing before bed and first thing in the morning? Not to mention more widespread issues of privacy and exposure to content ranging from amusing to abusive.

AI as a "convincing BS artist" you can engage with endlessly is something I struggle to wrap my head around. My personal policy is to keep AI on a short leash: use it sparingly, don't over-rely on it, and always question its assertions. But allowing unrestricted access to a powerful tool that requires self-control and good judgment is inviting disaster. Banning it for kids makes sense, but what about everyone else?

Animats

Better social AI companions might help. They would be a step up from, say, the 30th percentile parent.

SF versions:

"I Always Do What Teddy Says" (1964), by Harry Harrison.[1]

"A Young Lady's Illustrated Primer", in Neal Stephenson's The Diamond Age.

[1] https://archive.org/details/bestofharryharri0000harr_z2p6

blobbers

Can someone link the actual products they're talking about? ChatGPT isn't exactly great at forming emotional bonds, but I could see some other app doing this.

sandinmyjoints

> We conducted extensive research on social AI companions as a category, and specifically evaluated popular social AI companion products including Character.AI, Nomi, Replika, and others, testing their potential harm across multiple categories.

K0balt

You’d be surprised. I think it varies with the update, but I’ve often suspected that it has been optimized for role-play at some level. As OAI looks for mass-market fit I expect this to continue.

2099miles

Character.AI is the one I've heard about most.

2099miles

I feel like this is one of the most obvious takes out there, yet parents are remarkably unaware of it. Parents need to be more aware of AI companions and Roblox and how harmful they are.

autoexec

Parents should be more aware of how terrible Roblox is generally. It's the kind of product that should be regulated out of existence.

whatshisface

It's high time we had a Minecraft Bill.

autoexec

I don't know what Minecraft is like these days now that Microsoft has its hands in it, but when I played it, Minecraft didn't have online gambling, microtransactions, sexual predators, child labor/exploitation, advertisements, brand ambassadors and celebrities manipulating kids, extremist propaganda, or any of the other harmful things Roblox targets children with and exposes them to.

If Minecraft is just like Roblox now, then sure, I'd be glad to back a bill to regulate that out of existence too.

lemoncookiechip

Honestly, I think they pose a far bigger risk to some adults. Adults have a harder time making friends and changing themselves when stuck in a loop, and loneliness is growing rapidly in big cities.

Of course children are children, and adults are responsible for their own choices.

Btw, I like generative AI and LLMs; I'm not trying to say "ban it" or "regulate it", just pointing out that lonely adults are a very real thing, and some of them can and will get stuck in this, the same way they can and do get stuck in other online hobbies.

ohso4

How exactly do they think that parents can ban it? You can just ask ChatGPT to become a social companion.

whatshisface

>How exactly do they think that parents can ban it?

Like this:

"You're allowed to walk to your friend's house, but don't suddenly sprint out into the street."

Or,

"You're allowed to talk to the librarian, but not the guy who stands around outside with a paper bag in his hands."

Or maybe,

"You're allowed to put things in the microwave, but not metal utensils."

tossandthrow

You regulate it, easy and simple. Simply make it illegal to provide a social companion to people below the age of 18.

lenerdenator

And, y'know, actually enforce the regulation.

tossandthrow

I don't think it is wise to get too caught up in enforcement.

- we don't want a society based on control.

Add enforcement proportionate to the risk it poses.

spyrja

Can't say I see this trend declining any time soon. People seem to find affirmation (and in some sense validation) in interacting with these LLMs. Provided the AI in question is well aligned, that shouldn't be much of a concern. Not much different from talking to a friend/therapist for emotional support, is it?

johnea

</sarcasm>

Sure, if they just make one with the guiding principles of the pink Hello Kitty assault rifle, it'll be great for the kids!

</serious>

All commercial products will be designed to maximize revenue to shareholders; no other factors will ever be considered.

Any deviation from this path will lead to shareholder lawsuits alleging failure to uphold fiduciary responsibilities.

waffletower

Is everyone subscribing to the paywall to read a solitary cross-indexed article? Or are most of us commenting on the paragraph we are allowed to see?

palmotea

"Social AI companions are the next frontier in EdTech, and should be welcomed with hope and optimism," said the VC. "Our mission is to change the world for the better, and anyone in our way is an evil Luddite trying to hurt you."

photochemsyn

I can see educational AI companions as workable in narrow contexts, e.g. a model fine-tuned on Paul Erdős and Martin Gardner and similar could be great for helping students work through math problem sets.

You'd probably want it to reject questions on religion and politics and human relationships to avoid the furious parental outrage, though. Narrow, well-defined contexts only. Even so some kids would come up with jailbreak strategies.

vincnetas

Can't read the article without a subscription. Is this OK with HN guidelines?

duskwuff

    // Run in the browser console: remove the paywall overlay,
    // then drop the class that hides the gated article body.
    document.querySelector("#user-plus-gate").remove()
    document.body.classList.remove("csm-premium-gated")
There's also a pair of more comprehensive reports (linked from the main page) on:

Social AI companions in general: https://www.commonsensemedia.org/sites/default/files/pug/csm...

Character.AI in particular: https://www.commonsensemedia.org/sites/default/files/pug/csm...

blobbers

this guy javascript debug consoles.

bpodgursky

I mean let's be real, they pose unacceptable risks to everyone. But in the west we only have strong societal norms around protecting children from themselves.

autoexec

Exactly. The posted link is unreadable, but other stories on the topic (for example: https://www.cnn.com/2025/04/30/tech/ai-companion-chatbots-un...) don't give me any reason to think that they are safe for adults. Adults are just somewhat better able to handle the hazardous material.

whatshisface

Do chatbots that pretend to be anime main characters really pose risks to anyone? Really? You all know that there are real things in the world like toxic waste dumps and human trafficking right?

thot_experiment

They pose risks to people in the same way porn, gambling or drugs pose a risk to people. We should as a society generally err on the side of being permissive with this stuff while providing the tools necessary for people to be safe.