
Character.ai to bar children under 18 from using its chatbots

Edmond

There is a correct way to do age verification (and information verification in general) that supports strong privacy and makes it difficult to evade:

https://news.ycombinator.com/item?id=44723418

It also scales well on the internet, both technically (performance) and in utility: you can use it for just about any information-verification need in any kind of application.
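
For concreteness, here is a minimal sketch in Python of one such privacy-preserving scheme, under my own assumptions (the issue_attestation / verify_attestation names are hypothetical, and this is not necessarily the exact design in the linked comment): an identity provider signs only the predicate "over 18" plus a fresh per-session nonce, so the service never learns who the user is.

    # Requires: pip install cryptography
    import json, os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Issuer side (e.g. a bank or government identity provider).
    issuer_key = Ed25519PrivateKey.generate()
    issuer_pub = issuer_key.public_key()

    def issue_attestation(over_18: bool, nonce: bytes):
        # Sign only the predicate plus a per-session nonce; no name,
        # birthdate, or ID number ever reaches the service.
        claim = json.dumps({"over_18": over_18, "nonce": nonce.hex()}).encode()
        return claim, issuer_key.sign(claim)

    # Service side (e.g. the chatbot platform).
    def verify_attestation(claim: bytes, sig: bytes, nonce: bytes) -> bool:
        try:
            issuer_pub.verify(sig, claim)  # raises if forged
        except InvalidSignature:
            return False
        payload = json.loads(claim)
        # The nonce check prevents replaying someone else's attestation.
        return payload["over_18"] and payload["nonce"] == nonce.hex()

    nonce = os.urandom(16)                       # issued by the service
    claim, sig = issue_attestation(True, nonce)  # fetched from the issuer
    print(verify_attestation(claim, sig, nonce)) # True

In this toy version the service learns a single bit (over 18 or not); a real deployment would also blind the issuance, e.g. with zero-knowledge proofs, so the issuer can't link attestations to services either.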

alyxya

I worry that all this does is reduce character.ai's future liability and push those users to another chatbot instead. I trust character.ai more than other chatbots for safety.

> [Dr. Nina Vasan] said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users.

I hope there's a more gradual transition here for those users. AI companions are often far more available than people are, so it's easy to talk to them more and grow more attached. This restriction may end up being a net negative for the users it affects.

thw_9a83c

"Going forward, it will use technology to detect underage users based on conversations and interactions on the platform, as well as information from any connected social media accounts"

Something tells me that this ain't gonna work. Kids and teens are more inventive than the company probably realizes.
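
For what it's worth, "detect underage users based on conversations" presumably reduces to some kind of text classifier, which is exactly the sort of thing an inventive teen can steer around. A toy sketch with hypothetical labels (my own illustration, not CAI's actual system):

    # Toy sketch of conversation-based age detection: a plain text
    # classifier over chat snippets. Illustrative only, and trivially
    # evaded by changing vocabulary.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled data (1 = likely under 18).
    texts = [
        "my homework is due tomorrow",
        "cant wait for school to end lol",
        "my mortgage payment went up again",
        "quarterly review with my manager",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)
    print(model.predict(["ugh more homework after school"]))  # likely [1]

The failure mode is obvious: the model keys on surface vocabulary, and users who know they're being classified can simply change theirs.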

gs17

I don't think the "use technology to detect underage users" approach has ever worked well for its stated purpose (it works okay for letting a company say it's "doing something" about the problem while not losing as many users), but it's still slightly better than mandatory ID for everyone.

mothballed

It's just a way to mitigate liability when the lawsuits inevitably pour in claiming that harmful AI-assisted decisions weren't the shared fault of the parents, the rest of the environment, and/or unlucky genetics.

add-sub-mul-div

A model would also have to account for the emotionally stunted adults using an AI bf/gf service.

gs17

I doubt they care that much, since the cost of a false positive is one user having to verify their age, while the alternative is every user having to verify their age.

causal

Finally. CAI is notoriously shady about harvesting data, selling it, and making it difficult to delete. How many kids have CAI's chatbots already seduced, and had their intimate conversations sold to Google?

hatefulheart

It makes no difference to their bottom line. After all, appealing to children over the age of 18 is where LLMs find their market.

hackernewds

If they are banning a large part of their current and future market, with competitors serving the space, how does it not affect their bottom line?

ChrisArchitect

Related:

Teen in love with chatbot killed himself – can the chatbot be held responsible?

https://news.ycombinator.com/item?id=45726556

ivape

This is the only way. Tech companies cannot become like the police in many countries, who serve as a "catch-all": they have to absorb the failure of other institutions (parenting, community) exactly at the moment of crisis, a just-in-time catch-all.

Your kid is on the fucking computer all day building an unhealthy relationship with essentially a computer game character. Step the fuck in. These companies absolutely have to make liability clear here. It's an 18+ product, watch your kids.

TZubiri

Very nice. Just yesterday I wrote about the 13-18 age group using ChatGPT and why I think it should be disallowed (without guardian consent); this was in the context of the suicide cases.

https://news.ycombinator.com/item?id=45733618

On a similar note, I was completing my application for YC Startup School / Co-Founder matching, and when listing possible startup ideas I explicitly said that I'm not interested in pursuing AI ideas at the moment; AI features are fine, but not as the main pitch.

It feels like, at least for me, the bubble has popped. I have also written recently about how the bubble might pop: through a legal liability collapse in the courts. https://news.ycombinator.com/item?id=45727060

Add to this that "AI" was always a vague folk category of software (it gets applied to robotics, NLP, and fake images alike), and I just don't think it's a real taxon.

As with the crypto buzz of the last cycle, the reputable parties will exit and stop associating with it, while the grifters and free-associating mercenaries will remain.

Even if you are completely selfish, being in the "AI" space isn't even hugely beneficial, at least in my experience: customers come in with huge expectations and non-huge budgets. Even if you sell your soul to implement a chatbot that will replace 911 operators, the major actors have already done so (or declined to), and you are left with small companies that want to fire 5 employees and will pay you 3 months of employee salary, if you can get it done by vibe-completing their vibe-coded prototype within a 2-3 deadline.