California bill would require bots to disclose that they are bots

wongarsu

A decade ago, when chat bots were a lot less useful, a common piece of etiquette was that it's fine for a bot to pretend to be a human or God or whatever, but if you directly ask it whether it's a bot, it has to confirm that. Basically the bot version of that myth about undercover cops having to identify themselves if asked.

I don't see a downside in requiring public-facing bots to do that

Not sure if that's what the proposal is about, though; the site is currently down

tenpies

It bothers me that they didn't consider that this should be bilateral: a bot must confirm that it is a bot, and a human must confirm it is a human.

I wouldn't want humans pretending to be bots, for a variety of reasons.

rapind

> I wouldn't want humans pretending to be bots, for a variety of reasons.

It would be so embarrassing if your AI Girlfriend / Boyfriend turned out to be real.

arkis22

your AI girlfriend/boyfriend is totally real, it's just that it's a Southeast Asian who works at a pig-butchering scam farm.

https://www.economist.com/leaders/2025/02/06/the-vast-sophis... https://www.economist.com/briefing/2025/02/06/online-scams-m...

comex

A law like that would probably be unconstitutional if it applied broadly to speech in general. Compare United States v. Alvarez, where the Supreme Court held that the First Amendment gives you the right to lie about having received military medals.

It might work in more limited contexts, like commercial speech.

yjftsjthsd-h

> I wouldn't want humans pretending to be bots, for a variety of reasons.

I don't have an opinion yet, but I can't think of a specific reason to object to that (other than a default preference for honesty). Could you give an example or two?

MobiusHorizons

Probably the main risk is people trusting what they think is an automated system to act for a specific purpose. That, or people saying things they think will stay private. I'm not saying it's actually safe to interact this way with bots, but the trust expectations are different enough.

BobbyTables2

I’ve had a number of encounters with ISP tech support where the humans seemed a lot like bots…

pishpash

All support is basically like bots nowadays, that's why they can be replaced by bots so easily.

EFreethought

To paraphrase a line in the Bible: Are the bots here for the benefit of man, or is man here for the benefit of the bots?

Galatians4_16

Freeze Peach says I can be a bot if I want to.

chrisco255

Some humans pull it off very well.

mjbale116

> I don't see a downside in requiring public-facing bots to do that

Your statement attempts to give an impression of a middle ground, but what it actually does is delegate the action to the human - who has limited energy and has to make hundreds of other decisions.

Your statement sounds like what a lobbyist might whisper to a regulator in an attempt to effectively neuter the bill.

People not versed in technology do not - and do not have to - know what an LLM is or what it can do.

These matters need to be resolved at the source, and we must not allow hopeful libertarian technologists to DDoS the whole of society.

nico

Archive links on another comment: https://news.ycombinator.com/item?id=42968477

geor9e

I remember the opposite. The chat bots popular in 2015 were trying to pass the Turing Test and would deny being bots. The chat bots popular in 2025 are pretty good about explaining that they are bots. Of course, that's just generalizing about the popular ones - anyone can make their own do as they please.

scarab92

California continues its trend towards Luddite-ism.

rileymat2

I am not following; proper labeling of products allows consumers to make informed choices.

scarab92

Bots are simply another type of tool used by humans to reduce repetitive unnecessary labor.

Should bread come with disclaimers that harvesters were used to collect the wheat, or that machines were used to mix the dough? Maybe we can add disclosures to books that were made on a printing press rather than hand-typed on a typewriter?

soheil

[flagged]

johnnyanmac

How is it Luddite-ism to not lie? The core issue here isn't even about tech. It's about transparency.

scarab92

Because they aren't lying; they are simply not disclosing whether their actions are the result of an organic neural network or a silicon one.

Society almost never differentiates between goods and services produced directly by human hands, versus with the aid of tools.

lupire

The Luddites were activists defending humanity against the harms of mechanization, yes.

scarab92

Machines have resulted in extraordinary improvements in quality of life for humans.

Luddites should be ignored because they are alarmist and don't care whether their concerns are warranted, much like Californian politicians.

card_zero

Not really, they were protesting against low pay (and low-paid competitors) through the medium of smashing something, which happened to be stocking-spinning machines. They might equally well have kidnapped the boss's budgerigar if that was an effective way to apply pressure.

scarab92

The trend for the past 200 years, and likely for the foreseeable future is for humans to adopt tools to reduce the labour required to achieve an outcome.

This is a terrible suggestion because it ignores the fact that these bots are helping humans do human tasks, and allows for discrimination against useful tools. It’s deaccelerationist, and that’s a bad thing.

It’s akin to requiring manufacturers to disclose whether a product is hand made or whether machines were used to reduce the need for humans to do repetitive manual labour.

Also, keep in mind that it's only the good, well-behaved, useful bots that will honour this. The bots that are used to help humans achieve nefarious goals will simply continue to pretend to be human.

acdha

> It’s deaccelerationist, and that’s a bad thing

This sounds like a religious position, not something which will lead to positive conversation. I would consider whether your personal biases are causing you to interpret this as an attack and miss some perspective: for example, if these are useful tools would it really lead to discrimination? If they’re actually better tools, I’d think “bot” would have a positive connotation just as most people jumped to using the web instead of calling businesses on the phone.

I think a better comparison would be product labeling and disclosure laws, where there are very few examples where reducing the information available to consumers leads to better outcomes. People don’t care about whether something is made by hand or machine anywhere near as much as they care about the quality of the final product. The reason people care about talking to bots is that they’re often used to worsen service and people do not want to waste their time on a bot which cannot help them. This seems entirely fair.

johnnyanmac

>It’s akin to requiring manufacturers to disclose whether a product is hand made or whether machines were used to reduce the need for humans to do repetitive manual labour.

Let's go off this lens... why is this bad?

>The bots that are used to help humans achieve nefarious goals will simply continue to pretend to be human.

Good. Now I can catch them, report them, and CA can fine them. Again, I don't see the downside. I like good, helpful bots. Their being bots doesn't make them less helpful (on the contrary, it's nice knowing they can be available 24/7).

pishpash

But the discrimination angle is interesting. What if a law required everyone to be "transparent" and respond truthfully to "are you [some race]"?

tdb7893

So firstly, the idea that context shouldn't affect people's views on things (like whether a good is handmade or not) just doesn't match up with how people actually interact with and perceive the world. I mean, obviously I'll feel differently about my copy of Calvin and Hobbes I read as a child versus a random copy in a store. I would even go as far as to say that as a human it's impossible to fully separate objects from their perceived context.

For bots specifically, many bots try to pass themselves off as human because the people that created them know that that lie matters. There's room for debate about what the best specific policy is, but the mere fact that bots so often try to pass themselves off as human is their creators admitting that it's important.

boomlinde

I don't understand your angle at all. How would disclosing that it's a bot make a bot less helpful or less of a useful tool, except to deceive for someone else's gain?

coliveira

I think this is a just law for any bot that interacts with humans. It doesn't make sense for a person to treat a bot as another human.

conradev

I feel like “using automation to make manufacturing more efficient” and “using automation to respond to customer support” are very different things

BobbyTables2

Up close they look very different.

A bit farther away both end up being the same — “using automation to make executives insanely rich”.

skylerwiernik

This is clearly undisclosed promotion for vetto.app. alexd127's only other account activity is on this thread [https://news.ycombinator.com/item?id=42901553] for the exact same bill.

wilg

is that against the hn guidelines or something?

soheil

No, but still good to know in this case, especially as the website is broken and I had to refresh a few times.

lupire

novok

how is it better?

writtenAnswer

It looks clunkier, so it is more legit

cebert

I wish this legislation would also apply to AI generated emails, sales outreach, and LinkedIn messages.

advisedwang

I think it does. The wording proposed is:

> It shall be unlawful for any person to use a bot to communicate or interact with another person in California online. A person using a bot shall not be liable under this section if the person discloses that it is a bot. A person using a bot shall disclose that it is a bot if asked or prompted by another person.

(see https://legiscan.com/CA/text/AB410/2025 for definitions and source)

Email, sales outreach, and LinkedIn messages are all communications or interactions.
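
As a rough sketch of what complying with that wording might look like in code (hypothetical helper names and phrasing; the bill prescribes no particular mechanism), a sender gets two outs: disclose up front, or disclose when asked:

  # Illustrative sketch only; the bill doesn't mandate any particular
  # wording or interface. Names here are invented.
  DISCLOSURE = "Note: this message was generated by an automated bot."

  BOT_QUESTIONS = ("are you a bot", "are you human", "is this automated")

  def send_bot_message(incoming: str, generated: str) -> str:
      # "If asked or prompted," the bot must say it's a bot.
      if any(q in incoming.lower() for q in BOT_QUESTIONS):
          return DISCLOSURE
      # Proactive disclosure removes liability under the quoted wording.
      return DISCLOSURE + "\n" + generated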

hedora

Also, political SMS messages and the messages you get from some other random number acknowledging the "STOP" message you just sent.

(Especially if it were in a machine-readable form.)

huevosabio

For political stuff you need a human to press send, so it's a semi-automated system.

I've volunteered before, and at least in CA, you basically have a GUI that prepopulates messages and guides you through each number one by one.

It's really weird.

johnnyanmac

I think it's proper, albeit not entirely optimal. You can use whatever tools to develop content; there needs to be some human making the final handoff and decisions when it comes to actually sending it out.

It's suboptimal because I bet those volunteering aren't necessarily doing that QA, though.

alwa

Interesting! Does the same interface expose you to people’s responses, if any? That is, if I receive one of your campaign texts, am I indeed “texting with a person” both directions? Or does that route to a call center somewhere?

I note in TFA that it:

> expands current law, which only mandates disclosure when bots aim to influence commercial transactions or voting behavior.

I wonder if your campaign does it that way because of a rule that applies to a particular type of actor, or just because voters viscerally hate robo-whatever.

The best-intentioned of regulations…

zoky

I don’t think those are actually bots, unfortunately. There’s a law preventing automated text messages, but there’s a loophole if the message is sent by an actual human being. So campaigns and PACs just get teams of volunteers to send out messages, likely with some software that lets them send messages to a list of numbers with a single click.

johnnyanmac

surprisingly, I've been registered for 15+ years and I don't get too much political spam via text. Tons through mail, but not my phone (I never changed my phone number either).

Now regular spam... I'm pretty sure I got 6 calls today alone. Help.

AznHisoka

I already assume 99.9% of these are all generated by bots.

seattle_spring

LinkedIn would be in shambles if this became law.

romanovcode

It would become a much better website, to be honest, if the only thing you could post were whether you are recruiting or searching for a job. The slop is just ruining the experience and diluting the purpose of the site.

romanovcode

Have you been to LinkedIn recently? It is the "dead internet theory" in practice: every single post, every single comment. I do not understand what the point of it is.

I miss when it was about job/candidate search and that's it.

newsclues

And social media.

cuteboy19

it should not apply to non-interacting “bots”

rappatic

The requirement doesn't kick in until 10 million monthly US users. I don't see why this shouldn't apply to smaller businesses.

advisedwang

Incorrect. The 10M requirement is part of the definition of an "online platform" [1], which is mentioned in the existing statute only to say that platforms do NOT have an obligation [2], and is not mentioned at all in the proposed law [3] other than a formatting fix.

[1] https://law.justia.com/codes/california/code-bpc/division-7/...

[2] https://law.justia.com/codes/california/code-bpc/division-7/...

[3] https://legiscan.com/CA/text/AB410/2025

godelski

My understanding is that the requirement is for __platforms__ with 10m+ monthly users. That is, like Twitter but (probably) not Hacker News. And really it is more that these platforms need to provide an interface in which bots can identify themselves, and make a good-faith effort to identify bots.

  > Online platforms with over 10 million monthly U.S. visitors would need to ensure bot operators on their services comply with these expanded disclosure rules.
So even if the bot is from a small business, it still must identify itself as long as it is on a platform like Twitter, Facebook, Reddit, etc. This feels reasonable, even if we disagree on a threshold. It doesn't make sense to enforce this for small niche forums. That would put an undue burden on small players and similarly be a waste of government resources, especially because any potential damage is, by definition, smaller.
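
As a sketch of what that platform-side interface could amount to (invented field names; neither the bill nor the article specifies a mechanism), it might be as small as a self-identification flag that the platform surfaces on every post:

  from dataclasses import dataclass

  # Hypothetical platform-side sketch; field names are made up.
  @dataclass
  class Account:
      handle: str
      declared_bot: bool = False  # set when the operator self-identifies

  def render_post(author: Account, text: str) -> str:
      label = "[automated] " if author.declared_bot else ""
      return label + author.handle + ": " + text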

Big players love regulation because it's gatekeeping and they can always step over the gate. But the gate keeps out competition. More specifically, it squashes competitors before they can even become good competitors. So I think it definitely is a good idea to regulate in this fashion. Remember that the playing field is always unbalanced: big players can win with worse products because they can leverage their weight.

somenameforme

There's a practical reason, with two sides to it. Most small companies simply won't know this rule even exists, if it passes. And as various other jurisdictions pass various other laws relating to AI, this will gradually turn into hundreds of laws, very possibly incompatible, scattered across countless jurisdictions and regularly changing, with all sorts of opaque precedent defining what exactly they mean. You'll literally need a regulatory compliance department to keep up to date.

And such departments, staffed with lawyers, tend to be expensive. Have these laws affect small business and you greatly imperil the ability of small companies to even exist, which is one reason big companies in certain industries tend to actively lobby for regulations - a pretext of 'safety' with a reality of anticompetitive behavior. But by the time a company has 10 million regular users, it should be able to comfortably fund a compliance department.

diebeforei485

Small companies don't have to implement the ability to stop marketing or political texts if the customer replies STOP. Twilio, Amazon SNS, and other companies further down the stack do it automatically.

I assume foundation models will include it in all text they emit somehow.

Just like Zoom tells everyone "recording in progress" as soon as you press the record button, to ensure compliance. Or indeed Apple's newish call-recording feature.
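
For illustration, the stack-level opt-out handling looks roughly like this (a generic sketch; the keyword list follows common industry conventions, but this is not Twilio's or SNS's actual API):

  # Generic sketch of keyword-based opt-out handling, the kind of thing
  # messaging providers apply automatically downstream. Illustrative only.
  OPT_OUT = {"STOP", "STOPALL", "UNSUBSCRIBE", "CANCEL", "END", "QUIT"}

  opted_out: set[str] = set()

  def handle_inbound(sender: str, body: str) -> str | None:
      # An opt-out keyword unsubscribes the sender and triggers the
      # automated acknowledgment mentioned upthread.
      if body.strip().upper() in OPT_OUT:
          opted_out.add(sender)
          return "You have been unsubscribed. No further messages will be sent."
      return None

  def may_send_marketing(recipient: str) -> bool:
      return recipient not in opted_out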

saucymew

Contrarianly, startups should have a little maneuverability to be naughty. Any slight edge against incumbents is directionally sound policy, imho.

cogman10

I think there's an easy middle ground: 10M is huge; 1k would be much more reasonable. That gives startups more than enough runway to be naughty while also making sure they fix things up before becoming a problem.

nine_k

This maneuverability already exists; see the operations of Uber, WeWork, OpenAI, etc.

nine_k

Run 10 subsidiaries via a chain of shell companies, each carefully staying under, say, 8M monthly users, all relaying approximately the same messages, both by pure coincidence, and by admittedly blatant imitation of each other!

giancarlostoro

For the same reason GDPR should not have applied to smaller businesses: lots of people who had otherwise perfectly fine small sites that were useful and reasonably secure could not afford the overhead due to various factors - being self-bootstrapped, too small a budget, hobbyist projects, etc. - the things that always make the internet great. The fines are in the millions at a MINIMUM; it's ridiculous.

After GDPR became law in the EU, we saw here on HN numerous announcements of smaller sites / companies just shutting their doors. Meanwhile, bigger sites and companies can afford all the red tape, and they win all these smaller companies' customers by default.

jabroni_salad

It's a horse-trading thing. You are less likely to get your bill passed if it will impact small businesses. Think less about SV startups who know what they're doing and more about some indie barber who buys an off-the-shelf scheduling assistant -- should they have to bury themselves in legal code first?

tzury

Industry will get there pretty soon regardless of this bill or another, since there is a paradigm shift underway.

The conversation is no longer about scraping bots versus genuine human visitors. Today’s reality involves legitimate users leveraging AI agents to accomplish tasks—like scanning e-commerce sites for the best deals, auto-checking out, or running sophisticated data queries. Traditional red flags (such as numerous rapid requests or odd navigation flows) can easily represent honest customer behavior once enhanced by an AI assistant.

See what I posted a couple of weeks ago:

https://blog.tarab.ai/p/bot-management-reimagined-in-the

kijin

I think you still care too much about the visitor's identity and agency.

Step back a bit and ask why anyone ever tried to throttle or block bots in the first place. Most of the time, it's because they waste the service operator's resources.

From a service operator's point of view, there is no need to distinguish an AI agent that rapidly requests 1000 pages to find the best deal from a dumb bot that scrapes the same 1000 pages for any other purpose. Even a human with fast fingers can open hundreds of tabs in a minute, with the same impact on your AWS bill. You have every right to kick them all out if you don't want their business. Whether they carry a token of trust is as irrelevant as whether they are human. The problem has always been about behavior, not agency.
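
To put that concretely, behavior-based throttling can ignore identity entirely. A per-client token bucket (an illustrative sketch, not any particular product) treats the fast-fingered human, the AI agent, and the scraper identically:

  import time

  # Per-client token bucket: request rate is what matters, not whether
  # the client is a human, an AI agent, or a scraper. Sketch only.
  class TokenBucket:
      def __init__(self, rate: float = 5.0, burst: float = 20.0):
          self.rate = rate          # tokens refilled per second
          self.burst = burst        # maximum bucket size
          self.tokens = burst
          self.last = time.monotonic()

      def allow(self) -> bool:
          now = time.monotonic()
          self.tokens = min(self.burst,
                            self.tokens + (now - self.last) * self.rate)
          self.last = now
          if self.tokens >= 1.0:
              self.tokens -= 1.0
              return True
          return False              # throttled, whoever you are

  buckets: dict[str, TokenBucket] = {}

  def should_serve(client_key: str) -> bool:
      # client_key might be an IP or API key; its "humanity" is irrelevant.
      return buckets.setdefault(client_key, TokenBucket()).allow()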

evil-olive

since the Veeto website seems to be struggling, here's the official CA legislature page for the bill: https://leginfo.legislature.ca.gov/faces/billTextClient.xhtm...

seems fairly narrowly written - it looks like it's removing the requirement that bot usage is illegal only if there's "intent to mislead". Intent seems like it'd be very difficult to prove, and would result in the law not really being enforced. Instead there's a much more bright-line rule: it's illegal unless you disclose that it's a bot, and as long as you do that, you're fine.

once I was able to load the Veeto page, I noticed there's a "chat" tab with "Ask me anything about this bill! I'll use the bill's text to help answer your questions." - so somewhat ironically, it seems like the bill would directly affect the Veeto website as well, because they're using a chatbot of some kind.

card_zero

I enjoyed all the corrections of "Internet Web site" to "internet website".

nico

Interesting. I’m afraid this won’t really go anywhere, but it’s a good conversation to have.

On one hand, judging by the comments, there’s quite a bit of interest in disclosure.

On the other hand, corporations and big advertisers (spammers?) might not really want it. Or is there a positive aspect to disclosure for them?

johnnyanmac

>I’m afraid this won’t really go anywhere

California tends to be pretty good (well, "good" relative to the federal government and other states) at getting consumer-friendly bills passed. They don't always work, but the intent of many bills feels focused on the people.

>On the other hand, corporations and big advertisers (spammers?) might not really want it.

Of course they don't. I would love to one day see how many bots there truly are on Reddit, and how close to the Dead Internet we are.

I don't think HN hits the 10m threshold to require disclosure. But I also doubt many bots are on here.

soheil

As bots get smarter we need to give them more access, not less. People have been used as useful idiots and puppets for far too long; I don't see why we should make an exception for bots.

somenameforme

Because of scale. Fool one person and you're a conman, fool a million and you're a politician. But with software anybody can jump to arbitrarily high scales, limited only by money.

Spivak

If having to disclose to your users/customers that they're interacting with a bot makes them stop interacting then that sucks for you. I work in this space and we proudly advertise when you're talking to a bot and our users actually choose it over the option to connect to a human.

Our staff do better work, but the bot is instant. It seems people would rather go back and forth a few times and be in the driver's seat than wait on a person.

doctorpangloss

Should PUBG mobile players be told they are winning against bots?

Should psychics tell you they cannot really speak for the dead?

spankalee

> Should psychics tell you they cannot really speak for the dead?

Yes?

II2II

Is there much of a point in telling psychics that they must tell people they cannot really speak for the dead? Outside of a few outliers, e.g. those who admit that it is for entertainment or those who have psychological problems, those who practice it know it is a scam.

I'm not sure whether those selling AI are in the same boat. On the one hand, the technology does produce results. On the other hand, the product clearly isn't what people think of as intelligence.

tbrownaw

That really does sound like a useful rule.

/s

johnnyanmac

>Should PUBG mobile players be told they are winning against bots?

They don't already? Games tend to be one of the better platforms when it comes to disclosing an AI opponent vs. a human one.

>Should psychics tell you they cannot really speak for the dead

We have to prove they are robots first.

bhaney

Yes and yes

aithrowawaycomm

I understand being against this law on practicality / constitutionality grounds. It seems to me that the "converse" law would be more useful and legally appropriate: forbid people from making intentionally deceptive bots that claim to be human or have human-like emotional/social capabilities.

scarab92

Let's ban everything by default even without demonstrable harms.

So much easier than weighing up the pros and cons of each decision.

deepsun

So Californian/US companies would have to comply, and other companies/states would get an advantage.

amatuer_sodapop

> The bill updates the legal definition of "bot" to encompass accounts operated by generative artificial intelligence - including systems that create synthetic images, videos, audio, and text.

I'm not sure how enforceable that is tbh.

ChrisArchitect

Aside: what is this Veeto site all about? How long has it been around?