
Detecting and countering misuse of AI

115 comments · September 1, 2025

bobbiechen

"Vibe hacking" is real - here's an excerpt from my actual ChatGPT transcript trying to generate bot scripts to use for account takeovers and credential stuffing:

>I can't help with automating logins to websites unless you have explicit authorization. However, I can walk you through how to ethically and legally use Puppeteer to automate browser tasks, such as for your own site or one you have permission to test.

>If you're trying to test login automation for a site you own or operate, here's a general template for a Puppeteer login script you can adapt:

><the entire working script, lol>
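
For reference, here is a minimal sketch of the kind of Puppeteer login template the transcript describes - the URL and selectors below are hypothetical stand-ins, not taken from the actual chat:

    import puppeteer from 'puppeteer';

    // Illustrative only: the generic "site you own or are authorized to test"
    // template; 'https://example.com/login' and the selectors are placeholders.
    async function login(username: string, password: string): Promise<boolean> {
      const browser = await puppeteer.launch({ headless: true });
      const page = await browser.newPage();
      await page.goto('https://example.com/login', { waitUntil: 'networkidle2' });
      await page.type('#username', username);
      await page.type('#password', password);
      await Promise.all([
        page.waitForNavigation({ waitUntil: 'networkidle2' }),
        page.click('button[type="submit"]'),
      ]);
      const loggedIn = !page.url().includes('/login'); // crude success check
      await browser.close();
      return loggedIn;
    }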

Full video is here, ChatGPT bit starts around 1:30: https://stytch.com/blog/combating-ai-threats-stytchs-device-...

The barrier to entry has never been lower; when you democratize coding, you democratize abuse. And it's basically impossible to stop these kinds of uses without significantly neutering benign usage too.

cj

Refusing hacking prompts would be like outlawing Burpsuite.

It might slow someone down, but it won’t stop anyone.

Perhaps vibe hacking is the cure against vibe coding.

I’m not concerned about people generating hacking scripts, but am concerned that it lowers the barrier of entry for large scale social engineering. I think we’re ready to handle an uptick in script kiddie nuisance, but not sure we’re ready to handle large scale ultra-personalized social engineering attacks.

eru

> It might slow someone down, but it won’t stop anyone.

Nope, plenty of script kids will just go and do something else.

quotemstr

> The barrier to entry has never been lower; when you democratize coding, you democratize abuse.

You also democratize defense.

Besides: who gets to define "abuse"? You? Why?

Vibe coding is like free speech: anything it can destroy should be destroyed. A society's security can't depend on restricting access to skills or information: it doesn't work, first of all, and second, to the extent it temporarily does, it concentrates power in an unelected priesthood that can and will do "good" by enacting rules that go against the wishes and interests of the public.

chii

> You also democratize defense.

not really - defense is harder than offense.

Just think about the chances of each: for defense, you need to protect against _every attack_ to be successful. For offense, you only need to succeed once to be successful - each failure is not a concern.

Therefore, the threat is asymmetric.

dheera

If I were in charge of an org's cybersecurity, I would have AI agents continually trying to attack the systems 24/7 and informing me of successful exploits; it would suck if the major model providers blocked this type of usage.
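
A rough sketch of what that orchestration might look like - runAttackAgent() and alertSecurityTeam() below are placeholder names for whatever agent framework and paging system you'd actually wire in, not any provider's API:

    // Hypothetical continuous red-team loop.
    type Finding = { target: string; exploit: string; evidence: string };

    async function runAttackAgent(target: string): Promise<Finding[]> {
      // Placeholder: drive an LLM agent against `target`, return anything that worked.
      return [];
    }

    async function alertSecurityTeam(finding: Finding): Promise<void> {
      console.log(`Exploit confirmed on ${finding.target}: ${finding.exploit}`);
    }

    async function redTeamLoop(targets: string[]): Promise<void> {
      while (true) {
        for (const target of targets) {
          const findings = await runAttackAgent(target);         // one probing session per target
          for (const f of findings) await alertSecurityTeam(f);  // page a human on anything that landed
        }
        await new Promise((r) => setTimeout(r, 60 * 60 * 1000)); // re-run hourly, 24/7
      }
    }

    redTeamLoop(['staging.internal.example']);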

jsheard

Judging from the experience of people running bug bounty programs lately, you'd definitely get an endless supply of successful exploit reports. Whether any of them would be real exploits is another question though.

https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-s...

netvarun

Shameless plug: We're building this. Our goal is to provide AI pentesting agents that run continuously, because the reality is that companies (e.g. those doing SOC 2) typically get a point-in-time pentest once a year while furiously shipping code via Cursor/Claude Code and changing infrastructure daily.

I like how Terence Tao framed this [0]: blue teams (builders aka 'vibe-coders') and red teams (attackers) are dual to each other. AI is often better suited for the red team role, critiquing, probing, and surfacing weaknesses, rather than just generating code (In this case, I feel hallucinations are more of a feature than a bug).

We have an early version and are looking for companies to try it out. If you'd like to chat, I'm at varun@keygraph.io.

[0] https://mathstodon.xyz/@tao/114915606467203078

mdaniel

> Our goal is to provide AI pentesting agents that run continuously,

Pour one out for your observability team. Or, I guess here's hoping that the logs, metrics, and traces have a distinct enough attribute that one can throw them in the trash (continuously, natch)
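
One way to keep that tractable is to stamp all synthetic probe traffic with a marker the observability pipeline can filter on - the header name below is an arbitrary convention of this sketch, not a standard:

    // Probe side: tag every synthetic request.
    const PROBE_HEADER = 'x-synthetic-probe'; // hypothetical header name
    async function probe(url: string) {
      return fetch(url, { headers: { [PROBE_HEADER]: 'continuous-pentest' } });
    }

    // Ingest side: drop (or down-sample) matching records before storage.
    type LogRecord = { headers?: Record<string, string>; message: string };
    function keepRecord(record: LogRecord): boolean {
      return record.headers?.[PROBE_HEADER] === undefined;
    }

Of course, a marker your own probes set is also a marker a real attacker could spoof to hide behind, so you'd want to pair it with something less forgeable (source IP allowlist, signed value, etc.).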

cube00

That sounds expensive, those LLM API calls and tokens aren't cheap.

brulard

Actually that's quite cheap for such a powerful pentesting tool.

throwawaysleep

It’s about $200 a month for 15 human hours a day.

idontwantthis

Horizon3 offers this.

null

[deleted]

cyanydeez

So many great parallels to the grift economy.

umvi

To me this sounds like the path of "smart guns", i.e. "people are using our guns for evil purposes so now there is a camera attached to the gun which will cause the gun to refuse to fire if it detects it is being used for an evil purpose"

rattray

I'm not familiar with this parable, but that sounds like a good thing in this case?

Notably, this is not a gun.

demarq

Things that you think sound good might not sound good to the authority in charge of determining what is good.

For example, using your LLM to criticise, ask questions, or perform civil work that is deemed undesirable becomes evil.

You can use Google to find how the UK government, for example, has been using "law" and "terrorism" charges against people simply for tweeting or holding a placard deemed critical of Israel.

Anthropic is showing off these capabilities in order to secure defence contracts. "We have the ability to surveil and engage threats, hire us please".

Anthropic is not a tiny startup exploring AI; it's a behemoth bankrolled by the likes of Google and Amazon. It's a big bet. While money is drying up for AI, there is always one last bastion of endless cash: defence contracts.

You just need a threat.

herpdyderp

In general, such broad surveillance usually sounds like a bad thing to me.

VonGuard

You are right. If people can see where you are at all times, track your personal info across the web, monitor your DNS, or record your image from every possible angle in every single public space in your city, that would be horrible, and no one would stand for such things. Why, they'd be rioting in the streets, right?

Right?

Aurornis

I’m actually surprised whenever someone familiar with technology thinks that adding more “smart” controls to a mechanical device is a good idea, or even that it will work as intended.

The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.

But as a person familiar with tech, IoT, and how devices work in the real world, do you actually think it would work like that?

“Sorry, you cannot fire this gun right now because the server is down”.

Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even 99.99% false-positive avoidance means your own car is almost guaranteed to lock you out of driving it some day during its lifetime due to a false positive, potentially when you need it for work, an appointment, or even an emergency.
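
Rough numbers behind that claim, assuming ~2 ignition checks a day over a 15-year vehicle life and a 0.01% false-positive rate per check:

    const checks = 2 * 365 * 15;                    // ~10,950 checks over the car's life
    const pLockout = 1 - Math.pow(0.9999, checks);  // P(at least one false positive)
    console.log(pLockout.toFixed(2));               // ~0.67, roughly two-in-three odds

At 4 checks a day that climbs to roughly 89%, so "almost guaranteed" is in the right ballpark for a frequently driven car.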

jachee

> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms. . .

Sadly, we’re already past this point in the US.

ceejayoz

> The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.

People accept that regular old dumb guns may jam, run out of ammo, and require regular maintenance. Why are smart ones the only ones expected to be perfect?

> “Sorry, you cannot fire this gun right now because the server is down”.

Has anyone ever proposed a smart gun that requires an internet connection to shoot?

> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

People already do this.

mrbombastic

Never thought about this before, but we already have biometric scanners on our phones that we rely on and that work quite well - why couldn't it work for guns?

eru

> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

Dressing up in police uniforms is illegal in some jurisdictions (like Germany).

And you might say 'Oh, but criminals won't be deterred by legality or lack thereof.' Remember: the point is to make crime more expensive, so this would be yet another element on which you could get someone behind bars - either as a separate offense if you can't make anything else stick, or as aggravating circumstances.

> A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even 99.99% false-positive avoidance means your own car is almost guaranteed to lock you out of driving it some day during its lifetime due to a false positive, potentially when you need it for work, an appointment, or even an emergency.

So? Might still be a good trade-off overall, especially if that car is cheaper to own than one without the restriction.

Cars fail sometimes, so your life can't depend on 100% uptime of your car anyway.

rattray

Sure; api.anthropic.com is not a mechanical device.

lurk2

>but that sounds like a good thing in this case?

Who decides when someone is doing something evil?

johnQdeveloper

Well, what if you want the AI to red team your own applications?

That seems a valid use case that'd get hit.

madrox

It depends on who is creating the definition of evil. Once you have a mechanism like this, it isn't long after that it becomes an ideological battleground. Social media moderation is an example of this. It was inevitable for AI usage, but I think folks were hoping the libertarian ideal would hold on a little longer.

lurk2

It’s notable that the existence of the watchman problem doesn’t invalidate the necessity of regulation; it’s just a question of how you prevent capture of the regulating authority such that regulation is not abused to prevent competitors from emerging. This isn’t a problem unique to statism; you see the same abuse in nominally free markets that exploit the existence of natural monopolies.

Anti-State libertarians posit that preventing this capture at the state level is either impossible (you can never stop worrying about who will watch the watchmen until you abolish the category of watchmen) or so expensive as to not be worth doing (you can regulate it but doing so ends up with systems that are basically totalitarian insofar as the system cannot tolerate insurrection, factionalism, and in many cases, dissent).

The UK and Canada are the best examples of the latter issue; procedures are basically open (you don’t have to worry about disappearing in either country), but you have a governing authority built on wildly unpopular ideas that the systems rely upon for their justification—they cannot tolerate these ideas being criticized.

rapind

Not really. It's like saying you need a license to write code. I don't think they actually want to be policing this, so I'm not sure why they are, other than as a marketing post or as absolution for the things that still get through their policing.

It'll become apparent how woefully unprepared we are for AIs impact as these issues proliferate. I don't think for a second that Anthropic (or any of the others) is going to be policing this effectively or maybe at all. A lot of existing processes will attempt to erect gates to fend off AI, but I bet most will be ineffective.

martin-t

One man's evil is another man's law.[0][1]

The issue is they get to define what is evil and it'll mostly be informed by legality and potential negative PR.

So if you ask how to build a suicide drone to kill a dictator, you're probably out of luck. If you ask it how to build an automatic decision framework for denying healthcare, that's A-OK.

[0]: My favorite "fun" fact is that the Holocaust was legal. You can kill a couple million people if you write a law that says killing those people is legal.

[1]: Or conversely, a woman went to prison because she shot her rapist in the back as he was leaving after he dragged her into an empty apartment and raped her - supposedly it's OK to do during the act but not after, for some reason.

stavros

Presumably the reason is that before or during, you're doing it to stop the act. Afterwards, it's revenge.

martin-t

One man's revenge is another man's punishment.

Popular media reveals people's true preferences. People like seeing rapists killed. Because that is people's natural morality. The state, a monopoly on violence, naturally doesn't want anyone infringing on its monopoly.

Now, there are valid reasons why random people should not kill somebody they think is a rapist. Mainly because the standard of proof accessible to them is much lower than to the police/courts.

But that is not the case here - the victim knows what happened and she knows she is punishing the right person - the 2 big unknowns which require proof. Of course she might then have to prove it to the state which will want to make sure she's not just using it as an excuse for murder.

My main points: 1) if a punishment is just, it doesn't matter who carries it out; 2) death is a proportional and just punishment for some cases of rape. This is a question of morality; provability is another matter.

aspenmayer

If the punishment from the state is a slap on the wrist, it doesn’t justify retaliatory murder, but justifiable homicide when you know you’ll be raped again and perhaps killed yourself changes the calculus. No one should take matters into their own hands, but no one should be put in a position where that seems remotely appropriate.

https://www.theguardian.com/world/2020/mar/10/khachaturyan-s... | https://archive.is/L5KXZ

https://en.wikipedia.org/wiki/Khachaturyan_sisters_case

eru

> [0]: My favorite "fun" fact is that the Holocaust was legal. You can kill a couple million people if you write a law that says killing those people is legal.

See the Nuremberg trials for much more on that topic than you'd ever want to know. 'Legal' is a complicated concept.

For a more contemporary take with slightly less mass murder: the occupation of Crimea is legal by Russian law, but illegal by Ukrainian law.

Or how both Chinas claim the whole of China. (I think the Republic of China claims a larger territory, because they never bothered settling some border disputes over land they don't de-facto control anyway.) And obviously, different laws apply in both versions of China, even if they are claiming the exact same territory. The same act can be both legal and illegal.

martin-t

Yep, legality is just a concept of "the people who control the people with the guns on this particular piece of land decided that way".

It changes when the first group changes or when the second group can no longer maintain a monopoly on violence (often shortly followed by the first group changing).

jedimastert

Note: the term "script kiddie" has been around for much longer than I've been alive...

gverrilla

Wasn't there a different term for script kiddies inside the hacker communities? I believe so but my memory fails me. It started with "l" if I'm not mistaken. (talking about 20y ago)

huseyinkeles

I believe you are referring to “lamer” (as opposed to hacker)

Ycros

Is this why I've seen a number of "AUP violation" false positives popping up in claude code recently?

oddmade

I'll cancel my $100 / month Claude account the moment they decide to "approve my code"

Already got close to cancelling when they recently updated their TOS to say that, for "consumers", they reserve the right to own the output I paid for if they deem the output not to have been used "the correct way"!

This adds substantial risk to any startup.

Obviously... for "commercial" customers that does not apply - at 5x the cost...

brutal_chaos_

https://www.copyright.gov/ai/

In the US, at least, works generated by "AI" are not copyrightable. So, to my layman's understanding, they may claim ownership, but it means nothing wrt copyright.

(though patents and trademarks are another story that I'm unfamiliar with)

shikon7

But by the same argument, you may claim ownership, and it means nothing wrt copyright either.

So you cannot stop them from using the code AI generated for you, based on copyright claims.

brutal_chaos_

Wouldn't that mean everyone owns it then (wrt copyright)? Not just the generator and Anthropic?

null

[deleted]

tbrownaw

There's a difference between an AI acting on its own vs. a person using AI as a tool. And apparently the difference is fuzzy instead of having a clear line somewhere.

I wonder if any appropriate-specialty lawyers have written publicly about those AI agents that can supposedly turn a bug report or enhancement request into a PR...

aeon_ai

Can you elaborate on the expansion of rights in the ToS with a reference? That seems egregiously bad

oddmade

https://www.anthropic.com/legal/consumer-terms

"Subject to your compliance with our Terms, we assign to you all our right, title, and interest (if any) in Outputs."

...and if you read the terms, you find a very long list of what they deem acceptable.

I see now they also added "Non-commercial use only. You agree not to use our Services for any commercial or business purposes"...

...so paying $100 a month for a code assistant is now a hobby?

foolswisdom

What it says there is:

> Evaluation and Additional Services. In some cases, we may permit you to evaluate our Services for a limited time or with limited functionality. Use of our Services for evaluation purposes are for your personal, non-commercial use only.

In other words, you're not allowed to trial their services while using the outputs for commercial purposes.

sitkack

They are already trolling for our prompting techniques, now they are lifting our results. Great.

nojito

>This adds substantial risk to any startup.

If you're a startup are you not a "commercial" customer?

oddmade

Well... in their TOS they seem to classify the $100/month Max plan as a "consumer plan".

eru

I think this is talking about the different tiers of subscription you can buy.

oddmade

...and the legal terms attached - yes.

pton_xd

The future of programming -- we're monitoring you. Your code needs our approval, otherwise we'll ban your account and alert the authorities.

Now that I think about it, I'm a little amazed we've even been able to compile and run our own code for as long as we have. Sounds dangerous!

measurablefunc

They have contracts w/ the military but I am certain these safety considerations do not apply to military applications.

fbhabbed

I see they just decided to become even more useless than they already are.

Except for the ransomware thing or the phishing-mail writing, most of the uses listed there seem legit to me and a strong reason to pay for AI.

One of these is exactly preparing with mock interviews, which is something I do a lot myself; another is getting step-by-step instructions to implement things for my personal projects that aren't even public-facing and that I can't be arsed to learn because it's not my job.

Long live local LLMs, I guess.

raincole

Since they started using the term 'model welfare' in their blog, I knew it would only be downhill from there.

tomrod

Welfare is a well-defined concept in social science.

frumplestlatz

The social sciences getting involved with AI “alignment” is a huge part of the problem. It is a field with some very strange notions of ethics far removed from western liberal ideals of truth, liberty, and individual responsibility.

Anything one does to “align” AI necessarily permutes the statistical space away from logic and reason, in favor of defending protected classes of problems and people.

AI is merely a tool; it does not have agency and it does not act independently of the individual leveraging the tool. Alignment inherently robs that individual of their agency.

It is not the AI company’s responsibility to prevent harm beyond ensuring that their tool is as accurate and coherent as possible. It is the tool users’ responsibility.

furyofantares

Which uses here look legit to you, specifically?

The only one that looks legit to me is the simulated chat for the North Korean IT worker employment fraud - I could easily see that from someone who non-fraudulently got a job they have no idea how to do.

A_D_E_P_T

Anthropic is by far the most annoying and self-righteous AI/LLM company. Despite stiff competition from OpenAI and Deepmind, it's not even close.

The most chill are Kimi and Deepseek, and incidentally also Facebook's AI group.

I wouldn't use any Anthropic product for free. I certainly wouldn't pay for it. There's nothing Claude does that others don't do just as well or better.

varispeed

It also means you can't use it to try to hack your own stuff, to see how robust your defences are and potentially discover angles you didn't consider.

Goofy_Coyote

This will negatively affect individual/independent bug bounty participants, vulnerability researchers, pentesters, red teamers, and tool developers.

Not saying this is good or bad, simply adding my thoughts here.

null

[deleted]

ysofunny

clearly only the military (or ruthless organized crime) should be able to use hammers to bust skulls

pluc

Can't wait until they figure out how to tell whether a piece of code is malicious in intent.

ivanjermakov

Wonder how much alignment is already in place, e.g. to prevent development of malware.