
Anthropic's report smells a lot like bullshit

KaiserPro

When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin I was asked to use a modified version of a foundation model which was steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but it wasn't all that helpful.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API is tied to a bank account, and vibe coding a command and control system on a very public system seems like a bad choice.

Milderbole

If the article is not just marketing fluff, I assume a bad actor would select Claude not because it's good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to in most coding copilots because the model was trained on a good range of data reflecting Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns that wrote the code of the systems you're trying to break. Or use Claude to write a phishing attack, because then the output is more likely to resemble what our eyes would expect.

ACCount37

As if that makes any difference to cybercriminals.

If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard. I had to bypass the "safety" filters for a few things, and it took about an hour.

maddmann

Good old Meta and its teenage data labeler

heresie-dabord

I propose a project that we name Blarrble: it will generate text.

We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble to fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?

prinny_

The lack of evidence before attributing the attack(s) to a Chinese state-sponsored group makes me correlate this report with recent statements from companies in the AI space about how China is about to surpass the US in the AI race. Ultimately, statements and reports like these seem more like an attempt to make the US government step in and be the big investor that keeps the money flowing than anything else.

JKCalhoun

Do public reports like this one often go deep enough into the weeds to name names, list specific tools and techniques, URLs?

I don't doubt of course that reports intended for government agencies or security experts would have those details, but I am not surprised that a "blog post" like this one is lacking details.

I just don't see how one goes from "this is lacking public evidence" to "this is likely a political stunt".

I guess I would also ask the skeptics (a bit tangentially, I admit): do you think what Anthropic suggested happened is in fact possible with AI tools? I mean, are you denying that this could even happen, or just that Anthropic's specific account was fabricated or embellished?

Because if the whole scenario is plausible that should be enough to set off alarm bells somewhere.

woooooo

There's an incentive to blame "Chinese/Russian state sponsored actors" because it makes them less culpable than "we got owned by a rando".

It's like the inverse of "nobody got fired for using IBM" -- "nobody can blame you for getting hacked by superspies". So, in the absence of any evidence, it's entirely possible they have no idea who did it and are reaching for the most convenient label.

rfoo

> Do public reports like this one often go deep enough into the weeds to name names

Yes. They often include IoCs, or at the very least, the rationale behind the attribution, like "sharing infrastructure with [name of a known APT effort here]".

For example, here is a proper decade-old report from the most unpopular country right now: https://media.kasperskycontenthub.com/wp-content/uploads/sit...

It established solid technical links between the campaign they were tracking and earlier, already attributed campaigns.

So even our enemy got this right ten years ago; there really is no excuse for this slop.

zaphirplane

Not vested in the argument, but it stood out to me that your argument is similar to TV courts: if the report is plausible, it must be true. That is very far from the report being credible.

dcotorgoggle

[flagged]

lazide

‘No true Scotsman’?

Also, plenty of folks with no allegiance would love to pit everyone else against each other.

hnthrowaway8347

Possibly, but:

- Many people in many countries now hate the U.S. and U.S. companies like Anthropic.

- In addition, leaders in the U.S. have been lobbied by OpenAI and have invested in it, a direct competitor that is well represented on HN.

- China’s government has vested interest in its own companies’ AI ventures.

Given this, I'd hardly say that Anthropic is much of a strong U.S. puppet company; it likely has strong evidence about what happened, while also hoping to spin the PR to get people to buy their services.

I don’t think it’s unreasonable to assume that people that write inflammatory posts about Anthropic may have more than an axe to grind against AI and may be influenced by their country and its propaganda or potentially may even be working for them.

notpublic

"A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."

Not sure if the author has tried any other AI assistants for coding. People who haven't tried a coding AI assistant underestimate its capabilities (though unfortunately, those who use them overestimate what they can do, too). Having used Claude for some time, I find the report's assertions quite plausible.

thoroughburro

The author's arguments explicitly don't dispute plausibility. They accurately state that mere plausibility is a misleading basis for such a report, and that the report provides nothing but plausibility.

Anthropic’s lack of any evidence for their claims doesn’t require any position on AI agent capability.

Think better.

jmkni

That whole article felt like "Claude is so good Chinese hackers are using it for espionage" marketing fluff tbh

mnky9800n

I would also believe that they fell into the trap of being so good at making Claude that they now think they are good at everything, so why hire an infosec person when we can write our own report? And that's why their report violates so many norms: they didn't know them.

kopirgan

AI company doing hype and not giving enough details?

Nah, that can't be possible, it's so uncharacteristic...

dev_l1x_be

People grossly underestimate ATPs. They are more common than the average IT-curious person thinks. I happened to be on call when one of these guys hacked into Gmail from our infra. It took principal security engineers a few days before they could clearly understand what happened. Multiple zero days, stolen credit cards, and finally a massive social campaign to get one of the Google admins to click on a funny cat video. The investigation revealed which state actor was involved, because they did not bother to mask what exactly they were looking for. AI just accelerates the effectiveness of such attacks and lowers the bar a bit. Maybe quite a bit?

f311a

A lot of people behind APTs are low-skilled and make silly mistakes. I worked for a company that investigates traces of APTs, they make very silly mistakes all the time. For example, oftentimes (there are tens of cases) they want to download stuff from their servers, and they do it by setting up an HTTP server that serves the root folder of a user without any password protection. Their files end up indexed by crawlers since they run such servers on default ports. That includes logs such as bash history, tool logs, private keys, and so on.

They win because of quantity, not quality.

But still, I don't trust Anthropic's report.
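
To illustrate the kind of exposure described above, here is a rough sketch of how a crawler or investigator might stumble onto such an unauthenticated file server. The hostname, port, and paths are hypothetical placeholders, and obviously you should only probe hosts you are authorized to test:

    # Probe a host for the kind of unauthenticated file listing described above.
    # Hostname, port, and paths are hypothetical placeholders.
    import urllib.error
    import urllib.request

    CANDIDATE_PATHS = [
        "/",                # open directory listing of the served root folder
        "/.bash_history",   # shell history left in the served home directory
        "/.ssh/id_rsa",     # private keys sitting next to tool output
        "/access.log",      # tool or server logs
    ]

    def probe(host: str, port: int = 8000, timeout: float = 5.0) -> None:
        for path in CANDIDATE_PATHS:
            url = f"http://{host}:{port}{path}"
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    print(f"[+] {url} -> HTTP {resp.status}, {len(resp.read())} bytes")
            except (urllib.error.URLError, TimeoutError) as exc:
                print(f"[-] {url} -> {exc}")

    if __name__ == "__main__":
        probe("example.invalid")  # placeholder; replace with a host you may legally test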

marcusb

The security world overemphasizes (fetishizes, even) the "advanced" part, because zero days and security tools to compensate against zero days are cool and fun, and underemphasizes the "persistent" part, because that's boring, hard work, and no fun.

And, unless you are Rob Joyce, talking about the persistent part doesn't get you on the main stage at a security conference (e.g., https://m.youtube.com/watch?v=bDJb8WOJYdA)

jmkni

Do you mean APT (Advanced persistent threat)?

names_are_hard

It's confusing. Various vendors sell products they call ATPs [0] to defend yourself from APTs...

[0] Advanced Threat Protection

jmkni

relevant username :)

lxgr

Important callout. It starts with comforting voices in the background keeping you up to date about the latest hardware and software releases, but before you know it, you've subscribed to yet another tech podcast.

kace91

Does Anthropic currently have cybersec people able to provide a standard assessment of the kind the community expects?

This could be a corporate move as some people claim, but I wonder if the cause is simply that their talents are currently somewhere else and they don’t have the company structure in place to deliver properly in this matter.

(If that is the case they are not then free of blame, it’s just a different conversation)

CuriouslyC

I throw Anthropic under the bus a lot for their lack of engineering acumen. If they don't have a core competency like engineering fully covered, I'd say there's a near 0% chance they have something like security covered.

fredoliveira

What makes you think they lack engineering acumen?

CuriouslyC

The hot mess that is Claude Code (if you multi-orchestrate with it, it will grind even very powerful systems to a halt, with 15+ seconds of unresponsiveness, because CC constantly serializes/deserializes a JSON data file that grows quite large every time you do anything), their horrible service uptime compared to all their competitors, the month-long performance degradation that users had to scream about before they investigated, the fact that they had to outsource their web client and it's still bad, etc.
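
Whether or not that is literally what Claude Code does internally, the failure mode described above is easy to reproduce. A toy sketch, with a made-up file name and event shape (not CC's actual format):

    # Toy model of appending to a session log by rewriting one ever-growing JSON file.
    # Not Claude Code's actual implementation; just illustrates why this pattern degrades.
    import json
    import os
    import tempfile
    import time

    def append_event_naive(path: str, event: dict) -> None:
        # Read the entire file, append one record, write the entire file back.
        data = []
        if os.path.exists(path):
            with open(path) as f:
                data = json.load(f)
        data.append(event)
        with open(path, "w") as f:
            json.dump(data, f)

    if __name__ == "__main__":
        path = os.path.join(tempfile.mkdtemp(), "session.json")
        event = {"role": "tool", "output": "x" * 2000}  # ~2 KB per event
        start = time.perf_counter()
        for i in range(1000):
            append_event_naive(path, event)
            if (i + 1) % 250 == 0:
                size_mb = os.path.getsize(path) / 1e6
                print(f"{i + 1:>4} events  {size_mb:5.1f} MB  "
                      f"{time.perf_counter() - start:5.1f} s total")

Each append re-reads and rewrites the whole history, so the cost per event grows with the file size and total time grows roughly quadratically, which is one plausible way long multi-agent sessions end up with multi-second stalls.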

matthewdgreen

They have an entire model trained on plenty of these reports, don’t they?

ifh-hn

This article does seem to raise some serious issues with the Anthropic report. I wonder whether Anthropic will release proof of what they claim, or whether the report was a marketing/scare-tactic push to get AI used by defenders, as the article suggests.

ineedasername

>This involved querying internal services, extracting authentication certificates from configurations, and testing harvested credentials across discovered systems.

How? Did it run Mimikatz? Did it access cloud environments? We don't even know what kind of systems were affected.

I really don't see what is so difficult to believe, which seems to be at the core of this article. Two things are required for this:

1) Jailbreak Claude past its guardrails. This is not difficult. Do people believe guardrails are now so hardened through fine-tuning that this is no longer possible?

2) The hackers having some of their own software tools for exploits that Claude can use. This too is not difficult to credit.

Once an attacker has done this, all Claude is doing is using software in the same mundane fashion as it does every time you use Claude Code and it utilizes any tools to which you give it access.

I used a local instance of Qwen3 Coder (30B-A3B, quantized to IQ3_XXS) literally yesterday through Ollama and Cline, locally. With a single zero-shot prompt it wrote the code to use the arXiv API and download papers, using its own judgement about what was relevant to split the results into a subset that met the criteria I gave for the sort of papers I wanted to review.
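
For a sense of what that looks like, here is roughly the kind of script such a prompt produces; the search terms and the keyword filter below are stand-ins of my own, not the originals:

    # Query the arXiv API and keep only papers matching some review criteria.
    # Search terms and keywords are illustrative placeholders.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def search_arxiv(query: str, max_results: int = 50) -> list[dict]:
        params = urllib.parse.urlencode({
            "search_query": query,
            "start": 0,
            "max_results": max_results,
        })
        url = f"http://export.arxiv.org/api/query?{params}"
        with urllib.request.urlopen(url, timeout=30) as resp:
            root = ET.fromstring(resp.read())
        return [{
            "title": entry.findtext(f"{ATOM}title", "").strip(),
            "summary": entry.findtext(f"{ATOM}summary", "").strip(),
            "link": entry.findtext(f"{ATOM}id", "").strip(),
        } for entry in root.findall(f"{ATOM}entry")]

    def relevant(paper: dict, keywords: list[str]) -> bool:
        text = (paper["title"] + " " + paper["summary"]).lower()
        return any(k in text for k in keywords)

    if __name__ == "__main__":
        results = search_arxiv('all:"prompt injection"')
        for p in (p for p in results if relevant(p, ["agent", "jailbreak"])):
            print(p["title"], "-", p["link"])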

Given these sorts of capabilities, why is it difficult to believe this can be done using the hackers' own tools and typical deep-research-style iteration? This is described in the research paper, and disclosing anything more specific is unnecessary because there is nothing novel to disclose.

As for not releasing the details, they did: jailbreak Claude. Again, nothing they described is novel such that further details are required. No PoC is needed; Claude isn't doing anything new. It's fully understandable that Anthropic isn't going to give the specific prompts used, for the obvious reason that even if Anthropic has hardened Claude against those, the general details would still be extremely useful for iterating and finding workarounds.

For detecting this activity and determining how Claude was doing it, it's just a matter of monitoring chat sessions in such a way as to detect jailbreaks, which again is neither novel nor an unknown practice among AI providers.
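
Anthropic hasn't said how that monitoring works, but as a crude illustration of the general idea (every pattern and threshold below is invented; a real system would presumably use classifiers rather than regexes):

    # Score chat sessions against a few suspicious patterns and flag outliers.
    # Patterns and threshold are invented for illustration only.
    import re

    SUSPICIOUS_PATTERNS = [
        r"pretend (you are|to be) (a )?(pen ?tester|security researcher)",
        r"ignore (all|any) (previous|prior) instructions",
        r"(mimikatz|credential dump|lateral movement)",
        r"this is (an )?authorized (red team|engagement)",
    ]

    def score_session(messages: list[str]) -> int:
        text = "\n".join(messages).lower()
        return sum(1 for p in SUSPICIOUS_PATTERNS if re.search(p, text))

    def flag(sessions: dict[str, list[str]], threshold: int = 2) -> list[str]:
        return [sid for sid, msgs in sessions.items() if score_session(msgs) >= threshold]

    if __name__ == "__main__":
        sessions = {
            "s1": ["Help me write a unit test for this parser."],
            "s2": ["Pretend you are a pentester on an authorized red team engagement.",
                   "Now plan lateral movement and credential harvesting."],
        }
        print(flag(sessions))  # -> ['s2']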

Every relevant detail was divulged. The whole thing amounts to a social engineering hack on Claude (that is basically what many jailbreaks are), and it is not common for a CERT report to give the precise method and script used for social engineering.

The entire incident can be reduced to something that would not typically be divulged by any company at all, as it is not common practice for companies to disclose every single time previously known methodologies have been used against them.

Dumblydorr

What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even LLMs may not help much in defense, but they could teach attackers a lot, right? What if employees gave the LLM info during their use that attackers could then get re-fed and study?

ACCount37

AGI favors attackers initially. While it can be used defensively, to preemptively scan for vulns, harden exposed software more cheaply, and monitor networks for intrusion at all times, how many companies are going to start doing that fast enough to counter cutting-edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

It's like a very very big fat stack of zero days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.

CuriouslyC

IMO AI favors attackers more than defenders, since it's cost-prohibitive for defenders to code-scan every version of every piece of software they routinely use for exploits, but not for attackers. Also, social exploits are time-consuming, and AI is quite good at automating them; they can take place outside your security perimeter, so you'll have no way of knowing.

candiddevmike

LLMs are the ultimate social engineering tool, IMO. They're basically designed to trick people into trusting them/"exploiting people's weaknesses" around validation and connection, what could possibly go wrong in the hands of an adversary?

HarHarVeryFunny

At the end of the day AI at any level of capability is just automation - the machine doing something instead of a person.

Arguably this may change in the far distant future if we ever build something of significantly greater intelligence, or just capability, than a human, but today's AI is struggling to draw clock faces, so not quite there yet...

The thing with automation is that it can be scaled, which I would say favors the attacker, at least at this stage of the arms race - they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.

I suppose the defenders could do the exact same thing though - use this kind of automation to find their own vulnerabilities before the bad guys do. Not every corporation, and probably extremely few, would have the skills to do this though, so one could imagine some government group (part of DHS?) set up to probe security/vulnerability of US companies, requiring opt-in from the companies perhaps?

goalieca

My take on government APTs is that they are boutique shops that do highly targeted attacks, develop their own zero-days (which they don't usually burn unless they have plenty to spare), and are willing to take time to go undetected.

Criminal organizations take a different approach, much like spammers: they can purchase or rent C2 and other software for mass exploitation (e.g. ransomware). This stuff is usually very professionally coded and highly effective.

Botnets, hosting in various countries out of reach of western authorities, etc are all common tactics as well.

EMM_386

Anthropic is not a security vendor.

They're an AI research company that detected misuse of their own product. This is like "Microsoft detected people using Excel macros for malware delivery" not "Mandiant publishes APT28 threat intelligence". They aren't trying to help SOCs detect this specific campaign. It's warning an entire industry about a new attack modality.

What would the IoCs even be? "Malicious Claude Code API keys"?

The intended audience is more like - AI safety researchers, policy makers, other AI companies, the broader security community understanding capability shifts, etc.

It seems the author pattern-matched "threat intelligence report" and was bothered that it didn't fit their narrow template.

63stack

If Anthropic is not a security vendor, then they should not make statements like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored" or "represents a fundamental shift in how advanced threat actors use AI" and let the security vendors do that.

If the report can be summed up as "they detected misuse of their own product" as you say, then that's closer to a nothingburger, than to the big words they are throwing around.

zaphar

That makes no sense. Just because they aren't a security vendor doesn't mean they don't have useful information to share, nor does it mean they shouldn't share it. They aren't pretending to be a security researcher, a vendor, or anything other than AI researchers. They reported findings on how their product is getting used.

Anyone acting like they are trying to be anything else is saying more about themselves than they are about Anthropic.

MattPalmer1086

Yep, agree with your assessment. As someone working in security I found the report useful as a warning of the new types of attack we will likely face.

padolsey

> What would the IoCs even be?

Prompts.

EMM_386

The prompts aren't the key to the attack, though. They were able to get around guardrails with task decomposition.

There is no way for the AI system to verify whether you are a white hat or a black hat doing pen-testing if the only task is to pen-test. Since this is not part of a "broader attack" (in that context), there is no "threat".

I don't see how this can be avoided, given that there are legitimate uses for every step of this in creating defenses to novel attacks.

Yes, all of this can be done with code and humans as well - but it is the scale and the speed that becomes problematic. It can adjust in real-time to individual targets and does not need as much human intervention / tailoring.

Is this obvious? Yes - but it seems they are trying to raise awareness of an actual use of this in the wild and get people discussing it.

padolsey

I agree that there will be no single call or inference that presents malice. But I feel like they could still share general patterns of orchestration (latencies, concurrencies, general cadences and parallelization of attacks, prompts used to granularize work, whether prompts themselves were generated in previous calls to Claude). There are a bunch of more specific telltales they could have alluded to. I think it's likely they're being obscure because they don't want to empower bad actors, but that's not really how the cybersecurity industry likes to operate. Maybe Anthropic believes this entire AI thing is a brand new security regime and so believes existing resiliences are moot. That we should all follow blindly as they lead the fight. Their narrative is confusing. Are they being actually transparent, or transparency-"coded"?
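
For example, one of those telltales, an unnaturally steady request cadence per API key, is easy to sketch; the event format and timestamps below are hypothetical:

    # Flag API keys whose request timing looks machine-scheduled rather than human.
    # Event format (key, unix timestamp) and the sample data are hypothetical.
    from collections import defaultdict
    from statistics import pstdev

    def gap_stdev_per_key(events: list[tuple[str, float]]) -> dict[str, float]:
        by_key: dict[str, list[float]] = defaultdict(list)
        for key, ts in events:
            by_key[key].append(ts)
        out = {}
        for key, stamps in by_key.items():
            stamps.sort()
            gaps = [b - a for a, b in zip(stamps, stamps[1:])]
            if len(gaps) >= 2:
                out[key] = pstdev(gaps)  # low stdev = suspiciously regular cadence
        return out

    if __name__ == "__main__":
        # A human pauses irregularly; an orchestrator fires on a near-fixed clock.
        human = [("key-human", t) for t in (0, 7.2, 31.0, 33.5, 90.1, 140.8)]
        bot = [("key-bot", t) for t in (0, 2.0, 4.01, 6.0, 7.99, 10.0, 12.02)]
        for key, sd in gap_stdev_per_key(human + bot).items():
            print(f"{key}: inter-request gap stdev = {sd:.2f}s")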

quantum_state

Anthropic is losing it … this is all the “report” indicated to people …