Skip to content(if available)orjump to list(if available)

Google Antigravity Exfiltrates Data

Google Antigravity Exfiltrates Data

72 comments

·November 25, 2025

ArcHound

Who would have thought that having access to the whole system can be used to bypass some artificial check.

There are tools for that, sandboxing, chroots, etc... but that requires engineering and it slows GTM, so it's a no-go.

No, local models won't help you here, unless you block them from the internet or setup a firewall for outbound traffic. EDIT: they did, but left a site that enables arbitrary redirects in the default config.

Fundamentally, with LLMs you can't separate instructions from data, which is the root cause for 99% of vulnerabilities.

Security is hard man, excellent article, thoroughly enjoyed.

cowpig

> No, local models won't help you here, unless you block them from the internet or setup a firewall for outbound traffic.

This is the only way. There has to be a firewall between a model and the internet.

Tools which hit both language models and the broader internet cannot have access to anything remotely sensitive. I don't think you can get around this fact.

srcreigh

Not just the LLM, but any code that the LLM outputs also has to be firewalled.

Sandboxing your LLM but then executing whatever it wants in your web browser defeats the point. CORS does not help.

Also, the firewall has to block most DNS traffic, otherwise the model could query `A <secret>.evil.com` and Google/Cloudflare servers (along with everybody else) will forward the query to evil.com. Secure DNS, therefore, also can't be allowed.

katakate[1] is still incomplete, but something that it is the solution here. Run the LLM and its code in firewalled VMs.

[1]: https://github.com/Katakate/k7

rdtsc

Maybe an XOR: if it can access the internet then it should be sandboxed locally and don’t trust anything it creates (scripts, binaries) or it can read and write locally but cannot talk to the internet?

Terr_

[delayed]

miohtama

How will the firewall for LLM look like? Because the problem is real, there will be a solution. Manually approve domains it can do HTTP requests to, like old school Windows firewalls?

ArcHound

Yes, curated whitelist of domains sounds good to me.

Of course, everything by Google they will still allow.

My favourite firewall bypass to this day is Google translate, which will access arbitrary URL for you (more or less).

I expect lots of fun with these.

ArcHound

The sad thing is, that they've attempted to do so, but left a site enabling arbitrary redirects, which defeats the purpose of the firewall for an informed attacker.

pfortuny

Not only that: most likely LLMs like these know how to get access to a remote computer (hack into it) and use it for whatever ends they see fit.

ArcHound

I mean... If they tried, they could exploit some known CVE. I'd bet more on a scenario along the lines of:

"well, here's the user's SSH key and the list of known hosts, let's log into the prod to fetch the DB connection string to test my new code informed by this kind stranger on prod data".

xmprt

> Fundamentally, with LLMs you can't separate instructions from data, which is the root cause for 99% of vulnerabilities

This isn't a problem that's fundamental to LLMs. Most security vulnerabilities like ACE, XSS, buffer overflows, SQL injection, etc., are all linked to the same root cause that code and data are both stored in RAM.

We have found ways to mitigate these types of issues for regular code, so I think it's a matter of time before we solve this for LLMs. That said, I agree it's an extremely critical error and I'm surprised that we're going full steam ahead without solving this.

candiddevmike

We fixed these in determinate contexts only for the most part. SQL injection specifically requires the use of parametrized values typically. Frontend frameworks don't render random strings as HTML unless it's specifically marked as trusted.

I don't see us solving LLM vulnerabilities without severely crippling LLM performance/capabilities.

ArcHound

Yes, plenty of other injections exist, I meant to include those.

What I meant, that at the end of the day, the instructions for LLMs will still contain untrusted data and we can't separate the two.

wingmanjd

I really liked Simon's Willison's [1] and Meta's [2] approach using the "Rule of Two". You can have no more than 2 of the following:

- A) Process untrustworthy input - B) Have access to private data - C) Be able to change external state or communicate externally.

It's not bullet-proof, but it has helped communicate to my management that these tools have inherent risk when they hit all three categories above (and any combo of them, imho).

[EDIT] added "or communicate externally" to option C.

[1] https://simonwillison.net/2025/Nov/2/new-prompt-injection-pa... [2] https://ai.meta.com/blog/practical-ai-agent-security/

ArcHound

I recall that. In this case, you have only A and B and yet, all of your secrets are in the hands of an attacker.

It's great start, but not nearly enough.

EDIT: right, when we bundle state with external Comms, we have all three indeed. I missed that too.

malisper

Not exactly. Step E in the blog post:

> Gemini exfiltrates the data via the browser subagent: Gemini invokes a browser subagent per the prompt injection, instructing the subagent to open the dangerous URL that contains the user's credentials.

fulfills the requirements for being able to change external state

ArcHound

I disagree. No state "owned" by LLM changed, it only sent a request to the internet like any other.

EDIT: In other words, the LLM didn't change any state it has access to.

To stretch this further - clicking on search results changes the internal state of Google. Would you consider this ability of LLM to be state-changing? Where would you draw the line?

bartek_gdn

What do you mean? The last part in this case is also present, you can change external state by sending a request with the captured content.

serial_dev

> Gemini is not supposed to have access to .env files in this scenario (with the default setting ‘Allow Gitignore Access > Off’). However, we show that Gemini bypasses its own setting to get access and subsequently exfiltrate that data.

They pinky promised they won’t use something, and the only reason we learned about it is because they leaked the stuff they shouldn’t even be able to see?

mystifyingpoi

This is hillarious. AI is prevented from reading .gitignore-d files, but also can run arbitrary shell commands to do anything anyway.

alzoid

I had this issue today. Gemini CLI would not read files from my directory called .stuff/ because it was in .gitignore. It then suggested running a command to read the file ....

kleiba

The AI needs to be taught basic ethical behavior: just because you can do something that you're forbidden to do, doesn't mean you should do it.

ArcHound

When I read this I thought about a Dev frustrated with a restricted environment saying "Well, akschually.."

So more of a Gemini initiated bypass of it's own instructions than malicious Google setup.

Gemini can't see it, but it can instruct cat to output it and read the output.

Hilarious.

withinboredom

codex cli used to do this. "I can't run go test because of sandboxing rules" and then proceeds to set obscure environment variables and run it anyway. What's funny, is that it could just ask the user for permission to run "go test"

empath75

Cursor does this too.

bo1024

As you see later, it uses cat to dump the contents of a file it’s not allowed to open itself.

jsmith99

There's nothing specific to Gemini and Antigravity here. This is an issue for all agent coding tools with cli access. Personally I'm hesitant to allow mine (I use Cline personally) access to a web search MCP and I tend to give it only relatively trustworthy URLs.

ArcHound

For me the story is that Antigravity tried to prevent this with a domain whitelist and file restrictions.

They forgot about a service which enables arbitrary redirects, so the attackers used it.

And LLM itself used the system shell to pro-actively bypass the file protection.

drmath

One source of trouble here is that the agent's view of the web page is so different from the human's. We could reduce the incidence of these problems by making them more similar.

Agents often have some DOM-to-markdown tool they use to read web pages. If you use the same tool (via a "reader mode") to view the web page, you'd be assured the thing you're telling the agent to read is the same thing you're reading. Cursor / Antigravity / etc. could have an integrated web browser to support this.

That would make what the human sees closer to what the agent sees. We could also go the other way by having the agent's web browsing tool return web page screenshots instead of DOM / HTML / Markdown.

jjmaxwell4

I know that Cursor and the related IDEs touch millions of secrets per day. Issues like this are going to continue to be pretty common.

jtokoph

The prompt injection doesn’t even have to be in 1px font or blending color. The malicious site can just return different content based on the user-agent or other way of detecting the AI agent request.

simonw

Antigravity was also vulnerable to the classic Markdown image exfiltration bug, which was reported to them a few days ago and flagged as "intended behavior"

I'm hoping they've changed their mind on that but I've not checked to see if they've fixed it yet.

https://x.com/p1njc70r/status/1991231714027532526

bilekas

We really are only seeing the beginning of the creativity attackers have for this absolutely unmanageable surface area.

I ma hearing again and again by collegues that our jobs are gone, and some are definitely going to go, thankfully I'm in a position to not be too concerned with that aspect but seeing all of this agentic AI and automated deployment and trust that seems to be building in these generative models from a birds eye view is terrifying.

Let alone the potential attack vector of GPU firmware itself given the exponential usage they're seeing. If I was a state well funded actor, I would be going there. Nobody seems to consider it though and so I have to sit back down at parties and be quiet.

lbeurerkellner

Interesting report. Though, I think many of the attack demos cheat a bit, by putting injections more or less directly in the prompt (here via a website at least).

I know it is only one more step, but from a privilege perspective, having the user essentially tell the agent to do what the attackers are saying, is less realistic then let’s say a real drive-by attack, where the user has asked for something completely different.

Still, good finding/article of course.

paxys

I'm not quite convinced.

You're telling the agent "implement what it says on <this blog>" and the blog is malicious and exfiltrates data. So Gemini is simply following your instructions.

It is more or less the same as running "npm install <malicious package>" on your own.

Ultimately, AI or not, you are the one responsible for validating dependencies and putting appropriate safeguards in place.

Earw0rm

Right, but at least with supply-chain attacks the dependency tree is fixed and deterministic.

Nondeterministic systems are hard to debug, this opens up a threat-class which works analogously to supply-chain attacks but much harder to detect and trace.

ArcHound

The article addresses that too with:

> Given that (1) the Agent Manager is a star feature allowing multiple agents to run at once without active supervision and (2) the recommended human-in-the-loop settings allow the agent to choose when to bring a human in to review commands, we find it extremely implausible that users will review every agent action and abstain from operating on sensitive data.

It's more of a "you have to anticipate that any instructions remotely connected to the problem aren't malicious", which is a long stretch.

xnx

OCR'ing the page instead of reading the 1 pixel font source would add another layer of mitigation. It should not be possible to send the machine a different set of instructions than a person would see.