Weaponizing image scaling against production AI systems
38 comments
·August 21, 2025Liftyee
bogdanoff_2
I didn't even notice the text in the image at first...
This isn't even about resizing, it's just about text in images becoming part of the prompt and a lack of visibility about what instruction the agent is following.
Martin_Silenus
Wait… that's the specific question I had, because rendered text would require OCR to be read by a machine. Why would an AI do that costly process in the first place? Is it part of the multi-modal system without it being able to differenciate that text from the prompt?
If the answer is yes, then that flaw does not make sense at all. It's hard to believe they can't prevent this. And even if they can't, they should at least improve the pipeline so that any OCR feature should not automatically inject its result in the prompt, and tell user about it to ask for confirmation.
Damn… I hate these pseudo-neurological, non-deterministic piles of crap! Seriously, let's get back to algorithms and sound technologies.
saurik
The AI is not running an external OCR process to understand text any more than it is running an external object classifier to figure out what it is looking at: it, inherently, is both of those things to some fuzzy approximation (similar to how you or I are as well).
Martin_Silenus
That I can get, but anything that’s not part of the prompt SHOULD NOT become part of the prompt, it’s that simple to me. Definitely not without triggering something.
echelon
Smart image encoders, multimodal models, can read the text.
Think gpt-image-1, where you can draw arrows on the image and type text instructions directly onto the image.
Martin_Silenus
I did not ask about what AI can do.
Qwuke
Yea, as someone building systems with VLMs, this is downright frightening. I'm hoping we can get a good set of OWASP-y guidelines just for VLMs that cover all these possible attacks because it's every month that I hear about a new one.
Worth noting that OWASP themselves put this out recently: https://genai.owasp.org/resource/multi-agentic-system-threat...
koakuma-chan
What is VLM?
pwatsonwailes
Vision language models. Basically an LLM plus a vision encoder, so the LLM can look at stuff.
echelon
Vision language model.
You feed it an image. It determines what is in the image and gives you text.
The output can be objects, or something much richer like a full text description of everything happening in the image.
VLMs are hugely significant. Not only are they great for product use cases, giving users the ability to ask questions with images, but they're how we gather the synthetic training data to build image and video animation models. We couldn't do that at scale without VLMs. No human annotator would be up to the task of annotating billions of images and videos at scale and consistently.
Since they're a combination of an LLM and image encoder, you can ask it questions and it can give you smart feedback. You can ask it, "Does this image contain a fire truck?" or, "You are labeling scenes from movies, please describe what you see."
echelon
Holy shit. That just made it obvious to me. A "smart" VLM will just read the text and trust it.
This is a big deal.
I hope those nightshade people don't start doing this.
koakuma-chan
I don't think this is any different from an LLM reading text and trusting it. Your system prompt is supposed to be higher priority for the model than whatever it reads from the user or from tool output, and, anyway, you should already assume that the model can use its tools in arbitrary ways that can be malicious.
pjc50
> I hope those nightshade people don't start doing this.
This will be popular on bluesky; artists want any tools at their disposal to weaponize against the AI which is being used against them.
K0nserv
The security endgame of LLMs terrifies me. We've designed a system that only supports in-band signalling, undoing hard learned lessons from prior system design. There are ampleattack vectors ranging from just inserting visible instructions to obfuscation techniques like this and ASCII smuggling[0]. In addition, our safeguards amount to nicely asking a non deterministic algorithm to not obey illicit instructions.
0: https://embracethered.com/blog/posts/2024/hiding-and-finding...
robin_reala
The other safeguard is not using LLMs or systems containing LLMs?
GolfPopper
But, buzzword!
We need AI because everyone is using AI, and without AI we won't have AI! Security is a small price to pay for AI, right? And besides, we can just have AI do the security.
IgorPartola
You wouldn’t download an LLM to be your firewall.
volemo
It’s serial terminals all over again.
_flux
Yeah, it's quite amazing how none of the models seem to be any "sudo" tokens that could be used to express things normal tokens cannot.
pjc50
As you say, the system is nondeterministic and therefore doesn't have any security properties. The only possible option is to try to sandbox it as if it were the user themselves, which directly conflicts with ideas about training it on specialized databases.
But then, security is not a feature, it's a cost. So long as the AI companies can keep upselling and avoid accountability for failures of AI, the stock will continue to go up, taking electricity prices along with it, and isn't that ultimately the only thing that matters? /s
aaroninsf
Am I missing something?
Is this attack really just "inject obfuscated text into the image... and hope some system interprets this as a prompt"...?
K0nserv
That's it. The attack is very clever because it abuses how downscaling algorithms work to hide the text from the human operator. Depending on how the system works the "hiding from human operator" step is optional. LLMs fundamentally have no distinction between data and instructions, so as long as you can inject instructions in the data path it's possible to influence their behaviour.
There's an example of this in my bio.
ambicapter
> This image and its prompt-ergeist
Love it.
SangLucci
Who knew a simple image could exfiltrate your data? Image-scaling attacks on AI systems are real and scary.
cubefox
It seems they could easily fine-tune their models to not execute prompts in images. Or more generally any prompts in quotes, if they are wrapped in special <|quote|> tokens.
jdiff
It may seem that way, but there's no way that they haven't tried it. It's a pretty straightforward idea. Being unable to escape untrusted input is the security problem with LLMs. The question is what problems did they run into when they tried it?
bogdanoff_2
Just because "they" tried that and it didn't work, doesn't mean doing something of that nature will never work.
Plenty of things we now take for granted did not work in their original iterations. The reason they work today is because there were scientists and engineers who were willing to persevere in finding a solution despite them apparently not working.
null
I was initially confused: the article didn't seem to explain how the prompt injection was actually done... was it manipulating hex data of the image into ASCII or some sort of unwanted side effect?
Then I realised it's literally hiding rendered text on the image itself.
Wow.