Microsoft 365 Copilot – Arbitrary Data Exfiltration via Mermaid Diagrams
17 comments
·October 26, 2025simonw
That site just gave me a 503 but here's the Internet Archive copy: https://web.archive.org/web/20251023095538/https://www.adaml...
This isn't the first Mermaid prompt injection exfiltration we've seen - here's one from August that was reported by Johann Rehberger against Cursor (and fixed by them): https://embracethered.com/blog/posts/2025/cursor-data-exfilt...
That's mentioned in the linked post. Looks like that attack was different - Cursor's Mermaid implementation could render external images, but Copilot's doesn't let you do that so you need to trick users with a fake Login button that activates a hyperlink instead.
luke-stanley
The Lethal Trifecta strikes again! Mermaid seems like a bit of a side issue, presumably there are lots of ways data might leak out. It could have just been a normal link. They should probably look further into the underlying issue: unrelated instruction following.
Thanks for the archive link and the very useful term BTW! I also got 503 when trying to visit.
simonw
I think they're doing this the right way. You can't fix unrelated instruction following with current generation LLMs, so given that the only leg you can remove from the trifecta is mechanisms for exfiltrating the data.
The first AI lab to solve unrelated instruction following is going to have SUCH a huge impact.
hshdhdhehd
Not even humans can do it perfectly (hence social engineering)
binarymax
> MSRC bounty team determined that M365 Copilot was out-of-scope for bounty and therefore not eligible for a reward.
What a shame. There’s probably LOTS of vulns in copilot. This just discourages researchers and responsible disclosure, likely leaving copilot very insecure in the long run.
candiddevmike
It's irresponsible for any company to be using copilot with MS having this bug bounty attitude, IMO. Would be curious what other products are out of bounds so I know not to use them...
p_ing
QQ for the LLM folks -- is this possibly due to the lack of determinization of LLM output?
If I code a var blah = 5*5; I know the answer is always 35. But if I ask an LLM, it seems like the answer could be anything from correct to any incorrect number one could dream up.
We saw this at work with the seahorse emoji question. A variety of [slight] different answers.
null
CaptainOfCoit
> There’s probably LOTS of vulns in copilot
Probably exactly why they "determined" it to be out of scope :)
Nextgrid
It’s both interesting to see all the creative ways people find to exploit LLM-based systems, but also disappointing that to this day designers of these systems don’t want to accept that LLMs are inherently vulnerable to prompt injection and short of significant breakthroughs in AI interpretability will remain hopelessly broken regardless of ad-hoc “mitigations” they implement.
a-dub
" ... BUT most importantly, ... "
i love the use of all capitals for emphasis for important instructions in the malicious prompt. it's almost like an enthusiastic leader of a criminal gang explaining the plot in a dingey diner the night before as the rain pours outside.
narrator
Prompt Injection is an interesting difference between human consciousness and machine "consciousness", or what people try and liken to it. A human can easily tell when information is coming from his memory or internal thoughts and when it is coming from a possibly less reliable outside source. Gaslighting is essentially an attempted prompt injection and is considered psychological abuse. Interestingly, people complain about AI gaslighting them and AI doesn't seem to think that's a problem.
lazyasciiart
Isn’t that what marketing is?
https://web.archive.org/web/20251023095538/https://www.adaml...