
We pwned X, Vercel, Cursor, and Discord through a supply-chain attack

dllu

The fact that SVG files can contain scripts was a bit of a mistake. On one hand, the animations, entire interactive demos, and even games in a single SVG are cool. But on the other hand, it opens up a serious can of worms of security vulnerabilities. As a result, SVG files are often banned from various image upload tools, they do not unfurl previews, and so on. If you upload an SVG to Discord, it just shows the raw code; and don't even think about sharing an SVG image via Facebook Messenger, WeChat, Google Hangouts, or whatever. In 2025, raster formats remain far more accessible and easily shared than SVGs.

This is very sad because SVGs often have much smaller file sizes and obviously look much better at various scales. If only there were a widely used vector format that has no script support and can be easily shared.

poorman

All SVGs should be properly sanitized on the way into a backend, on the way out of it, and when rendered on a page.

Do you allow SVGs to be uploaded anywhere on your site? This is a PSA that you're probably at risk unless you can find the few hundred lines of code doing the sanitization.

Note to Ruby on Rails developers: your Active Storage-uploaded SVGs are not sanitized by default.
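As an illustration of the PSA above, here is why naive filtering is not sanitization: stripping `<script>` elements leaves event-handler attributes untouched, and a browser will still run them when the SVG is opened directly. (Minimal sketch; both the payload and the regex "filter" are invented for demonstration.)

```python
import re

# Hypothetical malicious SVG: no <script> element at all, just an
# event-handler attribute that fires when the document loads.
svg = '<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)"><circle r="10"/></svg>'

# Naive "sanitizer": strip <script> elements and call it a day.
naive = re.sub(r'<script[\s\S]*?</script>', '', svg, flags=re.IGNORECASE)

# The handler survives untouched.
print('onload' in naive)  # True
```

The regex never matches, so the file comes out byte-for-byte identical, handler and all.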

poorman

GitLab has some code in their repo if you want to see how to do it.

nradov

Is there SVG sanitization code which has been formally proven correct and itself free of security vulnerabilities?

ivw

just run them through `svgo` and get the benefits of smaller filesizes as well

Wowfunhappy

IMO, the bigger problem with SVGs as an image format is that different software often renders them (very) differently! It's a class of problem that raster image formats basically don't have.

zffr

I would have expected SVGs to be like PDFs and render the same across devices. Is the issue that some renderers don’t implement the full spec, or that some implement parts incorrectly?

aidenn0

External entities in XML[1] were a similar issue back when everyone was using XML for everything, and parsers processed external-entities by default.

1: https://owasp.org/www-community/vulnerabilities/XML_External...

hinkley

At least with external entities you could deny the parser an internet connection and force it to only load external documents from a cache you prepopulated and vetted. Turing completeness is a bullshit idea in document formats.

actionfromafar

Postscript is pretty neat IMHO and it’s Turing complete. I really appreciated my raytraced page finally coming out of that poor HP laser after an hour or so.

aidenn0

With SVGs you can serve them from a different domain. IIUC the issue from TFA was that the SVGs were served from the primary domain; had they been on a different domain, they would not have been allowed to do as much.

gnerd00

calling Leonard Rosenthol ...

bobbylarrybobby

Would it be possible for messenger apps to simply ignore <script> tags (and accept that this will break a small fraction of SVGs)? Or is that not a sufficient defense?

demurgos

I looked into it for work at some point as we wanted to support SVG uploads. Stripping <script> is not enough to make a file inert: scripts can also be attached via event-handler attributes. If you want to prevent external resources as well, it gets more complex.

The only reliable solution would be an allowlist of safe elements and attributes, but it would quickly cause compat issues unless you spend time curating the rules. I did not find an existing lib doing it at the time, and it was too much effort to maintain it ourselves.

The solution I ended up implementing was a sandboxed Chromium instance, driven over the DevTools protocol, to load the SVG and rasterize it. This allowed uploading SVG files, but they were then served as rasterized PNGs to other users.
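An allowlist pass along the lines described above can be sketched with the Python standard library. The tag and attribute lists here are illustrative and far from complete; a production rule set needs real curation, plus handling of `<style>` content, CSS in `style` attributes, and `href`/`xlink:href` URLs.

```python
import xml.etree.ElementTree as ET

# Illustrative (incomplete) allowlists; anything not listed is dropped.
ALLOWED_TAGS = {"svg", "g", "path", "circle", "rect", "line",
                "polyline", "polygon", "ellipse"}
ALLOWED_ATTRS = {"d", "r", "cx", "cy", "x", "y", "width", "height",
                 "fill", "stroke", "viewBox", "points"}

def sanitize(svg_text: str) -> str:
    root = ET.fromstring(svg_text)

    def clean(el):
        # Remove child elements whose (namespace-stripped) tag is unknown.
        for child in list(el):
            tag = child.tag.split('}')[-1]
            if tag not in ALLOWED_TAGS:
                el.remove(child)
            else:
                clean(child)
        # Remove unknown attributes, which covers on* event handlers.
        for attr in list(el.attrib):
            if attr.split('}')[-1] not in ALLOWED_ATTRS:
                del el.attrib[attr]

    clean(root)
    return ET.tostring(root, encoding="unicode")
```

Even this short sketch shows where the compat pain comes from: anything not explicitly listed (gradients, filters, text, animation elements) silently disappears from the output.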

hoppp

I agree. When animating SVGs I never put the JS inside them, so the ability to embed it is just dangerous, I think.

css_apologist

is sanitizing SVGs hard, or does everyone just forget they can contain JS?

nightski

Does it need to be as complicated as a new format? Or would it be enough to disallow any scripting in the provided SVGs (or to strip it out)? I can't imagine there are that many SVGs out there that take advantage of the feature.

HPsquared

Could there be a limited format that excludes the scripting element? Like in Excel: xlsx files have no macros, but xlsm (and the old xls) can contain macros.

llmslave2

This feels so emblematic of our current era. VC funded vibe coded AI documentation startup somehow gets big name customers who don't properly vet the security of the platform, ship a massive vulnerability that could pwn millions of users and the person who reports the vulnerability gets...$5k.

If I recall correctly, last week Mintlify wrote a blog post showcasing their impressive(ly complicated) caching architecture, pretending they were doing real engineering, when it turns out nobody there seems to know what they're doing. But they've managed to convince some big names to use them.

Man, it's like everything I hate about modern tech. Good job Eva for finding this one. Starting to think that every AI startup or company that is heavily using gen-ai for coding is probably extremely vulnerable to the simplest of attacks. Might be a way to make some extra spending money lol.

tptacek

I don't think anybody in SFBA-style software development, both pre- and post-LLM, is really resilient against these kinds of attacks. The problem isn't vibe coding so much as it is multiparty DLL-hell dependency stacks, which is something I attribute more to Javascript culture than to any recent advance in technology.

llmslave2

You're right that it's a specific programming culture that is especially vulnerable to it. And for the same reasons they were vulnerable to the same thing to a lesser degree before the rise of LLMs.

But like, this case isn't really a dependency or supply chain attack. It's just allowing remote code execution because, idk, the dev who implemented it didn't read the manual and see that MDX can execute arbitrary code or something. Or maybe they vibe coded it and saw it worked and didn't bother to check. Perhaps it's a supply-chain attack on Discord et al. for using Mintlify; if that's what you meant, then I apologize.

I think you're right that I have an extreme aversion to SFBA-style software development, and partly because of how gen-ai is used there.

michaelt

One might consider this a supply chain attack because the title of the post is “We pwned X, Vercel, Cursor, and Discord through a supply-chain attack”

Banditoz

I'm curious what caching architecture a docs site needs; it can't be more complicated than a standard-fare CDN?

mosura

Search indexing, etc.

padjo

Seems like such a tiny amount of money for a bug that can be used to completely own your customers' accounts. Also, not much excuse for XSS these days.

tptacek

This comes up on every story about bug bounties. There is in general no market at all for XSS vulnerabilities. That might be different for Twitter, Facebook, Instagram, and TikTok, because of the possibility of monetizing a single strike across a whole huge social network, and there's maybe a bank-shot argument for Discord, but you really have to do a lot of work to generate the monetization story for any of those.

The vulnerabilities that command real dollars all have half-lives, and can't be fixed with a single cluster of prod deploys by the victims.

jijijijij

If a $500 drone is coming for your $100M factory, the price limit for defense considerations isn't $500.

In the end, you are trying to encourage people not to fuck with your shit, instead of playing economic games. Especially with a bunch of teenagers who wouldn't even be fully criminally liable for doing something funny. $4K isn't much today, even for a teenager. Thanks to stupid AI shit like Mintlify, that's like worth 2GB of RAM or something.

It's not just compensation, it's a gesture. And really bad PR.

tptacek

That's not how any of this works. A price for a vulnerability tracking the worst-case outcome of that vulnerability isn't a bounty or a market-clearing price; it's a shakedown fee. Meanwhile: the actual market-clearing price of an XSS vulnerability is very low (in most cases, it doesn't exist at all) because there aren't existing business processes those vulnerabilities drop seamlessly into; they're all situational and time-sensitive.

da_grift_shift

>Also not much excuse for xss these days.

XSS is not dead, and the web platform's mitigations (setHTML, Trusted Types) are not a panacea. CSP helps but is often configured poorly.

So, this kind of widespread XSS in a vulnerable third party component is indeed concerning.

For another example, there have been two reflected XSS vulns found in Anubis this year, putting any website that deploys it and doesn't patch at risk of JS execution on their origin.

Audit your third-party dependencies!

https://github.com/TecharoHQ/anubis/security/advisories/GHSA...

https://github.com/TecharoHQ/anubis/security/advisories/GHSA...
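On the CSP point above, response headers can limit the damage even when a scripted SVG slips through. One commonly suggested hardening for user-uploaded SVGs (a reasonable choice, not the only one) is to serve them with a restrictive policy and no content sniffing:

```http
Content-Type: image/svg+xml
X-Content-Type-Options: nosniff
Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
```

`default-src 'none'` blocks script execution and external loads on direct navigation, and the `sandbox` directive additionally puts the document in an opaque origin, so even a script that somehow runs cannot reach the site's cookies or storage.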

azemetre

Is it really fair to compare an open source project that desperately wants only $60k a year to hire a dev with companies that have collectively raised billions of dollars in funding?

rafram

I think it’s very fair. Anubis generated a lot of buzz in tech communities like this one, and developers pushed it to production without taking a serious look at what it’s doing on their server. It’s a very flawed piece of software that doesn’t even do a good job at the task it’s meant for (don’t forget that it doesn’t touch any request without “Mozilla” in the UA). If some security criticism gets people to uninstall it, good.

noirscape

I'd say it's probably worse in terms of scope. The audience for some AI-powered documentation platform will ultimately be fairly small (mostly corporations).

Anubis is promoting itself as a sort of Cloudflare-esque service to mitigate AI scraping. They also aren't just an open source project relying on gracious donations, there's a paid whitelabel version of the project.

If anything, Anubis should probably be held to a higher standard, given that many more vulnerable people rely on it than on big corporations — vulnerable in the sense that an XSS on their site could cause significant trouble, from having to fish their site out of spam filters to bandwidth exhaustion hitting their wallet. It's the same reason a bug in some random GitHub project has an impact of near zero, but a critical security bug in nginx means there's shit on the fan. When you write software that has a massive audience, you're going to be held to higher standards (if not legally, at least socially).

Not that Anubis' handling of this seems to be bad or anything; both XSS attacks were mitigated, but "won't somebody think of the poor FOSS project" isn't really the right answer here.

0xbadcafebee

How these companies don't hire kids like Daniel for pennies on the dollar and have him attack their stacks on a loop baffles me. Pay the kid $50k/yr (part time, he still needs to go to school) to constantly probe your crappy stacks. Within a year or two you'll have the most goddamn secure company on the internet - and no public vulns to embarrass you.

bink

It's not quite that simple. I don't think most bug bounty participants want a full-time job. But even more so, in my experience, they are not security generalists. You can hire one person who is good at finding obscure XSS vulns, another who's good at exploiting cloud privilege escalation in IAM role definitions, another who's good at shell or archive exploits. If you look at profiles on H1 you'll see most good hackers specialize in specific types of findings.

wiether

That's a bit simplistic.

If you sign a contract with a "hacker", then you are expecting results. Otherwise, how do you decide whether to renew the contract next year? How do you decide whether to raise their pay? What if, during the contract, a vulnerability that this individual didn't find gets exploited? Do you get rid of them?

So you're putting pressure on a person who is a researcher, not a producer. Which is wrong.

And there's also the question of scale. Sure, here you have one guy who exploited a vulnerability. But how long did it take them to get there? There are probably dozens of vulnerabilities yet to be exploited, requiring skills so different from this person's that they won't find them, even if you pay them for a full-time position.

Whereas, if you set up a bug bounty program, you are basically crowdsourcing your vulnerabilities: not only do you probably have thousands of people actively trying to exploit vulnerabilities in your system, but you also only give money to the ones that do manage to exploit one. You're only paying on results.

Obviously, if the reward is not big enough, they could be tempted to sell them to someone else or use them themselves. But the risk is here no matter how you decide to handle this topic.

sammy2255

They've already proven themselves competent. $50k a year is nothing to a billion-dollar company. Even if they find zero vulnerabilities a year, it's still worth it to them.

zwnow

While I would love that for the kid, I don't think these companies care about security at all.

Illniyar

Nice discovery and write-up. Let alone for a 16-year-old!

I've never heard an XSS vulnerability described as a supply-chain attack before, though; usually that term is reserved for malicious scripts in package managers or companies putting backdoors in hardware.

bink

I think that's misuse of the term as well, but like you said they are only 16.

dfbrown

Their collaborator's report includes a more significant issue, an RCE on a mintlify server: https://kibty.town/blog/mintlify/

orliesaurus

I've been following the rise of SVG based attacks recently... It's not just hypothetical anymore... People are using SVG files to deliver full phishing pages and drive-by downloads by hiding JavaScript in the markup

ALSO as someone who maintains a file upload pipeline I run every SVG through a sanitizer... Tools like DOMPurify remove scripts and enforce a safe subset of the spec... I even go as far as rasterizing user uploaded vectors to PNG when possible

HOWEVER the bigger issue is one of mindset... Most folks treat SVG like a dumb image while browsers treat it as executable content... Until the platform changes that expectation there will always be an attack surface

throwaway613745

Ok, I’m never opening an svg ever again.

Found by a 16 year old, what a legend.

bri3d

Proxying from the "hot" domain (with user credentials) to a third party service is always going to be an awful idea. Why not just CNAME Mintlify to dev-docs.discord.com or something?

This is also why an `app.` or even better `tenant.` subdomain is always a good idea; it limits the blast radius of mistakes like this.
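The suggestion above is to keep the third party off the credentialed origin entirely. As a sketch (hostnames invented for illustration), the vendor-hosted docs get their own subdomain, so host-only cookies on `app.example.com` never accompany requests to the docs, and any XSS there lands in a different origin:

```
; hypothetical DNS zone entries
docs.example.com.   300  IN  CNAME  hosting.docs-vendor.example.
app.example.com.    300  IN  A      203.0.113.10
```

Note that cookies set with `Domain=.example.com` would still be sent to `docs.example.com`, so scoping cookies to the exact app host matters as much as the CNAME itself.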

gkoberger

I run a product similar to Mintlify.

We've made different product decisions than them. We don't support this, nor do we request access to codebases for Git sync. Both are security issues waiting to happen, no matter how much customers want them.

The reason people want it, though, is for SEO: whether it's true or outdated voodoo, almost everyone believes having their documentation on a subdomain hurts the parent domain. Google says it's not true, SEO experts say it is.

I wish Mintlify the best here – it's stressful to let customers down like this.

omneity

To my knowledge it's not so much about hurting the parent domain as about having two separate "worlds": your docs, which are likely to receive higher traffic, stop contributing any SEO juice to your main website.

pverheggen

I think the reason companies do this for doc sites is so they can substitute your real credentials into code snippets in place of "YOUR_API_KEY". Seems like a poor tradeoff given the security downside.

multisport

decided to make a new account to post:

Mintlify security is the worst I have ever encountered in a modern SaaS company.

They will leak your data, code, assets, etc. They will know they did this. You will tell them, they will acknowledge that they knew it happened, and didn't tell you.

Your docs site will go down, and you will need to page their engineers to tell them it's down. This will be a surprise to them.

hinkley

It’s clear to me now that I need to set up my home machine the way I set up BYOD when I was contracting last. I need a separate account for all of my development.

I have a friend who at one point had five monitors and 2 computers (actually it might be 3) on his desk and maybe he’s the one doing it right. He keeps his personal stuff and his programming/work stuff completely separate.

bluetidepro

Slightly related: as someone who doesn't engage in this type of work, I'm curious about the potential risks associated with discovering, testing, and searching for security bugs. While it's undoubtedly positive that this individual ultimately acted responsibly and disclosed the information, what if they hadn't? Furthermore, on Discord's side, what if they were unaware of this person and encountered someone attempting to snoop on this information, mistakenly believing them to be up to no good? Have there been cases where the risk involved wasn't justified by the relatively low $4k reward? Or any specific companies you wouldn't want to do this with because of a past incident with them?

michaelt

If you engage in "white hat security research" on organisations that haven't agreed to it (such as by offering rules of engagement on a site like HackerOne), there is indeed a risk.

For example they might send the police to your door, who’ll tell you you’ve violated some 1980s computer security law.

I know 99.99% of cybercrime goes unpunished, but that’s because the attackers are hard to identify, and in distant foreign lands. As a white hat you’re identifiable and maybe in the same country, meaning it’s much easier to prosecute you.

pverheggen

> Furthermore, on Discord’s side, what if they were unaware of this person and encountered someone attempting to snoop on this information, mistakenly believing them to be up to no good?

Companies will create bug bounty programs where they set ground rules (like no social engineering), and have guides on how to identify yourself as an ethical hacker, for example:

https://discord.com/security

jijijijij

There are laws governing these scenarios. It's different everywhere. Portugal just updated theirs in favor of security researchers: https://www.bleepingcomputer.com/news/security/portugal-upda...