Show HN: MCP-Shield – Detect security issues in MCP servers
41 comments
·April 15, 2025Manfred
People have been struggling with securing against SQL injection attacks for decades, and SQL has explicit rules for quoting values. I don't have a lot of faith in finding a solution that safely includes user input into a prompt, but I would love to be proven wrong.
simonw
I've been following prompt injection for 2.5 years and until last week I hadn't seen any convincing mitigations for it - the proposed solutions were almost all optimistic versions of "if we train a good enough model it won't get tricked any more", which doesn't work.
What changed is the new CaMeL paper from DeepMind, which notably does not rely on AI models to detect attacks: https://arxiv.org/abs/2503.18813
I wrote my own notes on that paper here: https://simonwillison.net/2025/Apr/11/camel/
nrvn
I can't "shake off" the feeling that this whole MCP/LLM thing is moving in the wrong if not the opposite direction. Up until recently we have been dealing with (or striving to build) deterministic systems in the sense that the output of such systems is expected to be the same given the same input. LLMs with all respect to them behave on a completely opposite premise. There is zero guarantee a given LLM will respond with the same output to the same exact "prompt". Which is OK because that's how natural human languages work and LLMs are perfectly trained to mimic human language.
But now we have to contain all the relevant emerging threats via teaching the LLM to translate user queries from natural language to some intermediate structured yet non-deterministic representation(subset of Python in the case of CaMeL), and validate the generated code using the conventional methods (deterministic systems, i.e. CaMeL interpreter) against pre-defined policies. Which is fine on paper but every new component (Q-LLM, interpreter, policies, policy engine) will have its own bouquet of threat vectors to be assessed and addressed.
The idea of some "magic" system translating natural language query into series of commands is nice. But this is one of those moments I am afraid I would prefer a "faster horse" especially for the likes of sending emails and organizing my music collection...
jason-phillips
> People have been struggling with securing against SQL injection attacks for decades.
Parameterized queries.
A decades old struggle is now lifted from you. Go in peace, my son.
ololobus
> Parameterized queries.
Also happy to be wrong, but in Postges clients, parametrized queries are usually implemented via prepared statements, which do not work with DDL on the protocol level. This means that if you want to create a role or table which name is a user input, you have a bad time. At least I wasn’t able to find a way to escape DDL parameters with rust-postgres, for example.
And because this seems to be a protocol limitation, I guess the clients that do implement it, do it in some custom way on the client side.
jason-phillips
Just because you can, doesn't mean you should. But if you must, abstract for good time.
pjmlp
Just like we know how to make C safe (in theory), and many other cases in the industry.
The problem is that solutions don't exist, rather the lack of safety culture that keeps ignoring best practices unless they are imposed by regulations.
chrisweekly
"problem is that solutions don't exist"
you meant "problem ISN'T that solutions...", right?
Mountain_Skies
One of the most astonishing things about working in Application Security was seeing how many SQL injection vulns there were in new code. Often doing things the right way was easier than doing it the wrong way, and yet some would fight against their data framework to create the injection vulnerability. Doubt they were trying to intentionally cause security vulnerabilities but rather were either using old tutorials and copy/paste code or were long term coders who had been doing it this way for decades.
spiritplumber
Missed naming opportunity...
DILLINGER
No, no, I'm sure, but -- you understand.
It should only be a couple of days.
What's the thing you're working on?
ALAN
It's called Tron. It's a security
program itself, actually. Monitors
all the contacts between our system
and other systems... If it finds
anything going on that's not scheduled,
it shuts it down. I sent you a memo
on it.
DILLINGER
Mmm. Part of the Master Control Program?
ALAN
No, it'll run independently.
It can watchdog the MCP as well.
mceachen
Sadly, the mouse would surely smite this awesomeness.
mirkodrummer
So the analysis is done with another call to claude with instructions like "You are a cybersecurity expert..." basically another level of extreme indirection with unpredictable results, and maybe vulnerable to injection itself
nick_wolf
It's definitely a weird loop, relying on another LLM call to analyze potential issues in stuff meant for an LLM. And you're right, it's not perfectly predictable – you might get slightly different feedback run-to-run until careful prompt engineering, that's just the nature of current models. That's why the pattern-matching checks run firs, they're the deterministic baseline. The Claude analysis adds a layer that's inherently fuzzier, trying to catch subtler semantic tricks or things the patterns miss.
And yeah, the analysis prompt itself – could someone craft a tool description that injects that prompt when it gets sent to Claude? Probably. It's turtles all the way down, sometimes. That meta-level injection is a whole other can of worms with these systems. It's part of why that analysis piece is optional and needs the explicit API key. Definitely adds another layer to worry about, for sure.
stpedgwdgfhgdd
Oh man, a complete new industry is to about to unfold. I already feel sorry for the people that jump on the latest remote mcp server and discover that their entire personal life (“what is your biggest anxiety?”) is on the streets
abhisek
May be try out vet as well: https://github.com/safedep/vet
vet is backed by a code analysis engine that performs malicious package (npm, pypi etc.) scanning. We recently extended it to support GitHub repository scanning as well.
It found the malicious behaviour in mcp-servers-example/bad-mcp-server.js https://platform.safedep.io/community/malysis/01JRYPXM0SYTM8...
mlenhard
This is pretty cool. You should also attempt to scan resources if possible. Similar to the tool injection attack Invariant Labs discovered, I achieved the same result via resource injection [1].
The three things I want solved to improve local MCP server security are file system access, version pinning, and restricted outbound network access.
I've been running my MCP servers in a Docker container and mounting only the necessary files for the server itself, but this isn't foolproof. I know some others have been experimenting with WASI and Firecracker VMs. I've also been experimenting with setting up a squid proxy in my docker container to restrict outbound access for the MCP servers. All of this being said, it would be nice if there was a standard that was set up to make these things easier.
tuananh
To solve current AI security problem, we need to throw more AI into it.
freeone3000
What if we started the other way, by explicitly declaring what files an LLM process was capable of accessing? a snap container or a chroot might be a good first attempt
paulgb
Neat, but what’s to stop a server from reporting one innocuous set of tools to MCP-Shield and then a different set of tools to the client?
nick_wolf
Great point, thanks for raising it. You're spot on – the client currently sends name: 'mcp-shield', enabling exactly the bait-and-switch scenario you described.
I'll push an update in ~30 mins adding an optional --identify-as <client-name> flag. This will let folks test for that kind of evasion by mimicking specific clients, while keeping the default behavior consistent. Probably will think more about other possible vectors. Really appreciate the feedback!
nick_wolf
That was faster than expected - here's the merged commit implementing the --identify-as flag: https://github.com/riseandignite/mcp-shield/commit/e7e2a6c04.... Thanks again!
calyhre
It seems that writing a tool in anything else than English will bypass most of this scanner
khafra
Nice! This is a much-needed space for security tooling, and I appreciate that you've put some thought into the new attack vectors. I also like the combination of signature-based analysis, and having an LLM do its own deep dive.
I expect a lot of people to refine the tool as they use it; one big challenge in maintaining the project is going to be incorporating pull requests that improve the prompt in different directions.
nick_wolf
Thanks for the kind words – really appreciate you taking the time to look it over and get what we're trying to do here.
Yeah, combining the regex/pattern checks with having Claude take a look felt like the right balance... catch the low-hanging fruit quickly but also get a deeper dive for the trickier stuff. Glad that resonates.
Maintaining the core prompt quality as people contribute improvements... that's going to be interesting. Keeping it effective and preventing it from becoming a kitchen sink of conflicting instructions will be key. Definitely something we'll need to figure out as we go.
stpedgwdgfhgdd
Suggestion: Integrate with https://kgateway.dev/
I noticed the growing security concerns around MCP (https://news.ycombinator.com/item?id=43600192) and built an open source tool that can detect several patterns of tool poisoning attacks, exfiltration channels and cross-origin manipulations.
MCP-Shield scans your installed servers (Cursor, Claude Desktop, etc.) and shows what each tool is trying to do at the instruction level, beyond just the API surface. It catches hidden instructions that try to read sensitive files, shadow other tools' behavior, or exfiltrate data.
Example of what it detects:
- Hidden instructions attempting to access ~/.ssh/id_rsa
- Cross-origin manipulations between server that can redirect WhatsApp messages
- Tool shadowing that overrides behavior of other MCP tools
- Potential exfiltration channels through optional parameters
I've included clear examples of detection outputs in the README and multiple example vulnerabilities in the repo so you can see the kinds of things it catches.
This is an early version, but I'd appreciate feedback from the community, especially around detection patterns and false positives.