MCP server for Ghidra
75 comments
March 25, 2025 · randomtoast
airza
Current SOTA models are really bad at RE and I don't really expect this to improve through training on open data.
There are just not a lot of high quality examples on the internet, and more importantly the people writing this code are doing their best to make it actively more difficult.
sebzim4500
It is quite easy to produce high quality synthetic data to train reverse engineering. Just take any open source project and ask the model to produce the code (or something equivalent) given the binary.
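A minimal sketch of that pipeline in Python (tool names and flags are assumptions; `make_pair` expects `gcc` and `objdump` on PATH, and the address-stripping step is one way to keep the model from memorizing offsets instead of learning the instruction stream):

```python
import re
import subprocess
from pathlib import Path

def strip_addresses(disasm: str) -> str:
    """Drop addresses and raw byte columns from objdump -d output so the
    pair contains only the instruction stream, not memorizable offsets."""
    kept = []
    for line in disasm.splitlines():
        # Matches lines like "  401000: 55 48 89 e5    push   %rbp"
        m = re.match(r"\s*[0-9a-f]+:\s+(?:[0-9a-f]{2}\s)+\s*(.*)", line)
        if m and m.group(1):
            kept.append(m.group(1))
    return "\n".join(kept)

def make_pair(src: Path, workdir: Path, opt: str = "-O2") -> tuple[str, str]:
    """Compile one C file and disassemble it, yielding a
    (disassembly, source) training pair."""
    binary = workdir / (src.stem + ".bin")
    subprocess.run(["gcc", opt, "-o", str(binary), str(src)], check=True)
    disasm = subprocess.run(["objdump", "-d", str(binary)],
                            capture_output=True, text=True, check=True).stdout
    return strip_addresses(disasm), src.read_text()
```

Varying the optimization level (`-O0` through `-O3`) for the same source file is a cheap way to multiply the number of distinct pairs per project.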
ai-christianson
Right. You could even run it through code obfuscators and such to create more diverse, realistic examples.
gus_massa
You can't open-source code that is not yours. They are implementing a clean new version.
In the other direction, a company can't take a GPL project, decompile the code, and release it as proprietary.
randomtoast
> They are implementing a clean new version.
Much of reverse engineering involves analyzing existing code, and this is not a secret. There are forums where people discuss and share their reverse engineering findings. Without this, creating a nearly 100% compatible clone, such as one that can use the original game files, would be nearly impossible.
Xx_crazy420_xX
For LLMs to solve code, I think they should be AST-native. Code is a tree, not a sequence — yet we feed it to models linearly, with no explicit structure. Today's models lack recurrence or true memory, so they can't reason over hierarchical structures effectively.
Nesco
LLMs are autoregressive models. However, the notion of order in ASTs might be nonexistent, especially for parallel branches of computation/control flow. You could attempt to untangle each branch into N sequences, but this would erase control-flow information.
Even when there is an objective ordering of the children of every node, you still have four traversal options: {preorder, postorder} × {BF, DF}.
Note: For children lacking an objective ordering, you might apply generic rules to define a traversal order, but you’d end up with as many depth-first traversals as there are possible orders—essentially a crude heuristic. If you want the evaluation order to be dynamic at each step (e.g., using RL), the complexity grows geometrically worse. That’s been my experience tinkering with a custom AST DSL for ARC-AGI.
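The traversal options can be made concrete. A toy sketch over a tiny expression AST (the node shape is illustrative), showing three of the four linearizations; postorder-BF is the analogous reverse-level order:

```python
from collections import deque

def preorder_df(node):
    """Depth-first, parent before children."""
    yield node["op"]
    for child in node.get("args", []):
        yield from preorder_df(child)

def postorder_df(node):
    """Depth-first, children before parent (this is RPN for expressions)."""
    for child in node.get("args", []):
        yield from postorder_df(child)
    yield node["op"]

def preorder_bf(node):
    """Breadth-first, level by level."""
    queue = deque([node])
    while queue:
        n = queue.popleft()
        yield n["op"]
        queue.extend(n.get("args", []))

# (a + b) * c
ast = {"op": "*", "args": [
    {"op": "+", "args": [{"op": "a"}, {"op": "b"}]},
    {"op": "c"},
]}

print(list(preorder_df(ast)))   # ['*', '+', 'a', 'b', 'c']
print(list(postorder_df(ast)))  # ['a', 'b', '+', 'c', '*']
print(list(preorder_bf(ast)))   # ['*', '+', 'c', 'a', 'b']
```

Each linearization preserves different structure: the three sequences above are all valid encodings of the same tree, which is exactly the ambiguity a sequence model has to cope with.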
Xx_crazy420_xX
Cool to hear you've worked on ARC-AGI — I poked at it too. You're totally right about the messy traversal space, especially with parallel branches. What feels ambiguous at the token level becomes structured ambiguity in the AST — and that's progress.
My hunch is that LLMs don’t need to solve the whole traversal space — they just need a clean, abstract interface. Even parallel branches can be normalized into a schema that the model can reason over consistently. And in practice, you rarely need full recursion or a complete tree walk to understand a node — but having that option unlocks deeper comprehension when it counts.
This kind of structural understanding would also massively improve Copilot-style tools, especially for less popular libraries where token-level familiarity breaks down. If models could reason over types and structure instead of guessing based on frequency, completions would be a lot more reliable outside the top 1% of APIs.
dragonwriter
> LLMs are autoregressive models.
Most LLMs are autoregressive models, but exceptions exist, e.g., Mercury [0] is a diffusion LLM.
Nesco
Well, from my very limited understanding of diffusion models, they apply to fixed-length structures, mostly over a continuous space. Maybe a way could be found to make them work with tree structures, but that's no trivial task.
gnfargbl
Has there been much work on reversing binaries into an AST form? It seems like something that somebody would have thought of researching, but I've not come across any efforts.
Is it something you can do generically, or do you need to know the specific compiler? Do you need to know the specific language, even, or could you perhaps create some other hypothetical AST in a different language that would have led to the same binary?
lmeyerov
The graph part, more so than the AST part, makes sense to me. We reason over programs as hairy dataflow/controlflow/etc. dependency graphs that happen to originally be encoded as some sort of text->AST.
GNNs went down some roads here, but never felt like a path to reasoning. So how do you get an RL reasoner flow to do what is easy for Datalog, natively and/or as a tool?
pilooch
Or we could just forget about code and have the model act directly :) That's my bet.
otabdeveloper4
LLMs process information in a strictly sequential manner. It's their core capability and what makes them feel so anthropomorphic.
dragonwriter
> LLMs process information in a strictly sequential manner.
"LLMs" as a class do not. Most LLMs, because most LLMs are autoregressive models, but diffusion LLMs exist and are not sequential in the way that autoregressive models are.
> It's their core capability
Being sequential is not a capability at all, much less a core one defining Large Language Models.
> and what makes them feel so anthropomorphic.
I disagree with this, too; I think what makes LLMs "feel so anthropomorphic" is the fact that most humans are very focused on language in perceiving other humans as human, and LLMs' output (as their name suggests) models human use of language, directly targeting a key feature used to identify something as human-like.
otabdeveloper4
The gimmick of the LLM is that it outputs text sequentially, as if it is talking to us. That's what makes them feel "alive" and "intelligent" to us. (And yes, ironically it's this sequential nature that actually limits their intelligence in practice, but whatever. The AI hype is about appearances, not facts.)
mike_hearn
Not fully.
The point of transformer attention is cross-wise processing of tokens that computes their relationship to each other at multiple levels of abstraction. That's why LLMs can read so fast: they're processing all the input tokens in parallel.
LLMs emit tokens in a sequential manner at the level of the outer loop, but clearly inside the activations is a non-sequential map of the entire planned output, otherwise they wouldn't be able to make coherent sentences or speak German (which puts verbs at the end).
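That parallelism is visible in the attention computation itself. A toy single-head self-attention in NumPy (shapes and weights are illustrative, not from any real model): every position's relationship to every other position falls out of one matrix product, with no sequential scan over tokens.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8                      # 5 input tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d))      # token embeddings

# Projections (learned in a real model; random here)
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
q, k, v = x @ Wq, x @ Wk, x @ Wv

# All pairwise token relationships, computed at once:
scores = q @ k.T / np.sqrt(d)                    # (seq_len, seq_len)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax

out = weights @ v   # every position's output, all in parallel
print(weights.shape)  # (5, 5): each row mixes information from all tokens
```

Generation is sequential only because each *new* token's row gets appended to this computation; within a forward pass, nothing about the input is processed left-to-right.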
qwertox
Which tools can currently invoke MCP? I have read only a little about MCP and learned that Claude's desktop application is capable of using MCP locally.
Are there any chat interfaces which allow using MCP remotely?
I would like to be able to specify MCP endpoints and the functions they offer in ChatGPT's, Claude's and Gemini's web interfaces so that I can have them call my servers remotely. A bit like "GPTs" and "Gems".
lauriewired
I touch on this briefly in the video. Besides Claude Desktop, 5ire is a fairly model-agnostic local MCP client; I'm sure there are others.
sama also recently mentioned ChatGPT Desktop is getting MCP client functionality "soon".
As for remote clients, Cloudflare has some really useful tooling, look at their "AI Playground".
jauntywundrkind
OpenAI just announced support in their Agents SDK. https://news.ycombinator.com/item?id=43485566 https://openai.github.io/openai-agents-python/mcp/
electroly
I use them in Cursor. Writing an MCP server is trivial: just ask Cursor to put one together in TypeScript. You would use your local MCP server to call whatever remote API you want (or perform some other task). The MCP server uses stdin/stdout to talk to Cursor.
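Under the hood, the stdio transport is newline-delimited JSON-RPC 2.0. A sketch of the first messages a client like Cursor exchanges with a local server (method names follow the MCP spec; the tool name and client details are made up):

```python
import json

# Client -> server: the initialize handshake
initialize = {
    "jsonrpc": "2.0", "id": 1, "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "example-client", "version": "0.1"},
    },
}

# Client -> server: ask what tools the server offers
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}

# Client -> server: invoke one of them (tool name is hypothetical)
call = {
    "jsonrpc": "2.0", "id": 3, "method": "tools/call",
    "params": {"name": "fetch_weather", "arguments": {"city": "Berlin"}},
}

# Each message is written to the server's stdin as one JSON line;
# responses come back the same way on stdout.
for msg in (initialize, list_tools, call):
    print(json.dumps(msg))
```

This is why "MCP server" sounds heavier than it is: a local server is just a subprocess that reads JSON lines and writes JSON lines back.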
jevyjevjevs
I'm using Librechat which I've found to be quite feature complete. I updated an Obsidian MCP to get my most recent journal entries to act like a therapist. Example setup here: https://www.jevy.org/articles/obsidian-mcps-to-work-with-not...
dockerd
@jevyjevjevs, can you add an RSS feed to your blog? I found a few of the articles interesting and helpful. I would like to subscribe, but I don't see an RSS or email subscription option.
efunnekol
You can use MCP servers in SAM (Solace Agent Mesh). That has a chat interface and can be run remotely. Perhaps the easiest way to do it remotely is to use a Slack integration to SAM with a free Slack workspace, which doesn't require poking a hole to serve the browser UI
nekitamo
I had the same question as you, and some quick Googling led me to this list here:
lordviet
and the list of servers - https://github.com/punkpeye/awesome-mcp-servers
salgorithm
Block has an open source tool called Goose that invokes MCP. https://block.github.io/goose/
hedgehog
Is there a trick to making it work well? I tried Goose briefly but it seemed very flaky compared to Open Web UI with hand-configured tool calling.
fixprix
Unity, Blender and Photoshop all have rough MCP integrations available. You can find them on GitHub.
mdaniel
Her previous integration with Ghidra and an LLM had a good video, too: https://news.ycombinator.com/item?id=42860849
Malimite – iOS and macOS Decompiler - https://news.ycombinator.com/item?id=42829402 - Jan, 2025 (37 comments)
sorenjan
If you haven't watched her YouTube channel before, I recommend checking it out. Besides the technical content, I think the editing with retro OS graphics is fun.
foooorsyth
It's really impressive. Technical content, GitHub repos that go along with the videos, set design, retro editing -- much higher quality than a lot of stuff out there from major studios
npace12
Also one for radare2:
ngneer
Thought experiment. Suppose all binaries could be instantly reverse engineered to perfection. How would that change security?
LegionMammal978
Everyone would just replace all their proprietary programs with dumb clients that communicate with a server. Either that, or they'd go all in on homomorphic encryption.
ynniv
Only formally proven systems will be secure
gosub100
Secure enclaves would appear in most computers. Nothing would be run without everything being encrypted.
xeckr
Everything is open source if you speak assembly.
brokensegue
My experience with just copying and pasting things from Ghidra into LLMs and asking them to figure it out wasn't so successful. It'd be cool to have benchmarks for this stuff, though.
Everdred2dx
I actually have only tried this once but had the opposite experience. Gave it 5 or so related functions from a ps2 game and it correctly inferred they were related to graphics code, properly typing and naming the parameters. I’m sure this sort of thing is extremely hit or miss though
strstr
Had the same experience. Took the janky decompilation from Ghidra, and it was able to name parameters and functions. It even figured out the game based on a single name in a string. Based on my read of the labeled decompilation, it seemed largely correct. And definitely a lot faster than me.
Even if I weren’t to rely on it 100% it was definitely a great draft pass over the functions.
cedws
Most likely there was just a mangled symbol somewhere that it recognised from its training data.
rowanG077
Where is that coming from? The chances that some random PS2 game's code symbols are in the training data are infinitesimal. It's much more likely that it can understand code and rewrite it, which is basically what LLMs have been capable of for years now.
unit149
[dead]
rfoo
I've been thinking about how to build a benchmark for this stuff for a while, and I don't have a good idea other than LLM-as-judge (which quickly gets messy). I guess there's a reason why current neural decompilation attempts are all evaluated on "seemingly meaningless" benchmarks like "can it recompile without syntax errors" or "functional equivalence of recompilation" etc.
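For what it's worth, the "functional equivalence" criterion amounts to differential testing: run the original and the recompiled artifact on the same inputs and compare outputs. A toy sketch, with plain Python functions standing in for the two compiled binaries:

```python
import random

def original(x: int) -> int:
    """Stands in for the original binary's behavior."""
    return x * x + 3

def recompiled(x: int) -> int:
    """Stands in for the decompile->recompile result:
    syntactically different, behaviorally identical."""
    return x * (x + 0) + 3

def functionally_equivalent(f, g, trials: int = 1000) -> bool:
    """Probabilistic equivalence check over random inputs."""
    rng = random.Random(42)
    return all(f(v) == g(v)
               for v in (rng.randint(-10**6, 10**6) for _ in range(trials)))

print(functionally_equivalent(original, recompiled))  # True
```

The catch, and presumably why it feels "seemingly meaningless": this says nothing about whether the recovered source is *readable* or names things sensibly, which is the part an RE actually cares about and the part that seems to force LLM-as-judge.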
vessenes
Hmm, specifically when it comes to reverse engineering, you have the best benchmark ever - you can check the original code, no?
brokensegue
that requires LLM as judge
bitfieldz
[dead]
Everdred2dx
Is anyone working on a "catalog" of MCP servers? Searching on Github is not exactly the best way to discover these.
meander_water
I've noticed a lot of websites popping up recently which are basically just lists of MCP servers. Some examples:
- https://glama.ai/mcp/servers
- https://www.claudemcp.com/servers
Not to mention the usual GitHub ones:
- https://github.com/punkpeye/awesome-mcp-servers
The hype is real.
knowaveragejoe
To clarify somewhat: while they all index MCP servers out there, some of them will also _host_ the MCP server remotely. Glama, mcp.run, and just recently Cloudflare have offerings in this realm.
Klaster_1
Do these MCP registries expose an MCP server too, so that a client can do MCP server auto-discovery against the registry?
dSebastien
There are multiple directories already. I listed some in my notes: https://notes.dsebastien.net/30+Areas/33+Permanent+notes/33....
celesian
This is very cool, but it would be nice to have more features on the MCP server, such as arbitrary reads and writes of programs. For example, I was working on a self-unpacking CTF challenge which XORed instructions. It would be nice for it to be able to read the values at the addresses it XORed.
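That kind of read primitive would make the decode step mechanical. A sketch of XOR-decoding a range of instruction bytes that an arbitrary-read tool might hand back (the key and the example bytes are made up):

```python
def xor_decode(data: bytes, key: bytes) -> bytes:
    """Decode a self-unpacking stub's payload with a repeating XOR key.
    XOR is its own inverse, so the same function encodes and decodes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Simulate what a packed region might contain: a standard function
# prologue (push rbp; mov rbp, rsp) encoded under key 0xAA.
plain = b"\x55\x48\x89\xe5"
packed = xor_decode(plain, b"\xaa")

# What an arbitrary-read MCP tool would let the model do in place:
print(xor_decode(packed, b"\xaa").hex())  # "554889e5"
```

Paired with an arbitrary-write tool, the decoded bytes could be patched back into the program so Ghidra disassembles the real instructions instead of the packed ones.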
dang
Related (but merged hither):
GhidraMCP: Now AI can reverse malware [video] - https://news.ycombinator.com/item?id=43475025
userbinator
RE is exactly the sort of work that requires precision and careful reasoning, not hallucinatory statistical inference. Seeing how heavily LLMs stumble on the former makes it clear that AI will not replace us.
iugtmkbdfil834
I hate to be that guy, but one does not follow from the other. To some, just the initial appearance of 'acceptable'/'good enough' is, well, good enough. The current set of LLMs can absolutely replace us while breaking a lot in the process.
bitfieldz
[dead]
I hope that one day we'll have a tool that can convert any proprietary binary to source code with a single click. It would be so much fun to have an "open source" version of all games. Currently, there are projects like https://github.com/Try/OpenGothic and https://github.com/SFTtech/openage, but these require years of community effort.