
Decompiling 2024: A Year of Resurgance in Decompilation Research

benob

> If you’ve ever talked to me in person, you’d know that I’m a disbeliever of AI replacing decompilers any time soon

Decompilation, seen as a translation problem, is very much a job that suits AI methods. Give researchers time to gather enough mappings between source code and machine code, and to get used to training large predictive models, and you shall see top-notch decompilers that beat all engineered methods.

jcranmer

Yes and no.

My first priority for a decompiler is that the output is (mostly) correct. (I say mostly because there's lots of little niggling behavior you probably want to ignore, like representing a shift instruction as `a << b` over `a << (b & 0x1f)`). When the decompiler's output is incorrect, I can't trust it anymore, and I'm going to go straight back to the disassembly because I need to work with correct output. And AI systems--especially LLMs--are notoriously bad at the "correct" part of translation.
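
To make the shift example concrete, here's a minimal sketch (in Python, emulating x86's 32-bit SHL, where the hardware masks the shift count to its low 5 bits):

```python
def shl32(a: int, b: int) -> int:
    # x86's 32-bit SHL really computes a << (b & 0x1f) on 32-bit values,
    # while C's `a << b` is undefined behavior for b >= 32.
    return (a << (b & 0x1f)) & 0xFFFFFFFF

assert shl32(1, 33) == 2  # the hardware wraps the count; a naive 1 << 33 would not
```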

If you look at decompilation as a multistep problem, the main steps are a) identify the function/data symbol boundaries, b) lift the functions to IR, c) recover type information (including calling convention for functions), d) recover high-level control flow, and e) recover variable names.

For step b, correctness is so critical that I'm wary of even trusting hand-generated tables for disassembly, since it's way too easy for someone to miscopy something by hand. On the other hand, this is something that can be machine-generated in a way that is provably correct (see, e.g., https://cs.stanford.edu/people/eschkufz/docs/pldi_16.pdf). Sure, there's also a further step of recognizing higher-level patterns like a manually-implemented bswap, but that's basically "implement a peephole optimizer," and the state of the art for compilers these days is to use formally verifiable techniques for that.
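
To illustrate the kind of pattern such a peephole pass has to recognize, here's the classic hand-written byte swap, sketched in Python (a real pass would match this shape on the IR, not on source text):

```python
def manual_bswap32(x: int) -> int:
    # The shift-and-mask idiom a peephole pass should collapse into a
    # single BSWAP instruction / bswap intrinsic.
    return (((x >> 24) & 0x000000FF) |
            ((x >>  8) & 0x0000FF00) |
            ((x <<  8) & 0x00FF0000) |
            ((x << 24) & 0xFF000000))

assert manual_bswap32(0x11223344) == 0x44332211
```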

For a lot of the other problems, if you instead categorize them as things where the AI being wrong doesn't make it incorrect, AI can be a valuable tool. For example, control flow structuring can be envisioned as identifying which branches are gotos (including breaks/continues/early returns), since a CFG that has no gotos is pretty trivial to structure. So if your actual AI portion is a heuristic engine for working that out, it's never going to generate wrong code, just unnecessarily complicated code.
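
A minimal sketch of that framing (the function name, score interface, and threshold are made up for illustration, not any real decompiler's API):

```python
from typing import Callable, Hashable

Edge = tuple[Hashable, Hashable]

def split_cfg(edges: list[Edge],
              goto_score: Callable[[Edge], float],
              threshold: float = 0.5) -> tuple[list[Edge], list[Edge]]:
    # Edges the heuristic (rule-based or learned) flags become explicit
    # gotos; the rest form the goto-free region that structures trivially.
    # A bad score only produces uglier output, never incorrect output.
    gotos = [e for e in edges if goto_score(e) >= threshold]
    structured = [e for e in edges if goto_score(e) < threshold]
    return structured, gotos
```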

svilen_dobrev

> identifying which branches are gotos

Mmh. Yesterday I tried some LLM-augmented "analysis" on a 50-line C source file, a function with a few gotos in it. Somehow all the "explanations" were roughly correct, except it completely ignored the gotos. I was using deepseek-r1-...7b, ollama's default, which is probably too weak; but I don't believe other models would be 100% correct either.

sitkack

You are right on a lot of things, but LLMs are the best bijective lens that humanity has ever discovered. They can invert functions we didn't think were invertible.

If given a mostly correct transform from binary back to code, how would we fix that?

Exactly!

Heuristics are dead.

mahaloz

I agree with many other sentiments here: if it can replace decompilers, then surely it can replace compilers... which feels unlikely any time soon. So far, I've seen four end-to-end binary-to-code AI approaches, and none have had convincing results. Even those that crawled all of GitHub continue to have issues with fabricating code, not understanding math, omitting portions of code, and (a personal irritant of mine) being unable to map which address a line of decompilation came from.

However, I also acknowledge that AI can solve many pattern-based problems well. I think considerable value can be extracted from AI by focusing on micro-decisions in the decompilation process, like variable types, as recent work has.

wzdd

> Decompilation, seen as a translation problem, is by any means a job that suits AI methods.

Compilation is also a translation problem but I think many people would be leery of an LLM-based rust or clang -- perhaps simply because they're more familiar with the complexities involved in compilation than they are with those involved in decompilation.

(Not to say it won't eventually happen in some form.)

chrisco255

LLMs are not deterministic, and I want deterministic builds from source code to assembly. I also do not want the LLM to arbitrarily change the functionality, and I have no guarantee that it won't.

sitkack

Compilers aren't deterministic in the ways that people would think matter.

We will have LLM based compilers in the near future. Determinism is a property of the system, not the components.

__alexander

> Give time to researchers to gather enough mappings between source code and machine code, get used to training large predictive models, and you shall see top notch decompilers that beat all engineered methods.

Not anytime soon. There is more to a decompiler than converting assembly to language X. File parsers, disassemblers, type reconstruction, etc. all have to run before machine code can be turned into even the most basic decompiler output.

donatj

It's pattern matching, plain and simple, an area where AI excels. AI-driven decomp is absolutely on its way.

ChrisKnott

It's also perfect for RL because you can compile its output and check it against the input. It's a translation exercise where there's already a perfect machine translator in one direction.
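
A sketch of what that reward loop could look like (the compiler invocation and byte-level scoring are placeholders; a real setup would diff normalized disassembly rather than raw object bytes):

```python
import os
import subprocess
import tempfile

def round_trip_reward(original_obj: bytes, candidate_c: str) -> float:
    """Compile the model's decompilation and score it against the original
    machine code. Everything here is illustrative, not a real pipeline."""
    with tempfile.TemporaryDirectory() as d:
        src, obj = os.path.join(d, "cand.c"), os.path.join(d, "cand.o")
        with open(src, "w") as f:
            f.write(candidate_c)
        r = subprocess.run(["cc", "-c", "-O2", "-o", obj, src],
                           capture_output=True)
        if r.returncode != 0:
            return 0.0  # failing to compile gets zero reward
        with open(obj, "rb") as f:
            recompiled = f.read()
    # Crude byte-level similarity; real systems compare at the instruction level.
    matches = sum(a == b for a, b in zip(recompiled, original_obj))
    return matches / max(len(original_obj), 1)
```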

It probably just hasn't happened because decompilation is not a particularly useful thing for the vast majority of people.

dartos

Maybe in conjunction with a deterministic decompiler.

Precision wrt translation, especially when the translation is not 1-to-1, is not excellent with LLMs.

In fact, their lack of precision is what makes them so good at translating natural languages!

kachapopopow

Typical obfuscation, sure. VMs, obfuscation, and everything in between are just noise to AI.

mips_avatar

Decompilers aren’t just for security research; they’re a key part of compressing software updates. Delta compressors compute deltas between decompiled code, so an improvement in the mapping of decompiled files could yield as much as a 20x reduction in software update size.
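
A toy illustration of the idea, diffing decompiled text with Python's difflib (production delta tools like bsdiff or Chrome's Courgette work at the binary/disassembly level):

```python
import difflib

def decompiled_delta(old_src: str, new_src: str) -> str:
    # Diff decompiled text instead of raw bytes: compiler churn (register
    # allocation, layout, addresses) makes binary diffs blow up, while the
    # decompiled form is far more stable across versions.
    return "".join(difflib.unified_diff(
        old_src.splitlines(keepends=True),
        new_src.splitlines(keepends=True),
        fromfile="v1.c", tofile="v2.c"))
```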

mahaloz

I love this use case! Do you have any public links acknowledging/mentioning/showing this use case? Including it in the Applications portion of the Dec Wiki would be great.

loloquwowndueo

“Resurgence” not “resurgance”. I wanted to leave a comment in the article itself but it wants me to sign in with GitHub, which: yuk, so I’m commenting here instead.

mahaloz

Welp, that's a really sad typo... I've made a post-publication edit to the article now, but my shame is immortalized in Git: https://github.com/mahaloz/mahaloz.re/commit/90b760f53ef51b7...

sitkack

Proof you didn't use AI? :)