
Fast-DLLM: Training-Free Acceleration of Diffusion LLM

ProofHouse

Wait, everything I’ve read about Diffusion Language Models, and every demo I’ve seen and tried, suggests inference is faster than traditional architectures. This states the opposite. What gives?

gurtinator

That's because those demos probably use parallel decoding. In principle, dLLM inference is slower, since you have to run a bidirectional pass over the whole generation window for each diffusion step. Example: you unmask one token in a 128-token window per step, so it takes 128 diffusion steps to generate the full window.
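
A minimal sketch of that cost difference (names like model_forward are placeholders, not the paper's or any library's code; it only counts full-window passes, assuming one pass per diffusion step):

    import random

    WINDOW = 128
    MASK = None

    def model_forward(tokens):
        # Placeholder for a bidirectional pass over the whole window:
        # returns a (confidence, token) guess for every still-masked slot.
        return {i: (random.random(), random.randrange(50_000))
                for i, t in enumerate(tokens) if t is MASK}

    def decode(tokens_per_step):
        tokens = [MASK] * WINDOW
        passes = 0
        while MASK in tokens:
            guesses = model_forward(tokens)   # full pass each diffusion step
            passes += 1
            # Unmask the k most confident positions this step.
            best = sorted(guesses, key=lambda i: guesses[i][0],
                          reverse=True)[:tokens_per_step]
            for i in best:
                tokens[i] = guesses[i][1]
        return passes

    print(decode(tokens_per_step=1))   # 128 passes: one token per step
    print(decode(tokens_per_step=8))   # 16 passes: parallel decoding

With one token per step you pay 128 full bidirectional passes for a 128-token window; unmasking several tokens per step divides the pass count accordingly, which is why parallel-decoding demos look fast.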