
Show HN: Tiny Diffusion – A character-level text diffusion model from scratch

6 comments · November 10, 2025

This is a character-level language diffusion model for text generation.

The model is a modified version of Nanochat's GPT implementation and is trained on Tiny Shakespeare!

It is only 10.7 million parameters, so you can try it out locally.

Majromax

The basic MLP block in this model uses a ReLU^2 activation function (x <- ReLU(x)^2). That seems to be copied from the nanochat project, and it's not present in nanoGPT. Is there some documentation on the choice of this activation function?
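For concreteness, the block in question looks roughly like this (a minimal PyTorch sketch of a squared-ReLU MLP; layer names are the usual GPT-style ones and may not match the repo exactly):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    """Feed-forward block with a squared-ReLU activation: x <- ReLU(x)^2."""
    def __init__(self, n_embd: int):
        super().__init__()
        self.c_fc = nn.Linear(n_embd, 4 * n_embd)
        self.c_proj = nn.Linear(4 * n_embd, n_embd)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.c_fc(x)
        x = F.relu(x).square()  # squared ReLU instead of GELU
        return self.c_proj(x)
```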

simonw

This is really neat.

I noticed the diffusion-process.py demo was using matplotlib in a window, but I figured it would be cute if it used a terminal UI instead - so I had Claude Code convert it to use curses. Code and demo GIF here: https://gist.github.com/simonw/9033ebd8dd17b4c0ad101ddda7a54...

yugretcx

Why do these text diffusion demos always look like the number of allowed tokens is fixed for a specific unfilled region?

Is this the case?

I.e., if the region only has four tokens (here, characters) but the model calculates that the best word is “forget”, does it just abandon the best fit or truncate it to fit?

Are there text diffusion models with lax infill directives?

nathan-barry

Yes, this is the case. During training, the model gets a sequence of text (e.g., 512 tokens long) with a percentage of the tokens masked out (replaced with a special <MASK> token). It learns how to unmask those tokens to reconstruct the original text.
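A minimal sketch of that training-time masking step (the function and names here are illustrative, not taken from the repo):

```python
import torch

def mask_batch(tokens: torch.Tensor, mask_id: int, mask_frac: float):
    """Replace a random fraction of positions with <MASK>; keep the
    originals as targets so the model learns to unmask them."""
    mask = torch.rand_like(tokens, dtype=torch.float) < mask_frac
    inputs = tokens.clone()
    inputs[mask] = mask_id
    targets = tokens.clone()
    targets[~mask] = -100  # default ignore_index: loss only on masked slots
    return inputs, targets
```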

In the case you mentioned, if we had 4 <MASK> tokens in a row, decoding simply predicts what those 4 tokens should be.

Generally, this does not seem to be a significant problem, as there are usually multiple ways to express an idea in varying lengths. Confidence-aware parallel decoding also helps: a well-trained model that commits to the highest-confidence tokens first will generally avoid the scenario you mentioned.
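Roughly, one confidence-aware decoding step looks like this (a sketch with an assumed model interface, not the repo's code):

```python
import torch

@torch.no_grad()
def decode_step(model, ids: torch.Tensor, mask_id: int, k: int = 8):
    """Predict all masked positions, then commit only the k most confident.
    Assumes at least k masked positions remain."""
    logits = model(ids)                     # (batch, seq, vocab)
    probs = logits.softmax(dim=-1)
    conf, pred = probs.max(dim=-1)          # per-position confidence & argmax
    masked = ids == mask_id
    conf = conf.masked_fill(~masked, -1.0)  # only consider masked slots
    top = conf.topk(k, dim=-1).indices      # k most confident positions
    return ids.scatter(1, top, pred.gather(1, top))
```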

rand0mwalk

Tokens start as a special [MASK] token. Then, as the diffusion process runs, they are "unmasked", i.e. sampled.

So yes, you define a sequence of [MASK] tokens with some length ahead of time.

In practice, if a model wants to write a shorter sequence, it'll just fill the remaining tokens with empty content. If it wants to write a longer sequence, you'll have to detect this and extend the sequence with more [MASK] tokens. This is typically obvious, since the model won't have produced an "end of sequence" token if it wants to generate more.
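That extension step could look something like this (illustrative sketch; the token names are assumptions):

```python
import torch

def maybe_extend(ids: torch.Tensor, mask_id: int, eos_id: int, extra: int = 64):
    """If no <EOS> was generated, append more <MASK> slots and keep diffusing."""
    if (ids == eos_id).any():
        return ids  # model signalled it is done
    pad = torch.full((ids.size(0), extra), mask_id,
                     dtype=ids.dtype, device=ids.device)
    return torch.cat([ids, pad], dim=1)
```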
