Surface-Stable Fractal Dithering
32 comments
· January 23, 2025 · con____rad
shiandow
That does seem lovely, I might experiment with that at some point.
You don't need to make things all that difficult just to get some random-looking blue noise, though. You can get pretty reasonable randomized ordered dithering just by rotating the 2x2, 4x4, and 8x8 blocks of the Bayer matrix randomly. This makes more sense if you view it as randomly adding together octaves of the 2x2 Bayer matrix.
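A minimal sketch of that construction, assuming NumPy; the recursive per-block rotation is my reading of the comment, not the commenter's own code:

```python
import numpy as np

BAYER2 = np.array([[0, 2],
                   [3, 1]])

def randomized_bayer(level, rng):
    """Build a 2^level x 2^level ordered-dither threshold matrix by stacking
    octaves of the 2x2 Bayer matrix, giving every block an independent random
    90-degree rotation. The result is still a permutation of 0..4^level-1,
    so it remains a valid threshold matrix."""
    if level == 0:
        return np.zeros((1, 1), dtype=int)
    b = np.rot90(BAYER2, k=rng.integers(4))  # random orientation for this block
    size = 2 ** (level - 1)
    m = np.zeros((2 * size, 2 * size), dtype=int)
    for i in range(2):
        for j in range(2):
            sub = randomized_bayer(level - 1, rng)  # fresh randomness per sub-block
            m[i * size:(i + 1) * size, j * size:(j + 1) * size] = 4 * sub + b[i, j]
    return m

rng = np.random.default_rng(0)
m = randomized_bayer(3, rng)  # an 8x8 matrix containing each of 0..63 once
```

Each recursion level contributes one "octave": the 2x2 offsets pick which quadrant fires first, and the rotation scrambles that choice independently per block.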
aeontech
Wow, that's a very cool paper and demo!
egypturnash
Damn, the demo around 3:30 is lovely.
eru
Yes, but not as good as in the submitted post, alas. You can see how the points 'flicker' when they zoom in.
crazygringo
Link to part of video that shows it in action:
adamrezich
Technically, this is very cool. Aesthetically, though, the end result doesn't look very good, at least in my opinion. The Obra Dinn visual style tries to make the game look as though its visuals were prerendered 3D scenes dithered for an old 1-bit display. As the video explains, a lot of work went into striking a balance between the intended aesthetic and playability, because it turns out that the dithered aesthetic is difficult to work with. This, though, just ends up looking like a pseudo-halftone style applied to high-res polygonal models. Maybe it would look better at 320x240 or something?
pvg
Related discussion a couple of months ago https://news.ycombinator.com/item?id=42084080
Lots of related links including one to a tweetier version of this work.
JKCalhoun
I got the itch to try and create an Atkinson dithering paint program.
What do I mean by that? Imagine a grayscale paint program (like, say, Procreate, just all grays) but all the pixels go through an Atkinson dither before hitting the screen.
To the artist, it would feel like darkening or lightening the dither in areas of the canvas with your brush/eraser (kind of crowding or thinning the resulting B&W pixels).
An hour spent with Claude making this happen in HTML5 led me to set the experiment aside. It was okay, but it only applied the dither after the mouse was released. I wasn't driven enough to get it dithering in real time (as the brush is being stroked).
The mouse is a terrible painting tool, too; with a touch interface on an iPad (again, like Procreate) it might be worth pursuing further. It would need to be very performant, as I say, so that you could see the dither as the brush is moving. (This might require a special bitmap and code where you store away the diffusion error, so that you can update only the portion of the screen where the brush has moved rather than re-dithering the entire document at 60 fps.)
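For reference, the core Atkinson pass itself is small. A plain NumPy sketch (my own minimal version, not the commenter's HTML5 experiment):

```python
import numpy as np

# Atkinson's kernel pushes 1/8 of the quantization error to each of six
# neighbors, deliberately discarding the remaining 2/8 (which is part of
# what gives the dither its punchy, high-contrast look).
TAPS = [(0, 1), (0, 2), (1, -1), (1, 0), (1, 1), (2, 0)]

def atkinson_dither(gray):
    """gray: 2D float array in [0, 1]. Returns a 0/1 uint8 array."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            new = 1.0 if img[y, x] >= 0.5 else 0.0
            err = (img[y, x] - new) / 8.0
            out[y, x] = int(new)
            for dy, dx in TAPS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err
    return out
```

The sequential dependency is visible in the loop: every pixel's decision feeds pixels below and to the right, which is exactly why real-time incremental updates are awkward.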
cobalamin
I distinctly remember that "Paintbrush" in Windows 3.1 had something very similar to this. Check it out in the Win3.1 emulator: https://archive.org/details/win3_stock -- open Paintbrush, go to Options -> Image Attributes and set "Colors" to "Black and White".
However, the dithering there is not fixed to the background but depends on your brushstroke / mouse position.
itishappy
Error diffusion is not ideal for real-time work, and it cannot be parallelized. Consider storing a precomputed blue noise texture.
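A sketch of that threshold approach, assuming NumPy. The white-noise texture here is only a placeholder for a real precomputed blue noise texture, which would be baked offline (e.g. by void-and-cluster):

```python
import numpy as np

def threshold_dither(gray, texture):
    """gray: HxW floats in [0, 1]; texture: tileable threshold map in [0, 1).
    Every pixel is decided independently, so unlike error diffusion this
    maps directly onto a fragment shader or a vectorized batch op."""
    h, w = gray.shape
    th, tw = texture.shape
    # tile the threshold texture across the whole image
    tiled = np.tile(texture, (h // th + 1, w // tw + 1))[:h, :w]
    return (gray > tiled).astype(np.uint8)

# placeholder: white noise stands in for a real blue noise texture here
rng = np.random.default_rng(1)
texture = rng.random((64, 64))
```

Swapping the placeholder for an actual blue noise texture changes nothing in the code; only the spectral quality of the dot pattern improves.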
AndrewStephens
This is correct. Atkinson dithering looks cool (at least to my eyes) but is not the best choice for real-time work. In particular, it cannot be implemented in a shader, although you can approximate it. But computers are fast enough that a CPU-bound algorithm can still run at interactive speeds. I'm not sure how well it would work in an editor, though; in theory, changing a single pixel at the top left of the image could cause a chain reaction that forces the whole image to be re-dithered.
I did an implementation of Atkinson dithering for a web component in case anyone is feeling the itch to dither like it is 1985.
Demo: https://sheep.horse/2023/1/improved_web_component_for_pixel-...
Source: https://github.com/andrewstephens75/as-dithered-image
JKCalhoun
> in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
My thought is to store the error for each pixel in a separate channel. When a portion of the bitmap is "dirtied" you could start at the top left of the dirty rectangle and re-Atkinson until, once outside the dirty rect, you compute the same error as the existing error for a given pixel. From that point on the dither pattern would be identical, so you can stop.
As you say, it's conceivable you would have to go to the very end of the document. If the error is an integer, though, I feel like you would usually hit a point where you can stop early. Maybe I am misunderstanding how error diffusion works, or grossly misjudging how wildly mismatched the before/after errors would be.
crazygringo
> in theory changing a single pixel at the top left of the image could cause a chain reaction that could force the whole image to be re-dithered.
For a paint program, I think it would be acceptable if painting with the brush never changed existing pixels, only pixels newly painted with the brush, and you'd apply dithering only to newly added pixels in the brush stroke as the mouse dragged. The fact that you might be able to kinda see the discontinuities at the edge of the brush feels like it would be a feature, not a bug -- that you can see the brush strokes.
The really interesting effect would come when you implemented a dodge or burn tool...
londons_explore
I am pondering a different approach:
* Use error diffusion dithering in screen space
* Generate motion vectors for every pixel from the previous frame
Now, to make the next frame:
* Take the previously displayed (dithered) image and apply the motion vectors.
* Now use that as the threshold map to do error diffusion dithering on the next frame.
The threshold doesn't really matter for error diffusion dithering, since any error will be propagated to the next pixel. But if you use the previous frame as a threshold map, it discourages pixels from 'flickering' every frame.
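A sketch of that per-frame step, assuming NumPy. The motion-vector warp is taken as already applied, and the Floyd-Steinberg kernel and the 0.25 bias amount are my choices, not the commenter's:

```python
import numpy as np

# Floyd-Steinberg error weights (assumed; the comment doesn't pick a kernel)
FS = [(0, 1, 7 / 16), (1, -1, 3 / 16), (1, 0, 5 / 16), (1, 1, 1 / 16)]

def temporal_dither(frame, prev_warped):
    """frame: HxW floats in [0, 1]; prev_warped: last frame's 0/1 output,
    already re-projected with motion vectors. Biasing the threshold toward
    the previous result discourages flicker, while error diffusion still
    preserves average brightness because every threshold error is carried
    forward to neighboring pixels."""
    img = frame.astype(float).copy()
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            # 0.25 if the pixel was on last frame, 0.75 if it was off
            thresh = 0.75 - 0.5 * prev_warped[y, x]
            new = 1.0 if img[y, x] >= thresh else 0.0
            out[y, x] = int(new)
            err = img[y, x] - new
            for dy, dx, wgt in FS:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    img[ny, nx] += err * wgt
    return out
```

The hysteresis band (0.25 either side of 0.5) is the tunable part: wider means more temporal stability but slower response to brightness changes.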
jnurmine
As I watched the video I got an idea.
Could this somehow be repurposed such that the points would be "check points" for a generative texture algorithm, with zoom level somehow taken into account (distance of dots maybe)?
Then, in a computer game for example, one could look at a brick wall. At first, from further back, the surface of the tiles looks somewhat matte and smooth. But as one gets closer, the features become coarser and more detailed; eventually even tiny holes in the surface become visible, and so on.
Another example: sand, it looks smooth from afar but as one zooms in, actual grains of sand become visible.
djmips
That's level of detail (LOD), and there are various ways of implementing it. This dither technique incorporates LOD, but I'm not sure how it would be useful for the kind of LOD you're suggesting, unless you think it might be applicable as a procedural technique, in which case some of the observations here might be an inspiration.
simlevesque
That looks crazy good! I'm speechless.
Ono-Sendai
What is the point of this?
abetusk
To create a dithering effect for 3D scenes.
Doing a straight 2D dither on each frame results in noisy artifacts and isn't "stable" as the camera moves.
Ordered dithering produces a "screen door" artifact: a stable pattern fixed in screen space that sits over the moving scene.
Naively mapping dithering patterns as textures onto 3D objects again yields unstable patterns, since the dithering depends on the distance from the camera to the object and produces noise.
This is a proposed solution: it maps a dithering pattern as a texture that avoids the "screen door" effect and stays stable.
The video linked to in the repo [0] walks through experiments with, and shortcomings of, the approaches I just listed. This work is a response to challenges the developer of "Return of the Obra Dinn" ran into when trying to create a dithering effect in-game [1][2].
[0] https://www.youtube.com/watch?v=HPqGaIMVuLs&t=201s
[1] https://news.ycombinator.com/item?id=42084080
[2] https://forums.tigsource.com/index.php?topic=40832.msg136374...
Aardwolf
I do wonder if it would be possible to map this not to textures, but to pixels in screen space.
Right now it looks like high-res 3D-mapped textures with circular dots on them, rather than low-res screen-pixel dithering.
I remember seeing the video about the difficulties of dithering stability in "Return of the Obra Dinn".
The solution here renders it onto textures, but the dots still all have similar sizes in screen space, so mapping this to actual screen space might still be possible (as in, no circular dots on textures, but low-res pixels on screen being turned on/off directly)?
Retr0id
As far as I can tell, it's purely artistic.
kevingadd
If dithering isn't stable in motion it creates distracting 'shimmering' and other effects as the camera or objects move, which can be unpleasant to look at.
Unstable dithering is also potentially harder to compress in videos.
cma
If this is surface stable it should also work for stereo/VR without much stereo disparity, where normal dithering would have a mismatch between the eyes. And PCVR is often streamed over video codecs now, so what you said about video compression should help there too.
AndrewStephens
I was also wondering how well it would look in stereo. My guess is it would still look strange (the "depth map" would also appear dithered) but the effect would be interesting to experiment with.
Ono-Sendai
Yeah, but if your dithering is noticeable in the first place, you're doing it wrong.
Rohansi
Unless you want it as an artistic style.
Unordered dithering gives better form shading as there is no structure overlaying the shape,I would love to see "Recursive Wang Tiles for Real-Time Blue Noise ps://www.youtube.com/watch?v=ykACzjtR6rc" combined with that technique