
Mapping to the PICO-8 palette, perceptually

aquova

An interesting article, but it seems like quite an oversight to not even mention dithering techniques, which in my opinion give much better results.

I've done some development work in PICO-8, and some time ago I wrote a plugin for the Aseprite pixel art editor to convert an arbitrary image into the PICO-8 palette using Floyd-Steinberg dithering[0].

I ran their example image through it, and personally I think the results it gives were the best of the bunch https://imgur.com/a/O6YN8S2

[0] https://github.com/aquova/aseprite-scripts/blob/master/pico-...

greysonp

They don't explicitly state it in the article that I can see, but the PICO-8's screen is 128x128, and it appears that their output images were constrained to that. Your dithered images appear to be much higher resolution. I'd be curious what dithering would look like at 128x128!

lugarlugarlugar

I too thought about dithering while reading the article, but couldn't have imagined the result would be this much better. Thanks for sharing!

kibwen

Dithering is sort of like having the ability to "blend" any two colors of your palette (possibly even more than any two, if you use it well), so instead of working with a 16-color palette, it's like working with a 16+15+14+...+1 = 136-color palette. It's a drastic difference (at the cost of graininess, of course).
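That arithmetic checks out: 16 solid colors, plus one perceived "blend" per unordered pair of distinct colors. A quick illustrative sketch (not from the article):

```python
from itertools import combinations

PALETTE_SIZE = 16

# A 50/50 checkerboard dither of two distinct palette colors reads,
# at viewing distance, as one new intermediate color.
solids = PALETTE_SIZE
mixes = len(list(combinations(range(PALETTE_SIZE), 2)))  # C(16, 2) = 120
print(solids + mixes)  # 136, matching 16 + 15 + ... + 1
```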

smusamashah

Tried this online tool https://onlinetools.com/image/apply-dithering-to-image and Floyd and Atkinson both look great, Atkinson a bit better.

chrismorgan

Direct link to the image, though you may have to fetch it through a non-browser to avoid it redirecting back to the stupid HTML: https://i.imgur.com/y93naNw.png

(I don’t know how it works for others, but it has always been atrocious for me. Their server is over 200ms away, and even with uBlock Origin blocking ten different trackers it takes fully 35 seconds before it even begins to load the actual image, and the experience once it’s finished is significantly worse than just navigating directly to the image anyway. Tried it in Chromium a couple of times, 55 and 45 seconds. Seriously, imgur is so bad. Maybe it ain’t so bad in the USA, I don’t know, but in Australia and in India it’s appallingly bad. You used to be able to open the image URLs directly, but some years ago they started redirecting to the HTML in general if not loading as a subresource or maybe something about accept headers; curl will still get it directly.)

a_shovel

Something I've noticed from automatic palette mappings is that they tend to produce large blocks of gray that a human artist would never consider. You can see it in the water for most mappings in this sample, and even some grayish-brown grass for sRGB. It makes sense mathematically, since gray is the "average" color, and pixel art palettes are typically much more saturated than the average colors in a 24-bit RGB image. It looks ugly regardless.

CAM16-UCS looks the best because it avoids this. It gives us peach-and-pink water that matches the "feel" of the original image better. I wonder if it's designed to saturate the image to match the palette?
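The gray-block effect is easy to reproduce with a plain nearest-color lookup. A sketch using the standard PICO-8 palette values; the squared-sRGB-distance matching here is my assumption about what a naive mapper does, not the article's exact method:

```python
# Standard PICO-8 16-color palette as RGB triples.
PICO8 = [
    (0, 0, 0), (29, 43, 83), (126, 37, 83), (0, 135, 81),
    (171, 82, 54), (95, 87, 79), (194, 195, 199), (255, 241, 232),
    (255, 0, 77), (255, 163, 0), (255, 236, 39), (0, 228, 54),
    (41, 173, 255), (131, 118, 156), (255, 119, 168), (255, 204, 170),
]

def nearest(rgb):
    """Naive per-pixel mapping: closest palette entry by squared sRGB distance."""
    return min(PICO8, key=lambda p: sum((a - b) ** 2 for a, b in zip(rgb, p)))

# A muted ocean blue lands on the desaturated lavender-gray #83769C,
# not the saturated blue #29ADFF an artist would likely choose.
print(nearest((90, 110, 130)))  # (131, 118, 156)
```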

growingkittens

I notice that many palettes tend to follow the "traditional" color wheel strictly, without defining pink as a separate color on the main wheel.

WithinReason

I was looking forward to seeing a dithered [0] version but it was missing. In addition, shouldn't OKLAB already be perceptually uniform and not require luma weighting?

[0]: https://en.wikipedia.org/wiki/Floyd%E2%80%93Steinberg_dither...

Fraterkes

Kinda off-topic but for a while I’ve had an idea for a photography app where you’d take a picture, and then you could select a color in the picture and adjust it till it matched the color you see in reality. You could do that for a few colors and then eventually just map all the colors in the picture to be much closer to the perceived colors without having to do coarser post-processing.

Even if you got something very posterized like in the article I think it could at least be a great reference for a more traditional processing step afterwards. Always wonder why that doesn’t seem to exist yet.

latexr

Sounds like a lot of work for something which wouldn't produce that good of a result. If you've ever tried to take a colour from a picture with the eyedropper tool, you quickly realise that what you see as one colour is in fact a disparate collection of pixels, and it can be quite hard to get the exact one you want. So right there you hit the initial hurdle of finding and mapping the colour to change. Finding the edges would also be a problem.
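One partial mitigation for the "one colour is many pixels" problem is to eyedrop an averaged region rather than a single pixel. A toy sketch; the nested-list image format is just a stand-in for real image data:

```python
def region_average(img, cx, cy, radius=2):
    """Average RGB over a (2*radius+1)^2 window centered at (cx, cy),
    clamped to the image bounds, instead of sampling one noisy pixel."""
    h, w = len(img), len(img[0])
    acc = [0, 0, 0]
    n = 0
    for y in range(max(0, cy - radius), min(h, cy + radius + 1)):
        for x in range(max(0, cx - radius), min(w, cx + radius + 1)):
            for c in range(3):
                acc[c] += img[y][x][c]
            n += 1
    return tuple(a // n for a in acc)

# Four noisy "red" pixels average back toward the colour you actually perceive.
img = [[(200, 10, 10), (220, 30, 20)],
       [(210, 20, 30), (230, 20, 20)]]
print(region_average(img, 0, 0, radius=1))  # (215, 20, 20)
```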

Not to mention every screen is different, so whatever changes you’re doing, even if they looked right to you in the moment, would be useless when you sent your image to your computer for further processing.

Oh, and our eyes can perceive it differently too. So now you’re doing a ton of work to badly change the colours of an image so they look maybe a bit closer to reality for a single person on a single device.

Marazan

The main issue with any pixel-to-pixel colour mapping approach is that we don't perceive individual pixels, so pixel-to-pixel mapping will never give a good overall effect (the article touches on this by talking about structure, but you don't have to go that far to see massively improved results).

Any serious attempt would involve higher level dithering to better reproduce the colours of the original image and dithering is one of those topics that goes unexpectedly crazy deep if you are not familiar with the literature.