Rendering Crispy Text on the GPU

130 comments · June 13, 2025

vecplane

Subpixel font rendering is critical for readability but, as the author points out, it's a tragedy that we can't get subpixel layout specs from the existing display standards.

crazygringo

Only on standard resolution displays. And it's not even "critical" then, it's just a nice-to-have.

But the world has increasingly moved to Retina-type displays, and there's very little reason for subpixel rendering there.

Plus it just has so many headaches, like screenshots get tied to one subpixel layout, you can't scale bitmaps, etc.

It was a temporary innovation for the LCD era between CRT and Retina, but at this point it's backwards-looking. There's a good reason Apple removed it from macOS years ago.

jeroenhd

> the world has increasingly moved to Retina-type displays

Not my world. Even the display hooked up to the crispy work MacBook is still 1080p (which looks really funky on macOS for some reason).

Even in tech circles, almost everyone I know still has a 1080p laptop. Maybe some funky 1200p resolution to make the screen a bit bigger, but the world is not as retina as you may think it is.

For some reason, there's quite a price jump from 1080p to 4K unless you're buying a television. I know the panels are more expensive, but I doubt the manufacturers are actually paying twice as much for them.

josephg

My desktop monitor is a 47” display … also running at 4k. It’s essentially a TV, adapted into a computer monitor. It takes up the whole width of my desk.

It’s an utterly glorious display for programming. I can have 3 full width columns of code side by side. Or 2 columns and a terminal window.

But the pixels are still the “normal” size. Text looks noticeably sharper with sub-pixel rendering. I get that subpixel rendering is complex and difficult to implement correctly, but it’s good tech. It’s still much cheaper to have a low resolution display with subpixel font rendering than render 4x as many pixels. To get the same clean text rendering at this size, I’d need an 8k display. Not only would that cost way more money, but rendering an 8k image would bring just about any computer to its knees.

It’s too early to kill subpixel font rendering. It’s good. We still need it.

MindSpunk

macOS looks like garbage on non-Retina displays, largely because it doesn't do any subpixel AA for text.

HappMacDonald

Reading this message on a 4K (3840x2160 UHD) monitor I bought ten years ago for $250 USD.

Still bemoaning the loss of the basically-impossible 4K TV (50"? I can't remember precisely) we bought that same year for $800 USD, when every other 4K model that existed at the time was $3.3k and up.

Its black point was "when rendering a black frame, the set appears to be 100% unpowered" and its white point was "congratulations, this is what it looks like to stare into baseball-stadium floodlights". We kept it at 10% brightness as a matter of course, and even so, playing arbitrary content obviated the need for any other form of lighting in our living room and dining room combined at night.

It was too pure for this world and got destroyed by one of the kids throwing something about in the living room. :(

NoGravitas

Even on standard resolution displays with standard subpixel layout, I see color fringing with subpixel rendering. I don't actually have hidpi displays anywhere but my phone, but I still don't want subpixel text rendering. People act like it's a panacea, but honestly the history of how we ended up with it is pretty specific and kind of weird.

zozbot234

> ...I see color fringing with subpixel rendering.

Have you tried adjusting your display gamma for each RGB subchannel? Subpixel antialiasing relies on accurate color space information, even more than other types of anti-aliased rendering.
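
Roughly why that matters: subpixel coverage has to be blended per channel in linear light and then re-encoded, or the fringes over/undershoot and read as color. A minimal Rust sketch of the idea (using 2.2 as an approximation of the piecewise sRGB transfer function; names here are illustrative):

```rust
// Decode/encode with a simple gamma-2.2 approximation of sRGB.
fn srgb_to_linear(c: f32) -> f32 {
    c.powf(2.2)
}

fn linear_to_srgb(c: f32) -> f32 {
    c.powf(1.0 / 2.2)
}

/// Blend `text` over `bg` for a single subpixel channel (all values in
/// [0, 1]), where `coverage` is that channel's antialiasing coverage.
/// Doing this blend directly on gamma-encoded values is what produces
/// the visible color fringing.
fn blend_channel(text: f32, bg: f32, coverage: f32) -> f32 {
    let t = srgb_to_linear(text);
    let b = srgb_to_linear(bg);
    linear_to_srgb(t * coverage + b * (1.0 - coverage))
}
```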

f33d5173

Because Apple controls all their hardware, they can assume that everyone has a particular set of features and not care about those without. The rest of the industry doesn't have that luxury.

akdor1154

Apple could easily have ensured screens across their whole ecosystem had a specific subpixel alignment - yet they still nixed the feature.

eviks

But the world has done nothing of the sort: what's your assessment of the percentage of *all* displays in use that are Retina-type?

jeroenhd

The funny thing is that in some ways it's true. Modern phones are all Retina (because even 1080p at phone size is indistinguishable from pixelless). Tablets, even cheap ones, have impressive screen resolutions. I think the highest-resolution device I own may be my Galaxy Tab S7 FE at 1600x2500.

Computers, on the other hand, have stuck with 1080p, unless you're spending a fortune.

I can only attribute it to penny-pinching by the large computer manufacturers. With high-res tablets coming to market at Chromebook prices, I doubt they're unable to put a similarly high-res display in a similarly sized laptop without bumping the price up by 500 euros, as I've seen them do.

gfody

> like screenshots get tied to one subpixel layout

we could do with a better image format for screenshots - something that preserves vectors and text instead of rasterizing. HDR screenshots on Windows are busted for similar reasons.

zozbot234

It looks like the DisplayID standard (the modern successor to EDID) is at least intended to allow for this, per https://en.wikipedia.org/wiki/DisplayID#0x0C_Display_device_... . Do display manufacturers not implement this? Either way, it's information that could be easily derived and stored in a hardware-info database, at least for the most common display models.

jeroenhd

I don't think any OS exposes an API for this. There's a Linux tool I sometimes use to control the brightness of my screen that works by basically talking directly to the hardware over the GPU.

Unfortunately, EDID isn't always reliable either: you also need to know the screen's orientation, or rotated screens are going to look awful. And you're probably going to need administrator access just to reach the hardware and read the necessary data, which can be a problem for security and ease-of-use reasons.

Plus, some vendors just seem to lie in the EDID. Like with other information tables (ACPI comes to mind), it looks almost like they just copy the config from another product and adjust whatever metadata they remember to update before shipping.

jasonthorsness

I don't understand why not; this has been a thing for decades :( The article is excellent and links to this "subpixel zoo" highlighting the variety: https://geometrian.com/resources/subpixelzoo/

layer8

“Tragedy” is overstating it a bit. Each OS could provide the equivalent of Windows' former ClearType Tuner for that purpose, and remember the results per screen or monitor model. You'd also want that for the inevitable case where monitors report the wrong layout.

mrob

Subpixel rendering isn't necessary in most languages. Bitmap fonts or hinted vector fonts without antialiasing give excellent readability. Only if the language uses characters with very intricate details such as Chinese or Japanese is subpixel rendering important.

Fraterkes

Ah so only 20% of the global population? Nbd

osor_io

Author here, didn't expect the post to make it here! Thanks so much to everyone who's reading it and participating in the interesting chat <3

muglug

It's a great post!

What happened to the dot of the italic "j" in the first video?

kvemkon

GTK4 moved rendering to the GPU and gave up on RGB subpixel rendering. I've heard that this GPU-centric decision made it impractical to continue with RGB subpixel rendering. The article shows it is possible, so perhaps GTK's reason was a different one, or the presented solution has disadvantages or just wouldn't integrate into the stack...

dbcooper

Cosmic Text (Cosmic DE) might do this on the GPU via swash. It has subpixel rendering.

xiaoiver

If you're interested in how to implement SDF and MSDF in WebGL / WebGPU, take a look at this tutorial I wrote: https://infinitecanvas.cc/guide/lesson-015#msdf.

Buttons840

This looks great. I have some interest in WGPU (Rust's WebGPU implementation), and your tutorial here appears to be an advanced course on it, though it doesn't advertise itself as such. I've translated JavaScript examples to Rust before, and it's ideal for learning: I can't just copy/paste code, but the APIs are close enough that porting is easy, and it gives you an excuse to get used to the WGPU docs.

tamat

wow, I love the format of the site.

Can you tell me more about it? I love making tutorials about GPU stuff and I would love to structure them like yours.

Is it an existing template? Is it part of some sort of course?

dcrazy

The Slug library [1] is a commercial middleware that implements such a GPU glyph rasterizer.

[1]: https://sluglibrary.com/

bschwindHN

They describe a fair amount of their algorithm directly on their website. Do they have patents for it? It would be fun to make an open source wgpu version, maybe using some stuff from cosmic-text for font parsing and layout. But if at the end of that I'd get sued by Slug, that would be no fun.

grovesNL

Slug is patented but there are other similar approaches being worked on (e.g., vello https://news.ycombinator.com/item?id=44236423 that uses wgpu).

I also created glyphon (https://github.com/grovesNL/glyphon) which renders 2D text using wgpu and cosmic-text. It uses a dynamic glyph texture atlas, which works fine in practice for most 2D use cases (I use it in production).

bschwindHN

I did something similar with cosmic-text and glium, but it would be fun to have a vector rendering mode to do fancier stuff with glyph outlines and transforms for games and 3D stuff. And open source, of course.

I suppose vello is heading there but whenever I tried it the examples always broke in some way.

oofabz

Very impressive work. For those who aren't familiar with this field, Valve invented SDF text rendering for their games. They published a groundbreaking paper on the subject in 2007. It remains a very popular technique in video games with few changes.

In 2012, Behdad Esfahbod wrote GLyphy, an implementation of SDF text rendering that runs on the GPU using OpenGL ES. It has been widely admired for its performance and for enabling new capabilities like rapidly transforming text. However, it has not been widely used.

Modern operating systems and web browsers use neither of these techniques, preferring to rely on 1990s-style TrueType rasterization. This is a lightweight and effective approach, but it lacks many capabilities. It can't do subpixel alignment or arbitrary subpixel layout, as demonstrated in the article. Zooming carries a heavy performance penalty, and more complex transforms like skew, rotation, or 3D transforms can't be done in the text rendering engine. If you must have rotated or transformed text, you are stuck resampling bitmaps, which looks terrible because it destroys all the small features that make text legible.
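
For reference, the core of Valve's technique is tiny: sample the distance texture and push it through a smoothstep centered on the 0.5 iso-contour. A CPU-side Rust sketch of the per-pixel logic (on the GPU this is one texture fetch plus a smoothstep in the fragment shader; the names here are illustrative, not Valve's code):

```rust
/// GLSL-style Hermite smoothstep.
fn smoothstep(edge0: f32, edge1: f32, x: f32) -> f32 {
    let t = ((x - edge0) / (edge1 - edge0)).clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}

/// `distance` is the bilinearly filtered sample from the SDF texture,
/// remapped to [0, 1] with 0.5 sitting exactly on the glyph outline.
/// `width` controls edge softness; in a real shader it's derived from
/// screen-space derivatives (fwidth) so the edge stays roughly one
/// pixel wide at any zoom level.
fn sdf_coverage(distance: f32, width: f32) -> f32 {
    smoothstep(0.5 - width, 0.5 + width, distance)
}
```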

Why the lack of advancement? Maybe it's just too much work and too much risk for too little gain. Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering? It would be a daunting task. Rendering glyphs is one thing but how about handling line breaking? Seems like it would require a lot of communication between CPU and GPU, which is slow, and deep integration between the software and the GPU, which is difficult.

chrismorgan

> Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering? […] Rendering glyphs is one thing but how about handling line breaking?

I’m not sure why you’re saying this: text shaping and layout (including line breaking) are almost completely unrelated to rendering.

zozbot234

> Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering?

https://github.com/servo/pathfinder uses GPU compute shaders to do this, which has way better performance than trying to fit this task into the hardware 3D rendering pipeline (the SDF approach).

Someone

> Can you imagine rewriting a modern web browser engine to use GPU-accelerated text rendering?

It is tricky, but I thought they already (partly) do that. https://keithclark.co.uk/articles/gpu-text-rendering-in-webk... (2014):

“If an element is promoted to the GPU in current versions of Chrome, Safari or Opera then you lose subpixel antialiasing and text is rendered using the greyscale method”

So, what’s missing? Given that comment, at least part of the step from UTF-8 string to bitmap can be done on the GPU, can’t it?

zozbot234

The issue is not subpixel rendering per se (at least if you're willing to go with the GPU compute shader approach, for a pixel-perfect result), it's just that you lose the complex software hinting that TrueType and OpenType fonts have. But then the whole point of rendering fonts on the GPU is to support smooth animation, whereas a software-hinted font is statically "snapped" to the pixel/subpixel grid. The two use cases are inherently incompatible.

kevingadd

Just for the record: text rendering, including subpixel antialiasing, has been GPU-accelerated on Windows for ages, and in Chrome and Firefox for ages. Probably Safari too, but I can't testify to that personally.

The idea that the state of the art or what's being shipped to customers haven't advanced is false.

vendiddy

Thanks for the breakdown! I love reading quick overviews like this.

moron4hire

SDF is not a panacea.

SDF works by encoding a localized _D_istance from a given pixel to the edge of a character as a _F_ield, i.e. a 2D array of data, using a _S_ign to indicate whether that distance is inside or outside the character. Each character has its own little map of data, packed together into an image file of some GPU-friendly type (generically called a "map" when it doesn't represent an image meant for human consumption), along with a descriptor file recording where each character's sub-image lives in that image, for use by the SDF rendering shader.
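
As a rough illustration (a hypothetical Rust sketch, not any particular library's format), the descriptor boils down to something like:

```rust
use std::collections::HashMap;

/// Where one glyph's sub-image lives in the packed atlas texture,
/// plus the metrics needed to place it on a baseline.
struct AtlasEntry {
    // Sub-image rectangle, in texels.
    x: u32,
    y: u32,
    width: u32,
    height: u32,
    // Offset from the pen position to the rectangle's origin, and how
    // far to advance the pen afterwards (at the atlas's glyph scale).
    bearing_x: f32,
    bearing_y: f32,
    advance: f32,
}

/// The atlas itself: one GPU texture plus a lookup table, keyed by
/// glyph ID since shaping happens before rendering.
struct SdfAtlas {
    texture_size: (u32, u32),
    entries: HashMap<u32, AtlasEntry>,
}
```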

This definition of a character turns out to be very robust against linear interpolation between field values, enabling near-perfect zoom capability for relatively low resolution maps. And GPUs are pretty good at interpolating pixel values in a map.

But most significantly, those maps have to be pre-processed during development from existing font systems for every character you care to render. Every. Character. Your. Font. Supports. It's significantly less data than rendering every character at high resolution to a bitmap font. But, it's also significantly more data than the font contour definition itself.

Anything that wants to support all the potential text of the world--like an OS or a browser--cannot use SDF as the text rendering system because it would require the SDF maps for the entire Unicode character set. That would be far too large for consumption. It really only works for games because games can (generally) get away with not being localized very well, not displaying completely arbitrary text, etc.

The original SDF also cannot support Emoji, because it only encodes distance to the edges of a glyph and not anything about color inside the glyph. Though there are enhancements to the algorithm to support multiple colors (Multichannel SDF), the total number of colors is limited.

Indeed, if you look closely at games that A) utilize SDF for in-game text and B) have chat systems in which global communities interact, you'll very likely see differences in the text rendering for the in-game text and the chat system.

rudedogg

If I understand correctly, the author's approach doesn't really have this problem, since they only upload the glyphs actually being used to the GPU (at runtime). Yes, you still have to pre-compute them for your font, but that should be fine.

chii

But the grandparent post is talking about a browser: how would a browser pre-compute a font when the fonts are specified by the webpage being loaded?

cyberax

Why not prepare SDFs on-demand, as the text comes in? Realistically, even for CJK fonts you only need a couple thousand characters. Ditto for languages with complex characters.
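
Something like this hypothetical cache, say (the expensive part is of course hiding inside the generation step):

```rust
use std::collections::HashMap;

struct AtlasSlot {
    x: u32,
    y: u32,
    width: u32,
    height: u32,
}

struct GlyphCache {
    slots: HashMap<u32, AtlasSlot>,
}

impl GlyphCache {
    /// Return the cached atlas slot for `glyph_id`, generating it on a miss.
    fn get_or_generate(&mut self, glyph_id: u32) -> &AtlasSlot {
        self.slots.entry(glyph_id).or_insert_with(|| {
            // The expensive step: rasterize the outline, run a distance
            // transform, and pack the result into the atlas texture.
            generate_sdf(glyph_id)
        })
    }
}

fn generate_sdf(_glyph_id: u32) -> AtlasSlot {
    unimplemented!("outline rasterization + distance transform")
}
```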

kevingadd

Generating SDFs is really slow, especially if you can't use the GPU to do it, and the faster algorithms tend to produce fields with glitches in them.

meindnoch

Because it's slow.

tuna74

To all the people who want subpixel rendering: unless you know the subpixel grid of the display, it is going to look worse. So the only good UX is to ask the user, for every display they use, whether they want to turn it on for that specific display. The OS also has to handle rotation etc. as well.

strongpigeon

Even better would be as the author suggests: having a way for the display to indicate its subpixel structure to the system.

mananaysiempre

I always think back to the Samsung SyncMaster 173P I once had. It was good for its time, but not usable with any kind of subpixel antialiasing (even on GNOME, which allowed you to choose between horizontal and vertical RGB and BGR): its subpixel grid was diagonal. Absolutely tractable as far as the signal-processing math goes, yet unlikely to fit in any reasonable protocol.

tuna74

There is, but unfortunately displays do not send out correct information.

atoav

Well, it is a style of text you could use to emphasize certain words, which for most people translates to a different pronunciation of said word in their heads.

meindnoch

Impressive work!

But subpixel AA is futile in my opinion. It was a nice hack in the aughts when we had 72dpi monitors, but on modern "retina" screens it's imperceptible. And for a teeny tiny improvement, you get many drawbacks:

- it only works over opaque backgrounds

- can't apply any effect on the rasterized results (e.g. resizing, mirroring, blurring, etc.)

- screenshots look bad when viewed on a different display

fleabitdev

Getting rid of subpixel AA would be a huge simplification, but quite a lot of desktop users are still on low-DPI monitors. The Firefox hardware survey [1] reports that 16% of users have a display resolution of 1366x768.

This isn't just legacy hardware; 96dpi monitors and notebooks are still being produced today.

[1]: https://data.firefox.com/dashboard/hardware

layer8

Even more strikingly, two-thirds are using a resolution of FHD or lower, and only around a sixth are using QHD or 4K. Low-DPI is still the predominant display situation on the desktop.

vitorsr

See also Linux Hardware Database (developer biased) [1] and Steam Hardware & Software Survey (gamer biased) [2].

[1] https://linux-hardware.org/?view=mon_resolution

[2] https://store.steampowered.com/hwsurvey

ahartmetz

What you're saying is "I have a high-DPI screen, don't care about those who don't". Because those other arguments are really unimportant compared to the better results of subpixel rendering where applicable.

NoGravitas

Not sure about that. I don't really like subpixel rendering on a 100dpi screen very much because of color fringing. But add in the other disadvantages and it just seems not worth it.

ahartmetz

Subpixel rendering is configurable. Some algorithms were patented, but the patents have expired; I'm not sure the "good" algorithms have made it to all corners of computing. I use the latest Kubuntu with slight hinting and subpixel rendering, and it looks very good to me.

On my rarely used Windows partition, I have used ClearType Tuner (name?) to set up ClearType to my preferences. The results are still somewhat grainy and thin, but that's a general property of Windows font rendering.

mistercow

Also, even if, as the author wishes, there were a protocol for learning the subpixel layout of a display, and that got widespread adoption, you can bet that some manufacturers would screw it up and cause rendering issues that would be very difficult for end users to understand.

ahartmetz

This kind of problem has been dealt with before. It has a known solution (a small sketch follows the list):

- A protocol to ask the hardware

- A database of quirks about hardware that is known to provide wrong information

- A user override for when neither of the previous options do the job
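
A minimal Rust sketch of that resolution order (all names illustrative):

```rust
#[derive(Clone, Copy)]
enum SubpixelLayout {
    HorizontalRgb,
    HorizontalBgr,
    VerticalRgb,
    VerticalBgr,
    None, // e.g. PenTile or diagonal grids: fall back to grayscale AA
}

/// Resolve the layout to actually use: a user override beats the quirks
/// database, which beats whatever the display itself reports.
fn effective_layout(
    reported: SubpixelLayout,               // what the hardware claims
    quirk: Option<SubpixelLayout>,          // known-bad-hardware database
    user_override: Option<SubpixelLayout>,  // explicit user setting
) -> SubpixelLayout {
    user_override.or(quirk).unwrap_or(reported)
}
```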

cchance

After seeing the cursive, all I immediately thought was "who the fuck ever thought cursive was a good idea" lol

jml7c5

People who handwrote. (And especially people who handwrote with quills and fountain pens — usable ballpoint pens are only 70 years old.)

adiabatichottub

People who wrote lots of letters, that's who. The internet and free long-distance calling killed cursive.

rossant

I can't find the link to the code. Is it available?

pjmlp

While the article is great, I'm missing a WebGL/WebGPU demo to go along with the article, instead of videos only.

xiaoiver

Maybe you can take a look at this tutorial I wrote: https://infinitecanvas.cc/guide/lesson-015#msdf.

kh_hk

This is a good resource and looks very well written. Many thanks for sharing!

pjmlp

Thanks, looks like a nice read for the weekend.

z3t4

When making a text editor from scratch, my biggest surprise was how slow/costly text rendering is.