
Memory Safety for Web Fonts

131 comments · March 19, 2025

SquareWheel

I've recently been learning about how fonts render based on subpixel layouts in monitor panels. Windows assumes that all panels use RGB layout, and their ClearType software will render fonts with that assumption in mind. Unfortunately, this leads to visible text fringing on new display types, like the alternative stripe pattern used on WOLED monitors, or the triangular pattern used on QD-OLED.
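To make the RGB assumption concrete, here is a minimal sketch (illustrative Rust, not actual ClearType or Chrome code) of RGB-stripe subpixel rendering: the rasterizer samples glyph coverage at 3x horizontal resolution and drives each sample into one subpixel of an assumed R-G-B stripe.

```rust
// Hedged sketch: RGB-stripe subpixel rendering for black text on a
// white background. Each output pixel consumes three coverage samples,
// one per assumed subpixel in R-G-B order.
fn subpixel_row(samples: &[f32]) -> Vec<[f32; 3]> {
    samples
        .chunks(3)
        .map(|c| [1.0 - c[0], 1.0 - c[1], 1.0 - c[2]]) // dim each subpixel by its coverage
        .collect()
}

fn main() {
    // Coverage near a glyph edge: three horizontal samples per output pixel.
    let pixels = subpixel_row(&[1.0, 1.0, 0.5, 0.0, 0.0, 0.0]);
    // The edge pixel is not gray: R and G are fully dark while B is
    // half-lit. On a panel whose physical subpixels are not actually in
    // R-G-B order (WOLED, QD-OLED), this mismatch shows up as fringing.
    assert_eq!(pixels[0], [0.0, 0.0, 0.5]);
    println!("{pixels:?}");
}
```
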

Some third-party tools exist to tweak how ClearType works, like MacType[1] or Better ClearType Tuner[2]. Unfortunately, these tools don't work in Chrome/electron, which seems to implement its own font rendering. Reading this, I guess that's through FreeType.

I hope that as new panel technologies start becoming more prevalent, that somebody takes the initiative to help define a standard for communicating subpixel layouts from displays to the graphics layer, which text (or graphics) rendering engines can then make use of to improve type hinting. I do see some efforts in that area from Blur Busters[3] (the UFO Test guy), but still not much recognition from vendors.

Note I'm still learning about this topic, so please let me know if I'm mistaken about any points here.

[1] https://github.com/snowie2000/mactype

[2] https://github.com/bp2008/BetterClearTypeTuner

[3] https://github.com/microsoft/PowerToys/issues/25595

wkat4242

I'm pretty sure Windows dropped subpixel anti-aliasing a few years ago. When it did exist, there was a wizard to determine and set the subpixel layout.

Personally, I don't bother anymore anyway since I have a HiDPI display (about 200 DPI, 4K@24"). I think that's a better solution: simply have enough pixels to look smooth. It's what phones do too, of course.

ghusbands

To be clear: Windows still does subpixel rendering, and the wizard is still there. The wizard has not actually worked properly for at least a decade at this point, and subpixel rendering is always enabled, unless you use hacks or third-party programs.

layer8

DirectWrite doesn’t apply subpixel anti-aliasing by default [0], and an increasing number of applications use it, including Microsoft Office since 2013. One reason is the tablet mode starting with Windows 8, because subpixel ClearType only works in one orientation. Nowadays non-uniform subpixel layouts like OLED panels use are another reason.

[0] https://en.wikipedia.org/wiki/ClearType#ClearType_in_DirectW...

nine_k

I still have subpixel antialiasing on when using a 28" 4K display. It's the same DPI as a FHD 14" display, typical on laptops. Subpixel AA makes small fonts look significantly more readable.

But this only applies to Linux, where small fonts can be made to look crisp this way. Windows AA is worse; small fonts are a bit more blurred on the same screen. And macOS is the worst: connecting a 24" FHD screen to an MBP gives really horrible font rendering unless you make fonts really large. I suppose it's because macOS does not do subpixel AA at all and assumes high-DPI screens only.

cosmic_cheese

macOS dropped it a few years ago, primarily because there are no Macs with non-HiDPI displays any more (reducing benefit of subpixel AA) and to improve uniformity with iOS apps running on macOS via Catalyst (iOS has never supported subpixel AA, since it doesn’t play nice with frequently adjusted orientations).

Windows I believe still uses RGB subpixel AA, because OLED monitor users still need to tweak ClearType settings to make text not look bad.

grishka

> because there are no Macs with non-HiDPI displays any more

That is not true. Apple still sells Macs that don't come with a screen, namely Mac Mini, Mac Studio, and Mac Pro. People use these with non-HiDPI monitors they already own all the time.

hnuser123456

I didn't have to do any cleartype tuning on my LG C2. But maybe since it's a TV they have room for conventional subpixel layouts.

hnuser123456

It absolutely still does subpixel AA. Take a screenshot of any text and zoom way in, there's red and blue fringing. And the ClearType text tuner is still a functional builtin program in Win11 24H2.

perching_aix

It didn't. Some parts of the UI are using grayscale AA, some are on subpixel AA. And sometimes it's just a blur, to keep things fun I guess.

Pretty sure phones do grayscale AA.

Clamchop

As far as I'm aware, ClearType is still enabled by default in Windows.

Subpixel text rendering was removed from macOS some time ago, though, presumably because they decided it was not needed on retina screens. Maybe you're thinking of that?

tadfisher

The standard is EDID-DDDB, and subpixel layout is a major part of that specification. However, I believe display manufacturers are dropping the ball here.

https://glenwing.github.io/docs/VESA-EEDID-DDDB-1.pdf

kiicia

For me, as an old-time user, (ab)using subpixel layouts for text rendering and antialiasing is counterproductive and (especially at current pixel densities, but also in general) introduces many more issues than it ever solved.

"Whole-pixel/grayscale antialiasing" should be enough, and then a specialized display controller would handle the rest.

tadfisher

Agreed, but layouts such as Pentile don't actually have all three subpixel components in a logical pixel, so you'll still get artifacts even with grayscale AA. You can compensate for this by masking those missing components.

https://github.com/snowie2000/mactype/issues/932

kiicia

Surprising info, I thought this was supposed to be part of "the display controller taking care of any additional issues". Thanks for the link with details, will read it with interest.

zozbot234

Sub-pixel anti-aliasing requires outputting a pixel-perfect image to the screen, which is a challenge when you're also doing rendering on the GPU. You generally can't rely on any non-trivial part of the standard 3D rendering pipeline (except for simple blitting/compositing) and have to use the GPU's compute stack instead to meet those requirements. This adds quite a bit of complexity.

RKFADU_UOFCCLEL

Windows has always allowed you to change the subpixel layout; it's right there in the ClearType settings.

cosmic_cheese

I may be totally off the mark here, but my understanding is that the alternative pixel arrangements found in current WOLED and QD-OLED monitors are suboptimal in various ways (despite the otherwise excellent qualities of these displays) and so panel manufacturers are working towards OLED panels built with traditional RGB subpixel arrangements that don’t forfeit the benefits of current WOLED and QD-OLED tech.

That being the case, it may end up being that in the near future, alternative arrangements are abandoned and become one of the many quirky "stepping stone" technologies that litter display technology history. While it's still a good idea to support them better in software, that might put into context why there hasn't been more effort put into doing so.

TheRealPomax

Mandatory reading when getting into this topic: http://rastertragedy.com/

declan_roberts

> Merely keeping up with the stream of issues found by fuzzing costs Google at least 0.25 full time software engineers

I like this way of measuring extra work. Is this standard at Google?

pradn

Yes, a SWE-year is a common unit of cost.

And there are internal calculators that tell you how much CPU, memory, network etc a SWE-year gets you. Same for other internal units, like the cost of a particular DB.

This allows you to make time/resource tradeoffs. Spending half an engineer’s year to save 0.5 SWE-y of CPU is not a great ROI. But if you get 10 SWE-y out of it, it’s probably a great idea.

I personally have used it to argue that we shouldn’t spend 2 weeks of engineering time to save a TB of DB disk space. The cost of the disk comes to less than a SWE-hour per year!
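The arithmetic behind that argument can be sketched quickly. All dollar figures below are assumptions for illustration, not Google's internal rates.

```rust
// Back-of-the-envelope sketch of the time/resource tradeoff.
fn swe_hour_usd(swe_year_usd: f64) -> f64 {
    swe_year_usd / (52.0 * 40.0) // crude: 52 weeks x 40 hours
}

fn main() {
    let swe_year = 300_000.0; // assumed fully loaded cost of one SWE-year
    let per_hour = swe_hour_usd(swe_year); // roughly $144/hour
    let disk_tb_per_year = 20.0; // assumed annual cost of 1 TB of DB disk
    let project = 2.0 * 40.0 * per_hour; // two weeks of engineering time

    println!("project ${project:.0} vs saving ${disk_tb_per_year:.0}/yr");
    // The disk costs less than one SWE-hour per year, so a two-week
    // project to save it would take centuries to pay off.
    assert!(disk_tb_per_year < per_hour);
    assert!(project / disk_tb_per_year > 500.0);
}
```
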

jorvi

Note that this can lead to horrid economics for the user.

An example being Google unilaterally flipping on VP8/VP9 decode, which at that time purely decoded on the CPU or experimentally on the GPU.

It saved Google a few CPU cycles and some bandwidth but it nuked every user's CPU and battery. And the amount of energy YouTube consumed wholesale (so servers + clients) skyrocketed. Tragedy of the Commons.

crazysim

It also saved licensing costs to MPEG-LA too I guess.

nairb774

FTE. Full time equivalent. Mosts costs are denominated in FTE - headcount as well as things like CPU/memory/storage/...

The main economic unit for most engineers is FTE not $.

summerlight

Yep, kind of. It's preferred because whenever you propose/evaluate some change, you can get a rough idea of whether it was worth the time. Say you worked on some significant optimization: you measure it and justify it by showing the saving was 10 SWE-years where you put in just 1 SWE-quarter.

kiicia

Yes, all those well paid C-level managers cannot handle multiple units so they require everyone to use one “easy to understand unit so that everything is easy to compare and micromanage”

adam_gyroscope

Yep, often things are measured in FTE or FTE-equivalent units. It’s not precise of course but is a reasonable shorthand for the amount of work required.

whazor

I think this means the engineer fuzzes 4 projects?

Keyb0ardWarri0r

This is the true power of Rust that many are missing (like Microsoft with its TypeScript rewrite in Go): a gradual migration towards safety and the capability of being embedded in existing projects.

You don't have to do the Big Rewrite™, you can simply migrate components one by one instead.

TheCoreh

> like Microsoft with its TypeScript rewrite in Go

My understanding is that Microsoft chose Go precisely to avoid having to do a full rewrite. Of all the “modern” native/AoT compiled languages (Rust, Swift, Go, Zig) Go has the most straightforward 1:1 mapping in semantics with the original TypeScript/JavaScript, so that a tool-assisted translation of the whole codebase is feasible with bug-for-bug compatibility, and minimal support/utility code.

It would of course be _possible_ to port/translate it to any language (including Rust), but you would essentially end up implementing a small JavaScript runtime and GC, with few or none of the safety guarantees provided by Rust. (Rust's ownership model generally favors drastically different architectures.)

jeppester

As I understood their arguments it was not about the effort needed to rewrite the project.

It was about being able to have two codebases (old and new) that are so structurally similar that it won't be a big deal to keep updating both.

K0nserv

In a similar way Rust can be very useful for the hot path in programs written in Python, Ruby, etc. You don't have to throw out and rewrite everything, but because Rust can look like C you can use it easily anywhere C FFI is supported.
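A minimal sketch of that pattern: a Rust function exported with the C ABI, so any C-FFI-capable host (Python via ctypes, Ruby via FFI, etc.) can call it. The function name and signature here are illustrative, not from a real crate.

```rust
// Exporting a Rust function with C linkage. Compiled as a cdylib, this
// symbol is callable from anything that speaks the C ABI.
#[no_mangle]
pub extern "C" fn sum_squares(data: *const f64, len: usize) -> f64 {
    // SAFETY: the caller must pass a valid pointer to `len` f64 values.
    let slice = unsafe { std::slice::from_raw_parts(data, len) };
    slice.iter().map(|x| x * x).sum()
}

fn main() {
    // Exercised directly from Rust here; a Python caller would load the
    // compiled library and declare the same signature via ctypes.
    let v = [1.0, 2.0, 3.0];
    assert_eq!(sum_squares(v.as_ptr(), v.len()), 14.0);
    println!("sum of squares = {}", sum_squares(v.as_ptr(), v.len()));
}
```
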

steveklabnik

> like Microsoft with its TypeScript rewrite in Go

Go is also memory safe.

gpm

I'd argue technically not, due to data races on interface values, maps, slices, and strings... but close enough for almost all purposes.

PS. Note that unlike in most languages, a data race on something like an int in Go isn't undefined behavior, just non-deterministic and discouraged.

steveklabnik

Yes, these issues are real, but as you say, it's not really the same as UB. As such, Go is generally considered a MSL.

For anyone not familiar with this, see https://go.dev/ref/mem#restrictions

Incidentally, Java is very similar: https://docs.oracle.com/javase/specs/jls/se8/html/jls-17.htm...

Keyb0ardWarri0r

But can't be embedded in other projects as easily as Rust (FFI, WASM).

steveklabnik

I don't disagree, but "not as easily" is different than "cannot be."

GaggiX

Go can have data races, so I would not consider it memory safe.

K0nserv

I appreciate the pun in the repository name https://github.com/googlefonts/fontations/

tsuru

It looks like there is an extern C interface... I wonder if it is everything necessary for someone to use it via FFI.

steveklabnik

Given that it's being used in a large C++ codebase, I would assume it has everything needed to use it in that API.

pornel

They just need to rewrite the rest of Chrome to use the native Rust<>Rust interface.

(in reality Google is investing a lot of effort into automating the FFI layer to make it safer and less tedious)

sidcool

This is a wonderful write up. Reminiscent of the old google

jasonthorsness

If fresh code in Rust truly reduces the number or severity of CVEs in a massively tested and fuzzed C library like this one, it will be a major blow to the “carefully written and tested C/C++ is just as safe” perspective. We just need the resources and Rust performance to rewrite them all.

cbdumas

Despite mounting evidence this perspective remains shockingly common. There seems to be some effect where one might convince oneself that while others are constantly introducing high severity bugs related to memory un-safety, I always follow best practices and use good tooling so I don't have that issue. Of course evidence continues to build that no tooling or coding practice eliminates the risk here. I think what's going on is that as a programmer I can't truly be aware of how often I write bugs, because if I was I wouldn't write them in the first place.

jasonthorsness

I sort of have this perspective, though it's slowly changing… I think it comes from a fallacy: take a small 20-line function in C; it can be made bug-free and fully tested; a program is made of small functions; so why can’t the whole thing be bug-free? But somehow it doesn’t work like that in the real world.

woodruffw

> why can’t the whole thing be bug free? But somehow it doesn’t work like that in the real world.

It can be, if the composition is itself sound. That's a key part of Rust's value proposition: individually safe abstractions in Rust also compose safely.

The problem in C isn't that you can't write safe C but that the composition of individually safe C components is much harder to make safe.

graemep

There would be far fewer bugs if people actually stick to writing code like that.

I once had to reverse engineer (extract the spec from the code) a C++ function that was hundreds of lines long. I have had to fix a Python function over a thousand lines long.

I am sure the people who wrote that code will find ways to make life difficult with Rust too, but I cannot regret having one less sharp knife in their hands.

dkarl

> I think it comes from a fallacy of take a small 20-line function in C, it can be made bug-free and fully tested

It can't. You can make yourself 99.9...% confident that it's correct. Then you compose it with some other functions that you're 99.9...% about, and you now have a new function that you are slightly less confident in. And you compose that function with other functions to get a function that you're slightly less confident in. And you compose them all together to get a complete program that you wouldn't trust to take a dollar to the store to buy gum.
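That compounding is easy to quantify: if each small function is 99.9% likely to be correct, confidence in the composition decays geometrically with the number of functions. A quick sketch:

```rust
// Confidence in a program composed of n functions, each independently
// `per_function` likely to be correct (a simplifying assumption).
fn composed_confidence(per_function: f64, n: i32) -> f64 {
    per_function.powi(n)
}

fn main() {
    for n in [1, 100, 1000, 5000] {
        let c = composed_confidence(0.999, n);
        println!("{n:>5} functions -> {:.2}% confidence", c * 100.0);
    }
    // Roughly 90.5% at 100 functions, 36.8% at 1000, under 1% at 5000.
    assert!(composed_confidence(0.999, 5000) < 0.01);
}
```
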

twic

There's also a sort of dead sea effect at work. People who worry about introducing safety bugs use safe languages to protect themselves from that. Which means that the only people using C are people who don't worry about introducing safety bugs.

oergiR

FreeType was written when fonts were local, trusted resources, and it was written in low-level C to be fast. The TrueType/OpenType format is also made for fast access, e.g. with internal pointers, making validation a pain.

So though FreeType is carefully written with respect to correctness, it was not meant to deal with malicious input, and that robustness is hard to retrofit.

oefrha

If you think FreeType is bad, wait until you find out win32k.sys used to parse TrueType fonts directly in the kernel. https://msrc.microsoft.com/blog/2009/11/font-directory-entry... (that’s just one of a million vulnerabilities thanks to kernel mode font parsing.)

whizzter

That perspective is blown to those that see beyond themselves.

Or can admit themselves as fallible.

Or realize that even if they are near-infallible, unless they've studied the C _and_ C++ standards down to the finest detail, they will probably unintentionally produce code at some point that modern C++ compilers manage to make vulnerable in the name of undefined-behavior optimizations (see the recent HN article about how modern C/C++ compilers tend to turn fixed-time multiplications into variable-time ones vulnerable to timing attacks).

But of course, there are always tons of people who believe they are better than the "average schmuck" and never produce vulnerable code.

hgs3

Is the FreeType2 test suite public? It looks like the project's tests/ directory only contains a script for downloading a single font [1]. They have a fuzz test repo with what appears to be a corpus of only a few hundred tests [2]. For critical infrastructure, like FreeType, I'd expect hundreds of thousands if not millions of tests, just like SQLite [3].

[1] https://gitlab.freedesktop.org/freetype/freetype/

[2] https://github.com/freetype/freetype2-testing/

[3] https://www.sqlite.org/testing.html

dsp_person

A better comparison would be fresh code in Rust vs fresh code in C, rewriting in both languages following best practices, testing, and fuzzing. That's a big difference vs comparing to legacy C code being maintained by throwing 0.25 Google engineers at it to fix an ongoing stream of fuzzing issues.

IshKebab

There are no major blows required. The idea that carefully written C/C++ code is just as safe was never tenable in the first place, and obviously so.

GoblinSlayer

FreeType code looks chaotic, though; it was written for local, trusted fonts first. It was a typical Google move: "let's expose this random thing to the internet, we can add security in 2025".

kiicia

The real issue is the effort required to rewrite everything without introducing new bugs resulting from a misunderstanding of the original code

kevingadd

New (likely aesthetic only) bugs in font rendering are probably considered preferable to existing security vulnerabilities, I would hope.

kiicia

it's hard to argue that "occasional accent rendering issue" is better than "occasional zero-click issue", but rewriting code of any actual complexity is hard in practice... and we are talking about only one library here, with thousands libraries and utils to go

arccy

if you have too many aesthetic bugs, nobody will use it. it then becomes the most secure code because it doesn't run anywhere so can't be exploited.

kortilla

“Aesthetic only” bugs for a project entirely for aesthetics can easily kill the usage of it.

Nobody cares about a CVE-free rendering library if it can’t render things correctly.

fresh_broccoli

It's very annoying that Google's websites, including this blog, are automatically translated to my browser's preferred language.

Silicon Valley devs clearly believe that machine translation is a magical technology that completely solved internationalization, like in Star Trek. It's far from it. Ever since the internet was flooded with automatically translated garbage, the experience of international users, at least bilingual ones, has gotten much worse.

slyzmud

The worst offender to me is Google Maps. I'm a native Spanish speaker but set my phone to English because I hate app translations. The problem is when I want to read reviews it automatically translates them from Spanish (my native language) to English. It doesn't make any sense!

3form

Hey, at least it's preferred language. It's much worse when it bases it on the country that I'm in, which I can only reasonably influence with a VPN, and calling that reasonable is a stretch.

ch4s3

This is the worst. I was in France recently and tons of mobile websites were just suddenly in French. It was a real chore to read through them, and I can only imagine how frustrating this is when you can't read the language put in front of you at all.

ryandrake

This is always infuriating, because browsers send the Accept-Language header, which these sites just ignore.

spookie

It cuts in both ways.

I'm trying to learn the language of the country I now live in. And yet, Google thinks they know better than me, my preference, at the moment.

And this preference is quite circumstantial, mind you.

vaylian

It would be nice to have a browser setting that says: Use the original version of the text, if it is written in one of my preferred languages.

stevekemp

I keep setting all my preferences *everywhere* to English, yet 50% of the time Google search results are 100% Finnish. With a helpful "change to English" link that does not work.

Worse still, Google Maps will insist on only showing street names, area names, and directions in Swedish.

kccqzy

Strong agree. I had worked at Google for four years before discovering that the API documentation I meticulously wrote was machine translated to other languages. Only by perusing these translations did I realize some API concepts had been translated; I had to overuse the backtick notation for code to suppress these translations.

This is not just a Silicon Valley problem though; Redmond has similar issues if you use the newer parts of Windows in a non-English language.

kypro

Wouldn't the alternative be worse for most people?

If you're a global company it would be silly to assume all your readers speak/read a single language. AI translations (assuming that's what this is) are not perfect, but better than nothing.

I get how poor translation could be irritating for bilingual people who notice the quality difference though, but I guess you guys have the advantage of being able to switch the language.

3form

Excluded middle. It doesn't have to be automatic, it could well be a choice. Best of all worlds if you ask me.

cyberax

I have `en-US` set as the second preference language, so just show me the content in `en-US` unless you have a real human-vetted translation into my primary language.

GaggiX

Translation technology has gotten so much better in the meanwhile, I didn't even notice it at first.

nine_k

Look, a language that was conceived out of necessity to write a web browser in a safer way is being used just for that. It's a different, unrelated browser, but the language still reaches its design goals beautifully.

waynecochran

I get it, but switching to Rust places the codebase on an island I can't easily get to. I am already switching between C++, Swift, Kotlin, and Python on an almost daily basis.

MyOutfitIsVague

Was this a codebase you were working with regularly already? This project exposes a C FFI, so unless you were already working in the guts here, I don't think this should affect you terribly.

edit: I'm actually not seeing the C FFI, looking at the library. It must be there somewhere, because it's being used from a C++ codebase. Can somebody point to the extern C use? I'm finding this inside that `fontations` repository:

  > rg 'extern "'
  fontations/fauntlet/src/font/freetype.rs
  161:extern "C" fn ft_move_to(to: *const FT_Vector, user: *mut c_void) -> c_int {
  168:extern "C" fn ft_line_to(to: *const FT_Vector, user: *mut c_void) -> c_int {
  175:extern "C" fn ft_conic_to(
  187:extern "C" fn ft_cubic_to(
  >

drott

We're using https://cxx.rs/ to create the bindings and FFI interface. That's not provided with Fontations, but this bit is part of the Chromium and Skia integration. The code is here: https://source.chromium.org/chromium/chromium/src/+/main:thi...

jsheard

Chromium is >35 million lines of code, switching languages all in one go just isn't happening.

bobajeff

For me it was already hard to get into chromium's code base. It takes too long to build and there's just so much to understand about it before you can make changes.

It might help if there was some way to play around with the APIs without having to wait so long for it to build. But as far as i know that's not currently possible.

cl0ckt0wer

just get the AI to understand it for you /s

waynecochran

It is not a matter of understanding source code. It is matter of bridging, building, and linking another language with yet another binary layout and interface.

int0x29

It's an extern C interface. That's no different than dealing with a C library.

alberth

[flagged]

steveklabnik

Is anyone claiming that it is?

9rx

If anyone was, how would it remain a feeling?

alberth

I didn't say anyone was claiming it was.

Ono-Sendai

FreeType is a dinosaur that should be replaced regardless of Rust or not.

kiicia

It already is replaced by HarfBuzz https://harfbuzz.github.io

wiredfool

Harfbuzz is designed to run on top of the FreeType Renderer. https://harfbuzz.github.io/what-does-harfbuzz-do.html

kiicia

I stand corrected, thank you for follow up

TheRealPomax

The best libraries are libraries that have a clearly defined goal, hit that goal, and then don't change until the goalposts get moved. Something that happens only very slowly in font land.

Its age is completely irrelevant other than as demonstration that it did what it set out to do, and still performs that job as intended for people who actually work on, and with fonts.

Ono-Sendai

I don't think you understand how backwards it is. My patch that makes SDF rendering 4x faster was not accepted, presumably because floating-point maths is some strange new-fangled technology. FreeType uses antiquated fixed-point integer maths everywhere. See https://gitlab.freedesktop.org/freetype/freetype/-/merge_req...
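For readers unfamiliar with the fixed-point maths in question, here is a minimal Rust sketch of FreeType's documented 26.6 coordinate format ("F26Dot6": 26 integer bits, 6 fractional bits, so 1.0 is stored as 64). The multiply below is simplified and does not reproduce the rounding behavior of FreeType's own helpers.

```rust
// Sketch of 26.6 fixed-point arithmetic as used for FreeType pixel
// coordinates. One unit of the fractional part is 1/64 of a pixel.
type F26Dot6 = i32;

fn to_f26d6(x: f64) -> F26Dot6 {
    (x * 64.0).round() as F26Dot6
}

fn from_f26d6(x: F26Dot6) -> f64 {
    x as f64 / 64.0
}

fn mul_f26d6(a: F26Dot6, b: F26Dot6) -> F26Dot6 {
    // Widen to i64 so the intermediate product can't overflow, then
    // shift the extra 6 fractional bits back out (truncating).
    ((a as i64 * b as i64) >> 6) as F26Dot6
}

fn main() {
    let a = to_f26d6(1.5); // stored as 96
    let b = to_f26d6(2.25); // stored as 144
    assert_eq!(from_f26d6(mul_f26d6(a, b)), 3.375);
    println!("1.5 * 2.25 = {}", from_f26d6(mul_f26d6(a, b)));
}
```
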