
RFC 9839 and Bad Unicode

86 comments · August 23, 2025

integralid

I'm not certain... On one hand I agree that some characters are problematic (or invalid) - like unpaired surrogates. But the worst case scenario is imo when people designing data structures and protocols start to feel the need to disallow arbitrary classes of characters, even properly escaped.

In the example, username validation is a job for another layer. For example, I want to make sure the username is shorter than 60 characters, has no emojis or zalgo text, and yes, no null bytes, and return a proper error from the API. I don't want my JSON parsing, on a completely different layer, to fail pre-validation.

And for usernames some classes are obviously bad, as explained. But what if I send text files that actually use those weird tabs? I expect things that work in my language's utf8 "string" type to be encodable. Even more importantly, I see plenty of use cases for the null byte, and it is in fact often seen in JSON in the wild.

On the other hand, if we have to use a restricted set of "normal" Unicode characters, having a standard feels useful - better than everyone creating their own mini standard. So I think I like the idea, just don't buy the argumentation or examples in the blog post.

Joker_vD

Seriously, please don't use C0 (except for LF and, I cede grudgingly, HT) and C1 characters in your plain text files. I understand that you may want to store some "ANSI coloring markup" (it's not "VT100 colors" — the VT series was monochrome until VT525 of 1994), sure, but it's then, arguably, not a plain text anymore, is it? It's in a text markup format of sorts, not unlike Markdown, only the one that uses a different encoding that dips into the C0 range. Just because your favourite output device can display it prettily when you cat your data into it doesn't really mean it's a plain text.

Yes, I do realize that there is a lot of text markup formats that encode into plain text, for better interoperability.

csande17

Yeah, I feel like the only really defensible choices you can make for string representation in a low-level wire protocol in 2025 are:

- "Unicode Scalars", aka "well-formed UTF-16", aka "the Python string type"

- "Potentially ill-formed UTF-16", aka "WTF-8", aka "the JavaScript string type"

- "Potentially ill-formed UTF-8", aka "an array of bytes", aka "the Go string type"

- Any of the above, plus "no U+0000", if you have to interface with a language/library that was designed before people knew what buffer overflow exploits were

mort96

> - "Potentially ill-formed UTF-16", aka "WTF-8", aka "the JavaScript string type"

I thought WTF-8 was just "UTF-8, but without the restriction against encoding unpaired surrogates"? Windows and Java and JavaScript all use "possibly ill-formed UTF-16" as their string type, not WTF-8.

mananaysiempre

WTF-8 is more or less the obvious thing to use when NT/Java/JavaScript-style WTF-16 needs to fit into a UTF-8-shaped hole. And yes, it’s UTF-8 except that you can encode surrogates, as long as those surrogates don’t form a valid pair (in that case, use the normal UTF-8 encoding of the code point designated by that pair).

(Some people instead encode each WTF-16 surrogate independently, regardless of whether it participates in a valid pair or not, yielding a UTF-8-like but UTF-8-incompatible-beyond-U+FFFF thing usually called CESU-8. We don’t talk about those people.)
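
A rough sketch of the difference in Python (the helper functions here are made up for illustration, and the 'surrogatepass' handler just stands in for "UTF-8-style encoding that tolerates surrogates"):

  def _enc_scalar(cp):
      # UTF-8-style encoding of a single code point, surrogates included.
      return chr(cp).encode('utf-8', 'surrogatepass')

  def wtf8(units):
      # Encode a sequence of UTF-16 code units the WTF-8 way.
      out, i = b'', 0
      while i < len(units):
          u = units[i]
          if 0xD800 <= u <= 0xDBFF and i + 1 < len(units) and 0xDC00 <= units[i + 1] <= 0xDFFF:
              # A valid high+low pair becomes one supplementary code point...
              out += _enc_scalar(0x10000 + ((u - 0xD800) << 10) + (units[i + 1] - 0xDC00))
              i += 2
          else:
              # ...while a lone surrogate is encoded as-is.
              out += _enc_scalar(u)
              i += 1
      return out

  def cesu8(units):
      # CESU-8 encodes every code unit independently, pairs included.
      return b''.join(_enc_scalar(u) for u in units)

  pair = [0xD83D, 0xDE00]     # a valid surrogate pair (U+1F600)
  print(wtf8(pair))           # b'\xf0\x9f\x98\x80' -- same bytes as UTF-8
  print(cesu8(pair))          # b'\xed\xa0\xbd\xed\xb8\x80' -- not valid UTF-8
  print(wtf8([0xD800]))       # b'\xed\xa0\x80' -- lone surrogate, allowed in WTF-8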

layer8

Also known as UCS-2: https://www.unicode.org/faq/utf_bom.html#utf16-11

Surrogate pairs were only added with Unicode 2.0 in 1996, at which point Windows NT and Java already existed. The fact that those continue to allow unpaired surrogate characters is in part due to backwards compatibility.

zahlman

I've always taken "WTF-8" to mean that someone had mistakenly interpreted UTF-8 data as being in Latin-1 (or some other code page) and UTF-8 encoded it again.

stuartjohnson12

> "WTF-8", aka "the JavaScript string type"

This sequence of characters is a work of art.

dcrazy

Why didn’t you include “Unicode Scalars”, aka “well-formed UTF-8”, aka “the Swift string type?”

Either way, I think the bitter lesson is a parser really can’t rely on the well-formedness of a Unicode string over the wire. Practically speaking, all wire formats are potentially ill-formed until parsed into a non-wire format (or rejected by same parser).

csande17

IMO if you care about surrogate code points being invalid, you're in "designing the system around UTF-16" territory conceptually -- even if you then send the bytes over the wire as UTF-8, or some more exotic/compressed format. Same as how "potentially ill-formed UTF-16" and WTF-8 have the same underlying model for what a string is.

layer8

There is no disagreement that what you can receive over the wire can be ill-formed. There is disagreement about what to reject when it is first parsed at a point where it is known that it should be representing a Unicode string.

alright2565

> "Unicode Scalars", aka "well-formed UTF-16", aka "the Python string type"

Can you elaborate more on this? I understood the Python string to be UTF-32, with optimizations where possible to reduce memory use.

csande17

I could be mistaken, but I think Python cares about making sure strings don't include any surrogate code points that can't be represented in UTF-16 -- even if you're encoding/decoding the string using some other encoding. (Possibly it still lets you construct such a string in memory, though? So there might be a philosophical dispute there.)

Like, the basic code points -> bytes in memory logic that underlies UTF-32, or UTF-8 for that matter, is perfectly capable of representing [U+D83D U+DE00] as a sequence distinct from [U+1F600]. But UTF-16 can't because the first sequence is a surrogate pair. So if your language applies the restriction that strings can't contain surrogate code points, it's basically emulating the UTF-16 worldview on top of whatever encoding it uses internally. The set of strings it supports is the same as the set of strings a language that does use well-formed UTF-16 supports, for the purposes of deciding what's allowed to be represented in a wire protocol.

zahlman

You're not wrong; I gave more detail in a direct reply https://news.ycombinator.com/item?id=44997146 .

zahlman

>"Unicode Scalars", aka "well-formed UTF-16", aka "the Python string type"

"the Python string type" is neither "UTF-16" nor "well-formed", and there are very deliberate design decisions behind this.

Since Python 3.3 with the introduction of https://peps.python.org/pep-0393/ , Python does not use anything that can be called "UTF-16" regardless of compilation options. (Before that, in Python 2.2 and up the behaviour was as in https://peps.python.org/pep-0261/ ; you could compile either a "narrow" version using proper UTF-16 with surrogate pairs, or a "wide" version using UTF-32.)

Instead, now every code point is represented as a separate storage element (as they would be in UTF-32) except that the allocated memory is dynamically chosen from 1/2/4 bytes per element as needed. (It furthermore sets a flag for 1-byte-per-element strings according to whether they are pure ASCII or if they have code points in the 128..255 range.)
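
A quick way to see that flexible storage in action (exact byte counts vary between CPython versions and builds, so treat the output as illustrative only):

  import sys

  # Per PEP 393, the per-character width is chosen from 1/2/4 bytes based on
  # the widest code point in the string; sys.getsizeof grows accordingly.
  for s in ['aaaa', 'aaa\u00ff', 'aaa\u0100', 'aaa\U0001F600']:
      print(f'{s!r:22} max U+{max(map(ord, s)):06X}  {sys.getsizeof(s)} bytes')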

Meanwhile, `str` can store surrogates even though Python doesn't use them normally; errors will occur at encoding time:

  >>> x = '\ud800\udc00'
  >>> x
  '\ud800\udc00'
  >>> print(x)
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  UnicodeEncodeError: 'utf-8' codec can't encode characters in position 0-1: surrogates not allowed
They're even disallowed for an explicit encode to utf-16:

  >>> x.encode('utf-16')
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  UnicodeEncodeError: 'utf-16' codec can't encode character '\ud800' in position 0: surrogates not allowed
But this can be overridden:

  >>> x.encode('utf-16-le', 'surrogatepass')
  b'\x00\xd8\x00\xdc'
Which subsequently allows for decoding that automatically interprets surrogate pairs:

  >>> y = x.encode('utf-16-le', 'surrogatepass').decode('utf-16-le')
  >>> y
  '𐀀'
  >>> len(y)
  1
  >>> ord(y)
  65536
Storing surrogates in `str` is used for smuggling in binary data. For example, the runtime does it so that it can try to interpret command line arguments as UTF-8 by default, but still allow arbitrary (non-null) bytes to be passed (since that's a thing on Linux):

  $ cat cmdline.py 
  #!/usr/bin/python
  
  import binascii, sys
  for arg in sys.argv[1:]:
      abytes = arg.encode(sys.stdin.encoding, 'surrogateescape')
      ahex = binascii.hexlify(abytes)
      print(ahex.decode('ascii'))
  $ ./cmdline.py foo
  666f6f
  $ ./cmdline.py 日本語
  e697a5e69cace8aa9e
  $ ./cmdline.py $'\x01\x00\x02'
  01
  $ ./cmdline.py $'\xff'
  ff
  $ ./cmdline.py ÿ
  c3bf
It does this by decoding with the same 'surrogateescape' error handler that the above diagnostic needs when re-encoding:

  >>> b'\xff'.decode('utf-8')
  Traceback (most recent call last):
    File "<stdin>", line 1, in <module>
  UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte
  >>> b'\xff'.decode('utf-8', 'surrogateescape')
  '\udcff'

TheRealPomax

I think you missed the part where the RFC is about which Unicode is bad for protocols and data formats, and so which Unicode you should avoid when designing those from now on, with an RFC to consult to know which ones those are. It has nothing to do with "what if I have a file with X" or "what if I want Y in usernames", it's about "what should I do if I want a normal, well behaved, unicode-text-based protocol or data format".

It's not about JSON, or the web, those are just example vehicles for the discussion. The RFC is completely agnostic about what thing the protocols or data formats are intended for, as long as they're text based, and specifically unicode text based.

So it sounds like you misread the blog post, and what you should do now is read the RFC. It's short. You can cruise through https://www.rfc-editor.org/rfc/rfc9839.html in a few minutes and see it's not actually about what you're focussing on.

CharlesW

> I like the idea, just don't buy the argumentation or examples in the blog post.

Which ones, and why? Tim and Paul collectively have around 100,000X the experience with this that most people do, so it'd be interesting to read substantive criticism.

It seems like you think this standard is JSON-specific?

doug_durham

I thought the question was pretty substantive. What layer in the code stack should make the decisions about what characters to allow? I had exactly the same question. If the library declares that it will filter out certain subsets then that allows me to choose a different library if needed. I would hate to have this RFC blindly implemented randomly just because it's a standard.

CharlesW

> What layer in the code stack should make the decisions about what characters to allow?

I was responding to the parent's empty sniping as gently as I could, but the answer to your (good) question has nothing to do with this RFC specifically. It's something that people doing sanitization/validation/serialization have had to learn.

The answer to your question is that you make decisions like this as a policy in your business layer/domain, and then you enforce it (consistently) in multiple places. For example, usernames might be limited to lowercase letters, numbers, and dashes so they're stable for identity and routing, while display names generally have fewer limitations so people can use accented characters or scripts from different languages. The rules live in the business/domain layer, and then you use libraries to enforce them everywhere (your API, your database, your UI, etc.).
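
As a sketch of that split (the specific rules below are just the example policy from this paragraph, not anything the RFC mandates):

  import re
  import unicodedata

  # Policy lives in the business/domain layer; these functions are what every
  # other layer (API, database constraints, UI) would consistently enforce.
  USERNAME_RE = re.compile(r'[a-z0-9-]{1,32}')

  def valid_username(s: str) -> bool:
      # Deliberately narrow: stable for identity and routing.
      return USERNAME_RE.fullmatch(s) is not None

  def valid_display_name(s: str) -> bool:
      # Much broader, but still rejects controls and surrogates
      # (a simplified take on what RFC 9839 calls problematic).
      return all(unicodedata.category(ch) not in ('Cc', 'Cs') for ch in s)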

vintermann

> What layer in the code stack should make the decisions about what characters to allow?

OK, but where does it get decided what even counts as a character? Should that be in the same layer? Even within a single system, there may be different sensible answers to that.

JimDabell

> PRECISion · You may find yourself wondering why the IETF waited until 2025 to provide help with Bad Unicode. It didn’t; here’s RFC 8264: PRECIS Framework: Preparation, Enforcement, and Comparison of Internationalized Strings in Application Protocols; the first PRECIS predecessor was published in 2002. 8264 is 43 pages long, containing a very thorough discussion of many more potential Bad Unicode issues than 9839 does.

I’d also suggest people check out the accompanying RFCs 8265 and 8266:

PRECIS Framework: Preparation, Enforcement, and Comparison of Internationalized Strings in Application Protocols:

https://www.rfc-editor.org/rfc/rfc8264

Preparation, Enforcement, and Comparison of Internationalized Strings Representing Usernames and Passwords:

https://www.rfc-editor.org/rfc/rfc8265

Preparation, Enforcement, and Comparison of Internationalized Strings Representing Nicknames:

https://www.rfc-editor.org/rfc/rfc8266

Generally speaking, you don’t want usernames being displayed that can change the text direction, or passwords that have different byte representations depending on the device that was used to type it in. These RFCs have specific profiles to avoid that.
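
For the "different byte representations" point, a small illustration (the PRECIS OpaqueString profile does more than this; the snippet only shows why normalization matters at all):

  import unicodedata

  # The same visible password typed on two devices: one sends precomposed
  # U+00E9, the other sends 'e' followed by U+0301 COMBINING ACUTE ACCENT.
  a = 'caf\u00e9'
  b = 'cafe\u0301'

  print(a == b)                            # False: different code points
  print(a.encode(), b.encode())            # different bytes on the wire
  print(unicodedata.normalize('NFC', a) ==
        unicodedata.normalize('NFC', b))   # True once both sides normalize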

I think for these kinds of purposes, failing closed is more secure than failing open. I’d rather disallow whatever the latest emoji to hit the streets is from usernames than potentially allow it to screw up every page that displays usernames.

Waterluvian

I’m frustrated by things like Unicode where it’s “good” except… you need to know to exclude some of them. Unicode feels like a wild jungle of complexity. An understandable consequence of trying to formalize so many ways to write language. But it really sucks to have to reason about some characters being special compared to others.

The only sanity I’ve found is to treat Unicode strings as if they’re some proprietary data unit format. You can accept them, store them, render them, and compare them with each other for (data, not semantic) equality. But you just don’t ever try to reason about their content. Heck I’m not even comfortable trying to concatenate them or anything like that.

csande17

Unicode really is an impossibly bottomless well of trivia and bad decisions. As another example, the article's RFC warns against allowing legacy ASCII control characters on the grounds that they can be confusing to display to humans, but says nothing about the Explicit Directional Overrides characters that https://www.unicode.org/reports/tr9/#Explicit_Directional_Ov... suggests should "be avoided wherever possible, because of security concerns".

weinzierl

I wouldn’t be so harsh. I think the Unicode Consortium not only started with good intentions but also did excellent work for the first decade or so.

I just think they got distracted when the problems got harder, and instead of tackling them head-on, they now waste a lot of their resources on busywork - good intentions notwithstanding. Sure, it’s more fun standardizing sparkling disco balls than dealing with real-world pain points. That OpenType is a good and powerful standard which masks some of Unicode’s shortcomings doesn’t really help.

It’s not too late, and I hope they will find their way back to their original mission and be braver in solving long-standing issues.

zahlman

A big part of the problem is that the reaction to early updates was so bad that they promised they would never un-assign or re-assign a code point ever again, making it impossible for them to actually correct any mistakes (not even typos in the official standard names given to characters).

The versioning is actually almost completely backwards by semver reasoning; 1.1 should have been 2.0, 2.0 should have been 3.0 and we should still be on 3.n now (since they have since kept the promise not to remove anything).

yk

I would. The original sin of Unicode is really their manifold idea: at that point they stopped trying to write a string standard and started writing a kind of general description of what string standards should look like, hoping that string standards which more or less conform to this description are interoperable, provided you remember which direction "string".decode() and "string".encode() go.

socalgal2

What could be better? Human languages are complex

estebank

The security concerns are those of "Trojan source", where the displayed text doesn't correspond to the bytes on the wire.[1]

I don't think a wire protocol should necessarily restrict them, for the sake of compatibility with the existing text corpus out there, but it's a fair observation.

1: https://trojansource.codes/

yencabulator

The enforcement is an app-level issue, depending on the semantics of the field. I agree it doesn't belong in the low-level transport protocol.

The rules for "username", "display name", "biography", "email address", "email body" and "contents of uploaded file with name foo.txt" are not all going to be the same.

arp242

I always thought you kind of need those directional control characters to correctly render bidi text? e.g. if you write something in Hebrew but include a Latin word/name (or the reverse).

dcrazy

This is the job of the Bidi Algorithm: https://www.unicode.org/reports/tr9/

Of course, this is an “annex”, not part of the core Unicode spec. So in situations where you can’t rely on the presentation layer’s (correct) implementation of the Bidi algorithm, you can fall back to directional override/embedding characters.

layer8

Read the parent’s link. The characters “to be avoided” are a particular special-purpose subset, not directional control characters in general.

Etheryte

As a simple example off the top of my head, if the first string ends in an orphaned emoji modifier and the second one starts with a modifiable emoji, you're already going to have trouble. It's only downhill from there with more exotic stuff.

kps

Unicode combining/modifying/joining characters should have been prefix rather than suffix/infix, in blocks by arity.

zahlman

They should have at least all used a single system. Instead, we have:

* European-style combining characters, as well as precomposed versions for some arbitrary subset of legal combinations, and nothing preventing you from stacking them arbitrarily (as in Zalgo text) or on illogical base characters (who knows what your font renderer will do if you ask to put a cedilla on a kanji? It might even work!)

* Jamo for Hangul that are three pseudo-characters representing the parts of a larger character, that have to be in order (and who knows what you're supposed to do with an invalid jamo sequence)

* Emoji that are produced by applying a "variation selector" to a normal character

* Emoji that are just single characters — including ones that used to be normal characters and were retconned to now require the variation selector to get the original appearance

* Some subset of emoji that can have a skin-tone modifier applied as a direct suffix

* Some other subset of emoji that are formed by combining other emoji, which requires a zero-width-joiner in between (because they'd also be valid separately), which might be rendered as the base components anyway if no joined glyph is available

* National flags that use a pair of abstract characters used to spell a country code; neither can be said to be the base vs the modifier (this lets them say that they never removed or changed the meaning of a "character" while still allowing for countries to change their country codes, national flags or existence status)

* Other flags that use a base flag character, followed by "tag letter" characters that were originally intended for a completely different purpose that never panned out; and also there was temporary disagreement about which base character should be used

* Other other flags that are vendor-specific but basically work like emoji with ZWJ sequences

And surely more that I've forgotten about or not learned about yet.
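
A few of those mechanisms, spelled out as Python literals (how they render depends entirely on the font and terminal, so the comments describe the intended result):

  # Variation selector: text-style heart vs. emoji-style heart.
  heart_text  = '\u2764'
  heart_emoji = '\u2764\uFE0F'

  # Skin-tone modifier applied as a direct suffix.
  thumbs_up_medium = '\U0001F44D\U0001F3FD'

  # ZWJ sequence: WOMAN + ZWJ + PERSONAL COMPUTER -> "woman technologist".
  woman_technologist = '\U0001F469\u200D\U0001F4BB'

  # National flag spelled with two regional-indicator letters ("JP").
  flag_japan = '\U0001F1EF\U0001F1F5'

  for s in (heart_text, heart_emoji, thumbs_up_medium, woman_technologist, flag_japan):
      print(len(s), s)   # note how many code points each "single" glyph takes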

layer8

One benefit of the suffix convention is that strings sort more usefully that way by default, without requiring special handling for those characters.

Unicode 1.0 also explains: “The convention used by the Unicode standard is consistent with the logical order of other non-spacing marks in Semitic and Indic scripts, the great majority of which follow the base characters with respect to which they are positioned. To avoid the complication of defining and implementing non-spacing marks on both sides of base characters, the Unicode standard specifies that all non-spacing marks must follow their base characters. This convention conforms to the way modern font technology handles the rendering of non-spacing graphical forms, so that mapping from character store to font rendering is simplified.”

eviks

Indeed, though a lot of that complexity, like surrogates and control codes, isn't due to attempts to write language; that's just awful design preserved for posterity.

ninkendo

It seems like most of these are handled by just rejecting invalid UTF-8 byte sequences (ideally, erroring out altogether) when interpreting a string as UTF-8. I mean, unpaired surrogates, or any surrogates for that matter, are already illegal as UTF-8 byte sequences. Any competent language that uses UTF-8 for strings should already be returning errors when given such sequences.
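
For instance, the raw bytes of a surrogate never survive a strict UTF-8 decode (Python shown here, but any strict decoder behaves the same way):

  # ED A0 BD is the UTF-8-style encoding of the lone surrogate U+D83D;
  # a strict decoder rejects it outright.
  bad = b'\xed\xa0\xbd'
  try:
      bad.decode('utf-8')
  except UnicodeDecodeError as e:
      print(e)   # strict decoding fails on the surrogate bytes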

The list of code points which are problematic (non-printing, etc.) is IMO much more useful and nontrivial. But it'd be useful to treat those as a separate concept from plain old illegal UTF-8 byte sequences.

doug_durham

That seems reasonable. It should be up to the application implementer to make that choice and not a lower-level, more general-purpose library. I haven't run into any JSON parsers that are used only for usernames.

ks2048

It's worth noting that Unicode already defines a "General Category" for all code points that categorizes some of these types of "weird" characters.

https://en.wikipedia.org/wiki/Unicode_character_property#Gen...

e.g. in Python,

   import unicodedata
   print(unicodedata.category(chr(0)))
   print(unicodedata.category(chr(0xdead)))
Shows "Cc" (control) and "Cs" (surrogate).

arp242

Excluding all of "legacy controls" not just as literals but also as escape sequences (e.g. "\u001B") seems too much. C1 is essentially unused AFAIK and that's okay, but a number of C0 characters do see real-world use (escape, EOF, NUL). IMHO there are valid and reasonable use cases to use some of them.
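
For reference, a rough sketch of what the RFC's strictest subset ("Unicode assignables") excludes (this is my reading of RFC 9839, not code taken from it):

  def rfc9839_problematic(cp: int) -> bool:
      # Rough check against RFC 9839's "Unicode assignables" subset:
      # surrogates, legacy controls (C0 minus tab/LF/CR, plus DEL and C1),
      # and noncharacters are excluded; everything else is allowed.
      if 0xD800 <= cp <= 0xDFFF:                       # surrogates
          return True
      if cp < 0x20 and cp not in (0x09, 0x0A, 0x0D):   # C0 controls minus tab/LF/CR
          return True
      if 0x7F <= cp <= 0x9F:                           # DEL and the C1 controls
          return True
      if 0xFDD0 <= cp <= 0xFDEF or (cp & 0xFFFF) >= 0xFFFE:  # noncharacters
          return True
      return False

  # NUL, ESC, NEL and U+FFFE are out; plain letters are fine.
  print([hex(cp) for cp in (0x00, 0x1B, 0x41, 0x85, 0xFFFE) if rfc9839_problematic(cp)])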

weinzierl

I think there should be a restriction in the standard on how many Unicode scalar values a graphical unit can have.

Last time I checked (a couple of years ago admittedly) there was no such restriction in the standard. There was however a recommendation to restrict a graphical unit to 128 bytes for "streaming applications".

Bringing this or at least a limit on the scalar units into the standard would make implementation and processing so much easier without restricting sensible applications.

djoldman

I don't understand how this helps.

Defining a subset of unicode to accept does not obviate the need to check that values conform to type definitions.

skybrian

Yes, that's true. It's for generic, low-level parsing code that doesn't know what a username is. There will also need to be field-specific validation.

yencabulator

I am torn on one decision: Whether to control inputs, or to wrap untrusted input in a datatype that displays it safely (web+log+debug).

o11c

I have had real-world programs broken by blind assumption of "does not deliberately contain controls" (form feed is particularly common for things intended to be paginated, escape is common for things designed for a terminal, etc.) and even "is fully UTF-8" (there are lots of old data files and logs that are never going away).

If you aren't doing something useful with the text, you're best off passing a byte-sequence through unchanged. Unfortunately, Microsoft Windows exists, so sometimes you have to pass `char16_t` sequences through instead.

The worst part about UTF-16 is that invalid UTF-16 is fundamentally different than invalid UTF-8. When translating between them (really: when transforming external data into an internal form for processing), the former can use WTF-8 whereas the latter can use Python-style surrogateescape, but you can't mix these.
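
One concrete way to see that the two escape schemes don't compose (a Python-only sketch):

  # A stray 0xFF byte smuggled into str via surrogateescape becomes U+DCFF.
  s = b'\xff'.decode('utf-8', 'surrogateescape')

  print(s.encode('utf-8', 'surrogatepass'))    # b'\xed\xb3\xbf' -- WTF-8-ish, not the original byte
  print(s.encode('utf-8', 'surrogateescape'))  # b'\xff' -- round-trips only with the same handler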

develatio

I was not able to understand why these code points are bad. The post states that they are bad, but why? Any examples? Any actual situations and PoCs that might help me understand how that will break "my code"?

orangeboats

Sometimes it's not just "your code". Strings are often interchanged and sent to many other parties.

And some of the codepoints, such as the surrogate codepoints (which MUST come in pairs in properly encoded UTF-16), may not break your code but break poorly-written spaghetti-ridden UTF-16-based hellholes that do not expect unpaired surrogates.

Something like:

1. You send a UTF-8 string containing normal characters and an unpaired surrogate: "Hello \uDEADworld" to FooApp.

2. FooApp converts the UTF-8 string to UTF-16 and saves it in a file. All without validation, so no crashes will actually occur; worst case scenario, the unpaired surrogate is rendered by the frontend as "�".

3. Next time, when it reads the file again, this time it is expecting normal UTF-16, and it crashes because of the unpaired surrogate.

(A more fatal failure mode of (3) is out-of-bounds memory read if the unpaired surrogate happens at the end of string)
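
A compressed version of that failure mode in Python (the file round-trip is elided; 'surrogatepass' plays the role of FooApp's missing validation in step 2):

  # Step 2: write without validation -- the lone surrogate slips through.
  s = 'Hello \udead world'
  data = s.encode('utf-16-le', 'surrogatepass')

  # Step 3: read it back expecting well-formed UTF-16 -- now it blows up.
  try:
      data.decode('utf-16-le')
  except UnicodeDecodeError as e:
      print(e)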

JimDabell

Suppose, when you were registering your username `develatio`, you decided to put U+202E RIGHT-TO-LEFT OVERRIDE in there as well. Now when somebody is reading this page and their browser gets to your username, it switches the text direction to render it right-to-left.

develatio

and "that's it"? I mean, it does sound like it might introduce unexpected UI behaviour, but are there any other more serious / dangerous consequences?

yencabulator

One of my pet peeves is when UIs don't clearly constrain and delineate the extent of user-controlled text. Plenty of phishing attacks have relied on having attacker-controlled input seem authoritative, e.g. getting gmail to repeat back something to the victim.

JimDabell

Making any page that mentions you – including admin pages that might be used to disable your account – become unreadable is bad enough.

Another comment linked to this:

https://trojansource.codes

nikolayasdf123

how does this compare to Go `unicode.IsPrint(r rune)`? https://pkg.go.dev/unicode#IsPrint

what bad/dangerous characters does this catch that `unicode.IsPrint` is not catching?

or, the other way around, what good/useful characters does `unicode.IsPrint` remove that this keeps?

mort96

I don't know all the details of the `unicode.IsPrint` function, but one major issue is: it's Go-specific. If you're defining a protocol, you probably don't want the spec to include text such as, "the username field must only contain Unicode code points which are considered printable by the Go programming language's 'unicode.IsPrint' function". You would rather want to write, "the username field must not contain Unicode code points which are considered problematic by RFC 9839".

nikolayasdf123

interesting. so Go seems to be using Unicode categories, which are part of the Unicode spec/standard. so it is fairly language-agnostic.

> IsPrint == .. categories L, M, N, P, S and the ASCII space character.

how does that compare to this standard (RFC 9839)? (don't mind that this is Go. just consider same unicode categories).