
The best – but not good – way to limit string length

wavemode

In the age of unicode (and modern computing in general), all of this is more headache than it's worth. What is actually important is that you limit the size of an HTTP request to your server (perhaps making some exceptions for file upload endpoints). As long as the user's form entries fit within that, let them do what they want.
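
For instance, with a Node.js/Express server (a minimal sketch; the specific limits and the /upload route are illustrative, not from the comment):

  const express = require("express");
  const app = express();

  // Reject JSON bodies over ~100 KB; the parser checks Content-Length
  // and also counts bytes as the body streams in.
  app.use(express.json({ limit: "100kb" }));

  // A hypothetical upload endpoint opts into a much larger limit.
  app.post("/upload", express.raw({ type: "*/*", limit: "50mb" }),
    (req, res) => res.sendStatus(204));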

HeyImAlex

Thank you for writing this! It’s something I’ve always wanted a comprehensive guide on, now I have something to point to.

jasonthorsness

Huh, apparently HTML input attributes like maxlength don't try anything fancy and just count UTF-16 code units, the same as JavaScript strings (I guess it makes sense...). With the prevalence of emoji, this seems like it might not do the right thing.

https://html.spec.whatwg.org/multipage/input.html#attr-input...
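
For instance, in a browser console (the emoji are illustrative; .length counts UTF-16 code units, which is exactly what maxlength counts):

  "😀".length       // 2: U+1F600 needs a surrogate pair
  [..."😀"].length   // 1: iterating by code point
  "👩‍👩‍👧‍👦".length       // 11: a family emoji (ZWJ sequence) uses 11 units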

neuroelectron

This is why my website is going to be ASCII only.

poincaredisk

Which is a reasonable and clean solution - I love the simplicity of ASCII, like every programmer does.

Except ASCII is not enough to represent my language, or even my name. Unicode is complex, but I'm glad it's here. I'm old enough to remember the absolute nightmare that was multi-language support before Unicode and now the problem of encodings is... almost solved.

Retr0id

> The byte size allowed would need to be about 100x the length limit. That’s… kind of a lot?

Would it need to be, though? ~10x ought to be enough for any realistic string that wasn't especially crafted to be annoying.
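
For what it's worth, a quick way to measure the ratio for a given string (a sketch using TextEncoder and Intl.Segmenter, both standard in modern JavaScript runtimes):

  const bytesPerGrapheme = (s) => {
    const bytes = new TextEncoder().encode(s).length;  // UTF-8 bytes
    const seg = new Intl.Segmenter(undefined, { granularity: "grapheme" });
    const graphemes = [...seg.segment(s)].length;      // user-perceived characters
    return bytes / graphemes;
  };

  bytesPerGrapheme("hello");  // 1
  bytesPerGrapheme("👩‍👩‍👧‍👦");     // 25: one family emoji is 25 UTF-8 bytes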

bsder

TIL: in the worst case, "20 UTF-8 bytes" == "1 Hindi character"

Going to have to remember that.
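
A concrete, if milder, example: क्षि is four code points but one user-perceived character (a sketch; the cluster count assumes a runtime with Unicode 15.1 conjunct rules):

  const s = "क्षि";  // क U+0915, ् U+094D, ष U+0937, ि U+093F
  new TextEncoder().encode(s).length;  // 12 UTF-8 bytes
  const seg = new Intl.Segmenter("hi", { granularity: "grapheme" });
  [...seg.segment(s)].length;          // 1 cluster (older engines may say 2)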

o11c

Note that normalization involves rearranging combining characters of different combining classes:

  > Array.from("\u{10FFff}\u0300\u0327".normalize('NFC')).map(x=>x.codePointAt().toString(16))
  [ '10ffff', '327', '300' ]
If a precombined character exists, the relevant accent will be pulled into the base regardless of where it is in the sequence. Note also that normalization can change the visual length (see below) under some circumstances.

The article is somewhat wrong when it says Unicode may "change character normalization rules"; new combining characters may be added (which affects the class sort above) but new precombined ones cannot.

---

There's one important notion of "length" that this doesn't cover: how wide is this on the screen?

For variable-width fonts this is of course very difficult. For monospace fonts, there are several steps toward the least-bad answer (a rough code sketch follows these steps):

* Zeroth, if you have reason to believe a later stage has a limit on the number of combining characters or will normalize, do the normalization yourself if that won't ruin your other concerns. (TODO - since there are some precomposed characters with multiple accents, can this actually make things worse?)

* First, deal with whitespace. Do you collapse space? What forms of line separator do you accept? How far apart are tab stops?

* Second, deal with any nonprintable/control/format characters (including spaces you don't recognize), e.g. escaping them or replacing them by their printable form but adding the "inverted" attribute.

* Third, deal with any leading combining characters (leading meaning immediately after a nonprintable or a line separator) by synthesizing an NBSP (which is not a space), which has width 1. Likewise, synthesize missing Hangul fillers anywhere in the line.

* Now, iterate through the codepoints, checking their EastAsianWidth (note that you can usually have a table combining this lookup with the earlier stages): -1 for a control character, 0 for a combining character (unless dealing with a system that's too dumb to strip them), 1 or 2 for normal characters.

* Any codepoints that are Ambiguous or in one of the Private Use Areas should be counted both ways (you want to produce two separate counts). Any combining characters that are enclosing should be treated as ambiguous (unless the base was already wide). Likewise for the Korean Hangul LVT sequences, you should produce a range of lengths (since in practice, whether they will combine depends on whether the font includes that exact sequence).

* If you encounter any ZWJ sequences, regardless of whether they correspond to a known emoji, count them both ways (min length being the max of any single component, max length with every component counted separately).

* Flag characters are evil, since they violate Unicode's random-access rule. Count them both as if they would render separately and if they would render as a flag.

* TODO what about Ideographic Description Characters?

* Finally, hard-code any exceptions you encounter in the wild, e.g. there are some Arabic codepoints that are really supposed to be more than 2 columns.

For the purpose of layout, you should mostly work based on the largest possible count. But if the smallest possible count is different, you need to use some sort of absolute positioning so you don't mess up the user's terminal.
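
To make the min/max counting concrete, a much-simplified sketch (the eaWidth stand-in covers only a handful of ranges, the ZWJ, flag, and Hangul LVT cases above are omitted, and a real implementation would generate its tables from EastAsianWidth.txt):

  // Crude stand-in for Unicode's EastAsianWidth property: a few wide
  // ranges, the BMP Private Use Area as ambiguous, everything else narrow.
  function eaWidth(cp) {
    if ((cp >= 0x1100 && cp <= 0x115f) ||  // Hangul Jamo (leading)
        (cp >= 0x2e80 && cp <= 0x9fff) ||  // CJK radicals..ideographs
        (cp >= 0xac00 && cp <= 0xd7a3) ||  // Hangul syllables
        (cp >= 0xf900 && cp <= 0xfaff))    // CJK compatibility ideographs
      return "W";
    if (cp >= 0xe000 && cp <= 0xf8ff) return "A";  // Private Use Area
    return "N";
  }

  // Rough combining-mark test; real code would consult General_Category.
  const combining = (ch) => /\p{M}/u.test(ch);

  // Returns [min, max] columns for one already-cleaned line.
  function columnRange(line) {
    let min = 0, max = 0;
    for (const ch of line.normalize("NFC")) {
      const cp = ch.codePointAt(0);
      if (cp < 0x20 || cp === 0x7f) continue;  // controls: handled earlier
      if (combining(ch)) continue;             // combining marks: width 0
      const w = eaWidth(cp);
      if (w === "W") { min += 2; max += 2; }
      else if (w === "A") { min += 1; max += 2; }  // count both ways
      else { min += 1; max += 1; }
    }
    return [min, max];
  }

  columnRange("abc");   // [ 3, 3 ]
  columnRange("漢字");  // [ 4, 4 ]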

adam-p

@dang Can the title be changed? It should be "The best – but not good – way to limit string length". Thanks.

dang

Fixed!