Third Base (2001)
9 comments
· January 23, 2025 · chungy
madcaptenor
Another fun one is base phi = (1 + sqrt(5))/2 - see https://en.wikipedia.org/wiki/Golden_ratio_base
andrewla
Almost all of these analyses of ternary hinge on the idea of efficiency of representation, being the (number of possible symbols) * (number of digits to represent).
But it seems way more useful to use the log of the number of possible symbols, in which case all bases (except unary, which is non-positional) have the exact same efficiency, and the argument becomes moot. It becomes largely a question of representation in reality, and binary wins on that front.
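That distinction is easy to see numerically. A minimal sketch (N and the function name are my own, chosen for illustration): the naive cost b × digits is minimized around base 3, while weighting each digit by log2(b) makes every base come out to roughly log2(N).

```python
import math

def digit_cost(N, b):
    """Digits needed to write N in base b, plus two per-digit cost metrics."""
    digits = math.ceil(math.log(N + 1, b))
    naive = b * digits              # (number of possible symbols) * (number of digits)
    weighted = math.log2(b) * digits  # log-weighted cost, ~log2(N) for every base
    return digits, naive, weighted

N = 10**6  # arbitrary example value
for b in range(2, 11):
    d, naive, weighted = digit_cost(N, b)
    print(f"base {b:2d}: {d:2d} digits, naive cost {naive:3d}, log-weighted {weighted:.1f}")
```

Running this shows base 3 winning the naive metric (39 vs. 40 for base 2) while the log-weighted column hovers near log2(10^6) ≈ 19.9 for every base.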
More exotic non-positional number systems seem way more interesting -- Fibonacci bases or Gray bases. Even balanced ternary if we have to break out of the binary paradigm.
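A Fibonacci base is quick to sketch: digit N carries weight Fib(N) instead of 2^N, and by Zeckendorf's theorem every positive integer has a unique representation with no two adjacent 1s. A minimal greedy version (function name is my own):

```python
def zeckendorf(n):
    """Fibonacci-base (Zeckendorf) digits of n: greedy, no two adjacent 1s."""
    if n == 0:
        return "0"
    fibs = [1, 2]
    while fibs[-1] < n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs):          # take each Fibonacci weight at most once
        if f <= n:
            digits.append('1')
            n -= f
        else:
            digits.append('0')
    return ''.join(digits).lstrip('0')
```

For example, 10 = 8 + 2 comes out as "10010".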
variaga
> Almost all of these analyses of ternary hinge on the idea of efficiency of representation, being the (number of possible symbols) * (number of digits to represent).

> But it seems way more useful to use the log of the number of possible symbols
This always bothered me as well. Hardware cost does not scale linearly with the number of possible states of each digit.
My personal theory is that the real hardware cost is less related to the number of possible states/symbols, and more related to the amount of effort required to reliably distinguish one state from another. Taking a cue from information theory, that implies the complexity per digit should track the Eb/No (SNR per information bit) of the symbol set.
madcaptenor
"Gray bases"? What do you have in mind? (I'm assuming you mean as in Gray codes, but I'm having trouble making sense of that.)
andrewla
I can't seem to find a reference at the moment.
I think I saw a talk on this years ago. The idea is that the circuit complexity of adders is high because of the carry cascade, but since Gray codes only ever flip a single bit on increments, a Gray code counter can be made very simple and have no glitch states. It might have shown up in the context of asynchronous VLSI designs.
variaga
Speaking as a digital HW designer, Gray counters (where you only ever calculate 'x+1') are actually slightly more complicated than normal binary counters, and general Gray code addition (calculating 'x+y' for arbitrary x, y) is significantly more complicated (to the extent that I've never actually seen it done in 28 years of practice - everyone figures out a way to move the addition to binary values instead).
You are correct that Gray counters only ever toggle one bit at a time and have no 'glitch' states in the sense that an asynchronous counter will always capture either 'x' or 'x+1' (can't be sure which), but not any other value. This is commonly used to pass e.g. FIFO pointers (or modular counters, generally) between asynchronous clock domains.
The problem is, Gray codes (the standard, reflected Gray code, that is - there are multiple possible Gray codes, but the non-standard ones are worse) are not a purely positional encoding as defined in the article.
- With a binary encoding, for all N: if bit N is set, you add 2^N to the calculated value.
- With a Gray encoding, bit N of the codeword is (binary bit N XOR binary bit N+1), so recovering binary bit N means XORing together all Gray bits at positions N and above - the encoding of each power of 2 is spread across two bits.
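Both directions can be written in a couple of lines. A sketch of the standard reflected Gray code conversions (the decode loop is just the suffix-XOR applied at every bit position):

```python
def binary_to_gray(n):
    """Reflected Gray code: bit N of the result is (bit N XOR bit N+1) of n."""
    return n ^ (n >> 1)

def gray_to_binary(g):
    """Invert the encoding: binary bit N is the XOR of all Gray bits at N and above."""
    b = 0
    while g:
        b ^= g
        g >>= 1
    return b
```

The encode is a single XOR, which is why passing binary counters through a Gray stage for clock-domain crossing is cheap; it's only arithmetic *in* the Gray domain that is painful.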
sitkack
The book from 1950 mentioned in the article: https://web.archive.org/web/20100505110621/https://bitsavers...
PaulHoule
I like balanced ternary a lot.
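Balanced ternary uses digits {-1, 0, 1}, so negative numbers need no sign bit. A minimal conversion sketch (the symbol 'T' for -1 is one common convention, not anything from the article):

```python
def balanced_ternary(n):
    """Balanced-ternary digits of n, written with 'T' for -1."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        r = n % 3          # Python's % is always in {0, 1, 2}, even for negative n
        if r == 2:
            digits.append('T')   # digit -1, carry 1 into the next place
            n = (n + 1) // 3
        else:
            digits.append(str(r))
            n = (n - r) // 3
    return ''.join(reversed(digits))
```

For example, 5 = 9 - 3 - 1 comes out as "1TT", and -4 = -3 - 1 as "TT" - negation is just flipping 1s and Ts.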
One of my favorite parts in The Art of Computer Programming is when Knuth goes into alternative numeric bases. He makes mention there of "base 2i" as well, though he details it in a much older paper: https://dl.acm.org/doi/10.1145/367177.367233
Base 2i is particularly fascinating for allowing a numeric system that can represent every complex number using only the digits 0-3, without a sign and without writing the number as a sum of separate real and imaginary parts.
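A conversion sketch for Gaussian integers, assuming an even imaginary part (odd imaginary parts need a fractional digit - e.g. i itself is "10.2" in base 2i); the function name is my own:

```python
def quater_imaginary(z):
    """Base-2i digits (digit set 0..3) of a Gaussian integer with even imaginary part."""
    a, b = int(z.real), int(z.imag)
    assert b % 2 == 0, "odd imaginary parts need fractional digits"
    if a == 0 and b == 0:
        return "0"
    digits = []
    while a != 0 or b != 0:
        d = a % 4                      # next digit; a - d is then a multiple of 4
        digits.append(str(d))
        a, b = b // 2, -(a - d) // 2   # exact division of (z - d) by 2i
    return ''.join(reversed(digits))
```

For example, 4 comes out as "10300" (i.e. (2i)^4 + 3*(2i)^2 = 16 - 12 = 4), matching the expansion Knuth gives for the quater-imaginary system.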