
Iterated Log Coding

18 comments

February 26, 2025

fc417fc802

It's a neat encoding but the writeup is needlessly confusing. The introduction would really benefit from a graph or two. Or even just a textual sequence showing how the value evolves as you iterate on it.

Similarly, while the end result is quite elegant, the consequences of the choices made to get there aren't really explained. It takes quite a bit of extra effort to reason out what would have happened if things had been done differently.

lblume

> Any value representable in n bits is representable in n+1 bits

> [1 1 1 1 1 1 1] 2.004e+19728

Does that mean the 8-bit version has numbers even larger than this? That doesn't seem very useful, since 10^100 is already infinity for all practical purposes.

dullcrisp

Who said anything about practical purposes?

Retr0id

How feasible is it to do arithmetic directly in this representation?

kleiba

That's exactly the crux. Addition and subtraction in log space are not exactly straightforward.
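
As a generic illustration (not the post's code): adding two numbers stored as plain logs already needs the log-sum-exp trick, and an iterated-log representation would need something like it at every level.

    import math

    def log_add(log_a, log_b):
        # Return log(a + b) given log(a) and log(b). Factoring out
        # the larger term (log-sum-exp) keeps exp() from overflowing.
        hi, lo = max(log_a, log_b), min(log_a, log_b)
        return hi + math.log1p(math.exp(lo - hi))

    # Sanity check: log(3) "+" log(5) should recover 8.
    print(math.exp(log_add(math.log(3), math.log(5))))  # ~8.0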

xixixao

Given an integer n, what is the number of bits required to represent every integer in the range -n..n?

maggit

Exact integers don't seem to be its strong suit. Can it even represent 3 exactly?

Running code from the linked notebook (https://github.com/AdamScherlis/notebooks-python/blob/main/m...), I can see that the 32-bit representation of the number 3 decodes to the following float: 2.999999983422908

(This is from running `decode(encode(3, 32))`)
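
A quick way to check other small integers, assuming the notebook's `encode(value, nbits)` / `decode(bits)` pair is in scope (the same names used above):

    # Assumes encode/decode from the linked notebook:
    # encode(x, nbits) -> bit sequence, decode(bits) -> float.
    for k in range(1, 8):
        approx = decode(encode(k, 32))
        print(k, approx, approx == k)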

dullcrisp

Yeah I think any numbers away from 0 or infinity aren’t its strong suit.

lmm

> Given an integer n, what is the number of bits required to represent every integer in the range -n..n?

log2(n) + 1
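
For plain binary that follows from counting: the range -n..n holds 2n+1 values, so you need ceil(log2(2n+1)), which is about log2(n) + 1 bits. A quick check:

    import math
    # -n..n holds 2n+1 values; plain binary needs ceil(log2(2n+1)) bits.
    for n in (10, 1000, 10**6):
        print(n, math.ceil(math.log2(2 * n + 1)))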

maggit

That's true for a normal binary encoding of integers, but I think we should understand the question in the context of the post: what's the number of bits required in iterated log coding?

yorwba

Empirically, it seems to grow more like 2*log2(n) + 1. A handwavy argument can be made that the first bit serves to distinguish the positive values from the negative ones, but after that, on average, every second bit only adds precision to values that are already distinguishable or out of range, and doesn't help with values whose representations share the same prefix. I don't know how to make that airtight, though...
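
A sketch of the measurement behind that estimate, assuming the notebook's encode(x, nbits) is in scope and returns a bit sequence (the harness itself is hypothetical):

    def bits_to_distinguish(n, max_bits=64):
        # Smallest width at which every integer in -n..n
        # gets a distinct encoding.
        for nbits in range(1, max_bits + 1):
            codes = {tuple(encode(k, nbits)) for k in range(-n, n + 1)}
            if len(codes) == 2 * n + 1:
                return nbits
        return None

    for n in (1, 2, 4, 8, 16, 32):
        print(n, bits_to_distinguish(n))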

dullcrisp

It feels like you could use e (or, I guess, any number) as the logarithm base rather than 2. I wonder if that'd make it a bit more uniform in some sense.

rossant

Nice and elegant! Surprising it was not discovered earlier (unless it was and we're unaware of it).

maggit

Cute! I wonder if it would be suitable for use as a variable-width encoding for, say, DCT coefficients in a JPEG-like codec?

bflesch

Feels a bit like binary encoding with + and - instead of 0 and 1
