TOON – Token Oriented Object Notation

inopinatus

JSON unmarshalling often has to consider separately whether an attribute is absent, false, zero, null, or the empty string, but this was never quite semantically ambiguous enough for my tastes, so the fact that void-ish values may now also be serialised as a tuple of length [0] seems to me an excellent additional obfuscation.
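The ambiguity described above can be sketched in a few lines (an illustration, not code from the thread): five inputs that are all "falsy-ish" yet may need distinct handling by a consumer.

```python
import json

# Five documents a consumer may need to treat differently,
# even though every value involved is "void-ish".
docs = ['{}', '{"flag": false}', '{"flag": 0}', '{"flag": null}', '{"flag": ""}']
MISSING = object()  # sentinel to distinguish "absent" from "null"

for doc in docs:
    value = json.loads(doc).get("flag", MISSING)
    print(doc, "->", "absent" if value is MISSING else repr(value))
```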

joshribakoff

The use case here is to reduce token usage with LLMs, e.g. an agent that outputs a list of commands as tuples of files to write and their new contents.

Supporting this use case doesn’t require perfectly marshaling every data structure ever.

But to your point the tool could have wider use cases without the limitations.

inopinatus

If one trains a model to understand it then that model will inevitably emit it, which means in turn one shall have to parse it, and now the application supports TOON for anything, and good luck telling the users/customers any different.

hedgehog

It would be interesting to compare this to BAML and TOML.

toobulkeh

Definitely is a core feature of BAML. My main complaint with BAML is that it’s all or nothing. It’s very opinionated and we can’t get the benefits without the DX and vice versa. Separating this feature without requiring a DSL of model definition is a great add.

vessenes

I’ll be interested to see benchmarks. My expectation is that accuracy will take a hit on mid or longer context prompts: I’d bet that the heavy use of JSON in fine tuning will end up impacting quality of a more terse (less reasoning space) novel encoding.

That said: I like the idea!

brian-bk

There are some very light benchmarks in the README, or are you looking for more?

Mumps

Do you mean the [0] Token Benchmarks section? I only see token count numbers.

Which doesn't address the question: do LLMs understand TOON the same as they would JSON? It's quite likely that most LLMs don't interpret this notation the same way they would JSON. So benchmarks on, say, data processing tasks, would be warranted.

[0] https://github.com/johannschopplich/toon?tab=readme-ov-file#...

tujux

I think they're talking about these sections:

1. Retrieval Accuracy - https://github.com/johannschopplich/toon?tab=readme-ov-file#...

2. Performance by dataset - https://github.com/johannschopplich/toon?tab=readme-ov-file#...

3cats-in-a-coat

I'll say the obvious. A lot of this you can just do in JSON.

Let's take the example:

    {
      "users": [
        { "id": 1, "name": "Alice", "role": "admin" },
        { "id": 2, "name": "Bob", "role": "user" }
      ]
    }

    users[2]{id,name,role}:
      1,Alice,admin
      2,Bob,user
We can keep it JSON, but use more compact list expressions, using tuples where pragmatic:

    ["users",
       [1, "Alice", "admin"],
       [2, "Bob", "user"]
    ]
The thing is the game with LLMs is not what's shortest, but what's:

1. Mainstream, so they understand it.

2. What they're tuned for, and they're tuned for what's mainstream (JSON).

If you want to go extreme compression you can shove it all in JSON strings too and keep the larger structure JSON:

    ["users",
       "1:admin:Alice",
       "2:user:Bob"
    ]
You may say "how is this better". Well it's better because it's still JSON, there's less to explain to the LLM, and to your other devs. Even if we use a weird compact format like "id:role:name" this is still shorter to explain than a completely different syntax with its whole world of rules.
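The three encodings above can be compared directly; a quick sketch (using character count as a rough stand-in for token count, since exact token counts depend on the tokenizer):

```python
import json

# The sample data from the comment above.
data = {"users": [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]}

# 1. Verbose JSON: every key repeated on every object.
verbose = json.dumps(data)

# 2. Compact JSON: positional tuples instead of keyed objects.
compact = json.dumps(["users",
                      [1, "Alice", "admin"],
                      [2, "Bob", "user"]])

# 3. Extreme: pack each row into a single "id:role:name" string.
packed = json.dumps(["users", "1:admin:Alice", "2:user:Bob"])

# Character count as a rough proxy for token count.
for label, s in [("verbose", verbose), ("compact", compact), ("packed", packed)]:
    print(label, len(s))
```

All three remain valid JSON, so any existing parser handles them; only the row-packing convention needs explaining.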

anonymoushn

Hello, it's probably better to add leading spaces before all of the words rather than none of them

Pxtl

I'm sorry I don't see this adding value over various other formats. I don't really want a new object serialization format, I just want the existing ones to have the features I need. YAML but with static typing and schema. XML but without crazy internet features. TOML but with an object format that doesn't hurt my brain. JSON but with decent multiline strings and comments. NestedText but with a sub-standard that provides static-typing and schema and whatnot.

foxglacier

The benchmarks show it performs better than them, so that's the value - cost savings and improved accuracy. I suppose you could convert JSON to TOON just for the LLM and not actually read it with your own brain.

meander_water

I don't get it, can't you just use YAML instead of inventing another DSL?

jscheel

For repeating objects of the same structure, YAML will still require each key on each object, whereas this is a hybrid with CSV, so it defines the keys once.
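A minimal sketch of that "keys once" idea (a simplified illustration, not the actual toon library): a uniform list of objects becomes one header line declaring the keys, plus CSV-like rows.

```python
def to_tabular(name, rows):
    """Encode a uniform list of dicts as a header line plus CSV-like rows.
    Assumes every row has the same keys and no value needs quoting."""
    keys = list(rows[0])
    header = f"{name}[{len(rows)}]{{{','.join(keys)}}}:"
    lines = [",".join(str(row[k]) for k in keys) for row in rows]
    return "\n".join([header] + ["  " + line for line in lines])

users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]
print(to_tabular("users", users))
# ->
# users[2]{id,name,role}:
#   1,Alice,admin
#   2,Bob,user
```

Each key is emitted once in the header instead of once per row, which is where the token savings over YAML come from.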

3cats-in-a-coat

No one forces us to use objects in JSON with repeated keys, you know.

mhosayny

It's more compact than YAML. More like a combination of YAML and CSV.

inopinatus

Norway.

dragonwriter

YAML 1.2 has been out for 16 years now, so I would simply not assume that the suggestion to use YAML for a new purpose means “use YAML 1.1”.

inopinatus

I could agree that you would not make poor assumptions.

Your LLM, however, may experience cross-format feature superposition and consequential spurious activation.

flyer23

It is, but also no one uses it :)

jayd16

I'm not sure which one would win, but it's a bit telling that compression isn't mentioned at all.

I guess it's about LLMs, so it has to be plaintext? But if you can train it on TOON, can't you train it on BSON?