
TPSV, an Alternative to TSV (and CSV)

aidenn0

If only ASCII had a field separator character, then we could just use that instead.

1vuio0pswjnm7

Not sure about "we", but I have used these for years in personal projects for my own purposes.

          Dec      Octal  Hex   Binary

          028      034    01C   00011100       FS    (File Separator)
          029      035    01D   00011101       GS    (Group Separator)
          030      036    01E   00011110       RS    (Record Separator)
          031      037    01F   00011111       US    (Unit Separator)
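A quick sketch of how those separators can be used from code (the field names here are just illustrative):

```python
# US (0x1F) separates fields, RS (0x1E) separates records. Because these
# control characters never appear in ordinary text, no quoting or escaping
# is needed -- the original motivation for having them in ASCII.
US, RS = "\x1f", "\x1e"

rows = [["name", "food"], ["Anhinga", "fish"]]
blob = RS.join(US.join(cells) for cells in rows)          # encode
parsed = [record.split(US) for record in blob.split(RS)]  # decode
print(parsed)  # [['name', 'food'], ['Anhinga', 'fish']]
```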

Rhapso

The poor delimiter special characters in the ASCII table never get any love.

ctenb

Yeah :) Though I can think of two reasons why: they're not typable on most keyboards, and most programs aren't designed to deal with them, or to render them in an aligned way the way tab characters are.

TheTaytay

I have been using TSV a LOT lately for batch inputs and outputs for LLMs. Imagine categorizing 100 items: give it a 100-row TSV with an empty category column, and have it emit a 100-row TSV with the category column filled in.

It has some nice properties: 1) it’s many fewer tokens than JSON. 2) it’s easier to edit prompts and examples in something like Google sheets, where the default format of a copied group of cells is in TSV. 3) have I mentioned how many fewer tokens it is? It’s faster, cheaper, and less brittle than a format that requires the redefinition of every column name for every row.

Obviously this breaks down for nested object hierarchies or other data that is not easily represented as a 2d table, but otherwise we’ve been quite happy. I think this format solves some other things I’ve wanted, including header comments, inline comments, better alignment, and markdown support.
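The size difference is easy to demonstrate. Here is a rough byte-count comparison (the data is made up for illustration, and byte counts only approximate token counts):

```python
import json

headers = ("name", "habitat", "category")
rows = [("Acorn Woodpecker", "forest", ""), ("Anhinga", "wetland", "")]

# TSV states the column names once; JSON repeats them for every record.
tsv = "\t".join(headers) + "\n" + "\n".join("\t".join(r) for r in rows)
jsn = json.dumps([dict(zip(headers, r)) for r in rows])

print(len(tsv), len(jsn))  # the TSV payload is markedly smaller
```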

ilyagr

It's a clever format, especially if the focus is on machines generating it and humans or machines reading it. It might even work for humans occasionally making minor edits without having to load the file in the spreadsheet.

I think it can encode anything except for something matching the regex `(\t+\|)+` at the end of cells (*Update:* Maybe `\n?(\t+\|)+`, but that doesn't change my point much) including newlines and even newlines followed by `\` (with the newline extension, of course).

For a cell containing `cell<newline>\`, you'd have:

    |cell<tab>|
    \\<tab   >|
(where `<tab >` represents a single tab character regardless of the number of spaces)

Moreover, if you really needed it, you could add another extension to specify tabs or pipes at the end of cells. For a POC, two cells with contents `a<tab>|` and `b<tab>|` could be represented as:

    |a<tab  ><tab>|b
    ~tab pipe<tab>|tab pipe
(with literal words "tab" and "pipe"). Something nicer might also be possible.

*Update:* Though, if the focus is on humans reading it, it might also make sense to allow a single row of the table to wrap and span multiple lines in the file, perhaps as another extension.

ctenb

For multiline cell contents, there is rule 7, the multi-line extension. Newlines are not allowed in cells otherwise, because of rule 2: it's a line-based format.

I personally use it to write tabular data manually, which we use to define our data model. Because the format is editor-agnostic, colleagues can easily read and edit it as well. So in my case the focus is on human read/write and machine read.

karmakaze

Is there a text format like TSV/CSV that can represent nested/repeating sub-structures?

We have YAML but it's too complex. JSON is rather verbose with all the repeated keys and quoting, XML even moreso. I'd also like to see a 'schema tree' corresponding to a header row in TSV/CSV. I'd even be fine with a binary format with standard decoding to see the plain-text contents. Something for XML like what MessagePack does for JSON would work, since we already have schema specifications.

culi

Well there's JSONL which is used heavily in scientific programs (especially in biology)

But CSV represented as JSON is usually accomplished like so:

  {
    "headers": ["name", "habitat", "food"],
    "data": [
      ["Acorn Woodpecker", "forest", "grain"],
      ["American Goldfinch", "grassland", "grain"],
      ["Anhinga", "wetland", "fish"],
      ["Australian Reed Warbler", "wetland", "grub"],
      ["Black Vulture", "forest", null]
    ]
  }
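For contrast, the JSONL shape mentioned above puts one self-contained record per line, at the cost of repeating the keys (a sketch):

```python
import json

records = [
    {"name": "Acorn Woodpecker", "habitat": "forest", "food": "grain"},
    {"name": "Anhinga", "habitat": "wetland", "food": "fish"},
]
jsonl = "\n".join(json.dumps(r) for r in records)

# Each line parses on its own, which is what makes the format streamable.
round_tripped = [json.loads(line) for line in jsonl.splitlines()]
```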

gglitch

S-expressions?

dreamcompiler

Greenspun's Tenth Corollary:

Every textual data format that is not originally S-expressions eventually devolves into an informally-specified, bug-ridden, slow implementation of half of S-expressions.

Hackbraten

Good on you to leverage EditorConfig settings. Almost every modern IDE or editor supports it either out of the box or with a plug-in.

montroser

This is pretty under-specified...

> A cell starts with | and ends with one or more tabs.

    |one\t|two|three
How many cells is this? Seems like just one, with garbage at the end, since there are no closing tabs after the first cell? Should this line count as a valid row?

> A line that starts with a cell is a row. Any other lines are ignored.

Well, I guess it counts. Either way, how should one encode a value containing a tab followed by a pipe?

jasonthorsness

The spec says the last cell does not need to end in a tab, so this would be two cells IMO

ctenb

That's correct

bvrmn

I think the spec tries and fails to translate the code implementation into human language. In the code, the cell separator is `\t|`.
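A minimal reading of that rule as code (my own sketch, not the reference implementation), treating a run of tabs followed by `|` as the separator:

```python
import re

def parse_tpsv(text):
    """Rows are lines that start with a cell ('|'); everything else is
    ignored. Cells are separated by a run of tabs followed by '|'."""
    rows = []
    for line in text.splitlines():
        if not line.startswith("|"):
            continue  # comments and decoration are skipped per the spec
        cells = re.split(r"\t+\|", line[1:])
        cells[-1] = cells[-1].rstrip("\t")  # last cell may carry padding tabs
        rows.append(cells)
    return rows

sample = "# ignored line\n|name\t|food\n|Anhinga\t|fish"
print(parse_tpsv(sample))  # [['name', 'food'], ['Anhinga', 'fish']]
```

Under this reading, `|one\t|two|three` comes out as the two cells `one` and `two|three`, matching the answers above.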

ctenb

That's also correct. In what way does it fail?

Hashex129542

We need binary formats. In this era we are capable of it. Throw away the text formats.

baby_souffle

> We need binary formats. In this era we are capable of it.

We have them, they're used where appropriate.

> Throw away the text formats.

I would argue that _most_ of the time tsv or csv are used it's because either:

a) the lowest common denominator for interchange. Oh, you don't have _my specific version of $program_? How about I give you the data in CSV? _Everything_ can read that...

b) a human is expected to inspect/view/adjust the data and they'd be using a bin -> text tool anyway. The move to binary-based log formats (`journald`) is still somewhat controversial. It would have been a non-starter if the tooling to make the binary "readable" wasn't on par with (or, in a few cases, better than!) the contemporary text-based tooling we'd been used to for the prior 30+ years.

account-5

Text is universally accessible and widely supported. Binary has its benefits, but human-facing, it has to be text.

chthonicdaemon

The idea that binary formats are the way to go because "you're going to use a program to interact with the format anyway" ignores the network effects of having text editors and Unix commands that handle text as a universal intermediate. Having bespoke programs for every format dooms you to developing a full set of tooling for each one (or, more likely, writing code that converts the binary format to text).

More recently though, consider that LLMs are terrible at emitting binary files, but amazing at emitting text. I can have a GPT spit out a nice diagram in Mermaid, or create calendar entries from a photo of an event program in ical format.

zzo38computer

Which formats are helpful can depend on the use. I think DER (which is a binary format) is not so bad (although I added a few additional types (such as key/value list, BCD string, and TRON string), but not all uses are required to use them). I had also made up Multi-DER, which is simply any number of DER concatenated together (there are formats of JSON like that too). (I had also made up TER which is a text format and a program to convert TER to DER. I also wrote a program to convert JSON to DER. It would also be possible to convert CSV, etc.)

There is also my idea of an operating system design: it will have a binary format used for most stuff, similar to DER but different in some ways (including which types are available), which is intended to be interoperable among most of the programs on the system.

Hashex129542

Same vibe. Yes, I am using the same DER files in my apps. So we can have more distinguished universal value types than just text.

My very next step is OS development too, but I'm not sure where to learn OS development at the opcode level. I thought I'd get started with the Intel docs for my CPU.

voidfunc

Yep, and stuff it into a SQLite db too and you have a query interface all built.
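For example, loading rows into an in-memory SQLite table (table name and data hypothetical) gives you SQL immediately:

```python
import sqlite3

rows = [("Anhinga", "wetland", "fish"), ("Black Vulture", "forest", None)]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE birds (name TEXT, habitat TEXT, food TEXT)")
con.executemany("INSERT INTO birds VALUES (?, ?, ?)", rows)

# A full query interface for free:
print(con.execute("SELECT name FROM birds WHERE habitat = 'wetland'").fetchall())
# [('Anhinga',)]
```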

culi

What's a good program that non-technical people can use to edit SQLite data? I think it's a great idea in theory but lacking in support.

smallerize

Ok but how do you type them? How do you search them? How do you copy-and-paste between documents?

Hashex129542

We are just changing the encoder and decoder programs from text to binary. The front-end software remains the same, for example whatever we are already using for CSV.

The binary form has a lot of benefits over plain text for editing. For example, when you change a uint8 value from 0 to 100, you just replace one byte at a known position instead of rewriting the whole document.
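A sketch of that in-place edit claim: with fixed-width binary records, updating one field is a seek plus a one-byte write, not a full rewrite (file contents here are made up):

```python
import os
import tempfile

path = tempfile.mkstemp()[1]
with open(path, "wb") as f:
    f.write(bytes([0, 17, 42]))   # three uint8 "cells"

with open(path, "r+b") as f:
    f.seek(0)                     # offset of the field to change
    f.write(bytes([100]))         # 0 -> 100, rest of the file untouched

data = open(path, "rb").read()
print(data)  # b'd\x11*'  (100, 17, 42)
os.remove(path)
```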

rr808

parquet?

bsder

Data will always outlive the program that originally produced it.

This is why you should almost always use text formats.

helix278

I like that there is plenty of room for comments, and the multiline extension is also cool. The backslash almost looks like what I would write on paper if I wanted to sneak something into the previous line :)

stevage

I hate this kind of format. It's trying to be both a data format for computers and a display format for humans. Much better off just using a tool that can edit CSV files as tables.

Also it doesn't seem to say anything about the header row?

nmz

CSV is also a display format for humans as well as computers. It's also a terrible one because it's too variable: the field separator varies, escapes may exist, "" may be used for quoting; all of this slows down parsing.

stevage

I wouldn't say CSV is a display format. Attempting to edit it by hand is pretty error prone, and reading it is hard work.

CJefferson

Honestly at this point my favorite format is JSONLines (one JSON object per line).

It instinctively feels horrible, but it’s easy to create and parse in basically every language, easy to fully specify, recovers well from one broken line in large datasets, chops up and concatenates easily.
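The "recovers well from one broken line" property falls out of parsing line by line; a sketch:

```python
import json

def read_jsonl(text):
    """Parse JSON Lines, skipping broken lines instead of failing the file."""
    good, bad = [], 0
    for line in text.splitlines():
        if not line.strip():
            continue
        try:
            good.append(json.loads(line))
        except json.JSONDecodeError:
            bad += 1  # one corrupt line doesn't poison the rest
    return good, bad

sample = '{"a": 1}\n{"broken": \n{"b": 2}'
print(read_jsonl(sample))  # ([{'a': 1}, {'b': 2}], 1)
```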

hiAndrewQuinn

I second this. I'm using JSONL to bake in the data for my single binary Finnish to English pocket dictionary ( https://github.com/hiAndrewQuinn/tsk ). It just makes things like data transformations so easy, especially with jq.

bvrmn

According to spec it's nearly impossible to correctly edit files in this format by hand.

mkl

How so? All you need is a text editor that preserves tabs.

bvrmn

1. It's quite easy to miss a tab and use only `|`.

2. Generated TPSV would look like an unreadable, hard-to-edit mess. I doubt any tool would calculate the max column length to adjust the tab count for all cells. It basically kills any streaming.

mkl

You have a very strange definition of "nearly impossible".

> 1. It's quite easy to miss a tab and use only `|`.

Any format is hard to edit manually if you don't follow the requirements of the format (which are very simple in this case).

> 2. Generated TPSV would look like an unreadable hard to edit mess.

CSVs are much less readable than this, but still entirely possible to edit.