
Poor man's bitemporal data system in SQLite and Clojure

moomin

It's a pity that Clojure is kind of a hermetic space these days, because the concept of bitemporality really deserves much more attention. It's amazing how often you want to know "What was the P&L for March using the data available on Apr 4?" and how uncommon it is to find a database design that supports that kind of query.

nickpeterson

I don’t understand footnote 16. It sounds like he’s saying he’s CEO of htmx and Datastar?

adityaathalye

Yes... This is a running joke in both communities. IYKYK :)

nickpeterson

Oh ok, I’m out of the loop on it ;) thought it was some sort of ai hallucination.

whalesalad

I've been absolutely waist deep in a bitemporal system on top of PostgreSQL using tstzrange fields. We manage an enormous portfolio of hundreds of thousands of domain names. Every time our core db is modified, before/after states are emitted to a change table. We've been doing this since 2022. Those changes get lightly transformed via trigger into a time travel record, with the valid from/to range and a gist index to make asking questions about the state of the world at a particular point in time easy. For perspective our change table has 90M rows.

All of it works quite well and is decently performant. We can ask questions like: how many domains did we own on March 13th, 2024? Or look at the entire lifecycle of a domain's ownership (owned, released, re-acquired, transferred, etc.).
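For anyone curious what that kind of as-of query looks like, here's a toy version in Python's `sqlite3` (the schema, domain names, and dates are all made up, and plain ISO-8601 text columns stand in for Postgres's `tstzrange` / GiST setup, using half-open `[valid_from, valid_to)` intervals with NULL meaning "still current"):

```python
import sqlite3

# Hypothetical stand-in for the tstzrange approach described above:
# SQLite has no range type, so validity is modeled as two ISO-8601
# text columns and "as of" queries use half-open interval logic.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE domain_history (
  domain     TEXT NOT NULL,
  owner      TEXT NOT NULL,
  valid_from TEXT NOT NULL,
  valid_to   TEXT            -- NULL = open-ended (currently valid)
);
CREATE INDEX idx_valid ON domain_history (valid_from, valid_to);
""")

rows = [
    ("example.com", "us", "2022-01-01", "2023-06-01"),  # owned, then released
    ("example.com", "us", "2024-01-15", None),          # re-acquired
    ("example.org", "us", "2023-03-01", None),
    ("example.net", "us", "2024-06-01", None),
]
conn.executemany("INSERT INTO domain_history VALUES (?, ?, ?, ?)", rows)

def domains_owned_on(conn, as_of):
    """How many domains did we own at a given point in valid time?"""
    cur = conn.execute(
        """SELECT COUNT(DISTINCT domain) FROM domain_history
           WHERE valid_from <= :t
             AND (valid_to IS NULL OR valid_to > :t)""",
        {"t": as_of})
    return cur.fetchone()[0]

print(domains_owned_on(conn, "2024-03-13"))  # -> 2
```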

The big challenge and core issue we discovered, though, is that our data sucks. QAing this new capability has been a moving target. Tons of mistakes over time that were partially undone or manually undone without a proper audit trail. Ghost records. Rapid changes by our bulk editor tool (a->b->a->b) that need to get squashed into just a->b. The schema of our database has evolved over time, too, which has made it tough to present a consistent view of things, even when the fields storing that data were merely renamed. When the system was first introduced, we had ~5 columns to track. Now we have over 30.

Suffice it to say, if I were to do things over again, I would implement a much better change tracking system that bakes in tools to clean/erase/undo/soft-delete/hard-delete mistakes, so that future me (now) wouldn't have so many edge cases to deal with in this time traveling system. I'd also like to make the change tracking capable of time travel itself, versus building that as a bolt-on side table that tracks and works from the change table. Transitioning to an EAV (entity-attr-value) approach is on my spike list, too. It makes it easier to just reduce (key, val) tuples down into an up-to-date representation, versus looking at diffs of before/after.
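That EAV reduction is essentially a fold over the change stream; a minimal sketch (the tuple shape and names here are illustrative, not our actual system):

```python
# Hypothetical sketch of reducing (entity, attribute, value) facts
# into an up-to-date record, instead of diffing before/after states.
from functools import reduce

changes = [
    ("dom-1", "status", "registered"),
    ("dom-1", "registrar", "r1"),
    ("dom-1", "status", "transferred"),  # later fact wins
    ("dom-1", "registrar", "r2"),
]

def apply_fact(state, fact):
    """Fold one fact into the accumulated per-entity state."""
    entity, attr, value = fact
    state.setdefault(entity, {})[attr] = value
    return state

current = reduce(apply_fact, changes, {})
print(current)  # {'dom-1': {'status': 'transferred', 'registrar': 'r2'}}
```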

Really interesting stuff. I learned a lot about this from Clojure/Datomic and think it's quite neat that so many Clojurists are interested in and tackling this problem. As the author notes in this post, XTDB is another one.

refset

tl;dr

  CREATE VIEW IF NOT EXISTS world_facts_as_of_now AS
  SELECT
    rowid, txn_time, valid_time,
    e, a, v, ns_user_ref, fact_meta
  FROM (
    SELECT *,
      ROW_NUMBER() OVER (
        PARTITION BY e, a
        ORDER BY valid_preferred DESC, txn_id DESC
      ) AS row_num
    FROM world_facts
  ) sub
  WHERE row_num = 1
    AND assert = 1
  ORDER BY rowid ASC;
...cool approach, but poor query optimizer!
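For anyone wanting to poke at it, the view runs as-is on an in-memory SQLite database from Python. The `world_facts` schema and rows below are guessed from the view's column list, so treat them as illustrative; the one wrinkle is that `SELECT *` in a subquery doesn't carry the implicit rowid through, so it's aliased explicitly here:

```python
import sqlite3

# Toy world_facts table inferred from the view's columns; requires
# SQLite >= 3.25 for window functions (bundled with Python 3.8+).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE world_facts (
  txn_id INTEGER, txn_time TEXT, valid_time TEXT, valid_preferred TEXT,
  e TEXT, a TEXT, v TEXT, ns_user_ref TEXT, fact_meta TEXT, assert INTEGER
);
INSERT INTO world_facts VALUES
  (1, 't1', 'v1', 'v1', 'acct-1', 'balance', '100', 'u1', '{}', 1),
  (2, 't2', 'v1', 'v1', 'acct-1', 'balance', '120', 'u1', '{}', 1),
  (3, 't3', 'v1', 'v1', 'acct-1', 'owner', 'alice', 'u1', '{}', 1),
  (4, 't4', 'v1', 'v1', 'acct-1', 'owner', 'alice', 'u1', '{}', 0);

CREATE VIEW world_facts_as_of_now AS
SELECT rowid, txn_time, valid_time, e, a, v, ns_user_ref, fact_meta
FROM (
  -- alias the implicit rowid, since SELECT * alone won't expose it
  SELECT world_facts.rowid AS rowid, world_facts.*,
    ROW_NUMBER() OVER (
      PARTITION BY e, a
      ORDER BY valid_preferred DESC, txn_id DESC
    ) AS row_num
  FROM world_facts
) sub
WHERE row_num = 1 AND assert = 1
ORDER BY rowid ASC;
""")

# Latest assertion per (e, a) survives; the retracted 'owner' fact
# (assert = 0 on the newest row) drops the attribute entirely.
rows = conn.execute("SELECT e, a, v FROM world_facts_as_of_now").fetchall()
print(rows)  # [('acct-1', 'balance', '120')]
```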

It would be interesting to see what Turso's (SQLite fork) recent DBSP-based Incremental View Maintenance capability [0] would make of a view like this.

[0] https://github.com/tursodatabase/turso/tree/main/core/increm...

adityaathalye

It is a poor man's database, after all :)

I really need to complete this thing and run some data through it... like, how poor is poor really? Can it be just enough for me to make a getaway with smol SaaS apps?