Strong Eventual Consistency – The Big Idea Behind CRDTs
41 comments
September 9, 2025
judofyr
> This has massive implications. SEC means low latency, because nodes don't need to coordinate to handle reads and writes. It means incredible fault tolerance - every single node in the system bar one could simultaneously crash, and reads and writes could still happen normally. And it means nodes still function properly if they're offline or split from the network for arbitrary time periods.
Well, this all depends on the definition of «function properly». Convergence ensures that everyone observes the same state, not that it’s a useful state. For instance, the Imploding Hashmap is a very easy CRDT to implement. The rule is that when there are concurrent changes to the same key, the final value becomes null. This gives Strong Eventual Consistency, but it isn’t a very useful data structure. All the data would just disappear!
So yes, SEC is a massively useful property which we should strive for, but it’s not going to magically solve all the end-user problems.
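The Imploding Hashmap is easy to make concrete. A minimal Python sketch (the `(tag, value)` write-tagging scheme is my own illustrative assumption, not from the comment): merge is commutative, associative, and idempotent, so replicas converge, but any pair of distinct writes to the same key collapses to `None`.

```python
# Each entry is key -> (tag, value), where tag uniquely identifies a
# write (e.g. (counter, replica_id)). Two entries with different tags
# are treated as concurrent writes and "implode" to None.

def merge(a: dict, b: dict) -> dict:
    """Merge two Imploding Hashmap replica states."""
    out = dict(a)
    for k, (tag, val) in b.items():
        if k not in out:
            out[k] = (tag, val)
        elif out[k][0] == tag:
            continue                          # same write, nothing to do
        else:
            out[k] = (max(out[k][0], tag), None)  # concurrent: implode

# Two replicas write different values under the same key:
a = {"x": (1, "hello")}
b = {"x": (2, "world")}
# merge(a, b) == merge(b, a), and the value is gone.
```

Wait — the function above mutates `out` but must return it; the full body ends with `return out`. Merging in any order yields `{"x": (2, None)}`: convergent, and useless, exactly as described.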
josephg
Yeah; this has been a known thing for at least the 15 years I’ve been working in the collaborative editing space. Strong eventual consistency isn’t enough for a system to be any good. We also need systems to “preserve user intent” - whatever that means.
One simple answer to this problem that works almost all the time is to just have a “conflict” state. If two peers concurrently overwrite the same field with different values, they can converge by marking the field as having two conflicting values. The next time a read happens, that’s what the application gets. And the user can decide how the conflict should be resolved.
In live, realtime collaborative editing situations, I think the system just picking something is often fine. The users will see it and fix it if need be. It’s really just when merging long running branches that you can get in hot water. But again, I think a lot of the time, punting to the user is a fine fallback for most applications.
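The “conflict state” idea above resembles a multi-value register. A hedged sketch, with class and field names of my own choosing: each replica carries a version vector, and merging two concurrent writes keeps both values so the application can surface the conflict.

```python
class ConflictRegister:
    """A register that keeps all concurrently written values."""

    def __init__(self, replica_id: str):
        self.replica_id = replica_id
        self.versions = {}      # replica_id -> counter (a version vector)
        self.values = set()     # current, possibly conflicting, values

    def write(self, value):
        # A local write has seen everything local, so it replaces all values.
        self.versions[self.replica_id] = self.versions.get(self.replica_id, 0) + 1
        self.values = {value}

    def merge(self, other: "ConflictRegister"):
        if self._dominates(self.versions, other.versions):
            pass                                # we already subsume them
        elif self._dominates(other.versions, self.versions):
            self.values = set(other.values)     # they subsume us
        else:
            self.values |= other.values         # concurrent: keep both
        for r, c in other.versions.items():
            self.versions[r] = max(self.versions.get(r, 0), c)

    @staticmethod
    def _dominates(a: dict, b: dict) -> bool:
        return all(a.get(r, 0) >= c for r, c in b.items())
```

After two concurrent writes and a merge, `len(reg.values) > 1` is the “conflict” signal the application reads and punts to the user.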
ljlolel
LLMs might be able to use context to auto resolve them often with correct user intent automatically
johnecheck
LLMs could be good at this, but the default should be suggestions rather than automatic resolution. Users can turn on YOLO mode if their domain is non-critical or they trust the LLM to get it right.
the_duke
The big problem with CRDTs IMO is that they make it incredibly easy to break application semantics.
Just a basic example for a task tracker:
* first update sets task cancelled_at and cancellation_reason
* second update wants the task to be in progress, so sets started_at
If code just uses the timestamps to determine the task state, it would assume the task is cancelled - unexpected, since the later user update set it to in progress.
Easy fix, we just add a state field 'PENDING|INPROGRESS|CANCELLED|...'.
Okay, but now you have a task that is in progress, but also has a cancellation timestamp, which seems inconsistent.
The point is:
With CRDTs you have to consider how partial, out-of-order merges affect the state, and make sure your logic is always written in a way that handles these properly. That is *not easy*!
I'd love it if someone came up with a framework that allows defining application semantics on top of CRDTs, and have the framework ensure types remain consistent.
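The task-tracker scenario above can be sketched in a few lines. Assuming per-field last-write-wins (a common default in CRDT map implementations; field names and timestamps are illustrative), each field merges independently, and the result is a hybrid state no single user ever wrote:

```python
def lww_merge(a: dict, b: dict) -> dict:
    """Per-field LWW: each field is (timestamp, value); newest wins."""
    out = dict(a)
    for field, (ts, val) in b.items():
        if field not in out or out[field][0] < ts:
            out[field] = (ts, val)
    return out

# Update 1 (at t=1): cancel the task.
cancel = {"state": (1, "CANCELLED"),
          "cancelled_at": (1, "2025-09-09T10:00"),
          "cancellation_reason": (1, "duplicate")}

# Update 2 (at t=2, from another user): put the task in progress.
start = {"state": (2, "IN_PROGRESS"),
         "started_at": (2, "2025-09-09T10:01")}

merged = lww_merge(cancel, start)
# merged["state"] is IN_PROGRESS, yet cancelled_at and
# cancellation_reason survive: an inconsistent blend.
```

Every field merged “correctly” in isolation; the inconsistency only exists at the level of application semantics, which is exactly the point.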
tempodox
Do not separate the state field from its time stamp(s). Use a sum type (“tagged union”) where the time stamps are the payload for a selected state. Make invalid states unrepresentable.
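One way to follow this advice in Python (type names are illustrative): model the task state as a tagged union so the timestamps live inside their state, and merge the whole union value as a unit rather than field-by-field. A task then simply cannot be “in progress” while carrying a cancellation timestamp.

```python
from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Pending:
    pass

@dataclass(frozen=True)
class InProgress:
    started_at: str          # payload only exists in this state

@dataclass(frozen=True)
class Cancelled:
    cancelled_at: str        # payload only exists in this state
    reason: str

TaskState = Union[Pending, InProgress, Cancelled]

def describe(state: TaskState) -> str:
    """Dispatch on the tag; invalid combinations are unrepresentable."""
    if isinstance(state, Cancelled):
        return f"cancelled at {state.cancelled_at}: {state.reason}"
    if isinstance(state, InProgress):
        return f"started at {state.started_at}"
    return "pending"
```

Python lacks real sum types, so this relies on convention; in a language with exhaustive pattern matching (Rust, Haskell, Swift) the compiler would also reject unhandled states.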
the_duke
There are many ways to solve each individual problem.
The point is that you always have to think about merging behaviour for every piece of state.
fauigerzigerk
Yes, sort of like you have to think about your transaction boundaries in server-side code for every single task.
The difference is that coming up with a correct CRDT solution for application specific consistency requirements can be a research project. In many cases, no CRDT solution can exist.
shakna
If you want invalid states unrepresentable, and time as a primary key... How do you deal with time regularly becoming non-linear within the realm of computing?
josephg
The general answer is to accept that time isn’t linear. In a collaborative editing environment, every event happens after some set of other events based on what has been observed locally on that peer. This creates a directed acyclic graph of events (like git).
johnecheck
It might be nice if our universe conformed to our intuitions about time steadily marching forward at the same rate everywhere.
Einstein just had to come along and screw everything up.
Causality is the key.
throwawaymaths
logical clocks
evelant
I prototyped exactly such a framework! It's designed to solve exactly the problem you mentioned. It’s a super interesting problem. https://github.com/evelant/synchrotron
The gist is:
* Replicating intentions (actions, immutable function call definitions that advance state) instead of just replicating state.
* Hybrid logical clocks for total ordering.
* Some client side db magic to make action functions deterministic.
This ensures application semantics are always preserved with no special conflict resolution considerations while still having strong eventual consistency. Check out the readme for more info. I haven’t gotten to take it much further beyond an experiment but the approach seems promising.
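For readers unfamiliar with hybrid logical clocks: a sketch of the general idea, not synchrotron's actual code. An HLC timestamp pairs physical time with a logical counter, and ties across replicas can be broken by replica id, so tuples compare lexicographically into a total order.

```python
import time

class HLC:
    """Hybrid logical clock producing (physical_ms, counter, replica_id)."""

    def __init__(self, replica_id, now=lambda: int(time.time() * 1000)):
        self.replica_id = replica_id
        self.now = now        # injectable wall clock, handy for testing
        self.pt = 0           # highest physical time seen so far
        self.counter = 0

    def tick(self):
        """Timestamp a local or send event."""
        wall = self.now()
        if wall > self.pt:
            self.pt, self.counter = wall, 0
        else:
            self.counter += 1
        return (self.pt, self.counter, self.replica_id)

    def recv(self, remote):
        """Fold in a remote timestamp so our next events sort after it."""
        rpt, rc, _ = remote
        wall = self.now()
        if wall > self.pt and wall > rpt:
            self.pt, self.counter = wall, 0
        elif rpt > self.pt:
            self.pt, self.counter = rpt, rc + 1
        elif rpt == self.pt:
            self.counter = max(self.counter, rc) + 1
        else:
            self.counter += 1
        return (self.pt, self.counter, self.replica_id)
```

The appeal for action logs is that timestamps stay close to physical time (readable, prunable by cutoff) while still being totally ordered and causally monotonic even when clocks skew.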
the_duke
Nice, will have a look!
I've had similar thoughts, but my concern was: if you have idempotent actions, why not just encode them as actions in a log? That brings you to event sourcing, a quite well-known pattern.
If you go that route, then what do you need CRDTs for?
evelant
The pattern I came up with is similar to event sourcing but with some CRDT and offline-first concepts mixed in. By using logical clocks and a client side postgres (pglite) it doesn't have to keep the entire event history for all time and the server side doesn't have to process actions/events at all beyond storing them. The clients do the resolution of state, not the server. Clients can operate offline as long as they like and the system still arrives at a consistent state. AFAIK this is different than most event sourcing patterns.
At least in my thinking/prototyping on the problem so far I think this solution offers some unique properties. It lets clients operate offline as long as they like. It delegates the heavy lifting of resolving state from actions/events to clients, requiring minimal server logic. It prevents unbounded growth of action logs by doing a sort of "rebase" for clients beyond a cutoff. It seems to me like it maximally preserves intentions without requiring specific conflict resolution logic. IMO worth exploring further.
n0w
A CRDT is any data structure whose merge function meets the definition (associative, commutative, idempotent, etc...)
Event Sourcing is not strictly designed to achieve eventual consistency in the face of concurrent writes though. But that doesn't mean it can't be!
I've also been considering an intent-based CRDT system for a while now (looking forward to checking out GP's link) and agree that it looks/sounds very much like Event Sourcing. It's worthwhile being clear on the definition of and difference between the two, though!
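The classic illustration of that definition is the grow-only counter (G-Counter): each replica increments its own slot, and merge is a pointwise max, which is associative, commutative, and idempotent, so replicas converge regardless of delivery order or duplication.

```python
# State is a dict of replica_id -> local increment count.

def increment(state: dict, replica: str) -> dict:
    out = dict(state)
    out[replica] = out.get(replica, 0) + 1
    return out

def merge(a: dict, b: dict) -> dict:
    """Pointwise max: associative, commutative, idempotent."""
    return {r: max(a.get(r, 0), b.get(r, 0)) for r in a.keys() | b.keys()}

def value(state: dict) -> int:
    return sum(state.values())
```

Event sourcing, by contrast, says nothing about the merge of concurrent logs; that is the gap an intent-based CRDT design has to fill.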
ForHackernews
Doesn't event-sourcing imply that there's a single source-of-truth data store you can source them from? I'm not sure event sourcing says anything about resolving conflicts or consistency.
littlecosmic
Don’t you also have to consider this just as much without CRDT? Not saying it isn’t a real issue, but this example could easily be a problem with a more traditional style app - maybe users open the record on their web browser at same time and make different updates, or they update the different timestamp fields directly in a list of tasks.
the_duke
Sure, but you can usually rely on database transactions to handle the hard part.
filleokus
Yes!
And many CRDT implementations have already solved this for the styled-text domain (e.g. bold and italic can be additive, but color cannot, etc).
But something user-definable would be really useful.
gritzko
The big idea behind CRDTs is that data structures can have replicas synchronizing on a best-effort basis. That is much closer to the physical reality: server here, client there, phones all over the place.
The basic CRDT ideas are actually pretty easy to implement: add some metadata here, keep some history there. The difficulty, for the past 20 years or so, is making the overheads low, and the APIs understandable.
Many projects revolve around some JSON-ish data format that is also a CRDT:
- Automerge https://automerge.org (the most tested one, but feels like legacy at times, the design is ~10yrs old, there are more interesting new ways)
- JsonJoy https://jsonjoy.com/
- RDX (mine) https://replicated.wiki/ https://github.com/gritzko/go-rdx/
- Y.js https://yjs.dev/
Others are trying to retrofit CRDTs into SQLite or Postgres. IMO, those end up using last-write-wins in most cases. Relational logic steers you that way.
skeeter2020
TIP: define acronyms the first time you use them, and don't put them in headings.
Conflict-free replicated data types (CRDTs) https://en.wikipedia.org/wiki/Conflict-free_replicated_data_...
ijaym
The majority of the content reminds me of Bigtable (https://static.googleusercontent.com/media/research.google.c...).
Do people really distinguish "Strong Eventual Consistency" from "Eventual Consistency"? To me, when I say "Eventual Consistency" I always mean "Strong Eventual Consistency".
simiones
I think there are a lot of systems that have a separate node syncing feature, where two nodes can receive updates, apply them to their local replica right away, and only communicate them and reconcile with the other backend nodes at a later time.
nl
(Non-Strong) Eventual Consistency does not guarantee that all replicas converge in a specific time period.
In an eventually consistent system replicas can diverge. A "last write" system can be eventually consistent, but at a given point different replicas can read differently.
E.g., with the operations:
1) Add "AA" to the end of the string
2) Split the string in the middle
Replicas R1 and R2 both start with the string "ZZZZ".
If R1 sees operation (1) then (2), it gets "ZZZZAA", then ("ZZZ", "ZAA").
If R2 sees (2) then (1), it gets ("ZZ", "ZZ"), then ("ZZAA", "ZZ").
Strong Eventual Consistency doesn't have this problem because the operations carry a vector clock, so the replicas know what order to apply them in.
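The example above made concrete: "append AA" and "split in the middle" do not commute, so replicas that apply them in different orders diverge unless something fixes the order. (After splitting, "append to the end" is ambiguous; the sketch picks the last piece, one of the possible readings.)

```python
def append_aa(s: str) -> str:
    return s + "AA"

def split_middle(s: str):
    mid = len(s) // 2
    return (s[:mid], s[mid:])

# R1: append, then split.
r1 = split_middle(append_aa("ZZZZ"))     # ("ZZZ", "ZAA")

# R2: split, then append to the last piece.
left, right = split_middle("ZZZZ")       # ("ZZ", "ZZ")
r2 = (left, append_aa(right))            # ("ZZ", "ZZAA")

assert r1 != r2                           # divergence
```

A CRDT version of these operations would have to either make them commute by construction or use causal metadata to replay them in one agreed order.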
josephg
I’m not sure I follow. How would this be eventually consistent at all? It looks like the two peers in your example simply have divergent state and will never converge.
simiones
You're not describing an eventually consistent system, you're describing a system that diverges. By definition, eventually consistent means that, after some time, all readers across the entire system are guaranteed to find the same values, even if before that time they may see different values.
Any eventually consistent system has to have a strategy for ensuring that all nodes eventually agree on a final value. R1 and R2 need to communicate their respective states and agree on a single one of them - maybe using timestamps (if R2's value is newer, R1 will replace its own value when they communicate), maybe using a quorum (say there is also an R3 which agrees with R1, then R2 will change its value to match the other two), maybe using an explicit priority list (say, R1's value is assumed better than R2's).
aatd86
That precludes us from having side effects such as idempotent triggers right?
c0balt
Would this be a suitable data structure to distribute node state for caching indices? Let's say two nodes have a set of N (possibly overlapping) keys and I want both to know all keys of the other for request routing (a request for n \in N preferably goes to a node with n in its local cache).
mrkeen
CAP applies here.
If you ask your cache for a value, it could choose to reply now, with the information that it has - favouring A.
Or it could wait and hope for more accurate information to return to you later, favouring C.
'Cache' seems to imply that it's built for availability purposes.
LAC-Tech
Yes. I think G-Sets (3.3.1) are what you are looking for.
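A G-Set (grow-only set) fits the routing question above: each node tracks which keys it has seen cached where, and merge is set union, so any gossip order converges. A minimal sketch (the caveat being that a key, once added, can never be removed, so evictions need a different structure):

```python
def add(state: frozenset, key) -> frozenset:
    """Add a key to a node's grow-only set of known cached keys."""
    return state | {key}

def merge(a: frozenset, b: frozenset) -> frozenset:
    """Union is associative, commutative, and idempotent."""
    return a | b
```

For caches where keys also expire, a 2P-Set or OR-Set (which track removals) would be the usual next step.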
gethly
At first it made no sense, but then I realised what the author is saying: in a distributed system, when you make local changes, you do not wait for the changes to propagate to all participants and back to you before your local state is considered consistent with the global state. Rather, it is considered consistent with the global state immediately, even before your local changes leave your system, because eventually all the changes produce the same outcome.
In a specific use case that might apply. For example, if two people edit the same document and fix the same typo, the visual outcome is the same, no matter who made the change first or last.
But that is very niche. If we take programming code, someone can change a line of code that someone else is changing as well, and the two changes might be the same - but then you have other lines of code that might not be, and you end up with code that won't compile. In other words, if we focus on the singular change in isolation, this makes sense. But that is essentially never the case in distributed environments; we have to look at the broader picture, where multiple changes made by someone are related or tied to each other and do not live in isolation.
Either way, I see nothing useful here. You can "render" your local changes immediately vs. wait for them to be propagated through the system and returned to you. There is very little difference here; in the end it is mostly just about a proper diffing approach and has little to do with the distributed system itself.
PS: the problem here is not really the order of applied changes for the local consumer, as in editing a shared Word document. The problem is that if we have a database and commit a change locally, but someone else commits a different change elsewhere, like "update users set email = foo@bar where id = 5", then before we receive that other, later change we serve clients invalid data. That is the main issue of eventual consistency here.
As I am running a system like this, I have to use "waiters" to ensure I get the correct data. For example, when a user creates some content via the web UI and is redirected back to the list of all content, this happens so fast that the distributed system has not had enough time to propagate the changes, so the user will not see his new content in the list - yet. For this scenario, I use a correlation id that I receive when the content is created and put into the redirect. When the user moves to the page that lists all the content, this correlation id is detected and a network call is made to the appropriate server, whose sole purpose is to keep the connection open until that server's state has caught up to the provided correlation id. Then I refresh the list of content to present the user the correct information - all of this while a loading indicator is shown on the page. There is simply no way around this in distributed systems, and so I find this article of no value (at least to me).
xupybd
This blogger has a lot of great takes. I bet he'd make a great addition to any team.
I'd like to point out the following open-source SQLite project: https://github.com/sqliteai/sqlite-sync with a built-in cross-platform network layer.
P.S. I am the author of the project.