Pgactive: Postgres active-active replication extension
71 comments
July 16, 2025 · eatonphil
x4m
And there's also https://postgrespro.com/docs/enterprise/current/multimaster, with a history behind it.
eatonphil
Right, I'm not so familiar with it but from that page:
> The replication mechanism is based on logical decoding and an earlier version of the pglogical extension provided for community by the 2ndQuadrant team.
zknill
Looks like it uses Postgres logical replication to share changes made on one Postgres instance with another. Conflict resolution is last-write-wins based on timestamp. Conflicting transactions are logged to a special table (pgactive_conflict_history), so you can see the history, resolve them, etc.
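Something like this should surface the recorded conflicts - treat it as a sketch, since the exact columns of pgactive_conflict_history may vary by version:

    -- Illustrative only: inspect conflicts recorded by pgactive.
    -- Column names in pgactive_conflict_history may differ by version.
    SELECT *
    FROM pgactive_conflict_history
    LIMIT 20;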
zozbot234
Is this multi-master replication? It will be interesting to see if it can be accepted into Postgres proper.
dehrmann
Did Postgres ever get a built-in, blessed replication offering? It's been a while since I set it up, but I remember this was always a big missing feature compared to MySQL.
wrs
The basic replication mechanisms have been built-in for quite a while. What’s not there is cluster management (replica launching, leader election, load balancing, that sort of thing) that makes it practical in a nontrivial situation. There are several 3rd party solutions to that. [0]
Same situation as, e.g., backups. You can just use pg_dump, but to be serious you need a 3rd party solution that does log shipping and so on.
[0] https://www.postgresql.org/download/products/3-clusteringrep...
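For the log-shipping piece, the built-in knobs look roughly like this (the archive directory is illustrative, and archive_mode needs a server restart):

    -- Sketch of enabling WAL archiving for log shipping.
    ALTER SYSTEM SET archive_mode = 'on';
    ALTER SYSTEM SET archive_command = 'cp %p /var/lib/postgresql/wal_archive/%f';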
philipwhiuk
Streaming and logical replication is built in: https://www.postgresql.org/docs/current/runtime-config-repli...
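A minimal sketch of the built-in logical replication primitives (publication, subscription, and connection details are made up; the publisher needs wal_level = logical):

    -- On the publisher:
    CREATE PUBLICATION orders_pub FOR TABLE orders;

    -- On the subscriber:
    CREATE SUBSCRIPTION orders_sub
      CONNECTION 'host=primary dbname=app user=replicator'
      PUBLICATION orders_pub;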
stephenr
Sounds like "yes, with an if" where the "if" is "if you don't really care about data consistency".
"Last write wins" sounds like a recipe for disaster IMO.
This is still one of those things that keeps people on MySQL - there are not one, but two open-source solutions available that provide synchronous cluster replication, allowing for "safe" writes against multiple primaries.
wfn
Out of curiosity, what conflict resolution options exist in MySQL and/or MySQL Cluster? (Never checked; my experience is in PG.) You always have to address conflicts somehow, of course - we're back to CAP / PACELC. Looking at [1][2], it seems they support more strategies (possibly), but none of them are magical: timestamp-comparison methods make up the better part of the offered strategies, and "latest timestamp wins" at least used to be the default (didn't read thoroughly, mind you, just curious).
But I could be totally wrong - (1) curious if someone could link to things / explain, and (2) FYI ('stephenr) last-write-wins based on timestamp is a thing in the MySQL world as well (though again, the set of conflict resolution methods available may be larger in MySQL).
[1]: https://dev.mysql.com/doc/refman/8.4/en/mysql-cluster-replic...
[2]: https://dev.mysql.com/blog-archive/enhanced-conflict-resolut... (nice writeup, though possibly outdated)
nyrikki
It's all tradeoffs: MySQL's multi-master and multi-source models have their own issues, and PG also has other options with their own tradeoffs.
ACID + distributed == tradeoffs that will always keep this a horses-for-courses problem.
kosolam
Sounds interesting. So how soon does one know whether a write has been accepted or rejected? Is it immediate or eventual?
okigan
It took 20 years to acknowledge that pushing eventual consistency to the application layer is not worth it for most applications.
Seems the same is playing out in Postgres with this extension; maybe it will take another 20 years.
rubiquity
The idea of active-active is too seductive compared to how hard it is to learn distributed systems.
tinkertamper
I'm curious about what you mean here. It sounds like you're saying that applications shouldn't concern themselves with consistency. Can you elaborate?
ForHackernews
It's eventual consistency: latest write wins after the dust settles.
As I understand it, this is a wrapper on top of Postgres' native logical replication features. Writes are committed locally and then published via a replication slot to subscriber nodes. You have ACID guarantees locally, but not across the entire distributed system.
https://www.postgresql.org/docs/current/logical-replication....
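You can poke at the underlying primitive directly; roughly something like this (slot name and output plugin are illustrative - pgactive manages its own slots):

    -- Changes commit locally, then are decoded and streamed to subscribers
    -- through a logical replication slot.
    SELECT pg_create_logical_replication_slot('example_slot', 'pgoutput');
    SELECT slot_name, plugin, confirmed_flush_lsn FROM pg_replication_slots;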
gritzko
So the outcomes are essentially random?
It all feels like they expect developers to sift through the conflict log to resolve things manually, or something. If a transaction did not go through on some of the nodes, what are the others doing then? What if they cannot roll it back safely?
Such a rabbit hole.
nico
Tangential, but related. Is there a way to have a "locally writable" read replica, i.e. a secondary db that reads from a primary but can also hold local changes that it doesn't send back to the primary?
One of the use cases is to have a development db that can get data from production or staging (and doesn't send local changes back)
What I've usually done is have some script/cron/worker run periodically to get the data (either via a dump or by running some queries), create a snapshot, and store it in S3; then a script in the local dev setup fetches the snapshot and inserts/restores the data into the local db. This works for many cases, but index building can be a pain (it can take a long time), depending on the data.
mdavidn
Load the snapshot into a "pristine" local database that you never modify. Whenever you need a "reset" of your dev database, drop it, then copy the pristine database using `createdb --template`. This copies prebuilt indexes rather than rebuilding them, which is much faster.
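In SQL terms that's roughly (database names illustrative; the template must have no active connections while it's being copied):

    DROP DATABASE IF EXISTS app_dev;
    CREATE DATABASE app_dev TEMPLATE app_pristine;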
nico
But when loading that pristine local db from the original source, wouldn't it still create the indexes and thus take a long time?
The goal is not necessarily having an easy way to reset, but rather an easy/quick way to load real data
AYBABTME
Just FYI that most people would recommend against doing this for legal reasons. PII and the like are not usually allowed to land in a staging or dev environment, for various reasons. Doing this, or allowing it, is a huge liability.
nico
Agreed, and there are a few ways to deal with that, like not including certain tables or excluding the data via queries when creating the snapshot.
Having said that, legal exposure and risk will depend heavily on what you are working on. For most projects this probably isn't a big deal. IANAL, this is not legal advice.
xinu2020
Curious about this - how would local writes conflicting with remote updates be handled? I can't think of a merge strategy that works in all scenarios (or even most of the time).
nico
Great question, I don't know. However, at least in my case, I wouldn't mind the source data always overwriting the local data. In fact, that's the way it works now: when loading a newer snapshot, the local db is dropped and then rebuilt from the snapshot.
Thinking about the developer experience though, when loading a snapshot manually, the dev knows they are overwriting their local db. However, if replication happened automatically/continuously in the background, it could lead to some really confusing/annoying behaviors.
ForHackernews
AFAIK that is the standard behavior with a Postgres logical replication setup. There is nothing preventing you from doing writes on the replica; they just won't get sent back anywhere else.
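i.e. something roughly like this, assuming a publication already exists on the source (names and connection string are made up):

    -- On the dev/replica database: pull changes from the source.
    CREATE SUBSCRIPTION dev_sub
      CONNECTION 'host=prod-db dbname=app user=replicator'
      PUBLICATION app_pub;

    -- Local writes are allowed and never flow back upstream,
    -- though they can collide with incoming changes to the same rows.
    INSERT INTO app_events (id, payload) VALUES (999999, '{"local": "test"}');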
everfrustrated
I'm scratching my head trying to think of why AWS would have worked on this. I can't think of it being used in any of their products.
RDS uses block replication. Aurora uses its own SAN replication layer.
DMS maybe?
gotimo
probably [Aurora DSQL](https://aws.amazon.com/rds/aurora/dsql/), which was released a couple of weeks ago
tneely
DSQL uses an internal service, Journal[1], for its multi-region writes.
[1]: https://www.allthingsdistributed.com/2025/05/just-make-it-sc...
loginatnine
It's definitely DSQL with the multi-region active-active feature[1].
riknos314
DSQL only uses Postgres for the query processor layer, so it doesn't require a replication library within Postgres itself. Definitely NOT from DSQL.
> We’re not using any of the storage or transaction processing parts of PostgreSQL, but are using the SQL engine, an adapted version of the planner and optimizer, and the client protocol implementation. [1]
Rather, DSQL seems to do its region replication using the distributed journal abstraction [2].
[1] https://brooker.co.za/blog/2024/12/04/inside-dsql.html [2] https://brooker.co.za/blog/2024/12/06/inside-dsql-cap.html
fizx
I thought DSQL was distributed transactions. :thinking:
mjb
DSQL does use distributed transactions to offer active-active alongside strong consistency and isolation guarantees. Here's how: https://brooker.co.za/blog/2024/12/05/inside-dsql-writes.htm...
cfycom
From the readme of the repo: "Use cases for this include running a Multi-Region high availability database cluster"
the_precipitate
Yeah, and this doesn't seem to be that useful. At least I don't understand why one would do this on a strongly ACID relational database.
hobs
In my experience multi-writer is done for latency or HA/DR reasons - have all your data in all regions at the same time. But the method here (via the tlog) seems like it sort of defeats what those sorts of systems have historically been able to do (write to multiple places from the app at the same time, so as to have the lowest possible chance of data loss).
shermantanktop
Yes, I call it spoke-and-hub. The sharded spokes accept the writes and replicate back to the hub, where all shards coexist.
Useful for metric ingestion. Not useful for bank ledgers or whatever.
prdpx7
I think AWS is using pgactive to sell "Aurora Postgres Global": https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide...
thayne
> Aurora uses its own SAN replication layer
I don't think that is used for cross-region replication.
ForHackernews
Apparently they've offered it as a feature in RDS Postgres for a couple of years now: https://aws.amazon.com/about-aws/whats-new/2023/10/pgactive-...
But only last month did they officially release it as open source to the community https://aws-news.com/article/2025-06-09-announcing-open-sour...
everfrustrated
Thanks, that seems to match.
0xFF0123
A lot of theorising above, but this seems like the earliest reference
dang
Related. Others?
Pgactive: Active-Active Replication Extension for PostgreSQL on Amazon RDS - https://news.ycombinator.com/item?id=37838223 - Oct 2023 (1 comment)
ahachete
I never tire of reminding everyone that "conflict resolution" is no more than a euphemism for "breaking durability by dropping already committed and acknowledged data".
Either architect for no data overlap on writes across all the "actives" (in which case software like pgactive could be a good deal) or use a purely distributed database (like Yugabyte).
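One sketch of what "no overlap" can look like in practice: give each active node its own key space so the two sides never generate the same row on insert (names and increments are illustrative; updates still need to stay node-local):

    -- Node A:
    CREATE SEQUENCE orders_id_seq START 1 INCREMENT 2;  -- odd ids only
    -- Node B:
    CREATE SEQUENCE orders_id_seq START 2 INCREMENT 2;  -- even ids only

    -- Same table definition on both nodes:
    CREATE TABLE orders (
      id      bigint PRIMARY KEY DEFAULT nextval('orders_id_seq'),
      region  text NOT NULL,
      payload jsonb
    );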
iotapi322
After setting up numerous clusters with repmgr and Patroni, and running them in zero-downtime production... this is the very last plugin I would ever install. I like to sleep at night.
dangoodmanUT
It seems async? That's a major problem for transaction isolation.
dboreham
Pick your poison.
ltbarcly3
Don't use this unless you know exactly what you are doing.
This is not a way to get better performance or scalability in general.
A bit of the history as I've been told by 2nd Quadrant/EDB people (my teammates):
BDR1 [0] came first and was, and is, open source. pgactive is based on BDR1. BDR2 was a closed-source rewrite of BDR1 that was later abandoned.
pglogical v1 and v2 (PGL1, PGL2) were, and are, open-source [1].
pglogical v1, after heavy modification, was eventually merged into Postgres 10.
Based on learnings from this logical replication in Postgres 10, 2nd Quadrant started pglogical v2.
pgEdge is based on pglogical v2.
Then later 2nd Quadrant started pglogical v3 (closed source) and BDR v3 (closed source). They were merged into just BDR v4. At some point the BDR product was renamed to Postgres Distributed (PGD) [2].
2ndQuadrant was acquired by EDB. We (EDB) just released PGD v6.
[0] https://github.com/2ndQuadrant/bdr/tree/bdr-plugin/REL1_0_ST...
[1] https://github.com/2ndquadrant/pglogical
[2] https://www.enterprisedb.com/docs/pgd/latest/