
Reliably Replicating Data Between PostgreSQL and ClickHouse

woodhull

We've used PeerDB's hosted offering for syncing data from Postgres to ClickHouse, both pre- and post-acquisition by ClickHouse Inc. We've also helped test the integrated sync features in ClickHouse Cloud built on top of PeerDB. We're using it to power customer-facing features within our product.

It works well. Their team is great. I feel a bit spoiled by how much access to the engineering team we had during the private beta.

It's great for use cases where it makes sense to sync Postgres tables across to ClickHouse without denormalizing them. PeerDB can transform rows in a single table sent via CDC using Lua scripting, but it can't (yet!) denormalize data into ClickHouse that is stored in 3NF on Postgres across multiple tables.

On the ClickHouse query side, we end up wanting denormalized data for query performance and to avoid JOINs. It's frequently not a great idea to query in ClickHouse using the same table structure as you're using in your transactional DB.

In our experience we sync a few tables with PeerDB but mostly end up using app-level custom code to sync denormalized data into ClickHouse for our core use cases. Most of the PeerDB-synced tables end up as ClickHouse Dictionaries, which we then use in our queries.
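For anyone curious what that pattern looks like, here's a rough sketch of wrapping a synced reference table as a ClickHouse dictionary and using it at query time; the `plans`/`events` tables and columns are hypothetical, not our actual schema:

    -- Dictionary backed by a small CDC-synced reference table (hypothetical names)
    CREATE DICTIONARY plans_dict
    (
        id   UInt64,
        name String
    )
    PRIMARY KEY id
    SOURCE(CLICKHOUSE(TABLE 'plans'))
    LAYOUT(HASHED())
    LIFETIME(MIN 300 MAX 600);

    -- Denormalize at query time without a JOIN
    SELECT
        dictGet('plans_dict', 'name', plan_id) AS plan_name,
        count() AS event_count
    FROM events
    GROUP BY plan_name;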

PeerDB works well and I like it for what it is. Just don't expect to be satisfied with querying in ClickHouse against the same table structure as you've got in Postgres unless your data size is tiny.

Curious to know about how others are using it and the architectures you've developed.

saisrirampur

Thanks, Nathan, for chiming in and for all the support during the private beta! <3

Overall, what you shared makes sense for use cases like yours. However, there are other scenarios, such as multi-tenant SaaS analytics running large-scale workloads with PeerDB/PostgreSQL CDC. In these cases there are 100s of tables across different schemas that are synced using CDC. Some customers denormalize tables using materialized views (MVs), which are a powerful feature in ClickHouse, while others power dashboards directly with JOINs, using the recent JOIN improvements in ClickHouse and suitable/optimized order keys like (tenant_id, id).
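To illustrate the order-key point, a minimal sketch with hypothetical `orders`/`customers` tables (not any particular customer's schema): the CDC target tables get `ORDER BY (tenant_id, id)` so per-tenant dashboard queries and JOINs stay fast.

    -- CDC target tables with tenant-scoped order keys (hypothetical schema)
    CREATE TABLE orders
    (
        tenant_id   UInt64,
        id          UInt64,
        customer_id UInt64,
        amount      Decimal(18, 2),
        created_at  DateTime
    )
    ENGINE = ReplacingMergeTree
    ORDER BY (tenant_id, id);

    CREATE TABLE customers
    (
        tenant_id UInt64,
        id        UInt64,
        name      String
    )
    ENGINE = ReplacingMergeTree
    ORDER BY (tenant_id, id);

    -- Per-tenant dashboard query joining the two synced tables
    SELECT c.name, sum(o.amount) AS revenue
    FROM orders AS o
    INNER JOIN customers AS c
        ON c.tenant_id = o.tenant_id AND c.id = o.customer_id
    WHERE o.tenant_id = 42
    GROUP BY c.name;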

When dealing with 100s to 1000s of tables and a heavily relational schema, building dual-write pipelines with denormalization becomes extremely difficult—especially when the workload involves UPDATEs.

We have many customers falling in the above bucket, replicating multiple petabytes of data to ClickHouse. A few customer deep dives on this are coming soon! :)

Side note: We are tracking support for in-transit transformations as a future feature. However, MVs are the way to go—more of an ELT approach.

jascha_eng

Not to sound too sales-y, but if you are looking into ClickHouse and are currently based on Postgres, you might also want to check out Timescale. Since we're just a Postgres extension, it's 100% compatible with existing systems but provides a lot of the same speed benefits as ClickHouse for analytical queries.

Don't be confused by the timeseries branding.

simonw

I've been very confused by the time-series branding - I had always thought Timescale was purely about adding time-series features to PostgreSQL. I didn't know the extension had other applications.

Looks like you've expanded into vector indexing - https://github.com/timescale/pgvectorscale - and an extension which bakes RAG patterns (including running prompts from SQL queries) into PostgreSQL: https://github.com/timescale/pgai

akulkarni

That's interesting. Our first extension (TimescaleDB) is great for time-series and real-time analytics.

And yes, you are correct: pgvectorscale scales pgvector for embeddings, and pgai includes dev-experience niceties for AI (e.g. automatic embedding management).

Would love to hear any suggestions on how we could make this less confusing. :-)

simonw

People form initial impressions of a company and what it does, file those away in their heads, and then it can be hard to get them to expand their mental model later on.

I guess that's why we have marketing teams!

mritchie712

The name of the company is timescale. That’s what’s confusing.

qeternity

We use TSDB and are pretty happy with it.

But it is much less performant than CH.

js4ever

Timescale is a very nice product, but not at all close to ClickHouse in terms of speed, based on my own tests on very large tables (billions of rows).

jascha_eng

I didn't expect so many comments. I'm about to fly across the Atlantic and can't reply properly to everyone right now without internet, but I'll try to do this justice once I'm home.

whitepoplar

How does Timescale compare to other extensions like Citus and Hydra/DuckDB?

geoka9

Could you go into the details of how one might go about replicating a PG database to a TimescaleDB one? I assume the application level would not be the most simple/reliable approach?

asadawadia

you don't

the data stays in PGDB - TSDB is an extension installed onto the database server
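For anyone who hasn't used it, setup really is just enabling the extension and converting tables. A minimal sketch, assuming a hypothetical `metrics` table with a `time` column:

    -- Enable the extension in the existing Postgres database
    CREATE EXTENSION IF NOT EXISTS timescaledb;

    -- Convert an ordinary table into a hypertable, partitioned on its time column
    SELECT create_hypertable('metrics', 'time');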

akulkarni

Exactly. You can have the best of both worlds with Timescale.

tomnipotent

Not at all too sales-y.

I'm all for keeping as much as possible in your initial Postgres deployment. If your team isn't having to work around things and everything "just works", it's a wonderful conjunction of requirements and opportunity. It's incredible how much you can get out of a single instance, really remarkable. I'd also add it's still worth it even if there is a little pain.

But I've found that once I cross about 8-12 terabytes of data I need to specialize, and that a pure columnar solution like ClickHouse really begins to shine even compared to hybrid solutions, given the amortized cost of most analytic workloads. This difference quickly adds up, and I think at that scale it makes enough of a difference to the developer experience that a switch is worth considering. Otherwise stick to Postgres and save your org some money and, more importantly, sanity.

You reach a point when you have enough queries doing enough work that the extra I/O and memory required by PAX/hybrid becomes noticeably more costly than pure columnar, at least for the workloads that I have experience with.

ClickHouse is now in my toolbox right alongside Postgres, among the things I can deploy and trust to get the job done.

saisrirampur

Great summary and spot on! Once you reach that multi-TB scale, Postgres is hard to scale. Yes, you could push Postgres further, but it would need deep expertise and architecting, and the experience wouldn't be "it just works". Ex-Citus here: we had PB-scale deployments which needed multiple months of effort to implement and an expert team to manage. Eventually many (e.g. Cloudflare, Heap) migrated to purpose-built stores like ClickHouse and SingleStore. And not to forget storage costs. Sure, there was compression/columnar in Postgres/Citus too, but it didn't fare well compared to pure columnar stores.

(Disclaimer: This is Sai from ClickHouse/PeerDB team)

akulkarni

YMMV but our largest internal dogfooded Timescale instance is 100s of terabytes

https://www.timescale.com/blog/how-we-scaled-postgresql-to-3...

(Post is a year old, IIRC the database is over one petabyte now)

nijave

A few other things I can think of as well:

- you'd probably at least want a read replica so you're not running queries on your primary db

- if you're going to the trouble of setting up a column store, it seems likely you'll want to integrate other data sources, so you need some ETL regardless

- usually a column store is more OLAP (lower memory, fast disks) whereas the operational database is OLTP (more memory and ideally less disk I/O usage)

I suppose you could get some middle ground with PG logical rep if you're mainly integrating PG data sources.
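A minimal sketch of that logical-replication middle ground (publication/subscription names and connection details are made up):

    -- On the primary (publisher)
    CREATE PUBLICATION analytics_pub FOR TABLE orders, customers;

    -- On the reporting Postgres instance (subscriber)
    CREATE SUBSCRIPTION analytics_sub
        CONNECTION 'host=primary.internal dbname=app user=replicator'
        PUBLICATION analytics_pub;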

spapas82

If anybody's interested, I've written a script in Python that executes a query in a Postgres DB and saves the results to a Parquet file (the script is not as simple as one would expect, because Parquet is not as simple as CSV and because the amount of data may be huge).

The Parquet file is a columnar-friendly format that can then be simply inserted into ClickHouse or DuckDB, or even queried directly.

This script and a cron job are enough for my (not very complex) needs for replicating my Postgres data to ClickHouse for fast queries.

https://github.com/spapas/pg-parquet-py
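Once you have the Parquet file, getting it into (or queried by) ClickHouse is a one-liner; a sketch, assuming a file called `export.parquet` and an existing `events` table:

    -- Query the exported file in place (path is relative to the server's user_files directory)
    SELECT count() FROM file('export.parquet', 'Parquet');

    -- Or load it into an existing table (clickhouse-client syntax)
    INSERT INTO events FROM INFILE 'export.parquet' FORMAT Parquet;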

zX41ZdbW

ClickHouse can query PostgreSQL directly, plus you can create a refreshable materialized view, which will do it automatically on a schedule:

https://clickhouse.com/docs/sql-reference/table-functions/po...

https://clickhouse.com/docs/materialized-view/refreshable-ma...
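Roughly what that looks like (host, database, and credentials below are placeholders):

    -- Ad-hoc query against Postgres via the postgresql() table function
    SELECT count()
    FROM postgresql('pg-host:5432', 'app_db', 'orders', 'ch_reader', 'secret');

    -- Refreshable materialized view that re-imports the table on a schedule
    CREATE MATERIALIZED VIEW orders_snapshot
    REFRESH EVERY 10 MINUTE
    ENGINE = MergeTree
    ORDER BY id
    AS SELECT *
    FROM postgresql('pg-host:5432', 'app_db', 'orders', 'ch_reader', 'secret');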

Additionally, you can set up incremental import with https://clickhouse.com/blog/postgres-cdc-connector-clickpipe...