
A major Postgres upgrade with zero downtime

pilif

> But we had less than a terabyte of data

I really wonder how an in-place `pg_upgrade` of such a small amount of data could take 30+ minutes.

My experience from a less mission-critical situation where 5 minutes of maintenance are absolutely acceptable is that an in-place `pg_upgrade` with `--link` of an 8 TB database takes less than a minute and will not accidentally lose data or fail to properly configure schema search paths or whatever other mess the article was talking about.

I understand that 30 minutes of downtime are not acceptable. But if it's 5 minutes or less, I would seriously consider an offline upgrade using `pg_upgrade`.

And if it takes 30 minutes to hard-link less than 1 TB of data files, you should seriously consider changing hosts because that's absolutely unacceptable performance.
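
For reference, the in-place path is essentially one command. A minimal sketch, wrapped in Python only for illustration; the paths, versions, and data directories below are placeholders:

  # Sketch of an in-place upgrade with hard links (--link), so data
  # files are not copied. All paths and version numbers are placeholders.
  import subprocess

  subprocess.run(
      [
          "/usr/lib/postgresql/16/bin/pg_upgrade",
          "--old-bindir", "/usr/lib/postgresql/15/bin",
          "--new-bindir", "/usr/lib/postgresql/16/bin",
          "--old-datadir", "/var/lib/postgresql/15/main",
          "--new-datadir", "/var/lib/postgresql/16/main",
          "--link",   # hard-link data files instead of copying them
          "--check",  # dry run first; drop this flag for the real upgrade
      ],
      check=True,
  )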

jedberg

This is impressive! I know others are questioning the "no downtime" bit, but that is why service level objectives exist -- because it really depends on the customer experience.

If you managed to have a cutover with no noticeable dip in business metrics (aka the users didn't notice) then I'd call that a no-downtime upgrade!

Very clever on the improvement over Lyft's methods. Thanks for the writeup. Now maybe someone can get it down from 3 seconds of pausing. :)

honestSysAdmin

> then I'd call that a no-downtime upgrade!

It'd be really convenient for me, well not me but others, if we could tell our customers this. However, those of us running DBaaS do have to offer an actual no-downtime upgrade.

darth_avocado

Pause all writes > let 16 catch up > resume writes on 16

Isn’t that….. downtime? Unless you mean downtime to be only when reads are also not available.

stopachka

We count downtime if a service is unavailable and drops requests. In this case, since the pause took about 3.5 seconds, we were able to service all requests.

darth_avocado

Interesting. If that was the case, I’m curious why you had to write custom code. Would RDS proxy not work?

The way we usually upgrade is create a new upgraded cluster, replicate all data to new cluster, pause writes and redirect RDS proxy to the new cluster. And that usually takes a few seconds.
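
For illustration, the cutover step can be scripted; a very rough sketch with boto3 (the register/deregister_db_proxy_targets calls are to the best of my knowledge, and the proxy, target group, and cluster names are all placeholders):

  # Rough sketch: repoint an RDS Proxy's target group from the old cluster
  # to the upgraded one. All identifiers below are placeholders.
  import boto3

  rds = boto3.client("rds")

  rds.deregister_db_proxy_targets(
      DBProxyName="app-proxy",
      TargetGroupName="default",
      DBClusterIdentifiers=["pg15-cluster"],
  )
  rds.register_db_proxy_targets(
      DBProxyName="app-proxy",
      TargetGroupName="default",
      DBClusterIdentifiers=["pg16-cluster"],
  )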

dboreham

By that definition you can have any length pause and still claim zero downtime.

nijave

Only if nothing has timeouts. Likely components somewhere in the stack have timeouts, so you're bound by that.

n_u

They say the "stop the world" approach that causes more downtime is

  Turn off all writes.
  Wait for 16 to catch up
  Enable writes again — this time they all go to 16
and instead they used a better algorithm:

  Pause all writes.
  Wait for 16 to catch up. 
  Resume writes on 16.
These seem pretty similar.

1. What is the difference in the algorithm? Is it just that in the "stop the world" approach the client sees their txns fail until "wait for 16 to catch up" is done? Whereas in the latter approach the client never sees their txns fail, they just have a bit more latency?

2. Why does the second approach result in less downtime?

stopachka

> in the "stop the world" approach the client sees their txns fail until "wait for 16 to catch up" is done? Whereas in the latter approach the client never sees their txns fail, they just have a bit more latency?

Yes, this is the main difference. For "stop the world", we imagined a simpler algorithm: instead of a script, we could, for example, manually toggle a switch.

However, by writing the script, the user only experiences a bit more latency, rather than failed transactions.
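
Roughly, the shape of the idea (a simplified sketch, not the actual code; `run_on_current_primary`, `drain_in_flight_writes`, `lag_bytes`, and `switch_primary` are hypothetical helpers):

  # Sketch of "pause, catch up, switch, resume": paused writes queue on an
  # event instead of failing, so callers only see extra latency.
  import asyncio

  writes_gate = asyncio.Event()
  writes_gate.set()                    # writes flow normally while the gate is set

  async def handle_write(txn):
      await writes_gate.wait()         # paused writes wait here; they don't error
      return await run_on_current_primary(txn)

  async def failover():
      writes_gate.clear()              # 1. pause new writes
      await drain_in_flight_writes()   # 2. let active queries finish
      while await lag_bytes() > 0:     # 3. wait for the 16 replica to catch up
          await asyncio.sleep(0.05)
      await switch_primary("pg16")     # 4. point connections at the new primary
      writes_gate.set()                # 5. resume writes -- now on 16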

honestSysAdmin

Zero-downtime Postgres upgrades have been kind of normalized, at least in the environments I have been exposed to, with pgcat

  https://github.com/postgresml/pgcat

Izkata

Indenting with 2 spaces is for code formatting, which is why the URL isn't a link. Don't indent it if you want a link:

https://github.com/postgresml/pgcat

stopachka

Is there some resource that explains how to do a major version upgrade with pgcat? Would love to take a look

honestSysAdmin

As far as I know, there is not. I could probably write something up.

wswope

This is really cool/useful to know about - thanks for dropping the link!

upghost

I can't believe they took the time to make such an amazing write-up. With formatting and everything. Normally I just grab whatever broken crayon is closest and scribble on the back of an overdue bill "don't fsck up next time"

xyzzy_plugh

The title is pretty misleading. They're not even running Postgres, but AWS Aurora, which is Postgres compatible, but is not Postgres.

Also, pausing queries does count as downtime. The system was unavailable for that period of time.

stopachka

> The title is pretty misleading. They're not even running Postgres, but AWS Aurora, which is Postgres compatible, but is not Postgres.

For what it's worth, every command we ran works on normal Postgres. Hence we didn't think it mattered to mention Aurora specifically in the title.

> Also, pausing queries does count as downtime.

If a query takes a bit longer to respond, I don't think that counts as downtime. From the perspective of the user, they couldn't distinguish this migration event from some blip of slightly slower queries.

scottlamb

> If a query takes a bit longer to respond, I don't think that counts as downtime. From the perspective of the user, they couldn't distinguish this migration event from some blip of slightly slower queries.

It comes down to defining Service Level Objectives (SLOs) that are meaningful to your users. For one system I worked on, latency was important, and so one SLO was "99.999% of <a certain class of> requests with a deadline >=1s should succeed with latency <1s", so if this affected more than 0.0001% of requests in <time interval defined in our SLO>, we'd have called it an outage. But I've also worked on systems with looser SLOs where this would have been fine.
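
As a rough illustration of evaluating such an SLO (a sketch; `requests` is a hypothetical list of (deadline_s, latency_s, ok) tuples for the measured interval):

  # Sketch: did at least 99.999% of requests with a deadline >= 1s
  # succeed in under 1s over the measured window?
  def slo_met(requests, target=0.99999):
      eligible = [r for r in requests if r[0] >= 1.0]   # deadline >= 1s
      if not eligible:
          return True
      good = sum(1 for _, latency, ok in eligible if ok and latency < 1.0)
      return good / len(eligible) >= target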

nijave

Not only that but I think you also need to take upstream systems into account. With a reasonably robust frontend that handles transient issues and retries reasonably, I think it's ok to say "no downtime"
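
Something as small as a retry wrapper is often enough to turn a few-second pause into extra latency rather than an error (a sketch; `TransientDBError` is a hypothetical exception type):

  # Sketch: retry transient DB errors with exponential backoff so a short
  # pause at the database surfaces as latency, not a user-facing failure.
  import time

  def with_retries(fn, attempts=5, base_delay=0.25):
      for attempt in range(attempts):
          try:
              return fn()
          except TransientDBError:
              if attempt == attempts - 1:
                  raise
              time.sleep(base_delay * 2 ** attempt)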

RadiozRadioz

Completely depends on what the "user" is. Are they a human, or a machine that explicitly requires timings within a particular threshold?

lionkor

It depends on whether it feels like an outage

awesome_dude

> If a query takes a bit longer to respond, I don't think that counts as downtime

"We're sorry that your query took 7 hours to be responded to, but it wasn't an outage - honest"

stopachka

We would count 7 hours as downtime too. Our pause was less than 5 seconds.

paulddraper

Strong energy of "someone brushed up against me and that's assault" going on here

SahAssar

AWS Aurora Postgres is a forked Postgres with a different storage engine. Sure, you are technically correct, but there are many things called "Postgres compatible" that are very much less Postgres than AWS Aurora Postgres (like, for example, CockroachDB).

nijave

IIRC AWS explicitly calls out that they still use the upstream Postgres query engine and some other parts. It very much _is_ Postgres, just not 100% pure upstream Postgres.

SahAssar

Yep, for example that is how they advertise protocol, feature and language compatibility.

paulddraper

> They're not even running Postgres, but AWS Aurora

But everything described is also PostgreSQL compatible.

> downtime

Context switching pauses execution too FYI.

unethical_ban

They reduced their potential downtime from 60s to what I assume is only a few seconds (they don't state it in the article).

If there is no noticeable user impact or unavailability of services (this is unique to each service in existence), then there is no downtime.

stopachka

> they don't state in the article

Thank you for pointing this out. I updated the essay to mention how long the pause took explicitly:

After about a 3.5 second pause [^13], the failover function completed smoothly! We had a new Postgres instance serving requests, and best of all, nobody noticed.

[^13]: About 2.5 seconds to let active queries complete, and about 1 second for the replica to catch up

metadat

What is the [^13] notation? Is it different than a *?

wenbin

Awesome!

The best we’ve achieved is 0 downtime for read operations and less than 1 minute downtime for write ops [1]

Achieving 0 downtime for write ops is super hard!

[1] https://www.listennotes.com/blog/a-practical-way-to-upgrade-...

mkleczek

I wonder why they didn't use the synchronous_commit option. That would eliminate replication lag and allow for real zero downtime.
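
For reference, a rough sketch of what enabling that on the primary could look like (psycopg2; the connection string and the standby's name are placeholders):

  # Sketch: make commits wait for the standby so it can't fall behind.
  import psycopg2

  conn = psycopg2.connect("dbname=postgres")   # placeholder connection string
  conn.autocommit = True                       # ALTER SYSTEM can't run in a transaction
  cur = conn.cursor()
  cur.execute("ALTER SYSTEM SET synchronous_standby_names = 'pg16_replica'")
  cur.execute("ALTER SYSTEM SET synchronous_commit = 'on'")
  cur.execute("SELECT pg_reload_conf()")       # apply without a restart

The trade-off is that every commit then pays the replication round trip for as long as it's enabled.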

sgarland

I have to wonder – are they using a connection pooler? I'm leaning towards no, since what they did in code can be natively done with PgBouncer, PgCat, et al. That would also explain the last footnote:

> The big bottleneck is all the active connections

For anyone who is unaware, Postgres (and Aurora-compatible Postgres, which sucks but has a great marketing team) uses a process per connection, unlike MySQL (and others, I think) which use a thread per connection. This is inevitably the bottleneck at scale, long before anything else.

I did feel for them here:

> We couldn’t create a blue-green deployment when the master DB had active replication slots. The AWS docs did not mention this. [emphasis mine]

The docs also used to explicitly say that you could run limited DDL, like creating or dropping indices, on the Green DB. I found this to be untrue in practice, notified them, and I see they've since updated their docs. A painful problem to discover though, especially when it's a huge DB that took a long time to create the B/G in the first place.

stopachka

> are they using a connection pooler

We use Hikari [1], an in-process connection pooler. We didn't opt for PgBouncer et al. because we didn't want to add the extra infra yet.

> since what they did in code can be natively done with PgBouncer, PgCat, et al.

Can you point me to a reference I could look at, about doing a major version upgrade with PgBouncer et al? My understanding is that we would still need to write a script to switch masters, similar to what we wrote.

> The big bottleneck is all the active connections

The active connections we were referring to were websocket connections; we haven't had problems with PG connections.

Right now the algorithm we use to find affected queries and notify websockets starts to falter when the number of active websocket connections on one machine gets too high. We're working on improving it in the coming weeks.

I updated the footnote to clarify that it was about websocket connections.

> I did feel for them here:

Thank you! That part was definitely the most frustrating.

[1] https://github.com/brettwooldridge/HikariCP

sgarland

I’m not sure about a reference, other than their docs [0]. Basically, you’d modify the config to point to the new servers, issue PAUSE to PgBouncer to gracefully drain connections, then RELOAD to pick up the new config, then RESUME to accept new traffic.

This would result in client errors while paused, though, so perhaps not quite the same. To me, a few seconds of downtime is fine, but everyone has their own opinions. EDIT: you could of course also modify your client code (if it doesn’t already) to gracefully retry connections, which would effectively make this zero downtime.
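
A rough sketch of that sequence against PgBouncer's admin console (host, port, and credentials are placeholders; it assumes pgbouncer's config has already been edited to point at the new server):

  # Sketch: drain, reload config, and resume via PgBouncer's admin console.
  import psycopg2

  admin = psycopg2.connect(
      "dbname=pgbouncer host=127.0.0.1 port=6432 user=pgbouncer"  # placeholders
  )
  admin.autocommit = True      # admin commands run outside a transaction
  cur = admin.cursor()
  cur.execute("PAUSE")         # drain: waits for in-flight queries to finish
  cur.execute("RELOAD")        # pick up the edited config (new backend server)
  cur.execute("RESUME")        # let queued clients through to the new server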

ProxySQL (which I think now supports Postgres) has a global delay option where you can effectively make clients think that the query is just taking a long time; meanwhile, you can do the same sequence as outlined.

If you had HA Bouncers (which hopefully you would), you could cheat a little, as you alluded to in the post, and have one still allow read queries to hit the old DB while cutting over writes on the other one, so the impact wouldn’t be as large.

[0]: https://www.pgbouncer.org/usage.html

Izkata

> This would result in client errors while paused, though, so perhaps not quite the same.

What? Docs say:

> New client connections to a paused database will wait until RESUME is called.

Which fits what I remember when I was testing pgbouncer as part of automatic failover ages ago, if the connection from pgbouncer to the database dropped it would block until it reconnected without the app erroring.

nijave

Curious what you don't like about Aurora? We've found it to generally be better than the older PG offering since it uses clustered storage, so you don't pay for storage per replica. Additionally, you can pay 30% more per instance for unlimited IOPS.

Serverless is generally a non-starter unless you have a really, really spiky workload

sgarland

As a disclaimer, I generally dislike most managed offerings of anything, because I don’t think you get nearly the value out of them for the price hike (and performance drop). For DBs especially, I don’t see the value, but I’m also a DBRE with extensive Linux experience, so the maintenance side doesn’t bother me.

For Aurora in general, here’s a short list:

* Since the storage is separated, and farther away than even EBS, latency is worse. Local, on-hardware NVMe is blindingly fast, enough that you can often forget that it isn’t RAM.

* I’ve yet to see Aurora perform better, in either its MySQL or Postgres variants. My 13-year-old Dell R620s literally outperform them; I’ve tested it.

* The claimed benefit of being able to take a DB up to 128 TiB is (a) an artificial limit that they’ve made worse by denying the same to RDS, and (b) difficult to reach in practice, because of a bunch of gotchas like fixed-size temporary storage, which can make it impossible to do online DDL on large tables.

* For the MySQL variant, they removed the change buffer entirely (since storage is distributed, it was necessary for their design), which dramatically slows down writes to tables with secondary indices.

* It’s not open-source. I can and have pored through Postgres and MySQL source code, built debug builds, etc. to figure out why something was happening.

gtaylor

I’ve never been on a team that migrated to Aurora PG for raw query perf. It is slower than a bespoke setup that is optimized for raw latency, but Aurora is going to hold up under much higher traffic with much less fuss. It also has an excellent snapshot/restore facility.

dalyons

Aurora has been excellent in my experience. Many operational problems (eg managing replica lag) disappear

nijave

I haven't tried it, but in another post recently someone mentioned enabling synchronous replication after the clone is up to date.

https://www.postgresql.org/docs/current/warm-standby.html#SY...

jillyboel

> The next few hours was frustrating: we would change a setting, start again, wait 30 minutes, and invariably end up with the same error.

Sounds about right for cloud services.