
How when AWS was down, we were not

16 comments · November 17, 2025

wparad

Hey, I wrote that article!

I'll try to add comments and answer questions where I can.

- Warren

sharklasers123

Is there not an inherent risk using an AWS service (Route 53) to do the health check? Wouldn’t it make more sense to use a different cloud provider for redundancy?

wparad

If the check can't be done, then everything stays stable, so I'm guessing the question is, "What happens if Route 53 does the check and incorrectly reports the result?"

In that case, no matter what we are using, there is going to be a critical issue. I think the best I could suggest at that point would be to have records in your zone that round-robin across different cloud providers, but that comes with its own challenges.

I believe there are some articles sitting around about how AWS plans for failure and how the fallback mechanism actually reduces load on the system rather than making it worse. I think it would require an in-depth investigation of the expected failover mode to have a good answer there.

For instance, just to make it more concrete, what sort of failure mode are you expecting to happen with the Route 53 health check? Depending on that there could be different recommendations.
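For concreteness, a minimal sketch of the kind of Route 53 health check being discussed might look like the following. The domain, path, and thresholds are illustrative placeholders, not details from the article:

```python
# Hypothetical sketch: creating a Route 53 health check that probes a
# regional endpoint. Domain, path, and thresholds are made-up placeholders.
import uuid
import boto3

route53 = boto3.client("route53")

response = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),  # idempotency token
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "us-east-1.api.example.com",
        "ResourcePath": "/health",
        "RequestInterval": 10,   # probe every 10 seconds
        "FailureThreshold": 3,   # 3 consecutive failures mark the endpoint unhealthy
    },
)
health_check_id = response["HealthCheck"]["Id"]
print("created health check", health_check_id)
```

The key point for the question above: if the checkers themselves can't run, Route 53 keeps serving the last-known state, so the failure mode that matters is a check that runs but reports the wrong answer.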

indigodaddy

Had the same thought, e.g. if things are really down, can it even do the check, etc.

indigodaddy

Back in the day (10-12 years ago) at a telecom/cable company we accomplished this with F5 BIG-IP GSLB DNS (and later migrated to A10's equivalent GSLB devices) as the authoritative DNS server for services/zones that required or were suitable for HA. (I can't totally remember, but I'm guessing we must have had a pretty low TTL for this.)

Had no idea that Route 53 had this sort of functionality
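The Route 53 functionality referenced here is its failover routing policy: a PRIMARY/SECONDARY record pair where the primary is gated by a health check and a low TTL keeps resolvers from caching a dead answer for long. A hedged sketch (zone ID, record names, addresses, and the health check ID are placeholders):

```python
# Hypothetical sketch of a Route 53 failover record pair with a low TTL.
# All identifiers and addresses below are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "A",
                    "SetIdentifier": "primary-us-east-1",
                    "Failover": "PRIMARY",
                    "TTL": 30,  # low TTL so clients re-resolve quickly after failover
                    "ResourceRecords": [{"Value": "198.51.100.10"}],
                    "HealthCheckId": "<health-check-id-from-above>",  # placeholder
                },
            },
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "api.example.com",
                    "Type": "A",
                    "SetIdentifier": "secondary-us-east-2",
                    "Failover": "SECONDARY",
                    "TTL": 30,
                    "ResourceRecords": [{"Value": "203.0.113.20"}],
                },
            },
        ]
    },
)
```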

pinkmuffinere

> During this time, us-east-1 was offline, and while we only run a limited amount of infrastructure in the region, we have to run it there because we have customers who want it there

> [Our service can only go down] five minutes and 15 seconds per year.

I don't have much experience in this area, so please correct me if I'm mistaken:

Don't these two quotes together imply that they have failed to deliver on their SLA for the subset of their customers that want their service in us-east-1? I understand the customers won't be mad at them in this case, since us-east-1 itself is down, but I feel like their title is incorrect. Some subset of their service is running on top of AWS. When AWS goes down, that subset of their service is down. When AWS was down, it seems like they were also down for some customers.

wparad

It's a good point.

We don't actually commit to running infrastructure in one specific AWS region. Customers can't request that the infra runs exactly in us-east-1, but they can request that it runs in "Eastern United States". The problem is that in scenarios that might require VPC peering or low-latency connections, we can't just run the infrastructure in us-east-2 and commit to never having a problem. And for the same reason, what happens if us-east-2 were to have an incident?

We have to assume that our customers need it in a relatively close region, and at the same time we need to plan for the contingency that the region can be down.

Then there are the customer's users to think of as well. In some cases, those users might be globally dispersed, even if the customer's infrastructure is in only one major location. So while it would be nice to claim "well, you were also down at that moment", in practice the customer's users will notice, and realistically, we want to make sure we aren't impeding remediation on their side.

That is, even if a customer says "use us-east-1", and then us-east-1 is down, it can't look that way to the customer. This gets a lot more complicated when the services we provide may be impacted differently. Consider us-east-1 DynamoDB being down while everything else is still working. Partial failure modes are much harder to deal with.
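One way to make a DNS health check sensitive to partial failures like that is to have the checked /health endpoint itself probe regional dependencies. A minimal sketch, assuming a DynamoDB canary table; the table name, port, and timeouts are invented for illustration and this is not Authress's actual design:

```python
# Hypothetical /health endpoint that also probes a regional dependency
# (DynamoDB), so a Route 53 health check fails the region over even when
# only one service is degraded. Table name and port are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer

import boto3
from botocore.config import Config

dynamodb = boto3.client(
    "dynamodb",
    region_name="us-east-1",
    config=Config(connect_timeout=1, read_timeout=1, retries={"max_attempts": 0}),
)

def region_is_healthy() -> bool:
    try:
        # A cheap read against a known table; any error or timeout counts as unhealthy.
        dynamodb.describe_table(TableName="health-canary")
        return True
    except Exception:
        return False

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status = 200 if region_is_healthy() else 503
        self.send_response(status)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```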

loloquwowndueo

Depends on what the SLA phrasing is - us-east-1 affinity is a requirement put forth by some customers so I would totally expect the SLA to specifically state it’s subject to us-east-1 availability. Essentially these customers are opting out of Authress’s fault-tolerant infrastructure and the SLA should be clear about that.

dylan604

As TFA states, we have to offer services in that region because that's where some users are as well. However, the core of our services is not in that region. I have also suggested that when the time comes for offering SLAs, there should be explicit wording exempting us-east-1.

PaulRobinson

The bulk of the article discusses their failover strategy: how they detect failures in a region, how they route requests to a backup region, and how they deal with the data-consistency and cost issues arising from that.

iso1631

I'm interested in how they measure that downtime. If you're down for 200 milliseconds, does that accumulate? How do you even measure that you're down for 200 ms?

(For what it's worth, for some of my services, 200 ms is certainly an impact; not as bad as a 2-second outage, but still noticeable and reportable.)
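One plausible way to accumulate sub-second downtime is an external probe loop that samples the endpoint on a fixed interval and charges any failed interval against the error budget. A rough sketch, assuming a 200 ms probe interval and a single vantage point (both assumptions; real SLA measurement is typically multi-vantage and more nuanced):

```python
# Hypothetical downtime accumulator: probe the endpoint every 200 ms and
# add any failed interval to the running downtime total. URL and interval
# are made-up placeholders.
import time
import urllib.request

PROBE_URL = "https://api.example.com/health"
INTERVAL_SECONDS = 0.2  # 200 ms resolution

accumulated_downtime = 0.0

while True:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(PROBE_URL, timeout=INTERVAL_SECONDS) as resp:
            ok = 200 <= resp.status < 300
    except Exception:
        ok = False
    if not ok:
        # Charge the whole probe interval against the error budget.
        accumulated_downtime += INTERVAL_SECONDS
        print(f"downtime so far: {accumulated_downtime:.1f}s")
    # Sleep out the remainder of the interval before the next probe.
    time.sleep(max(0.0, INTERVAL_SECONDS - (time.monotonic() - start)))
```

A 200 ms interval can only ever bound downtime to multiples of 200 ms, which is part of why measuring sub-second outages is genuinely hard.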

tptacek

This is a rare case where the original bait-y title is probably better than the de-bait-ified title, because the actual article is much less of a brag and much more of an actual case study.

dang

Re-how'd, plus I've resisted the temptation to insert a comma that feels missing to me.

tptacek

"How?! When AWS was down: we were not!"