
Understanding Round Robin DNS

128 comments · October 26, 2024

jgrahamc

Hmm. I've asked the authoritative DNS team to explain what's happening here. I'll let HN know when I get an authoritative answer. It's been a few years since I looked at the code and a whole bunch of people keep changing it :-)

My suspicion is that this is to do with the fact that we want to keep affinity between the client IP and a backend server (which OP mentions in their blog). And the question is "do you break that affinity if the backend server goes down?" But I'll reply to my own comment when I know more.

delusional

> I'll let HN know when I get an authoritative answer

Please remember to include a TTL so I know how long I can cache that answer.

jgrahamc

Thank you for appreciating my lame joke.

mlhpdx

So many sins have been committed in the name of session affinity.

jgrahamc

Looks like this has nothing to do with session affinity. I was wrong. Apparently, this is a difference between our paid and free plans. Getting the details, and finding out why there's a difference, and will post.

asmor

Well, CEO said there is none, get on it engineering :)

jgrahamc

Update: change is rolling out to do zero downtime failover on free accounts.

hyperknot

Great news, thanks for the amazing turnaround time!

tiffanyh

And follow-up as well.

egberts1

Please ignore the hidden master server, carry on.

teddyh

One of the early proposed solutions for this was the SRV DNS record, which was similar to the MX record but for every service, not just e-mail. With MX and SRV records, you can specify a list of servers with associated priorities for clients to try. SRV also had an extra “weight” parameter to facilitate load balancing. However, the SRV authors did not want the political fight of effectively hijacking every standard protocol and forcing all clients of every protocol to also check SRV records, so they specified that SRV should only be used by a client if the standard for that protocol explicitly specifies the use of SRV records. This technically prohibited HTTP clients from using SRV. Also, when the HTTP/2 (and later) HTTP standards were being written, bogus arguments from Google (and others) prevented the new HTTP protocols from specifying SRV. SRV seems to be effectively dead for new development, only used by some older standards.
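For illustration only (HTTP clients aren't allowed to do this, per the above), here is a minimal Go sketch of what consuming SRV records looks like with the standard library; the service name and domain are placeholders, and Go already sorts the results by priority and randomizes by weight per RFC 2782:

  package main

  import (
    "fmt"
    "net"
  )

  func main() {
    // Queries _xmpp-client._tcp.example.com; example.com is a placeholder.
    cname, addrs, err := net.LookupSRV("xmpp-client", "tcp", "example.com")
    if err != nil {
      fmt.Println("SRV lookup failed:", err)
      return
    }
    fmt.Println("canonical name:", cname)
    for _, a := range addrs {
      // A client would try these targets in order, falling back on failure.
      fmt.Printf("try %s:%d (priority %d, weight %d)\n",
        a.Target, a.Port, a.Priority, a.Weight)
    }
  }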

The new solution for load balancing seems to be the new HTTPS and SVCB DNS records. As I understand it, they were standardized by people wanting to add extra parameters to the DNS in order to jump-start the TLS 1.3 handshake, thereby making fewer roundtrips. (The SVCB record type is the same as HTTPS, but generalized like SRV.) The HTTPS and SVCB DNS record types both have the priority parameter from the SRV and MX record types, but HTTPS/SVCB lack the weight parameter from SRV. The standards have been published, and support seems to have been implemented in some browsers, but not all have enabled it. We will see what browsers actually do in the near future.

jsheard

> The new solution for load balancing seems to be the new HTTPS and SVCB DNS records. As I understand it, they were standardized by people wanting to add extra parameters to the DNS in order to jump-start the TLS 1.3 handshake, thereby making fewer roundtrips.

The other big advantage of the HTTPS record is that it allows for proper CNAME-like delegation at the domain apex, rather than requiring CNAME flattening hacks that can cause routing issues on CDNs which use GeoDNS in addition to or instead of anycast. If you've ever seen a platform recommend using a www subdomain instead of an apex domain, that's why, and it's part of why Akamai pushed for HTTPS records to be standardized since they use GeoDNS.

teddyh

Oh yes¹. This is an advantage shared by all of MX, SRV and HTTPS/SVCB, though.

1. <https://news.ycombinator.com/item?id=38420555>

jcgl

I wish so badly for proper adoption of SRV or other MX-style records that could be used for HTTP. Their lack is especially painful when dealing with the fact that people commonly want to host websites at their domain apex.

However, using MX-style records safely can be tricky if you can’t rely on DNSSEC.

__turbobrew__

DNS load balancing has some really nasty edge cases. I have had to deal with golang HTTP2 clients using RR DNS and it has caused issues.

Golang HTTP2 clients will reuse the first server they can connect to over and over and the DNS is never re-resolved. This can lead to issues where clients will not discover new servers which are added to the pool.

A particularly pathological case: if all serving backends go down, the clients will all pin to the first backend that comes back up, and they will not move off. As other servers come up, few clients connect to them, since everyone is already connected to the first server that came back.

A similar issue happens with grpc-go. The grpc DNS resolver will only re-resolve when the connection to a backend is broken. Similarly grpc clients can all gang onto a host and never move off. There are suggestions that on the server side you can set `MAX_CONNECTION_AGE` which will periodically disconnect clients after a while which causes the client to re-resolve the DNS.
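For reference, a rough sketch of that server-side knob in grpc-go; the durations here are arbitrary placeholders, not recommendations:

  package main

  import (
    "log"
    "net"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/keepalive"
  )

  func main() {
    lis, err := net.Listen("tcp", ":50051")
    if err != nil {
      log.Fatal(err)
    }
    srv := grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
      // Close connections after ~5 minutes (grpc-go adds jitter) so clients
      // reconnect and re-resolve DNS, picking up newly added backends.
      MaxConnectionAge: 5 * time.Minute,
      // Give in-flight RPCs a grace period to finish before the hard close.
      MaxConnectionAgeGrace: 30 * time.Second,
    }))
    // Register services on srv here, then serve.
    log.Fatal(srv.Serve(lis))
  }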

I really wish there was a better standard solution for service discovery. I guess the best you can do is implement a request based load balancer with a virtual IP and have the load balancer perform health checks. But you are still kicking the can down the road as you are just pushing down the problem to the system which implements virtual IPs. I guess you assume that the routing system is relatively static compared to the backends and that is where the benefits come in.

I'm curious how people do this on bare metal. I know AWS/GCP/etc. have their internal load balancers, but I am kind of curious what the secret sauce is. Any suggestions on blog posts or white papers?

fotta

> Golang HTTP2 clients will reuse the first server they can connect to over and over and the DNS is never re-resolved.

I’m not a DNS expert but shouldn’t it re-resolve when the TTL expires?

__turbobrew__

You nerd sniped me. The guts of how http2 deals with this in golang is in transport.go : https://github.com/golang/go/blob/master/src/net/http/transp...

If I’m reading the code right, round trips (HTTP requests) go through queueForIdleConn, which picks up any pre-existing connections to a host. The only time these connections are cleaned up (in HTTP/2) is if keepalives are turned off and the connection has been idle for too long, OR the connection breaks in some way, OR the max number of connections is hit and LRU cache evictions take place.

Furthermore, the golang dnsclient doesn’t even expose record TTLs to callers so how could the HTTP2 transport know when an entry is stale? https://github.com/golang/go/blob/master/src/net/dnsclient_u...
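The usual mitigation (not a real fix, since connections busy with long-lived streams stay pinned) is to periodically drop idle connections so the next request dials, and therefore resolves DNS, again. A rough sketch, with an arbitrary refresh interval and placeholder URL:

  package main

  import (
    "net/http"
    "time"
  )

  // newRefreshingClient returns an *http.Client whose idle connections are
  // dropped every interval, so later requests dial (and re-resolve DNS) again
  // instead of reusing a connection to a possibly stale IP.
  func newRefreshingClient(interval time.Duration) *http.Client {
    tr := &http.Transport{
      ForceAttemptHTTP2: true,
      IdleConnTimeout:   90 * time.Second,
    }
    go func() {
      for range time.Tick(interval) {
        tr.CloseIdleConnections()
      }
    }()
    return &http.Client{Transport: tr}
  }

  func main() {
    client := newRefreshingClient(2 * time.Minute)
    if resp, err := client.Get("https://example.com/"); err == nil {
      resp.Body.Close()
    }
  }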

toast0

It should, but like the sibling, I haven't seen what Go does. I've seen it happen elsewhere. Exchange used to cache any answer it got until it restarted. Java has had that behavior from time to time if you're not careful as well.

Querying DNS can be expensive, so it makes sense to build a cache to avoid querying again when you don't need to, but typical APIs for name resolution such as gethostbyname / getaddrinfo don't return the TTL, so people just assume forever is a good TTL. Especially for a persistent (HTTP) connection, it kind of makes sense to never query DNS again while you already have a working connection that you made with that name, and if it's TLS, it's quite possible that you don't check whether the certificate has expired while you're connected or when you do a session resumption.

But innocent things like this add up to make operating services tricky. Many times, if you start refusing connections, clients figure it out, but sometimes the caches still don't get cleared.

fotta

> but typical APIs for name resolution such as gethostbyname / getaddrinfo don't return the TTL

Oh wow I didn’t know this but I looked it up and you’re right. Interesting.

hypeatei

I've seen DNS only be refreshed when restarting on embedded devices I work with too. They use a proprietary HTTP library...

loevborg

I don't know about Golang but I swear I've seen this before as well - clients holding on to an old IP address without ever re-resolving the domain name. It makes me wary of using DNS for load balancing or blue-green deployments. I feel like I can't trust DNS clients.

wink

It's been 8-10 years but when I was serving tracking pixels we were astonished how long we still got requests from residential IPs for whole hostnames we had deprecated. That means I would not trust DNS caching anyway. I'm not talking days here, but months, with a TTL set to mere days.

ignoramous

Some reasons to connect to the same IP: TCP Fast Open, TLS session resumption, connection pools, residual censorship.

kkielhofner

TTL isn't universally respected. Consider the following path:

Your machine -> Local router -> Configured upstream DNS Server (ISP/CF/Quad8/etc) -> ? -> Authoritative DNS Server

Any one of those layers can override/mess with/cache in a variety of ways including TTL. This is why Cloudflare and a variety of other providers use IP anycast. They accepted DNS for what it is and worked around it.

Not only is the IP always the IP, the "global" BGP routing table actually universally and consistently updates much faster than DNS. Then whatever routers, machines, etc downstream from that don't matter.

__turbobrew__

I read through the golang code once due to coming across this issue with kubernetes clients which use the standard golang http client under the hood.

I would need to re-read the code to refresh my memory.

pvtmert

Not an expert, but overall: unless the connection closes for some reason, re-resolution does not happen.

Also, Java historically had a TTL of -1 (i.e. infinite) by default, causing a lot of headaches with ephemeral/container services.

unilynx

> So what happens when one of the servers is offline? Say I stop the US server:

> service nginx stop

But that's not how you should test this. A client will see the connection being refused, and go on to the next IP. But in practice, a server may not respond at all, or accept the connection and then go silent.

Now you're dependent on client timeouts, and round robin DNS will suddenly look a whole lot less attractive to increase reliability.
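To make that concrete, here is a rough Go sketch of the timeouts a client has to lean on when a server accepts the connection and then goes silent (the limits and URL are arbitrary placeholders); with a plain connection refused, failover happens in milliseconds, but here you wait for these timers:

  package main

  import (
    "net"
    "net/http"
    "time"
  )

  // A client that won't hang forever on a server that accepts TCP and then
  // goes silent. Each timer below covers a different way a backend can wedge.
  var client = &http.Client{
    Transport: &http.Transport{
      DialContext: (&net.Dialer{
        Timeout: 3 * time.Second, // black-holed address, SYN never answered
      }).DialContext,
      TLSHandshakeTimeout:   3 * time.Second, // accepts TCP, then goes silent
      ResponseHeaderTimeout: 5 * time.Second, // accepts request, never responds
    },
    Timeout: 15 * time.Second, // overall ceiling per request
  }

  func main() {
    if resp, err := client.Get("https://example.com/"); err == nil {
      resp.Body.Close()
    }
  }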

globular-toast

Yes, this can be tested by just unplugging or turning off a machine/VM with that IP address. Stopping a service is a planned action that you could handle by updating your DNS first.

Joe_Cool

Yeah, SIGSTOP or just an iptables/nftables DROP would be a much more realistic test.

tetha

> As you can see, all clients correctly detect it and choose an alternative server.

This is the nasty key point. The reliability is decided client-side.

For example, systemd-resolved at times enacted maximum technical correctness by always returning the lowest IP address. After all, DNS-RR is not well-defined, so always returning the lowest IPs is not wrong. It got changed after some riots, but as far as I know, Debian 11 is stuck with that behavior, or was for a long time.

Or, I deal with many applications with shitty or no retry behavior. They go "Oh no, I have one connection refused, gotta cancel everything, shutdown, never try again". So now 20% - 30% of all requests die in a fire.

It's an acceptable solution if you have nothing else. As the article notices, if you have quality HTTP clients with a few retries configured on them (like browsers), DNS-RR is fine to find an actual load balancer with health checks and everything, which can provide a 100% success rate.

But DNS-RR is no load balancer, and load balancers are better.

aarmenaa

True. On the other hand, if you control the clients and can guarantee their behavior, then DNS load balancing is highly effective. A place I used to work had internal DNS servers with hundreds of millions of records with 60 second TTLs for a bespoke internal routing system that connected incoming connections from customers with the correct resources inside our network. It was actually excellent. Changing routing was as simple as doing a DDNS update, and with NOTIFY pushing changes to all child servers, the average delay was less than 60 seconds for full effect. This made it easy to write more complicated tools, and I wrote a control panel that could take anything from a single server to a whole data center out of service at the click of a button.

There were definitely some warts in that system but as those sorts of systems go it was fast, easy to introspect, and relatively bulletproof.

nerdile

It's putting reliability in the hands of the client, or whatever random caching DNS resolver they're sitting behind.

It also puts failover in those same hands. If one of your regions goes down, do you want the traffic to spread evenly to your other regions? Or pile on to the next nearest neighbor? If you care what happens, then you want to retain control of your traffic management and not cede it to others.

latchkey

> It's an acceptable solution if you have nothing else.

I'd argue it isn't acceptable at all in this day and age and that there are other solutions one should pick today long before you get to the "nothing else" choice.

toast0

Anycast is nice, but it's not something you can do yourself well unless you have large scale. You need to have a large number of PoPs, and direct connectivity to many/most transit providers, or you'll get weird routing.

You also need to find yourself some IP ranges. And learn BGP and find providers where you can use it.

DNS round robin works as long as you can manage to find two boxes to run your stuff on, and it scales pretty high too. When I was at WhatsApp, we used DNS round robin until we moved into Facebook's hosting where it was infeasible due to servers not having public addresses. Yes, mostly not browsers, but not completely browserless.

latchkey

Back in 2013, that might have been the best solution for you. But there were still plenty of headlines... https://www.wamda.com/2013/11/whatsapp-goes-down

We're talking about today.

The reason I said anycast is because the vast majority of people trying to solve the need for multiple servers in multiple locations will just use CF or one of the various anycast-based CDN providers available today.

metadat

> This allows you to share the load between multiple servers, as well as to automatically detect which servers are offline and choose the online ones.

To [hesitantly] clarify a point of pedantry regarding "DNS automatic offline detection":

Out of the box, RR-DNS is only good for load balancing.

Nothing automatic happens on the availability state detection front unless you build smarts into the client. TFA introduction does sort of mention this, but it took me several re-reads of the intro to get their meaning (which to be fair could be a PEBKAC). Then I read the rest of TFA, which is all about the smarts.

If the 1/N server record selected by your browser ends up being unavailable, no automatic recovery / retry occurs at the protocol level.

p.s. "Related fun": Don't forget about Java's DNS TTL [1] and `.equals()' [2] behaviors.

[1] https://stackoverflow.com/questions/1256556/how-to-make-java...

[2] https://news.ycombinator.com/item?id=21765788 (5y ago, 168 comments)

encoderer

We accomplish this on Route53 by having it pull servers out of the DNS response if they are not healthy, and serving all responses with a very low TTL. A few clients out there ignore the TTL, but it's pretty rare.

ChocolateGod

I once achieved something similar with PowerDNS, where you can use Lua rules to run health checks on a pool of servers and only return healthy servers in the DNS response, but I found odd occurrences of clients not respecting the TTL on DNS records and caching for too long.

tetha

You usually do this with servers that should be rock-solid and stateless. HAProxy, Traefik, F5. That way, you can pull the DNS record for maintenance 24 - 48 hours in advance. If something overrides DNS TTLs that much, there is probably some reason.

d_k_f

Honest question to somebody who seems to have a bit of knowledge about this in the real world: several (German, if relevant) providers default to a TTL of ~4 hours. Lovely if everything is more or less finally set up, but usually our first step is to decrease pretty much everything down to 60 seconds so we can change things around in emergencies.

On average, does this really matter/make sense?

stackskipton

Lower TTLs are cheap insurance so you can move hostnames around.

However, you should understand that not all clients will respect those TTLs. Some resolvers enforce a minimum TTL threshold (if TTL < threshold, TTL = threshold), which is common with some ISPs, and there may also be cases where browsers and operating systems ignore TTLs or fudge them.

toast0

From experience, 90%+ of traffic will respect your TTLs or something close. So on average, it definitely does make a difference. There's always going to be a long tail of stragglers though.

Personally, my default for names that are likely to change often is 5 minutes; 1 minute is OK too, but might drive a lot more DNS traffic.

rrdnsd

Shameless plug: a FOSS project to provide failover for RR-DNS and it's being funded by NLnet https://codeberg.org/FedericoCeratto/rrdnsd

latchkey

  > "It's an amazingly simple and elegant solution that avoids using Load Balancers."
When a server is down, you have a globally distributed / cached IP address that you can't prevent people from hitting.

https://www.cloudflare.com/learning/dns/glossary/round-robin...

toast0

Skipping an unnecessary intermediary is worth considering.

Load balancing isn't without cost, and load balancers subtly (or unsubtly) messing up connections is an issue. I've also used providers where their load balancers had worse availability than our hosts.

If you control the clients, it's reasonable to call the platform DNS API to get a list of IPs and shuffle and iterate through them in an appropriate way. Even better if you have a few stably allocated IPs you can distribute in client binaries for when DNS is broken; but DNS is often not broken, and it's nice to use it for operational changes without having to push new configuration/binaries every time you update the cluster.
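A minimal sketch of that shuffle-and-iterate approach in Go, where the hostname, port, fallback IPs and timeout are all placeholders:

  package main

  import (
    "fmt"
    "math/rand"
    "net"
    "time"
  )

  // Hardcoded fallbacks shipped in the client binary, used only if DNS itself
  // is broken. These addresses are placeholders.
  var fallbackIPs = []string{"192.0.2.10", "192.0.2.11"}

  func dialService(host, port string) (net.Conn, error) {
    ips, err := net.LookupHost(host)
    if err != nil || len(ips) == 0 {
      ips = fallbackIPs
    }
    // Shuffle so clients don't all pile onto the first record.
    rand.Shuffle(len(ips), func(i, j int) { ips[i], ips[j] = ips[j], ips[i] })
    var lastErr error
    for _, ip := range ips {
      conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, port), 3*time.Second)
      if err == nil {
        return conn, nil
      }
      lastErr = err
    }
    return nil, fmt.Errorf("all %d addresses failed, last: %w", len(ips), lastErr)
  }

  func main() {
    if conn, err := dialService("example.com", "443"); err == nil {
      conn.Close()
    }
  }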

If your clients are browsers, the default behavior is OK; they usually use IPs in order, which can be problematic [1], but otherwise they have good retry behavior: on connection refused they try another IP right away, and in case of timeout they try at least a few different IPs. It's not ideal, and I'd use a load balancer for browsers, at least to serve the initial page load if feasible, and maybe DNS RR and semi-smart client logic in JS for websockets/etc; but DNS RR is workable for a whole site too.

If your clients are not browsers and not controlled by you, best of luck?

I will 100% admit that sometimes you have to assume someone built their DNS caching resolver to interpret the TTL field as a number of days rather than a number of seconds, and that clients behind those resolvers will have trouble when you update DNS. But if your load balancer is behind a DNS name, when it needs to change addresses you'll have to deal with that then, without any prior experience of it.

[1] one of the RFCs suggests that OS APIs should sort responses by prefix match, which might make sense if IP prefixes were hierarchical, as a proxy for least network distance. But in the real world, numerically adjacent /24s are often not network adjacent, and if your servers have widely disparate addresses, you may see traffic from some client IPs gravitate towards numerically similar server IPs.

ignoramous

> you control the clients, it's reasonable to call the platform dns api to get a list of ips and shuffle and iterate through in an appropriate way. Even better if you have a few stable allocated IPs you can distribute in client binaries for when DNS is broken

You know, not many apps do this but in particular WhatsApp does! Was it you?

toast0

Not my idea, but I supported it. Originally, client build scripts resolved the service names at build time, and that worked OK because our hosts tended to have a lot of longevity and DNS tends to work. Things got a little better when we were more intentional about selecting the servers to put in the list and kept track of which ones were in it, so retirements could be managed a bit better. And I pushed until we got agreement on a set of FB load balancer IPs to include as well.

ectospheno

> I will 100% admit that sometimes you have to assume someone built their DNS caching resolver to interpret the TTL field as a number of days, rather than number of seconds.

I’ve run a min ttl of 3600 on my home network for over a year. No one has complained yet.

toast0

That's only because there's no way for service operators to effectively complain when your clients continue to hit service IPs for 55 minutes longer than they should. And if there was, we'd first yell at all the people who continue to hit service IPs for weeks and months after a change; by the time we get to complaining about one home using an hour TTL, it's not a big deal.

wongarsu

All clients tested in the article behaved correctly and chose one of the reachable servers instead.

Of course somebody will inevitably misconfigure their local DNS or use a bad client. Either you accept an outage for people with broken setups or you reassign the IP to a different server in the same DC.

latchkey

If you know all of your clients, then you don't even need DNS. But, you don't know all of your clients. Nor do you always know your upstream DNS provider.

Design for failure. Don't fabricate failure.

zamadatix

Why would knowing your clients change whether or not you want to use DNS? Even when you control all of the clients you'll almost always want to keep using DNS.

A large number of services successfully achieve their failure tolerances via these kinds of DNS methods. That doesn't mean all services would or that it's always the best answer, it just means it's a path you can consider when designing for the needs of a system.

arrty88

The standard today is to use a relatively low TTL and to health check the members of the pool from the dns server.

latchkey

That's like saying there are traffic rules in Saigon.

Exact implementation of TTL is a suggestion.

jgrahamc

Hey. This is Cloudflare's CTO. We've rolled out a change to all free accounts in Cloudflare to bring them into line with paid accounts. The problem you are talking about here has been fixed and we should be doing Zero Downtime Failover for all account types. Can you retest it?

PS Thanks for writing this up. Glad we were able to change this behaviour for everyone.

hyperknot

Retested it, works brilliantly! I'll update the article accordingly.

Thanks for bringing it to the Free accounts, great outcome!

jgrahamc

Nice. Glad we got this fixed.

edm0nd

The dark remix version of this is fast flux hosting and what a lot of the bulletproof hosting providers use.

https://unit42.paloaltonetworks.com/fast-flux-101/

realchaika

May be worth mentioning that Zero Downtime Failover is a Pro-or-higher feature, I believe; that's how it was documented before as well, back when the "protect your origin server" docs were split by plan level. So you may see different behavior/retries.

solatic

Multiple A records are not a load balancing mechanism; a key component of load balancing is full control over registering new targets and deregistering old ones in order to shift traffic. Because DNS responses are cached, you can't reliably use DNS to quickly shift traffic to new IP addresses, or to drain traffic from old IP addresses.

As OP clearly shows, it's also not useful for geographically routing traffic to the nearest endpoint. Clients are dumb and may do things against their interest, the user will suffer for it, and you will get the complaints. Use a DNS provider with proper georouting if this is important to you.

The only genuinely valid reason for multiple A records is redundancy. If you have a physical NIC, guess what, those fail sometimes. If you get a virtual IP address from a cloud provider, guess what, those abstractions leak sometimes. Setting up multiple servers with multiple NICs per server and multiple A records pointing to those NICs is one of those things you do when your use case requires some stratospherically high reliability SLA and you systematically start to work through every last single point of failure in your hot path.

neuroelectron

We used to do this at Amazon in the '00s for onsite hosts. At the time, round robin DNS was the fastest way to load balance; even dedicated load balancers of the era added a few milliseconds of latency. A lot of the decisions didn't make sense to me and seemed to be grandfathered in from the '90s.

We had a dedicated DNS host and various other dedicated hosts for services related to order fulfillment. A batch job would be downloaded in the morning to the order server (app) and split up amongst the symbol scanners, which ran basic terminals. To keep latency as low as possible, the scanners would use DNS round robin. I'm not sure how much that helped, because the wifi was by far the biggest bottleneck, simply due to interference, reflection and so on.

With this setup an outage would have no effect on the throughput of the warehouse, since the batch job was all handled locally. As we moved toward same-day shipping this was of course no longer a good solution, and we moved to redundant dedicated fiber with cellular data backup, then almost completely remote servers for everything but app servers. So what we were left with was a million dollars of HVAC to cool a quarter rack of hardware, and a bunch of redundant onsite tech workers.