Understanding DNS Resolution on Linux and Kubernetes
March 23, 2025

zokier
It's a bit curious that traditionally UNIX systems did not run local DNS resolver daemons and instead resolv.conf (and nsswitch.conf) persisted for so long. In addition to potentially simplifying configuration, having a daemon would allow system-wide dns caching, something I'd imagine would have been especially valuable back in the days of slow networks. Unix has daemons for everything else so that's why it feels odd that name resolution got baked into libc.
cduzz
Unix predates DNS; nsswitch.conf tells the C library how to convert names to IP addresses. This behavior is actually dependent on which libc you're using...
To resolve names, you can ask /etc/hosts for the name/IP conversion; you can also ask DNS, or LDAP, or NIS; there are probably others I've forgotten about.
solaris: https://docs.oracle.com/cd/E19683-01/806-4077/6jd6blbbe/inde...
glibc: https://man7.org/linux/man-pages/man5/nsswitch.conf.5.html
musl appears not to have an nsswitch.conf, or any other way to configure name-to-number resolution behavior?
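For reference, a typical glibc hosts line looks like this (a common distro default, not something quoted from the article):

    # /etc/nsswitch.conf -- sources are tried left to right
    hosts: files dns
    # "files" = /etc/hosts, "dns" = the nameservers in /etc/resolv.conf;
    # backends like nis, ldap, mdns4_minimal or resolve can be added here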
0xbadcafebee
> In addition to potentially simplifying configuration, having a daemon would allow system-wide dns caching, something I'd imagine would have been especially valuable back in the days of slow networks. Unix has daemons for everything else so that's why it feels odd that name resolution got baked into libc
Having a daemon would add complexity, take up RAM and CPU, and be unnecessary in general. There really weren't that many daemons running in the olden times.
DNS resolution is expected to be fast, since it's (supposed to be) UDP-based. It's also expected that there is a caching DNS resolver somewhere near the client machines to reduce latency and spread load (in the old days the ISP would provide them, then later as "home routers" became a thing, the "home router" provided them too).
Finally, as networks were fairly slow, you just didn't make a ton of network connections, so you weren't doing a ton of DNS lookups. But even if you were, the DNS lookup was way faster than your TCP connection, and the application could cache results easily (I believe Windows did cache them in its local resolver, and nscd did on Linux/Unix).
If you really did need DNS caching (or anything else DNS related), you would just run Bind and configure it to your needs. Configuring Bind was one of the black arts of UNIX so that was avoided whenever possible :)
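For the curious, a minimal caching-resolver config is roughly this (a sketch in BIND 9 syntax, untested; the black art starts when you need anything beyond it):

    // named.conf: recursive caching resolver for local clients only
    options {
        listen-on { 127.0.0.1; };
        allow-query { localhost; };
        recursion yes;
    };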
znpy
> traditionally
Because traditionally there was a (forwarding) DNS server somewhere on the local network to do caching for everybody.
Nowadays most decent Linux distributions ship a very good caching DNS resolver (systemd-resolved), so that's not an issue anymore.
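If you want to see what systemd-resolved is doing on your own machine, something like this works (standard resolvectl subcommands; example.com is just a placeholder):

    resolvectl status             # per-link DNS servers and settings
    resolvectl query example.com  # resolve through the local stub and its cache
    resolvectl statistics         # cache hit/miss counters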
teknopaul
I don't see why system-wide DNS caching would be of use. How many different programs on the same machine hit so many common external services that per-program caching of names isn't sufficient?
The article lists a bunch of fun with systemd running junk in containers that seems counterproductive to me. A lot of systemd stuff is useful on a laptop but ends up where it's really not wanted.
Local DNS caching seems like a solution looking for a problem to me. I disable it wherever I can. I have local(ish) DNS caches on the network, but not inside LXC containers or Linux hosts.
dc396
The model under which the DNS was developed (back in the mid-80s) was that CPU/memory were more expensive resources than sending/receiving a small datagram to a centralized location within a campus over a local LAN (external connectivity off campus was also considered expensive, but necessary).
The fact that this model is still largely assumed is due to inertia.
tyingq
Some of the early Unix systems did have a local daemon; the much-maligned SunOS NIS/YP services are one example.
bandie91
Maybe for most cases nscd was enough. Not exactly a DNS cache, but a hostname cache one layer up.
sleepydog
I used to work for a huge email sender (Constant Contact). Our mail servers needed to perform an absurd number of lookups while receiving and validating mail. When I was there, we used dnscache, running locally, on all our mail servers. But even with a local dnscache, the overhead of making DNS requests and handling their responses was high enough that adding nscd made a noticeable improvement in CPU usage.
bandie91
I guess this shows that looking up the hostname database cache (via getent) is faster than looking up a local DNS cache, because the former uses a simpler data structure?
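One way to compare the two paths yourself (standard glibc tooling; example.com stands in for any name):

    getent hosts example.com   # full NSS lookup, nscd cache included if it's running
    nscd -g                    # print nscd statistics, including hosts-cache hit rates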
AndyMcConachie
People need to stop using .local or .dev for stuff like this. .dev is an actual TLD in the root zone and .local is for multicast DNS.
ICANN has said they will never delegate .internal and it should be used for these kinds of private uses.
I'm a coauthor on this Internet draft so I'm ofc rather biased.
sgc
There is a small country road near where I grew up with a highly visible Y intersection that never had a stop sign, because there was almost no traffic and, well, it was very easy to see people coming from far away, and traffic was quite slow on the bumpy road. Inexplicably, the county came along and installed a stop sign there a few decades ago. People who grew up on that road still run that stop sign to this day, more as a testament to the lack of awareness of the county authorities than anything. But it is an unnecessary annoyance as well.
That is how I feel about the takeover of the .local domain for mDNS. Why step in and take a short, widely used suffix for something that will almost always be automated, instead of taking something longer and leaving us alone with our .local setups? I will not forgive, I will not forget!
CableNinja
I use .lan at home, which is great, until I enter it in the browser and forget to add a / at the end. Both Chrome and Firefox just immediately treat it as a search query.
sethops1
Same here, feels so natural to have my local machines at <host>.lan
globular-toast
Yeah, it's quite annoying. foo.bar.svc.cluster.internal even reads better. There is also home.arpa for LAN stuff if you don't own a domain.
Joker_vD
> and .local is for multicast DNS.
Does reusing it cause any problems for mDNS, or does mDNS usage cause problems for the internal-domain usage?
vel0city
A lot of default configurations won't bother looking up .local hostnames on your DNS server and will only issue an mDNS query. This can often be changed but can be annoying to have to ensure it gets configured correctly everywhere.
And then when you reconfigure it, depending on the stack it won't bother querying mDNS at all if a DNS resolver responds.
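Concretely, the hosts line that nss-mdns/Avahi typically installs looks like this; the [NOTFOUND=return] action is what stops a .local lookup from ever falling through to unicast DNS:

    # /etc/nsswitch.conf with nss-mdns installed
    hosts: files mdns4_minimal [NOTFOUND=return] dns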
diggan
> .dev for stuff like this. .dev is an actual TLD in the root zone
Yeah, not sure why that got approved in the first place. Sure, it wasn't officially part of any of the protected/reserved names or whatever when it got bought, but Google shouldn't have been allowed to purchase it at all since it was already in use for non-public stuff. That they also require HSTS, just to break existing setups further, is salt in the wound.
rascul
It seems like they are using mDNS, though.
szszrk
> The Kubernetes DNS resolves A-B-C-D.N.pod.cluster.local to A.B.C.D, as long as A.B.C.D is a valid IP address and N is an existing namespace. Let’s be honest: I don’t know how this serves any purpose, but if you do, please let me know!
You can use that to:
- test weird DNS setups
- issue proper TLS certificates (technically possible, though it's a less-known fact, and some services like Let's Encrypt forbid it as a matter of policy)
- serve multiple services from a single IP and the same port (just a common host/server configuration on a typical reverse proxy, optionally with SNI when TLS is on top)
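To see the resolution in action from inside a cluster (run from any pod with nslookup installed; the pod IP is hypothetical, "default" is the namespace, and the default cluster.local domain is assumed):

    # the dashed name encodes the pod IP 10.244.1.5
    nslookup 10-244-1-5.default.pod.cluster.local
    # expected answer: 10.244.1.5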
vbezhenar
I don't think you can issue a proper cert for a private IP. So using a DNS hostname is the only option.
CableNinja
If you control an internal CA you can make certs for anything. I have one for my homelab, and have even issued a few certs for domains I don't control, as well as certs with IPs. The CA is who says you can't do those things. Yes, it's generally agreed that on the public internet certs shouldn't have IPs in them, but if you are operating internally there's nothing stopping you.
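As a sketch, here's a self-signed cert with an IP in the SAN (OpenSSL 1.1.1+ for -addext; the address is made up, and a real internal CA would sign a CSR instead of self-signing):

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout key.pem -out cert.pem \
        -subj "/CN=10.0.0.5" \
        -addext "subjectAltName = IP:10.0.0.5"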
szszrk
> its generally agreed upon for the public internet, certs shouldnt have IPs in them
That's a bit of a stretch, to say anyone agreed on not using IP-based certs. Quite the contrary: RFC 5280 allows it, and a SAN can contain an IP. It's just very rarely done, but it can be done and is done. Modern browsers and OSs accept it as well.
It's nice when you need to do cert pinning to make sure there is no MITM eavesdropping, or for example in some on-prem environments where you can't fully control the workstations/DNS of your user endpoints but still want your services behind certs that validate properly.
weinzierl
Let's Encrypt certs for the public internet can have IPs in them.
sciencesama
Which software are you using, and what is the process?
znpy
Setting up Kubernetes typically involves creating a private CA, so most definitely yes: you technically can issue certificates for whatever you want.
szszrk
Private CAs are a thing; they're not even rare in organizations that control their hardware. There are plenty of use cases for going that route.