.localhost Domains
211 comments · April 10, 2025 · hardaker
GrumpyYoungMan
The *.home.arpa domain in RFC 8375 has been approved for local use since 2018, which is long enough ago that most hardware and software currently in use should be able to handle it.
johnmaguire
RFC 8375 seems to have approved it specifically for use in the Home Networking Control Protocol (HNCP), though it also states "it is not intended that the use of 'home.arpa.' be restricted solely to networks where HNCP is deployed. Rather, 'home.arpa.' is intended to be the correct domain for uses like the one described for '.home' in [RFC7788]: local name service in residential homenets."
The OpenWrt wiki on Homenet suggests the project might be dead: https://openwrt.org/docs/guide-user/network/zeroconfig/hncp_...
Anyone familiar with HNCP? Are there any concerns of conflicts if HNCP becomes "a thing"? I have to say, .home.arpa doesn't exactly roll off the tongue like .internal. Some macOS users seem to have issues with .home.arpa too: https://www.reddit.com/r/MacOS/comments/1bu62do/homearpa_is_...
onre
> I have to say, .home.arpa doesn't exactly roll of the tongue like .internal.
In my native language (Finnish) it's even worse, or better, depending on personal preference - it translates directly to .mildew.lottery-ticket.
AndyMcConachie
Check the errata for RFC 7788. .home being listed in it is a mistake. .home has never been designated for this purpose.
home.arpa is for HNCP.
Use .internal.
Mountain_Skies
It's ugly and clunky, which is why after seven years it's had very little adoption. Home users aren't network engineers so these things actually do matter even if it seems silly in a technical sense.
styfle
Why use that over *.localhost, which has been available since 1999 (introduced in RFC 2606)?
bravetraveler
From RFC 2606:
The ".localhost" TLD has traditionally been statically defined in
host DNS implementations as having an A record pointing to the
loop back IP address and is reserved for such use
The RFC 8375 suggestion (*.home.arpa) allows for more than a single host in the domain; if not in name or feeling, then at least under the strictest readings (and adherence).
alexvitkov
Too much typing, and Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying - you have to type out the whole http://mything.internal.
This can be addressed by hijacking an existing TLD for private use, e.g. mything.bb :^)
nsteel
Isn't just typing the slash at the end enough to avoid it searching? e.g. mything/
jeroenhd
mything/ will make the OS resolve various hosts: mything., mything.local (mDNS), mything.whateverdomainyourhomenetworkuses. (which may be what you wanted).
If you want to be sure, use mything./ : the trailing . makes sure no further domains are appended during DNS lookup, and the / makes the browser try to access the resource without Googling it.
thaumasiotes
> Chromium-based browsers don't understand it yet and try to search for mything.internal instead, which is annoying
That's hardly the only example of annoying MONOBAR behavior.
This problem could have been avoided if we had different widgets for doing different things. Someone should have thought of that.
tepmoc
eh, you can just add a search domain via DHCP or static configuration and just type out http://mything/. No need to enter the whole domain unless you need to do SSL.
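e.g. with a search domain configured (the domain name here is illustrative), bare names get the suffix appended during lookup:

# /etc/resolv.conf (or pushed by your DHCP server as the domain-search option)
search internal
# now "mything" resolves as mything.internal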
codetrotter
In that case I would prefer naming as
<virtual>.<physical-host>.internal
So for example phpbb.mtndew.internal
And I’d probably still add phpbb.localhost
To /etc/hosts on that host like OP does
nodesocket
I wrote a super basic DNS server in Go (mostly for fun and Go practice) which allows you to specify hosts and IPs in a JSON config file. This eliminates the need for editing your /etc/hosts file. If a query matches a host in the JSON config file it returns that IP; otherwise it falls back to Cloudflare's public DNS resolver. Please, go easy on my Go code :-). I am a total beginner with Go.
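For anyone curious what that looks like, here's a minimal sketch of the same idea (not OP's actual code) using the third-party github.com/miekg/dns package; the hardcoded map stands in for the JSON config:

// Answer A queries from a static map, forward everything else to Cloudflare.
package main

import (
    "log"
    "net"

    "github.com/miekg/dns"
)

// In OP's version this map comes from a JSON config file.
var hosts = map[string]string{
    "mything.internal.": "192.168.1.10", // illustrative entry
}

func handle(w dns.ResponseWriter, r *dns.Msg) {
    if len(r.Question) == 0 {
        return
    }
    q := r.Question[0]
    if ip, ok := hosts[q.Name]; ok && q.Qtype == dns.TypeA {
        // Known host: answer from the static map.
        m := new(dns.Msg)
        m.SetReply(r)
        m.Answer = append(m.Answer, &dns.A{
            Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 60},
            A:   net.ParseIP(ip),
        })
        w.WriteMsg(m)
        return
    }
    // Unknown host: fall back to Cloudflare's public resolver.
    resp, err := dns.Exchange(r, "1.1.1.1:53")
    if err != nil {
        dns.HandleFailed(w, r)
        return
    }
    w.WriteMsg(resp)
}

func main() {
    dns.HandleFunc(".", handle)
    // Port 53 needs root; use e.g. ":5353" when testing as a normal user.
    log.Fatal(dns.ListenAndServe(":53", "udp", nil))
}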
eddyg
.home, .corp and .mail are on ICANN's "high risk" list and will never become gTLDs, so they are also good (short) options.
Ref: https://www.icann.org/en/board-activities-and-meetings/mater...
candiddevmike
It would be great if there was an easy way to get trusted certificates for reserved domains without rolling out a CA. There are a number of web technologies that don't work without a trusted HTTPS origin, and it's such a pain in the ass to add root CAs everywhere.
GoblinSlayer
You can configure them to send requests through an HTTP proxy.
kevincox
*.localhost is reserved for accessing the loopback interface. It is literally the perfect use for it. In fact on many operating systems (apparently not macOS) anything.localhost already resolves to the loopback address.
MaKey
It seems like it has not been standardized yet:
> As of March 7, 2025, the domain has not been standardized by the Internet Engineering Task Force (IETF), though an Internet-Draft describing the TLD has been submitted.
jwilk
It's been reserved by ICANN:
https://www.icann.org/en/board-activities-and-meetings/mater...
> Resolved (2024.07.29.06), the Board reserves .INTERNAL from delegation in the DNS root zone permanently to provide for its use in private-use applications.
g0db1t
> Resolved (2024.07.29.06) ... I'm too tired, I read it as an IPv4 address...
sdwolfz
Note: browsers also give you a Secure Context for .localhost domains.
https://developer.mozilla.org/en-US/docs/Web/Security/Secure...
So you don't need self-signed certs for HTTPS on local if you want to, for example, have a backend API and a frontend SPA running at the same time talking to each other on your machine (authentication, for example, requires a secure context if doing OAuth2).
c-hendricks
> if you want to, for example, have a backend API and a frontend SPA running at the same time talking to eachother on your machine
Won't `localhost:3000` and `localhost:3001` also both be secure contexts? Just starting a random Vite project, which opens `localhost:3000`, `window.isSecureContext` returns true.
sdwolfz
This is used for scenarios where you don't want to hardcode port numbers, like when running multiple projects on your machine at the same time.
Usually you'd have a reverse proxy running on port 80 that forwards traffic to the appropriate service, and an entry in /etc/hosts for each domain, or a catch-all in dnsmasq.
Example: a docker compose setup using Traefik as a reverse proxy can have all internal services running on the same port (e.g. 3000) but with a different domain each. The reverse proxy will then forward traffic based on Host. As long as the host is set up properly, you could have any number of backends and frontends started like this, via docker compose scaling, or by starting the services of another project. Ports won't conflict with each other as they're only exposed internally.
Now, whether you have a use for such a setup or not is up to you.
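A minimal sketch of that kind of compose file (image names and domains are illustrative; assumes Traefik's Docker provider):

services:
  traefik:
    image: traefik:v3
    command:
      - --providers.docker=true
      - --entrypoints.web.address=:80
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
  api:
    image: my-backend # illustrative
    labels:
      - traefik.http.routers.api.rule=Host(`api.localhost`)
      - traefik.http.services.api.loadbalancer.server.port=3000
  app:
    image: my-frontend # illustrative
    labels:
      - traefik.http.routers.app.rule=Host(`app.localhost`)
      - traefik.http.services.app.loadbalancer.server.port=3000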
bolognafairy
Well shit. TIL. Time to go reduce the complexity of our dev environment.
jrvieira
you should never trust browsers default behavior
1. not all browsers are the same
2. there is no official standard
3. even if there was, standards are often ignored
4. what is true today can be false tomorrow
5. this is mitigation, not security
wutwutwat
> 1. not all browsers are the same
they are all aiming to implement the same html spec
> 2. there is no official standard
there literally is
> A context is considered secure when it meets certain minimum standards of authentication and confidentiality defined in the Secure Contexts specification
https://w3c.github.io/webappsec-secure-contexts/
> 3. even if there was, standards are often ignored
major browsers wouldn't be major browsers if this was the case
> 4. what is true today can be false tomorrow
standards take a long time to become standard and an even longer time to be phased out. this wouldn't sneak up on anyone
> 5. this is mitigation, not security
this is a spec that provides a feature called "secure context". this is a security feature. it's in the name. it's in the spec.
sigil
This nginx local dev config snippet is one-and-done:
# Proxy to a backend server based on the hostname.
if (-d vhosts/$host) {
proxy_pass http://unix:vhosts/$host/server.sock;
break;
}
Your local dev servers must listen on a unix domain socket, and you must drop a symlink to them at e.g. /var/lib/nginx/vhosts/inclouds.localhost/server.sock. It's not a single command, and you still have to add hostname resolution. But you don't have to programmatically edit config files or restart the proxy to stand up a new dev server!
hn92726819
I'm not that familiar with nginx config. Does this protect against path traversal? Ex: host=../../../docker.sock
sigil
nginx validates hostnames per the spec, and to your question specifically it rejects requests that would put a slash in $host: https://github.com/nginx/nginx/blob/b6e7eb0f5792d7a52d2675ee...
ku1ik
This is neat!
jFriedensreich
Chrome and I think Firefox resolve all <name>.localhost domains to localhost by default, so you don't have to add them to the hosts file. I set up a docker proxy on port 80 that resolves all requests from <containername>.localhost to the first exposed port of that container (in order of appearance in the docker compose file) automatically, which makes everything smooth without manual steps for docker compose based setups.
globular-toast
Source for this? Are you sure it's not your system resolver doing it?
TingPing
There is a draft spec for it, I'll find it later, but they do hardcode it now and never touch DNS.
kbolino
It's probably both. Browsers now have built-in DoH so they usually do their own resolving. Only if you disable "secure DNS" (or use group policies) do you fall back to the system resolver.
jFriedensreich
Pretty sure it's hardcoded in the browser and never touches any resolvers. It doesn't work the same in Safari, for example.
peterldowns
If you’re interested in doing local web development with “real” domain names, valid ssl certs, etc, you may enjoy my project Localias. It’s built on top of Caddy and has a nice CLI and config file format that you can commit to your team’s shared repo. It also has some nice features like making .local domain aliases available to any other device on your network, so you can more easily do mobile device testing on a real phone. It also syncs your /etc/hosts so you never need to edit it manually.
Check it out and let me know what you think! (Free, MIT-licensed, single-binary install)
Basically, it wraps up the instructions in this blogpost and makes everything easy for you and your team.
bestham
There is also mkcert by Filippo Valsorda (no relation to mkcert.org) at https://github.com/FiloSottile/mkcert
peterldowns
Yup, mkcert is used by caddy which is used by localias :)
novoreorx
After reading this blog, I immediately thought of Localias. I use it frequently, preferring the .test domain.
CodesInChaos
How do valid certs for localhost work? Does that require installing an unconstrained root certificate to sign the dev certs? Or is there a less risky way (name constraints?)
sangeeth96
It's mentioned in the README:
- If Caddy has not already generated a local root certificate:
- Generate a local root certificate to sign TLS certificates
- Install the local root certificate to the system's trust stores, and the Firefox certificate store if it exists and can be accessed.
So yes. I had written about how I do this directly with Caddy over here: https://automagic.blog/posts/custom-domains-with-https-for-y...
CodesInChaos
But is this an unconstrained root, or does it use name constraints to limit it to localhost domains/IPs? And how does it handle/store the private key associated with that root?
lxgr
> Install the local root certificate to the system's trust stores
I really wish there was a safer way to do this, i.e. a way to tag a trusted CA as "valid for localhost use only". The article mentions this in passing
> The sudo version of the above command with the -d flag also works but it adds the certificate to the System keychain for all users. I like to limit privileges wherever possible.
But this is a clear case of https://xkcd.com/1200/.
Maybe this could be done using the name constraint extension marked as critical?
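In OpenSSL terms, something like this extension section ought to express it (a sketch; how strictly a given trust store honors critical name constraints varies):

# openssl.cnf fragment: a root CA only valid for names under .localhost
[ v3_local_ca ]
basicConstraints = critical, CA:TRUE
keyUsage        = critical, keyCertSign, cRLSign
nameConstraints = critical, permitted;DNS:.localhost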
worewood
I think an alternative to local root certs would be to use a public cert + dnsmasq on your LAN to resolve the requests to a local address.
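e.g. a single dnsmasq line (names illustrative):

# dnsmasq.conf: serve a LAN address for a name that carries a real, public cert
address=/app.example.com/192.168.1.10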
WhyNotHugo
Any subdomain of .localhost works out-of-the-box on Linux, OpenBSD and plenty of other platforms.
Of note, it doesn't work on macOS. I recall having delivered a coding assignment for a job interview long ago, and the reviewer said it didn't work for them, although the code all seemed correct to them.
It turned out on macOS, you need to explicitly add any subdomains of .localhost to /etc/hosts.
I'm still surprised by this; I always thought that localhost was a highly standard thing covered in the RFC long long ago… apparently it isn't, and macOS still doesn't handle this TLD.
telotortium
It's easy to be tricked into thinking macOS supports it, because both Chrome and curl support it. However, ping does not, nor do more basic tools like Python's requests library (and I presume urllib as well).
jwilk
> Any subdomain of .localhost works out-of-the-box on Linux
No, not here.
jchw
This usually happens because you have a Linux setup that doesn't use systemd-resolved and it also doesn't have myhostname early enough in the list of name resolvers. Not sure how many Linux systems default to this, but if you want this behavior, adjust your NSS configuration, most likely.
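On glibc systems that's one line in /etc/nsswitch.conf; putting myhostname before dns makes *.localhost resolve to loopback without hitting any DNS server (a sketch; distros vary in which other modules they list):

# /etc/nsswitch.conf (hosts line only)
hosts: files myhostname dns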
WhyNotHugo
You don't need systemd-resolved for .localhost domains to work. It works on systemd-based distros using another resolver as well as non-systemd distros.
I've never seen this not work on any distro; it must be a niche thing.
oulipo
Just did that on my mac and it seems to work?
$ ping hello.localhost
PING hello.localhost (127.0.0.1): 56 data bytes
64 bytes from 127.0.0.1: icmp_seq=0 ttl=64 time=0.057 ms
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms
tedunangst
That's because your DNS server sends back 127.0.0.1. The query isn't resolved locally.
parasti
I am doing this on macOS with no problem.
octagons
Against much well-informed advice, I use a vanity domain for my internal network at home. Through a combination of Smallstep CA, CoreDNS, and Traefik, any services I host in my Docker Swarm cluster are automatically issued a signed SSL certificate, load-balanced, and made resolvable. Traefik also allows me to configure authentication for any services that I may not wish to expose without it.
That said, I do recommend the use of the internal. zone for any such setup, as others have commented. This article provides some good reasons why (at least for .local) you should aim to use a standards-compliant internal zone: https://community.veeam.com/blogs-and-podcasts-57/why-using-...
hobo_mark
I added a fake .com record in my internal DNS that resolves to my development server. All development clients within that network have an mkcert-generated CA installed.
Not so different from you, but without even registering the vanity domain. Why is this such a bad idea?
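For reference, the mkcert side of that setup is roughly this (domain illustrative):

$ mkcert -install        # creates a local CA and adds it to the trust stores
$ mkcert dev.example.com # writes dev.example.com.pem and dev.example.com-key.pem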
szszrk
For home use it's not that bad, but there could be conflicts at some point. Your clients will unknowingly send data to the Internet when DNS is misconfigured.
It's better to use a domain you control.
I'm a fan of buying the cheapest domain to renew (like .ovh, great value) and using real Let's Encrypt certificates (via DNS challenge) for any subdomain/wildcard. That way any device will show the "green padlock" for a totally local service.
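With certbot, for example, a wildcard via DNS-01 looks roughly like this (assuming the certbot-dns-ovh plugin and an API credentials file; adjust for your registrar):

$ certbot certonly --dns-ovh \
    --dns-ovh-credentials ~/.secrets/ovh.ini \
    -d 'home.example.ovh' -d '*.home.example.ovh'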
octagons
To be clear, I didn’t register anything. I just have a configuration that serves records for a zone like “artichoke.” on my DNS server. Internal hosts are then accessible via https://gitlab.artichoke, for example.
thot_experiment
I alias home.com to my local house stuff. I don't really understand why anyone thinks it's a bad idea either.
matthewaveryusa
It's not a terrible idea. On a large scale it can lead to the corp.com issue:
https://krebsonsecurity.com/2020/02/dangerous-domain-corp-co...
Honestly for USD5/year why don't you just buy yourself a domain and never have to deal with the problem?
kreetx
I run a custom (unused) tld with mkcert the same way, with nginx virtual hosts set up for each app.
tbyehl
What's the argument against using one's own actual domain? In these modern times where every device and software wants to force HTTPS, being able to get rid of all the browser warnings is nice.
waynesonfire
I think this is ideal. You make a great point: even if you were to use the .internal TLD that is reserved for internal use, you wouldn't be able to use Let's Encrypt to get an SSL certificate for it. Not sure if there are other SSL options for .internal. But self-signed is a PITA.
I guess the lesson is to deploy a self-signed root CA in your infra early.
octagons
Check out Smallstep’s step-ca server [0]. It still requires some work, but it allows you to run your own CA and ACME server. I have nothing against just hosting records off of a subdomain and using LE as mentioned, but I personally find it satisfying to host everything myself.
smjburton
OP: If you're already using Caddy, why not just use a purchased domain (you can get some for a few dollars) with a DNS-01 challenge? This way you don't need to add self-signed certificates to your trust store and browsers/devices don't complain. You'll still keep your services private to your internal network, and Caddy will automatically keep all managed certificates renewed so there's no manual intervention once everything is set up.
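A sketch of what that looks like in a Caddyfile (assuming a Caddy build with the third-party Cloudflare DNS plugin; domain illustrative):

app.example.com {
    tls {
        dns cloudflare {env.CF_API_TOKEN}
    }
    reverse_proxy 127.0.0.1:3000
}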
whatevaa
So basically pay protection money? We have engineered such a system that the only way to use your own stuff is to pay a tax for it and rely on a centralized system, even though you don't need to be public at all?
smjburton
If you really want to keep things local without paying any fees, you could also use Smallstep (https://smallstep.com/) to issue certificates for your services. This way you only need to add one CA to your trust store on your devices, and the certificates still renew periodically and satisfy the requirements for TLS.
I suggested using a domain given they already have Caddy set up and it's inexpensive to acquire a cheap domain. It's also less of a headache in my experience.
egoisticalgoat
If you're already adding a CA to your trust store, you can just use caddy! [0] Add their local CA to your store (CA cert is valid for 10 years), and it'll generate a new cert per local domain every day.
Actually, now that I've linked the docs, it seems they use smallstep internally as well haha
[0] https://caddyserver.com/docs/automatic-https#local-https
qwertox
I was on a similar thought process, but this leaves you only with the option of setting the A record of the public DNS entry to 127.0.0.1 if you want to use it on the go.
Though you could register a name like ch.ch, get a wildcard certificate for *.ch.ch, insert local.ch.ch in the hosts file, and use the certificate in the proxy; that would even work on the go.
shadowpho
> You'll still keep your services private to your internal network,
Is that a new thing? I heard previously that if you wanted to do DNS/domains for a local network you had to expose the list externally.
smjburton
It's not, just a different way of satisfying the certificate challenge. Look into a DNS-01 challenge vs a HTTP-01 challenge. Let's Encrypt has a good breakdown: https://letsencrypt.org/docs/challenge-types/.
shadowpho
Gotcha, and that lets us avoid exposing internals? That seems like a win-win-win, I should totally do this!
nine_k
BTW you can actually give every locally-hosted app a separate IP address if you want. The entire 127.0.0.0/24 is yours, so you can resolve 127.0.0.2, 127.0.0.3, etc. as separate "hosts" in /etc/hosts or in your dnsmasq config.
Yes, this also works under macOS, but I remember there used to be a need to explicitly add these addresses to the loopback interface. Under Linux and (IIRC) Windows these work out of the box.
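e.g. (names illustrative; the ifconfig lines are the extra macOS step mentioned above):

# /etc/hosts
127.0.0.2  app1.internal
127.0.0.3  app2.internal

# macOS only: add the aliases to the loopback interface first
$ sudo ifconfig lo0 alias 127.0.0.2 up
$ sudo ifconfig lo0 alias 127.0.0.3 up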
justin_oaks
I'd recommend using some other reserved IP address block like 169.254.0.0/16 or 100.64.0.0/16 and assigning it to your local loopback interface. (Nitpick: you can actually use all of 127.0.0.0/8 instead of just 127.0.0.0/24).
I previously used differing 127.0.0.0/8 addresses for each local service I ran on my machine. It worked fine for quite a while but this was in pre-Docker days.
Later on I started using Docker containers. Things got more complicated when I wanted to access an HTTP service both from my host machine and from other Docker containers. Instead of having your services exposed differently inside a docker network and outside of it, you can consistently use the IPs and ports you expose/map.
If you're using 127.0.0.0/8 addresses then this won't work. The local loopback addresses aren't routed to the host computer when sent from a Docker container; they're routed to the container. In other words, 127.0.0.1 inside Docker means "this container", not "this machine".
For that reason I picked some other unused IP block [0] and assigned that block to the local loopback interface. Now I use those IPs for assigning to my docker containers.
I wouldn't recommend using the RFC 1918 IP blocks since those are frequently used in LANs and within Docker itself. You can use something like the link-local IP block (169.254.0.0/16), which I've never seen used outside of the AWS EC2 metadata service. Or you can use part of the carrier-grade NAT IP block (100.64.0.0/10). Or even some IP block that's assigned for public use but is never used, although that can be risky.
I use Debian Bookworm. I can bind 100.64.0.0/16 to my local loopback interface by creating a file under /etc/network/interfaces.d/ with the following
auto lo:1
iface lo:1 inet static
address 100.64.0.1
gateway 100.64.0.0
netmask 255.255.0.0
Once that's set up I can expose the port of one Docker container at 100.64.0.2:80, another at 100.64.0.3:80, etc.
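In a compose file that looks like this (images illustrative):

services:
  app1:
    image: nginx # illustrative
    ports:
      - "100.64.0.2:80:80"  # host-ip:host-port:container-port
  app2:
    image: nginx
    ports:
      - "100.64.0.3:80:80"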
g0db1t
I have no idea why this isn't the default solution, nor why Docker can't support it.
lima
On my Linux machine with systemd-resolved, this even works out of the box:
$ resolvectl query foo.localhost
foo.localhost: 127.0.0.1 -- link: lo
::1 -- link: lo
Another benefit is being able to block CSRF using the reverse proxy.
jchw
Yeah, I've been using localhost domains on Linux for a while. Even on machines without systemd-resolved, you can still usually use them if you have the myhostname module in your NSS DNS module list.
https://www.man7.org/linux/man-pages/man8/libnss_myhostname....
(There are lots of other useful NSS modules, too. I like the libvirt ones. Not sure if there's any good way to use these alongside systemd-resolved.)
aib
I ended up writing a similar plugin[1] after searching in vain for a way to add temporary DNS entries.
The ability to add host entries via an environment variable turned out to be more useful than I'd expected, though mostly for MITM(proxy) and troubleshooting.
chuckwnelson
I use .localhost for all my projects. Just one annoying note: Safari doesn't recognize the localhost TLD, so it will try to perform a search. Adding a slash at the end fixes this; i.e. example.localhost/
tapete
Luckily the easy fix is available: Do not use Safari.
subculture
When Apple's MobileMe came out I snagged the localhost@me.com email address, thinking how clever I was. But because filtering tools weren't as good back then I was never able to use it because of the truly massive amount of spam and test emails I'd get.
leshokunin
Thanks for the laugh. I wonder what test@gmail.com gets hahaha
isleyaardvark
For anyone unaware, the domain 'example.com' is specifically reserved for the purpose of testing, so you don't have to worry about some rando reading emails sent to "test@gmail.com"
watusername
I don't get it. What does gmail.com have to do with example.com?
You might check out .internal instead, which was recently approved [1] for local use.
[1]: https://en.wikipedia.org/wiki/.internal