Getting ready to issue IP address certificates
June 25, 2025
gruez
>So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right? They sometimes cycle through IPs with great velocity. Faster than 6 days at least.
>Lots of sport here, unless perhaps they cool off IPs before reallocating, or perhaps query and revoke any certs before reusing the IP?
I don't see how this is any different from custom/vanity domains you can get from cloud providers. For instance, on Azure you can assign a DNS name to your VMs in the form of myapp.westus.cloudapp.azure.com, and CAs will happily issue certificates for it[1]. There's no cool-off for those domains either, so theoretically someone else can snatch the domain from you if your VM gets decommissioned.
[1] https://crt.sh/?identity=westus.cloudapp.azure.com&exclude=e...
eddythompson80
There are in fact weird cool-off times for these cloud resources. I'm less familiar with AWS, but I know in Azure, once you delete/release one of these subdomains, it remains tied to your organization/tenant for 60 or 90 days.
You can reclaim it during that time, but any other tenant/organization will get an error that the name is in use. You can ping support to help you there if you show them you own both organizations. I was doing a migration of some resources across organizations and ran into that issue.
derefr
> So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right?
My guess is that it's going to be approached the other way around. After all, it's not the ISPs' job to issue IP addresses in conformance with TLS; it's a TLS provider's job to "validate identity" — i.e. to issue TLS certificates in conformance with how the rest of the ecosystem attaches identity to varyingly-ephemeral resources (IPs, FQDNs, etc.)
The post doesn't say how they're going to approach this one way or the other, but my intuition is that LetsEncrypt is going to have/maintain some gradually-growing whitelist for long-lived IP-issuer ASNs — and then will only issue certs for IPs that exist under those ASNs; and invalidate IP certs if their IP is ever sold to another ASN not on the list. This list would likely be a public database akin to Mozilla's Public Suffix List, that LetsEncrypt would expect to share (and possibly co-maintain) with other ACME issuers that want to do IP issuance.
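To make that concrete, a purely hypothetical sketch of such a gate (nothing Let's Encrypt has announced; the allowlist and the prefix-to-ASN table below are made-up stand-ins for real BGP data):

    import ipaddress

    # Made-up allowlist of "stable issuer" ASNs (private-use numbers) and a
    # toy prefix-to-ASN table; a real check would consult BGP data such as
    # RouteViews dumps.
    STABLE_ISSUER_ASNS = {64500, 64501}
    PREFIX_TO_ASN = {ipaddress.ip_network("198.51.100.0/24"): 64500}

    def asn_for(ip):
        addr = ipaddress.ip_address(ip)
        return next((asn for net, asn in PREFIX_TO_ASN.items() if addr in net), None)

    def may_issue_ip_cert(ip):
        return asn_for(ip) in STABLE_ISSUER_ASNS

    print(may_issue_ip_cert("198.51.100.7"))  # True under these toy tables
    print(may_issue_ip_cert("203.0.113.9"))   # False: unknown ASN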
jeroenhd
You can renew your HTTPS certificate for 90 days on the day before your domain expires. Your CA can't see whether the credit card attached to your auto-renewal has hit its limit or not.
I don't think the people using IP certificates will be the same people that abandon their IP address after a week. The most useful thing I can think of is either some very weird legacy software, or Encrypted Client Hello/Encrypted SNI support without needing a shared IP like with Cloudflare. The former won't drop IPs on a whim; the latter wouldn't succeed in setting up a connection to the real domain.
hk1337
> I wonder how many IP certs you could get for how much money with the different cloud providers.
I wonder if they'll offer wildcard certs at some point.
Hizonner
> So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right? They sometimes cycle through IPs with great velocity. Faster than 6 days at least.
... or put multiple customers on the same IP address at the same time. But presumably they wouldn't be willing to complete the dance necessary to actually get certs for addresses they were using that way.
Just in case, though, it's probably a good idea to audit basically all software everywhere to make sure that it will not use IP-based SANs to validate connections, even if somebody does, say, embed a raw IP address in a URL.
This stuff is just insanity.
schoen
There was a prior concern in the history of Let's Encrypt about hosting providers that have multiple customers on the same server. In fact, phenomena related to that led to the deprecation of one challenge method and the modification of another one, because it's important that one customer not be able to pass CA challenges on behalf of another customer just because the two are hosted on the same server.
But there was no conclusion that customers on the same server can't get certificates just because they're on the same server, or that whoever legitimately controls the default server for an IP address can't get them.
This would be a problem if clients would somehow treat https://example.com/ and https://96.7.128.175/ as the same identifier or security context just because example.com resolves to 96.7.128.175, but I'm not aware of clients that ever do so. Are you?
If clients don't confuse these things in some automated security sense, I don't see how IP address certs are worse than (or different from) certs for different customers who are hosted on the same IP address.
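A minimal sketch of that separation, reusing the address from the example above (whether 96.7.128.175 still fronts example.com is an assumption; adjust to taste):

    import socket
    import ssl

    ctx = ssl.create_default_context()

    # The reference identifier comes from the URL authority, not from what
    # a name happens to resolve to: the same server, approached under two
    # different identifiers, is validated against different SAN types.
    for ref_id in ("example.com", "96.7.128.175"):
        try:
            with socket.create_connection(("96.7.128.175", 443), timeout=5) as sock:
                with ctx.wrap_socket(sock, server_hostname=ref_id) as tls:
                    print(ref_id, "validated")
        except ssl.SSLCertVerificationError as e:
            # A dNSName SAN for example.com matches only the first identifier;
            # the bare IP would need its own iPAddress SAN.
            print(ref_id, "rejected:", e.verify_message)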
xg15
So in the case of multiple users behind a NAT, the cert for 96.7.128.175 would identify whichever party has control over the 443 port on that address?
afiori
The way in which they are worse is that IP addresses are often unstable and shuffled around, since generally the end user of the address is not its owner. It would be similar to getting a cert for myapp.github.io: technically perfectly valid, but GitHub can at any moment steal the name from you, since they are the owner, not you.
AutistiCoder
Also, HTTPS certs are for data in transit, so I see no reason why one certificate can't be used for all the websites on the same server.
Hizonner
Perhaps I didn't make myself clear. I don't think that IP certs will end up getting issued for shared servers, and definitely not in a way where tenants can impersonate one another. Not often enough to worry about, anyway.
The point was that it affects the utility of the idea.
... and don't get me started on those "challenge methods". Shudder. You'll have me ranting about how all of X.509 really just needs to be taken out and shot. See, I'm already doing it. Time for my medication...
mocko
I can see how this would work on a technical level but what's the intended use case?
infogulch
Just ESNI/ECH is a big deal.
I recall that one of the main arguments against Encrypted Server Name Indication (ESNI) was that it would only be effective for the giant HTTPS proxy services like Cloudflare, and that the idea of IP certs was floated as a solution but dismissed as a pipe dream. With IP address certificates, now every server can participate in ESNI, not just the giants. If it becomes common enough for clients to assume that all web servers have an IP cert and attempt to use ESNI on every connection, it could be a boon for privacy across the internet.
Hizonner
So is this the flow?
1. Want to connect to https://www.secret.com.
2. Resolve using DNS, get 1.2.3.4
3. Connect to 1.2.3.4, validate cert
4. Send ESNI, get separate cert for www.secret.com, validate that
... and the threat you're mitigating is presumably that you don't want to disclose the name "www.secret.com" unless you're convinced you're talking to the legitimate 1.2.3.4, so that some adversary can't spoof the IP traffic to and from 1.2.3.4, and thereby learn who's making connections to www.secret.com. Is that correct?
But the DNS resolution step is still unprotected. So, two broad cases:
1. Your adversary can subvert DNS. In this case IP certificates add no value, because they can just point you to 5.6.7.8, and you'll happily disclose "www.secret.com" to them. And you have no other path to communicate any information about what keys to trust.
2. Your adversary cannot subvert DNS. But if you can rely on DNS, then you can use it as a channel for key information; you include a key to encrypt the ESNI for "www.secret.com" in a DNS record. Even if the adversary can spoof the actual IP traffic to and from 1.2.3.4, it won't do any good because it won't have the private key corresponding to that ESNI key in the DNS. And those keys are already standardized.
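(That channel is standardized today as the ech parameter of the HTTPS/SVCB record, the successor to the old ESNIKEYS TXT draft. A minimal look-up sketch, assuming dnspython 2.x and a host that still publishes ECH keys at the time of writing:)

    import dns.resolver  # pip install dnspython

    # crypto.cloudflare.com has historically published an ECH config; any
    # host with an "ech=..." SvcParam in its HTTPS record will do.
    for rdata in dns.resolver.resolve("crypto.cloudflare.com", "HTTPS"):
        print(rdata)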
So what adversary is stopped by IP certificates who isn't already stopped by the ESNI key records in the DNS?
infogulch
Sure, I agree, the next increment in privacy comes with using DoT/DoH (in fact some browsers require this to use ESNI at all). Probably throw in DNSSEC next. Having IP certs is just one more (small) step in that direction.
> you include a key to encrypt the ESNI for "www.secret.com" in a DNS record
I've never heard of this, is this a thing that exists today? (edited to remove unnecessary comment)
nine_k
The point is in not showing the watching adversary any DNS names at all. You do DoH, you do the IP cert, you enter TLS before naming any names. The www.secret.com is never visible in plaintext.
Only helpful if the IP itself is not incriminating or revealing, that is, it's an IP from a large pool like Cloudflare, GCP, AWS, etc.
To my mind, it's much more interesting to verify that an address like 1.1.1.1 or 8.8.8.8 is what it purports to be, but running UDP DNS over TLS is still likely not practical, and DoH already exists, so I don't see how helpful it is here.
tptacek
Presumably you're encrypting DNS.
duskwuff
> If it becomes common enough for clients to assume that all web servers have an IP cert
That's never going to be a safe assumption; private and/or dynamically assigned IP addresses are always going to be a thing.
move-on-by
Plenty of other responses with good use cases, but I didn’t see NTS mentioned.
If you want to use NTS but can't get an IP cert, then you are left requiring DNS before you can get a trusted time. If DNS is down, then you can't get the time. A common issue with DNSSEC is having the wrong time, causing validation failures. If you have DNSSEC enforced and have the wrong time, but NTS depends on DNS, then you are out of luck with no way to recover. Having the IP as part of your cert allows trusted time without the DNS requirement, which can then fix your broken DNSSEC enforcement.
Hizonner
How are you going to validate an X.509 certificate if you don't know the time?
move-on-by
Oh, this is a good point! Looking at my DNSSEC domain (hosted by Cloudflare) on https://dnssec-debugger.verisignlabs.com, the Inception Time and Expiration Time seem to be valid for... 3.5 days? This isn't something I look at much, but I assume that is up to the implementation. The new short-lived cert is valid for 6 days. So, from a very rough look, I expect the X.509 certificate is going to be less time-sensitive than DNSSEC, but only by a few days. Also, very likely to depend on implementation details. This is a great point.
codys
This seems possible to avoid as an issue without needing IP certs: have the configuration supply both an IP and a hostname, with the hostname used for the TLS validation.
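A minimal sketch of that pattern, with made-up address and hostname, and NTS-KE framed per RFC 8915 (TLS on port 4460, ALPN "ntske/1"):

    import socket
    import ssl

    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["ntske/1"])  # NTS key establishment, RFC 8915

    # Dial a fixed IP (no DNS needed), but validate the certificate against
    # a hostname supplied out of band by the configuration.
    with socket.create_connection(("203.0.113.5", 4460)) as sock:
        with ctx.wrap_socket(sock, server_hostname="time.example.com") as tls:
            print("negotiated", tls.selected_alpn_protocol())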
move-on-by
Yes, that is absolutely possible, but it doesn't mean that will be the default. I commented recently [0] about Ubuntu's decision to have only NTS enabled (via domain) by default on 25.10. That raises the question of how system time can be set if the initial time is outside of the cert's validity time-frame. I didn't look, but perhaps Chrony would still use the local network's published NTP servers.
szszrk
Sometimes you want to have valid certs while your DNS is undergoing a major redesign. For instance, to keep your dashboards available, or to be triple sure no old automation will fail due to DNS issues.
In other cases, DNS is just not needed at all. You might prefer simplicity and independence from DNS propagation, so you can have your, say, Cockpit exposed instantly on a test env.
Only our imagination limits us here.
Hizonner
So go to keys-are-names.
There's no reason AT ALL to bring IP addresses into the mix.
nine_k
Consider Wireguard: it works at IP level, but gives you identity by crypto key. You can live without proper DNS in a small internal network.
(This obviously lives well without the IP certs under discussion.)
szszrk
> So go to keys-are-names.
Elaborate, please.
> There's no reason AT ALL to bring IP addresses into the mix.
Not sure what scenario you are talking about, but IPs are kind of hard to avoid. DNS is trivial to avoid: you can simply not set it up.
"Bringing IPs into the mix" is literally the only possible option.
ff317
It might be interesting for "opportunistic" DoTLS towards authdns servers, which might listen on the DoTLS port with a cert containing a SAN that matches the public IP of the authdns server. (You can do this now with authdns server hostnames, but there could be many varied names for one public authdns IP, and this kinda ties things together more clearly and directly.)
jeroenhd
It might also be useful to hide the SNI in HTTPS requests. With the current status of ESNI/ECH you need some kind of proxy domain, but for small servers that only host a few sites, every domain may be identifiable (as opposed to, say, a generic Cloudflare certificate or a generic Azure certificate).
throitallaway
I'm guessing mostly hobbyists and one-off use cases where people don't care to associate a hostname to a project.
hypeatei
One use-case is connecting to a DoT (DNS-over-TLS) server directly rather than using a hostname. If you make a TLS connection to an IP address via OpenSSL, it will verify the IP SAN and fail if it's not there.
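Python's ssl module behaves the same way: in recent versions (3.7+), server_hostname may be a bare IP, matched against the certificate's iPAddress SAN. A small sketch against a public DoT resolver:

    import socket
    import ssl

    ctx = ssl.create_default_context()

    with socket.create_connection(("1.1.1.1", 853)) as sock:  # DoT port
        # The handshake fails unless the certificate carries an iPAddress
        # SAN matching 1.1.1.1.
        with ctx.wrap_socket(sock, server_hostname="1.1.1.1") as tls:
            print(tls.getpeercert()["subjectAltName"])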
teaearlgraycold
Not common, but there is the use case of vanity IPs. The cert for https://1.1.1.1 is signed for the IP as well as the domain name one.one.one.one
0xbadcafebee
Nice, another exploit for TLS. The previous ones were all about generating valid certs for a domain you don't own. This one will be for generating a cert for an IP you don't own. The blackhats will be hooting and hollering on telegram :)
Hizonner
So does anybody have a pointer to the official justification for this insanity?
ameliaquining
The announcement is https://letsencrypt.org/2025/01/16/6-day-and-ip-certs/. I don't think it's more complicated than: there exist services that for one reason or another don't have a domain name and are instead accessible by a public static IP address, and they need TLS certificates for security, and other CAs offer this, so Let's Encrypt should too. Is there any specific reason why they shouldn't?
leoh
It seems to me they could just as easily issue subdomains and certs for said IPs and make the whole thing infinitely safer.
parliament32
I could see the opposite argument: with domain names, who knows; someone could steal one or hack the registrar, the registrar could be evil, DNS servers could be untrusted and/or evil or MITM'd... connecting to an IP, you're engineering out entire classes of weaknesses in the scheme.
Hizonner
Hmm. Absolutely no explanation of why there's a need. Given only that announcement, I'd have to assume that the reason is "because we can".
So the first reason not to do it is that you never want to change software without a good reason. And none of the use cases anybody's named here so far hold water. They're all either skill issues already well addressed by existing systems, or fundamental misunderstandings that don't actually work.
Changing basic assumptions about naming is an extra bad idea with oak leaf clusters, because it pretty much always opens up security holes. I can't point to the specific software where somebody's made a load-bearing security assumption about IP address certificates not being available (more likely a pair of assumptions "Users will know about this" and "This can't happen/I forgot about this")... but I'll find out about it when it breaks.
Furthermore, if IP certificates get into wide use (and Let's Encrypt is definitely big enough to drive that), then basically every single validator has to have a code path for IP SANs. Saying "you don't have to use it" is just as much nonsense as saying "you don't have to use IP". Every X.509 library ends up with a code path for IP SANs, and it effectively can't even be profiled out. Every library is that much bigger and that much more complicated and needs that much more maintenance and testing. It's a big externalized cost. It would be better to change the RFCs to deprecate IP SANs; they never should have been standardized to begin with.
It also encourages a whole bunch of bad practices that make networks brittle and unmaintainable. You should almost never see an IP address outside of a DNS zone file (or some other name resolution protocol). You can argue that people shouldn't do stupid things like hardwiring IP addresses even if they're given the tools... but that's no consolation to the third parties downstream of those stupid decisions.
... and it doesn't even work for all IP addresses, because IP addresses aren't global names. So don't forget to special-case the locally administered space in every single piece of code that touches an X.509 certificate.
ameliaquining
TLS certificates for IP addresses are already a thing that exists. You can, for instance, go to https://1.1.1.1 in your browser right now (it used to actually serve the HTML from there but now it's a redirect). If that doesn't work in a given TLS client, this will be treated as a bug in that client, and rightly so. The genie is out of the bottle; nobody is going to remove support for things that work today just because it'd be slightly cleaner. So TLS clients are already paying the maintainability costs of supporting IP address certificates; this isn't a new change.
I'm not sure why private IP addresses would need to be treated differently other than by the software that issues certs for publicly trusted CAs (which is highly specialized and can handle the few extra lines of code, it's not a big cost for the whole ecosystem). Private CAs can and do issue certs for private IP addresses.
Also, how would DoH or DoT work without this?
fredfish
Hizonner
I'm sorry, but how is "Require validation of DNSSEC (when present) for CAA and DCV Lookups" related to issuing X.509 certs that include IP address SANs? I don't see any connection, and I didn't spot anything about it on a quick skim of the comments.
fredfish
Anything from people who are afraid of increasingly onerous DNS requirements, to breakage because they can't fix their parent domain's DNSSEC misconfiguration. It seems like an interesting timing coincidence to me, so I wonder if there's some (ir)rational explanation. (Implementing a new SAN that must inherently have the gap you are finally addressing is not a bit funny to you?)
vkdelta
Does it help with getting encrypted HTTPS (without a self-signed-cert error) on my local router? 192.168.0.1 being an example login page.
qmarchi
No, but maybe yes: it would be impossible, and undesirable, to issue certificates for local addresses. There's no way to verify local addresses because, inherently, they're local and not globally routable.
However, if a router manufacturer was so inclined, they _could_ have the device request a certificate for their public IPv4 address, given that it's not behind CG-NAT. v6 should be relatively easy since (unless you're at a cursed ISP) all v6 is generally globally routable.
ameliaquining
No, they won't issue a certificate for a private IP address because you don't have exclusive control over it (i.e., the same IP address would point to a different machine on someone else's network).
jekwoooooe
No, and it shouldn't. You can just run a proxy with a real domain and a real cert, and then use DNS rewrites to point that domain to a local host.
For example, you can use Nginx Proxy Manager if you want a UI, and AdGuard for DNS. Set your router to use AdGuard as the exclusive DNS. Add a rewrite rule for your domain to point to the proxy. Register the domain and get a real cert. Problem solved.
All of my local services use HTTPS.
remram
No, on the contrary. You can't get a valid certificate for a non-global IP, but you can already get a certificate for a domain name and point it to 192.168.0.1.
johnklos
You have to possess the IP.
dark-star
No, but you can do something closely related:
- get a domain name (foo.com) and get certificates for *.foo.com
- run a DNS resolver that maps a.b.c.d.foo.com (or a-b-c-d.foo.com) to the corresponding private IP a.b.c.d
- install the foo.com certificate on that private IP's device
then you can connect to devices in your local network via IP by using https://192-168-1-1.foo.com
Since you need to install the certificate in step 3 above, this works better with long-lived certificates, of course, but automation helps there.
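The resolver side (step 2) is mostly just label parsing, as sslip.io-style services do. A sketch, with foo.com standing in for the hypothetical domain:

    import ipaddress

    def name_to_ip(qname, suffix=".foo.com."):
        # "192-168-1-1.foo.com." -> "192.168.1.1"; None if the label isn't
        # a well-formed dotted quad.
        if not qname.endswith(suffix):
            return None
        try:
            return str(ipaddress.IPv4Address(qname[:-len(suffix)].replace("-", ".")))
        except ipaddress.AddressValueError:
            return None

    print(name_to_ip("192-168-1-1.foo.com."))  # 192.168.1.1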
michaelt
I considered doing that for a project once.
Then I realised that when my internet was down, 192-168-1-1.foo.com wouldn't resolve. And when my internet is down is exactly when I want to access my router's admin page.
I decided simply using unencrypted HTTP is a much better choice.
yjftsjthsd-h
> Then I realised that when my internet was down, 192-18-1-1.foo.com wouldn't resolve.
Just add a local DNS entry on your local DNS server (likely your router).
OptionOfT
Interesting, there is no subject in the example cert shown.
Is this because the certificate was requested for the IP, and other DNS entries were part of the SAN?
jaas
We (Let's Encrypt) are getting rid of subject common names and moving to just using subject alternative names.
This change has been made in short-lived (6 day) certificate profiles. It has not been made for the "classic" profile (90 day).
richm44
Time for me to dust off CVE-2010-3170 again? :-)
NicolaiS
I guess a bunch of "roll your own X.509 validation"-logic will have that bug, but to exploit it you need a misbehaving CA to issue you such a cert (i.e. low likelihood)
zdw
This seems to be for public IP addresses, not private RFC 1918 IPv4 range addresses.
The only challenges possible are HTTP and TLS-ALPN, not DNS, so the "proof" that you own the IP is that LetsEncrypt can contact it?
ameliaquining
Yes, which is the same way control of a domain name is typically checked; DNS is only used in a minority of cases as it can't be as turnkey.
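For http-01 specifically, the CA dereferences http://<ip>/.well-known/acme-challenge/<token> and expects the key authorization back (RFC 8555, section 8.3). A toy responder sketch; the token and thumbprint are placeholders, not values from a real ACME session:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    TOKEN = "hypothetical-token"
    KEY_AUTH = b"hypothetical-token.hypothetical-account-thumbprint"

    class AcmeChallenge(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/.well-known/acme-challenge/" + TOKEN:
                self.send_response(200)
                self.send_header("Content-Type", "application/octet-stream")
                self.end_headers()
                self.wfile.write(KEY_AUTH)
            else:
                self.send_error(404)

    # Must be reachable on port 80 at the IP under validation.
    HTTPServer(("0.0.0.0", 80), AcmeChallenge).serve_forever()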
Hizonner
Having DNS available wouldn't be any more "proof". The person applying gets to choose which form of proof will be provided, so adding more options can only ever make it easier to "prove" things.
foresto
I expect SAN in this case means Subject Alternative Name, not Storage Area Network.
Sigh... I wish people would use their words before trotting out possibly-ambiguous (or obscure) acronyms. It would help avoid confusion among readers who don't live and breathe the topic on the writer's mind.
Operyl
There's only one, and not really obscure, interpretation of this acronym in a technical forum post announcement from a TLS certificate authority; the context was sufficient.
parliament32
If you don't know how to interpret "SAN" in a blog post from a TLS certificate issuer, I don't think you're the target audience for this post.
foresto
Lots of people on HN are not the target audience for any given post, yet are still interested.
(And my point applies to all writing and speaking, not just this post.)
mcpherrinm
If it were a blog post or announcement, we'd have surely included more context; this was a forum post really intended for limited distribution.
You just used HN without expanding that acronym! :)
XorNot
It's standard academic writing practice to introduce the full acronym on first usage in any given text.
Way more people should be familiar with the concept since it's very useful and ensures clear communications.
NewJazz
OK, but how hard is a link to Wikipedia?
zaik
Interesting. I wonder if XMPP federation would work with such a certificate.
giancarlostoro
Are there public XMPP servers using just the IP for the host? Never heard of this, I could see this being the case internally.
timewizard
I've personally never felt comfortable using regexes to solve production problems. The certificate code referenced here shows why:
https://github.com/mozilla-firefox/firefox/blob/d5979c2a5c2e...
Yikes.
ameliaquining
I think that's not doing anything security-critical, it's just formatting an IPv6 address for display in the certificate-viewer UI.
cpburns2009
All that regex does is split an IPv6 address into groups of 4 digits, join them with ":", and collapse any sequence of ":0000:" to "::". I don't see anything problematic with it.
timewizard
> and collapses any sequence of ":0000:" to "::"
Which is an error. Any IP like 2001:0000:0000::1 is going to be incorrect. It willingly produces errors. Whoever wrote this didn't even spend a few seconds thinking about the structure of IPv6 addresses.
> I don't see anything problematic with it.
Other than it being completely wrong and requiring a regex to be compiled for an amount of work that's certainly less than the compilation itself.
cpburns2009
It only operates on a 32-digit IPv6 address, so it won't already be abbreviated. My phrasing was inexact: it replaces only the first sequence of any number of ":0000:" groups with "::".
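A rough Python sketch of that described behavior (the Firefox original is JavaScript; this is a reimplementation for illustration, not the actual code):

    import re

    def format_ipv6(hex32):
        # Input is the raw 32-hex-digit address from the cert, never a
        # pre-abbreviated string like "2001:0000:0000::1".
        groups = re.findall(r"[0-9a-f]{4}", hex32.lower())  # 8 groups of 4
        addr = ":".join(groups)
        # Collapse only the first run of all-zero groups to "::".
        return re.sub(r"(:0000)+:", "::", addr, count=1)

    print(format_ipv6("20010DB8000000000000000000000001"))  # 2001:0db8::0001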
remram
> Any ip like 2001:0000:0000::1 is going to be incorrect.
This is neither a possible input nor a possible output of that code.
ephou7
> Other than it being completely wrong and requiring a regex to be compiled for an amount of work that's certainly less than the compilation itself.
It's not. And the sequence you describe is not even parsed, because colons are not part of the IPv6 extension of the SAN. Please educate yourself before spilling such drivel.
baobun
Unless you see a glaring issue I don't: I think you are getting the causality wrong there. You "Yikes" because of your discomfort and lack of practice with regexes.
timewizard
> You "Yikes" because of your discomfort and lack of practice with regexes.
That's exceptionally presumptuous, to the point of being snotty.
> I think you are getting the causality wrong there.
Where did I imply causality? This was simply an occasion to look at the code. This is bad code. I would not pass this. What's your _justification_ for using a regex here?
baobun
> Where did I imply causality?
> > The certificate code referenced here shows why
So what's the implication here, then?
> This is bad code.
Without justifying further I think we're on equal footing on the snottiness here (:
What's bad? Why not use a regex here? It's not like they're using it to parse user-controlled HTML. Simple string transformations like this are a great use case, where manual character iteration easily becomes inefficient and messy. And you may introduce bugs in the process (unicode length bugs are common).
Do you also avoid grep and sed without the -F flag in shell?
So all the addressing bodies (e.g., ISPs and cloud providers) are on board then right? They sometimes cycle through IPs with great velocity. Faster than 6 days at least.
Lots of sport here, unless perhaps they cool off IPs before reallocating, or perhaps query and revoke any certs before reusing the IP?
If the addressing bodies are not on board, then it's the user's responsibility to validate the host header and reject unwanted IP-address-based connections until any legacy certs are gone, or to revoke any legacy certs. Or just wait to use your shiny new IP?
I wonder how many IP certs you could get for how much money with the different cloud providers.