
TLS certificate lifetimes will officially reduce to 47 days

bob1029

What's the end game here? I agree with the dissent. Why not make it 30 seconds?

Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours? I am willing to bet money this threshold will never be crossed.

This feels like much more of an ideological mission than a practical one, unless I've missed some monetary/power advantage to forcing everyone to play musical chairs with their entire infra once a month...

mcpherrinm

I'm on the team at Let's Encrypt that runs our CA, and would say I've spent a lot of time thinking about the tradeoffs here.

Let's Encrypt has always self-imposed a 90 day limit, though of course with this ballot passing we will now have to reduce that to 47 days or less in the future.

Shorter lifetimes have several advantages:

1. Reduced pressure on the revocation system. For example, if a domain changes hands, then any previous certificates spend less time in the revoked state. That makes CRLs smaller, a win for everyone involved.

2. Reduced risk for certificates which aren't revoked but should have been, perhaps because a domain holder didn't know that a previous holder of that domain had it, or an attack of any sort that led to a certificate being issued that wasn't desired.

3. For fully short-lived certs (under 7 days), many user-agents don't do revocation checks at all, because that's a similar timeline to our existing revocation technology taking effect. This is a performance win for websites/user-agents. While we advocate for full certificate automation, I recognize there are cases where that's not so easy, and doing a monthly renewal may be much more tractable.

Going to shorter than a few days is a reliability and scale risk. One of the biggest issues with scale today is that Certificate Transparency logs, while providing great visibility into what certs exist (see points 1 and 2), will have to scale up significantly as lifetimes are cut.

Why is this happening now, though? I can't speak for everyone, and this is only my own opinion on what I'm observing, but: One big industry problem that's been going on for the last year or two is that CAs have found themselves in situations where they need to revoke certificates because of issues with those certificates, but customers aren't able to respond on an appropriate timeline. So the big motivation for a lot of the parties here is to get these timelines down and drive a real push towards automation.
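
For a sense of scale, a renewal job mostly just watches the gap between NotBefore and NotAfter on the leaf certificate. A minimal Go sketch (hostname is a placeholder) that reads those values off a live connection:

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // PeerCertificates[0] is the leaf; its validity window is the
        // number a renewal job watches.
        leaf := conn.ConnectionState().PeerCertificates[0]
        lifetime := leaf.NotAfter.Sub(leaf.NotBefore)
        remaining := time.Until(leaf.NotAfter)
        fmt.Printf("lifetime: %v, remaining: %v\n",
            lifetime.Round(time.Hour), remaining.Round(time.Hour))
    }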

noveltyaccount

When I first set up Let's Encrypt I thought I'd manually update the cert once per year. The 90 day limit was a surprise. This blog post helped me understand (it repeats many of your points) https://letsencrypt.org/2015/11/09/why-90-days/

0xbadcafebee

So it's being pushed because it'll be easier for a few big players in industry. Everybody else suffers.

da_chicken

It's a decision by Certificate Authorities, the ones that sell TLS certificate services, and web browser vendors. One benefits from increased demand for their product, while the other benefits from the increased management overhead, which raises the minimum threshold needed to be competitive.

There are security benefits, yes. But as someone that works in infrastructure management, including on 25 or 30 year old systems in some cases, it's very difficult to not find this frustrating. I need tools I will have in 10 years to still be able to manage systems that were implemented 15 years ago. That's reality.

Doubtless people here have connected to their router's web interface using the gateway IP address and been annoyed that the web browser complains so much about either insecure HTTP or an unverified TLS certificate. The Internet is an important part of computer security, but it's not the only part of computer security.

I wish technical groups would invest some time in real solutions for long-term, limited access systems which operate for decades at a time without 24/7 access to the Internet. Part of the reason infrastructure feels like running Java v1.3 on Windows 98 is because it's so widely ignored.

tptacek

Or, equivalently, it's being pushed because customers of "big players", of which there are a great many, are exposed to security risk by the status quo that the change mitigates.

mcpherrinm

It makes the system more reliable and more secure for everyone.

I think that's a big win.

The root reason is that revocation is broken, and we need to do better to get the security properties we demand of the Web PKI.

yellowapple

"Suffer" is a strong word for those of us who've been using things like Let's Encrypt for years now without issue.

ignoramous

Unless I misunderstood, GP mentions that the problem stems from WebPKI's central role in server identity management. Think of these cert lifetimes as forcefully being signed out after 47 days of being signed in.

> easier for a few big players in industry

Not necessarily. As OP mentions, more certs would mean bigger CT logs. More frequent renewals mean more load. Like with everything else, this seems like a trade-off. Unfortunately for you & me, as customers of cert authorities, 47 days is where the agreed cut-off now is (not 42).


klaas-

I think a very short-lived cert (like 7 days) could be a problem when renewal errors/failures don't self-correct but need manual intervention.

What will Let's Encrypt be like with 7-day certs? Will it renew them every day (6 days of reaction time), or every 3 days (4 days of reaction time)? Not every org is staffed 24/7, some people go on holidays, some public holidays extend to long weekends, etc. :). I would argue that it would be a good idea to give people a full week to react to renewal problems. That seems impossible for short-lived certs.
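
The arithmetic is easy to make concrete: with a fixed lifetime, the reaction time you get after a failed renewal is just lifetime minus the renewal interval. A tiny Go sketch (values illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    // reactionTime: after a renewal failure, the last good cert is at most
    // renewEvery old, so this much validity is left to fix things.
    func reactionTime(lifetime, renewEvery time.Duration) time.Duration {
        return lifetime - renewEvery
    }

    func main() {
        day := 24 * time.Hour
        fmt.Println(reactionTime(7*day, day))     // 144h0m0s = 6 days
        fmt.Println(reactionTime(7*day, 3*day))   // 96h0m0s  = 4 days
        fmt.Println(reactionTime(47*day, 17*day)) // 720h0m0s = 30 days
    }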

iqandjoke

Like the Apple case. Apple already asks their developers to re-sign apps every 7 days. It should not be a problem.

grey-area

Since you’ve thought about it a lot, in an ideal world, should CAs exist at all?

mcpherrinm

There's no such thing as an ideal world, just the one we have.

Let's Encrypt was founded with a goal of rapidly (within a few years) helping get the web to as close to 100% encrypted as we could. And we've succeeded.

I don't think we could have achieved that goal any way other than being a CA.

Ajedi32

In an ideal world where we rebuilt the whole stack from scratch, the DNS system would securely distribute key material alongside IP addresses and CAs wouldn't be needed. Most modern DNS alternatives (Handshake, Namecoin, etc) do exactly this, but it's very unlikely any of them will be usurping DNS anytime soon, and DNS's attempts to implement similar features have been thus far unsuccessful.

JackSlateur

No they should not

DANE is the way (https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...)

But no browser has support for it, so... :/
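
For reference, the common DANE-EE form is a "3 1 1" TLSA record: usage 3 (end-entity), selector 1 (SubjectPublicKeyInfo), matching type 1 (SHA-256). A Go sketch (file path is a placeholder) of computing the value such a record would carry:

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, _ := os.ReadFile("server.pem") // placeholder path
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Selector 1 + matching type 1: SHA-256 over the SPKI.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        // Would be published as: _443._tcp.example.com. IN TLSA 3 1 1 <hex>
        fmt.Println(hex.EncodeToString(sum[:]))
    }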

throwaway2037

This is a great question. If we don't have CAs, how do we know if it is OK to trust a cert?

Are there any reasonable alternatives to CAs in a modern world? I have never heard any good proposals.

thayne

In an ideal world we could just trust people not to be malicious, and there wouldn't be any need to encrypt traffic at all.

WJW

How relevant is that since we don't live in such a world? Unless you have a way to get to such a world, of course, but even then CAs would need to keep existing until you've managed to bring the ideal world about. It would be a mistake to abolish them first and only then start on idealizing the world.

klysm

CAs exist on the intersection of reality (far from ideal) and cryptography.

Stefan-H

What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.

efortis

4. Encrypted traffic hoarders would have to break more certs.

anakaine

I love the push that LE puts on industry to get better.

I work in a very large organisation and I just don't see them being able to go to automated TLS certificates for their self-managed subdomains, inspection certificates, or anything else for that matter. It will be interesting to see how the short-lived certs are adopted in the future.

ryao

Could you explain why Let's Encrypt is dropping OCSP stapling support, instead of dropping it for all but must-staple certificates and letting those of us who want must-staple deal with the headaches? I believe that resolving the privacy concerns raised about OCSP did not require eliminating must-staple.
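
For anyone curious whether a given server staples at all, a quick Go sketch (hostname is a placeholder); an empty OCSPResponse means no staple was sent on the handshake:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // The stapled OCSP response, if any, rides along in the handshake.
        staple := conn.ConnectionState().OCSPResponse
        fmt.Printf("stapled OCSP response: %d bytes\n", len(staple))
    }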

cm2187

All of that in case the previous owner of the domain attempts a MITM attack against a client of the new owner, which is such a remote scenario. In fact, has it happened even once?

woodruffw

The "end game" is mentioned explicitly in the article:

> Shorter lifetimes mitigate the effects of using potentially revoked certificates. In 2023, CA/B Forum took this philosophy to another level by approving short-lived certificates, which expire within 7 days, and which do not require CRL or OCSP support.

Shorter-lived certificates make OCSP and other revocation mechanisms less of a load-bearing component within the Web PKI. This is a good thing, since neither CAs nor browsers have managed to make timely revocation methods scale well.

(I don't think there's any monetary or power advantage to doing this. The reason to do it is because shorter lifetimes make it harder for server operators to normalize deviant certificate operation practices. The reasoning there is the same as with backups or any other periodic operational task: critical processes must be continually tested and evaluated for correctness.)

sitkack

Don't lower cert lifetimes also get people to trust certs that were created just for their session, to MITM them?

That is the next step in nation state tapping of the internet.

woodruffw

I don't see why it would; the same basic requirements around CT apply regardless of certificate longevity. Any CA caught enabling this kind of MITM would be subject to expedient removal from browser root programs, but with the added benefit that their malfeasance would be self-healing over a much shorter period than was traditionally allowed.

ezfe

lol no? Shorter-lived certs still chain up to the root certificates that are already trusted. It is not a noticeable thing when browsing the web as a user.

A MITM cert would need to be manually trusted, which is a completely different thing.

notatoad

>unless I've missed some monetary/power advantage

the power dynamic here is that the CAs have a "too big to fail" inertia, where they can do bad things without consequence because revoking their trust causes too much inconvenience for too many people. shortening expiry timeframes to the point where all their certificates are always going to expire soon anyways reduces the harm that any one CA can do by offering bad certs.

it might be inconvenient for you to switch your systems to accommodate shorter expiries, but it's better to confront that inconvenience up front than for it to be in response to a security incident.

michaelt

> Once we cross the threshold of "I absolutely have to automate everything or it's not viable to use TLS anymore", why do we care about providing anything beyond ~48 hours?

Well you see, they also want to be able to break your automation.

For example, maybe your automation generates a 1024 bit RSA certificate, and they've decided that 2048 bit certificates are the new minimum. That means your automation stops working until you fix it.

Doing this with 2-day expiry would be unpopular as the weekend is 2 days long and a lot of people in tech only work 5 days a week.

timewizard

If the service becomes unavailable for 48 straight hours then every certificate expires and nothing works. You probably want a little more room for catastrophic infrastructure problems.

fs111

Load on the underlying infrastructure is a concern. The signing keys are all in HSMs and don't scale infinitely.

bob1029

How does cycling out certificates more frequently reduce the load on HSMs?

timmytokyo

It's all relative. A 47-day cycle increases the load, but a 48-hour cycle would increase it substantially more.

woodruffw

Much of the HSM load within a CA is OCSP signing, not subscriber cert issuance.

karlgkk

> Why not make it 30 seconds?

This is a ridiculous straw man.

> 48 hours. I am willing to bet money this threshold will never be crossed.

That's because it won't be crossed and nobody serious thinks it should.

Short certs are better, but there are trade-offs. For example, if cert infra goes down over the weekend, it would really suck. TBH, from a security perspective, something in the range of a couple of minutes would be ideal, but that runs up against practical constraints:

- cert transparency logs and other logging would need to be substantially scaled up

- for the sake of everyone on-call, you really don't want anything shorter than a reasonable amount of time for a human to respond

- this would cause issues with some HTTP3 performance enhancing features

- thousands of servers hitting a CA creates load that outweighs the benefit of ultra short certs (which have diminishing returns once you're under a few days, anyways)

> This feels like much more of an ideological mission than a practical one

There are numerous practical reasons, as mentioned here by many other people.

Resisting this without good cause, like you have, is more ideological at this point.

pixl97

Heh, working with a number of large companies I've seen most of them moving to internally signed certs on everything because of ever-shortening expiration times. They'll have public certs on edge devices/load balancers, but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.

plorkyeran

This is a desired outcome. The WebPKI ecosystem would really like it if everyone stopped depending on them for internal things because it's actually a pretty different set of requirements. Long-lived certs with an internal CA makes a lot of sense and is often more secure than using a public CA.

tetha

Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

It's been a huge pain, as we have encountered a ton of bugs and missing features in libraries and applications around reloading certs like this. And we have some really ugly workarounds in place, because some applications put "reload a consul client" on the same level as "reload all config, including opening new sockets, adjusting socket parameters, doing TCP connection handover" - all to rebuild a stateless client that throws a few parameters at a standard HTTP client. But oh well.

But I refuse to back down. Reload your certs and your secrets. If we encounter a situation in which we have to mass-revoke and mass-reissue internal certs, it'll be easy for those who do. I don't have time for everyone else.
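
For anyone fighting the same battle: on the serving side, the reload pattern can be small if the stack lets the TLS layer fetch the keypair per handshake instead of loading it once at startup. A hedged Go sketch (paths are placeholders; a real version would cache and only re-read on change):

    package main

    import (
        "crypto/tls"
        "net/http"
    )

    func main() {
        cfg := &tls.Config{
            // Called per handshake, so a renewed cert on disk is picked
            // up without restarting the process.
            GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                cert, err := tls.LoadX509KeyPair("/etc/certs/tls.crt", "/etc/certs/tls.key")
                if err != nil {
                    return nil, err
                }
                return &cert, nil
            },
        }
        srv := &http.Server{Addr: ":8443", TLSConfig: cfg}
        // Empty cert/key paths: TLSConfig.GetCertificate supplies them.
        panic(srv.ListenAndServeTLS("", ""))
    }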

donnachangstein

> Our internally provided certs of various CAs have a TTL of 72 hours and should be renewed every 48 hours.

Do you promise to come back and tell us the story about when someone went on vacation and the certs issued on a Thursday didn't renew over the weekend and come Monday everything broke and no one could authenticate or get into the building?

tptacek

Some of this rhymes with Colm MacCárthaigh's case against mTLS.

https://news.ycombinator.com/item?id=25380301

OptionOfT

This has been our issue too. We've had mandates for rotating OAuth secrets (client ID & client secret).

Except there are no APIs to rotate those. The infrastructure doesn't exist yet.

And refreshing those automatically does not validate ownership, unlike certificates where you can do a DNS check or an HTTP check.

Microsoft has some technology where, next to these tokens, they also have a per-machine certificate that is used to sign requests, and those certificates can't leave the machine.

rlpb

Browsers aren't designed for internal use though. They insist on HTTPS for various things that are intranet-only, such as some browser APIs, PWAs, etc.

akerl_

As is already described by the comment thread we're replying in, "internal use" and "HTTPS" are very compatible. Corporations can run an internal CA, sign whatever internal certs they want, and trust that CA on their devices.

Spooky23

Desired by who?

There’s nothing stopping Apple and Google from issuing themselves certificates every 10 minutes. I get no value from doing this. Building out or expanding my own PKI for my company, or setting up the infrastructure to integrate with Digicert or whomever, gets me zero security and business value, just cost and toil.

Revocation is most often an issue when CAs fuck up. So now we collectively need to pay to cover their rears.

crote

CAs fucking up every once in a while is inevitable. It is impossible to write guaranteed bug-free software or train guaranteed flawless humans.

The big question is what happens when (not "if") that happens. Companies have repeatedly shown that they are unable to rotate certs in time, to the point of even suing CAs to avoid revocation. They've been asked nicely to get their shit together, and it hasn't happened. Shortening cert lifetime to force automation is the inevitable next step.

christina97

What do you mean “WebPKI … would like”. The browser vendors want one thing (secure, ubiquitous, etc), the CAs want a very different thing (expensive, confusing, etc)…

ozim

Problem is browsers will most likely follow the enforcement of short certificates so internal sites will be affected as well.

Non-browser things usually don't care even if a cert is expired or untrusted.

So I expect people still to use WebPKI for internal sites.

akerl_

The browser policies are set by the same entities doing the CAB voting, and basically every prior change around WebPKI has only been enforced by browsers for CAs in the browser root trust stores. Which is exactly what's defined in this CAB vote as well.

Why would browsers "most likely" enforce this change for internal CAs as well?

ryao

Why would they? The old certificates will expire and the new ones will have short lifespans. Web browsers do not need to do anything.

That said, it would be really nice if they supported DANE so that websites do not need CAs.

nickf

'Most likely' - with the exception of Apple enforcing 825-day maximum for private/internal CAs, this change isn't going to affect those internal certificates.

jiggawatts

I just got a flashback to trying to automate the certificate issuance process for some ESRI ArcGIS product that used an RPC configuration API over HTTPS to change the certificate.

So yes, you had to first ignore the invalid self-signed certificate while using HTTPS with a client tool that really, really didn't want to ignore the validity issue, then upload a valid certificate, restart the service... which would terminate the HTTPS connection with an error breaking your script in a different not-fun way, and then reconnect... at some unspecified time later to continue the configuration.

Fun times...

rsstack

> I've seen most of them moving to internally signed certs

Isn't this a good default? No network access, no need for a public certificate, no need for a certificate that might be mistakenly trusted by a public (non-malicious) device, no need for a public log for the issued certificate.

pavon

Yes, but it is a lot more work to run an internal CA and distribute that CA cert to all the corporate clients. In the past getting a public wildcard cert was the path of least resistance for internal sites - no network access needed, and you aren't leaking much info into the public log. That is changing now, and like you said it is probably a change for the better.
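
To make "more work" concrete: the issuing side of an internal CA is actually small; the real effort is distributing the root and rotating things. A Go sketch of the issuing side (names and lifetimes are illustrative, error handling elided):

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        // Root CA key and self-signed certificate, long-lived (internal only).
        caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "Example Internal Root CA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0), // 10 years
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageCRLSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf certificate for an internal host, signed by the CA above.
        leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "intranet.example.internal"},
            DNSNames:     []string{"intranet.example.internal"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0), // internal policy, not WebPKI rules
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }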

pkaye

What about something like step-ca? I got the free version working easily on my home network.

https://smallstep.com/docs/step-ca/

bravetraveler

> A lot more work

'ipa-client-install' for those so motivated. Certificates are literally one of the many things that are part of your domain services.

If you're at the scale past what IPA/your domain can manage, well, c'est la vie.

maccard

I’ve unfortunately seen the opposite - internal apps are now back to being deployed over VPN and HTTP

xienze

> but internal services will have internal-CA-signed certs with long expiry times because of the number of crappy apps that make using certs a pain in the ass.

Introduce them to the idea of having something like Caddy sit in front of apps solely for the purpose of doing TLS termination... Caddy et al can update the certs automatically.

pixl97

Unless they are web/tech companies they aren't doing that. Banks, finance, large manufacturing are all terminating at F5's and AVI's. I'm pretty sure those update certs just fine, but it's not really what I do these days so I don't have a direct answer.

xienze

Sure. The point is, don't bother letting the apps themselves do TLS termination. Too much work that's better handled by something else.

tikkabhuna

F5s don't support ACME, which has been a pain for us.

cryptonym

You now have to build and self-host a complete CA/PKI.

Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

mox1

Companies have software to manage this for you. We utilize https://www.cyberark.com/products/machine-identity-security/

stackskipton

You could always ask for a wildcard for an internal subdomain and use that instead, so you leak the internal subdomain but not individual hosts.

JoshTriplett

> Or request a certificate over the public internet, for an internal service. Your hostname must be exposed to the web and will be publicly visible in transparency reports.

That doesn't seem like the end of the world. It means you shouldn't have `secret-plans-for-world-takeover.example.com`, but it's already the case that secret projects should use opaque codenames. Most internal domain names would not actually leak any information of value.

lokar

I’ve always felt a major benefit of an internal CA is making it easy to have very short TTLs.

SoftTalker

Or very long ones. I often generate 10 year certs because then I don't have to worry about renewing them for the lifetime of the hardware.

lokar

In a production environment with customer data?

formerly_proven

I'm surprised there is no authorization-certificate-based challenge type for ACME yet. That would make ACME practical to use in microsegmented networks.

The closest thing is maybe described (but not shown) in these posts: https://blog.daknob.net/workload-mtls-with-acme/ https://blog.daknob.net/acme-end-user-client-certificates/

benburkert

It's 100% possible today to get certs in segmented networks without a new ACME challenge type: https://anchor.dev/docs/public-certs/acme-relay

(disclaimer: i'm a founder at anchor.dev)

bigp3t3

I'd set that up the second it becomes available if it were a standard protocol. Just went through setting up internal certs on my switches -- it was a chore to say the least! With a Cert Template on our internal CA (windows), at least we can automate things well enough!

Pxtl

At this point I wish we could just get all our clients to say "self-signed is fine if you're connecting to a .LOCAL domain name". https is intrinsically useful over raw http, but the overhead of setting up centralized certs for non-public domains is just dumb.

Give us a big global *.local cert we can all cheat with, so I don't have to blast my credentials in the clear when I log into my router's admin page.

shlant

this is exactly what I do because mongo and TLS is enough of a headache. I am not dealing with rotating certificates regularly on top of that for endpoints not exposed to the internet.

SoftTalker

Yep, Let's Encrypt is great for public-facing web servers, but for stuff that isn't a web server or doesn't allow outside queries, none of that "easy" automation works.
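
For contrast, the "easy" public-facing path really is small. A Go sketch using golang.org/x/crypto/acme/autocert (domain and cache path are placeholders):

    package main

    import (
        "fmt"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"),
            Cache:      autocert.DirCache("/var/cache/autocert"), // persists certs across restarts
        }
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over TLS")
        })
        s := &http.Server{Addr: ":443", TLSConfig: m.TLSConfig()}
        // Certs are obtained and renewed automatically via the manager;
        // empty cert/key paths because TLSConfig supplies certificates.
        panic(s.ListenAndServeTLS("", ""))
    }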

procaryote

The ACME DNS challenge works for things that aren't webservers.

For the other case, perhaps renew the cert on a host that is allowed to do outside queries for the DNS challenge, and find some acceptable automated way to propagate the updated cert to the host that isn't allowed outside queries.

bsder

And may the devil help you if you do something wrong and accidentally trip LetsEncrypt's rate limiting.

You can do nothing except twiddle your thumbs while it times out and that may take a couple of days.

greatgib

As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain. Only the big ones embedded in browsers will have the privilege of having their own CA certificate with whatever validity period they want...

And in term of security, I think that it is a double edged sword:

- everyone will be so used to certificates changing all the time, and with no certificate pinning anymore, the day China, a company or whoever serves you a fake certificate, you will be less able to notice it

- Instead of having closed, read-only systems that connect outside only once per year or so to update their certificates, all machines around the world will now have to allow quasi-permanent connections to random certificate servers to keep the system updated all the time. If ever Digicert or Let's Encrypt's servers, or the "cert updating client", is rooted or has a security issue, most servers around the world could be compromised in a very, very short time.

As a side note, I'm totally laughing at the following explanation in the article:

   47 days might seem like an arbitrary number, but it’s a simple cascade:
   - 47 days = 1 maximal month (31 days) + 1/2 30-day month (15 days) + 1 day wiggle room
So, 47 is not arbitrary, but "1 month + 1/2 month + 1 day" are not arbitrary values...

lolinder

> everyone will be so used to certificates changing all the time, and with no certificate pinning anymore, the day China, a company or whoever serves you a fake certificate, you will be less able to notice it

I'm a computing professional in the tiny slice of internet users that actually understands what a cert is, and I never look at a cert by hand unless it's one of my own that I'm troubleshooting. I'm sure there are some out there who do (you?), but they're a minority within a minority—the rest of us just rely on the automated systems to do a better job at security than we ever could.

At a certain point it is correct for systems engineers to design around keeping the average-case user more secure even if it means removing a tiny slice of security from the already-very-secure power users.

gruez

>As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain.

like, private CA? All of these restrictions are only applied for certificates issued under the webtrust program. Your private CA can still issue 100 year certificates.

greatgib

Let's suppose that I'm a competitor of Google and Amazon, and I want to have my Public root CA for mydomain.com to offer my clients subdomains like s3.customer1.mydomain.com, s3.customer2.mydomain.com,...

tptacek

If you want to be a public root CA, so that every browser in the world needs to trust your keys, you can do all the lifting that the browsers are asking from public CAs.

gruez

Why do you want this when there are wildcard certificates? That's how the hyperscalers do it as well. Amazon doesn't have a separate certificate for each s3 bucket, it's all under a wildcard certificate.

anacrolix

No. Chrome flat out rejects certificates that expire more than 13 months away, last time I tried.

nickf

Certificate pinning to public roots or CAs is bad. Do not do it. You have no control over the CA or roots, and in many cases neither does the CA - they may have to change based on what trust-store operators say. Pinning to public CAs or roots or leaf certs, pseudo-pinning (not pinning to a key or cert specifically, but expecting some part of a certificate DN or extension to remain constant), and trust-store limiting are all bad, terrible, no-good practices that cause havoc whenever they are implemented.

szszrk

Ok, but what's the alternative?

Support for cert and CA pinning is in a state that is much better than I thought it will be, at least for mobile apps. I'm impressed by Apple's ATS.

Yet, for instance, you can't pin a CA for any domain, you always have to provide it up front to audit, otherwise your app may not get accepted.

Doesn't this mean that it's not (realistically) possible to create cert pinning for small solutions? Like homelabs or app vendors that are used by onprem clients?

We'll keep abusing PKI for those use cases.

precommunicator

> everyone will be so used to certificates changing all the time, and with no certificate pinning anymore

Browser certificate pinning is deprecated since 2018. No current browsers support HPKP.

There are alternatives to pinning, DNS CAA records, monitoring CT logs.

blincoln

Cert pinning is a very common practice for mobile apps. I'm not a fan of it, but it's how things are today. Seems likely that that will have to change with shorter cert lifetimes.

lucb1e

> 47 [is?] arbitrary, but 1 month, + 1/2 month, + 1 day are not arbitrary values...

Not related to certificates specifically, and the specific number of days is in no way a security risk, but it reminded me of NUMS generators. If you find this annoyingly arbitrary, you may also enjoy: <https://github.com/veorq/numsgen>. It implements this concept:

> [let's say] one every billion values allows for a backdoor. Then, I may define my constant to be H(x) for some deterministic PRNG H and a seed value x. Then I proceed to enumerate "plausible" seed values x until I find one which implies a backdoorable constant. I can begin by trying out all Bible verses, excerpts of Shakespeare works, historical dates, names of people and places... because for all of them I can build a story which will make the seed value look innocuous

From http://crypto.stackexchange.com/questions/16364/why-do-nothi...

jeroenhd

> As I said in another thread, basically this will kill any possibility of running your own CA for your own subdomain

Only if browsers enforce the TLS requirements for private CAs. Usually, browsers exempt user or domain controlled CAs from all kinds of requirements, like certificate transparency log requirements. I doubt things will be different this time.

If they do decide to apply those limits, you can run an ACME server for your private CA and point certbot or whatever ACME client you prefer at it to renew your internal certificates. Caddy can do this for you with a couple of lines of config: https://caddyserver.com/docs/caddyfile/directives/acme_serve...

Funnily enough, Caddy defaults to issuing 12-hour certificates for its local CA deployment.

> no certificate pinning anymore

Why bother with public certificate authorities if you're hardcoding the certificate data in the client?

> Instead of having closed, read-only systems that connect outside only once per year or so to update their certificates, all machines around the world will now have to allow quasi-permanent connections to random certificate servers to keep the system updated all the time.

Those hosts needed a bastion host or proxy of sorts to connect to the outside yearly, so they can still do that today. But I don't see the advantage of using the public CA infrastructure in a closed system, might as well use the Microsoft domain controller settings you probably already use in your network to generate a corporate CA and issue your 10 year certificates if you're in control of the network.

yjftsjthsd-h

If you're in a position to pin certs, aren't you in a position to ignore normal CAs and just keep doing that?

ghusto

I really wish encryption and identity weren't so tightly coupled in certificates. If I've issued a certificate, I _always_ care about encryption, but sometimes do not care about identity.

For those times when I only care about encryption, I'm forced to take on the extra burden that caring about identity brings.

Pet peeve.

tptacek

There's minimal security in an unauthenticated encrypted connection, because an attacker can just MITM it.

SoftTalker

Trust On First Use is the normal thing for these situations.

asmor

TOFU equates to "might as well never ask" for most users. Just like Windows UAC prompts.

steventhedev

There is a security model where MITM is not viable - and separating that specific threat from that of passive eavesdropping is incredibly useful.

tptacek

MITM scenarios are more common on the 2025 Internet than passive attacks are.

jchw

I mean, we do TOFU for SSH server keys* and nobody really seems to bat an eye at that. Today if you want "insecure but encrypted" on the web the main way to go is self-signed which is both more annoying and less secure than TOFU for the same kind of use case. Admittedly, this is a little less concerning of an issue thanks to ACME providers. (But still annoying, especially for local development and intranet.)

*I mistakenly wrote "certificate" here initially. Sorry.

tptacek

SSH TOFU is also deeply problematic, which is why cattle fleet operators tend to use certificates and not piecewise SSH keys.

arccy

ssh server certificates should not be TOFU, the point of SSH certs is so you can trust the signing key.

TOFU on ssh server keys... it's still bad, but fewer people are interested in intercepting ssh vs tls.

pabs3

You don't have to TOFU SSH server keys, there is a DNSSEC option, or you can transfer the keys via a secure path, or you can sign the keys with a CA.

gruez

>I mean, we do TOFU for SSH server certificates and nobody really seems to bat an eye at that.

Mostly because ssh isn't something most people (eg. your aunt) uses, and unlike with https certificates, you're not connecting to a bunch of random servers on a regular basis.

hedora

TOFU is not less secure than using a certificate authority.

Both defend against attackers the other cannot. In particular, the number of machines, companies and government agencies you have to trust in order to use a CA is much higher.

panki27

How is an attacker going to MITM an encrypted connection they don't have the keys for, without having rogue DNS or something similar, i.e. faking the actual target?

Ajedi32

It's an unauthenticated encrypted connection, so there's no way for you to know whose keys you're using. The attacker can just tell you "Hi, I'm the server you're looking for. Here's my key." and your client will establish a nice secure, encrypted connection to the malicious attacker's computer. ;)

oconnor663

They MITM the key exchange step at the beginning, and now they do have the keys. The thing that prevents this in TLS is the chain of signatures asserting identity.

simiones

Connections never start as encrypted, they always start as plain text. There are multiple ways of impersonating an IP even if you don't control DNS, especially if you are in the same local network.

IshKebab

I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).

On the other hand providing the option may give a false sense of security. I think the main reason SSH isn't MitM'd all over the place is it's a pretty niche service and very often you do have a separate authentication method by sending your public key over HTTPS.

saurik

When I use a service over TLS on a network I don't trust, the premise is that I only will trust the connection if it has a certificate from a handful of companies trusted by the people who wrote the software I'm using (my browser/client and/or my operating system) to only issue said certificates to people who are supposed to have them (which these days is increasingly defined to be "who are in control of the DNS for the domain name at a global level", for better or worse, not that everyone wants to admit that).

But like, no: the free Wi-Fi I'm using can't, in fact, MITM the encryption used by my connection... it CAN do a bunch of other shitty things to me that undermine not only my privacy but even undermine many of the things people expect to be covered by privacy (using traffic analysis on the size, timing, or destination of the packets that I'm sending), but the encryption itself isn't subject to the failure mode of SSH.

woodruffw

> I disagree. Think about every time you use a service (website, email, etc.) you've used before via a network you don't trust (e.g. free WiFi).

Hm? The reason I do use those services over a network I don't trust is because they're wrapped in authenticated, encrypted channels. The authenticated encryption happens at a layer above the network because I don't trust the network.

tikkabhuna

But isn't that exactly the previous poster's point? On free WiFi someone can just MITM your connection, you would never know, and you'd think it's encrypted. It's the worst possible outcome. At least when there's no encryption, browsers can tell the user to be careful.

Ajedi32

In what situation would you want to encrypt something but not care about the identity of the entity with the key to decrypt it? That seems like a very niche use case to me.

xyzzy123

Because TLS doesn't promise you very much about the entity which holds the key. All you really know is that they control some DNS records.

You might be visiting myfavouriteshoes.com (a boutique shoe site you have been visiting for years), but you won't necessarily know if the regular owner is away or even if the business has been sold.

Ajedi32

It tells you the entity which holds the key is the actual owner of myfavouriteshoes.com, and not just a random guy operating the free Wi-Fi hotspot at the coffee shop you're visiting. If you don't care about that then why even bother with encryption in the first place?

arccy

at least it's not evil-government-proxy.com that decided to mitm you and look at your favorite shoes.

pizzafeelsright

Seems logical.

If we encrypt everything we don't need AuthN/Z.

Encrypt locally to the target PK. Post a link to the data.

lucb1e

What? I work in this field and I have no idea what you mean. (I get the abbreviations like authz and pk, but not how "encrypting everything" and "posting links" is supposed to remove the need for authentication)

mannyv

All our door locks suck, but everyone has a door lock.

The goal isn't to make everything impossible to break. The goal is to provide Just Enough security to make things more difficult. Legally speaking, sniffing and decrypting encrypted data is a crime, but sniffing and stealing unencrypted data is not.

That's an important practical distinction that's overlooked by security bozos.

charcircuit

Having them always coupled disincentivizes bad ISPs from MITMing the connection.

silverwind

I agree, there needs to be a TLS without certificates. Pre-shared secrets would be much more convenient in many scenarios.

ryao

How about TLS without CAs? See DANE. If only web browsers would support it.

pornel

DANE is a TLS with too-big-to-fail CAs that are tied to the top-level domains they own, and can't be replaced.

Separation between CAs and domains allows browsers to get rid of incompetent and malicious CAs with minimal user impact.

panki27

Isn't this exactly the reason why LetsEncrypt was brought to life?

bullen

If you have the inverse requirement = identity and no encryption I heartily recommend: https://datatracker.ietf.org/doc/html/rfc2289

grishka

I want a middle ground. Identity verification is useful for TLS, but I really wish there was no reliance on ultimately trusted third parties for that. Maybe put some sort of identity proof into DNS instead, since the whole thing relies on DNS anyway.

immibis

Makes it trivial for your DNS provider to MITM you, and you can't even use certificate transparency to detect it.

grishka

You can use multiple DNS providers at once to catch that situation. You can have some sort of signing scheme where each authoritative server would sign something in turn to establish a chain of trust up to the root servers. You can use encrypted DNS, even if it is relying on traditional TLS certificates, but it can also use something different for identity verification, like having you use a config file with the public key embedded in it, or a QR code, instead of just an address.

captn3m0

This is great news. This would blow a hole in two interesting places where leaf-level certificate pinning is relied upon:

1. mobile apps.

2. enterprise APIs. I dealt with lots of companies that would pin the certs without informing us, and then complain when we'd rotate the cert. A 47-day window would force them to rotate their pins automatically, making it an even worse security theater. Or hopefully, they switch rightly to CAA.

bearjaws

Giving me PTSD for working in healthcare.

Health systems love pinning certs, and we use an ALB with 90-day certs; they were always furious.

Every time I was like "we can't change it", and "you do trust the CA right?", absolute security theatre.

DiggyJohnson

Do you (or anyone) recommend any text-based resources laying out the state of enterprise TLS management in 2025?

It’s become a big part of my work and I’ve always just had a surface knowledge to get me by. Assume I work in a very large finance or defense firm.

grishka

Isn't it usually the server's public key that's pinned? The key pair isn't regenerated when you renew the certificate.

toast0

Typical guidance is to pin the CA or intermediate, because in case of a key compromise, you're going to need to generate a new key.

You should really generate a new key for each certificate, in case the old key is compromised and you don't know about it.

What would really be nice, but is unlikely to happen, would be if you could get a constrained CA certificate issued for your domain and pin that, then issue your own short-term certificates from there. But if those were widespread, they'd need to be short-dated too, so you'd need to either pin the real CA or the public key, and we're back to where we were.
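
To see what pinning the leaf vs. the intermediate vs. the key would mean in practice, a Go sketch (hostname is a placeholder) that prints an HPKP-style SPKI pin for each certificate the server presents:

    package main

    import (
        "crypto/sha256"
        "crypto/tls"
        "encoding/base64"
        "fmt"
    )

    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Leaf first, then intermediates; the pin is base64(SHA-256(SPKI)),
        // the same form HPKP used.
        for _, cert := range conn.ConnectionState().PeerCertificates {
            sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
            fmt.Printf("%s: pin-sha256=%s\n",
                cert.Subject.CommonName, base64.StdEncoding.EncodeToString(sum[:]))
        }
    }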

nickf

I've said it up-thread, but never ever never never pin to anything public. Don't do it. It's bad. You, and even the CA have no control over the certificates and cannot rely on them remaining in any way constant. Don't do it. If you must pin, pin to private CAs you control. Otherwise, don't do it. Seriously. Don't.

1a527dd5

Dealing with enterprise is going to be fun, we work with a lot of car companies around the world. A good chunk of them love to whitelist by thumbprint. That is going to be fun for them.

philsnow

> As a certificate authority, one of the most common questions we hear from customers is whether they’ll be charged more to replace certificates more frequently. The answer is no. Cost is based on an annual subscription […]

(emphasis added)

Pump the brakes there, digicert. Price is based on an annual subscription. CA costs will actually go up an infinitesimal amount, but they’re already nearly zero to begin with. Running a CA has got to be one of the easiest rackets in the world.

jwnin

Costs to buy certs will not materially change. Costs to manage certs will increase.

bityard

I see that there is a timeline for progressive shortening, so if anyone has any "inside baseball" on this, I'm very curious to know:

Given that the overarching rationale here is security, what made them stop at 47 days? If the concern is _actually_ security, allowing a compromised cert to exist for a month and a half is I guess better than 398 days, but why is 47 days "enough"?

When will we see proposals for max cert lifetimes of 1 week? Or 1 day? Or 1 hour? What is the lower limit of the actual lifespan of a cert and why aren't we at that already? What will it take to get there?

Why are we investing time and money in hatching schemes to continually ratchet the lifespan of certs back one more step instead of addressing the root problems, whatever those are?

dadrian

The root problem is certificate lifetimes are too long relative to the speed at which domains change, and the speed at which the PKI needs to change.

peanut-walrus

So the assumption here is that somehow your private key is easier to compromise than whatever secret/mechanism you use to provision certs?

Yeah not sure about that one...

ori_b

Can someone point me to specific exploits that this key rotation schedule would have stopped?

It seems to me like compromised keys are rare. It also seems like 47 days is low enough to be inconvenient, but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.

Avamander

> Can someone point me to specific exploits that this key rotation schedule would have stopped?

It's not only key mismanagement that is being mitigated. You also have to prove more frequently that you have control of the domain or IP in the certificate.

In essence it brings a working method of revocation to WebPKI.

> but not low enough to prevent significant numbers of people from being compromised if there is a compromised key.

Compared to a year?

ori_b

> You also have to prove more frequently that you have control of the domain or IP in the certificate.

That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.

On the other hand, anyone that owns the domain can get a perfectly valid cert any time, no need to exploit anything. And given that nobody actually looks at the details of the cert owner in practice, that means that if you lose the domain, the new owner is treated as legit. No compromises needed.

The only way to prevent that is to pin the cert, which this short rotation schedule makes harder, or pin the public key and be very careful to not regenerate your keys when you submit a new CSR.

In short: Don't lose your domain.

> Compared to a year?

Typically these kinds of things have an exponential dropoff, so most of the exploited folks would be soon after the compromise. I don't think that shortening to this long a period, rather than (say) 24h would make a material difference.

But, again, I'm also not sure how many people were compromised via anything that this kind of rotation would prevent. It seems like most exploits depend on someone either losing control over the domain (again, don't do that; the current issuance model doesn't handle that), or just being phished via a valid cert on an unrelated domain.

Do you have concrete examples of anyone being exploited via key mismanagement (or not proving often enough that they have control over a domain)?

Avamander

> That doesn't particularly matter; if someone takes over the domain but doesn't have a leaked key, they can't sign requests for the domain with my cert. It takes a leaked key for this to turn into a vulnerability.

It does, if someone gets temporary access, issues a certificate and then keeps using it to impersonate something. Now the malicious actor has to do it much more often, significantly increasing chances of detection.

kbolino

I just downloaded one of DigiCert's CRLs and it was half a megabyte. There are probably thousands of revoked certificates in there. If you're not checking CRLs, and a lot of non-browser clients (think programming languages, software libraries, command-line tools, etc.) aren't, then you would trust one of those certificates if it was presented to you. With certificate lifetimes of 47 days instead of a year, 87% of those revoked certificates become unusable regardless of CRL checking.
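
A Go sketch (URL is a placeholder) of what that check looks like from the client side:

    package main

    import (
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        resp, err := http.Get("http://crl.example-ca.com/some.crl")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        der, _ := io.ReadAll(resp.Body)
        // ParseRevocationList wants DER; RevokedCertificateEntries
        // requires Go 1.21+.
        crl, err := x509.ParseRevocationList(der)
        if err != nil {
            panic(err)
        }
        fmt.Printf("%d bytes, %d revoked entries, next update %v\n",
            len(der), len(crl.RevokedCertificateEntries), crl.NextUpdate)
    }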

crote

The 47 days are (mostly) irrelevant when it comes to compromised keys. The certificate will be revoked by the CA at most 24 hours after compromise becomes known, so a shorter cert isn't really "more secure" than a longer one.

At least, that's what the rules say. In practice CAs have a really hard time saying no to a multi-week extension because a too-big-to-fail company running "critical infrastructure" isn't capable of rotating their certs.

Short cert duration forces companies to automate cert renewal, and with automation it becomes trivial to rotate certs in an acceptable time frame.

throwaway96751

Off-topic: What is a good learning resource about TLS?

I've read the basics on Cloudflare's blog and MDN. But at my job, I encountered a need to upload a Let's Encrypt public cert to the client's trust store. Then I had to choose between Let's Encrypt's root and intermediate certs, and between key types RSA and ECDSA. I made it work, but it would be good to have an idea of what I'm doing. For example, why the root RSA key worked even though my server uses an ECDSA cert. Before I added the root cert to the trust store, clients used to add fullchain.pem from the server and it worked too — why?

ivanr

I have a bunch of useful resources, most of which are free:

- If you're looking for a concise (yet complete) guide: https://www.feistyduck.com/library/bulletproof-tls-guide/

- OpenSSL Cookbook is a free ebook: https://www.feistyduck.com/library/openssl-cookbook/

- SSL/TLS and PKI history: https://www.feistyduck.com/ssl-tls-and-pki-history/

- Newsletter: https://www.feistyduck.com/newsletter/

- If you're looking for something comprehensive and longer, try my book Bulletproof TLS and PKI: https://www.feistyduck.com/books/bulletproof-tls-and-pki/


dextercd

I learned a lot from TLS Mastery by Michael W. Lucas.

throwaway96751

Thanks, looks exactly like what I wanted

bbkane

I wrote a list of resources that helped me at https://www.bbkane.com/blog/learn-ssl/

throwaway96751

> SSL is one of those weird niche subjects that no one learns until they run into a problem

Yep, that me.

Thanks for the blog post!


physicles

Use ECDSA if you can, since it reduces the size of the handshake on the wire (keys are smaller). Don’t bake in intermediate certs unless you have a very good reason.

No idea why the RSA key worked even though the server used ECDSA — maybe check into the recent cross-signing shenanigans that Let's Encrypt had to pull to extend support for very old Android versions.

throwaway96751

I've been reading a little since then, and I think it worked with the RSA root cert because that cert was the trust anchor of the chain of trust for my server's ECDSA certificate.
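
That's the right idea: each link in the chain carries its own signature, so algorithms can differ per link (e.g. an ECDSA leaf chaining up to an RSA root). A Go sketch (hostname is a placeholder) that makes the chain visible:

    package main

    import (
        "crypto/tls"
        "fmt"
    )

    func main() {
        conn, err := tls.Dial("tcp", "example.com:443", nil)
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Each cert reports its own key algorithm and the algorithm its
        // issuer used to sign it; they don't have to match across links.
        for _, chain := range conn.ConnectionState().VerifiedChains {
            for _, cert := range chain {
                fmt.Printf("%-40s key=%v sig=%v\n",
                    cert.Subject.CommonName, cert.PublicKeyAlgorithm, cert.SignatureAlgorithm)
            }
        }
    }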


pizzafeelsright

Curious why you wouldn't have a Q and A with AI?

If the information is relatively unchanged and the details well documented why not ask questions to fill in the gaps?

The Socratic method has been the best learning tool for me and I'm doubling my understanding with the LLMs.

throwaway96751

I think this method works best when you can verify the answer. So it has to be either a specific type of question (a request to generate code, which you can then run and test), or you have to know enough about the subject to be able to spot mistakes.

_bin_

Is there an actual issue with widespread cert theft? That seems like the primary valid reason to do this, not forcing automation.

cryptonym

Let's Encrypt dropped support for OCSP. CRL doesn't scale well. Short-lived certificates are probably a way to avoid certificate revocation quirks.

Ajedi32

It's a real shame. OCSP with Must-Staple seemed like the perfect solution to this, it just never got widespread support.

I suppose technically you can get approximately the same thing with 24-hour certificate expiry times. Maybe that's where this is ultimately heading. But there are issues with that design too. For example, it seems a little at odds with the idea of Certificate Transparency logs having a 24-hour merge delay.

NoahZuniga

Also certificate transparency is moving to a new standard (sunlight CT) that has immediate merges. Google requires maximum merge delay to be 1 minute or less, but they've said on google groups that they expect merges to be way faster.

lokar

The log is not really for real time use. It’s to catch CA non-compliance.

dboreham

I think it's more about revocation not working in practice. So the only solution is a short TTL.

trothamel

I suspect it's to limit how long a malicious or compromised CA can impact security.

hedora

Equivalently, it also maximizes the number of sites impacted when a CA is compromised.

It also lowers the amount of time it’d take for a top-down change to compromise all outstanding certificates. (Which would seem paranoid if this wasn’t 2025.)

lokar

Mostly this. Today, if a big CA is caught breaking the rules, actually enforcing repairs (e.g. prompt revocation) is a hard pill to swallow.

rat9988

I think OP is asking whether there have been many real cases in practice that pushed for this change.

chromanoid

I guess the main reason behind this move is platform capitalism. It's an easy way to cut off grassroots internet.

gjsman-1000

If that were true, we would not have Let's Encrypt and tools which can give us certificates in 30 seconds flat once we prove ownership.

The real reason was Snowden. The jump in HTTPS adoption after the Snowden leaks was a virtual explosion; and set HTTPS as the standard for all new services. From there, it was just the rollout. (https://www.eff.org/deeplinks/2023/05/10-years-after-snowden...)

(Edit because I'm posting too fast, for the reply):

> How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?

Everyone is reliant on a 3rd party for the internet. It's called your ISP. They also take complaints and will shut you down if they don't like what you're doing. If you are using an online VPS, you have a second 3rd party, which also takes complaints, can see everything you do, and will also shut you down if they don't like what you're doing; and they have to, because they have an ISP to keep happy themselves. Networks integrating with 3rd party networks is literally the definition of the internet.

nottorp

How do you enjoy being dependent on a 3rd party (even a well intentioned one) for being on the internet?

Let's Encrypt... Cloudflare... useful services right? Or just another barrier to entry because you need to set up and maintain them?

chromanoid

I dunno. Self-hosting w/o automation was feasible. Now you have to automate. It will lead to a huge amount of link rot or at least something very similar. There will be solutions but setting up a page e2e gets more and more complicated. In the end you want a service provider who takes care of it. Maybe not the worst thing, but what kind of security issues are we talking about? There is still certificate revocation...

bshacklett

How does this cut off the grassroots internet?

chromanoid

It makes end to end responsibility more cumbersome. There were days people just stored MS Frontpage output on their home server.

jack0813

There are very convenient tools to do https easily these days, e.g. Caddy. You can use it to reverse proxy any http server and it will do the cert stuff for you automatically.

chromanoid

Ofc, but you have to be quite tech-savvy to know this and to set it up. It's also cumbersome in many low-tech situations. There is certificate revocation; I would really like to see the threat model here. I am not even sure if automation helps or just shifts the threat vector to certificate issuing.

umvi

So does this mean all of our Chromecasts are going to stop working again once this takes effect since (judging by Google's response during the week long Chromecast outage earlier this year) Chromecast is run by a skeleton crew and won't have the resources to automate certificate renewal?