
SSL certificate requirements are becoming obnoxious

Intermernet

Since the advent of LetsEncrypt, ACME, and Caddy I haven't thought about SSL/TLS for more than about an hour per year, and that's only because I forget the steps required to set up auto-renewal. I pay nothing, I spend a tiny amount of time dealing with it, and it works brilliantly.

I'm not sure why many people are still dealing with legacy manual certificate renewal. Maybe some regulatory requirements? I even have a wildcard cert that covers my entire local network which is generated and deployed automatically by a cron job I wrote about 5 years ago. It's working perfectly and it would probably take me longer to track down exactly what it's doing than to re-write it from scratch.
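
For anyone curious, a setup like the one described can be sketched with acme.sh in a handful of lines (assuming a DNS provider with an API hook; the Cloudflare hook and domain below are hypothetical):

    # issue a wildcard cert for the LAN domain via DNS-01 (dns_cf = Cloudflare hook)
    acme.sh --issue --dns dns_cf -d '*.home.example.net'

    # install it where services expect it, with a reload hook
    acme.sh --install-cert -d '*.home.example.net' \
      --key-file /etc/ssl/private/home.key \
      --fullchain-file /etc/ssl/certs/home.pem \
      --reloadcmd 'systemctl reload nginx'

acme.sh registers its own cron entry on install, so renewal from then on is automatic.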

For 99.something% of use cases, this is a solved problem.

stego-tech

As someone on the other side of the fence who lives primarily in IT land, this is far from a solved problem. Not every device supports SSH for copying certs across the network, some devices have arbitrary requirements for the certs themselves (like timelines, lack of SANs, specific cryptography requirements, etc), and signing things internally (so that they’re only valid within the intranet, not on the internet) doesn’t work with LE at present.

So unless you're one of the folks who are fine with heavily curating (or jailbreaking) devices to make the above possible, PKI is hardly a solved problem. If anything it remains a nightmare for orgs of all sizes. Even in BigCo at a major SV company, we had a dedicated team to manage PKI for internal certificates - complete with review board, justification documents, etc - and that still only bought us a manual process with a lead time of 72 hours for a cert.

That said, it is measurably improved and I do think ACME/certbot/LE is on the right track here. Instead of constant bureaucratic revisioning of rules and standards documents, I believe the solution here is a sort of modern Wireguard-esque implementation of PKI and an associated certification program for vendors and devices. “Here’s the cert standard you have to accept, here’s the tool to automatically request and pin a certificate, here’s how that tool is configured for internal vs external PKI, and here’s the internal tooling standards projects that want to sign internal certs have to follow.”

Basically an AD CA-alike for SMB and Enterprise both. Saves me time having to get into the nitty gritty of why some new printer/IoT/PLC doesn’t support a cert, and improves the posture of the wider industry.

Shank

I feel like a lot of these requirements really need to be solved from first principles. What do you need these certificates for -- specifically, TLS certificates?

If the biggest issue is "we want to encrypt traffic" then the answer really should be something more automated. To put it another way, TLS certificates used to convey a lot of things. We had basic certs that said "you are communicating with the rightful owner of domain example.com" and we had EV certs that said "you are communicating with the rightful legal entity Example Org, who happens to own example.com" and so-on and so-forth.

But the thing is, we've killed off a lot of these certificate types. We don't have EV certs anymore. Let's Encrypt effectively democratized it to the point where you don't need to do any manual work for a normal "we want to encrypt data" certificate. I just don't understand what your specific requirements are, if they aren't directly "encrypt this traffic" focused, where you actually need valid certificates that work across the internet.

Put differently, if you're running an internal CA, you can push out your own certificates via MDM tools and then not worry about this. If you aren't running your own CA but you're implementing all of this pomp and circumstance, what are you trying to solve? Do you really need all of this ceremony for all of your devices and applications?

pelagicAustral

I work for government and I can tell you the guys working infrastructure are still paying for shitty SSL certificates every year, in most cases for infrastructure that doesn't even see the light of day (internal), and the reason for that is none other than not knowing any better, and being unable to get their heads out of their asses for long enough to learn something novel and implement it in their workflow. So yeah, those types are still out there in the wild.

stego-tech

In our defense, it’s because we’re expected to give everything a cert but often have no say on the security and cryptography capabilities of what’s brought onto the network in the first place, nevermind the manpower and time to build such an automated solution internally. Execs bringing in MFPs that don’t support TLS, PLCs that require SHA-1, routers with a packet buffer measured in single-digit integers but with a Java web GUI, all of these horrors and more are why it’s such a shitshow even today.

Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses. Believe me, I’ve been fighting this battle internally my entire career and I hate it. I hate the shitty state of PKI today, and how the sole focus seems to be public-facing web services instead of, y’know, the other 90% of a network’s devices and resources.

PKI isn’t a solved problem.

stackskipton

It is solved, but the devices you are talking about refuse to get on board with the fix, so here we are.

Also, I used to do IT, I get it, but what do you think the fix here is? You could also run your own CA, push it to all the devices, and then cut certificates with whatever lifetime you want.

znpy

Don't take this as a snarky comment, but that sounds quite literally like a "skill issue". Not with you personally, but with the environment you work in.

> PKI isn’t a solved problem.

PKI is largely a solved issue nowadays. Software like Vault from HashiCorp (it's FIPS compliant, too: https://developer.hashicorp.com/vault/docs/enterprise/fips) lets you create a cryptographically-strong CA and build the automation you need.

It's been out for years now, and integrating the root CA shouldn't be much of an issue via group policies (on Windows; there are equivalents for macOS and GNU/Linux, I guess).
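
For reference, the Vault flow looks roughly like this (a sketch with hypothetical names and TTLs, not a production setup):

    # enable the PKI engine and generate an internal root CA (10-year TTL)
    vault secrets enable pki
    vault secrets tune -max-lease-ttl=87600h pki
    vault write pki/root/generate/internal common_name="corp.example" ttl=87600h

    # define a role constraining what certs may be issued
    vault write pki/roles/internal-servers \
      allowed_domains="corp.example" allow_subdomains=true max_ttl=720h

    # issue a short-lived cert for a host (this part is what you automate)
    vault write pki/issue/internal-servers common_name="app.corp.example" ttl=72h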

> Just because someone’s homelab is fully cert’d through Caddy and LE that they slapped together over a weekend two years ago, doesn’t mean the process is trivial or easy for the masses.

Quite the contrary: it means that the process is technically so trivial the masses can do it in an afternoon and live off it for years with little to no maintenance.

Hence, if a large organization is not able to implement that, the issue is in the organization, not in the technology.

luma

Alternatively, those people are dealing with legacy systems that are pathologically resistant to cert automation (looking SQUARELY AT YOU vmware) and elect for the longest lasting certs they can get their hands on to minimize the disruption.

It’s generally best to assume experts in other fields are doing things for good reasons, and if you don’t understand the reason it might be something other than them being dumb.

pelagicAustral

Yeah OK, touché, I was over the top... I just need my cup of coffee...

basscomm

> Since the advent of LetsEncrypt, ACME, and Caddy I haven't thought about SSL/TLS for more than about an hour per year

I run a couple of low-stakes websites just for fun, and manually updating certificates takes me about 10 minutes a year, most of which is remembering how to generate the CSR. Setting up an automated process gains me nothing except additional points of failure. Ratcheting the expiration down to 47 days is an effort to force everyone to use automation, which makes no sense for small hobby sites.

> I'm not sure why many people are still dealing with legacy manual certificate renewal

Not everyone is a professional sysadmin. Adding automation is another layer of complexity on top of what it already takes to run a website. It's fine for large orgs and professionals who do this for a living at their day jobs, but for someone just getting their feet wet it's a ridiculous ask.

roblabla

What's frankly ridiculous is that big server software like Nginx and Apache doesn't deal with this on its own. I've been letting Caddy (my HTTP host of choice) deal with TLS for me for _ages_ now. I don't have to think about anything, I don't have to set up automation. I just... configure my Caddy to host my website on https://my.domain.com and it just fetches the TLS certificate for me, renews it when necessary, and uses it as necessary.

You don't need to be a professional sysadmin to deal with this - so long as the software you use isn't ass. Nginx will _finally_ get this ability in the next release (and it'll still be more configuration than Caddy, which just defaults to the sane thing)...
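
For the simplest cases Caddy will even do the whole thing from one command; a sketch, assuming DNS for the domain points at the host and ports 80/443 are reachable (names hypothetical):

    # serve https://my.domain.com with automatic cert issuance and renewal,
    # proxying requests to a local backend
    caddy reverse-proxy --from my.domain.com --to localhost:8080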

o_m

The older I get, the more skeptical I get of free services that run on other people's servers. They have a bunch of expenses and you are getting it for free. You are not the customer. I'd rather pay for a service than gamble on some free service that might be shut down at any time, or that might have malicious intent.

PhilippGille

Let's Encrypt is run by a nonprofit organization [1], funded by corporate and individual sponsors (like Google and AWS, but also the EFF and Mozilla) [2].

That doesn't guarantee they don't have malicious intent, but it's different from a for-profit company that tries to make money off you.

[1] https://www.abetterinternet.org/about/

[2] https://www.abetterinternet.org/sponsors/

jraph

With you in general, but in this specific case, the whole thing seems healthy:

- Many companies (including competitors) are sponsoring LE, so the funding should be quite robust

- These companies are probably winning from the web being more secure, so the incentives are aligned with yours (contrary to, say, a company that offers something free but wants to bury you in ads)

- the vendor lock-in is very weak. The day LE goes awry, you can move to another CA pretty painlessly

There are CAs supporting ACME that provide paid services as well.

IcePic

I think for certs, you are not better off paying $5 for a cert than paying nothing to get an LE cert. It is already "subsidized" into cheapness, and the $5 company will bug you with ads for EV certs and whatnot in order to make a profit off you somehow, since you are now a customer.

What I think LE did was gather the required bag of money that any cert issuer needs to pony up to get the infra up and validated, and then skipped the $5 part and just runs on donations. So while LE might stop tomorrow, you don't have any good guarantees that the $5 cert company will last longer if their side business goes under. And if you go to a $100 cert company, you are just getting scammed by some company that will soon realize most certs are being given away, that they can't prove their $100 certs are "better" in any meaningful way, and that they are therefore also at risk of going under. In all these cases, you get to use your cert for whatever validity period you had, and then rush over to the next issuer, whoever is left when the pay-for-certs business tanks.

As opposed to cars or whatever, you can't really put more "quality math" into the certs so they last longer; the CAs have limits on how long they are allowed to last, so no more 10-year certs for public services anyhow. You might as well get the cheapest of the ones that are still valid and useful (i.e., exist in browser CA lists), and LE is one of those. There might be more (ZeroSSL?) but the same argument would hold for those. The CA list is curated by the browser teams a lot better than you or me shopping around websites that make weird claims about why their certs are worth paying $100 for.

aitchnyu

What's a LetsEncrypt competitor with convenient automated renewal?

trenchpilgrim

Any that support ACME. Most of the big SSL companies do nowadays.

trenchpilgrim

There are paid ACME services - basically LE with paid support.

znpy

Yeah, one of those is https://zerossl.com/

allan_s

For regulatory requirements: yes!

For eIDAS certificates, I can currently only choose from vouched certificate providers, and it's mostly ones that require me to show up in person with my ID card, with someone verifying that the guy who made the CSR is actually me.

The certificate is used for mutual TLS, to authenticate the server making the request - i.e. to prove that the server making an API call to the bank's server is one I own. (I find it a pretty neat solution, and much better than having to do a theatrical dance to get a token that must be renewed every 3600 seconds.)
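
Mechanically, the client side of such a mutual-TLS call is just presenting the cert alongside the request; a sketch with a hypothetical endpoint:

    # mutual TLS: present our client cert with the request;
    # the bank's server verifies it against the CAs it trusts
    curl --cert client.crt --key client.key https://api.bank.example/v1/accounts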

Muromec

Well, it's eIDAS, of course you need to show the ID.

CamouflagedKiwi

I've worked with a major financial institution (let's just say that you'd definitely recognise the name) in a past job, and while I couldn't really see exactly what was going on with the certs they issued, I'm sure it was a pretty manual process given our observations of things changing then reverting again later. I don't think regulation was really the issue though, just bad old processes.

I wonder what they will do with the shorter validity periods. They aren't required to comply in the same way; it's not a great look not to, but I can't believe the processes will scale (for them or their customers) to renewing an order of magnitude more frequently.

electroly

Lazy vendors. I know how to set up Let's Encrypt for web servers that I directly control, but some of the web servers are embedded in commercial vendor products. If this were just about people's directly-controlled nginx/caddy webservers this would be easy. We're not talking about homelabs here.

evilduck

For all but one of my personal use cases, Tailscale + Caddy have even automated away the setup steps and autorenewal of SSL with LetsEncrypt. Just toggle on the SSL features with `tailscale cert`, maybe point Caddy at the Tailscale socket file depending on your user setup, then point an upstream at the correct hostname and you're done.
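
For the curious, the relevant command is a one-liner (hostname hypothetical; assumes HTTPS certificates are enabled for the tailnet):

    # fetch a LetsEncrypt cert for this node's tailnet name;
    # writes the .crt and .key files for services to use
    tailscale cert myhost.tailnet-name.ts.net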

gdbsjjdn

I understand OP's frustration, but the alternate view is that mandating better practices is a forcing function for businesses that otherwise don't give a shit about users or their privacy or security.

For all the annoyance of SOC2 audits, it sure does make my manager actually spend time and money on following the rules. Without any kind of external pressure I (as a security-minded engineer) would struggle to convince senior leadership that anything matters beyond shipping features.

Jeslijar

Why is a month's expiration better than a year or two years?

Why wouldn't you go with a week or a day? isn't that better than a whole month?

Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

allan_s

I think it's all about change management

A whole month puts you in the territory of "if you don't have the resources to automate it, it's still doable by a human; not enough to crush somebody, but still enough to make fully automating it an option to consider".

Hence it's better than a week or a day (too much pressure for small companies), and better than hours/minutes/seconds (which would mean going from one year to "it must be fully automated right now!").

A year or two years was not a good idea, because you lose knowledge and it creates pressure (oh no... not the scary yearly certificate renewal; I remember we broke something last year, but I don't remember what...).

With a month, you either start to fully document it, or at least have it fresh in your mind. A month also gives you time to think, each cycle, "OK, we have 30 certificates, couldn't we have a wildcard, or a certificate with several domains in it?"

> Perhaps it's time to go with another method entirely.

I think that's the way forward, it's just that it will not happen in one step, and going to one month is a first step.

Source: we have to manage a lot of certificates for a lot of different use cases (SSH, mutual TLS for authentication, classical HTTPS certificates, etc.), and we learned the hard way that, no, 2 years is not better than 1, and I agree that one month would be better.

also https://www.digicert.com/blog/tls-certificate-lifetimes-will...

ameliaquining

I think the less conservative stakeholders here would honestly rather do the six-day thing. They don't view the "still doable by a human" thing as a feature; they'd rather everyone think of certificate management as something that has to be fully automated, much like how humans don't manually respond to HTTP requests. Of course, the idea is not to make every tiny organization come up with a bespoke automation solution; rather, it's to make everyone who writes web server software designed to be exposed to the public internet think of certificate management as included within the scope of problems that are their responsibility to solve, through ACME integration or similar. There isn't any reason in principle why this wouldn't work, and I don't think there'd have been a lot of objections if it had worked this way from the beginning; resistance is coming primarily from stakeholders who don't ever want to change anything as they view it as a pure cost.

(Why not less than six days? Because I think at that point you might start to face some availability tradeoffs even if everything is always fully automated.)

belval

> it creates pressure (oh my.... not the scary yearly certificate renewal, i remember last year we broke something, we i don't remember what...)

Ah yes, let's make a terrible workflow to externally force companies who can't be arsed to document their processes to do things properly, at the expense of everyone else.

FuriouslyAdrift

I just recently had an executive-level manager ask if we could get a 100-year cert for our ERP, as the hassle of cert management and the massive cost of missing a renewal made it worth it.

He said six figures for the price would be fine. This is an instance where business needs and technology have gotten really out of alignment.

op00to

Start your own business - nginx proxy in front of ERP where you handle the SSL for them, put $$ in a trust to ensure there's enough money to pay for someone to update the cert.

9dev

How on earth would that make more sense than properly setting up ACME and forgetting about the problem for the next hundred years?? If your bespoke ERP system is really so hostile toward cert changes, put it behind a proper reverse proxy with modern TLS features and self-sign a certificate for a hundred years, and be done with it.

It'll take about fifteen minutes of time, and executive level won't ever have to concern themselves with something as mundane as TLS certificates again.
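
A sketch of those fifteen minutes, assuming OpenSSL 1.1.1+ (names hypothetical); the proxy's public side can still use ACME as usual:

    # self-signed cert valid ~100 years, for the proxy-to-ERP hop only
    openssl req -x509 -newkey rsa:4096 -nodes -days 36500 \
      -keyout erp.key -out erp.crt \
      -subj "/CN=erp.internal" \
      -addext "subjectAltName=DNS:erp.internal"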

johannes1234321

There's probably no "best" exact time, but from past experience: I have seen so many places where multi-year certificates were used and then forgotten about, until some service suddenly stopped working and people had to figure out how to replace the cert.

A short cycle ensures either automation or keeping memory fresh.

Automation of course can also be forgotten and break, but at least it's written down somewhere in some form (code), rather than living in the personal memory of a long-gone employee who used to upload certs to some CA website for manual signing.

btown

Sure, there is an argument about slippery slopes here. But the thing about the adage of "if you slowly boil a frog..." (https://en.wikipedia.org/wiki/Boiling_frog) is that not only is the biological metaphor completely false, it also ignores the fact that there can be real thresholds that can change behavior.

Imagine you run an old-school media company who's come into possession of a beloved website with decades of user-generated and reporter-generated content. Content that puts the "this is someone's legacy" in "legacy content." You get some incremental ad revenue, and you're like "if all I have to do is have my outsourced IT team do this renewal thing once a year, it's free money I guess."

But now, you have to pay that team to do a human-in-the-loop task monthly for every site you operate, which now makes the cost no longer de minimis? Or, fully modernize your systems? But since that legacy site uses a different stack, they're saying it's an entirely separate project, which they'll happily quote you with far more zeroes than your ads are generating?

All of a sudden, something that was infrequent maintenance becomes a measurable job. Even a fully rational executive sees their incentives switch - and that doesn't count the ones who were waiting for an excuse to kill their predecessors' projects. We start seeing more and more sites go offline.

We should endeavor not to break the internet. That's not "don't break the internet, conditional on fully rational actors who magically don't have legacy systems." It's "don't break the internet."

tyzoid

Pretty much any legacy system can have a modern reverse proxy in front of it. If the legacy application can't handle certs sanely, use the reverse proxy for terminating TLS.

Thorrez

>Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

Then if your CA went down for an hour, you would go down too. With 47 days, there's plenty of time for the CA to fix the outage and issue you a new cert before your current one expires.

8organicbits

Lots of ACME software supports configuring CA fallbacks, so even if a CA is down hard for an extended period you can issue certificates with the others.

Using LetsEncrypt and ZeroSSL together is a popular approach. If you need a stronger guarantee of uptime, reach for the paid options.

https://github.com/acmesh-official/acme.sh?tab=readme-ov-fil...
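
With acme.sh the CA is just a flag, so a crude fallback is a one-liner; a sketch (domain and webroot path hypothetical):

    # prefer Let's Encrypt, fall back to ZeroSSL if issuance fails
    acme.sh --issue -d example.com -w /var/www/html --server letsencrypt \
      || acme.sh --issue -d example.com -w /var/www/html --server zerossl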

yladiz

I'm not sure if you're arguing in good faith, but assuming you are, it should be pretty self-evident why you wouldn't generate the certificate dynamically on each request: it would take too much time, and so every request would be substantially slower, probably as slow as using Tor, since you would need to ask for the certificate from a central authority. In general it's all about balance: 1 month isn't necessarily better than 1 year, but the reduced timeframe means there's less complexity in keeping some revocation list and passing it to clients, and it's not so short as to require more resources from both the issuer and the requester of the certificate.

> Perhaps it's time to go with another method entirely.

What method would you suggest here?

zimpenfish

> since you would need to ask for the certificate from a central authority

Could it work that your long-term certificate (90 days, whatever) gives you the ability to sign ephemeral certificates (much like, e.g. LetsEncrypt signs your 90 day certificate)? That saves calling out to a central authority for each request.

ozim

There was an attempt at doing it differently with CRLs, but it turns out certificate revocation is not feasible in practice at web scale.

Now they are going with the next plausible solution. It seems like 47 days is something they arrived at from Let's Encrypt's experience, estimating load from current renewals, but that last part is just my speculation.

fanf2

CRL distribution at web scale is now possible thanks to work by John Schanck at Mozilla https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

But CRL sizes are also partly controlled by expiry time, shorter lifetimes produce smaller CRLs.

yjftsjthsd-h

> Why wouldn't you go with a week or a day? isn't that better than a whole month?

There is in fact work on making this an option: https://letsencrypt.org/2025/02/20/first-short-lived-cert-is...

> Why isn't it instead just a minute? or a few seconds? Wouldn't that be better?

> Why not have certificates dynamically generated constantly and have it so every single request is serviced by a new one and then destroyed after the session is over?

Eventually the overhead actually does start to matter

> Maybe the problem isn't that certificates expire too soon, maybe the problem is that humans are lazy. Perhaps it's time to go with another method entirely.

Like what?

stblack

Nobody has yet mentioned how certificates induce and support churn.

In 2025 it's not possible to create an app and release it into the world and have it work for years or decades, as was once the case.

If your "developer certificate" for app stores and ad-hoc distribution is valid for a year, then every year you must pay a "developer program fee" to remain a participant. You need to renew that cert, and you need to recompile a new version within a year. Which means you must maintain a development environment and tools on an ongoing basis for an app that may be feature- and operationally-complete.

All this is completely unnecessary, except when it comes to reinforcing the hegemony of app-store monopolists.

dns_snek

But that has nothing to do with certificates as such and everything to do with app store policies. Certificates don't induce churn - app stores do.

xg15

Yeah, we went from "software is a good that can be duplicated with no cost" to "software can be a service" to "software must be a service".


jraph

The decreasing validity time pushes for the process to be automated, and automation reduces the possible human errors.

Many things need to be run and automated when running stuff; I don't understand what makes SSL certificates special in this.

For a hobbyist, setting up certbot or acme.sh is pretty much fire and forget. For more complex settings well… you already have this complexity to manage and therefore the people managing this complexity.

You'll need to pick a client and approve it, sure, but that's once, and that's true for any tool you already use. (edit: and nginx is getting ACME support, so you might already be using this tool)

It's not the first time I've encountered them, but I really don't get the complaints. Sure, the setup may take longer. But the day-to-day operations are then easier.
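
For the hobbyist case, that fire-and-forget setup is typically two commands; a sketch assuming an nginx host and the distro's certbot package (domains hypothetical):

    # obtain a cert and let certbot edit the nginx config itself
    sudo certbot --nginx -d example.com -d www.example.com

    # the package installs a renewal cron/systemd timer; verify it works
    sudo certbot renew --dry-run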

throw0101a

> The decreasing validity time pushes for the process to be automated, and automation reduces the possible human errors.

There are environments and devices where automation is not possible: not everything that needs a cert is a Linux server, or a system where you can run your own code. (I initially got ACME/LE working on a previous job's F5s because it was RH underneath and so I could get dehydrated working (it only needs bash, cURL, and OpenSSL); not all appliances even allow that.)

I'm afraid that with the 47-day mandate we'll see the return of self-signed certs, and folks will be trained to "just accept it the first time".

jraph

In these setups the issue already exists: the appliance would have to renew its SSL certificate when it expires anyway. I believe SSL certificates should not be used anywhere they can't be renewed in the first place.

birdman3131

One of the arguments to be made is that while "automation reduces the possible human errors", it also reduces the amount of human oversight.

9dev

Oversight over… what exactly? TLS certificates don't need human oversight. If you want to see which certificates have been issued for your domains, set up certificate transparency monitoring. But thank goodness we're past paying people for comparing certificate checksums.

auguzanellato

Do you really need more oversight on renewals than a simple success/failure notification?

For new certificates you can keep the existing amount of human oversight in place, so nothing changes on that front.

everforward

Yes, because you want to know what certificates you're issuing. You could be automatically issuing and deploying certs on a system where the actual app was decommissioned. It's probably mostly a risk for legacy systems where the app gets killed, but the hardware stays live and potentially unpatched and is now vulnerable to a hacker taking it over.

With manual renewals, the cert either wouldn't get renewed and would become naturally invalid or the notification that the cert expired would prompt someone to finish the cleanup.

FuriouslyAdrift

No better way to create errors at scale than automation ;-)

dale_glass

I believe the low maximum lifetimes are becoming a thing because revocation failed.

CRLs become gigantic and impractical at the sizes of the modern internet, and OCSP has privacy issues. And there's the issue of applications never checking for revocation at all.

So the obvious solution was just to make cert lifetimes really short. No gigantic CRLs, no reaching out to the CA for every connection. All the required data is right there in the cert.

And if you thought 47 days was unreasonable, Let's Encrypt is trying 6 days. Which IMO on the whole is a great idea. Yearly, or even monthly intervals are long enough that you know a bunch of people will do it by hand, or have their renewal process break and not be noticed for months. 6 days is short enough that automation is basically a must and has to work reliably.

Andoryuuta

Semi-related: Firefox 142 was released a few days ago and is now using CRLite[0], which apparently only needs ~300kB a day for the revocation lists in their new clubcard data structure[1].

[0]: https://hacks.mozilla.org/2025/08/crlite-fast-private-and-co...

[1]: https://github.com/mozilla/clubcard

FuriouslyAdrift

Because all of my internal systems that use certs to connect (switches, routers, IoT, etc.) have manual-only interfaces (most are TFTP), I have had to go back to just running my own CA infrastructure and only using public CAs for non-corporate or mixed-audience sites/services.

It's really annoying because I have to make carve-outs for browsers and other software that refuse to connect to things with unverifiable certs, and adding my CA to some software or devices is either a pain or impossible.

It's created a hodgepodge of systems and policies and made our security posture full of holes. Back when we just did a fully delegated DigiCert wildcard (big expense) on a 3 or 5 year expiration, it was easy to manage. Now I've got execs in other depts asking about crazy long expirations because of the hassle.

9dev

Why is fronting these systems with a central haproxy with TLS termination or similar not an option?

dvdkon

Because then you have plain HTTP running over your network. The issue here (I presume) is not how to secure access over the Internet, but within an internal network.

Plenty of people leave these devices without encrypted connections, because they are in a "secure network", but you should never rely on such a thing.

whatevaa

Fronting a switch management interface with haproxy? Are you sure that is a good idea?

layer8

CRLs don’t have to be large, since they only need to list revoked certificates that also haven’t expired yet. Using sub-CAs, you can limit the maximum size any single CRL could possibly have. I’m probably missing something, but for SSL certificates on the public internet I don’t really see the issue. Where is the list of such compromised non-expired certificates that is so gigantic?

compumike

Just thinking out loud here: an ACME DNS-01 challenge requires a specific DNS TXT record to be set on _acme-challenge.<YOUR_DOMAIN> as a way of verifying ownership. Currently this is a periodic check every 45 or 90 or 365 days or whatever, which is what everyone's talking about.

Why not encode that TXT record value into the CA-signed certificate metadata? And then at runtime, when a browser requests the page, the browser can verify the TXT record as well, and cache that result for an hour or whatever you like?

Or another set of TXT records for revocation, TXT _acme-challenge-revoked.<YOUR_DOMAIN> etc?

It's not perfect, DNS is not at all secure / relatively easy to spoof for a single client on your LAN, I know that. But realistically, if someone has control of your DNS, they can just issue themselves a legit certificate anyway.
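
For comparison, the existing DNS-01 record is already easy to inspect by hand, which is roughly the lookup a browser would have to perform under this proposal (domain hypothetical):

    # what a DNS-01 validation record looks like from the outside
    dig +short TXT _acme-challenge.example.com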

ameliaquining

I think the problem with this idea is not security (as you point out, the status quo isn't really better), but availability. It's not all that uncommon for poorly designed middleboxes to block TXT records, since they're not needed for day-to-day web browsing and such.

Also, I don't see how that last paragraph follows; is your argument just that client-side DNS poisoning is an attack not worth defending against?

Also, there's maybe not much value in solving this for DNS-01 if you don't also solve it for the other, more commonly used challenge types.

ashleyn

Certbot has this down to a science. I haven't once had to touch it after setting it up. 6 days doesn't seem like an onerous requirement in light of that.

sigseg1v

The first link in the article I clicked for context led to a cert provider whose business name I recognize. Found the problem.

I inherited a process using the same thing last year and it is absolutely the most insane nonsense I can think of. These types of companies have support that is totally useless, and their entire business model is to charge 1000x or more than what competitors charge (e.g. compare their signature price to an HSM in GCP) while also providing less functionality, hoping that people will get sucked in and trapped in their ecosystem by purchasing an expensive cert such as an "EV" cert - which I'm still not totally clear on what it does, by the way, but I'm assured it's very important for security on Windows. Not security against bad guys though... it appears to be for security against no-name anti-virus vendors deleting your files if they detect you didn't pay this "EV" cert ransom. They don't need to actually detect threats based on code or behavior, they just detect whether you have enough money.

bbarnett

I've spent 15+ minutes searching, and DigiCert (linked in the article) and other cert providers all reference a vote on "Multi-Perspective Issuance Corroboration (MPIC)".

Everywhere I've read, one "must validate domain control using multiple independent network perspectives". E.g., multiple points on the internet, for DNS validation.

Yet there is not one place I can find a very specific "this is what this means". What is a "network perspective"? Searching shows it means "geographically independent regions". What's a region? How big? How far from your existing infra qualifies? How is it calculated?

Anyone know? Because apparently none of the bodies know, or wish to tell.

jaas

Section 3.2.2.9 of this document:

https://cabforum.org/working-groups/server/baseline-requirem...

You can also just search the document for the word "Perspective" to find most references to it.

ameliaquining

For convenience, here are the quotes that most directly answer the above question:

"Effective December 15, 2026, the CA MUST implement Multi-Perspective Issuance Corroboration using at least five (5) remote Network Perspectives. The CA MUST ensure that [...] the remote Network Perspectives that corroborate the Primary Network Perspective fall within the service regions of at least two (2) distinct Regional Internet Registries."

"Network Perspectives are considered distinct when the straight-line distance between them is at least 500 km."

elp

Unless I'm completely misunderstanding things, Letsencrypt has been doing this since 2020: https://letsencrypt.org/2020/02/19/multi-perspective-validat...

I.e. they check from multiple network locations in case an attacker has messed with network routing in some way. This is reasonable and imposes no extra load on the domain needing the certificate; all the extra work falls on the CA. And if Letsencrypt can get this right, there is no major reason why "Joe's Garage Certs" can't do the same thing.

This is outrage porn.


Avamander

Same as with the existing validation origins: the exact IP addresses and ASNs are not public, and neither will any future ones be. It makes it a bit harder to coordinate an attack against this infrastructure.

greyface-

It's trivial for an attacker to learn the validation origins by triggering validations of their own servers while watching the logs. Secrecy confers no advantage here.


nikanj

It means the barrier to entry for the SSL certificate market gets higher, favouring established players

wongarsu

Renting five servers 500 km apart from each other, spread across at least two continents, is hardly a difficult or costly requirement

Havoc

> I am responsible for approving SSL certificates for my company. I’ve developed a process

What does that even mean? Is he smelling them to check for freshness?

I get having a process around the first-time request, perhaps, to ensure it's set up right, but renewals?

> My stakeholders understand their roles and responsibilities

Oh no. All that’s missing here is a committee and steering group and daily stand ups

ComputerGuru

I actually don't have a problem with the SSL changes as they specifically pertain to HTTP servers - it's largely a solved problem, with automated solutions compatible with all the major players on most fronts.

But certs in every other context have become nigh impossible, except in enterprise settings with your own CA and cert servers. This covers everything from printers and network appliances to entirely non-HTTP applications like VPNs (StrongSwan and OpenVPN both have/support TLS with signed SSL certs, but place very different constraints on how those work in practice, what identities are supported, how or if wildcards work, etc.).

Very little attention has been paid to non-general-purpose and non-HTTP contexts as things currently stand.

DougN7

The last time I looked, if you ran your HTTPS service on anything other than port 443 LetsEncrypt was not for you. Maybe that’s built into ACME?

mdaniel

I can't tell if it's a typo but HTTP-01 would contact your webserver on :80 in order to successfully retrieve a very, very, very specific ACME path and does not care at all what you do with your issued TLS afterward, including what port you run it upon

Also, I know firsthand that the DNS validator also works perfectly fine, no HTTP check required
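
That decoupling between the validation port and the serving port is easy to see with certbot's standalone mode; a sketch (domain hypothetical):

    # certbot binds :80 itself just long enough to answer the HTTP-01 challenge
    sudo certbot certonly --standalone -d example.com

    # the issued cert can then be served on any port you like, e.g. from
    #   /etc/letsencrypt/live/example.com/fullchain.pem
    #   /etc/letsencrypt/live/example.com/privkey.pem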

OptionOfT

You can get LetsEncrypt certificates for endpoints that aren't publicly accessible through the DNS-01 challenge.
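
A sketch of that, assuming the domain's DNS is hosted at a provider with a certbot plugin (Cloudflare shown; credentials path hypothetical):

    # DNS-01: prove control via a TXT record, so the host itself
    # never needs to be reachable from the internet
    sudo certbot certonly --dns-cloudflare \
      --dns-cloudflare-credentials ~/.secrets/cloudflare.ini \
      -d internal.example.com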

mholt

First line:

> I am responsible for approving SSL certificates for my company.

And that is exactly what the requirements are intending to prevent. Automation is the way.

The system is working!

cr3ative

Right. This unfortunately reads like a human process has been set up where automation should have been set up, and now that hand is being forced.

The hand-waving away of certbot/ACME at the very end of the article only really goes to show that it hasn't been looked into properly, for whatever reason.

azeemba

I think a large enough org that needs many different certificates should have an internally-trusted CA. That would then allow the org to decide its own policy for all its internal-facing certificates.

Then you only have to follow the stricter rules for only the public facing certs.
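
A minimal sketch of such an internal CA with plain OpenSSL (bash process substitution for the SAN; all names hypothetical - a real deployment would more likely use something like step-ca or AD CS):

    # root CA key + self-signed CA cert, valid 10 years
    openssl genrsa -out ca.key 4096
    openssl req -x509 -new -key ca.key -sha256 -days 3650 \
      -subj "/CN=Example Internal Root CA" -out ca.crt

    # server key + CSR, then sign it with the internal CA (90-day cert)
    openssl req -newkey rsa:2048 -nodes -keyout server.key \
      -subj "/CN=app.corp.example" -out server.csr
    openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
      -CAcreateserial -days 90 -out server.crt \
      -extfile <(printf "subjectAltName=DNS:app.corp.example")

    # distribute ca.crt to clients (MDM/group policy) so it's trusted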

linsomniac

We make extensive use of self-signed certificates internally on our infrastructure, and we used to manually manage year-long certs. A few months ago I built "LessEncrypt", a dead simple ACME-inspired system for handing out certs without hijacking the HTTP port or doing DNS updates. Been running it on ~200 hosts for a few months now and it's been fantastic to have the certs manage themselves.

https://github.com/linsomniac/lessencrypt

I've toyed with the idea of adding the ability for the server component to request certs from LetsEncrypt via DNS validation, acting as a clearing house so that individual internal hosts don't need a DNS secret to get certs. However, we also put IP addresses and localhost on our internal certs, so we'd have to stop doing that to be able to get them from LetsEncrypt.

jraph

Why or in which cases is opening a dedicated port better than publishing challenges under some /.well-known path using the standard HTTP port?

(You say hijacking the HTTP port, but I don't let the ACME client take over 80/443; I make my reverse proxy point the expected path to a folder the ACME client writes to. I'm not asking for a comparison with a setup where the ACME client takes over the reverse proxy and edits its configuration by itself, which I don't like.)
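
For reference, the pattern described is certbot's webroot mode; a sketch with hypothetical paths (the reverse proxy maps /.well-known/acme-challenge/ to the same folder):

    # certbot drops the challenge file into the folder the proxy serves
    sudo certbot certonly --webroot -w /var/www/acme -d example.com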

ocdtrekkie

It used to be that only a large enough organization needed this; smaller organizations could slap their PKI wildcard on everything. Between the 47-day lifetime and the removal of client authentication as a permitted key usage of PKI certs, everyone will need a private CA.

Active Directory Certificate Services is a fickle beast but it's about to get a lot more popular again.

romaniv

The web today is a rotting carcass with various middlemen maggots crawling all over it and gorging themselves on the decay. The only real discussion to be had is what to replace it with and how to design the new protocols to avoid the same issues.

jacquesm

The reason the web is a rotting carcass is not because of the way the web is architected, it is because a lot of people's livelihoods depend on making it as rotten as possible without collapsing it entirely.

From advertising companies, search engines (ok, sometimes both), certificate peddlers and other 'service' (I use the term lightly here) providers there are just too many of these maggots that we don't actually need. We mostly need them to manage the maggots! If they would all fuck off the web would instantly be a better place.

ameliaquining

Who do you propose needs to fuck off in order for the web to not need certificate authorities?

bloomca

What do you think is better? The web is indeed questionable, but it is literally the best we have, and it is still reasonably simple to deploy a web app.

Desktop app development gets increasingly hostile and OSes introduce more and more TCC modals; you pretty much need a certificate to code-sign an app if you sideload (and app stores have a lot of hassle involved), and mobile clients have had it bad for a while (it was just announced that Android will require a dev certificate for sideloading as well).

edit: also another comment is correct, the reason it is like that is because it has the most eyes on it. In the past it was on desktop apps, which made them worse

quesera

I don't know what a replacement for the web would look like.

But it seems obvious to me that it will have to work over HTTP/QUIC, and TCP port 443.

Which prompts the obvious question ...

mdaniel

As a friendly reminder, SRV records exist and are great at fixing that magic port syndrome (unless you were hinting at the infinite corporate firewall appliances, for which I have no magic fix)

pixl97

That's the neat thing: you can't really avoid the same issues. Security is not a destination, it's a process. Every time you find a way to make something more secure, someone seems to find a new way to attack it, and so the ecosystem evolves.