
Why I no longer have an old-school cert on my HTTPS site

eadmund

> So, yes, instead of saying that "e" equals "65537", you're saying that "e" equals "AQAB". Aren't you glad you did those extra steps?

Oh JSON.

For those unfamiliar with the reason here, it’s that JSON parsers cannot be relied upon to treat numbers properly. Is 4723476276172647362476274672164762476438 a valid JSON number? Yes, of course it is. What will a JSON parser do with it? Silently truncate it to a 64-bit or 63-bit integer or a float, probably, or, if you’re very lucky, emit an error (a good JSON decoder written in a sane language like Common Lisp would of course just return the number, but few of us are so lucky).
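To see the failure mode concretely, here is a small Python sketch. Python's own json module happens to preserve big integers, so the float coercion below only simulates what many other parsers do silently:

  import json

  big = "4723476276172647362476274672164762476438"

  # Python's json module keeps arbitrary-precision ints, so this round-trips:
  print(json.loads(big))

  # Many parsers coerce every number to an IEEE-754 double; simulate that:
  mangled = int(json.loads(big, parse_int=float))
  print(mangled)              # a nearby-but-wrong 40-digit number
  print(mangled == int(big))  # False: silent corruption, no error raised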

So the only way to reliably get large integers into and out of JSON is to encode them as something else. Base64-encoded big-endian bytes is not a terrible choice. Silently doing the wrong thing is the root of many security errors, so it’s not wrong to treat every number in the protocol this way. Of course, then one loses the readability of JSON.
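As a sketch of that encoding, the JWK value "AQAB" from the quote above is just 65537 written as big-endian bytes and then base64url-encoded with the padding stripped:

  import base64

  def int_to_base64url(n: int) -> str:
      # Big-endian bytes, then base64url with the trailing '=' padding removed,
      # which is how JWK represents RSA parameters such as "e" and "n".
      raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
      return base64.urlsafe_b64encode(raw).rstrip(b"=").decode("ascii")

  print(int_to_base64url(65537))  # -> AQAB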

JSON is better than XML, but it really isn’t great. Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.

em-bee

as someone who started the s-expression task on rosettacode.org, i approve. if you need an s-expression parser for your language, look here https://rosettacode.miraheze.org/wiki/S-expressions (the canonical url is https://rosettacode.org/wiki/S-expressions but they have DNS issues right now)

drob518

Seems like a large integer can always be communicated as a vector of byte values in some specific endian order, which is easier to deal with than Base64 since a JSON parser will at least convert the byte value from text to binary for you.

But yea, as a Clojure guy sexprs or EDN would be much better.
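A rough sketch of that byte-vector idea in Python (the big-endian order here is just a choice both sides would have to agree on):

  import json

  n = 4723476276172647362476274672164762476438

  # Serialize: the integer's big-endian bytes, sent as a plain JSON array.
  payload = json.dumps(list(n.to_bytes((n.bit_length() + 7) // 8, "big")))

  # Deserialize: any JSON parser can hand back small ints without mangling them.
  restored = int.from_bytes(bytes(json.loads(payload)), "big")
  assert restored == n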

matja

Aren't JSON parsers technically not following the standard if they don't reliably store a number that is not representable by a IEEE754 double precision float?

It's a shame JSON parsers usually default to performance rather than correctness (using bignums for numbers).

q3k

Have a read through RFC7159 or 8259 and despair.

> This specification allows implementations to set limits on the range and precision of numbers accepted

JSON is a terrible interoperability standard.

matja

So a JSON parser that cannot store a 2 is technically compliant? :(

1a527dd5

I don't understand the tone of aggression against ACME and their plethora of clients.

I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.

We've been using LE for a while (since 2019 I think) for a handful of sites, and the best no-nonsense client _for us_ was https://github.com/do-know/Crypt-LE/releases.

Then this year we've done another piece of work, this time against the Sectigo ACME server, and le64 wasn't quite good enough.

So we ended up trying:-

- https://github.com/certbot/certbot on GitHub Actions, it was fine but didn't quite like the locked down environment

- https://github.com/go-acme/lego huge binary, cli was interestingly designed and the maintainer was quite rude when raising an issue

- https://github.com/rmbolger/Posh-ACME our favourite, but we ended up going with certbot on GHA once we fixed the weird issues around permissions

Edit: Re-read it. The tone isn't aimed at ACME or the clients; it's the spec itself. ACME idea good, ACME implementation bad.

lucideer

> I don't understand the tone of aggression against ACME and their plethora of clients.

> ACME idea good, ACME implementation bad.

Maybe I'm misreading but it sounds like you're on a similar page to the author.

As they said at the top of the article:

> Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.

This might seem harsh, but I think it's a pretty fair perspective to have when running security-sensitive processes.

giancarlostoro

I'm not a container guru by any means (at least not yet?) but would Docker not address these concerns?

fpoling

The issue is that the client needs to access the private key, tell the web server where various temporary files are during certificate generation (unless the client uses DNS mode), and tell the web server about a new certificate to reload.

To implement that, many clients run as root. Even if that root is in a Docker container, this is needlessly elevated privilege, especially given the (again, needless) complexity of many clients.

The sad part is that it is trivial to run most of the clients with an account with no privileges that can access very few files and use a unix socket to tell the web server to reload the certificate. But this is not done.

And then, ideally, at this point web servers should if not implement then at least facilitate ACME protocol implementations, for example by redirecting requests from ACME servers to another port with a one-liner in the config. But this is not the case.
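A minimal sketch of that split, assuming a unix socket path and an nginx reload command (neither is what any particular client actually does): a tiny privileged helper owns the reload action, and the unprivileged ACME client just writes "reload" to the socket.

  import os
  import socket
  import subprocess

  SOCKET_PATH = "/run/cert-reload.sock"  # assumed path for illustration

  def serve() -> None:
      if os.path.exists(SOCKET_PATH):
          os.unlink(SOCKET_PATH)
      srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
      srv.bind(SOCKET_PATH)
      os.chmod(SOCKET_PATH, 0o660)  # let the unprivileged client's group connect
      srv.listen(1)
      while True:
          conn, _ = srv.accept()
          with conn:
              if conn.recv(64).strip() == b"reload":
                  # Only this helper needs the rights to poke the web server.
                  subprocess.run(["nginx", "-s", "reload"], check=False)

  if __name__ == "__main__":
      serve()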

TheNewsIsHere

My reading of the article suggested to me that the author took exception to the code that touched the keying material. Docker is immaterial to that problem. I won’t presume to speak for Rachel By The Bay (mother didn’t raise a fool, after all), but I expect Docker would be met with a similar regard.

Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.

rsync

Yes, it does.

I run acme in a non privileged jail whose file system I can access from outside the jail.

So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.

Yes, I use dns mode. Yes, my dns server is also a (different) jail.


diggan

> I don't understand the tone of aggression against ACME and their plethora of clients.

The older posts on the same website provided a bit more context for me to understand today's post better:

- "Why I still have an old-school cert on my https site" - January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/

- "Another look at the steps for issuing a cert" - January 4, 2023 - https://rachelbythebay.com/w/2023/01/04/cert/

immibis

Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.

Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and it's inherent to the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.

spockz

Given that keys probably need to be shared between multiple gateway/ingresses, how common is it to just use some HSM or another mechanism of exchanging the keys with all the instances? The acme client doesn’t have to run on the servers itself.

tialaramex

> The acme client doesn’t have to run on the servers itself.

This is really important to understand if you care about either actually engineering security at some scale, or knowing what's actually going on in order to model it properly in your head.

If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.

For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server that it's a limitation of that client not of the protocol - birds can't all fly, but flying is totally one of the options for birds, we should try an eagle not an emu if we want flying.

g-b-r

> Some people don't want to be forced to run a bunch of stuff they don't understand on the server

It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.

Avamander

> It's that more complex stuff is inherently more prone to security vulnerabilities

That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.

In the current context you could take an HTTP client with a formally verified TLS stack; would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.

hannob

> Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.

Honest question:

* Do you understand OS syscalls in detail?

* Do you understand how your BIOS initializes your hardware?

* Do you understand how modern filesystems work?

* Do you understand the finer details of HTTP or TCP?

Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.

sussmannbaka

This point is so tired. I don’t understand how a thought forms in my neurons, eventually matures into a decision and how the wires in my head translate this into electrical pulses to my finger muscles to type this post so I guess I can’t have opinions about complexity.

frogsRnice

Sure - but people are still free to decide where they draw the line.

Each extra bit of software is an additional attack surface after all

fc417fc802

An OS is (at least generally) a prerequisite. If minimalism is your goal then you'd want to eliminate tangentially related things that aren't part of the underlying requirements.

If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.

kjs3

I hear some variation of this line of 'reasoning' about once a week, and it's always followed by some variation of "...and that's why we shouldn't have to do all this security stuff you want us to do".

liampulles

I appreciate the author calling this stuff out. The increasing complexity of the protocols that the web is built on is not a problem for developers who simply need to find a tool or client to use the protocol, but it is a kind of regulatory capture that ensures only established players will be the ones able to meet the spec required to run the internet.

I know ACME alone is not insurmountably complex, but it is another brick in the wall.

jeroenhd

There's something to be said for implementing stuff like this manually for the experience of having done it yourself, but the author's tone makes it sound like she hates the protocol and all the extra work she needs to do to make the Let's Encrypt setup work.

Kind of makes me wonder what kind of stack her website is running on that something like a lightweight ACME library (https://github.com/jmccl/acme-lw comes to mind, but there's a C++ library for ESP32s that should be even more lightweight) loading in the certificates isn't doing the job.

mschuster91

> but the author's tone makes it sound like she hates the protocol and all the extra work she needs to do to make the Let's Encrypt setup work.

The problem is, SSL is a fucking hot, ossified mess. Many of the noted core issues, especially the weirdnesses around encoding and bitfields, are due to historical baggage of ASN.1/X.509. It's not fun to deal with it, at all... the math alone is bad enough, but the old abstractions to store all the various things for the math are simply constrained by the technological capabilities of the late '80s.

There would have been a chance to at least partially reduce the mess with the introduction of Let's Encrypt (basically, have the protocol transmit all of the required math values in a decent form and get an X.509 cert back) and HTTP/2, but that wasn't done, because it would have required redeveloping a bunch of stuff from scratch, whereas one can build an ACME CA with, essentially, a few lines of shell script, OpenSSL and six crates of high-proof alcohol to drink away one's frustrations with OpenSSL, and integrate this with all the software and libraries that already exist.

jeroenhd

There's no easy way to "just" transmit data in a foolproof manner. You practically need to support CSRs as a CA anyway, so you might as well use the existing ASN.1+X509 system to transmit data.

ASN.1 and X509 aren't all that bad. It's a comprehensively documented binary format that's efficient and used everywhere, even if it's hidden away in binary protocols you don't look at every day.

Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.

The unnecessarily complex parts of the protocol when writing a from-the-ground-up client are complex because ACME didn't reinvent the wheel, and reused existing standard protocols instead. Unfortunately, that means having to deal with JWS, but on the other hand, it means most people don't need to write their own ACME-JWS-replacement-protocol parsers. All the other parts are complex because the problem ACME is solving is actually quite complex.

The author wrote another post (https://rachelbythebay.com/w/2023/01/03/ssl/) about the time they fell for the lies of a CA that promised an "easier" solution. That solution is pretty much ACME, but with more manual steps (like registering an account and entering domain names).

I personally think that for this (and for many other protocols, to be honest) XML would've been a better fit, as its parsers are more resilient against weird data, but these days talking about XML will make people look at you like you're proposing COBOL. Hell, even exchanging raw, binary ASN.1 messages would probably have gone over pretty well, as you need ASN.1 to generate the CSR and request the certificate anyway. But people chose "modern" JSON instead, so now we're base64-encoding values that JSON parsers would otherwise inevitably fuck up.
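For what it's worth, the CSR/ASN.1 machinery mentioned above is mostly hidden behind libraries these days; a sketch with Python's cryptography package (the domain name is a placeholder):

  from cryptography import x509
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import rsa
  from cryptography.x509.oid import NameOID

  # Generate a throwaway key and build a CSR with a SAN; the library handles
  # all of the DER encoding underneath.
  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  csr = (
      x509.CertificateSigningRequestBuilder()
      .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.com")]))
      .add_extension(x509.SubjectAlternativeName([x509.DNSName("example.com")]), critical=False)
      .sign(key, hashes.SHA256())
  )
  print(csr.public_bytes(serialization.Encoding.PEM).decode())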

sam_lowry_

I am running an HTTP-only blog and it's getting harder every year not to switch to HTTPS.

For instance, Whatsapp can not open HTTP links anymore.

projektfu

You can proxy it, which for a small server might be the best way to avoid heavy traffic, through caching at the proxy.

g-b-r

For god's sake, however complex ACME might be it's better than not supporting TLS

sam_lowry_

Why? The days of MITM boxes injecting content into HTTP traffic are basically over, and frankly they never were a thing in my part of the world.

I see no other reason to serve content over HTTPS.

JoshTriplett

> Why? The days of MITM boxes injecting content into HTTP traffic are basically over

The reason you don't see many MITM boxes injecting content into HTTP anymore is because of widespread HTTPS adoption and browsers taking steps to distrust HTTP, making MITM injection a near-useless tactic.

(This rhymes with the observation that some people now perceive Y2K as overhyped fear-mongering that amounted to nothing, without understanding that immense work happened behind the scenes to avert problems.)

DonHopkins

Are you an Anti-VAXer too?

I'll give you my 8600 when you pry it from my cold, dead LAN.

g-b-r

You see no reason for privacy, ok

neogodless

Oh parts of this remind me of having to write an HMAC signature for some API calls. I like to start in Postman, but the provider's supplied Postman collection was fundamentally broken. I tried and tried to write a pre-request script over a day or two, and ended up giving up. I want to get back to it, but it's frustrating because there's no feedback cycle. Every request fails with the same 401 Unauthorized error, so you are on your own for figuring out which piece of the script isn't doing quite the right thing.
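For readers who haven't built one of these, HMAC request signing generally boils down to something like the following sketch; the string-to-sign layout, secret, and header names here are made up, and every provider defines its own:

  import hashlib
  import hmac
  import time

  # Hypothetical values; real APIs specify their own canonical string.
  secret = b"example-shared-secret"
  method, path, timestamp = "GET", "/v1/orders", str(int(time.time()))

  string_to_sign = "\n".join([method, path, timestamp]).encode()
  signature = hmac.new(secret, string_to_sign, hashlib.sha256).hexdigest()

  headers = {
      "X-Timestamp": timestamp,   # made-up header names for illustration
      "X-Signature": signature,
  }
  print(headers)

The usual trap is that the server compares against its own canonical string, so any mismatch in field order, encoding, or whitespace produces the same opaque 401.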

orion138

Not the main point of the article, but the author’s comments on Gandi made me wonder:

What registrar do people recommend in 2025?

samch

Since you asked, I use Cloudflare for my registrar. I can’t really say if it’s objectively better or worse than anybody else, but they seemed like a good choice when Google was in the process of shutting down their registrar service.

memset

I have moved to porkbun.

I have built a registrar in the past and have a lot of arcane knowledge about how they work. Just need to figure out a way to monetize!

KolmogorovComp

Any feedback on CF one?

jsheard

CF sells domains at cost so you're not going to beat them on price, but the catch is that domains registered through them are locked to their infrastructure, you're not allowed to change the nameservers. They're fine if you don't need that flexibility and they support the TLDs you want.

sloped

Porkbun is my favorite.

graemep

It seems to be what Rachel decided on.

There must be other good ones? I'd somewhat prefer something in the UK (but I have been using Gandi, so it's not essential).

jsheard

I don't know about the UK, but if you want to keep things in Europe then I can vouch for Netim in France.

INWX in Germany also seems well regarded but I haven't used them.

mattl

Gandi prices went way way up. I've been using Porkbun too.

matja

Lucky that 415031 is prime :)

The steps described in the article sound similar to the process from the early 2000s, but I'm not sure why you'd want to make it hard for yourself now.

I use certbot with "--preferred-challenges dns-01" and "--manual-auth-hook" / "--manual-cleanup-hook" to dynamically create DNS records, rather than needing to modify the webserver config (and the security/access risks that come with that). It just needs putting the cert/key in the right place and reloading the webserver/loadbalancer.
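A sketch of what such a --manual-auth-hook can look like; certbot passes the domain and the validation token to the hook via environment variables, and update_txt_record below is a placeholder for whatever API your DNS provider exposes:

  #!/usr/bin/env python3
  """Sketch of a certbot --manual-auth-hook for DNS-01."""
  import os
  import time

  def update_txt_record(name: str, value: str) -> None:
      # Placeholder: call your DNS provider's API here.
      raise NotImplementedError

  domain = os.environ["CERTBOT_DOMAIN"]        # e.g. example.com
  token = os.environ["CERTBOT_VALIDATION"]     # value for the TXT record

  update_txt_record(f"_acme-challenge.{domain}", token)
  time.sleep(30)  # crude propagation wait; the cleanup hook removes the record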

tux3

JOSE/JWK is indeed some galactically overengineered piece of spec, but the rest seems.. fine?

There are private keys and hash functions involved. But base64url and json aren't the worst web crimes to have been inflicted upon us. It's not _that_ bad, is it?

unscaled

Yes, JOSE is certainly overengineered and JWK is arguably somewhat overengineered as well.

But "the rest" of ACME also include X.509 certificates and PKCS#10 Certificate Signing Requests, which are in turn based on ASN.1 (you're fortunate enough you only need DER encoding) and RSA parameters. ASN.1 and X.509 are devilishly complex if you don't let openssl do everything for you and even if you do. The first few paragraphs are all about making the correct CSR and dealing with RSA, and encoding bigints the right way (which is slightly different between DER and JWK to make things more fun).

Besides that I don't know much about the ACME spec, but the post mentions a couple of other things:

> So far, we have (at least): RSA keys, SHA256 digests, RSA signing, base64 but not really base64, string concatenation, JSON inside JSON, Location headers used as identities instead of a target with a 301 response, HEAD requests to get a single value buried as a header, making one request (nonce) to make ANY OTHER request, and there's more to come.

This does sound quite complex. I'm just not sure how much simpler ACME could be. Overturning the clusterfuck that is ASN.1, X.509 and the various PKCS#* standards has been a lost cause for decades now. JOSE is something I would rather do without, but if you're writing an IETF RFC, your only other option is CMS[1], which is even worse. You can try to offer a new signature format, but that would be shut down for being "simpler and cleaner than JOSE, but JOSE just has some warts that need to be fixed or avoided"[2].

I think the things you're left with that could have been simplified and accepted as a standard are the APIs themselves, like getting a nonce with a HEAD request and storing identifiers in a Location header. Perhaps you could have removed signatures (and then JOSE) completely and rely on client IDs and secrets since we're already running over TLS, but I'm not familiar enough with the protocol to know what would be the impact. If you really didn't need any PKI for the protocol itself here, then this is a magnificent edifice of overengineering indeed.

[1] https://datatracker.ietf.org/doc/html/rfc5652 [2] https://mailarchive.ietf.org/arch/msg/cfrg/4YQH6Yj3c92VUxqo-...
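For concreteness, most of the moving parts in the quoted list end up in a flattened JWS request body; a rough sketch assuming the Python cryptography package, with the account URL, nonce and endpoint below as placeholders rather than real Let's Encrypt values:

  import base64
  import json
  from cryptography.hazmat.primitives import hashes, serialization
  from cryptography.hazmat.primitives.asymmetric import padding

  def b64url(data: bytes) -> str:
      # "base64 but not really base64": URL-safe alphabet, padding stripped.
      return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

  with open("account-key.pem", "rb") as f:
      key = serialization.load_pem_private_key(f.read(), password=None)

  protected = b64url(json.dumps({
      "alg": "RS256",
      "kid": "https://acme.example/acct/123",     # account URL (placeholder)
      "nonce": "nonce-from-a-newNonce-request",   # placeholder
      "url": "https://acme.example/new-order",    # placeholder
  }).encode())

  payload = b64url(json.dumps(
      {"identifiers": [{"type": "dns", "value": "example.com"}]}
  ).encode())

  signature = key.sign(f"{protected}.{payload}".encode(),
                       padding.PKCS1v15(), hashes.SHA256())

  body = {"protected": protected, "payload": payload, "signature": b64url(signature)}
  print(json.dumps(body))  # JSON carrying base64url-wrapped JSON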

oneplane

I personally don't see the overengineering in JOSE; as you mention, a JWK (and JWKs) is not much more than the RSA key data we already know and love but formatted for Web and HTTP. It doesn't get more reasonable than that. JWTs, same story, it's just JSON data with a standard signature.

The spec (well, the RFC anyway) is indeed classically RFC-ish, but the same applies to HTTP or TCP/IP, and I haven't seen the same sort of complaints about those. Maybe it's just resistance to change? Most of the specs (JOSE, ACME etc.) aren't really complex for the sake of complexity; they solve problems that aren't simple to solve in a simple fashion. I don't think that's bad at all, it's mostly indicative of the complexity of the problem we're solving.

unscaled

I would argue that JOSE is complex for the sake of complexity. It's not nearly as bad as old cryptographic standards (X.509 and the PKCS family of standards) and definitely much better than XMLDSig, but it's still a lot more complex than it needs to be.

Some examples of gratuitous complexity:

1. Supporting too many goddamn algorithms. Keeping RSA and HMAC-SHA256 for legacy-compatible stuff, and Ed25519 plus XChaCha20-Poly1305 for regular use, would have been better. Instead we support both RSA with PKCS#1 v1.5 signatures and RSA-PSS with MGF1, as well as ECDH with every possible curve in theory (in practice only 3 NIST prime curves).

2. Plethora of ways to combine JWE and JWS. You can encrypt-then-sign or sign-then-encrypt. You can even create multiple layers of nesting.

3. Different "typ"s in the header.

4. RSA JWKs can specify the d, p, q, dq, dp and qi values of the RSA private key, even though everything can be derived from "p" and "q" (and the public modulus and exponent "n" and "e").

5. JWE supports almost every combination of key encryption algorithm, content encryption algorithm and compression algorithm. To make things interesting, almost all of the options are insecure to a certain degree, but if you're not an expert you wouldn't know that.

6. Oh, and JWE supports password-based key derivation for encryption.

7. On the other hand, JWS is smarter. It doesn't need this fancy shmancy password-based key derivation thingamajig! Instead, you can just use HMAC-SHA256 with any key length you want. So if you fancy signing your tokens with a cool password like "secret007" and feel like you're a cool guy with sunglasses in a 1990s movie, just go ahead!

This is just some of the things of the top of my head. JOSE is bonkers. It's a monument to misguided overengineering. But the saddest thing about JOSE is that it's still much simpler than the standards which predated it: PKCS#7/CMS, S/MIME and the worst of all - XMLDSig.

oneplane

It's bonkers if you don't need it, just like JSONx (JSON-as-XML) is bonkers if you don't need it. But standards aren't for a single individual need, if they were they wouldn't be standards. And some people DO need these variations.

Take your argument about order of operations or algorithms. Just because you might not need to do it in an alternate order or use a legacy (and broken) algorithm doesn't mean nobody else does. Keep in mind that this standard isn't exactly new, and isn't only used in startups in San Francisco. There are tons of systems that use it that might only get updated a handful of times each year. Or long-lived JWTs that need to be supported for 5 years. Not going to replace hardware that is out on a pole somewhere just because someone thought the RFC was too complicated.

Out of your arguments, none of them require you to do it that way. Example: you don't have to supply d, dq, dp or qi if you don't want to. But if you communicate with some embedded device that will run out of solar power before it can derive them from the RSA primitives, you will definitely help it by just supplying it on the big beefy hardware that doesn't have that problem. It allows you to move energy and compute cost wherever it works best for the use case.

Even simpler: if you use a library where you can specify a RSA Key and a static ID, you don't have to think about any of this; it will do all of it for you and you wouldn't even know about the RFC anyway.

The only reason someone would need to know the details is if you don't use a library or if you are the one writing it.

lmz

Imagine coming from JWK and having to encode that public key into a CSR or something with that attitude.

oneplane

Imagine writing your own security software when there are proven systems that just take that problem out of your hands so you don't need to complain about it.

tialaramex

One of the things this gestures at might as well get a brief refresher here:

Subject Alternative Name (SAN) is not an alternative in the sense that it's an alias. SANs exist because the X.509 certificate standard is, as its name might suggest, intended for the X.500 directory system, a system from the 20th century which was never actually deployed. Mozilla (back then the Netscape Corporation) didn't like re-inventing wheels, and this standard for certificates already existed, so they used it in their new "Secure Sockets" technology; but it has no concept of Internet names, so at first they just put names in plain text. However, X.500 was intended to be infinitely extensible, so we can just invent an alternative naming scheme, and that's what the SANs are. That's why they're mandatory for certificates in the Web PKI today: these are the Internet's names for things, so they're mandatory when talking about the Internet. They're described in detail in PKIX, the IETF document standardising the use of X.509 for the Internet.

There are several types of name we can express as SANs but in a certificate the two you'll commonly see are dnsName - the same ASCII names you'd see in URLs like "news.ycombinator.com" or "www.google.com" and ipAddress - a 32-bit integer typically spelled as four dotted decimals 10.20.30.40 [yes or an IPv6 128-bit integer will work here, don't worry]

Because the SANs aren't just free text a machine can reliably parse them which would doubtless meet Rachel's approval. The browser can mindlessly compare the bytes in the certificate "news.ycombinator.com" with the bytes in the actual DNS name it looked up "news.ycombinator.com" and those match so this cert is for this site.

With free text in a CN field like a 1990s SSL certificate (or, sadly, many certificates well into the 2010s, because it was difficult to get issuers to comply properly with the rules and stop spewing nonsense into CN) it's entirely possible to see a certificate for " 10.200.300.400" which, well, what's that for? Is that leading space significant? Is that an IP address? But those numbers don't even fit in one byte each; I hope our parser copes!
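As a concrete illustration, with Python's cryptography package the SANs come back as typed entries rather than free text (the file path is a placeholder):

  from cryptography import x509
  from cryptography.x509.oid import ExtensionOID

  with open("cert.pem", "rb") as f:               # placeholder path
      cert = x509.load_pem_x509_certificate(f.read())

  san = cert.extensions.get_extension_for_oid(
      ExtensionOID.SUBJECT_ALTERNATIVE_NAME
  ).value

  print(san.get_values_for_type(x509.DNSName))    # e.g. ['news.ycombinator.com']
  print(san.get_values_for_type(x509.IPAddress))  # e.g. [IPv4Address('10.20.30.40')]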

p_ing

Did browsers ever strictly require a SAN? They certainly didn't even as of ~10 years ago. Yes, it is "required", but CN only has worked for quite some time. I find this trips up some IT admins who are still used to only supplying a CN and don't know what a SAN is.

tialaramex

> Did browsers ever strictly require a SAN;

Yes, all the popular browsers require this.

> they certainly didn't even as of ~10 years ago?

That's true, ten years ago it was likely that if a browser required this they would see unacceptably high failure rates because CAs were non-compliant and enforcement wasn't good enough. Issuing certs which would fail PKIX was prohibited, but so is speeding and yet people do that every day. CT improved our ability to inspect what was being issued and monitor fixes.

> Yes, it is "required", but CN only has worked for quite some time.

No trusted CA will issue "CN only" for many years now, if you could obtain such a certificate you'd find it won't work in any popular browser either. You can read the Chromium or Mozilla source and there just isn't any code to look in CN, the browser just parses the SANs.

> I find this tricks up some IT admins who are still used to only supplying a CN and don't know what a SAN is.

In most cases this is a sign you're using something crap like openssl's command line to make CSRs, and so you're probably expending a lot of effort filling out values which will be ignored by the CA, and yet not being offered parameters you actually need.

p_ing

You're forgetting that browsers deal with plenty of internal-only CAs. Just because a public CA won't issue a CN only cert doesn't mean an internal CA won't. That is why I'm curious to know if browsers /strictly/ require SANs, yet. Not something I've tested in a long time since I started supporting public-only websites/cloud infra.

As you noted about OpenSSL, Windows CertSvr will allow you to do CN only, too.


z3t4

At some stage you need to update your TXT records, and if you register a wildcard domain you have to do it twice for the same request! And you have to propagate these TXT records twice to all your DNS servers, and wait for some third party like Google DNS to request the TXT record. And it all has to be done within a minute in order to not time out. DNS servers are not made to change records from one second to the next and rely heavily on caching, so I'm lucky that I run my own DNS servers, but good luck doing this if you are using something like an anycast DNS service.

castillar76

Fortunately that’s only needed if you’re using the DNS validation method — necessary if you’re getting wildcards (but…eek, wildcards). For HTTP-01, no DNS changes are needed unless you want to add CAA records to block out other CAs.

XorNot

Or just use the HTTP protocol, which works fine.

fpoling

For wildcard certificates DNS is the only option.