Why I no longer have an old-school cert on my HTTPS site
436 comments · May 23, 2025
eadmund
cortesoft
> Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.
I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data. Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes it very tedious to write by hand. If you have a JSON object you want to hand edit, you can just type... for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
You might not think the ability to hand generate, read, and edit is important, but I am pretty sure that is a big reason JSON has won in the end.
Oh, and the Ruby JSON parser handles that large number just fine.
motorest
> I feel like not understanding why JSON won out is being intentionally obtuse. JSON can easily be hand written, edited, and read for most data.
You are going way out of your way to try to come up with ways to rationalize why JSON was a success. The ugly truth is far simpler than what you're trying to sell: it was valid JavaScript. JavaScript WebApps could parse JSON with a call to eval(). No deserialization madness like XML, no need to import a parser. Just fetch a file, pass it to eval(), and you're done.
nextaccountic
In other words, the thing that made JSON initially succeed was also a giant security hole
jaapz
But also, all the other reasons written by the person you replied to
amne
it's in the name after all: [j]ava[s]cript [o]bject [n]otation
pharrington
The entire reason ACME exists is because you are never writing or reading the CSR by hand.
So of course, ACME is based around a format whose entire raison d'être is being written and read by hand.
It's weird.
thayne
The reason json is a good format for ACME isn't that it is easy to read and write by hand[1], but that most languages have at least one decent json implementation available, so it is easier to implement clients in many different languages.
[1]: although being easy to read by humans is an advantage when debugging why something isn't working.
eadmund
> I feel like not understanding why JSON won out is being intentionally obtuse.
I didn’t feel like my comment was the right place to shill for an alternative, but rather to complain about JSON. But since you raise it.
> JSON can easily be hand written, edited, and read for most data.
So can canonical S-expressions!
> Canonical S-expressions are not as easy to read and much harder to write by hand; having to prefix every atom with a length makes is very tedious to write by hand.
Which is why the advanced representation exists. I contend that this:
(urn:ietf:params:acme:error:malformed
(detail "Some of the identifiers requested were rejected")
(subproblems ((urn:ietf:params:acme:error:malformed
(detail "Invalid underscore in DNS name \"_example.org\"")
(identifier (dns _example.org)))
(urn:ietf:params:acme:error:rejectedIdentifier
(detail "This CA will not issue for \"example.net\"")
(identifier (dns example.net))))))
is far easier to read than this (the first JSON in RFC 8555):
{
"type": "urn:ietf:params:acme:error:malformed",
"detail": "Some of the identifiers requested were rejected",
"subproblems": [
{
"type": "urn:ietf:params:acme:error:malformed",
"detail": "Invalid underscore in DNS name \"_example.org\"",
"identifier": {
"type": "dns",
"value": "_example.org"
}
},
{
"type": "urn:ietf:params:acme:error:rejectedIdentifier",
"detail": "This CA will not issue for \"example.net\"",
"identifier": {
"type": "dns",
"value": "example.net"
}
}
]
}
> for a Canonical S-expression, you have to count how many characters you are typing/deleting, and then update the prefix.
As you can see, no you do not.
thayne
Your example uses s-expressions, not canonical s-expressions. Canonical s-expressions[1] are basically a binary format: each atom/string is prefixed by the decimal length of the string and a colon. Their advantage over regular s-expressions is that there is no need to escape or quote strings with whitespace, and there is only a single possible representation for a given data structure. The disadvantage is that they are much harder for humans to read and write.
As for s-expressions vs JSON, there are pros and cons to each. S-expressions don't have any way to encode type information in the data itself; you need a schema to know whether a certain value should be treated as a number or a string. And it's subjective which is more readable.
[1]: https://en.m.wikipedia.org/wiki/Canonical_S-expressions
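To illustrate, here is a minimal Python sketch (function name mine) of the length-prefixed encoding described above:

def encode_csexp(value) -> bytes:
    # An atom is written as <decimal length>:<bytes>; a list wraps its
    # elements in parentheses with no separators (per draft-rivest-sexp).
    if isinstance(value, bytes):
        return str(len(value)).encode() + b":" + value
    return b"(" + b"".join(encode_csexp(v) for v in value) + b")"

print(encode_csexp([b"detail", b"Invalid underscore in DNS name"]))
# b'(6:detail30:Invalid underscore in DNS name)'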
eximius
For you, perhaps. For me, the former is denser, but crossing into a "too dense" region. The JSON has indentation which is easy on my poor brain. Also, it's nice to differentiate between lists and objects.
But, I mean, they're basically isomorphic with like 2 things exchanged ({} and [] instead of (); implicit vs explicit keys/types).
eddythompson80
> is far easier to read than this (the first JSON in RFC 8555):
It's not for me. I'd literally take anything over csexps. Like there is nothing that I'd prefer it to. If it's the only format around, then I'll just roll my own.
NooneAtAll3
> I contend that this is far easier to read than this
oh boi, that's some Lisp-like vs C-like level of holywar you just uncovered there
and wooow my opinion is opposite of yours
remram
This doesn't help with numbers at all, though. Any textual representation of numbers is going to have the same problem as JSON.
michaelcampbell
> is far easier to read than this
Readability is a function of the reader, not the medium.
lisper
> Canonical S-expressions are not as easy to read and much harder to write by hand
You don't do that, any more than you read or write machine code in binary. You read and write regular S-expressions (or assembly code) and you translate that into and out of canonical S expressions (or machine code) with a tool (an assembler/disassembler).
cortesoft
I have written by hand and read JSON hundreds of times. You can tell me I shouldn’t, but I am telling you I do. Messing around with an API with curl, tweaking a request object slightly for testing something, etc.
Reading happens even more times. I am constantly printing out API responses when I am coding, verifying what I am seeing matches what I am expecting, or trying to get an idea of the structure of something. Sure, you can tell me I shouldn’t do this and I should just read a spec, but in my experience it is often much faster just to read the JSON directly. Sometimes the spec is outdated, just plain wrong, or doesn’t exist. Being able to read the JSON is a regular part of my day.
beeflet
you can use a program to convert between s-expressions and a more readable format. In a world where canonical s-expressions rule, this "more readable format" would probably be an ordinary s-expression
tsimionescu
This seems like a just-so story. Your explanation could make some sense if we were comparing {"e" : "AQAB"} to {"e" : 65537}, but there is no reason why that should be the alternative. The JSON {"e" : "65537"} will be read precisely the same way by any JSON parser out there. Converting the string "65537" to the number 65537 is exactly as easy (or hard), but certainly unambiguous, as converting the string "AQAB" to the same number.
Of course, if you're doing this in JS and have reasons to think the resulting number may be larger than the precision of a double, you have a huge problem either way. Just as you would if you were writing this in C and thought the number may be larger than what can fit in a long long. But that's true regardless of how you represent it in JSON.
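Concretely, both representations decode to the same integer (a Python sketch):

import base64
int("65537")                                             # 65537, from the decimal string
int.from_bytes(base64.urlsafe_b64decode("AQAB"), "big")  # 65537, from the base64 form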
pornel
For very big numbers (that could appear in these fields), generating and parsing a base 10 decimal representation is way more cumbersome than using their binary representation.
The DER encoding used in the TLS certificates uses the big endian binary format. OpenSSL API wants the big endian binary too.
The format used by this protocol is a simple one.
It's almost exactly the format that is needed to use these numbers, except JSON can't store binary data directly. Converting binary to base 64 is a simple operation (just bit twiddling, no division), and it's easier than converting arbitrarily large numbers between base 2 and base 10. The 17-bit value happens to be an easy one, but other values may need thousands of bits.
It would be silly for the sender and recipient to need to use a BigNum library when the sender has the bytes and the recipient wants the bytes, and neither has use for a decimal number.
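To make the "just bit twiddling" point concrete, here is a sketch (in Python; the function name is mine) of unpadded base64url built from shifts and masks alone, with no division and no bignum:

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_"

def b64url(data: bytes) -> str:
    out = []
    # Whole 3-byte groups: 24 bits in, four 6-bit symbols out.
    for i in range(0, len(data) - len(data) % 3, 3):
        n = data[i] << 16 | data[i + 1] << 8 | data[i + 2]
        out += [ALPHABET[n >> 18], ALPHABET[n >> 12 & 63],
                ALPHABET[n >> 6 & 63], ALPHABET[n & 63]]
    # A trailing 1 or 2 bytes yield 2 or 3 symbols (JWS uses no '=' padding).
    rem = len(data) % 3
    if rem == 1:
        n = data[-1] << 16
        out += [ALPHABET[n >> 18], ALPHABET[n >> 12 & 63]]
    elif rem == 2:
        n = data[-2] << 16 | data[-1] << 8
        out += [ALPHABET[n >> 18], ALPHABET[n >> 12 & 63], ALPHABET[n >> 6 & 63]]
    return "".join(out)

print(b64url(b"\x01\x00\x01"))  # AQAB, i.e. the exponent 65537

Converting the same bytes to decimal instead requires repeated division of the whole big number.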
ncruces
Go can decode numbers losslessly as strings: https://pkg.go.dev/encoding/json#Number
json.Number is (almost) my “favorite” arbitrary decimal: https://github.com/ncruces/decimal?tab=readme-ov-file#decima...
I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.
eadmund
> Go can decode numbers losslessly as strings: https://pkg.go.dev/encoding/json#Number
Yup, and if you’re using JSON in Go you really do need to be using Number exclusively. Anything else will lead to pain.
> I'm half joking, but I'm not sure why S-expressions would be better here. There are LISPs that don't do arbitrary precision math.
Sure, but I’m referring specifically to https://www.ietf.org/archive/id/draft-rivest-sexp-13.html, which only has lists and bytes, and so numbers are always just strings and it’s up to the program to interpret them.
mise_en_place
For actual serialization/deserialization, JSON becomes very brittle. It's better to use something like Protobuf or Cap'n Proto for such cases.
marcosdumay
What I don't understand is why you (and a lot of other people) just expect S-expression parsers to not have the exact same problems.
eadmund
Because canonical S-expressions don’t have numbers, just atoms (i.e., byte sequences) and lists. It is up to the using code to interpret "34" as the string "34" or the number 34 or the number 13,108 or the number 13,363, which is part of the protocol being used. In most instances, the byte sequence is probably a decimal number.
Now, S-expressions as used for programming languages such as Lisp do have numbers, but again Lisp has bignums. As for parsers of Lisp S-expressions written in other languages: if they want to comply with the standard, they need to support bignums.
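Concretely, the four readings in Python:

atom = b"34"
atom.decode("ascii")            # the string "34"
int(atom)                       # the decimal number 34
int.from_bytes(atom, "big")     # 0x3334 = 13108
int.from_bytes(atom, "little")  # 0x3433 = 13363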
tsimionescu
You can write JSON that exclusively uses strings, so this is not really relevant. Sure, maybe it can be considered an advantage that s-expressions force you to do that, though it can also be seen just as easily as a disadvantage. It certainly hurts readability of the format, which is not a 0-cost thing. This is also why all Lisps use more than plain sexps to represent their code: having different syntax for different types helps.
its-summertime
"it can do one of 4 things" sounds very much like the pre-existing issue with JSON
motorest
> Because canonical S-expressions don’t have numbers, just atoms (i.e., byte sequences) and lists.
If types other than string and a list bother you, why don't you stick with those types in JSON?
01HNNWZ0MV43FF
I think they mean that Common Lisp has bigints by default
ryukafalz
As do Scheme and most other Lisps I'm familiar with, and integers/floats are typically specified to be distinct. I think we'd all be better off if that were true of JSON as well.
I'd be happy to use s-expressions instead :) Though to GP's point, I suppose we might then end up with JS s-expression parsers that still treat ints and floats interchangeably.
josephg
The funny thing about this is that JavaScript the language has had support for BigIntegers for many years at this point. You can just write 123n for a bigint of 123.
JSON could easily be extended to support them - but there’s no standards body with the authority to make a change like that. So we’re probably stuck with json as-is forever. I really hope something better comes along that we can all agree on before I die of old age.
While we’re at it, I’d also love a way to embed binary data in json. And a canonical way to represent dates. And comments. And I’d like a sane, consistent way to express sum types. And sets and maps (with non string keys) - which JavaScript also natively supports. Sigh.
aapoalas
It's more a problem of support and backwards compatibility. JSON and parsers for it are so ubiquitous, and the spec so completely lacks any versioning support, that adding a feature would be a breaking change of horrible magnitude, on nearly all levels of the modern software infrastructure stack. I wouldn't be surprised if some CPUs might break from that :D
JSON is a victim of its success: it has become too big to fail, and too big to improve.
Sammi
There are easy workarounds to getting bigints in JSON: https://github.com/GoogleChromeLabs/jsbi/issues/30#issuecomm...
josephg
Sure; and I can encode maps and sets as entry lists. Binary data as strings and so on. But I don’t want to. I shouldn’t have to.
The fact remains that json doesn’t have native support for any of this stuff. I want something json-like which supports all this stuff natively. I don’t want to have to figure out if some binary data is base64 encoded or hex encoded or whatever, and hack around jackson or serde or javascript to encode and decode my objects properly. Features like this should be built in.
kangalioo
But what's wrong with sending the number as a string? `"65537"` instead of `"AQAB"`
comex
The question is how best to send the modulus, which is a much larger integer. For the reasons below, I'd argue that base64 is better. And if you're sending the modulus in base64, you may as well use the same approach for the exponent sent along with it.
For RSA-4096, the modulus is 4096 bits = 512 bytes in binary, which (for my test key) is 684 characters in base64 or 1233 characters in decimal. So the base64 version is much smaller.
Base64 is also more efficient to deal with. An RSA implementation will typically work with the numbers in binary form, so for the base64 encoding you just need to convert the bytes, which is a simple O(n) transformation. Converting the number between binary and decimal, on the other hand, is O(n^2) if done naively, or O(some complicated expression bigger than n log n) if done optimally.
Besides computational complexity, there's also implementation complexity. Base conversion is an algorithm that you normally don't have to implement as part of an RSA implementation. You might argue that it's not hard to find some library to do base conversion for you. Some programming languages even have built-in bigint types. But you typically want to avoid using general-purpose bigint implementations for cryptography. You want to stick to cryptographic libraries, which typically aim to make all operations constant-time to avoid timing side channels. Indeed, the apparent ease-of-use of decimal would arguably be a bad thing since it would encourage implementors to just use a standard bigint type to carry the values around.
You could argue that the same concern applies to base64, but it should be relatively safe to use a naive implementation of base64, since it's going to be a straightforward linear scan over the bytes with less room for timing side channels (though not none).
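Those size claims are easy to check (a Python sketch, with 2^4095 standing in for a real 4096-bit modulus):

import base64

modulus = 1 << 4095                 # smallest 4096-bit integer, a stand-in
raw = modulus.to_bytes(512, "big")

len(raw)                            # 512 bytes in binary
len(base64.urlsafe_b64encode(raw))  # 684 characters in base64 (padded)
len(str(modulus))                   # 1233 decimal digits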
nssnsjsjsjs
Ah OK so: readable, efficient, consistent; pick 2.
shiandow
Converting large integers to decimal is nontrivial, especially when you don't trust languages to handle large numbers.
Why you wouldn't just use the hexadecimal that everyone else seems to use I don't know. There seems to be a rather arbitrary cutoff where people prefer base64 to hexadecimal.
chipsa
Size: base 64 is 2/3 the number of bytes as hex.
red_admiral
This sounds like an XY problem to me. There is already an alternative that is at least as secure and only requires a single base-64 string: Ed25519.
deepsun
PHP (at least old versions I worked with) treats "65537" and 65537 similarly.
red_admiral
That sounds horrible if you want to transmit a base64 string where the length is a multiple of 3 and for some cursed reason there's no letters or special characters involved. If "7777777777777777" is your encoded string because you're sending a string of periods encoded in BCD, you're going to have a fun time. Perhaps that's karma for doing something braindead in the first place though.
foobiekr
Cost.
ayende
Too likely that this would not work, because of a silent conversion to a number somewhere along the way.
iforgotpassword
Then just prefixing it with an underscore or any random letter would've been fine, but of course base64-encoding the binary representation makes you look so much smarter.
JackSlateur
Is this ok?
Python 3.13.3 (main, May 21 2025, 07:49:52) [GCC 14.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import json
>>> json.loads('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
47234762761726473624762746721647624764380000000000000000000000000000000000000000000
teddyh
I prefer
>>> import json, decimal
>>> j = "47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
>>> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
Decimal('47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
This way you avoid this problem:
>>> import json
>>> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
>>> json.loads(j)
0.47234762761726473
And instead can get:
>>> import json, decimal
>>> j = "0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000"
>>> json.loads(j, parse_float=decimal.Decimal, parse_int=decimal.Decimal)
Decimal('0.47234762761726473624762746721647624764380000000000000000000000000000000000000000000')
sevensor
Just cross your fingers and hope for the best if your data is at any point decoded by a JSON library that doesn't support bigints? Python's ability to handle them is beside the point if they get mangled into IEEE 754 doubles along the way.
jazzyjackson
yes, python falls into the sane language category with arbitrary-precision arithmetic
faresahmed
Not so much,
>>> s="1"+"0"*4300
>>> json.loads(s)
...
ValueError: Exceeds the limit (4300 digits) for integer string conversion:
value has 4301 digits; use sys.set_int_max_str_digits() to increase the limit
This was done to prevent DoS attacks 3 years ago and has been backported to at least CPython 3.9, as it was considered a CVE. Relevant discussion: https://news.ycombinator.com/item?id=32753235
Your sibling comment suggests using decimal.Decimal which handles parsing >4300 digit numbers (by default).
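If you control the parsing code, the limit can also be raised explicitly (Python 3.11+ and the security backports):

import sys, json
sys.set_int_max_str_digits(100000)  # lift the int/str conversion limit
json.loads("1" + "0" * 4300)        # now parses instead of raising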
drob518
Seems like a large integer can always be communicated as a vector of byte values in some specific endian order, which is easier to deal with than Base64 since a JSON parser will at least convert the byte value from text to binary for you.
But yea, as a Clojure guy sexprs or EDN would be much better.
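A sketch of that byte-vector idea in Python (it round-trips without base64, though it is considerably bulkier on the wire):

import json

n = 65537
payload = json.dumps(list(n.to_bytes(3, "big")))          # "[1, 0, 1]"
back = int.from_bytes(bytes(json.loads(payload)), "big")  # 65537 again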
mcpherrinm
I’m the technical lead for the Let’s Encrypt SRE/infra team. So I spend a lot of time thinking about this.
The salt here is deserved! JSON Web Signatures are a gnarly format, and the ACME API is pretty enthusiastic about being RESTful.
It’s not what I’d design. I think a lot of that came via the IETF wanting to use other IETF standards, and a dash of design-by-committee.
A few libraries (for JWS, JSON and HTTP) go a long way to making it more pleasant but those libraries themselves aren’t always that nice, especially in C.
I’m working on an interactive client and accompanying documentation to help here too, because the RFC language is a bit dense and often refers to other documents too.
cryptonector
> JSON Web Signatures are a gnarly format
They are??
As someone who wallows in ASN.1, Kerberos, and PKI, I don't find JWS so "gnarly". Even if you're open-coding a JSON Web Signature it will be easier than to open-code S/MIME, CMS, Kerberos, etc. Can you explain what is so gnarly about JWS?
Mind you, there are problems with JWT. Mainly that HTTP user-agents don't know how to fetch the darned things, because there is no standard for how to find out how to fetch them, when you should honor a request for them, etc.
mcpherrinm
I'd take ASN.1/DER over JWS any day :) It's the weekend and I don't feel I have the energy to launch a full roast of JWS, but to give some flavour, I'll link
https://auth0.com/blog/critical-vulnerabilities-in-json-web-...
Implementations can be written securely, but it's too easy to make mistakes.
Yeah, there's worse stuff from the 90s around, but JOSE and ACME is newer than that - we could have done better!
Alas, it's not changing now.
I think ASN.1 has some warts, but I think a lot of the problems with DER are actually in creaky old tools. People seem way happier with Protobuf, for example: I think that's largely down to tooling.
cryptonector
The whole not-validating-the-signatures thing is a problem, yes. That can happen with PKI certificates too, but those have been around longer, and (perhaps because one needed an ASN.1 stack) PKI stacks were written only by people with more experience than those we see in the case of JWS?
I think Protocol Buffers is a disaster. Its syntax is worse than ASN.1 because you're required to write in tags, and it is a TLV encoding very similar to DER so... why _why_ does PB exist? Don't tell me it's because there were no ASN.1 tools around -- there were no PB tools around either!
asimops
Don't you think you are falling for classic whataboutism here?
Just because ASN.1 and friends are exceptionally bad, it does not mean that JSON Web * cannot be bad also.
cryptonector
> Don't you think you are falling for classic whataboutism here?
I do not. This sort of codec complexity can't be avoided. And ASN.1 is NOT "exceptionally bad" -- I rather like ASN.1. The point was not "wait till you see ASN.1", but "wait till you see Kerberos" because Kerberos requires a very large amount of client-side smarts -- too much really because it's got more than 30 years of cruft.
dwedge
What is she talking about that you have to pay for certs if you want more than 3? Am I about to get a bill for the past 5 years or did she just misunderstand?
belorn
to quote the article (or rather, the 2023 article which is the one mentioning the number 3).
"Somehow, a couple of weeks ago, I found this other site which claimed to be better than LE and which used relatively simple HTTP requests without a bunch of funny data types."
"This is when the fine print finally appeared. This service only lets you mint 90 day certificates on the free tier. Also, you can only do three of them. Then you're done. 270 days for one domain or 3 domains for 90 days, and then you're screwed. Isn't that great? "
She doesn't mention what this "other site" is.
jchw
FWIW, it is ZeroSSL. I want there to be more major ACME providers than just LE, but I'm not sure about ZeroSSL, personally. It seems to have the same parent company as IdenTrust (HID Global Corporation). Probably a step up from Honest Achmed but recently I recall people complaining that their EV code signing certificates were not actually trusted by Windows which is... Interesting.
tasuki
> and the ACME API is pretty enthusiastic about being RESTful
Without looking at it, are you sure about that?
I once used to know what REST meant. Are you doing REST as in HATEOAS or as in "we expose some http endpoints"?
mcpherrinm
Everything is an object, identified by a URL. You start from a single URL (the directory), and you can find all the rest of the resources from URLs provided from there.
ACME models everything as JSON objects, each of which is identified by URL. You can GET them, and they link to other objects with Location and Link headers.
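For example, the whole API bootstraps from that one directory object. A minimal sketch using only Python's standard library (the URL is Let's Encrypt's production directory; the key names are from RFC 8555):

import json, urllib.request

with urllib.request.urlopen("https://acme-v02.api.letsencrypt.org/directory") as resp:
    directory = json.load(resp)

directory["newNonce"]    # where to fetch an anti-replay nonce
directory["newAccount"]  # where to register an account key
directory["newOrder"]    # where to start a certificate order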
To quote from the blog post:
> Dig around in the headers of the response, looking for one named "Location". Don't follow it like a redirection. Why would you ever follow a Location header in a HTTP header, right? Nope, that's your user account's identifier! Yes, you are a URL now.
I don't know if it's the pure ideal of HATEOAS, but it's about as close as I've seen in use.
It has the classic failing though: it’s used by scripts which know exactly what they want to do (get a cert), so the clients still hardcode the actions they need. It just adds a layer of indirection as they need to keep track of URLs.
I would have preferred if it was just an RPC-over-HTTP/JSON with fixed endpoints and numeric object IDs.
tasuki
That's pretty good! Better than 99% claims of REST for sure! Thanks for the long reply.
peanut-walrus
REST has for a long long time meant "rpc via json over http". HATEOAS is a mythical beast nobody has ever seen in the wild.
hamburglar
Eh, I think that’s what it meant for a while. I’ve now interacted with enough systems that have rigor about representing things as resources that have GET urls and doing writes with POST etc that I don’t think it’s always the ad hoc RPC fest it once was. It may be rare to see according-to-hoyle HATEOAS but REST is definitely no longer in the “nobody actually does this” category.
1a527dd5
I don't understand the tone of aggression against ACME and their plethora of clients.
I know it isn't a skill issue because of who the author is. So I can only imagine it is some sort of personal opinion that they dislike ACME as a concept or the tooling around ACME in general.
We've been using LE for a while (since 2019 I think) for handful of sites, and the best nonsense client _for us_ was https://github.com/do-know/Crypt-LE/releases.
Then this year we've done another piece of work this time against the Sectigo ACME server and le64 wasn't quite good enough.
So we ended up trying:-
- https://github.com/certbot/certbot on GitHub Actions, it was fine but didn't quite like the locked down environment
- https://github.com/go-acme/lego huge binary, cli was interestingly designed and the maintainer was quite rude when raising an issue
- https://github.com/rmbolger/Posh-ACME our favourite, but we ended up going with certbot on GHA once we fixed the weird issues around permissions
Edit: Re-read it. The tone isn't aimed at ACME or the clients. It's the spec itself. ACME idea good, ACME implementation bad.
lucideer
> I don't understand the tone of aggression against ACME and their plethora of clients.
> ACME idea good, ACME implementation bad.
Maybe I'm misreading but it sounds like you're on a similar page to the author.
As they said at the top of the article:
> Many of the existing clients are also scary code, and I was not about to run any of them on my machines. They haven't earned the right to run with privileges for my private keys and/or ability to frob the web server (as root!) with their careless ways.
This might seem harsh, but I think it's a pretty fair perspective to have when running security-sensitive processes.
thayne
No, the author seems opposed to the specification of ACME, not just the implementation of the clients.
And a lot of the complaints ultimately boil down to not liking JWS. And I'm not really sure what she would have preferred there. ASN.1, which is even more complicated? Some bespoke format where implementations can't make use of existing libraries?
imtringued
This is exactly the impression I got here.
I would have had sympathy for the disdain for certbot, but certbot wasn't called out and that isn't what the blog post is about at all.
dangus
I disagree, the author is overcomplicating and overthinking things.
She doesn't "trust" tooling that basically the entire Internet including major security-conscious organizations are using, essentially letting perfect get in the way of good.
I think if she were a less capable engineer she would just set that shit up using the easiest way possible and forget about it like everyone else, and nothing bad would happen. Download nginx proxy manager, click click click, boom I have a wildcard cert, who cares?
I mean, this is her https site, which seems to just be a blog? What type of risk is she mitigating here?
Essentially the author is so skilled that she's letting perfect get in the way of good.
I haven't thought about certificates for years because it's not worth my time. I don't really care about the tooling, it's not my problem, and it's never caused a security issue. Put your shit behind a load balancer and you don't even need to run any ACME software on your own server.
nothrabannosir
Sometimes I wonder how y’all became programmers. I learned basically everything by SRE-larping on my shitty nobody-cares-home-server for years and suddenly got paid to do it for real.
Who do you think they hire to manage those LBs for you? People who never ran any ACME software, or people who have a blog post turning over every byte of JSON in the protocol in excruciating detail?
dwedge
This is the same author that threw everyone into a panic about atop and turned out to not really have found anything.
ezekiel68
Agreed and -- in particular -- I don't recall seeing any kind of "everybody get back into the pool" follow-up after the developers of atop quickly addressed the issue with an update. At least not any kind of follow-up that got the same kind of press as the initial alarm.
giancarlostoro
I'm not a container guru by any means (at least not yet?) but would Docker not address these concerns?
fpoling
The issue is that the client needs to access the private key, tell the web server where various temporary files are during the certificate generation (unless the client uses DNS mode), and tell the web server about a new certificate to reload.
To implement that, many clients run as root. Even if that root is in a Docker container, this is needlessly elevated privilege, especially given the (again, needless) complexity of many clients.
The sad part is that it is trivial to run most of the clients with an account with no privileges that can access very few files and use a unix socket to tell the web server to reload the certificate. But this is not done.
And then, ideally, at this point web servers should, if not implement, at least facilitate ACME protocol implementations: for example, redirecting requests from ACME servers to another port with a one-liner in the config. But this is not the case.
rsync
Yes, it does.
I run acme in a non privileged jail whose file system I can access from outside the jail.
So acme sees and accesses nothing and I can pluck results out with Unix primitives from the outside.
Yes, I use dns mode. Yes, my dns server is also a (different) jail.
TheNewsIsHere
My reading of the article suggested to me that the author took exception to the code that touched the keying material. Docker is immaterial to that problem. I won't presume to speak for Rachel By The Bay (mother didn't raise a fool, after all), but I expect Docker would be met with a similar regard.
Which I do understand. Although I use Docker, I mainly use it personally for things I don’t want to spend much time on. I don’t really like it over other alternatives, but it makes standing up a lab service stupidly easy.
lucideer
I use docker for the same reasons as the author's reservations - I combine a docker exec with some of my own loose automation around moving & chmod-ing files & directories to obviate the need for the acme client to have unfettered root access to my system.
Whether it's a local binary or a dockerised one, that access still needs to be marshalled either way & it can get complex facilitating that with a docker container. I haven't found it too bad but I'd really rather not need docker for on-demand automations.
I give plenty* of services root access to my system, most of which I haven't written myself & I certainly haven't audited their code line-by-line, but I agree with the author that you do get a sense from experience of the overall hygiene of a project & an ACME client has yet to give me good vibes.
* within reason
diggan
> I don't understand the tone of aggression against ACME and their plethora of clients.
The older posts on the same website provided a bit more context for me to understand today's post better:
- "Why I still have an old-school cert on my https site" - January 3, 2023 - https://rachelbythebay.com/w/2023/01/03/ssl/
- "Another look at the steps for issuing a cert" - January 4, 2023 - https://rachelbythebay.com/w/2023/01/04/cert/
immibis
Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.
Sadly, security is a cat and mouse game, which means it's always evolving and you're forced to keep up - and it's inherent by the nature of the field, so we can't really blame anyone (unlike, say, being forced to integrate with the latest Google services to be allowed on the Play Store). At least you get to write your own ACME client if you want to. You don't have to use certbot, and there's no TPM-like behaviour locking you out of your own stuff.
g-b-r
> Some people don't want to be forced to run a bunch of stuff they don't understand on the server
It's not just about not understanding, it's that more complex stuff is inherently more prone to security vulnerabilities, however well you think you reviewed its code.
Avamander
> It's that more complex stuff is inherently more prone to security vulnerabilities
That's overly simplifying it and ignores the part where the simple stuff is not secure to begin with.
In the current context you could take a HTTP client with a formally verified TLS stack, would you really say it's inherently more vulnerable than a barebones HTTP client talking to a server over an unencrypted connection? I'd say there's a lot more exposed in that barebones client.
tptacek
Non-ACME certs are basically over. The writing has been on the wall for a long time. I understand people being squeamish about it; we fear change. But I think it's a hopeful thing: the Web PKI is evolving. This is what that looks like: you can't evolve and retain everyone's prior workflows, and that has been a pathology across basically all Internet security standards work for decades.
ipdashc
ACME is cool (compared to what came before it), but I'm kind of sad that EV certs never seemed to pan out at all. I feel like they're a neat concept, and had the potential to mitigate a lot of scams or phishing websites in an ideal world. (That said, discriminating between "big companies" and "everyone else who can't afford it" would definitely have some obvious downsides.) Does anyone know why they never took off?
throw0101b
> Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.
There are a number of shell-based ACME clients whose prerequisites are: OpenSSL and cURL. You're probably already relying on OpenSSL and cURL for a bunch of things already.
If you can read shell code you can step through the logic and understand what they're doing. Some of them (e.g., acme.sh) often run as a service user (e.g., default install from FreeBSD ports) so the code runs unprivileged: just add a sudo (or doas) config to allow it to restart Apache/nginx.
spockz
Given that keys probably need to be shared between multiple gateway/ingresses, how common is it to just use some HSM or another mechanism of exchanging the keys with all the instances? The acme client doesn’t have to run on the servers itself.
tialaramex
> The acme client doesn’t have to run on the servers itself.
This is really important to understand if you care about either: Actually engineering security at some scale or knowing what's actually going on in order to model it properly in your head.
If you just want to make a web site so you can put up a blog about your new kitten, any of the tools is fine, you don't care, click click click, done.
For somebody like Rachel or many HN readers, knowing enough of the technology to understand that the ACME client needn't run on your web servers is crucial. It also means you know that when some particular client you're evaluating needs to run on the web server that it's a limitation of that client not of the protocol - birds can't all fly, but flying is totally one of the options for birds, we should try an eagle not an emu if we want flying.
immibis
You could if your domain was that valuable. Most aren't.
hannob
> Some people don't want to be forced to run a bunch of stuff they don't understand on the server, and I agree with them.
Honest question:
* Do you understand OS syscalls in detail?
* Do you understand how your BIOS initializes your hardware?
* Do you understand how modern filesystems work?
* Do you understand the finer details of HTTP or TCP?
Because... I don't. But I know enough about them that I'm quite convinced each of them is a lot more difficult to understand than ACME. And all of them and a lot more stuff are required if you want to run a web server.
sussmannbaka
This point is so tired. I don’t understand how a thought forms in my neurons, eventually matures into a decision and how the wires in my head translate this into electrical pulses to my finger muscles to type this post so I guess I can’t have opinions about complexity.
snowwrestler
I get where you’re going with this, but in this particular case it might not be relevant because there’s a decent chance that Rachel By The Bay does actually understand all those things.
frogsRnice
Sure - but people are still free to decide where they draw the line.
Each extra bit of software is an additional attack surface after all
fc417fc802
An OS is (at least generally) a prerequisite. If minimalism is your goal then you'd want to eliminate tangentially related things that aren't part of the underlying requirements.
If you're a fan of left-pad I won't judge but don't expect me to partake without bitter complaints.
kjs3
I hear some variation of this line of 'reasoning' about once a week, and it's always followed by some variation of "...and that's why we shouldn't have to do all this security stuff you want us to do".
Arnavion
If you want to actually implement an ACME client from first principles, reading the RFC (plus related RFCs for JOSE etc) is probably easier than you think. I did exactly that when I made a client for myself.
I also wrote up a digested description of the issuance flow here: https://www.arnavion.dev/blog/2019-06-01-how-does-acme-v2-wo... It's not a replacement for reading the RFCs, but it presents the information in the sequence that you would follow for issuance, so think of it like an index to the RFC sections.
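For a taste of the plumbing involved, here is a hedged sketch (Python with the cryptography package; helper names are mine) of the JWS envelope that every ACME POST wraps its payload in, per RFC 8555 section 6.2:

import base64, json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.asymmetric.utils import decode_dss_signature

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()  # unpadded, per JOSE

account_key = ec.generate_private_key(ec.SECP256R1())
pub = account_key.public_key().public_numbers()
jwk = {"kty": "EC", "crv": "P-256",
       "x": b64url(pub.x.to_bytes(32, "big")),
       "y": b64url(pub.y.to_bytes(32, "big"))}

def signed_body(url: str, nonce: str, payload: dict) -> str:
    protected = b64url(json.dumps(
        {"alg": "ES256", "jwk": jwk, "nonce": nonce, "url": url}).encode())
    body = b64url(json.dumps(payload).encode())
    der = account_key.sign(f"{protected}.{body}".encode(), ec.ECDSA(hashes.SHA256()))
    r, s = decode_dss_signature(der)  # JWS wants raw r||s, not the DER most APIs emit
    sig = r.to_bytes(32, "big") + s.to_bytes(32, "big")
    return json.dumps({"protected": protected, "payload": body, "signature": b64url(sig)})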
anishathalye
Implementing an ACME client is part of the final lab assignment for MIT’s security class: https://css.csail.mit.edu/6.858/2023/labs/lab5.html
Bluecobra
Nice, thanks! I've been wanting to learn it, as dealing with cert expirations every year is a pain. My guess is that we will have 24-hour certs at some point.
justusthane
I don’t know about 24 hours, but it will be 47 days in 2029.
jazzyjackson
Looks like a good class; is it only available to enrolled students? The videos seem to be behind a log-in wall.
anishathalye
Looks like the 2023 lectures weren't uploaded to YouTube, but the lectures from earlier iterations of the class, including 2022, are available publicly. For example, see the YouTube links on https://css.csail.mit.edu/6.858/2022/
(6.858 is the old name of the class, it was renamed to 6.5660 recently.)
distantsounds
[flagged]
tomhow
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
liampulles
I appreciate the author calling this stuff out. The increasing complexity of the protocols that the web is built on is not a problem for developers who simply need to find a tool or client to use the protocol, but it is a kind of regulatory capture that ensures only established players will be the ones able to meet the spec required to run the internet.
I know ACME alone is not insurmountably complex, but it is another brick in the wall.
charcircuit
These protocols all have open source implementations. And as AI gets stronger this barrier will get smaller and smaller.
chrisandchris
So instead of designing simpler protocols (like HTTP/1.1), we do not care and let AI figure it out? Sounds great to me... /s
donnachangstein
OpenBSD has a dead-simple lightweight ACME client (written in C) as part of the base OS. No need to roll your own. I understand it was created because existing alternatives ARE bloatware and against their Unixy philosophy.
Perhaps the author wasn't looking hard enough. It could probably be ported with little effort.
tialaramex
When I last checked this client is a classic example of OpenBSD philosophy not understanding why security is the way it is.
This client really wants the easy case where the client lives on the machine which owns the name and is running the web server, and then it uses OpenBSD-specific partitioning so that elements of the client can't easily taint one another if they're defective
But the ACME protocol would allow actual air gapping: it doesn't care whether the machine which needs a certificate, the machine running an ACME client, and the machine controlling the name are three separate machines. That means that if we do not use this OpenBSD all-in-one client, we can have a web server which literally doesn't do ACME at all, an ACME client machine which has no permission to serve web pages or anything like that, and name servers which also know nothing about ACME, and yet the whole system works.
That's more effort than "I just install OpenBSD" but it's how this was designed to deliver security rather than putting all our trust in OpenBSD to be bug-free.
donnachangstein
I said it was dead-simple and you delivered a treatise describing the most complex use case possible. Then maybe it's not for you.
Most software in the OpenBSD base system lacks features on purpose. Their dev team frequently rejects patches and feature requests without compelling reasons to exist. Less features means less places for things to go wrong means less chance of security bugs.
It exists so their simple webserver (also in the base system) has ACME support working out of the box. No third party software to install, no bullshit to configure, everything just works as part of a super compact OS. Which to this day still fits on a single CD-ROM.
Most of all no stupid Rust compiler needed so it works on i386 (Rust cannot self-host on i386 because it's so bloated it runs out of memory, which is why Rust tools are not included in i386).
If your needs exceed this or you adore complexity then feel free to look elsewhere.
zh3
Or uacme [0] - little bit of C that's been running perfectly since endless battery failures with the LE python client made us look for something that would last longer.
seanw444
Yeah, was looking for someone to comment this. I use it. Works great.
rollcat
Came here to mention this.
Man page: https://man.openbsd.org/man1/acme-client.1
Source: https://github.com/openbsd/src/tree/master/usr.sbin/acme-cli...
jeroenhd
There's something to be said for implementing stuff like this manually for the experience of having done it yourself, but the author's tone makes it sound like she hates the protocol and all the extra work she needs to do to make the Let's Encrypt setup work.
Kind of makes me wonder what kind of stack her website is running on that something like a lightweight ACME library (https://github.com/jmccl/acme-lw comes to mind, but there's a C++ library for ESP32s that should be even more lightweight) loading in the certificates isn't doing the job.
mschuster91
> but the author's tone makes it sound like she hates the protocol and all the extra work she needs to do to make the Let's Encrypt setup work.
The problem is, SSL is a fucking hot, ossified mess. Many of the noted core issues, especially the weirdnesses around encoding and bitfields, are due to historical baggage of ASN.1/X.509. It's not fun to deal with it, at all... the math alone is bad enough, but the old abstractions to store all the various things for the math are simply constrained by the technological capabilities of the late '80s.
There would have been a chance to at least partially reduce the mess with the introduction of LetsEncrypt - basically, have the protocol transmit all of the required math values in a decent form and get an x.509 cert back - and HTTP/2, but that wasn't done because it would have required redeveloping a bunch of stuff from scratch whereas one can build an ACME CA with, essentially, a few lines of shell script, OpenSSL and six crates of high proof alcohol to drink away one's frustrations of dealing with OpenSSL, and integrate this with all software and libraries that exist there.
jeroenhd
There's no easy way to "just" transmit data in a foolproof manner. You practically need to support CSRs as a CA anyway, so you might as well use the existing ASN.1+X509 system to transmit data.
ASN.1 and X509 aren't all that bad. It's a comprehensively documented binary format that's efficient and used everywhere, even if it's hidden away in binary protocols you don't look at every day.
Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.
The unnecessarily complex parts of the protocol when writing a from-the-ground-up client are complex because ACME didn't reinvent the wheel, and reused existing standard protocols instead. Unfortunately, that means having to deal with JWS, but on the other hand, it means most people don't need to write their own ACME-JWS-replacement-protocol parsers. All the other parts are complex because the problem ACME is solving is actually quite complex.
The author wrote [another post](https://rachelbythebay.com/w/2023/01/03/ssl/) about the time they fell for the lies of a CA that promised an "easier" solution. That solution is pretty much ACME, but with more manual steps (like registering an account, entering domain names).
I personally think that for this (and for many other protocols, to be honest) XML would've been a better fit as its parsers are more resilient against weird data, but these days talking about XML will make people look at you like you're proposing COBOL. Hell, even exchanging raw, binary ASN.1 messages would probably have gone over pretty well, as you need ASN.1 to generate the CSR and request the certificate anyway. But, people chose "modern" JSON instead, so now we're base64 encoding values that JSON parsers will inevitably fuck up instead.
schoen
> Unlike what most people seem to think, ACME isn't something invented just for Let's Encrypt. Let's Encrypt was certainly the first high-profile CA to implement the protocol, but various CAs (free and paid) have their own ACME servers and have had them for ages now. It's a generic protocol for certificate authorities to securely do domain validation and certificate provisioning that Let's Encrypt implemented first.
This depends on whether you're speaking as a matter of history. ACME was originally invented and implemented by the Let's Encrypt team, but in the hope that it could become an open standard that would be used by other CAs. That hope was eventually borne out.
GoblinSlayer
The described protocol looks like a rewording of X.509 with JSON syntax, but you still have X.509, so as a result you have two X.509s. The replay nonce is used straightforwardly as a serial number, termsOfServiceAgreed can be an extension, and the CSR is automatically signed in the process of generation.
schoen
Yes, we actually considered the "have the protocol transmit all of the required math values in a decent form and get an x.509 cert back" version, but some people who were interested in using Let's Encrypt were apparently very keen on being able to use an existing external CSR. So that became mandatory in order not to have two totally separate code paths for X.509-based requests and non-X.509-based requests.
An argument for this is that it makes it theoretically possible for devices that have no knowledge of anything about PKI since the year 2000, and/or no additional programmability, to use Let's Encrypt certs (obtained on their behalf by an external client application). I have, in fact, subsequently gotten something like that to work as a consultant.
mschuster91
Yikes. Guessed as much. Thanks for your explanation.
As for oooold devices - doesn't LetsEncrypt demand key lengths and hash algorithms nowadays that simply weren't implemented back then?
arkadiyt
> Make an RSA key of 4096 bits. Call it your personal key.
This is bad advice - making a 4096 bit key slows down visitors of your website and only gives you 2048 bits of security (if someone can break a 2048 bit RSA key they'll break the LetsEncrypt intermediate cert and can MITM your site). You should use a 2048 bit leaf certificate here
Arnavion
My webhost only supports RSA keys, so I use an RSA-4096 key just to annoy them into supporting EC keys.
asimops
The key in question is the acme account key though, correct?
nothrabannosir
Amateur question: does a 4096 not give you more security against passive capture and future decrypting? Or is the intermediate also a factor in such an async attack?
arkadiyt
> does a 4096 not give you more security against passive capture and future decrypting?
If the server was using a key exchange that did not support forward secrecy then yes. But:
% echo | openssl s_client -connect rachelbythebay.com:443 2>/dev/null | grep Cipher
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Cipher : ECDHE-RSA-AES256-GCM-SHA384
^ they're using ECDHE (elliptic curve Diffie-Hellman), which is providing forward secrecy.
nothrabannosir
I thought FS only protected other sessions from leak of your current session key. How does it protect against passive recording of the session and later attacking of the recorded session in the future?
upofadown
The certificate is for authentication of the server. It has nothing to do with the encryption of the data.
Basically forward secrecy is where both the sender and receiver throw away the key after the data is decrypted. That way the key is not available for an attacker to get access to later. If the attacker can find some way other than access to the key to decrypt the data then forward secrecy has no benefit.
ndsipa_pomu
I was amazed by them having so much distrust of the various clients. Certbot is typically in the repositories for things like Debian/Ubuntu.
My favourite client is probably https://github.com/acmesh-official/acme.sh
If you use a DNS service provider that supports it, you can use the DNS-01 challenge to get a certificate - that means that you can have the acme.sh running on a completely different server which should help if you're twitchy about running a complex script on it. It's also got the advantage of allowing you to get certificates for internal/non-routable addresses.
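For the curious, the TXT record value is derived from the challenge token and your account key's thumbprint (a Python sketch per RFC 8555 section 8.4; the token and JWK coordinates here are hypothetical placeholders):

import base64, hashlib, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

token = "evaGxfADs6pSRb2LAv9IZf17Dt3juxGJ-PCt92wr-oA"  # challenge token from the CA
account_jwk = {"crv": "P-256", "kty": "EC", "x": "...", "y": "..."}

# RFC 7638 thumbprint: SHA-256 over the JWK's required members, sorted, compact
thumbprint = b64url(hashlib.sha256(json.dumps(
    account_jwk, separators=(",", ":"), sort_keys=True).encode()).digest())

txt_value = b64url(hashlib.sha256(f"{token}.{thumbprint}".encode()).digest())
# Publish txt_value as a TXT record at _acme-challenge.<your-domain>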
JoshTriplett
Certbot is definitely one of the strongest arguments against ACME and Let's Encrypt.
Personally, I find that tls-alpn-01 is even nicer than dns-01. You can run a web server (or reverse proxy) that listens to port 443, and nothing else, and have it automatically obtain and renew TLS certificates, with the challenges being sent via TLS ALPN over the same port you're already listening on. Several web servers and reverse proxies have support for it built in, so you just configure your domain name and the email address you want to use for your Let's Encrypt account, and you get working TLS.
Shadowmist
Does this only work if LE can reach port 443 on one of your servers/proxies?
JoshTriplett
Yes. If you want to create certificates for a private server you have to use a different mechanism, such as dns-01.
skywhopper
Certbot goes out of its way to be inscrutable about what it’s doing. It munges your web server config (temporarily) to handle http challenges, and for true sysadmins who are used to having to know all the details of what’s going on, that sort of script is a nightmare waiting to happen.
I assume certbot is the client she’s alluding to that misinterprets one of the factors in the protocol as hex vs decimal and somehow things still work, which is incredibly worrisome.
castillar76
Having my ACME client munge my webserver configs to obtain a cert was one of the supreme annoyances about using them — it felt severely constraining on how I structured my configs, and even though it’s a blip, I hated the double restart required to fetch a cert (restart with new config, restart with new cert).
Then I discovered the web-root approach people mention here and it made a huge difference. Now I have the HTTP snippet in my server set to serve up ACME challenges from a static directory and push everything else to HTTPS, and the ACME client just needs write permission to that directory. I can dynamically include that snippet in all of the sites my server handles and be done.
If I really felt like it, I could even write a wrapper function so the ACME client doesn’t even need restart permissions on the web-server (for me, probably too much to bother with, but for someone like Rachel perhaps worthwhile).
ndsipa_pomu
A wrapper function may be overkill when you can do something like this:
letsencrypt renew --non-interactive --post-hook "systemctl reload nginx"
jeroenhd
With the HTTP implementation that's true, but the DNS implementation of certbot's certificate request plugins don't touch your server config. As an added bonus, you can use that to also obtain wildcard certificates for your subdomains so different applications can share the same certificate (so you only need one single ACME client).
claudex
You can configure certbot to write in a directory directly and it won't touch your web server config.
ndsipa_pomu
> It munges your web server config (temporarily) to handle http challenges
I run it in "webroot" mode on NgINX servers so it's just a matter of including the relevant config file in your HTTP sections (likely before redirecting to HTTPS) so that "/.well-known/acme-challenge/" works correctly. Then when you do run certbot, it can put the challenge file into the webroot and NgINX will automatically serve it. This allows certbot to do its thing without needing to do anything with NgINX.
christina97
I used to like them, then they somehow sold out to ZeroSSL and switched the default away from LE after an update.
Pinned to an old version and looking for a replacement right now.
Bender
That annoyed me as well given the wording on the ZeroSSL site suggested one has to create an account which is not true. I had hit an error using DNS-01 at the time. They have an entirely different page for ACME clients but it is not or was not linked from anywhere on the main page.
If anyone else ran into that it's just a matter of adding
--server letsencrypt
castillar76
You can also permanently change your default to LE — acme.sh actually has instructions for doing so in their wiki.
I rather liked using ZeroSSL for a long time (perhaps just out of knee-jerk resistance to the “Just drink the Koolaid^W^W^Wuse Let’s Encrypt! C’mon man, everyone’s doing it!” nature of LE usage), but of late ZeroSSL has gotten so unreliable that I’ve rolled my eyes and started swapping things back to LE.
ndsipa_pomu
I only started using it after the default was ZeroSSL, but it's easy to specify LetsEncrypt instead
notherhack
I feel like acme.sh is the kind of client she's ranting about. 8000 lines of shell code in acme.sh and more in dozens of user-contributed hook scripts, and over 1000 open issues on github?
Personally I like https://github.com/dehydrated-io/dehydrated. Same concept as acme.sh but only 2500 lines of shell and 54 open issues. You do have to roll your own hook script though.
Curiously, first commits for both acme.sh and dehydrated were in December 2015. Maybe they both took a security class at uni that fall.
corford
Agree with the acme.sh recommendation. It's my favourite by far (especially, as you point out, when leveraging with DNS-01 challenges so you can sidestep most of the security risks the article author worries about)
12_throw_away
Dunno about the protocol, but man, working with certbot and getting it do what I wanted was ... well, a lot more work than I would have guessed. The hooks system was so much trouble that I ended up writing my own.
But yeah, can definitely recommend DNS-01 over HTTP-01, since it doesn't involve implicitly messing with your server settings, and makes it much easier to have a single locked server with all the ACME secrets, and then distribute the certs to the open-to-the-internet web servers.
egorfine
certbot is complexity creep at its finest. I'd love to hear Rachel's take on it.
+1 for acme.sh, it's beautiful.
xorcist
acme.sh is 8000 lines, still an order of magnitude better than certbot for something security-critical, but not great.
tiny-acme.py is 200 lines, easy to audit and to incorporate parts of into your own infrastructure. It works well for the tiny job it does, but it doesn't support anything more modern.
sam_lowry_
I am running an HTTP-only blog and it's getting harder every year not to switch to HTTPS.
For instance, WhatsApp cannot open HTTP links anymore.
pornel
You're making a mistake assuming that the push for HTTPS-only Web is about protecting the content of your site.
The problem is that the mere existence of HTTP is a vulnerability. Users following any insecure link to anywhere allow MITM attackers to inject arbitrary content and redirect to any URL.
These can be targeted attacks against vulnerabilities in the browser. These can be turning browsers into a botnet like the Great Cannon. These can be redirects, popunders, or other sneaky tab manipulation for opening phishing pages for other domains (unrelated to yours) that do have important content.
Your server probably won't even be contacted during such an attack. Insecure URLs to your site are the vulnerability. Don't spread URLs that disable network-level security.
projektfu
You can proxy it, with caching at the proxy; for a small server that might also be the best way to avoid heavy traffic.
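A sketch, with nginx terminating TLS and caching in front of the HTTP-only origin (names and paths are illustrative):

    # in the http {} block
    proxy_cache_path /var/cache/nginx keys_zone=blog:10m;

    server {
        listen 443 ssl;
        server_name blog.example.org;
        ssl_certificate     /etc/ssl/blog.pem;
        ssl_certificate_key /etc/ssl/blog.key;

        location / {
            proxy_pass http://127.0.0.1:8080;  # the HTTP-only origin
            proxy_cache blog;
        }
    }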
g-b-r
For god's sake, however complex ACME might be, it's better than not supporting TLS
bigstrat2003
There's no good reason to serve a blog over TLS. You're not handling sensitive data, so unencrypted is just fine.
foobiekr
The reason is to prevent your site from becoming a watering hole where malicious actors use it to inject malware into the browsers of your users.
TLS isn't for you, it's for your readers.
throw0101b
> You're not handling sensitive data, so unencrypted is just fine.
Except when an adversary MITMs your site and injects an attack against one of your readers:
* https://www.infoworld.com/article/2188091/uk-spy-agency-uses...
Further: tapping glass is a thing, and if the only traffic that is encrypted is the "important" or "sensitive" stuff, then it sticks out in the flow, and so attackers know to focus just on that. If all traffic is encrypted, then it's much harder for attackers to figure out what is important and what is not.
So by encrypting your "unimportant" data you add more noise that has to be sifted through.
cAtte_
relevant blog post and HN discussion: https://news.ycombinator.com/item?id=22146291
g-b-r
Do you consider only religion, health and political data to be sensitive??
What someone chooses to read on a blog is no one else's business, and can be very sensitive.
agarren
Why? I can understand the argument that you don’t want an ISP or a middlebox injecting ads or scripts (valid I think even if I’ve never encountered it to my knowledge), but otherwise you’re publishing content intended for the world. There’s presumably nothing especially sensitive that you need to hide on the wire.
mort96
> (valid I think even if I’ve never encountered it to my knowledge)
Visitors to your website may encounter it. Do you not care that your visitors may be seeing ads?
You're also leaking what your visitors are reading to their ISPs and governments. Maybe you don't consider anything you write about to be remotely sensitive, but how critically do you examine that with every new piece you write?
If you wrote something which could be sensitive to readers in some parts of the world (something about circumventing censorship, something critical of some religion, something involving abortion or other forms of healthcare that some governments are cracking down on), do you then add SSL at that point? Or do you refrain from publishing it?
Personally, I like the freedom to just not think about these things. I can write about whatever I want, however controversial it might be in some regions, no matter how dangerous it is for some people to be found reading about it, and be confident that my readers can expect at least a baseline of safety because my blog, like pretty much every other in the world today, uses cryptography to ensure that governments and ISPs can't use deep packet inspection to scan the words they read or use MITM to inject things into my blog. Does it really matter? Well probably not for my site specifically, but across all the blogs and websites in the world and all the visitors in the world, that's a whole lot of "probably not"s which all combine together into a huge "almost definitely".
sam_lowry_
Why? The days of MITM boxes injecting content into HTTP traffic are basically over, and frankly they never were a thing in my part of the world.
I see no other reason to serve content over HTTPS.
JoshTriplett
> Why? The days of MITM boxes injecting content into HTTP traffic are basically over
The reason you don't see many MITM boxes injecting content into HTTP anymore is because of widespread HTTPS adoption and browsers taking steps to distrust HTTP, making MITM injection a near-useless tactic.
(This rhymes with the observation that some people now perceive Y2K as overhyped fear-mongering that amounted to nothing, without understanding that immense work happened behind the scenes to avert problems.)
g-b-r
You see no reason for privacy, ok
red_admiral
It bugs me that we're still on RSA/4096. Ed25519 has fewer parameters to mess around with (no custom exponent or modulus), keys and signatures are shorter and have a well-defined format (just binary data), and there's no network byte order confusion.
Meanwhile, ECDSA is so complex to implement correctly that most people will get it wrong and end up with a security hole that makes the NSA happy.
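For illustration, key generation with openssl; with Ed25519 there is nothing to choose:

    # RSA: key size (and implicitly the exponent) are knobs you can get wrong
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:4096 -out rsa.pem

    # Ed25519: no parameters at all
    openssl genpkey -algorithm ed25519 -out ed25519.pem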
ddtaylor
> Random side note: while looking at existing ACME clients, I found that at least one of them screws up their encoding of the publicExponent and ends up interpreting it as hex instead of decimal. That is, instead of 65537, aka 0x10001, it reads it as 0x65537, aka 415031!
> Somehow, this anomaly exists and apparently doesn't break anything? I haven't actually run the client in question, but I imagine people are using it since it's in apt.
> So, yes, instead of saying that "e" equals "65537", you're saying that "e" equals "AQAB". Aren't you glad you did those extra steps?
Oh JSON.
For those unfamiliar with the reason here, it's that JSON parsers cannot be relied upon to treat numbers properly. Is 4723476276172647362476274672164762476438 a valid JSON number? Yes, of course it is. What will a JSON parser do with it? Probably silently truncate it to a 64-bit or 63-bit integer or a float, or, if you're very lucky, emit an error (a good JSON decoder written in a sane language like Common Lisp would of course just return the number, but few of us are so lucky).
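To see the inconsistency concretely (a sketch; exact output depends on the runtime version):

    $ node -e 'console.log(JSON.parse("4723476276172647362476274672164762476438"))'
    4.723476276172647e+39
    $ python3 -c 'import json; print(json.loads("4723476276172647362476274672164762476438"))'
    4723476276172647362476274672164762476438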
So the only way to reliably get large integers into and out of JSON is to encode them as something else. Base64-encoded big-endian bytes is not a terrible choice. Silently doing the wrong thing is the root of many security errors, so it is not wrong to treat every number in the protocol this way. Of course, one then loses the readability of JSON.
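Hence the "AQAB" from the article: 65537 is 0x010001, and those three big-endian bytes base64-encode to exactly that (bash):

    $ printf '\x01\x00\x01' | base64
    AQAB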
JSON is better than XML, but it really isn’t great. Canonical S-expressions would have been far preferable, but for whatever reason the world didn’t go that way.