Let's Not Encrypt (2019)
213 comments
October 14, 2025 · jrockway
pavel_lishin
We had this happen to us at work, once.
We were working on some feature for a client's website, and suddenly things started breaking. We eventually tracked it down to some shoddy HTML + Javascript being on our page that we certainly didn't put there, and further investigation revealed that our ISP - whom we were paying for a business connection - was just slapping a fucking banner ad on most of the pages that were being served.
This was around ... 2008? I wonder if they were injecting it into AJAX responses, too.
My boss called them up and chewed them several new assholes, and the banner was gone by afternoon.
brightball
I don't know the official name for the phenomenon where you widely experience a huge problem; the market reacts and fixes the problem almost completely; people who never experienced the problem in the first place, because the world before them solved it, begin to complain about the solution; people defending the solution are mocked by people who have no context; the solution is rolled back, and all the people who tore it down are happy with their win for a brief moment in time; then the original problem comes back in force, but all of the walls put up to tear down the original solution make it 1000x harder to fix.
I feel like there needs to be a name for this. For now, "Those who do not learn from history are doomed to repeat it." is the most apt I think.
Happens constantly when you're essentially born on 3rd base. Maybe that's the proper name. Born on 3rd Base Syndrome.
schoen
I think you want the Preparedness Paradox.
sd9
Chesterton's Fence?
slowmovintarget
Glass-Steagall comes to mind.
[1] https://en.wikipedia.org/wiki/Glass%E2%80%93Steagall_legisla...
supportengineer
Is this about vaccines?
benjiro
> suddenly things started breaking. We eventually tracked it down
Amateur level ... Around 2006, we had some clients complaining about information in our CMS being duplicated.
No matter what we did, there was no duplication on our end. So we started to trace the actions from the client (including browser, IP, etc.). And lo and behold, we got one action coming from the client, and another from a different IP source.
After tracing the IP, it belonged to an antivirus company. We installed the software on a test system, and ... yep, the assh** duplicated every action, including browser settings, session, you name it.
A total and complete mimic beyond the IP. So every action the user took, plus the information on the page, was sent to their servers for "analyzing".
Little issue ... this was not the public part of our CMS but the HTTPS-protected admin pages!
Sure, our fault for not validating the session with extra IP checks, but we did not expect the (admin-only) session to leak out of an HTTPS connection.
So we tried to see if they reacted to login attempts at several bank pages. Oh yes, they sent the freaking passwords too. We tried an unused bank account and, oh look, it was duplicating bank actions (again, the bank at fault for not properly checking the session / IP).
It only failed on a bank transfer, because the authorization token was different on their side vs. our request.
You can imagine that we had a rather, how to say, less than polite conversation with the software team behind that antivirus. They "fixed it" in a new release. Did they remove the whole tracking? Nope, they just removed the session-stealing code for when the connection was secure.
Oh, and the answer to why they did it: "it's a bug" (yeah, right, you mimic total user behavior, and it's a "bug"). Translation: legal got up their behinds for that crap and they wanted to avoid legal issues over what they did.
Remember folks, if it's free, you're the product. And when it's paid, you are often STILL the product. And yes, that was a paid antivirus with "online protection". And people question why I never run any antivirus software beyond an offline scan from time to time, and have Windows "online" protections disabled.
Companies just cannot stop themselves from being greedy. Same reason why I NEVER use Windows 11... You'd expect, if you paid for Windows, Office or whatever, not to be the product, but hey ...
sehugg
Haha, yeah this kind of stuff made HTTP long polling requests over mobile pretty fun. IIRC, we ran HTTP over IMAP and POP3 ports for cases where port 80 was unreliable.
jabroni_salad
My ISP (Mediacom) appears to have a deal with certain websites to display service messages. The only two I've encountered it on are Amazon and Facebook, but they are somehow able to insert a maintenance banner at the top of those two when downtime is anticipated or when I am near the end of my bandwidth quota. I haven't gotten any ads this way, but they have the technology.
akerl_
The only ways I can think of where this would be possible is if:
1. You're somehow connecting to Facebook and Amazon over HTTP, not HTTPS
2. Your browser has an extension from your ISP installed that's interfering with content
3. You've trusted a root CA from your ISP in your browser's trust store
Philip-J-Fry
Pretty sure this is done on your router. They terminate TLS on your router, inject their malware and then re-encrypt.
navigate8310
This was common back in the early 2010s with Indian ISPs as well, particularly the state-controlled BSNL.
dunham
Our app reports all of the runtime exceptions to the server. We had one years ago (maybe before 2008) that was caused by somebody's "toolbar" replacing a method like Element.appendChild with one that sometimes crashed.
This inspired me to add a list of all script tags to error reports.
nurettin
The modern version of that is Brave or uBlock or screen-reader extensions or spyware inserting JS or data attributes, which leads to user complaints. We don't need ISPs hacking lines; people do it to themselves when they sign up for shady SMS services on download sites.
nasretdinov
I remember when Wi-Fi was first introduced in the Moscow metro system (underground trains) in 2014; this is exactly what happened: most sites were HTTP, which allowed ads to be injected, essentially as a form of payment for the Wi-Fi service. Almost immediately, most Russian websites switched to HTTPS, because the ads often broke CSS layouts and caused other issues in general.
stop_nazi
And who now uses the metro Wi-Fi "from the reindeer herder"? No one.
bilekas
> I think that if we didn't do TLS, every ISP would be injecting ads into websites these days.
That's the least of the problems. They (anyone with basic access to your network, actually) could easily overwrite every cookie or session on your machine to use their referral links, i.e. Honey & PayPal's fraud [0], without you having any idea. Maybe you don't care, but it's stealing other people's potential earnings.
[0] https://www.theverge.com/24343913/paypal-honey-megalag-coupo...
bgwalter
ISPs should be regulated like common carriers. Modifying the data in transit should be illegal. ISP supercookies should be illegal.
Avamander
We can do that and also use HTTPS to be more certain of it.
throw_a_grenade
> I'm not as convinced as the author is that nation states can easily tamper with certificates these days. I am not sure how much CT checking we do before each page load [...]
They can MITM the connection between the host and LE (or any other CA resolver, ACME or non-ACME, doesn't matter). This was demonstrated by the attack against jabber.ru, at the time hosted in OVH. I recommend reading the writeup by the admin (second link from the top in TFA).
This worked, because no-one checked CT.
1718627440
They can also just tell some CA to sign a certificate.
throw_a_grenade
I don't believe this happens. Should something like this happen, the CA would be immediately distrusted by browsers, not as punishment but to deter state actors. It gives CAs an argument: "we won't do it, because it means the end of our business". And in many jurisdictions, compelling a company to do something that destroys it is illegal under the laws that prescribe what the state can and cannot order a company's employees to do.
fragmede
LE checks from multiple places, so you'd have to MITM all of them, which makes it seem rather challenging to actually pull off.
Ajedi32
AFAIK that's not a required feature of the DV process, and even if it were it wouldn't help if the MITM was happening between the website and the wider internet.
That said, I don't think there's a way to stop a nation state from seizing control of a domain they control the TLD name servers for without something like Namecoin where the whole DNS system is redesigned to be self-sovereign.
throw_a_grenade
They just MITMed the link between the victim and its immediate next hop, most likely by coercing the ISP (OVH). (See the writeup, where the admin discusses TTL values.) No amount of multiview is sufficient if you control the uplink. Both DNS resolution and IP routing worked fine, and IP packets were intercepted in an attacker-controlled environment (an on-path MITM box).
What would somewhat help is a CAA record with a pinned ACME account key. The attackers would then have to alter DNS records, which would be harder, as you describe. (Or pull the key from the VM disk image, which would cross another line.)
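The account-pinning idea described here is standardized in RFC 8657. A sketch of such a record follows; the account URI is an illustrative placeholder, not a real account:

```
; Restrict issuance to Let's Encrypt, pinned to a single ACME account (RFC 8657).
; An attacker who MITMs the validation path with a different ACME account
; should then be refused issuance unless they can also alter this record.
example.com. IN CAA 0 issue "letsencrypt.org; accounturi=https://acme-v02.api.letsencrypt.org/acme/acct/1234567890"
```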
octoberfranklin
Your comment is disingenuous -- the article isn't arguing against TLS. It is arguing against WebPKI.
You can stop ISP ad injection with solutions much less complex than WebPKI.
Simply using TOFU-certificates (Trust On First Use) would achieve this. It also gives you the "people who controlled this website the first time I visited it still control it" guarantee you mention in your last paragraph.
TOFU isn't ideal, but it's an easy counterexample to your claims.
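For concreteness, the TOFU behavior being discussed (remember the first certificate you see, warn loudly if it ever changes) fits in a few lines. This is a minimal sketch with hypothetical names, not a real browser mechanism:

```python
import hashlib
import json
from pathlib import Path

class TofuStore:
    """Pin the SHA-256 fingerprint of each host's certificate on first sight."""

    def __init__(self, path):
        self.path = Path(path)
        self.pins = json.loads(self.path.read_text()) if self.path.exists() else {}

    def check(self, host: str, cert_der: bytes) -> str:
        fp = hashlib.sha256(cert_der).hexdigest()
        if host not in self.pins:
            self.pins[host] = fp  # trust on first use: no verification at all here
            self.path.write_text(json.dumps(self.pins))
            return "first-use"
        return "ok" if self.pins[host] == fp else "MISMATCH"
```

Note that the `"first-use"` branch is exactly the weakness raised in the replies below: whoever is on-path for the very first connection gets pinned, and every legitimate certificate rotation afterwards looks like a `"MISMATCH"`.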
iamnothere
TOFU would allow the ISP to MITM every connection and then serve you ads. The ISP could simply provide their own cert to you.
blenderob
> Simply using TOFU-certificates (Trust On First Use) would achieve this.
As a user how would I know if I should trust the website's public key on first use?
akerl_
I guess we could organize regional parties where site operators and users meet up and exchange key material. I'm sure that will scale and won't have any problems of its own.
1718627440
The same way I know which real person is serving me the website: I don't. I merely know that the owner doesn't change randomly.
octoberfranklin
The same way you know if you should trust the WebPKI Rube-Goldberg-contraption: you don't.
It's a counterexample, not a recommendation.
If you need this guarantee, use self-certifying hostnames like Tor *.onion sites do, where the URL carries the public key. More examples of this: https://codeberg.org/amjoseph/not-your-keys-not-your-name
stop_nazi
1. Using HTTP-only for decades, never seen "injections".
2. Just change ISP.
bilekas
> using http-only for decades, never seen “injections”
This has to be a rage bait comment, but anyway, how do you expect 'injections' to show up on 'http-only' ?
"Don't mind us, we're just sitting in the middle of your traffic here and recording your logins in plaintext"
stop_nazi
I'm not talking about logins; those are supposed to be encrypted. If I go to read news that is open to an unlimited number of people, there is no need for encryption: the information is open.
Avamander
> 2. just change ISP
Not a viable option in a lot of places. Nor does anyone really even want to consider this possibility of their ISP being able to MITM something in the first place.
stop_nazi
If a provider does not provide data transmission, that provider is not competent. Period
perching_aix
> just change ISPs
I sure love when decisions reduce themselves to single points of consideration by virtue of them being discussed in a heated internet forum thread
stop_nazi
The problem of horrible injections on the page can be solved very simply. If the information on a page is open, just serve the page openly and pass a checksum of the page in a header. To prevent this sum from being tampered with, the server encrypts it: not the whole page, just the sum. You save a lot of CPU time on server and client, reduce CO₂, and so on.
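The scheme proposed above can be sketched in a few lines. HMAC is used here purely as a stand-in for "the server protects the checksum"; a real deployment would need public-key signatures, since browsers cannot hold the server's secret, which is part of why this collapses back into something TLS-shaped:

```python
import hashlib
import hmac

# Hypothetical server-side secret; in a workable design this would have to be
# an asymmetric signing key whose public half clients can verify against.
SERVER_KEY = b"server-only-secret"

def make_response(body: bytes) -> dict:
    """Serve the body in the clear, plus an integrity tag over it."""
    tag = hmac.new(SERVER_KEY, body, hashlib.sha256).hexdigest()
    return {"body": body, "X-Body-Signature": tag}

def verify_response(resp: dict) -> bool:
    """Detect in-transit tampering (e.g. an injected ad banner)."""
    expected = hmac.new(SERVER_KEY, resp["body"], hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, resp["X-Body-Signature"])
```

This gives integrity without confidentiality, which is essentially the "eNULL" point made in the reply: the hard parts (key distribution, replay protection, stripping attacks) are exactly what TLS already solves.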
Avamander
So TLS with some "eNULL" ciphersuite. People have been there, tried that. There's very very little practical value in that over just doing proper encryption as well.
woodruffw
> Not this time. The technical problems are easy to solve. For decades, users of SSH have had a system (save the certificate permanently the first time you connect, and warn if it ever changes) that is optimal in a sense: it works at least as well as any other solution. It's trivial to implement, is completely free, involves no third parties, and lasts forever. To the surprise of absolutely no one, web browsers don't support it.
This is completely backwards: TOFU schemes aren't acceptable for the public web because the average user (1) isn't equipped to compare certificate fingerprints for their bank, and (2) shouldn't be exposed to any MITM risk because they forget to. The entire point of a public key infrastructure like the Web PKI is to ensure that technical and non-technical people alike get transport security.
(The author appears to unwittingly concede this point with the SSH comparison -- asking my grandparents to learn SSH's host pinning behavior to manage their bank accounts would be elder abuse. It works great for nerds, and terribly for everyone else.)
ericbarrett
Does it even work great for nerds? I have seen a distressing amount of turning host key warnings off, or ignoring the warnings forever, or replacing a host key without any curiosity or investigation. Seems even worse in the cloud, where systems change a lot.
woodruffw
> Does it even work great for nerds?
No, but I was extending a charitable amount of credulousness :-)
evilduck
Even amongst nerds I've seen a significant amount of key pair re-use in my time, both 1:n::dev:servers and sometimes even 1:n::organization:devs. The transport security is moot when the user(s) discard all precautions and best practices on either end.
Avamander
Even in such cases it's not really moot if a forward-secure scheme is used, only old legacy implementations might not by now. So just the key being shared between machines does usually not compromise the security of individual sessions, especially not retroactively.
ghusto
Please let's not break something that works really well just to cater to those who don't know how to use the tools of their trade.
MrDarcy
The platform engineering team at my big corp work simply disabled host key checking in the cloud tool Python script they wrote for all of us to log into our bastion hosts.
For prod.
ssh --known-hosts-file=/dev/null
20after4
Wow, that is a level of DGAF I haven't encountered before in production. No wonder data breaches are so common with that kind of YOLO security practices.
Spivak
I think it's pretty reasonable to turn off the "yes i would like to accept this key" on first connect. Just scream if it ever changes. I get that they're expecting me to compare it to something out of band but nobody does that.
jeroenhd
Depends on the server. A VM you just installed on your own machine? A lab machine on the proxmox cluster? Probably.
A new cloud VM running in another city? I would trust it by default, but you don't have a lot of choice in many corporate environments.
Funnily enough, there is a solution to this: SSH has a certificate authority system that will let your SSH clients trust the identity of a server if the hostkey is signed and matches the domain the SSH CA provided.
Like with HTTPS, this sort of works if you're deploying stuff internally. No need to check fingerprints or anything, as long as whatever automation configured your new VM signs the generated host key. Essentially, you get DV certificates for SSH except you can't easily automate them with Let's Encrypt/ACME because SSH doesn't have tooling like that.
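For reference, the client-side trust for such an SSH CA is a single `known_hosts` line; the key material here is an illustrative placeholder:

```
# Trust any host key in *.example.com that is signed by this CA key.
@cert-authority *.example.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPLACEHOLDERCAKEYDATA
```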
blenderob
> I think it's pretty reasonable to turn off the "yes i would like to accept this key" on first connect.
Why is it reasonable to trust the key on first use? What if the first use itself has a man-in-the-middle that presents you the middle-man's key? Why should I trust it on first use? How do I tell if the key belongs to the real website or to a middle-man website?
blenderob
> TOFU schemes aren't acceptable for the public web because the average user (1) isn't equipped to compare certificate fingerprints for their bank
This! Forget about the average user. Even as a technical user, I don't know how I would compare fingerprints every single time without making a mistake. I could install software, or write my own, to do this on a desktop, but what would I do on a cell phone?
And TOFU requires "trust" on first use. How do I make sure that I should be trusting the website's public key on first use? It doesn't seem any easier to solve than PKI.
akerl_
This is the sleight of hand being employed when folks suggest TOFU mechanisms. The problem with any communication boils down to trust. The modern web PKI has a bunch of complexity and plenty of rough edges in how it handles resolving that trust. TOFU is then proposed as a simpler solution with none of those pesky rough edges, but it doesn't have the rough edges because it leaves all the hard parts as an exercise for the reader.
It's a bit like suggesting that AES-GCM has risks so we ought to just switch to one-time-pads.
Avamander
> How do I make sure that I should be trusting the website's public key on first use? It doesn't seem any easier to solve than PKI.
Usually such questions get replied to with a recommendation of implementing DNSSEC. Which is also obviously PKI and in many ways worse than WebPKI.
perching_aix
It's the usual hilarious flow of "HTTPS is dogshit, so here's the SSH fingerprint you should trust instead, served over HTTPS of course".
arielcostas
SSH fingerprints can also be provided via DNS with the SSHFP[0] DNS record, which, coupled with DNSSEC and supposing you trust the DNS root and intermediate entities (whether that's IANA/ICANN, or alternatives like OpenNIC or Namecoin), allows you to check SSH server fingerprints without HTTPS. At some point you probably need to trust someone anyway.
Or you can always get the fingerprint out of band. If it's a friend granting you SSH access to their server, or a vendor, or whatever, you can ask them to write the fingerprint on a piece of paper and give it to you, with you checking that the paper really comes from them.
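An SSHFP record (RFC 4255) looks like the following; algorithm 4 is Ed25519 and fingerprint type 2 is SHA-256, and the fingerprint value here is an illustrative placeholder. The OpenSSH client opts in with `ssh -o VerifyHostKeyDNS=yes`:

```
; <name> IN SSHFP <algorithm> <fp-type> <fingerprint>
host.example.com. IN SSHFP 4 2 0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
```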
NoahZuniga
> You can make the warning go away by paying a third-party—who then pays Google—to sign your website's SSL certificate
This is just not true!!!! CAs don't pay Google to be in their root store.
> But if someone is able to perform a man-in-the-middle attack against your website, then he can intercept the certificate verification, too
The reasoning goes that most (potential) MITM attacks are between you and your ISP. Let's Encrypt can connect to the backbone basically directly, so most MITM attacks won't reach them. Also, starting on September 15, 2025 (Let's Encrypt has been doing this for a while already), all domain validation requests have to be made from multiple perspectives, making MITM attacks harder.
nicce
I don’t know whether they pay Google, but Google can dictate many things; otherwise they drop certificates from Chrome, and this has happened.
NoahZuniga
Well, I do! And Google doesn't get paid!
> otherwise they drop certificates from Chrome and this has happened.
As far as I know, all the CAs Google dropped were dropped because the CA misbehaved and misissued certs, or was obviously failing at its job. Also, all CAs Google has removed from its root store have also been removed by Mozilla (or weren't removed because Mozilla never included them).
akerl_
You're thinking of the CAB, which dictates which CAs are trusted. Google is a participant in that. The things they dictate are public and have to do with security requirements, not whether or not they pay Google money.
NoahZuniga
This is not true! CAB is a place where CAs and browsers agree on what the rules for CAs should be. Google, Mozilla, Microsoft and Apple all administer their own root stores, which individually decide what CAs are trusted on their platforms. Individual root stores decide on the rules for inclusion in their stores themselves, but these rules are essentially: you follow CAB rules + a few extra things. Mozilla for example requires (besides CAB rules) that whenever a CA becomes aware of an issue, they post a bug to Bugzilla, get their shit together pretty quickly, and keep Mozilla up to date on what they're doing.
1718627440
> But if someone is able to perform a man-in-the-middle attack against your website, then he can intercept the certificate verification, too. In other words, Let's Encrypt certificates don't stop the one thing they're supposed to stop.
But the certificate is signed with Let's Encrypt's key, and your own private key never leaves the server.
voidmain
The author is claiming that a sufficiently capable attacker can MITM the ACME protocol used to automatically renew certificates (and thus get a valid certificate issued for the victim domain with the attacker's private key). This is probably true as far as it goes, but certificate transparency logs make such attacks easy to detect, and browsers will not accept certificates that are not in the logs. Web sites that do not monitor CT logs probably are vulnerable to well resourced attacks of this kind, but I don't think there is a huge plague of them, maybe because attackers with the ability to MITM DNS requests for LE don't want to burn that capability on such easily detected attacks.
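The monitoring voidmain describes is conceptually simple: watch CT logs for certificates naming your domains, and alert on anything you didn't request yourself. This sketch uses an illustrative entry format; real monitors consume logs via services like crt.sh or a log's get-entries API. Note that matching on issuer alone would not have caught the jabber.ru attack (those were legitimate Let's Encrypt issuances, just to the attacker's key), so the comparison is against fingerprints of certificates you know you obtained:

```python
MONITORED_DOMAINS = {"jabber.ru"}

def unexpected_certs(ct_entries, known_fingerprints):
    """Return CT entries for our domains whose certs we never requested.

    ct_entries: dicts with "domain" and "fingerprint" keys (illustrative shape).
    known_fingerprints: fingerprints of every cert we legitimately obtained.
    """
    return [
        e for e in ct_entries
        if e["domain"] in MONITORED_DOMAINS
        and e["fingerprint"] not in known_fingerprints
    ]
```

Anything this returns warrants an immediate revocation request and incident response, since browsers will happily accept the rogue certificate until then.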
ameliaquining
Also, if the CA runs the ACME check from five different validation servers that aren't all on the same continent, which Let's Encrypt does and all other CAs will be required to do in a couple years, then it is dramatically harder to simultaneously MITM them all. And if you really want to, you can use DNS-01 with DNSSEC, which means an attacker would have to be able to compromise DNSSEC on top of everything else.
codethief
> Web sites that do not monitor CT logs probably are vulnerable to well resourced attacks of this kind
How many website owners really do that? I mean, even Cloudflare wasn't running a tight ship in this regard[0] until recently.
[0]: https://blog.cloudflare.com/unauthorized-issuance-of-certifi...
ownagefool
Yeah, the argument apparently doesn't really grok how certificates are issued and why the changes exist.
Manual long-term keys are frowned upon due to potential key leaks (such as Heartbleed) or admin misuse (such as copies of keys on lots of devices back when you were signing that 10-year key).
Automated, short-lived keys are the solution to these problems, and they're pretty hard to argue against, especially as the key never leaves the server, so the security concerns are invalid.
That's not to say you can't levy valid criticism. I'm not sure if the author is entirely serious either though.
p.s. Certbot and Cert-manager are probably fine, but they're also fairly interesting attack vectors
bilekas
Yeah it reads as if the OP misunderstands the attack vectors of SSL. If there's a misconfiguration, or the server admin is not correctly authenticating the authority, then sure. But skips over what they mean.
Being generous, I would say they are referring to the client having an invalid SSL certificate approved locally, in which case it's a client problem.
Ignoring encryption altogether is a silly idea. Maybe it shouldn't be so centralised around one company, though.
maratc
My IT department performs a man-in-the-middle attack against all my VPN traffic, and issues on-the-fly certificates for all the sites I visit. There is zero warning on my side, and the only way I know of it is because I'm a nerd who looks into certificate chains sometimes. My other nerd coworkers are blissfully unaware.
EDIT: I understand how it works. This wasn’t my point.
jval43
They need to install their root certificate into your work machine's trust store. Which they can only do because they control the machine (or VPN software), and would not be possible for a regular machine.
maratc
Many people are using VPNs these days. Nothing prevents vpn-du-jour.com from similarly messing with your traffic. Moreover, any software you install with privileges could also install certificates. In this sense, “a regular machine” is only the one which has no other software installed.
The point (I think) that TFA is trying to make is that encryption isn't enough. It wouldn't be a good situation where someone looks at their house burning and says "well, at least nobody could ever read my HTTPS traffic."
akerl_
This only works because your company's endpoints have been configured to trust the company's root CA. Which makes sense, because it's their device and their VPN.
AntronX
I wonder if you could roll your own VPN tunnel that directly connects to your home internet IP and passes a custom encrypted payload that your IT department cannot decode. Would they just drop the connection if they can't inspect what you are sending?
1718627440
The issue is that they control the device he is using, so they could simply verify it on device.
keepamovin
Yes, but if you use HSTS a regular browser will flag that. Perhaps your browser is also "MITM"d via a management policy? hehe :S
Spivak
Also MITMing a user is much easier than MITMing Let's Encrypt themselves who perform multiple checks from different locations.
NegativeK
This article feels like an opinion piece with an axiom of perfection or nothing.
samcat116
That's most articles posted to HN.
Antibabelic
Perfection is a very useful guide star. Just because it may not exist doesn't mean we shouldn't hold up deeply flawed projects to it.
llm_nerd
Indeed, their main gripe seems to be with DV, and they seem to hold only EV certs as legitimate. They miss the entire value proposition and purpose of DV certs.
MITM is a user->service concern. If someone is between a service and LE, there are much bigger problems.
Ajedi32
Certainly a MITM between a website and LE is less likely than a MITM between a user on a random public Wi-Fi network and the website, but I've often wondered why more attention hasn't been given to securing the domain validation process itself.
There are a lot of random internet routers between CAs and websites which effectively have the ability to get certificates for any domain they want. It just seems like such an obvious vulnerability I'm kinda shocked it hasn't been exploited yet. Perhaps the fact that it hasn't is a sign such an attack is more difficult than my intuition suggests.
Still, I'd be a lot more comfortable if DNSSEC or an equivalent were enforced for domain validation. Or perhaps if we just cut out the middleman and built a PKI directly into the DNS protocol, similar to how DANE or Namecoin work.
ameliaquining
A lot of attention has been given to securing the domain validation process. The primary defense is Multi-Perspective Issuance Corroboration, which Let's Encrypt already does and all CAs will be required to do in a couple years. The idea is that you run the check from five different servers on two different continents, so that compromising just one internet router isn't enough, you have to get one on every path, which is much harder to pull off.
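The corroboration logic described above can be sketched as a simple quorum check. The exact quorum rule here is illustrative (the actual Ballot SC-067 requirements are more detailed than "at most one disagreeing perspective"):

```python
def corroborate(perspective_results, min_perspectives=3, max_failures=1):
    """Decide whether to issue, given one pass/fail result per vantage point.

    perspective_results: list of booleans, one per remote validation server.
    An attacker must now control the network path from nearly every
    perspective at once, not just one router near the victim.
    """
    if len(perspective_results) < min_perspectives:
        return False
    return perspective_results.count(False) <= max_failures
```

The game-theoretic point stands independently of the exact numbers: each added perspective on a different continent multiplies the infrastructure an on-path attacker has to control simultaneously.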
Also, Let's Encrypt validates DNSSEC for DNS-01 challenges, so you can use that if you like, although CAs in general are not required to do this, there are various reasons why a site operator might not want to, and most don't.
There are two fundamental problems with DANE that make it unworkable, and that would presumably also apply to any similar protocol. The first is compatibility: lots of badly behaved middleboxes don't let DNSSEC queries through, so a fail-closed system that required end-user devices to do that would kick a lot of existing users off the internet (and a fail-open one would serve no security purpose). The other is game-theoretic: while the high number of CAs in root stores is in some ways a security liability, it also has the significant upside that browsers can and do evict misbehaving CAs, secure in their knowledge that those CAs' customers have other options to stay online. And since governments know that'll happen, they very rarely try to coerce CAs into misissuing certificates. By contrast, if the keepers of the DNSSEC keys decided to start abusing their power, or were coerced into doing so, there basically wouldn't be anything that anyone could do about it.
Joker_vD
EV certs seem to have basically the same verification policies that CAs had for ordinary certificates back in the early 2000s (i.e., really not that much at all), so I am intrigued as to what the DV has to offer except "it's basically self-signed but with extra steps and the rest of the world will trust it".
> If someone is between a service and LE
There is always someone there: my ISP, my government that monitors my ISP, the LE's ISP, and the US government that monitors the LE's ISP.
DannyBee
I mean, it's not untypical as a view.
In reality, successful society lives halfway down tons of slippery slopes at any given point in time, and engineers in particular hate this. Yet this has been true since basically forever.
I'm sure cavemen engineers complained about how it's not secure to trust that your cave is the one with the symbol you made on the wall, etc.
rini17
Sure let's eat crap without complaint, nothing is perfect anyway. /s
sigmar
His points aren't bad, but it seems like a great example of "perfect is the enemy of good." Let's Encrypt does an incredible amount of good by adding SSL to sites that wouldn't have had it otherwise.
ghusto
His points against LetsEncrypt are that:
- It introduces an exploitable attack vector
- He sees it as a Trojan Horse, and fears for what will happen in the future
There are a few static sites I run where there is no exchange of information. I'm locked into ensuring certificates exist for these sites, even though there's nothing to protect (unless you count ensuring the content is really from me as protecting something).
nearbuy
Except his points are mostly straight up factually wrong.
sigmar
It does kind of suck that Let's Encrypt is entirely funded by donations from corporations like Google and Facebook. If they pulled support, what would happen? Would 92% of the websites we visit get downgraded to HTTP?[1]
Also his point that it "supplants better solutions" is inarguably true. The 2010s had lots of conversations about certificate transparency and CA changes that just don't happen today because the existence of Let's Encrypt made it so easy to put a cert-signed website online.
[1] of US firefox users: https://letsencrypt.org/stats/
AndrewStephens
It is a shame that HTTPS is required for sites these days but that doesn't change the fact that it really is necessary, even for the smallest of blogs.
HTTPS does three interrelated things:
Encryption - the data cannot be read by an intermediary, which protects your readers' privacy. You don't want people to know what pages you read on BigBank.com or EmbarassingFetish.com.
Tamper Proofing - the data cannot be changed by an intermediary, which protects your readers' (and your server) from someone messing with the data, say substituting one bank account number for another when setting up a payment, etc.
Site Authentication - ensures that the browser is connected to the server it says it is, which also prevents proxying. Without this an intermediary can impersonate any site.
Before the big push for encrypting everything it was not uncommon to hear of ISPs inspecting all traffic to sell to advertisers, or even injecting ads directly into pages. HTTPS makes this much more difficult.
jeroenhd
HTTPS is hardly required for websites. Web applications may restrict sensitive actions to HTTPS, but websites over HTTP still work fine.
I try to avoid them because they allow sketchy ISPs to inject ads and other weirdness into my browser, but normal browsers will still accept HTTP by default.
If you don't want people to know you're visiting EmbarrassingFetish.com, EmbarrassingFetish.com also needs to implement ECH (eSNI's replacement) and your browser must have it enabled, otherwise anyone on the line can still sniff out what domain you're connecting to.
I don't think site authentication is practical, though. For some use cases it works (e.g. validating the origin before firing off a request to a U2F/FIDO2 authenticator), but for normal users, mybank.com and securemybank.com may as well be equivalent (and some shitty important services actually use fake-sounding domains like that, PayPal for instance). Unless you remember the country and state and town your bank is registered in, even EV certificates can't help you, because there can be multiple companies named Apple Inc. that all deserve a certificate for their website.
AndrewStephens
Hey, I only read EmbarrassingFetish.com for the recipes section (I recommend their carrot cake.) I'm not into the rest of the stuff there and you can't prove it thanks to HTTPS.
More seriously, you are not wrong. Site Authentication is still a problem and actually the weakest part of HTTPS but it is also more of a people problem than a technical one. Nothing stops somebody from registering MyB4nk.com but at least HTTPS stops crooks spoofing MyBank.com exactly.
kbolino
> but websites over HTTP still work fine
The best attack surfaces always do. If I'm a smart attacker, why would I impair your experience (at least, until I get what I want)? It's better to give you a false sense of security. There are, of course, dumber attacks that will show obvious signs. While many people do fall prey to such attacks from lapses in, or impairment to, their judgment, the smarter attacks hide themselves better.
The classical model of web security based around "important" sites and "sensitive" actions has been insufficient for decades. It was certainly wrong by the time the first coffee shop/airport/hotel wifi was created; by the time the first colocation provider/public cloud was created; by the time every visitor/student/employee of any library/university/company was given open Internet access; etc.
kbolino
I think this is the classical explanation and set of examples, which only really explain why HTTPS should be used on "important" websites. But HTTPS should be used on every website and you need a different explanation/example for justifying that.
To connect to a website on the Internet, you must traverse a series of networks that neither you nor the website control. If the traffic is not tamper-proof, no matter how "unimportant" it may seem, it presents the opportunity for manipulation. All it takes is one of the nodes in the path to be compromised.
Scripts can be injected--even where none already exist; images can be modified--you see a harmless cat picture, the JPEG library gets a zero-day exploit; links can be added and manipulated--taking you to other, worse sites with more to gain by fooling you.
None of this is targeted at you or the website per se. It's targeted at the network traffic. You're just the victim.
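To make the injection point concrete, here is a toy sketch (Python; all names illustrative) of the kind of rewrite an on-path box can do to a plain-HTTP HTML response. Nothing about it requires knowing or caring what the site is:

```python
def inject_script(html: str, payload: str) -> str:
    """Insert a script tag just before </body>, a classic injection point."""
    marker = "</body>"
    if marker in html:
        return html.replace(marker, f"<script>{payload}</script>{marker}", 1)
    return html + f"<script>{payload}</script>"  # no </body>? append anyway

# Any readable, mutable HTTP response is fair game for this rewrite;
# the "harmless blog" is just the carrier.
page = "<html><body><p>A harmless blog post.</p></body></html>"
tampered = inject_script(page, "fetch('//attacker.invalid/c')")
print(tampered)
```

Real-world injectors work the same way at the packet/stream level, applied indiscriminately to every unencrypted response that passes through.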
Avamander
> If the traffic is not tamper-proof, no matter how "unimportant" it may seem, it presents the opportunity for manipulation. All it takes is one of the nodes in the path to be compromised.
It also ignores one really important fact that these pipes are not perfect, they do introduce errors into the stream. To ensure integrity we would still need to checksum everything and in a way that no eager router "fixes".
We want our bank statements to be bit-perfect, our family pictures not to be corrupted, so on and on.
So even if someone handwaves away all the reasons why we need encryption everywhere (which is insane), we would still need something very similar to TLS, with CAs, in use. Previous TLS versions even had "eNULL" ciphersuites for exactly this: integrity and authentication without encryption.
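A minimal sketch of that distinction (Python stdlib; the key is illustrative, TLS derives one per session): an unkeyed checksum can be recomputed by whoever edits the bytes, while a keyed MAC, which is roughly what a TLS record's HMAC or AEAD tag provides, cannot:

```python
import hashlib
import hmac
import zlib

data = b"balance: $1,024.00"
edited = b"balance: $9,024.00"

# Unkeyed checksum (CRC-style): catches random line noise, but an active
# intermediary simply recomputes it after editing, so the receiver still
# sees a self-consistent message.
attacked = (edited, zlib.crc32(edited))
assert attacked[1] == zlib.crc32(attacked[0])  # looks "valid" on arrival

# Keyed MAC: without the session key, no valid tag can be produced for
# the edited bytes, so tampering is detectable.
key = b"per-session-secret"  # illustrative; TLS derives this in the handshake
tag = hmac.new(key, data, hashlib.sha256).digest()
genuine = hmac.compare_digest(tag, hmac.new(key, data, hashlib.sha256).digest())
forged = hmac.compare_digest(tag, hmac.new(key, edited, hashlib.sha256).digest())
print(genuine, forged)  # True False
```

The key exchange is the part that drags in certificates and CAs: a MAC key agreed over an unauthenticated channel can itself be intercepted, which is why "just checksums" ends up needing most of TLS anyway.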
kbolino
It would have been nice to have been able to keep eNULL around, but a) it was basically never used in practice and b) the way it worked practically guaranteed it was impossible for the average sysadmin to get right. There's never really a situation in which you might want to negotiate eNULL instead of a specific encryption algorithm. Either the site/page is encrypted or it isn't. Encryption-or-not is on a completely different axis from the type of encryption to use. And configuring older versions of SSL/TLS involved traversing a minefield of confusing, arcane, and trap-laden knobs whose documentation was written for the wrong audience.
ghusto
> which protects your readers' privacy. You don't want people to know what pages you read on BigBank.com or EmbarrassingFetish.com
DNS requests leak this information.
> Tamper Proofing > Site Authentication
There are _many_ sites where this is not important. I want HTTPS for my bank, but I couldn't care less if someone wants to spend the time and effort to intercept and change pages from a blog I read.
kbolino
> I couldn't care less if someone wants to spend the time and effort to intercept and change pages from a blog I read.
I do not understand why so many people think having, say, zero-day exploits served to them is not a problem.
The blog is not the target; the unsecured connection is.
Approximately nobody is taking the time to hand craft a specific modification of some random blog. They develop and use tools that manipulate any packet streams which allow tampering, without the slightest concern for how (un-)important the source of those packets is.
1718627440
> The official way to renew Let's Encrypt certificates is automatically, with a tool called certbot. It downloads a bunch of untrusted data from the web, and then feeds that data into your web server, all as root.
Why would you run certbot as root? You don't do that with any other server.
jval43
It used to be the case that you had to run certbot as root or it just wouldn't work. At least not officially; you could get it to work without root, but it wasn't supported.
The official docs still recommend doing so: >Certbot is most useful when run with root privileges, because it is then able to automatically configure TLS/SSL for Apache and nginx.
Avamander
I think I've never run it as root since it came out, by using the `webroot` method, where certbot just writes the challenges to a specified path it has access to and that's it.
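For reference, an unprivileged webroot setup looks roughly like this (a sketch; the user name and paths are illustrative, and your web server must already serve `/.well-known/acme-challenge/` from that webroot over HTTP):

```shell
# One-time setup (as root): a dedicated user and a writable challenge
# directory, so certbot itself never needs root.
sudo useradd --system --create-home certbot-user
sudo mkdir -p /var/www/acme/.well-known/acme-challenge
sudo chown -R certbot-user /var/www/acme/.well-known

# As certbot-user: certbot only writes challenge files under the webroot
# and keeps all its state in directories the user owns.
certbot certonly --webroot -w /var/www/acme -d example.com \
    --config-dir "$HOME/letsencrypt/config" \
    --work-dir   "$HOME/letsencrypt/work" \
    --logs-dir   "$HOME/letsencrypt/logs"
```

The certificate and key then land under the user-owned config dir; reloading the web server to pick them up is the one step that may still need elevated rights.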
1718627440
I haven't experienced that, since I prefer acmetool.
charles_f
I remember when you had to give Verisign a few hundred or a few thousand dollars every year; some random dev would download the cert to their machine and circulate it by email at renewal time. None of the competitors were cheaper either. Those days weren't better or more secure; quite the opposite. Ironically, the solution the author pushes (basically self-signing) is much worse at preventing MITM attacks.
I somewhat agree with the precept, it's not great that the web is controlled by Google, beyond just tls certs. Something that changed since this was written is precisely that you have alternatives like zerossl.
Saying that letsencrypt doesn't bring any security is plain wrong though. The OWASP top ten doesn't list certificate theft or chain of trust mitm attack, but does have a category for cryptographic failures. My hotel has full control of the wifi, but it hardly has an opportunity to mitm my chain of trust. Same goes for ISP. When you have a cert corresponding to your dns record, it at least shows that you have some control over the infra that is behind that record.
jjgreen
Heh, that page "Verified by Let's Encrypt"
croes
> Update 2023-11-05 Yeah, I've got an LE cert now. And I don't want to talk about it.
_def
That quote is the only thing you have to read of that article besides the headline.
dijit
The ironic observation about the page using an LE cert is fantastic; browser mandates make the encryption discussion moot. If you don't use it, your argument literally won't load for a modern audience.
It speaks to the problem of digital decay. We can still pull up a plain HTTP site from 1995, but a TLS site from five years ago is now often broken or flagged as "insecure" due to aggressive deprecation cycles. The internet is becoming less resilient.
And this has real, painful operational consequences. For sysadmins, this is making iDRAC/iLO annoying again.
(for those who don't know what iDRAC/iLO are: it's the out-of-band management controller that lets you access a server's console (KVM) even when the OS is toast. The shift from requiring crappy, insecure Java Web Start (JWS) to using HTML5 was a massive win for security and usability - old school sysadmins might remember keeping some crappy insecure browser around (maybe on a bastion host) to interact with these things because they wouldn't load on modern browsers after 6mo)
Now, the SSL/TLS push is undoing that. Since the firmware on these embedded controllers can't keep pace with Chrome's release schedule, the controllers' older, functional certificates are rejected. The practical outcome is that we are forced to maintain an old, insecure browser installation just to access critical server hardware again.
We traded one form of operational insecurity (Java's runtime) for another (maintaining a stale browser) all because a universal security policy fails to account for specialised, slow-to-update infrastructure... I can already hear the thundering herd approaching me: "BUT YOU NEED FIRMWARE UPDATES" or "YOU NEED TO DEPRECATE YOUR FIRMWARES IF NOT SUPPORTED".. completely tone-deaf to the environments, objectives and realities where these things operate.
notatoad
>if you don't use it, your argument literally won't load for a modern audience
this is just a flat-out lie. yes, modern browsers will still load websites over http. come on.
peacebeard
SSL benefits a user entering a password on a public network.
A MITM attack against your renewal does not expose your private key. I don’t think that causes the harm the article suggests.
1718627440
It does, however, allow an attacker to intercept all future connections to your webserver until you notice it and get the certificate revoked.
Avamander
No. The private key does not leave the server, you can't use the certificate without it.
1718627440
When you MITM a certificate request, the attacker can provide its own key: the CSR contains the public key, so a substituted request yields a valid cert for the attacker's keypair.
lanyard-textile
Let’s Encrypt has always been a saving grace in my eyes: When it first entered the scene, it solved a problem we all loathed dealing with.
So I’ve always been fond of it and never really thought twice of it. While it’s rare for companies to support a shared resource together, this was a situation where it made sense.
But this is a good reminder to be wary of even the most benevolent looking tools and processes.
I think that if we didn't do TLS, every ISP would be injecting ads into websites these days. Making it difficult for middle-of-the-road interlopers is a good thing. ISPs don't want the customer service burden of proxy configurations and custom certs (god knows your IT department hates the support aspect of this tampering), so TLS keeps us free of excessive advertising. (Of course, they do still tamper with DNS, which is why we have to do DNS-over-HTTPS. If you make it easy to tamper with your traffic, your ISP has a good business case to tamper with your traffic. Sad but true.)
I'm not as convinced as the author is that nation states can easily tamper with certificates these days. I am not sure how much CT checking we do before each page load, but either nation states are compelling issuance of certs that aren't in the CT logs (which browsers should reject), or the certs are logged and you can just get a list of who the nation states are spying on. Seems like less of a problem than it was a decade ago.
The author seems to miss the one guarantee that certificates provide; "the same people that controlled this site on $ISSUANCE_DATE control the site right now". That can be a useful guarantee.