Nginx introduces native support for ACME protocol

Shank

> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

DNS-01 is probably the most impactful for nginx users whose servers aren't public facing (e.g., via Nginx Proxy Manager). I really want to see DNS-01 land! I've always felt it's also one of the cleanest challenges, because it just means updating some DNS records and doesn't need to be directly tethered to whatever you're hosting.

kijin

A practical problem with DNS-01 is that every DNS provider has a different API for creating the required TXT record. Certbot has more than a dozen plugins for different providers, and the list is growing. It shouldn't be nginx's job to keep track of all these third-party APIs.

It would also be unreasonable to tell everyone to move their domains to a handful of giants like AWS and Cloudflare who already control so much of the internet, just so they could get certificates with DNS-01. I like my DNS a bit more decentralized than that.

sureglymop

That is true and it is annoying. They should really just support RFC 2136 instead of building their own APIs. Lego also supports this and pretty much all DNS servers have it implemented. At least I can use it with my own DNS server...

https://datatracker.ietf.org/doc/html/rfc2136
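
With RFC 2136, publishing the challenge record is one generic dynamic update that works against any compliant server. A rough sketch with BIND's nsupdate (the server, key name, and secret are placeholders):

    # publish the ACME TXT record via an RFC 2136 dynamic update, authenticated with TSIG
    nsupdate -y hmac-sha256:acme-key:BASE64SECRET= <<'EOF'
    server ns1.example.com
    zone example.com.
    update add _acme-challenge.example.com. 60 TXT "TOKEN-FROM-ACME-SERVER"
    send
    EOF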

cpach

This is a very good point.

I wonder what a good solution to this would be? In theory, Nginx could call another application that handles the communication with the DNS provider, so that the user can tailor it to their needs. (The user could write it in Python or Go or whatever.) Not sure how robust that would be though.
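
For what it's worth, certbot already supports exactly this pattern with its manual hooks: certbot passes the domain and validation token to a user-supplied script via environment variables (CERTBOT_DOMAIN, CERTBOT_VALIDATION). The hook paths here are placeholders:

    certbot certonly --manual --preferred-challenges dns \
        --manual-auth-hook /usr/local/bin/dns-add-txt \
        --manual-cleanup-hook /usr/local/bin/dns-del-txt \
        -d example.com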

uncleJoe

no need to wait: https://en.angie.software/angie/docs/configuration/modules/h...

(Angie is the nginx fork led by original nginx developers who left F5.)

tmcdos

What are the main differences between Angie and freenginx.org?

rfmoz

The problem with DNS-01 is that you can only use one delegation at a time. That is, if you configure a wildcard cert with _acme-challenge.example.com pointing at Google, you couldn't also use it with Cloudflare, because validation relies on a single DNS authorization label (subdomain).

The solution has been evolving over the years, and the latest IETF draft is https://datatracker.ietf.org/doc/draft-ietf-acme-dns-account...

The new proposal brings the dns-account-01 challenge, which incorporates the ACME account URL into the DNS validation record name, so different accounts validate against distinct record names.

clvx

But you have to have your DNS API key loaded, and many DNS providers don't allow scoping API keys per zone. I do like it, but a compromised key could be awful.

qwertox

You can make the NS record for _acme-challenge.domain.tld point to another server which is under your control; that way you don't have to update the zone through your DNS hoster. That server then only needs to be able to answer the challenge queries.
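
In zone-file terms the delegation is a single record (the target name server is a placeholder):

    ; in the zone for domain.tld, delegate only the challenge name:
    _acme-challenge.domain.tld.  IN  NS  acme-responder.example.net.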

yupyupyups

It's time for DNS providers to start supporting TSIG + key management. This is a standardized way to manipulate DNS records, and has a very granular ACL.

We don't need 100s of custom APIs.

https://en.m.wikipedia.org/wiki/TSIG

reactordev

The whole point is to abstract that from the users so they don’t know it’s a giant flat file. Selling a line at a time for $29.99. (I joke, obviously)

immibis

General note: your DNS provider can be different from your registrar, even though most registrars are also providers, and you can be your own DNS provider. The registrar is who gets the domain name under your control, and the provider is who hosts the nameserver with your DNS records on it.

qwertox

Yes, and you can be your own DNS provider only for the challenges, everything else can stay at your original DNS provider.

bananapub

No you don't; you can just run https://github.com/joohoi/acme-dns anywhere, and then CNAME _acme-challenge.realdomain.com to aklsfdsdl239072109387219038712.acme-dns.anywhere.com. Then your ACME client just talks to the acme-dns API, which lets it do nothing at all aside from dealing with challenges for that one long random domain.
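
The record in question would look something like this, reusing the made-up names from above:

    _acme-challenge.realdomain.com.  IN  CNAME  aklsfdsdl239072109387219038712.acme-dns.anywhere.com.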

Arnavion

You can do it with an NS record, i.e. _acme-challenge.realdomain.com pointing to the DNS server that you can program to serve the challenge response. No need to make a CNAME and involve an additional domain in the middle.

8organicbits

There's a SaaS version as well, if you don't want to self-host.

https://docs.certifytheweb.com/docs/dns/providers/certifydns...

rglullis

I've been hoping to get ACME challenge delegation on traefik working for years already. The documentation says it supports it, but it simply fails every time.

If you have any idea how this tool would work on a docker swarm cluster, I'm all ears.

grim_io

Sounds like a DNS provider problem. Why would Nginx feel the need to compromise because of some 3rd party implementation detail?

toomuchtodo

Because users would pick an alternative solution that meets their needs when they don't have the leverage or ability to change DNS provider. You have to meet users where they are when they have options.

ddtaylor

It's a bit of a pain in the ass, but you can actually just publish the DNS records yourself. It's clear manual renewals are on the way out though, as I believe it's only a 30-day valid certificate or something.

I use this for my Jellyfin server at home so that anyone can just type in blah.foo regardless of whether their device supports anything like mDNS, as half the devices claim to support it but don't do so correctly.

fmajid

My company's DNS provider doesn't even have an API so I delegated to a subdomain, hosted it on PowerDNS, and used Lego to automate the ACME.

quicksilver03

Is having one key per zone worth paying money for? It's on the list of features I'd like to implement for PTRDNS because it makes sense for my own use case, but I don't know if there's enough interest to make it jump to the top of the list.

chaz6

One of Traefik's shortcomings with ACME is that you can only use one api key per DNS provider. This is problematic if you want to restrict api keys to a domain, or use domains belonging to two different accounts. I hope Nginx will not have the same constraint.

mholt

This is one of the main reasons Caddy stopped using lego for ACME and I wrote our own ACME stack.

navigate8310

You can use CNAME to handle multiple DNS challenge providers. https://doc.traefik.io/traefik/reference/install-configurati...

samgranieri

I use DNS-01 in my homelab with step-ca and Caddy. It's a joy to use.

reactordev

+1 for caddy. nginx is so 2007.

darkwater

Caddy is just for developers who want to publish/test the thing they write. For power users or infra admins, nginx is still much more valuable. And yes, I use Caddy in my home lab and it's nice and all, but it's not really as flexible as nginx.

RadiozRadioz

So a tool's value should be judged as inversely proportional to its age?

supriyo-biswas

Only if they'd get the K8s ingress out of the WIP phase; I can't wait to possibly get rid of the cert-manager and ingress shenanigans you get with others.

attentive

Yes, ACME-DNS please - https://github.com/joohoi/acme-dns

Lego supports it.

Spivak

I don't even know why anyone wouldn't use the DNS challenge unless they had no other option. I've found the HTTP challenge to be annoying and brittle, maybe less so now with native web server support. And you can't get wildcards with it.

cortesoft

My work is mostly running internal services that aren’t reachable from the external internet. DNS is the only option.

You can get wildcards with DNS. If you want *.foo.com, you just need to be able to set _acme-challenge.foo.com and you can get the wildcard.

filleokus

Spivak is saying that the DNS method is superior (i.e. you are agreeing, and I do too).

One reason I can think of for HTTP-01 / TLS-ALPN-01 is on-demand issuance: issuing the certificate when you get the request. Which might seem insane (and kinda is), but can be useful for e.g. crazy web-migration projects. If you have an enormous, deeply nested domain sprawl that is almost never used but that you need to keep up for some reason, it can be quite handy.

(Another reason, soon, is that HTTP-01 will be able to issue certs for IP addresses: https://letsencrypt.org/2025/07/01/issuing-our-first-ip-addr...)

bryanlarsen

> DNS is the only option

DNS and wildcards aren't the only options. I've done annoying hacks to give internal services an HTTPS cert without using either.

But they're the only sane options.

cyberax

One problem with wildcards is that any service holding *.foo.com can pretend to be any other service under that domain. This is an issue if you're using mutual TLS authentication and want to trust the server's certificate.

It'd be nice if LE could issue intermediate certificates name-constrained to a specific domain ( https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1.... ).
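
As a sketch of what that constraint would look like at the OpenSSL level, a hypothetical x509v3 extensions section for such an intermediate (the section name and domain are made up):

    [ v3_constrained_intermediate ]
    basicConstraints = critical, CA:TRUE, pathlen:0
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:.foo.com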

bityard

The advantage to HTTP validation is that it's simple. No messing with DNS or API keys. Just fire up your server software and tell it what your hostname is and everything else happens in the background automagically.

abcdefg12

And if you have two or more servers serving this domain, you're out of luck.

jeroenhd

If you buy your domain from a bottom-of-the-barrel domain reseller and then don't pay for decent DNS, you don't have the option.

Plus, it takes setting up an API key, and most of the time you don't need a wildcard anyway.

account42

You don't need API access to your DNS, the ability to delegate the ACME challenge records to your own server is also enough.

Dylan16807

I don't know how to make my server log into my DNS, and I don't particularly want to learn how. Mapping .well-known is one line of config.

Wildcards are the only temptation.

account42

Just like you can point .well-known/acme-challenge/ to a writable directory you can also delegate the relevant DNS keys to a name server that you can more easily update.

account42

> I've found the HTTP challenge to be annoying and brittle

How so? It's just serving static files.

dizhn

This is pretty big. Caddy has had this forever, but not everybody wants to use Caddy. It'll probably eat into the user share of software like Traefik.

elashri

What I really like about Caddy is the nicer syntax. I actually use nginx (via Nginx Proxy Manager) and Traefik, but recently I did one project with Caddy and found it very nice. I might get the time to change my selfhosted setup to use Caddy in the future, but I'll probably go with something like pangolin [1] because it also provides an alternative to Cloudflare tunnels.

[1] https://github.com/fosrl/pangolin

kstrauser

I agree. That, and the sane defaults are almost always nearly perfect for me. Here is the entire configuration for a TLS-enabled HTTP/{1.1,2,3} static server:

  something.example.com {
    root * /var/www/something.example.com
    file_server
  }
That's the whole thing. Here's the setup of a WordPress site with all the above, plus PHP, plus compression:

  php.example.com {
    root * /var/www/wordpress
    encode
    php_fastcgi unix//run/php/php-version-fpm.sock
    file_server
  }
You can tune and tweak all the million other options too, of course, but you don't have to for most common use cases. It Just Works more than any similarly complex server I've ever been responsible for.

pgug

I find the documentation for the syntax a bit lacking if you want to do anything that isn't very basic and isn't how they want you to do it. For example, I want to use a wildcard certificate for my internal services to hide service names from certificate transparency logs, and I can't get the syntax working. ChatGPT and Gemini also couldn't.
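
For what it's worth, the usual shape is something like the following, assuming a DNS provider plugin (Cloudflare here) is compiled in; the hostnames, upstream, and env variable are placeholders:

    *.internal.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}
        }

        @vault host vault.internal.example.com
        handle @vault {
            reverse_proxy 127.0.0.1:8200
        }

        handle {
            abort
        }
    }

Since it's a wildcard, DNS-01 is mandatory, which is why this only works with a DNS plugin built in.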

Saris

Caddy does have some bizarre limitations I've run into, particularly around log file permissions: I want other processes like promtail to be able to read the logs, but Caddy always writes them with very restrictive permissions, and you cannot change that.

I find their docs also really hard to deal with; figuring out something that would be super simple on Nginx can be really difficult on Caddy if it's outside the scope of 'normal stuff'.

The other thing I really don't like is that if you install via a package manager to get automated updates, you don't get any of the plugins. If you want plugins, you have to build it yourself or use their build service, and then you don't get automatic updates.

francislavoie

Actually, you can set the permissions for log files now. See https://caddyserver.com/docs/caddyfile/directives/log#file
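
Per those docs, it's an option on the file output; something along these lines (the path is an example):

    log {
        output file /var/log/caddy/access.log {
            mode 0644
        }
    }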

nodesocket

I use Caddy as my main reverse proxy into containers, with Cloudflare-based DNS challenges for Let's Encrypt. The syntax is intuitive and just works. I've used Traefik in the past with Kubernetes, and while it's powerful, the setup and grokability come with a steeper learning curve.

dizhn

You can have the binary self-update with the currently included plugins. I think the command-line help says it's beta, but it has always worked fine for me.

karmakaze

Not only that, but the way nginx configuration is split up into all the separate modules is a lot of extra complexity that Caddy avoids by having a single coherent way of configuring its features.

dizhn

I checked out pangolin too recently, but then I realized that I already have Authentik, and using its embedded (Go-based) proxy I don't really need pangolin.

tgv

I switched over to Caddy recently. Nginx's non-communication about the HTTP/1.1 desync problem drove me over. I'm not going to wait for something stupid to happen or for an auditor to ask me questions nginx doesn't answer.

Caddy is really easier than nginx. For starters, I now have templates that cover the main services and their test services, and the special service that runs for an educational institution. Logging is better. Certificate handling is perfect (for my case, at least). And it has better metrics.

Now I have to figure out plugins though, because Caddy doesn't have rate limiting, and some stupid bug in Power BI makes a single user hit certain images 300,000 times per day. That's a bit of a downside.

dekobon

I did a google search for the desync problem and found this page: https://my.f5.com/manage/s/article/K30341203

This type of thing is out of my realm of expertise. What information would you want to see about the problem? What would be helpful?

tgv

A simple statement by the maintainers of nginx stating how to configure it so that a desync attack fails. That would have been helpful. Especially since the people behind the desync attack claim nginx is not invulnerable.

I've got no idea who F5 is. They seem legit, but that page didn't show up in my DDG search. But it's too late now. Water under the bridge.

thrown-0825

Definitely. I use traefik for some stuff at home and will likely swap it out now.

grim_io

I configure traefik by defining a few docker labels on the services themselves. No way I'm going back to using the horrible huge nginx config.

dizhn

https://gist.github.com/omltcat/241ef622070ca0580f2876a7cfa7...

Some guy retrofitted Caddy to use docker labels. It looks way too complicated for me, but I don't know how easy/hard it is with Traefik either.

thrown-0825

Traefik is slower AND uses more resources.

dwedge

It's also been in Apache (via mod_md) since 2018

dizhn

That is pretty early. I had no idea Apache had this. I guess not many people are talking about apache anymore.

fastball

I felt the same but switched to Caddy for my reverse proxy last year and have had a great experience.

Admittedly this was on the back of trying to use nginx-unit, which was an overall bad experience, but ¯\_(ツ)_/¯

vivzkestrel

Not gonna lie, setting up nginx and Certbot inside Docker is the biggest PITA ever. You need certificates to start the nginx server, but you need the nginx server to issue certificates: see the problem? It is made infinitely worse by a tonne of online solutions and blog posts, none of which I could ever get to work. I would really appreciate it if someone documented this extensively for Docker Compose. I don't want to use libraries like nginx-proxy, as customizing that library is another nightmare altogether.

nickjj

This is mostly why I run nginx outside of Docker, I've written about it here: https://nickjanetakis.com/blog/why-i-prefer-running-nginx-on...

I keep these things separate on the servers I configure:

    - Setting up PKI related things like DH Params and certs (no Docker)
    - My app (Docker)
    - Reverse proxy / TLS / etc. with nginx (no Docker)
This allows configuring a server in a way where all nginx configuration works over HTTPS and the PKI bits will either use a self-signed certificate or certbot with DNS validation depending on what you're doing. It gets around all forms of chicken / egg problems and reduces a lot of complexity.

Switching between self-signed, Let's Encrypt or 3rd party certs is a matter of updating 1 symlink since nginx is configured to read the destination. This makes things easy to test and adds a level of disaster recovery / reliability that helps me sleep at night.
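
A hypothetical sketch of that layout (paths are made up): nginx always reads the same stable path, and a symlink decides what's actually behind it:

    # nginx.conf always points at the same location:
    #   ssl_certificate     /etc/nginx/certs/current/fullchain.pem;
    #   ssl_certificate_key /etc/nginx/certs/current/privkey.pem;

    # switch from self-signed to Let's Encrypt by repointing one symlink:
    ln -sfn /etc/letsencrypt/live/example.com /etc/nginx/certs/current
    nginx -s reload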

This combo has been running strong since all of these tools were available. Before Let's Encrypt was available I did the same thing, except I used 3rd party certs.

bspammer

I must say this is something that showcases NixOS very well.

This is all it takes to start a nginx server. Add this block and everything starts up perfectly first time, using proper systemd sandboxing, with a certificate provisioned, and with a systemd timer for autorenewing the cert. Delete the block, and it's like the server never existed, all of that gets torn down cleanly.

  services.nginx = {
    enable = true;
    virtualHosts = {
      "mydomain.com" = {
        enableACME = true;
        locations."/" = {
          extraConfig = ''
            # Config goes here
          '';
        };
      };
    };
  };
I recently wanted to create a shortcut domain for our wedding website, redirecting to the SaaS wedding provider. The above made that a literal 1 minute job.

nojs

> I would really appreciate if someone has documented this extensively for docker compose

Run `certbot certonly` on the host once to get the initial certs, and choose the option to run a temporary server rather than using nginx. Then in `compose.yml` have a mapping from the host's certificates to the nginx container. That way, you don't have to touch your nginx config when setting up a new server.

You can then use a certbot container to do the renewals.

E.g.

  nginx:
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt

  certbot:
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"

In your nginx.conf you have

    ssl_certificate /etc/letsencrypt/live/$DOMAIN/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/$DOMAIN/privkey.pem;
And also

    location /.well-known/ {
        alias /usr/share/nginx/html/.well-known/;
    }
For the renewals.

mythz

What's the issue with nginx-proxy? We've used it for years to handle CI deploying multiple Docker Compose apps to the same server [1] without issue, with a more detailed writeup at [2].

This served us well for many years before migrating to use Kamal [3] for its improved remote management features.

[1] https://docs.servicestack.net/ssh-docker-compose-deploment

[2] https://servicestack.net/posts/kubernetes_not_required

[3] https://docs.servicestack.net/kamal-deploy

vivzkestrel

I can write a simple rate-limit block easily in raw nginx config, but look at this mess when using nginx-proxy: https://github.com/nginx-proxy/nginx-proxy/discussions/2524

vivzkestrel

The issue with nginx-proxy is that I am not in control of the nginx config: https://github.com/nginx-proxy/nginx-proxy/discussions/2523

atomicnumber3

I personally just terminate TLS at nginx, run nginx directly on the metal, and all the services are containerized behind it. I suspect if I had nginx proxying to remote nodes I'd probably just use an internal PKI for that.

dwedge

Usually the solution is to either not add SSL until you have the certs, or use self-signed/snakeoil placeholder certs to get nginx started.

Personally, I use DNS everywhere. I have a central server running dehydrated and DNS challenges every night, which then rsyncs the certs to all the servers (I'm going to replace it with Vault). I kind of like having one place to check for certs.

yjftsjthsd-h

> you need certificates to start the NGINX server but you need the NGINX server to issue certificates?

I just pre-populate with a self-signed cert to start, though I'd have to check how to do that in docker.
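
E.g. a one-off throwaway cert so nginx can start (paths are examples):

    openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
        -subj "/CN=placeholder" \
        -keyout /etc/nginx/ssl/placeholder.key \
        -out /etc/nginx/ssl/placeholder.crt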

vivzkestrel

Exactly! It all sounds easy unless you want to run stuff inside Docker, at which point there is a serious lack of documentation and resources.

yjftsjthsd-h

Okay, it seems like if you're using compose this is now doable: https://stackoverflow.com/questions/70322031/does-docker-com... So you'd make an init container that runs something like `test -f /certs/whatever.crt || openssl command to generate cert` and tell compose to run that before the real web server container.
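
A minimal sketch of that idea, assuming a shared volume for the certs (the image and names are illustrative):

    services:
      certgen:
        image: alpine/openssl
        entrypoint: >
          /bin/sh -c "test -f /certs/snakeoil.crt ||
          openssl req -x509 -newkey rsa:2048 -nodes -subj /CN=placeholder
          -keyout /certs/snakeoil.key -out /certs/snakeoil.crt"
        volumes:
          - certs:/certs

      nginx:
        image: nginx
        depends_on:
          certgen:
            condition: service_completed_successfully
        volumes:
          - certs:/etc/nginx/certs

    volumes:
      certs: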

josegonzalez

This is great. Dokku (of which I am the maintainer) has a hokey solution for this with our letsencrypt plugin, but that's caused a slew of random issues for users. Nginx sometimes gets "stuck" reloading and then can't find the endpoint for some reason. The fewer moving knobs, the better.

That said, it's going to take quite some time for this to land in stable repositories for Ubuntu and Debian, and it doesn't (yet?) have DNS challenge support, meaning no wildcards, so I don't think it'll be useful for Dokku in the short term at least.

ctxc

Hey! Great to see you here.

I tried Dokku (and still am trying!) and it is so hard to get started.

For reference:

- I've used Coolify successfully, where it required me to create a GitHub app to deploy my apps on pushes to master

- I've written GH Actions to build and deploy containers to big cloud

This page is what I get if I want to achieve the same, and it's completely a reference book approach - I feel like I'm reading an encyclopedia. https://dokku.com/docs/deployment/methods/git/#initializing-...

Contrast it with this, which is INSTANTLY useful and helps me deploy apps hot off the page: https://coolify.io/docs/knowledge-base/git/github/integratio...

What I would love to see for Dokku is tutorials for popular OSS apps and objective-driven, get-it-done style getting-started articles. I'd LOVE an article that takes me from bare metal to a reverse proxy plus a few popular apps. Because the value isn't in using Dokku, it's in using Dokku to get to that state.

I'm trying to use dokku for my homeserver.

Ideally I want a painless, quick way to go from "hey here's a repo I like" to "deployed on my machine" with Dokku. And then once that works, peek under the hood.

kocial

The problem with the big open-source companies is that they are always very late to understand and implement the most basic innovations that come out.

Caddy & Traefik did it long, long ago (half a decade ago), and after half a decade, we finally have nginx supporting it too. Great move though; finally I won't have to manually run certbot :pray:

winter_blue

Caddy did it almost a decade ago. IIRC it had some form of automatic Let’s Encrypt HTTPS back in 2016.

So Nginx is just about 9 to 10 years late. Lol

mholt

2015 in fact. A decade ago.

squigz

And the brilliant thing about open source projects is that if someone felt it was so important to have it built-in, they could have done so many years ago.

stephenr

Given that Caddy has a history that includes choices like "refuse to start if LE cannot be contacted while a valid certificate exists on disk", I'm pretty happy to keep my certificate issuance separate from my web server.

I need a tool to issue certs for a bunch of other services anyway; I don't really see how it became such a thing for people to want it embedded in their web server.

francislavoie

As we repeat every time this comes up, this was literally 8 years ago when the project was in its infancy and the project author was in the middle of exams, and it has not been true since. Caddy has been rewritten from the ground up since then, and comparing it to those old versions is dishonest.

stephenr

The concern isn't that the same code exists, or even that it has odd unintended behaviour.

The concern is that the author failed to understand why his batshit-crazy intended behaviour was a bad design from the start.

mholt

I remember you. You're just grumpy because you didn't think of it first. ;)

stephenr

Top effort dispelling the claim that you make poor decisions mate.

Someone references when you made an ass-backwards decision, and insisted you were correct; your immediate response is not any kind of explanation about how you learnt to trust other people's opinions, or even acknowledging that you got it wrong - you resort to petty childlike attempts at insult.

thaumaturgy

Good to see this. For those who weren't aware, there's been a low-effort solution with https://github.com/dehydrated-io/dehydrated, combined with a pretty simple couple of lines in your vhost config:

    location ^~ /.well-known/acme-challenge/ {
        alias <path-to-your-acme-challenge-directory>;
    }
Dehydrated has been around for a while and is a great low-overhead option for HTTP-01 renewal automation.
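
A rough sketch of the loop (domain names are examples):

    # domains.txt lists one cert per line, SANs separated by spaces:
    #   example.com www.example.com
    # register once, then renew periodically from cron:
    dehydrated --register --accept-terms
    dehydrated --cron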

Avamander

The same config also works with certbot. I've used it for years.

andrewmcwatters

This is really cool, but I find projects that have thousands of people depending on it not cutting a stable release really distasteful.

Edit: Downvote me all you want, that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

Don't consume major version 0 software, it'll bite you one day. Convince your maintainers to release stable cuts if they've been sitting on major version 0 for years. It's just lazy and immature practice abusing semantic versioning. Maintainers can learn and grow. It's normal.

Dehydrated has been major version 0 for 7 years, it's probably past due.

See also React, LÖVE, and others that made 0.n.x jumps to n.x.x. (https://0ver.org)

CalVer: "If both you and someone you don't know use your project seriously, then use a serious version."

SemVer: "If your software is being used in production, it should probably already be 1.0.0."

https://0ver.org/about.html

nothrabannosir

Distasteful by whom, the people depending on it? Surely not… the people providing free software at no charge, as is? Surely not…

Maybe not distasteful by any one in particular, but just distasteful by fate or as an indicator of misaligned incentives or something?

yjftsjthsd-h

> Distasteful by whom, the people depending on it? Surely not…

Why not?

ygjb

That's the great thing about open source. If you are not satisfied with the free labour's pace of implementing a feature you want, you can do it yourself!

andrewmcwatters

Yes, absolutely! I would probably just pick a version to fork, set it to v1.0.0 for your org's production path, and then you'd know the behavior would never change.

You could then merge updates back from upstream.

thaumaturgy

FWIW I have been using and relying on Dehydrated to handle LetsEncrypt automation for something like 10 years, at least. I think there was one production-breaking change in that time, and to the best of my recollection, it wasn't a Dehydrated-specific issue, it was a change to the ACME protocol. I remember the resolution for that being super easy, just a matter of updating the Dehydrated client and touching a config file.

It has been one of the most reliable parts of my infrastructure and I have to think about it so rarely that I had to go dig the link out of my automation repository.

hju22_-3

You've been using Dehydrated since its initial commit in December of 2015?

dspillett

Feel free to provide and support a "stable" branch/fork that meets your standards.

Be the change you want to see!

Edit to comment on the edit:

> Edit: Downvote me all you want

I don't generally downvote, but if I were going to I would not need your permission :)

> that's reality folks, if you don't release v1.0.0, the interface you consume can change without you realizing it.

I assume you meant "present" there rather than "consume"?

Anyway, 1.0.0 is just a number. Without relevant promises and a track record and/or contract to back them up breaking changes are as likely there as with any other number. A "version 0.x.x" of a well used and scrutinized open source project is more reliable and trustworthy than something that has just had a 1.0.0 sticker slapped on it.

Edit after more parent edits: or go with one of the other many versioning schemes. Maybe ItIsFunToWindUpEntitledDicksVer Which says "stick with 0.x for eternity, go on, you know you want to!".

juped

Another person who thinks semver is some kind of eldritch law-magic, serving well to illustrate the primary way in which semver was and is a mistake.

Sacrificing a version number segment as a permanent zero prefix to keep them away is the most practical way to appease semver's fans, given that they exist in numbers and make ill-conceived attempts to depend on semver's purported eldritch law-magics in tooling. It's a bit like the "Mozilla" in browser user-agents; I hope we can stop at one digit sacrificed, rather than ending up like user-agents did, though.

In other words, 0ver, unironically. Pray we do not need 0.0ver.

idoubtit

A little mistake with this release: they packaged the ngx_http_acme_module for many Linux distributions, but "forgot" Debian stable. Oldstable and oldoldstable are listed in https://nginx.org/en/linux_packages.html (packages built today) but Debian 13 Trixie (released 4 days ago) is not there.

thresh

I'm currently working on getting the Trixie packages uploaded. It'll be there this week.

As you've said Debian 13 was released 4 days ago - it takes some time to spin up the infrastructure for a new OS (and we've been busy with other tasks, like getting nginx-acme and 1.29.1 out).

(I work for F5)

triknomeister

That's Debian's fault I guess

sjmulder

How is that? These are vendor packages

stego-tech

The IT Roller Coaster in two reactions:

> Nginx introduces native support for ACME protocol

IT: “It’s about fucking time!”

> The current preview implementation supports HTTP-01 challenges to verify the client’s domain ownership.

IT: “FUCK. Alright, domain registrar, mint me a new wildcard please; one of the leading web infrastructure providers still can’t do a basic LE DNS-01 pull in 2025.”

Seriously. PKI in IT is a PITA, and I want someone to SOLVE IT without requiring AD CAs or Yet Another Hyperspecific Appliance (YAHA). If your load balancer, proxy server, web server, or router appliance can’t mint me a basic ACME certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

While we’re at it, can we also allow DNS-01 certs to be issued for intermediate authorities, allowing internally-signed certificates to be valid via said intermediary? That’d solve, like, 99% of my PKI needs in any org, ever, forever.

cnst

You could always switch to the Angie fork if you require the DNS challenge type with the wildcard domains:

https://en.angie.software/angie/docs/configuration/modules/h...

0xbadcafebee

> allowing internally-signed certificates to be valid via said Intermediary

By design, nothing is allowed to delegate signing authority, because everything under the delegation would be immediately compromised if the delegated authority were compromised. Since only CAs can issue certs, and CAs have to pass at least some basic security scrutiny, clients have assurance that the thing giving them a cert got said cert from a trustworthy authority. If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

> If your load balancer, proxy server, web server, or router appliance can’t mint me a basic Acme certificate via DNS-01 challenges, then you officially suck and I will throw your product out for something like Caddy the first chance I get.

I mean, that's a valid ask. It will become more commonplace once some popular corporate offering includes it, and then all the competitors will adopt it so they don't leave money on the table. To get the first one to adopt it, be a whale of a customer and yell loudly that you want it, then wait 18 months.

stego-tech

> If you want a non-trustworthy authority... go with a custom CA. It's intentionally difficult to do so.

This is where I get rankled.

In IT land, everything needs a valid certificate. The printer, the server, the hypervisor, the load balancer, the WAP’s UI, everything. That said, most things don’t require a publicly valid certificate.

Perhaps “intermediate CA” is the wrong phrase for what I’m looking for. Ideally it would be a device that does a public DNS-01 validation for a non-wildcard certificate, thus granting it legitimacy. It would then crank out certificates for internal devices only, which would be trusted via the root CA but without requiring those devices to talk to the internet or use a wildcard certificate. In other words, some sort of marker or fingerprint that says “This is valid because I trust the root and I can validate the internal intermediary. If I cannot see the intermediary, it is not valid.”

The thinking is that this would allow more certificates to be issued internally and easily, but without the extra layer of management involved with a fully bespoke internal CA. Would it be as secure as that? No, but it would be SMB-friendly and would help improve general security hygiene, instead of letting everything use HTTPS with self-signed certificate warnings or letting every device talk to the internet for an HTTP-01 challenge.

If I can get PKI to be as streamlined as the rest of my tech stack internally, and without forking over large sums for Microsoft Server licenses and CALs, I’d be a very happy dinosaur that’s a lot less worried about tracking the myriad of custom cert renewals and deployments.

0xbadcafebee

Well you can use an admin box and a script to request like 1000 different certs of different names through DNS-01. Copy the certs to the devices that need them. The big problem now is, you have ~5 days to constantly re-copy new certs and reboot the devices, thanks to LE's decision to be super annoying. If you want less annoying... pay for certs.

Installing custom CA certs isn't that hard once you figure out how to do it for each application. I had to write all the docs on this for the IT team, specific to each application, because they were too lazy to do it. Painful at first, but easy after. To avoid more pain later, make the certs expire in 2036, retire before then.

everfrustrated

Intermediates aren't a delegation mechanism as such. They're a way to chain up to the root's trust.

The trust is always in the root itself.

It's not an Active Directory / LDAP / tree-type mechanism where you can say "I trust things at this node level and below."

account42

> By design, nothing is allowed to delegate signing authority, because it would become an immediate compromise of everything that got delegated when your delegated authority got compromised.

Or because it would expose the web PKI for the farce it is. Some shady corporation in bumfuckistan having authority to sign certificates for .gov.uk or even just your personal website is absolutely bonkers. Certificate authority should have always been delegated just like nameserver authority is.

pointlessone

DNS challenge is complicated by the fact that every registrar has their own API. HTTP is easier for nginx because it's a single flow and nginx already does HTTP.

I'm sure nginx will get DNS support, but it's still an open question when it will support your particular registrar, if at all.

account42

> DNS challenge is complicated by the fact that every registrar has their own API

You can sidestep that by delegating the ACME keys to your own name server.

stephenr

What company that has enough infrastructure to dictate an IT Department is also only using certificates on their web servers, and thus doesn't have a standard tool for issuing/renewing/deploying certificates for *all* services that need them?

RagnarD

After discovering Caddy, I don't use Nginx any longer. Just a much better development experience.

metafunctor

I never saw it as a problem for nginx to just serve web content and let certbot handle cert renewals. Whatever happened to doing one thing well and making it composable? Fat tools that try to do everything inevitably suck at some important part.

SchemaLoad

It's kind of annoying to set up. Last I remember, certbot could try to automatically configure things for you, but unless you had the most default setup it wouldn't work. Just having nginx do everything for you seems like a better solution.

account42

Certbot can just as easily work with a directory you have nginx set up to point .well-known/acme-challenge/ to. No automatic configuration magic needed.
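
E.g. (the webroot path is whatever your location block points at):

    certbot certonly --webroot -w /var/www/acme -d example.com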

idoubtit

This optional module makes simple cases simpler.

Having distinct tools for serving content and handling certs is not a problem, and nothing changes on this side. Moreover, the module won't cover every need.

BTW, certbot is rather a "fat tool" compared to other ACME tools like lego. I've had bad experiences with certbot in the past because it tried to do too much automatically and was hard to diagnose – though I think certbot has been rewritten since then, since it no longer depends on Python's zope.

pointlessone

Nginx with certbot is annoying to set up, especially with the HTTP challenge, mostly because of a circular dependency: you need nginx to clear the challenge, and once certbot gets a cert you need to reload nginx.

I switched to Lego because it has out-of-the-box support for my domain registrar, so I could use the DNS challenge instead of HTTP. It's also a single Go binary, which is much simpler to install than certbot.

account42

There is no circular dependency, since the HTTP challenge uses unencrypted port 80 and not HTTPS. Reloading the nginx config after cert updates is also not a problem, as nginx can do that without any downtime.
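
I.e., a plain port-80 server block that references no certificate can serve the challenges on its own (names and paths are examples):

    server {
        listen 80;
        server_name example.com;

        location /.well-known/acme-challenge/ {
            root /var/www/acme;
        }
    }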

pointlessone

There’s a dependency in the nginx config: you have to specify where your certs are. So you have to have a working config before you start nginx; then you need to get certs and change the config with the cert/key location before you can HUP nginx. This is extremely brittle, especially if you have a new box or a setup where you regularly bring up clean nodes, as that’s when you can get all sorts of unexpected things happening. It’s much less brittle when you already have a cert and a working config and just renew the certificate, but not all setups are like that. I can’t even confidently say that most are.

stephenr

I wonder about the same thing. I've come to the conclusion that it's driven a lot by the management-ideal definition of DevOps: developers who end up doing ops without sufficient knowledge or experience to do it well.

aorth

Oh this is exciting! Caddy's support is very convenient and it does a lot of other stuff right out of the box which is great.

One thing keeping me from switching to Caddy in my places is nginx's rate limiting and geo module.

miggy

It seems HAProxy also added ACME/DNS-01 challenge support in haproxy-3.3-dev6 very recently. https://www.mail-archive.com/haproxy@formilux.org/msg46035.h...

owenthejumper

It added ACME in 3.2, the DNS challenge is coming next: https://www.haproxy.com/blog/announcing-haproxy-3-2#acme-pro...