HTTP/3 is everywhere but nowhere
272 comments
· March 14, 2025
djha-skin
> Really it's hard to point to any popular open-source tools that fully support HTTP/3: rollout has barely even started.
> This seems contradictory. What's going on?
IT administrators and DevOps engineers such as myself typically terminate HTTP/3 and TLS at the load balancer, then pass HTTP/1.1 (_maybe_ HTTP/2 if the service is gRPC or GraphQL) back to the backing service. This is way easier to administer and debug, and is supported by most reverse proxies. As such, there's not much need for HTTP/3 in server-side languages like Golang and Python, as HTTP/1.1 is almost always available (and faster and easier to debug!) in the datacenter anyway.
HTTP/3 and IPv6 are mobile-centric technologies that are not well suited to the datacenter. They really shine on ephemeral, spotty connections, but add a lot of overhead in a scenario where most connections between machines are static, gigabit, low-latency links.
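A minimal sketch of that pattern in Go, using only the stdlib; the HTTP/3/TLS edge itself would be nginx, HAProxy, or a cloud LB (Go's standard library doesn't speak HTTP/3), and the backend address here is hypothetical:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
)

func main() {
	// Hypothetical backing service; it only ever sees plain HTTP/1.1.
	backend, err := url.Parse("http://10.0.0.5:8080")
	if err != nil {
		log.Fatal(err)
	}

	// The stdlib reverse proxy speaks HTTP/1.1 upstream regardless of
	// what protocol the client used to reach the edge.
	proxy := httputil.NewSingleHostReverseProxy(backend)

	log.Fatal(http.ListenAndServe(":80", proxy))
}
```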
kstrauser
I'm not an expert on HTTP/3, but I vehemently disagree about IPv6. It removes tons of overhead and cruft, making it delightful for datacenter work. That, and it basically guarantees you don't have to deal with the company you just acquired having deployed their accounts on the same 10.x/16 subnet your own company uses.
cogman10
The problem I keep running into is that IPv6 support in common infrastructure is somewhat lacking.
It's always a headache to learn that some container orchestration system doesn't support IPv6. Or an HTTP client. Or a DNS resolver. Or whatever.
Not to mention the supreme annoyance I have that, to this day, my ISP still does not have IPv6 addressing available.
cogman10
Totally agree. Most of the benefit of HTTP/2 and 3 comes from minimizing TCP connections between app and LB. Once you are past the LB the benefits are dubious at best.
Most application frameworks that I've dealt with have limited capabilities to handle concurrent requests, so it becomes a minor issue to have 100+ connections between the app and the LB.
On the flipside, apps talking to the LB can create all sorts of headaches if they have even modest-sized pools. 20 TCP connections from each of 100 different apps and you are already looking at 2,000 connections and hard-to-handle TCP flooding.
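To make the pool arithmetic concrete, a sketch of what each app instance's client might look like in Go (pool sizes and the LB hostname are illustrative):

```go
package main

import (
	"net/http"
	"time"
)

// Each app instance holds a pool like this toward the LB. With 100
// instances, the LB is juggling up to 100 * 20 = 2,000 TCP connections
// before any traffic spike; HTTP/2 or /3 would multiplex those requests
// over far fewer connections.
var client = &http.Client{
	Transport: &http.Transport{
		MaxIdleConns:        20,
		MaxIdleConnsPerHost: 20, // per-upstream pool size
		IdleConnTimeout:     90 * time.Second,
	},
	Timeout: 10 * time.Second,
}

func main() {
	// Hypothetical internal LB address.
	resp, err := client.Get("http://lb.internal/healthz")
	if err == nil {
		resp.Body.Close()
	}
}
```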
Varriount
I really have to agree with the "easier to debug" part. I once had to debug a particularly nasty networking issue that was causing HTTP connections to just "stop" midway through sending data. It turned out to be a mismatch between routers over allowed packet sizes. It would have been so much worse with a non-plaintext protocol.
aoeusnth1
What kind of overhead? I’m curious if there’s data about this because I hadn’t heard that 1.1 was better for the data center.
Karrot_Kream
QUIC/HTTP3 relies on TLS. If you already have some encrypted transport, like an Istio/Envoy service mesh with mutual TLS, or a Zerotier/Tailscale/Wireguard-style encrypted overlay network, then there are no benefits to using HTTP3. Moreover, native crypto libraries tend to do a better job handling encryption anyway, so rather than wasting cycles doing crypto in Go or Node it makes more sense to let the service mesh or the overlay handle encryption and let your app just respond to cleartext requests.
urban_alien
Ok, HTTP/3 is mobile centric. But why not fallback to HTTP/2 in all other cases?
jallmann
http/1 traffic is a lot easier to inspect
CharlieDigital
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
.NET actually looks like it has decent support for any teams that are interested[0] (side note: sad that .NET and C# are not considered "major"...). There is an open source C library that they've published that seems rather far along[1]. It supports Windows, Linux[2], and Mac[3] (the latter two with some caveats).
Overall, I think for most dev teams that are not building networking focused products/platforms, HTTP/3 is probably way down the stack of optimizations and things that they want to think about, especially if the libraries available have edge cases and are too early for production. Who wants to debug issues with low-level protocol implementations when there are features to ship, releases to stabilize, and defects to fix?
[0] https://learn.microsoft.com/en-us/dotnet/fundamentals/networ...
[1] https://github.com/microsoft/msquic
[2] https://learn.microsoft.com/en-us/dotnet/fundamentals/networ...
[3] https://learn.microsoft.com/en-us/dotnet/fundamentals/networ...
hypeatei
> side note: sad that .NET and C# are not considered "major"
I've said it before on here, but the tech community severely underrates .NET today. It's not Windows-only (and hasn't been for ~8 years), plus C# is a very nice language. F# is also an option for people who like functional languages. I'd highly recommend giving it a try if you haven't already.
shermantanktop
.NET suffers from the long-lasting reputational taint of Microsoft. It was seen as the sworn enemy of open source and Linux, and for good reason.
Today’s MS is not what it was back then. But long memories are not a bad thing, really. If .NET suffers a bit from some unfair perception, perhaps that can remind MS and others what happens when you take an aggressively adversarial approach.
nolist_policy
I would say that .NET is the best example that Microsoft has not changed: https://isdotnetopen.com/
CharlieDigital
The ironic thing? GitHub, VS Code, and TypeScript are all Microsoft products.
thayne
More than just that it came from MS.
For a long time, .NET was completely proprietary, and only ran on Windows.
Now it is open source and cross platform, but it is still fighting the momentum of being seen as Windows-only.
mixmastamyk
They may be much less Linux-hostile today, but are still plenty user-hostile. Not to mention their lackadaisical security record.
OkGoDoIt
I've been a .NET developer since it launched, but recently I find myself using it less and less. I'm so much more productive with LLM assistance, and they aren't very good at C#. (Seriously, I thought AI coding was all exaggeration until I switched to Python and realized what the hype was all about; these language models are just so much more optimized for Python.)
Plus now Microsoft is being a bully when it comes to Cursor and the other VS Code forks, and won't let the .NET extensions work. I jumped through a lot of hoops but they keep finding ways to break it. I don't want an adversarial relationship with my development stack.
I miss C# and I really don’t like Python as a language, but I don’t see myself doing a lot more C# in the future if these trends continue.
victor106
Can .NET produce cross-platform libraries/executables like Go does? With Go I can develop on Mac and create executables for Windows and Linux.
CharlieDigital
Yes, it can.
I work on .NET and work on Mac (hate the OS, but the hardware and battery life are way better).
Last startup, we shipped AWS t4g Arm64 and GCP x64 Linux containers. A few devs started on Windows (their preferred platform), but we all ended up on M1 MacBook Pros using a mix of Rider and VS Code.
There's a common misconception conflating the old .NET Framework with the new .NET (e.g. .NET 9); MS's naming is terrible. C#/.NET has been capable of cross-platform binaries for close to a decade now.
I have a bit more info here: https://typescript-is-like-csharp.chrlschn.dev/pages/intro-a...
homebrewer
If your code does not rely on native libraries, or you're fine with shipping multiple copies for different operating systems, a single build works everywhere with dotnet installed.
Or you can cross-compile and run without having dotnet on the target system; I do it from Linux to all three platforms all the time, and it's pretty seamless. The application can be packaged into a single binary (similar to Go), or as a bunch of files which you can then package up into a zip file.
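For anyone who hasn't tried it, the workflow really is a single command per target, similar in spirit to Go's; a sketch assuming a recent .NET SDK and runtime identifiers from Microsoft's RID catalog:

```sh
# Go: pick the target OS/arch via environment variables
# (pure-Go code only; cgo needs a target C toolchain)
GOOS=windows GOARCH=amd64 go build -o app.exe .
GOOS=darwin  GOARCH=arm64 go build -o app-mac .

# .NET: pick a runtime identifier (RID) and publish a self-contained,
# single-file binary; no dotnet runtime needed on the target machine
dotnet publish -c Release -r win-x64   --self-contained -p:PublishSingleFile=true
dotnet publish -c Release -r osx-arm64 --self-contained -p:PublishSingleFile=true
```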
artimaeis
I'm a dabbler in Go, far from an expert. But I'm not familiar with a capability to use, say, native macOS platform libraries from a Go app that I'm compiling on Windows/Linux without using a VM of some sort to compile it. If that's possible I'd love to learn more.
dismalaf
Mono's been on Linux for like 20 years, maybe longer... C# is like Java; it can run basically anywhere.
parineum
They also severely underrate its actual usage. For a non-"major" language, there sure are a lot of jobs out there.
.NET ain't hip.
briandear
.NET ain’t hip because many of the shops that use it are dinosaurs in processes and culture.
shepherdjerred
Why would I use C# over any other language though?
jayd16
C# has a strong high-level feature set and lower-level tools to keep performance up. The language itself is well designed and consistent. It's able to mix functional and OO features without being dogmatic, leading to better dev-x overall.
ASP is actually very good these days and feels cleaner than Spring Boot. There's less choice but the available choices are good. It has arguably the best gRPC implementation. It's just a nice experience overall.
CharlieDigital
It's very productive and kinda nice.
Especially for big backend APIs: https://typescript-is-like-csharp.chrlschn.dev/
hu3
LINQ alone is unmatched.
qingcharles
Indeed. And I would think you can use Microsoft's free reverse proxy, YARP, in front of an app (on any platform) that doesn't natively support HTTP/3?
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/s...
inejge
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
.NET omission notwithstanding, one of the languages in the list is not like the others: Rust has a deliberately minimal standard library and doesn't include HTTP at all. I don't follow Rust HTTP/3 efforts closely, but there are at least two actively developed libraries: quiche and quinn.
Macha
The official Python stance on using the standard library HTTP client is also "are you sure?" and their stance on using the standard library HTTP server is "don't".
https://docs.python.org/3/library/http.client.html#module-ht...
https://docs.python.org/3/library/http.server.html#module-ht...
xpressvideoz
> side note: sad that .NET and C# are not considered "major"...
Even Microsoft does not use C# for their new projects. See the new TypeScript compiler that is being rewritten in Go. So I think it is safe to say C# is indeed a minor language.
CharlieDigital
> So I think it is safe to say C# is indeed a minor language
That's not really the case; the StackOverflow survey[0] shows C# (27.1%) right behind Java (30.3%) and well ahead of Go (13.5%), Rust (12.6%), Kotlin (9.4%), Ruby (5.2%), and Scala (2.6%). If we exclude HTML/CSS, Bash/Shell, and SQL, C# would be #5 in actual languages used over the past year by devs in this survey.
You get the same result from scraping job postings: https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...
1. JS/TS (note these two are collapsed)
2. Python
3. Java
4. C#
Two completely separate sources with the same output...
> See the new TypeScript compiler that is being rewritten in Go
If they had started from scratch, Anders mentioned the considerations would be different. But because they had an existing body of code that was not class based, it would be more of a rewrite (C#) versus a refactor (Go). A lot of folks read the headline without actually reading Anders' comments and reasoning.
C# is good for many things, in particular application backends and game engines (both Godot and Unity), and not optimal for other things, like serverless functions. Each language has a place, and Go and Python are certainly better for CLI tools, for example.
bheadmaster
> StackOverflow survey[0] shows C# (27.1%) right behind Java (30.3%)
Can we rule out sample bias here? After all, Jon Skeet [0] is an important part of Stack Overflow's C# community.
It might just be the case that C# and Java developers use Stack Overflow more than users of other languages.
[0] https://toggl.com/blog/save-princess-8-programming-languages
Teckla
> But because they had an existing body of code that was not class based, it would be more of a rewrite (C#) versus a refactor (Go).
I don't understand this reasoning at all, and I'm hoping you can shed some light on it.
As far as I know, C# supports static methods. Thus, using OO in C# would not have been required, would it?
I feel like I'm missing something here.
briandear
I've never filled out a Stack Overflow survey. I wouldn't say Stack Overflow is statistically representative of what's being used; it's statistically representative of people that use Stack Overflow. 10 years ago SO was my go-to. Now, I barely notice it; it seems very outdated in many respects.
0x457
I never understood SO as a measurement tool for anything but people who can't read docs.
jabart
The interviews with the TypeScript dev doing the rewrite will tell you why. Switching their compiler to Go was a quick transition since Go's style matched their current JS codebase. The dev also wanted to use Go, and to keep the functional programming style. It would have required more work to switch from functional to the OOP style that C# has. The dev also didn't want to learn F#. Nothing about C#, just a personal decision with the least amount of work to get to a beta.
kevinmershon
It's pretty true from recent experience. I've recently started rewriting a C#-based desktop/window streaming tool because of how weak the support is across the board for C#. Microsoft abandoned WinRTC; SIPSorcery is one guy and is missing VP9, HEVC, and AV1 support. And for fancier stuff like using compute shaders for color space conversion, SharpDX is constantly referenced by ChatGPT and MS docs, yet it's archived and unmaintained as well. I ended up using the media streams VideoFrame class, but it and two other classes required to interact with it have unpreventable thread and memory leaks built into the WinRT implementations themselves 4+ years ago. Good times.
All of the above was easy to implement in Rust.
kragen
This is an interesting point I hadn't thought of when I saw the announcement of the new TypeScript compiler. It might be overstating the case to say that C# is indeed a minor language, but it's thought-provoking that it wasn't Microsoft's automatic choice here, the way it is for some all-Microsoft in-house IT shops.
CharlieDigital
Microsoft themselves ship on a variety of platforms.
It's more about right tool for the right job.
A good example is the Azure CLI; it's Python. Microsoft is also a big contributor in the Python scene[0].
I don't think it's surprising at all that they didn't use C# to write a compiler for TS.
They have internal champions for Rust[1]
I'd say Microsoft is possibly one of the most diverse shops when it comes to tech selection.
[0] https://devblogs.microsoft.com/python/supporting-the-python-...
[1] https://www.theregister.com/2022/09/20/rust_microsoft_c/
troupo
It's not thought-provoking if you care to spend 5 minutes and read/listen to the reasons they provided.
jayd16
Go folks are going to be holding up this Typescript decision for years, aren't they...
ralferoo
For me, I think the biggest issue with large-scale deployment of HTTP/3 is that it increases the surface area of potentially vulnerable code that needs to be kept patched and maintained. I'd far rather have the OS provide a verified safe socket layer, and a dynamically linked SSL library, that can be easily updated without any of the application layer needing to worry about security bugs in the networking layer.
Additionally, I'd posit that for most client applications, a few extra ms of latency on a request isn't really a big deal. Sure, I can imagine applications that might care, but I can't think of any applications I have (as a developer or as a user) where I'd trade more complexity on the networking layer for potentially saving a few ms per request, or more likely just on the first request.
lemagedurage
A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
For all the CPU optimisations we're doing, cutting out a 50ms roundtrip for establishing a HTTP connection feels like a great area to optimize performance.
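Some rough arithmetic, assuming a 50 ms RTT, TLS 1.3, and no session resumption on either side:

```
TCP + TLS 1.3 (HTTP/1.1 or /2):
  TCP handshake        1 RTT
  TLS handshake        1 RTT
  request/response     1 RTT
  first byte after     3 x 50 ms = 150 ms

QUIC (HTTP/3):
  combined transport + TLS handshake   1 RTT
  request/response                     1 RTT
  first byte after                     2 x 50 ms = 100 ms
```

Against TLS 1.2 (two handshake round trips) or with QUIC 0-RTT resumption, the gap widens toward the three round trips mentioned above.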
motorest
> A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
That's a valid concern. That's the baseline already though, so everyone is already living with it without much in the way of a concern. It's a nice-to-have.
The problem OP presents is what the tradeoffs are for that nice-to-have. Are security holes an acceptable tradeoff?
IgorPartola
I routinely have concerns about lag on mobile. It sucks to have to wait for 10 seconds for a basic app to load. And that adds up over the many many users any given app or website has.
fragmede
Most people still use Google, and so they're living the fast HTTP/3 life, switching to a slower protocol only when interacting with non-Google/Amazon/MSFT properties. If your product is a competitor but slower or inaccessible, users are going to bounce off your product and not even be able to tell you why.
croes
And then the web app eats all that saved ms with ease.
the8472
Isn't 5G supposed to solve the mobile latency issue?
nine_k
The connection to your local tower can have a negligible latency. The connection all the way to the datacenter may take longer. Then, there is congestion sometimes, e.g. around large gatherings of people; it manifests as latency, too.
TheRealPomax
And yet, compared to the time you're waiting for that masthead JPEG to load, plus an even bigger "react app bundle", it's also completely irrelevant.
HTTP/3 makes a meaningful difference for machines that need to work with HTTP endpoints, which is what Google needed it for: it will save them (and any other web based system similar to theirs) tons of time and bandwidth, which at their scale directly translates to dollars saved. But it makes no overall difference to individual humans who are loading a web page or web app.
There's a good argument to be made about wasting round trips and HTTP/3 adoption fixing that, but it's not grounded in the human experience, because the human experience isn't going to notice it and go "...did something change? everything feels so much faster now".
charleslmunger
Deploying QUIC led to substantial p95 and p99 latency improvements when I did it (admittedly a long time ago) in some widely used mobile apps. At first we had to correct our analysis for connection success rate because so many previously failing connections now succeeded slowly.
It's a material benefit over networks with packet loss and/or high latency. An individual human trying to accomplish something in an elevator, parking garage, or crowded venue will care about a connection being faster with a greater likelihood of success.
celsoazevedo
Almost every optimization is irrelevant if we apply the same reasoning to everything. Add all savings together and it does make a difference to real people using the web in the real world.
cyanmagenta
> I'd far rather have the OS provide a verified safe socket layer
There is work going on right now[1] to implement the QUIC protocol in the Linux kernel, which gets used in userspace via standard socket() APIs like you would with TCP. Of course, who knows if it'll ultimately get merged in.
eptcyka
Yea, but does the kernel then also do certificate validation for you? Will you pin certs via setsockopt? I think QUIC and TLS are wide enough attack surfaces to warrant isolation from the kernel.
cyanmagenta
> but does the kernel then also do certificate validation for you
No, the asymmetric cryptography is all done in userspace. Then, post-handshake, symmetric cryptography (e.g., AES) is done in-kernel. This is the same way it works with TCP if you’re using kTLS.
SahAssar
The kernel already does TLS, but the handshake happens in user-space.
XorNot
The problem is that everyone rolling their own certificate stack is lunacy in this day and age. We need crypto everywhere, and it should be a lot easier to configure how you want: the kernel is a great place to surface the common interface for, say, "what certificates am I trusting today?"
The 10+ different ways of specifying a custom CA are a problem I can't wait to see the back of.
jeroenhd
Experiencing the internet at 2000ms latency every month or so thanks to dead spots along train tracks, I can say the latency improvements quickly become noticeable.
HTTP/3 is terrible for fast connections (with download speeds on gigabit fiber notably capped) and great for bad ones (where latency + three way handshakes make the web unusable).
Perhaps there should be some kind of addon/setting for the browser to detect the quality of the network (doesn't it already for some JS API?) and dynamically enable/disable HTTP/3 for the best performance. I can live with it off 99% of the time, but those rare times I'm dropped to 2G speeds, it's a night and day difference.
AnthonyMouse
> I'd far rather have the OS provide a verified safe socket layer, and a dynamically linked SSL library, that can be easily updated without any of the application layer needing to worry about security bugs in the networking layer.
Then you're trying to rely on the OS for this when it should actually be a dynamically linked third party library under some open source license.
Trying to get the OS to do it runs into one of two problems. Either each OS provides its own interface, and then every application has to be rewritten for each OS and developers don't want to deal with that so they go back to using a portable library, or the OS vendors all have to get together and agree on a standard interface, but then at least Microsoft refuses to participate and that doesn't happen either.
The real problem here is that mobile platforms fail to offer a package manager with the level of dependency management that has existed on Linux for decades. The way this should work is that you open Google Play and install whatever app that requires a QUIC library, it lists the QUIC library as a dependency, so the third party open source library gets installed and dynamically linked in the background, and the Play Store then updates the library (and therefore any apps that use it) automatically.
But what happens instead is that all the apps statically link the library and then end up using old insecure versions, because the store never bothered to implement proper library dependency management.
kelnos
> a few extra ms of latency on a request
That's not what it is, though. The graph embedded in the article shows HTTP/3 delivering content 1.5x-2x faster than HTTP/2, with differences in the hundreds of ms.
Sure, that's not latency, but consider that HTTP/3 can do fewer round trips. Round trips are often what kill you.
Whether or not this is a good trade off for the negatives you mention is still arguable, but you seem to be unfairly minimizing HTTP/3's benefits.
xxs
>saving a few ms per request, or more likely just on the first request.
That's not a given either, as UDP is normally not prioritized under congestion.
eptcyka
But that will change, as more and more clients will rely on UDP on port 443.
AnthonyMouse
It's also a poor congestion control practice to begin with. The main categories of UDP traffic are DNS, VoIP and VPNs. DNS is extremely latency sensitive -- the entirety of what happens next is waiting for the response -- so dropping DNS packets is a great way to make everything suck more than necessary. VoIP often uses some error correction and can tolerate some level of packet loss, but it's still a realtime protocol and purposely degrading it is likewise foolish.
And VPNs are carrying arbitrary traffic. You don't even know what it is. Assigning this anything less than "normal" priority is ridiculous.
In general middleboxes should stop trying to be smart. They will fail, will make things worse, and should embrace being as dumb and simple as possible. Don't try to identify traffic, just forward every packet you can and drop them at random when the pipe is full. The endpoints will figure it out.
plopz
The big one is multiplayer games. UDP is preferred, and trying to work with WebRTC is awful.
sennalen
Connection migration sounds like a security nightmare
AnthonyMouse
How is it any worse than session resumption from a different IP address?
fresh_broccoli
The slow adoption of QUIC is the result of OpenSSL's refusal to expose the primitives needed by QUIC implementations that already existed in the wild. Instead, they decided to build their own NIH QUIC stack, which after all these years is still not complete.
Fortunately, this recently changed and OpenSSL 3.5 will finally provide an API for third party QUIC stacks.[1] It works differently than all the other existing implementations, as it's push-based instead of pull-based. It remains to be seen what it means for the ecosystem.
jsheard
It's pretty glaring that nginx still doesn't have production-ready HTTP/3 support despite being a semi-commercial product backed by a multi-billion-dollar corporation. F5 is asleep at the wheel.
LinuxBender
Out of curiosity have F5 added any new modules since they acquired Nginx?
pas
acquisition finished in 2019
there are quite a lot of features, but it's hard to say what constitutes a new module. (well, there's "Feature: the ngx_stream_set_module." so maybe yes?)
LinuxBender
One would probably have to go through the git logs [1], so I guess I should do that after getting some food in the belly to answer my own question. It's a big log. Interesting side note: it appears all commits from Maxim stopped in January 2024. Must be all F5 now.
xendo
Are there any viable nginx alternatives that support HTTP3 and are mature for prod workflows?
xg15
I feel another way to look at it is that there is a growing divide between the "frontend/backend developer" view of an application and the "ops/networking" view; or, put differently, HTTP/2 and HTTP/3 are not really "application layer" protocols anymore. They're more on the level of TCP and TLS and are perceived as such.
As far as developers are concerned, we still live, have always lived and will always be living in a "plaintext HTTP 1.1 only" world, because those are the abstractions that browser APIs and application servers still maintain. All the crazy stuff in between - encryption, CDNs, changing network protocols - are just as abstracted away as the different hops of an IP packet and might just as well not exist from the application perspective.
jauntywundrkind
There's some cool stuff and capabilities here. It's surprising to me that uptake has been so slow.
Node.js just posted an update on the state of QUIC, which underlies HTTP/3 and has had some work over the years. They're struggling with OpenSSL being slow to get adequate API support going. There are forks with working QUIC support, but the prospect of switching is somewhat onerous.
Really unfortunate; so much of this work has been done for Node and there's just no straightforward path forwards.
sureIy
I'd love to see that OpenSSL fork drama on the main page of HN. Do you know where this was discussed?
billywhizz
there's a pretty good summary of things with links from Daniel Stenberg (the curl guy) here: https://daniel.haxx.se/blog/2021/10/25/the-quic-api-openssl-...
jillesvangurp
My observation is that anything based on public cloud providers using their load balancers is basically using HTTP/3 out of the box. This benefits people using browsers that support it (essentially all desktop and mobile browsers). And since it falls back to plain HTTP/1.1, there are no downsides for others.
Sites that use their own apache/nginx/whatever servers are not benefiting from this and need to do work. And this is of course not helped by the fact that HTTP/3 support in many servers is indeed still lacking. Which at this point should be a strong hint to maybe start considering something more modern and up to date.
HTTP clients used for API calls between servers, which may already use pipelining and connection reuse, benefit less from HTTP/3. So fixing HTTP clients to support HTTP/3 is less urgent, though there are probably some good reasons to support it anyway. Likewise there is little benefit in ensuring communication between microservices in e.g. Kubernetes happens over HTTP/3.
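The fallback mechanism is worth spelling out: the first response arrives over TCP (HTTP/1.1 or 2) and advertises an HTTP/3 endpoint via the Alt-Svc header, which the browser tries on subsequent connections. A sketch in Go; the stdlib covers only the TCP side (actually serving h3 needs a library such as quic-go), and the certificate paths are placeholders:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Advertise an HTTP/3 endpoint on UDP 443; h3-capable browsers
		// will try it on the next connection and silently fall back to
		// TCP if QUIC is blocked.
		w.Header().Set("Alt-Svc", `h3=":443"; ma=86400`)
		fmt.Fprintln(w, "hello over", r.Proto)
	})
	// This stdlib server only handles the HTTP/1.1 and HTTP/2 side.
	log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", handler))
}
```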
JimDabell
I’ve been using niquests with Python. It supports HTTP/3 and a bunch of other goodies. The Python ecosystem has been kind of stuck on the requests package due to inertia, but that library is basically dead now. I’d encourage Python developers to give niquests a try. You can use it as a drop-in replacement for requests then switch to the better async API when you need to.
https://niquests.readthedocs.io/en/latest/
Traditionally these types of things are developed outside the stdlib for Python. I'm not sure why they draw the line where they do between urllib and something like niquests, but it does sometimes feel like the batteries-included nature of Python is a little neglected in some areas. A good HTTP library seems like it belongs in the stdlib.
mixmastamyk
requests dead? The reason given for not including it in the stdlib was so it could evolve more rapidly. Back then the protocol layer was handled/improved by urllib3.
JimDabell
It’s not evolving at all:
> Requests is in a perpetual feature freeze, only the BDFL can add or approve of new features. The maintainers believe that Requests is a feature-complete piece of software at this time.
> One of the most important skills to have while maintaining a largely-used open source project is learning the ability to say “no” to suggested changes, while keeping an open ear and mind.
> If you believe there is a feature missing, feel free to raise a feature request, but please do be aware that the overwhelming likelihood is that your feature request will not be accepted.
— https://requests.readthedocs.io/en/latest/dev/contributing/#...
antisthenes
It takes a very special case of a person to complain about a feature-complete piece of software not evolving fast enough.
artyom
Every single project mentioned in the article is to some extent open source and/or community driven.
So nobody considered HTTP/3 interesting enough to rush to add support for it. It'll get there, but fast? I don't think so; see IPv6.
Also, nobody considered HTTP/3 worth paying maintainers to add support for it.
kccqzy
The comparison with IPv6 is interesting. IPv6 isn't mainly driven by open source or community. It is driven by the needs of large corporations, including both ISPs and tech companies. ISPs like T-mobile wanting to run an IPv6-only backbone network, and tech companies like Apple forcing every app in the App Store to work in IPv6-only mode (DNS64+NAT64). New operating system levels features for IPv6 are often proposed by big tech companies and then implemented eagerly by them; see for example DHCP option 108.
In a sense the need for IPv6 is driven by corporates just like that for HTTP/3.
elcritch
IPv6 always seemed to me to be driven by a certain class of purist networking geeks. Then some corporations started getting on board like you said, but many couldn't care less.
FuriouslyAdrift
The largest use of IPv6 is in mobile (cell) networks. When they (the standards bodies) effectively killed IP block mobility (provider-independent netblocks), they effectively killed its adoption everywhere else.
I work in the networking space, and outside of dealing with certain European subsidiaries, we don't use IPv6 anywhere. It's a pain to use, and the IPv6 stacks on equipment (routers, firewalls, etc.) are nowhere near the quality, affordability, and reliability of their IPv4 stacks.
nine_k
The exhaustion of IPv4 address pool was easy to predict even in 2000, just by extrapolation of the growth curve.
Then came IP telephony backbone and the mobile internet, topped up with the cloud, and the need became acute. For the large corporations involved, at least.
kccqzy
Oh many purist networking geeks joined large corporations so that these corporations began to push IPv6 in a direction set by the geeks. They understood that as independent geeks they have essentially no say in the evolution of IPv6. My favorite example here is Android refusing to support stateful DHCPv6; it's clear that it's being pushed by purist networking geeks inside Google.
pas
Wanting p2p to work (without quixotic NAT hole-punching) is purism?
vlovich123
Ummm… Google invented QUIC, pushed it into Chrome, and shuttled it through the IETF to be ratified as a standard. Some of the large OSS projects are maintained by large companies (e.g. quiche is by Cloudflare), and Microsoft has MsQuic, which you can link against directly or just use the kernel-mode version built into the OS since Windows 11. The need for QUIC is actually even more driven by corporates than IPv6 was: IPv6 addressed a comparatively small pain point next to better reaching customers over high-latency network connections.
jsheard
Nginx (F5) and Go (Google) are hardly scrappy open source projects with limited resources. The former is semi-commercial: you can pay for Nginx and still not have stable HTTP/3 support. Google was one of the main drivers of the HTTP/3 spec and has supported it both in Chromium and on their own cloud for years, but for whatever reason they haven't put the same effort into Go's stdlib.
arccy
It's in progress: QUIC is in testing in http://pkg.go.dev/golang.org/x/net/quic and HTTP/3 is being implemented: https://github.com/golang/go/issues/70914
Since Go has strong backwards compatibility guarantees, they're unlikely to commit to APIs that may need to change in the standard library.
Orygin
The backwards compatibility guarantees are for the language and not the standard library. They won't make breaking changes willy-nilly, but it can and has happened for the std.
FuriouslyAdrift
I'd go with HAProxy over Nginx any day. It's far more robust and more capable. They've had QUIC and HTTP/3 support since 2022.
hylaride
99% of the benefit of HTTP/3 is on distributed web serving where clients are connecting to multiple remote ends on a web page (which, let's be honest, is mostly used for serving ads faster).
Why would the open source community prioritize this?
not_a_bot_4sho
> see IPv6
"We'll get to IPv6 after we finish IPv5"
jiehong
Caddy does support HTTP/3 in production now, and so can be used as a reverse proxy.
The HTTP client libraries almost everywhere do lack support, though.
ddon
We switched to Caddy in multiple projects and are really happy with it... Certificate generation feels like magic and HTTP/3 works great as well. Config files are much smaller and easier to read too!
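For a sense of scale, a complete Caddyfile for a TLS-terminating reverse proxy is about this long (hostname and upstream are placeholders); recent Caddy releases provision certificates automatically and serve HTTP/3 by default:

```
example.com {
	reverse_proxy localhost:8080
}
```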
The most popular internet protocols are 100% text. This fact is indisputable. I doubt that the guys at Google even know this, or know why...