HTTP/3 is everywhere but nowhere
503 comments
· March 14, 2025 · djha-skin
kstrauser
I'm not an expert on HTTP/3, but vehemently disagree about IPv6. It removes tons of overhead and cruft, making it delightful for datacenter work. That, and basically guaranteeing you don't have to deal with the company you just acquired having deployed their accounts with the same 10/16 subnet your own company uses.
cogman10
The problem I keep running into is that IPv6 support in common infrastructure is somewhat lacking.
It's always a headache to learn that some container orchestration system doesn't support IPv6. Or an http client. Or a DNS resolver. Or whatever.
Not to mention the supreme annoyance I have that, to this day, my ISP still does not have IPv6 addressing available.
p_l
A major reason for that is BSD sockets and their leaky abstraction, which results in protocol details being hardcoded in application code.
For a good decade a lot of software had to be slowly patched in every place that made a socket to add v6 support, and sometimes multiple times because getaddrinfo didn't reach everyone early enough.
frollogaston
Yep, it's tragic because it all stems from unforced differences vs ipv4. The design was reasonable, but with perfect hindsight, it needed to be different. They needed to keep the existing /32s and just make the address field bigger, despite the disadvantages.
"Everywhere but nowhere" is sorta how I'd describe ipv6. Most hardware and lower-level software supports it, so obviously it wasn't impossible to support a new protocol, but it's not being used.
afiori
I wonder if something like an HTTP connection upgrade would have been possible for IPv4-to-IPv6. Imagine machine 1 with addresses 1.1.1.1 and 11::11, and machine 2 with addresses 2.2.2.2 and 22::22.
When machine 2 receives a packet from 1.1.1.1 at 2.2.2.2, it sends an IPv6 ping-like packet to the IPv4-mapped address ::ffff:1.1.1.1 saying something like "hey, you can also contact me at 22::22", and if machine 1 understands, it can try to use the new address for the following packets.
I can see how it would be hard to secure this operation.
cowsandmilk
For those building on AWS with VPC per service and using PrivateLink for connections between services, the whole IP conflict problem just evaporates. Admittedly, you’re paying some premiums to Amazon for that convenience.
yyyk
>That, and basically guaranteeing you don't have to deal with the company you just acquired having deployed their accounts with the same 10/16 subnet your own company uses.
I always found that to be a desperate talking point. 'Prepare your network for the incredibly rare event where you intend to integrate directly' (didn't anyone hear of network segmentation?). It makes a lot more sense to worry about the ISP unilaterally changing your prefix - something that can only happen in IPv6.
bigstrat2003
> It makes a lot more sense to worry about the ISP unilaterally changing your prefix - something that can only happen in IPv6.
ISPs unilaterally change your DHCP address on IPv4 all the time. And in any situation where you would have a static address for IPv4, your ISP should have no problem giving you a static v6 prefix. This argument makes no sense at all.
einpoklum
> It removes tons of overhead and cruft
Can you elaborate? Here, or with links? What kinds of overhead and cruft?
kstrauser
* The header is a fixed length.
* You can't fragment packets.
* The redundant checksum header was removed.
* No more private addressing (unless you're a glutton for punishment).
* No more NAT (see above).
* Simpler routing.
* Doesn't require DHCP.
It benefits hugely from the lessons learned with IPv4.
Varriount
I really have to agree with the "easier to debug" part. I once had to debug a particularly nasty networking issue that was causing HTTP connections to just "stop" midway through sending data. It turned out to be a confusing mismatch between routers over allowed packet sizes. It would have been so much worse with a non-plaintext protocol.
cogman10
Totally agree. Most of the benefit of HTTP 2/3 comes from minimizing TCP connections between app->lb. Once you are past the lb the benefits are dubious at best.
Most application frameworks that I've dealt with have limited capabilities to handle concurrent requests, so it becomes a minor issue to have 100+ connections between the app and the lb.
On the flipside, apps talking to the LB can create all sorts of headaches if they have even modest-sized pools. 20 TCP connections from 100 different apps and you're already looking at a hard-to-handle flood of TCP connections.
raggi
HTTP/3 is not a mobile-centric technology. Yes, there was a lot of discussion of packet pacing and its implications for mobile in early presentations on QUIC, but that's not the same as "centric"; that's one application of the behavior. Improved congestion control, reduced control plane cost, and removal of head-of-line blocking behaviors have significant value in data center networks as well. How often do you work with services that have absolutely atrocious tail latencies and wide gaps between median and mean latencies? How often is that a side effect of HTTP/TCP semantics?
IPv6 is the same deal. I sort of understand where the confusion comes from around QUIC, because so much was discussed about mobile early on and it just got parroted heavily in the rumor mill, but IPv6? That long predates the mobile explosion, and again, it helps as an application, but ascribing it as the only application because of its applicability somewhere else doesn't hold up to basic scrutiny. The largest data centers these days are pushing up against a whole v4 IP class (I know, classes are dead, sorta) in hardware-addressable compute units - a trend that is not slowing.
We did this with quic data center side: https://tailscale.com/blog/living-in-the-future#the-c10k-pro... and while it might be slightly crazy in and of itself, it's far more practical with multiplexing than with a million excessively sized buffers competing over pools and so on.
There is absolutely value to quic and ipv6 in the data center, perhaps it's not so useful for traditionally shaped and sized LAMP stacks, but you can absolutely make great use of these at scale and in modern architectures, and they open a lot of doors/relax constraints in the design space. This also doesn't mean everyone needs to reach for them, but I don't think they should be discarded or ascribed limited purpose so blithely.
djha-skin
I will acknowledge that truly massive datacenter deployments can and do use these technologies to good effect, but I haven't worked at any of those kinds of places in the last fifteen years, and I suspect many (most?) of my colleagues haven't either. At anything smaller than a /8, they usually don't add much and more often than not just get in the way.
KaiserPro
HTTP/3 is a patch that unfucked some stupid design choices from HTTP/2 [1]
However, IPv6 is perfectly suited to the datacentre. So long as you have proper infrastructure set up (i.e. properly functioning DNS), IPv6 is a godsend for simplifying medium-scale infra.
In fact, if you want to get close to a million hosts, you need ipv6.
[1] Me and http/2 have beef; TCP multiplexing was always going to be a bad idea, but idealism got in the way of testing.
jiggawatts
> pass back HTTP 1.1 ... to the backing service.
Sure, but now you've lost some of the benefits of HTTP/3, such as the header compression and less head-of-line blocking. To some degree the load balancer can solve this by using multiple parallel HTTP 1.1 streams, but in practice I've seen pretty bad results in many common scenarios.
djha-skin
No one cares about those "benefits" _on a gigabit line_. The head of your line is not blocked at such speeds, believe you me. Same thing with compression. Like, why. Other than to make it harder to debug?
jiggawatts
I had head-of-line-blocking issues recently on a 10 Gbps data centre link!
HTTP client packages often use a small, fixed number of connections per domain. So if you have two servers talking to each other and there are slow requests mixed in with short RPCs, the latter can sit in a queue for tens of seconds.
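To make that concrete, here's a toy Go sketch of the queueing effect, using only the standard library (endpoints and timings are made up): cap the client at two connections per host, occupy both with slow requests, and the short RPC sits in line behind them.

    // Toy reproduction of the queueing described above.
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/http/httptest"
        "sync"
        "time"
    )

    func main() {
        srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.URL.Path == "/slow" {
                time.Sleep(2 * time.Second) // a long-running request
            }
            fmt.Fprint(w, "ok")
        }))
        defer srv.Close()

        // Small fixed connection limit, like many HTTP client packages default to.
        client := &http.Client{Transport: &http.Transport{MaxConnsPerHost: 2}}

        var wg sync.WaitGroup
        for i := 0; i < 2; i++ { // saturate the pool with slow calls
            wg.Add(1)
            go func() {
                defer wg.Done()
                if resp, err := client.Get(srv.URL + "/slow"); err == nil {
                    io.Copy(io.Discard, resp.Body) // drain so the conn can be reused
                    resp.Body.Close()
                }
            }()
        }
        time.Sleep(100 * time.Millisecond) // let the slow calls grab both connections

        start := time.Now()
        if resp, err := client.Get(srv.URL + "/fast"); err == nil { // the "short RPC"
            resp.Body.Close()
        }
        fmt.Println("fast request waited", time.Since(start)) // ~2s, not ~0
        wg.Wait()
    }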
aoeusnth1
What kind of overhead? I’m curious if there’s data about this because I hadn’t heard that 1.1 was better for the data center.
Karrot_Kream
QUIC/HTTP3 relies on TLS. If you already have some encrypted transport, like an Istio/Envoy service mesh with mutual TLS, or a Zerotier/Tailscale/Wireguard-style encrypted overlay network, then there are no benefits to using HTTP3. Moreover, native crypto libraries tend to do a better job handling encryption anyway, so rather than wasting cycles doing crypto in Go or Node it makes more sense to let the service mesh or the overlay handle encryption and let your app just respond to clear requests.
dathinab
> let the service mesh or the overlay handle encryption
which could use HTTP/2 or HTTP/3
and HTTP/2 for the localhost (or unix socket) gateway<->app step to provide e.g. WebTransport support
shmerl
How is IPv6 not suitable?
CharlieDigital
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
.NET actually looks like it has decent support for any teams that are interested [0] (side note: sad that .NET and C# are not considered "major"...). There is an open source C library that they've published that seems rather far along [1], with support for Windows, Linux [2], and Mac [3] (the latter two with some caveats).
Overall, I think for most dev teams that are not building networking focused products/platforms, HTTP/3 is probably way down the stack of optimizations and things that they want to think about, especially if the libraries available have edge cases and are too early for production. Who wants to debug issues with low-level protocol implementations when there are features to ship, releases to stabilize, and defects to fix?
[0] https://learn.microsoft.com/en-us/dotnet/fundamentals/networ...
[1] https://github.com/microsoft/msquic
[2] https://learn.microsoft.com/en-us/dotnet/fundamentals/networ...
[3] https://learn.microsoft.com/en-us/dotnet/fundamentals/networ...
hypeatei
> side note: sad that .NET and C# are not considered "major"...
I've said it before on here, but the tech community severely underrates .NET today. It's not Windows only (and hasn't been for ~8 years) plus C# is a very nice language. F# is also an option for people who like functional languages. I'd highly recommend giving it a try if you haven't already.
shermantanktop
.NET suffers from the long lasting reputational taint of Microsoft. It was seen as the sworn enemy of open source and Linux, and for good reason.
Today’s MS is not what it was back then. But long memories are not a bad thing, really. If .NET suffers a bit from some unfair perception, perhaps that can remind MS and others what happens when you take an aggressively adversarial approach.
nolist_policy
I would say that .NET is the best example that Microsoft has not changed: https://isdotnetopen.com/
throwaway2037
Why hasn't Java been tainted the same since Oracle bought Sun and now 100% controls Java's development? I am continuously surprised to see they continue to make major investments in the platform. Project Valhalla was started in 2014 and is still going strong. I keep waiting for Oracle to cancel all major Java improvements, then milk the remaining corpse.
CharlieDigital
The ironic thing? GitHub, VS Code, and TypeScript are all Microsoft products.
mixmastamyk
They may be much less Linux-hostile today, but are still plenty user-hostile. Not to mention their lackadaisical security record.
thayne
More than just that it came from MS.
For a long time, .NET was completely proprietary, and only ran on Windows.
Now it is open source and cross platform, but it is still fighting the momentum of being seen as Windows-only.
parineum
They also severely underrate its actual usage. For a non-"major" language, there sure are a lot of jobs out there.
.NET ain't hip.
briandear
.NET ain’t hip because many of the shops that use it are dinosaurs in processes and culture.
OkGoDoIt
I’ve been a .Net developer since it launched, but recently I find myself using it less and less. I’m so much more productive with LLM assistance and they aren’t very good at C#. (Seriously, I thought AI coding was all exaggeration until I switched to Python and realized what the hype was all about, these language models are just so much more optimized for python)
Plus now Microsoft is being a bully when it comes to Cursor and the other VS Code forks, and won’t let the .net extensions work. I jumped through a lot of hoops but they keep finding ways to break it. I don’t want an adversarial relationship with my development stack.
I miss C# and I really don’t like Python as a language, but I don’t see myself doing a lot more C# in the future if these trends continue.
seunosewa
You can use VS Code and cursor at the same time. One to code and the other to compile the code. That's how I build for Android. I generate code in Cursor/Windsurf then I compile and deploy using Android Studio.
epolanski
Have you given C# a new try with Cursor and newer models?
There's a huge difference from Copilot in 2023, if that was your example.
Also, interesting you feel more productive on a language you don't even like than your "main" one.
victor106
Can .net produce cross platform libraries/executables like Go does? With Go I can develop on Mac and create executables for windows and Linux
CharlieDigital
Yes, it can.
I work on .NET and work on Mac (hate the OS, but the hardware and battery life are way better).
Last startup, we shipped AWS t4g Arm64 and GCP x64 Linux containers. A few devs started on Windows (because of their preferred platform), but we all ended up on M1 MacBook Pros using a mix of Rider and VS Code.
A common misconception conflates the old .NET Framework with the new .NET (e.g. .NET 9) (MS's terrible naming). C#/.NET has been capable of cross-platform binaries for close to a decade now.
I have a bit more info here: https://typescript-is-like-csharp.chrlschn.dev/pages/intro-a...
andygocke
Hi, I own the Native AOT compiler and self-contained compiler for .NET.
Self-contained will work fine because we precompile the runtime and libraries for all supported platforms.
Native AOT won't, because we rely on the system linker and native libraries. This is the same situation as for C++ and Rust. Unlike Go, which doesn't use anything from the system, we try to support interop with system libraries directly, and in particular rely on the system crypto libraries by default.
Unfortunately, the consequence of relying on system libraries is that you actually have to have a copy of the system libraries to link against them, and a linker that supports that. In practice, clang is actually a fine cross-linker for all these platforms, but acquiring the system libraries is an issue. None of the major OSes provide libraries in a way that would be easy to acquire and deliver to clang, and we don't want to get into the business of building and redistributing the libcs for all platforms (and then be responsible for bugs etc).
Note that if you use cgo and call any C code from Go you will end up in the same situation even for Go -- because then you need a copy of the target system libc and a suitable system linker.
homebrewer
If your code does not rely on native libraries, or you're fine with shipping multiple copies for different operating systems, a single build works everywhere with dotnet installed.
Or you can cross-compile and run without having dotnet on the target system; I do it from Linux to all three platforms all the time, and it's pretty seamless. The application can be packaged into a single binary (similar to Go), or as a bunch of files which you can then package up into a zip file.
artimaeis
I'm a dabbler in Go, far from an expert. But I'm not familiar with a capability to use, say, native MacOS platform libraries from a go app that I'm compiling in Windows/Linux without using a VM of some sort to compile it. If that's possible I'd love to learn more.
dismalaf
Mono's been on Linux for like 20 years, maybe longer... C# is like Java, can run basically anywhere.
lmm
It's not Windows only but it is Windows-first. Every few years I take a look but it still doesn't feel like a serious ecosystem for Linux development (in the same way that e.g. Perl might have a Windows release, but it doesn't feel really first-class). I can't think of a single .NET/C# program that people would typically run on Linux to even have the runtime installed, so no wonder people don't bother investigating the languages.
CharlieDigital
Definitely not Windows-first.
Last startup we built our entire backend in .NET and C#. Every dev ended up using MacBooks. We shipped to AWS t4g Arm64 Linux targets.
This mis-perception is so irrational and not based on any facts.
shepherdjerred
Why would I use C# over any other language though?
jayd16
C# has a strong high-level feature set and lower-level tools to keep performance up. The language itself is well designed and consistent. It's able to mix functional and OO features without being dogmatic, leading to better dev-x overall.
ASP is actually very good these days and feels cleaner than Spring Boot. There's less choice, but the available choices are good. It has arguably the best gRPC implementation. It's just a nice experience overall.
CharlieDigital
It's very productive and kinda nice.
Especially for big backend APIs: https://typescript-is-like-csharp.chrlschn.dev/
hu3
LINQ alone is unmatched.
qingcharles
Indeed. And I would think you can use Microsoft's free reverse proxy, YARP, in front of an app (on any platform) that doesn't natively support HTTP/3?
https://learn.microsoft.com/en-us/aspnet/core/fundamentals/s...
inejge
> At the same time, neither QUIC nor HTTP/3 are included in the standard libraries of any major languages including Node.js, Go, Rust, Python or Ruby.
.NET omission notwithstanding, one of the languages in the list is not like the others: Rust has a deliberately minimal standard library and doesn't include HTTP at all. I don't follow Rust HTTP/3 efforts closely, but there are at least two actively developed libraries: quiche and quinn.
Macha
The official Python stance on using the standard library HTTP client is also "are you sure?" and their stance on using the standard library HTTP server is "don't".
https://docs.python.org/3/library/http.client.html#module-ht...
https://docs.python.org/3/library/http.server.html#module-ht...
throwaway2037
> sad that .NET and C# are not considered "major"...
No need to be sad about that list. Java is also missing. There must be millions of enterprise programmers in the world using Java. I just checked the Java class HttpClient (a common JDK11+ HTTP client). It currently does not support HTTP/3, so add one more to the list!
Ref: https://docs.oracle.com/en/java/javase/24/docs/api/java.net....
Also, most of the very best network clients are now built on top of NettyIO. I can see an "incubator codec" here: https://github.com/netty/netty-incubator-codec-http3
Holy hell, this code is ridiculously complex: https://github.com/netty/netty-incubator-codec-http3/tree/ma...
I'm not hating on NettyIO here, but the protocol looks complex. Yet another reason why it is so slow to be deployed to more application frameworks.
lmm
I don't think many people use the builtin HttpClient these days. Apache HttpClient would be the conservative standard; apparently it has some support for 2 but not 3: https://hc.apache.org/httpcomponents-client-5.4.x/index.html
oaiey
There is a very simple reason for not being on the list: it supports HTTP/3, and the purpose of the post was to show the lack of support in common languages. .NET does not fit that list well.
And it is obviously only popular in the dark matter of Enterprise Software Development.
epolanski
C# is much closer in popularity to TypeScript/Python (the number 1s) than it is to Rust.
CharlieDigital
> And it is obviously only popular in the dark matter of Enterprise Software Development.
Well, and apparently for game development, given both Godot and Unity use C# and various builds of .NET
ycombinatrix
Kind of misleading - Rust stdlib doesn't include HTTP 1 or 2 either
o11c
Until the dotnet packages are shipped in every major Linux distro's default repositories (it's okay if it's a 5-year-old version), we can't call C# a major language.
But apparently there aren't enough people willing to actually do the work.
xpressvideoz
> side note: sad that .NET and C# are not considered "major"...
Even Microsoft does not use C# for their new projects. See the new TypeScript compiler that is being rewritten in Go. So I think it is safe to say C# is indeed a minor language.
CharlieDigital
> So I think it is safe to say C# is indeed a minor language
That's not really the case; StackOverflow survey[0] shows C# (27.1%) right behind Java (30.3%) and well ahead of Go (13.5%), Rust (12.6%), Kotlin (9.4%), Ruby (5.2%), and Scala (2.6%). If we exclude HTML/CSS, Bash/Shell, and SQL, C# would be #5 in actual languages used over the past year by devs in this survey.
You get the same result from scraping job postings: https://www.devjobsscanner.com/blog/top-8-most-demanded-prog...
1. JS/TS (note these two are collapsed)
2. Python
3. Java
4. C#
Two completely separate sources with the same output...
> See the new TypeScript compiler that is being rewritten in Go
If they had started from scratch, Anders mentioned the considerations would be different. But because they had an existing body of code that was not class based, it would be more of a re-write (C#) versus a refactor (Go). A lot of folks read the headline without actually reading Anders' comments and reasoning.
C# is good for many things -- in particular application backends, game engines (both Godot and Unity) -- and not optimal for other things -- like serverless functions. Each language has a place and Go and Python are certainly better for CLI tools, for example.
bheadmaster
> StackOverflow survey[0] shows C# (27.1%) right behind Java (30.3%)
Can we rule out sample bias here? After all, Jon Skeet [0] is an important part of Stack Overflow's C# community.
It might just be the case that C# and Java developers use Stack Overflow more than users of other languages.
[0] https://toggl.com/blog/save-princess-8-programming-languages
Teckla
> But because they had an existing body of code that was not class based, it would be more of a re-write (C#) versus a refactor (Go).
I don't understand this reasoning at all, and I'm hoping you can shed some light on it.
As far as I know, C# supports static methods. Thus, using OO in C# would not have been required, would it?
I feel like I'm missing something here.
briandear
I’ve never filled out a Stack Overflow survey. I wouldn’t say Stack Overflow is statistically representative of what’s being used — it’s statistically representative of people that use Stack Overflow. 10 years ago SO was my go-to. Now, I barely notice it — it seems very outdated in many respects.
0x457
I never understood SO as a measurement tool for anything but people who can't read docs.
jabart
The interviews with the TypeScript dev doing the rewrite will tell you why. Switching their compiler to Go was a quick transition since Go matched their current JS build. The dev also wanted to use Go, and to use functional programming. It would have required more work to switch from a functional style to the OOP style that C# has. The dev also didn't want to learn F#. Nothing about C#, just a personal decision with the least amount of work to get to a beta.
int_19h
FWIW the "functional programming" angle here is misconstrued - C# has better facilities for it than Go.
The thing that they actually wanted is data-centric programming with structural types.
kevinmershon
It's pretty true from recent experience. I've recently started rewriting a C#-based desktop/window streaming tool because of how weak the support is across the board for C#. Microsoft abandoned WinRTC, SIPSorcery is one guy and is missing VP9, HEVC, and AV1 support. And for fancier stuff like using compute shaders for color space conversion, SharpDX is constantly referenced by ChatGPT and MS docs, yet it's archived and unmaintained as well. I ended up using the media streams VideoFrame class, but it and two other classes required to interact with it have unpreventable thread and memory leaks built into the WinRT implementations themselves 4+ years ago. Good times.
All of the above was easy to implement in Rust
kragen
This is an interesting point I hadn't thought of when I saw the announcement of the new TypeScript compiler. It might be overstating the case to say that C# is indeed a minor language, but it's thought-provoking that it wasn't Microsoft's automatic choice here, the way it is for some all-Microsoft in-house IT shops.
CharlieDigital
Microsoft themselves ship on a variety of platforms.
It's more about right tool for the right job.
A good example is the Azure CLI; it's Python. Microsoft is also a big contributor in the Python scene[0]
I don't think it's surprising at all that they didn't use C# to write a compiler for TS.
They have internal champions for Rust[1]
I'd say Microsoft is possibly one of the most diverse shops when it comes to tech selection.
[0] https://devblogs.microsoft.com/python/supporting-the-python-...
[1] https://www.theregister.com/2022/09/20/rust_microsoft_c/
int_19h
TypeScript guys have a FAQ and even a video explaining why they chose Go exactly, and why not C# (or Rust, or other things). They had their reasons.
troupo
It's not thought-provoking if you care to spend 5 minutes and read/listen to the reasons they provided.
jayd16
Go folks are going to be holding up this Typescript decision for years, aren't they...
ralferoo
For me, I think the biggest issue with large scale deployment of HTTP 3 is that it increases the surface area of potentially vulnerable code that needs to be kept patched and maintained. I'd far rather have the OS provide a verified safe socket layer, and a dynamically linked SSL library, that can be easily updated without any of the application layer needing to worry about security bugs in the networking layer.
Additionally, I'd posit that for most client applications, a few extra ms of latency on a request isn't really a big deal. Sure, I can imagine applications that might care, but I can't think of any applications I have (as a developer or as a user) where I'd trade more complexity in the networking layer for potentially saving a few ms per request, or more likely just on the first request.
lemagedurage
A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
For all the CPU optimisations we're doing, cutting out a 50ms roundtrip for establishing a HTTP connection feels like a great area to optimize performance.
motorest
> A "few extra ms" is up to 3 roundtrips difference, that's easily noticeable by humans on cellular.
That's a valid concern. That's the baseline already though, so everyone is already living with that without much in the way of a concern. It's a nice-to-have.
The problem OP presents is what the tradeoffs are for that nice-to-have. Are security holes an acceptable tradeoff?
IgorPartola
I routinely have concerns about lag on mobile. It sucks to have to wait for 10 seconds for a basic app to load. And that adds up over the many many users any given app or website has.
fragmede
Most people still use Google, and so they're living the fast HTTP/3 life, switching off that to a slower protocol only when interacting with non-Google/Amazon/MSFT properties. If your product is a competitor but slower/inaccessible, users are going to bounce off your product and not even be able to tell you why.
croes
And then the web app eats all that saved ms with ease.
guappa
Does it really matter? The website is first going to download 5mb of js, then it's going to show 3 popups.
the8472
Isn't 5G supposed to solve the mobile latency issue?
nine_k
The connection to your local tower can have a negligible latency. The connection all the way to the datacenter may take longer. Then, there is congestion sometimes, e.g. around large gatherings of people; it manifests as latency, too.
KaiserPro
> Isn't 5G supposed to solve the mobile latency issue?
Kinda.
So 5G is faster, but it's still wireless and shared spectrum. This means that the more people use it, or the further away they are, the more the speed and bandwidth per client get adjusted down.
(I'm not sure of the coding scheme for 5G, so take this with caution.) For mobiles that are further away, or have a higher noise floor, the symbol rate (i.e. the number of radio-wave "bits" being sent) is reduced so that there is a high chance they will be understood at the other end (Shannon's law, or something). Like in wifi, as the signal gets weaker, the headline connection speed drops from 100mb+ to 11.
In wifi, that tends to degrade the whole AP's performance; in 5G I'm not sure.
Either way, a bad connection will give you dropped packets.
TheRealPomax
And yet, compared to the time you're waiting for that masthead JPEG to load, plus an even bigger "react app bundle", also completely irrelevant.
HTTP/3 makes a meaningful difference for machines that need to work with HTTP endpoints, which is what Google needed it for: it will save them (and any other web based system similar to theirs) tons of time and bandwidth, which at their scale directly translates to dollars saved. But it makes no overall difference to individual humans who are loading a web page or web app.
There's a good argument to be made about wasting round trips and HTTP/3 adoption fixing that, but it's not grounded in the human experience, because the human experience isn't going to notice it and go "...did something change? everything feels so much faster now".
charleslmunger
Deploying QUIC led to substantial p95 and p99 latency improvements when I did it (admittedly a long time ago) in some widely used mobile apps. At first we had to correct our analysis for connection success rate because so many previously failing connections now succeeded slowly.
It's a material benefit over networks with packet loss and/or high latency. An individual human trying to accomplish something in an elevator, parking garage, or crowded venue will care about a connection being faster with a greater likelihood of success.
celsoazevedo
Almost every optimization is irrelevant if we apply the same reasoning to everything. Add all savings together and it does make a difference to real people using the web in the real world.
epolanski
> But it makes no overall difference to individual humans who are loading a web page or web app.
Navigating from my phone on 4G versus my fiber connection is drastically different.
It's especially noticeable on vacation or in places with poor connections: TLS handshakes can take many, many, many seconds. After the handshake, with an established connection, it's very different.
cyanmagenta
> I'd far rather have the OS provide a verified safe socket layer
There is work going on right now[1] to implement the QUIC protocol in the linux kernel, which gets used in userspace via standard socket() APIs like you would with TCP. Of course, who knows if it’ll ultimately get merged in.
eptcyka
Yea, but does the kernel then also do certificate validation for you? Will you pin certs via setsockopt? I think QUIC and TLS are wide enough attack surfaces to warrant isolation from the kernel.
cyanmagenta
> but does the kernel then also do certificate validation for you
No, the asymmetric cryptography is all done in userspace. Then, post-handshake, symmetric cryptography (e.g., AES) is done in-kernel. This is the same way it works with TCP if you’re using kTLS.
XorNot
The problem is that the situation where everyone rolls their own certificate stack is lunacy in this day and age. We need crypto everywhere, and it should be a lot easier to configure how you want: the kernel is a great place to surface the common interface for say "what certificates am I trusting today?"
The 10+ different ways you specify a custom CA is a problem I can't wait to see the back of.
SahAssar
The kernel already does TLS, but the handshake happens in user-space.
LtWorf
It will not.
jeroenhd
Experiencing the internet at 2000ms latency every month or so thanks to dead spots along train tracks, the latency improvements quickly become noticeable.
HTTP/3 is terrible for fast connections (with download speeds on gigabit fiber notably capped) and great for bad ones (where latency + three way handshakes make the web unusable).
Perhaps there should be some kind of addon/setting for the browser to detect the quality of the network (doesn't it already for some JS API?) and dynamically enable/disable HTTP/3 for the best performance. I can live with it off 99% of the time, but those rare times I'm dropped to 2G speeds, it's a night and day difference.
AnthonyMouse
> I'd far rather have the OS provide a verified safe socket layer, and a dynamically linked SSL library, that can be easily updated without any of the application layer needing to worry about security bugs in the networking layer.
Then you're trying to rely on the OS for this when it should actually be a dynamically linked third party library under some open source license.
Trying to get the OS to do it runs into one of two problems. Either each OS provides its own interface, and then every application has to be rewritten for each OS, and developers don't want to deal with that so they go back to using a portable library; or the OS vendors all have to get together and agree on a standard interface, but then at least Microsoft refuses to participate and that doesn't happen either.
The real problem here is that mobile platforms fail to offer a package manager with the level of dependency management that has existed on Linux for decades. The way this should work is that you open Google Play and install whatever app that requires a QUIC library, it lists the QUIC library as a dependency, so the third party open source library gets installed and dynamically linked in the background, and the Play Store then updates the library (and therefore any apps that use it) automatically.
But what happens instead is that all the apps statically link the library and then end up using old insecure versions, because the store never bothered to implement proper library dependency management.
kelnos
> a few extra ms of latency on a request
That's not what it is, though. The graph embedded in the article shows HTTP/3 delivering content 1.5x-2x faster than HTTP/2, with differences in the hundreds of ms.
Sure, that's not latency, but consider that HTTP/3 can do fewer round-trips. RTs are often what kill you.
Whether or not this is a good trade off for the negatives you mention is still arguable, but you seem to be unfairly minimizing HTTP/3's benefits.
hubabuba44
If the initial TCP 3-way handshake fails, you can end up waiting quite a bit longer than a few ms. Depending on the OS, it's a second or more.
xxs
>saving a few ms per request, or more likely just on the first request.
That's not a given either, as UDP is normally not prioritized under congestion.
eptcyka
But that will change, as more and more clients will rely on UDP on port 443.
AnthonyMouse
It's also a poor congestion control practice to begin with. The main categories of UDP traffic are DNS, VoIP and VPNs. DNS is extremely latency sensitive -- the entirety of what happens next is waiting for the response -- so dropping DNS packets is a great way to make everything suck more than necessary. VoIP often uses some error correction and can tolerate some level of packet loss, but it's still a realtime protocol and purposely degrading it is likewise foolish.
And VPNs are carrying arbitrary traffic. You don't even know what it is. Assigning this anything less than "normal" priority is ridiculous.
In general middleboxes should stop trying to be smart. They will fail, will make things worse, and should embrace being as dumb and simple as possible. Don't try to identify traffic, just forward every packet you can and drop them at random when the pipe is full. The endpoints will figure it out.
sennalen
Connection migration sounds like a security nightmare
AnthonyMouse
How is it any worse than session resumption from a different IP address?
fresh_broccoli
The slow adoption of QUIC is the result of OpenSSL's refusal to expose the primitives needed by QUIC implementations that already existed in the wild. Instead, they decided to build their own NIH QUIC stack, which after all these years is still not complete.
Fortunately, this recently changed and OpenSSL 3.5 will finally provide an API for third party QUIC stacks.[1] It works differently than all the other existing implementations, as it's push-based instead of pull-based. It remains to be seen what it means for the ecosystem.
xg15
I feel another way to look at it is that there is a growing divide between the "frontend/backend developer" view of an application and the "ops/networking" view - or put differently, HTTP/2 and HTTP/3 are not really "application layer" protocols anymore; they're more on the level of TCP and TLS and are perceived as such.
As far as developers are concerned, we still live, have always lived and will always be living in a "plaintext HTTP 1.1 only" world, because those are the abstractions that browser APIs and application servers still maintain. All the crazy stuff in between - encryption, CDNs, changing network protocols - are just as abstracted away as the different hops of an IP packet and might just as well not exist from the application perspective.
koakuma-chan
> because those are the abstractions that browser APIs and application servers still maintain
Because semantics are the same for every version.
KaiserPro
I think it's more that HTTP/3 only really gives marginal gains for most people.
Just as python3 had almost nothing for the programmer over python2, apart from print needing brackets. Sure, it was technically better and allowed for future gains, but in practice, for the end user, there was no real reason to adopt it.
For devs outside of FAANG, there is no real reason to learn how to set up and implement HTTP/3.
ratorx
I’d go even further and say that HTTP/3 gives almost no gains for the average person using a high speed wired or wireless internet connection at a fixed location (or changing locations infrequently).
However, for high latency mobile connections while roaming and continuously using the internet, it’s quite an optimisation.
I wouldn’t expect even the vast majority of devs in FAANG to care. It should purely be an infrastructural change that has no impact on application semantics.
jsheard
It's pretty glaring that nginx still doesn't have production-ready HTTP3 support despite being a semi-commercial product backed by a multi billion dollar corporation. F5 is asleep at the wheel.
LinuxBender
Out of curiosity have F5 added any new modules since they acquired Nginx?
pas
acquisition finished in 2019
there are quite a lot of features, but it's hard to say what constitutes a new module. (well, there's "Feature: the ngx_stream_set_module." so maybe yes?)
LinuxBender
One would probably have to go through git logs [1] so I guess I should do that after getting some food in the belly to answer my own question. It's a big log. Interesting side note, appears all commits from Maxim stopped in January 2024. Must be all F5 now.
jauntywundrkind
There's some cool stuff & capabilities here. It's surprising to me that uptake has been so slow.
Node.js just posted an update on the state of QUIC, which underlies HTTP/3 and has had some work over the years. They're struggling with OpenSSL being slow to get adequate API support going. There are efforts with working QUIC hooks (OpenSSL forks), but the prospect of switching is somewhat onerous.
Really unfortunate; so much of this work has been done for Node & there's just no straightforward path forwards.
sureIy
I'd love to see that OpenSSL fork drama on the main page of HN. Do you know where this was discussed?
billywhizz
there's a pretty good summary of things with links from daniel stenberg - the curl guy - here : https://daniel.haxx.se/blog/2021/10/25/the-quic-api-openssl-...
karel-3d
I think the issue is the complexity? It pushes a lot of logic into userspace.
jillesvangurp
My observation is that anything based on public cloud providers using their load balancers is basically using HTTP/3 out of the box. This benefits people who use browsers that support it (essentially all desktop and mobile browsers). And since it falls back to plain HTTP/1.1, there are no downsides for others.
Sites that use their own apache/nginx/whatever servers are not benefiting from this and need to do work. And this is of course not helped by the fact that http3 support in many servers is indeed still lacking. Which at this point should be a strong hint to maybe start considering something more modern/up to date.
Http clients used for API calls between servers that maybe use pipelining and connection reuse, benefit less from using HTTP3. So, fixing http clients to support http3 is less urgent. Though there probably are some good reasons to support this anyway. Likewise there is little benefit in ensuring communication between microservices in e.g. Kubernetes happens over http3.
JimDabell
I’ve been using niquests with Python. It supports HTTP/3 and a bunch of other goodies. The Python ecosystem has been kind of stuck on the requests package due to inertia, but that library is basically dead now. I’d encourage Python developers to give niquests a try. You can use it as a drop-in replacement for requests then switch to the better async API when you need to.
https://niquests.readthedocs.io/en/latest/
Traditionally these types of things are developed outside the stdlib for Python. I’m not sure why they draw the line where they do between urllib vs niquests, but it does sometimes feel like the batteries-included nature of Python is a little neglected in some areas. A good HTTP library seems like it belongs in the stdlib.
mixmastamyk
requests dead? The reason given for not including it in the stdlib was so it could evolve more rapidly. Back then the protocol layer was handled/improved by urllib3.
JimDabell
It’s not evolving at all:
> Requests is in a perpetual feature freeze, only the BDFL can add or approve of new features. The maintainers believe that Requests is a feature-complete piece of software at this time.
> One of the most important skills to have while maintaining a largely-used open source project is learning the ability to say “no” to suggested changes, while keeping an open ear and mind.
> If you believe there is a feature missing, feel free to raise a feature request, but please do be aware that the overwhelming likelihood is that your feature request will not be accepted.
— https://requests.readthedocs.io/en/latest/dev/contributing/#...
antisthenes
It takes a very special case of a person to complain about a feature-complete piece of software not evolving fast enough.
artyom
Every single project mentioned in the article is to some extent either open source and/or community driven.
So nobody considered HTTP/3 interesting enough to rush and add support for it very quickly. It'll get there, but fast? I don't think so, see IPv6.
Also, nobody considered HTTP/3 worth enough of paying for maintainers to add support for it.
jsheard
Nginx (F5) and Go (Google) are hardly scrappy open source projects with limited resources. The former is semi-commercial, you can pay for Nginx and still not have stable HTTP3 support. Google was one of the main drivers of the HTTP3 spec and has supported it both in Chromium and on their own cloud for years, but for whatever reason they haven't put the same effort into Go's stdlib.
arccy
It's in progress: quic is in testing in http://pkg.go.dev/golang.org/x/net/quic and http3 is being implemented https://github.com/golang/go/issues/70914
Since Go has strong backwards compatibility guarantees, they're unlikely to commit to APIs that may need to change in the standard library.
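In the meantime, Go code that wants HTTP/3 today generally pulls in the third-party quic-go library rather than x/net. A minimal client sketch, assuming a recent quic-go release (older versions call the transport http3.RoundTripper rather than http3.Transport) and any HTTP/3-capable endpoint:

    package main

    import (
        "fmt"
        "io"
        "net/http"

        "github.com/quic-go/quic-go/http3" // third-party; nothing in the stdlib yet
    )

    func main() {
        // The HTTP/3 transport satisfies http.RoundTripper, so it slots into a
        // plain *http.Client and the rest of the code stays protocol-agnostic.
        tr := &http3.Transport{}
        defer tr.Close()

        client := &http.Client{Transport: tr}
        resp, err := client.Get("https://cloudflare-quic.com/") // any HTTP/3-capable server
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.Proto, "-", len(body), "bytes") // expect "HTTP/3.0"
    }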
Orygin
The backwards compatibility guarantees are for the language and not the standard library. They won't make breaking changes willy-nilly, but it can and has happened for the std.
FuriouslyAdrift
I'd go with HAProxy over Nginx any day. It's far more robust and more capable. They've had QUIC & HTTP/3 since 2022.
kccqzy
The comparison with IPv6 is interesting. IPv6 isn't mainly driven by open source or community. It is driven by the needs of large corporations, including both ISPs and tech companies. ISPs like T-mobile wanting to run an IPv6-only backbone network, and tech companies like Apple forcing every app in the App Store to work in IPv6-only mode (DNS64+NAT64). New operating system levels features for IPv6 are often proposed by big tech companies and then implemented eagerly by them; see for example DHCP option 108.
In a sense the need for IPv6 is driven by corporates just like that for HTTP/3.
elcritch
IPv6 always seemed to me to be driven by a certain class of purist networking geeks. Then some corporations started getting on board like you said, but many couldn't care less.
FuriouslyAdrift
The largest use of IPv6 is in mobile (cell) networks. When they (the standards bodies) effectively killed IP block mobility (provider-independent netblocks), they effectively killed its adoption everywhere else.
I work in the networking space, and outside of dealing with certain European subsidiaries, we don't use IPv6 anywhere. It's a pain to use, and the IPv6 stacks on equipment (routers, firewalls, etc.) are nowhere near the quality, affordability, and reliability of their IPv4 stacks.
nine_k
The exhaustion of IPv4 address pool was easy to predict even in 2000, just by extrapolation of the growth curve.
Then came IP telephony backbone and the mobile internet, topped up with the cloud, and the need became acute. For the large corporations involved, at least.
kccqzy
Oh many purist networking geeks joined large corporations so that these corporations began to push IPv6 in a direction set by the geeks. They understood that as independent geeks they have essentially no say in the evolution of IPv6. My favorite example here is Android refusing to support stateful DHCPv6; it's clear that it's being pushed by purist networking geeks inside Google.
pas
wanting p2p to work (without quixotic NAT hole-punching) is puristry?
vlovich123
Ummm… Google invented QUIC, pushed it into Chrome, and shuttled it through the IETF to be ratified as a standard. Some of the large OSS projects are maintained by large companies (e.g. quiche is by Cloudflare), and Microsoft has MsQuic, which you can link against directly or just use as the kernel-mode version built into the OS since Windows 11. The need for QUIC is actually even more driven by corporates, since IPv6 was a comparatively small pain point next to better reaching customers on high-latency network connections.
hylaride
99% of the benefit of HTTP/3 is in distributed web serving where clients are connecting to multiple remote ends on a web page (which, let's be honest, is mostly used for serving ads faster).
Why would the open source community prioritize this?
not_a_bot_4sho
> see IPv6
"We'll get to IPv6 after we finish IPv5"
confirmr
> You'll start to see lack of HTTP/3 support used as a signal to trigger captchas & CDN blocks, like as TLS fingerprinting is already today. HTTP/3 support could very quickly & easily become a way to detect many non-browser clients, cutting long-tail clients off from the modern web entirely.
That explains it. I've seen this when using 3 year old browsers on retail web sites recently. A few cloud providers think I’m a bot.
LinuxBender
I've been doing that on my hobby sites ever since all the popular browsers supported HTTP/2.0 [1]
if ($server_protocol != HTTP/2.0) { return 444; }
It knocks out a lot of bots. I am thankful that most bots are poorly maintained and most botters are just skiddies that could not maintain the code if they wanted to.
[1] - https://nginx.org/en/docs/http/ngx_http_rewrite_module.html#...
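For anyone fronting a Go service with net/http instead of nginx, a rough, hypothetical equivalent of that rule looks like this. nginx's 444 "close without responding" has no direct analogue, so this hijacks and drops HTTP/1.x connections (cert.pem/key.pem are placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    // requireHTTP2 mimics the nginx rule above: anything that didn't arrive over
    // HTTP/2 or later gets its connection dropped without a response.
    func requireHTTP2(next http.Handler) http.Handler {
        return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.ProtoMajor < 2 {
                // HTTP/1.x ResponseWriters support hijacking, so we can close the
                // socket without sending any status line at all.
                if hj, ok := w.(http.Hijacker); ok {
                    if conn, _, err := hj.Hijack(); err == nil {
                        conn.Close()
                        return
                    }
                }
                http.Error(w, "HTTP/2 or newer required", http.StatusUpgradeRequired)
                return
            }
            next.ServeHTTP(w, r)
        })
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over", r.Proto)
        })
        // net/http only negotiates HTTP/2 over TLS; cert.pem/key.pem are placeholders.
        log.Fatal(http.ListenAndServeTLS(":8443", "cert.pem", "key.pem", requireHTTP2(mux)))
    }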
userbinator
It's horrible that the Internet is slowly becoming a locked-down ecosystem. Everyone should turn off HTTP/3 in protest of this.
crazygringo
What exactly are sites supposed to do to prevent being the targets of DDoS, spam, fraud, aggressive bots, and other abuse? And it's not "locked down", it's usually just a CAPTCHA as long as you're not coming from an abusive IP range like might happen with a VPN.
Also there are a thousand other signals besides HTTP/3. It's not going to make a difference.
kragen
The normalization of CAPTCHAs for simply reading what ought to be public information strikes me as very alarming, as does characterizing essential privacy and anti-censorship measures like VPNs as "abusive".
Something like 1% of HTTP hits pose some risk of spam or fraud, those where somebody is trying to post a message or a transaction or something. The other 99% are just requesting a static HTML document or JPEG (or its moral equivalent with unwarranted and untrustworthy dynamic content injected). Static file serving is very difficult to DDoS, even without a caching CDN like Fastly. There is still a potentially large bandwidth cost, but generally the DoS actor has to pay more than the victim website, making it relatively unappealing.
Should web sites really be weighing "a thousand signals" to decide which version of the truth to present a given user with? That sounds dystopian to me.
userbinator
It's not "just a CAPTCHA", it's a monstrosity involving tons of invasive JS that requires the latest Big Browser and attempts to identify you.
progmetaldev
When it comes specifically to Cloudflare, it does not have to be this way. A site operator can choose to set their own rules for triggering CAPTCHAs, it's just that most don't actually bother to learn about the product they're using.
I use Cloudflare through my employer because I deal with clients that aren't willing to spend a few hundred dollars a month on hosting. In order to keep up with sales of new websites for these clients (where the real money lies), I need to keep hosting costs down, while also providing high-availability and security. Bot traffic is a real problem, and while I would love to not require using Cloudflare in favor of other technologies to keep a site running quickly and securely, I just can't find another solution near a similar price point. I've already tweaked the CMS I use to actually run with less than the minimum recommended requirements, so would have to take a more hostile action towards my clients to keep them at the same cost (such as using a less powerful CMS, or setting expiration headers far in the future - which doesn't help with bots).
If anyone has suggestions, I'd be open to them, but working for a small business I don't have the luxury to not run with Cloudflare (or a similar service if one exists). I have worked with this CMS since 2013, and have gone into the internals of the code to try and find every way to reduce memory and CPU usage so I don't need to depend on other services, but I don't see too many returns anymore.
I am all for internet privacy, and don't like where things are going, but also do work for lots of small businesses including charities and "mom and pop" shops that can't afford extra server resources. In no way do I use Cloudflare to erode user privacy or trust, but can understand others looking at it that way. If I had the option to pick my clients and their budgets, it wouldn't be an issue.
dadrian
It's not clear to me that HTTP/3 is relevant to anyone who isn't already using it. It's most useful for large-scale hosting providers and video. And these people have already adopted it, and don't necessarily use out-of-the-box web servers for their infrastructure.
lemagedurage
Small websites gain from reducing roundtrips on connection setup too. Fast websites are nice.
dadrian
HTTP/2 already reduces roundtrips.
lieuwex
HTTP/3 even more so due to QUIC's shorter handshake process.
charleslmunger
At the cost of head-of-line blocking - one dropped TCP packet delays all HTTP/2 streams.
> Really it's hard to point to any popular open-source tools that fully support HTTP/3: rollout has barely even started.
> This seems contradictory. What's going on?
IT administrators and DevOps engineers such as myself typically terminate HTTP/3 connections at the load balancer, terminate SSL, then pass back HTTP 1.1 (_maybe_ 2 if the service is GRPC or GraphQL) to the backing service. This is way easier to administer and debug, and is supported by most reverse proxies. As such, there's not much need for HTTP/3 in server side languages like Golang and Python, as HTTP/1.1 is almost always available (and faster and easier to debug!) in the datacenter anyways.
HTTP/3 and IPv6 are mobile centric technologies that are not well suited for the datacenter. They really shine on ephemeral spotty connections, but add a lot of overhead in a scenario where most connections between machines are static, gigabit, low-latency connections.
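For what it's worth, that termination pattern is easy to sketch. Here is a rough, hypothetical edge proxy using the third-party quic-go library (the backend address and cert paths are placeholders): it terminates TLS and HTTP/3 at the edge and speaks plain HTTP/1.1 to the backing service through a stock reverse proxy.

    package main

    import (
        "log"
        "net/http/httputil"
        "net/url"

        "github.com/quic-go/quic-go/http3" // third-party HTTP/3 server implementation
    )

    func main() {
        // The backing service only ever sees plain HTTP/1.1 from the proxy.
        backend, err := url.Parse("http://10.0.0.12:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // Terminate TLS + HTTP/3 (QUIC over UDP 443) at the edge.
        srv := &http3.Server{
            Addr:    ":443",
            Handler: proxy,
        }
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }

A real deployment would also keep a TCP listener for HTTP/1.1 and HTTP/2 clients and advertise the UDP endpoint via an Alt-Svc header, since that's how browsers discover HTTP/3.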