There Isn't Much Point to HTTP/2 Past the Load Balancer
24 comments
February 25, 2025
treve
The first 80% of the article was great, but it gets a bit hand-wavy when it reaches its conclusion.
One thing the article gets wrong: non-encrypted HTTP/2 does exist. Browsers don't support it, but it's great between a load balancer and your application.
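For anyone curious what that looks like in practice, here's a minimal sketch of an app server accepting cleartext HTTP/2 (h2c) using Go's golang.org/x/net/http2/h2c package (the port is made up):

    package main

    import (
        "fmt"
        "net/http"

        "golang.org/x/net/http2"
        "golang.org/x/net/http2/h2c"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            // r.Proto reports "HTTP/2.0" when the load balancer speaks h2c to us.
            fmt.Fprintf(w, "served over %s\n", r.Proto)
        })

        // h2c.NewHandler accepts cleartext HTTP/2 ("h2c") on a plain TCP listener,
        // so no key or certificate is needed on the application server.
        handler := h2c.NewHandler(mux, &http2.Server{})
        http.ListenAndServe(":8080", handler) // port is a made-up example
    }

The load balancer can then speak h2c to this backend while still terminating encrypted HTTP/2 (or HTTP/3) toward browsers.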
fragmede
Not according to Edward Snowden, if you're Yahoo and Google.
Animats
The amusing thing is that HTTP/2 is mostly useful for sites that download vast numbers of tiny JavaScript files for no really good reason. Like Google's sites.
paulddraper
Or small icon/image files.
Anyone remember those sprite files?
cyberpunk
You ever had to host map tiles? Those are the worst!
feyman_r
CDNs like Akamai still don’t support H2 back to origins.
That’s likely not because of the wisdom in the article per se, but because of rising complexity in managing streams and connections downstream.
wczekalski
It is very useful for long lived (bidirectional) streams.
m00x
Only if you're constrained on connections. HTTP/2 is much better for websites mainly because of the slow start on new TCP connections. If you're already connected, you don't suffer those losses, and you benefit from kernel muxing.
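To make the "already connected" point concrete, here's a rough sketch of connection pooling / keep-alive between a proxy and a backend over plain HTTP/1.1 (Go; the hostname and numbers are made up):

    package main

    import (
        "net/http"
        "time"
    )

    func main() {
        // Keep-alive / pooling settings so repeated requests to the same backend
        // reuse an established TCP connection instead of paying a new handshake
        // and slow start each time. Numbers are illustrative, not a recommendation.
        transport := &http.Transport{
            MaxIdleConns:        100,
            MaxIdleConnsPerHost: 100,
            IdleConnTimeout:     90 * time.Second,
        }
        client := &http.Client{Transport: transport, Timeout: 10 * time.Second}

        // backend.internal is a made-up hostname for the app behind the LB.
        resp, err := client.Get("http://backend.internal:8080/healthz")
        if err == nil {
            resp.Body.Close()
        }
    }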
awinter-py
plus in my experience some h2 features behave oddly with load balancers
I don't understand this super well, but could not get keepalives to cross the LB boundary w/ GCP
LAC-Tech
> Personally, this lack of support doesn’t bother me much, because the only use case I can see for it is wanting to expose your Ruby HTTP server directly to the internet without any sort of load balancer or reverse proxy, which I understand may seem tempting, as it’s “one less moving piece”, but not really worth the trouble in my opinion.
That seems like a massive benefit to me.
chucky_z
gRPC?
agf
Surprised not to see this mentioned in the article.
Lots of places (including a former employer) have done tons of work to upgrade internal infrastructure to support HTTP/2 just so they could use gRPC. The performance difference from JSON-over-HTTP APIs was meaningful for us.
I realize there are other solutions but this is a common one.
lmm
I think this post gets the complexity situation backwards. Sure, you can use a different protocol between your load balancer and your application and it won't do too much harm. But you're adding an extra protocol that you have to understand, for no real benefit.
(Also, why do you even want a load balancer/reverse proxy, unless your application language sucks? The article says it "will also take care of serving static assets, normalize inbound requests, and also probably fend off at least some malicious actors", but frankly your HTTP library should already be doing all of those. Adding that extra piece means more points of failure, more potential security vulnerabilities, and for what benefit?)
harshreality
> why do you even want a load balancer/reverse proxy, unless your application language sucks?
Most load balancer/reverse proxy applications also handle TLS. Security-conscious web application developers don't want TLS keys in their application processes. Even the Varnish authors (Varnish is a load balancer/caching reverse proxy) refused to integrate TLS support because of security concerns; despite being reverse-proxy authors, they didn't trust themselves to get it right.
An application can't load-balance itself very well. Either you roll your own load balancer as a separate layer of the application, which is reinventing the wheel, or you use an existing load balancer/reverse proxy.
Easier failover with fewer (ideally zero) dropped requests.
If the app language isn't compiled, having it serve static resources could be much slower than having a reverse proxy do it.
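As a toy sketch of the TLS-termination point above (Go standard library only; addresses and file names are invented), the proxy process holds the certificate and key and forwards plain HTTP, so the application process never sees the TLS material:

    package main

    import (
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // The application listens on localhost over plain HTTP; only this proxy
        // process ever loads the certificate and private key.
        backend, err := url.Parse("http://127.0.0.1:8080") // made-up app address
        if err != nil {
            panic(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(backend)

        // cert.pem and key.pem are placeholder file names.
        http.ListenAndServeTLS(":443", "cert.pem", "key.pem", proxy)
    }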
lmm
> Security-conscious web application developers don't want TLS keys in their application processes.
If your application is in a non-memory-safe language, sure (but why would you do that?). Otherwise I would think the risk is outweighed by the value of having your connections encrypted end-to-end. If your application process gets fully compromised then an attacker already controls it, by definition, so (given that modern TLS has perfect forward secrecy) I don't think you really gain anything by keeping the keys confidential at that point.
pixelesque
> Sure, you can use a different protocol between your load balancer and your application and it won't do too much harm. But you're adding an extra protocol that you have to understand, for no real benefit.
Well, that depends...
At a certain scale (and arguably, not too many people will ever need to think about this), using UNIX sockets (instead of HTTP over TCP) between the application and the load balancer can be faster in some cases, as you don't go through the TCP stack...
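For illustration, a minimal sketch of that UNIX-socket setup (Go, with a made-up socket path); the reverse proxy then targets the socket path instead of an IP and port:

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from behind the proxy")
        })

        // Listen on a UNIX domain socket instead of a TCP port; the reverse proxy
        // (e.g. nginx's proxy_pass http://unix:/run/app.sock:) points at this path.
        ln, err := net.Listen("unix", "/run/app.sock") // made-up socket path
        if err != nil {
            panic(err)
        }
        http.Serve(ln, mux)
    }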
> Also, why do you even want a load balancer/reverse proxy, unless your application language sucks?
Erm... failover... ability to do upgrades without any downtime... it's extra complexity yes, but it does have some benefits...
lmm
> At a certain scale (and arguably, not too many people will ever need to think about this), using UNIX sockets (instead of HTTP TCP) between the application and load balancer can be faster in some cases, as you don't go through the TCP stack...
Sure (although as far as I can see there's no reason you can't keep using HTTP for that). You can go even further and use shared memory (I work for a company that used Apache with Jk back in the day). But that's an argument for using a faster protocol because you're seeing a benefit from it, not an argument for using a slower protocol because you can't be bothered to implement the latest standard.
toast0
Load balancers are nice to have if you want to move traffic from one machine to another. Which sometimes needs to happen even if your application language doesn't suck and you can hotload your changes... You may still need to manage hardware changes, and a load balancer can be nice for that.
DNS is usable, but some clients and recursive resolvers like to cache results for way beyond the TTL provided.
Galanwe
> Also, why do you even want a load balancer/reverse proxy, unless your application language sucks
- To terminate SSL
- To have a security layer
- To load balance
- To have rewrite rules
- To have graceful updates
- ...
lmm
> To terminate SSL
To make sure that your connections can be snooped on over the LAN? Why is that a positive?
> To have a security layer
They usually do more harm than good in my experience.
> To load balance
Sure, if you're at the scale where you want/need that then you're getting some benefit from that. But that's something you can add in when it makes sense.
> To have rewrite rules
> To have graceful updates
Again, I would expect an HTTP library/framework to handle that.
fragmede
You terminate SSL as close to the user as possible, because that round-trip time greatly affects the user experience. What you do between your load balancer and application servers is up to you (read: it should still be encrypted), but terminating SSL ASAP is about user experience.
tuukkah
- To host multiple backends and APIs under one domain name
tuukkah
Answers from the article: the "extra" protocol is just HTTP/1.1, and the reason for a load balancer is the ability to have multiple servers:
> But also the complexity of deployment. HTTP/2 is fully encrypted, so you need all your application servers to have a key and certificate, that’s not insurmountable, but is an extra hassle compared to just using HTTP/1.1, unless of course for some reasons you are required to use only encrypted connections even over LAN.
> So unless you are deploying to a single machine, hence don’t have a load balancer, bringing HTTP/2 all the way to the Ruby app server is significantly complexifying your infrastructure for little benefit.
The TLS requirement of HTTP/2 also hindered HTTP/2 uptake at origins. The TLS handshake adds latency and is unnecessary in some cases. (This is mentioned under the "Extra Complexity" heading in the article.)