A 14kb page can load much faster than a 15kb page (2022)
248 comments
· July 19, 2025
welpo
> That said, I do use KaTeX with client-side rendering on a limited number of pages that have mathematical content
You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/
susam
> You could try replacing KaTeX with MathML: https://w3c.github.io/mathml-core/
I would love to use MathML, not directly, but automatically generated from LaTeX, since I find LaTeX much easier to work with than MathML. I mean, while I am writing a mathematical post, I'd much rather write LaTeX (which is almost muscle memory for me), than write MathML (which often tends to get deeply nested and tedious to write). However, the last time I checked, the rendering quality of MathML was quite uneven across browsers, both in terms of aesthetics as well as in terms of accuracy.
For example, if you check the default demo at https://mk12.github.io/web-math-demo/ you'd notice that the contour integral sign has a much larger circle in the MathML rendering (with most default browser fonts) which is quite inconsistent with how contour integrals actually appear in print.
Even if I decide to fix the above problem by loading custom web fonts, there are numerous other edge cases (spacing within subscripts, sizing within subscripts within subscripts, etc.) that need fixing in MathML. At that point, I might as well use full KaTeX. A viable alternative is to have KaTeX or MathJax generate the HTML and CSS on the server side and send that to the client; that's what I meant by server-side rendering in my earlier comment.
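A rough sketch of that server-side step (untested, and it assumes a Node build step with the katex npm package; renderMath and the sample integral are just placeholders):

    import katex from "katex";

    // Render a TeX string to HTML + MathML at build time, so the client only
    // needs KaTeX's CSS and fonts, not the ~277 kB JavaScript library.
    function renderMath(tex: string, display = false): string {
      return katex.renderToString(tex, {
        displayMode: display,     // block vs. inline math
        throwOnError: false,      // render bad TeX in red instead of throwing
        output: "htmlAndMathml",  // HTML for the visuals, MathML for accessibility
      });
    }

    // Substitute the result into the page template while generating the site:
    const snippet = renderMath("\\oint_C \\mathbf{F} \\cdot d\\mathbf{r}", true);

The client then only needs the KaTeX stylesheet and fonts, which cache well across pages.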
AnotherGoodName
Math expressions are like regex to me nowadays. I ask the LLM coding assistant to write it and it's very, very good at it. I'll probably forget the syntax soon but no big deal.
“MathML for {very rough textual form of the equation}” seems to give a 100% hit rate for me. Even when I want some formatting change, I can ask the LLM and that pretty much always has a solution (MathML can render symbols and subscripts in numerous ways but the syntax is deep). It'll even add the CSS needed to change it up in some way if asked.
BlackFly
KaTeX renders to MathML (either server side or client side). Generally people want a slightly more fluent way of describing an equation than is permitted by a soup of HTML tags. The various TeX dialects (generally just referred to as LaTeX) are the preferred methods of doing that.
mr_toad
Server side rendering would cut out the 277kb library. The additional MathML being sent to the client is probably going to be a fraction of that.
mk12
If you want to test out some examples from your website to see how they'd look in KaTeX vs. browser MathML rendering, I made a tool for that here: https://mk12.github.io/web-math-demo/
em3rgent0rdr
Nice tool! Seems the "New Computer Modern" font is the native MathML rendering that looks closest to standard LaTeX rendering, I guess because LaTeX uses Computer Modern by default. But I notice extra space around the parentheses, which annoys me because LaTeX math allows you to be so precise about how wide your spaces are (e.g. \, \: \; \!). Is there a way to get the spaces around the parentheses to be just as wide as in standard LaTeX math? And the ^ hat above f(x) isn't nicely above just the top part of the f.
djoldman
I never understood math / latex display via client side js.
Why can't this be precomputed into html and css?
susam
> I never understood math / latex display via client side js. Why can't this be precomputed into html and css?
It can be. But like I mentioned earlier, my personal website is a hobby project I've been running since my university days. It's built with Common Lisp (CL), which is part of the fun for me. It's not just about the end result, but also about enjoying the process.
While precomputing HTML and CSS is definitely a viable approach, I've been reluctant to introduce Node or other tooling outside the CL ecosystem into this project. I wouldn't have hesitated to add this extra tooling on any other project, but here I do. I like to keep the stack simple here, since this website is not just a utility; it is also my small creative playground, and I want to enjoy whatever I do here.
whism
Perhaps you could stand up a small service on another host using headless chrome or similar to render, and fall back to client side if the service is down and you don’t already have the pre rendered result stored somewhere. I suggest this only because you mentioned not wanting to pollute your current server environment, and I enjoy seeing these kind of optimizations done :^)
dfc
Is it safe to say the website is your passion project?
marcthe12
Well, there is MathML, but it had poor support in Chrome until recently. That is the web's native equation formatting.
mr_toad
It's a bit more work: usually you're going to have to install Node, Babel, and some other tooling, and spend some time learning to use them if you're not already familiar with them.
VanTodi
Another idea would be to load the heavy library after the initial page is done, but it's still loaded and heavy nonetheless. Or you could create SVGs for the formulas and load them when they are in the viewport. Just my 2 cents.
GavinAnderegg
14kB is a stretch goal, though trying to stick to the first 10 packets is a cool idea. A project I like that focuses on page size is 512kb.club [1] which is like a golf score for your site’s page size. My site [2] came in just over 71k when I measured before getting added (for all assets). This project also introduced me to Cloudflare Radar [3] which includes a great tool for site analysis/page sizing, but is mainly a general dashboard for the internet.
mousethatroared
A question as a non user:
What are you doing with the extra 500kB for me, the user?
> 90% of the time I'm interested in text. For most of the remainder, vector graphics would suffice.
14 kB is a lot of text and graphics for a page. What is the other 500 for?
filleduchaos
Text, yes. Graphics? SVGs are not as small as people think especially if they're any more complex than basic shapes, and there are plenty of things that simply cannot be represented as vector graphics anyway.
It's fair to prefer text-only pages, but the "and graphics" is quite unrealistic in my opinion.
FlyingSnake
Second this. I also find 512kb as a more realistic benchmark and use it for my website.
The modern web crossed the Rubicon for 14kb websites a long time ago.
Brajeshwar
512kb is pretty achievable for personal websites. My next target is to stay within 99kb (100kb as the ceiling). Should be pretty trivial on a few weekends. My website is in the Orange on 512kb.
crawshaw
If you want to have fun with this: the initial window (IW) is determined by the sender. So you can configure your server to the right number of packets for your website. It would look something like:
ip route change default via <gw> dev <if> initcwnd 20 initrwnd 20
A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
sangeeth96
> A web search suggests CDNs are now at 30 packets for the initial window, so you get 45kb there.
Any reference for this?
ryan-c
I'm not going to dig it up for you, but this is in line with what I've read and observed. I set this to 20 packets on my personal site.
londons_explore
be a bad citizen and just set it to 1000 packets... There isn't really any downside apart from potentially clogging up someone who has a dialup connection and bufferbloat.
notpushkin
This sounds like a terrible idea, but can anybody pinpoint why exactly?
jeroenhd
Anything non-standard will kill shitty middleboxes, so I assume spamming packets faster than anticipated will have corporate networks block you off as a security threat of some kind. Mobile carriers also do some weird proxying hacks to "save bandwidth", especially on <4G, so you may also break some mobile connections. I don't have any proof but shitty middleboxes have broken connections with much less obvious protocol features.
But in practice, I think this should work most of the time for most people. On slower connections, your connection will probably crawl to a halt due to retransmission hell, though. Unless you fill up the buffers on the ISP routers, making every other connection for that visitor slow down or get dropped, too.
buckle8017
Doing that would basically disable the congestion control at the start of the connection.
Which would be kinda annoying on a slow connection.
Either you'd have buffer issues or dropped packets.
tgv
This could be another reason: https://blog.cloudflare.com/russian-internet-users-are-unabl...
> ... analysis [by Cloudflare] suggests that the throttling [by Russian ISPs] allows Internet users to load only the first 16 KB of any web asset, rendering most web navigation impossible.
firecall
Damn... I'm at 17.2KB for my home page! (not including dependencies)
FWIW I optimised the heck out of my personal homepage and got 100/100 for all Lighthouse scores. Which I had not previously thought possible LOL
Built in Rails too!
It's absolutely worth optimising your site though. It just is such a pleasing experience when a page loads without any perceptible lag!
apt-apt-apt-apt
Yeah, the fact that news.ycombinator.com loads instantly pleases my brain so much I flick it open during downtime automonkey-ly
Alifatisk
Lobsters, Dlang's forum, and HN are among the few places I know that load instantly, and I love it. This is how it should be!
ghoshbishakh
rails has nothing to do with the rendered page size though. Congrats on the perfect lighthouse score.
Alifatisk
Doesn't the Rails asset pipeline have an effect on the page size, like if Propshaft is being used instead of Sprockets? From what I remember, Propshaft intentionally does not include minification or compression.
firecall
It’s all Rails 8 + Turbo + Stimulus JS with Propshaft handling the asset bundling / pipeline.
All the Tailwind building and so on is done using common JS tools, which are mostly standard out of the box Rails 8 supplied scripts!
Sprockets used to do the SASS compilation and asset bundling, but the Rails standard now is to facilitate your own preferences around compilation of CSS/JS.
firecall
Indeed it does not :-)
It was more a quick promote-Rails comment, as Rails can get dismissed as not something to build a fast website in :-)
9dev
The overlap of people that don’t know what TCP Slow Start is and those that should care about their website loading a few milliseconds faster is incredibly small. A startup should focus on, well, starting up, not performance; a corporation large enough to optimise speed on that level will have a team of experienced SREs that know over which detail to obsess.
jeroenhd
When your approach is "I don't care because I have more important things to focus on", you never care. There's always something you can do that's more important to a company than optimising the page load to align with the TCP window size used to access your server.
This is why almost all applications and websites are slow and terrible these days.
marcosdumay
Well, half of a second is a small difference. So yeah, there will probably be better things to work on up to the point when you have people working exclusively on your site.
> This is why almost all applications and websites are slow and terrible these days.
But no, there are way more things broken on the web than lack of overoptimization.
hinkley
> half a second is a small difference
I don’t even know where to begin. Most of us are aiming for under a half second total for response times. Are you working on web applications at all?
sgarland
This. A million times this.
Performance isn’t seen as sexy, for reasons I don’t understand. Devs will be agog about how McMaster-Carr manages to make a usable and incredibly fast site, but they don’t put that same energy back into their own work.
People like responsive applications - you can’t tell me you’ve never seen a non-tech person frustratingly tapping their screen repeatedly because something is slow.
keysdev
That and SPA
andix
SPAs are great for highly interactive pages. Something like a mail client. It's fine if it takes 2-3 seconds extra when opening the SPA, it's much more important to have instant feedback when navigating.
SPAs are really bad for mostly static websites. News sites, documentation, blogs.
elmigranto
Right. That’s why all the software from, say, Microsoft works flawlessly and at peak efficiency.
SXX
This. It's exactly why Microsoft use modern frameworks such as React Native for their Start Menu used by billions of people every day.
hinkley
And this is why SteamOS is absolutely kicking Windows’ ass on handhelds.
chamomeal
Wait… please please tell me this is a weirdly specific joke
Nab443
And probably the reason why I have to restart it at least twice a week.
9dev
That’s not what I said. Only that the responsible engineers know which tradeoffs they make, and are competent enough to do so.
mnw21cam
Hahaha. Keep digging.
samrus
The decision to use React for the start menu wasn't out of competency. The guy said on Twitter that that's what he knew, so he used it [1]. Didn't think twice. Head empty, no thoughts.
nasso_dev
I agree, it feels like it should be how you describe it.
But if Evan Wallace didn't obsess over performance when building Figma, it wouldn't be what it is today. Sometimes, performance is a feature.
andersmurphy
Doesn't have to be a choice; it could just be the default. My billion cells/checkboxes[1] demos both use datastar and so are just over 10kb. It can make a big difference on mobile networks and 3G. I did my own tests and being over 14kb often meant an extra 3s load time on bad connections. The nice thing is I got this for free because the datastar maintainer cares about TCP slow start even though I might not.
anymouse123456
This idea that performance is irrelevant gets under my skin. It's how we ended up with Docker and Kubernetes and the absolute slop stack that is destroying everything it touches.
Performance matters.
We've spent so many decades misinterpreting Knuth's quote about optimization that we've managed to chew up 5-6 orders of magnitude in hardware performance gains and still deliver slow, bloated and defective software products.
Performance does in fact matter and all other things equal, a fast product is more pleasurable than a slow one.
Thankfully some people like the folks at Figma took the risk and proved the point.
Even if we're innovating on hard technical problems (which most of us are not), performance still matters.
sgarland
Agreed, though containers and K8s aren’t themselves to blame (though they make it easier to get worse results).
Debian Slim is < 30 MB. Alpine, if you can live with musl, is 5 MB. The problem comes from people not understanding what containers are, and how they’re built; they then unknowingly (or uncaringly) add in dozens of layers without any attempt at reducing or flattening.
Similarly, K8s is of course just a container orchestration platform, but since it’s so easy to add to, people do so without knowing what they’re doing, and you wind up with 20 network hops to get out of the cluster.
mr_toad
Containers were invented because VMs were too slow to cold start and used too much memory. Their whole raison d'être is performance.
bobmcnamara
Can you live fork containers like you can VMs?
VM clone time is surprisingly quick once you stop copying memory, after that it's mostly ejecting the NIC and bringing up the new one.
anymouse123456
That's another reason they're so infuriating. Containers are intended to make things faster and easier. But the allure of virtualization has made most work much, much slower and much, much worse.
If you're running infra at Google, of course containers and orchestration make sense.
If you're running apps/IT for an SMB or even small enterprise, they are 100% waste, churn and destruction. I've built for both btw.
The contexts in which they are appropriate and actually improve anything at all are vanishingly small.
anonymars
Yeah, I think Electron would be the poster child
zelphirkalt
Performance matters, but at least initially only as far as it doesn't complicate your code significantly. That's why a simple static website often beats some hyper modern latest framework optimization journey websites. You gotta maintain that shit. And you are making sacrifices elsewhere, in the areas of accessibility and possibly privacy and possibly ethics.
So yeah, make sure not to lose performance unreasonably, but also don't obsess with performance to the point of making things unusable or way too complicated for what they do.
anymouse123456
This kind of thinking is exactly the problem.
Yes, at the most absurd limits, some autists may occasionally obsess and make things worse. We're so far from that problem today, it would be a good one to have.
IME, making things fast almost always also makes them simpler and easier to understand.
Building high-performance software often means building less of it, which translates into simpler concepts, fewer abstractions, and shorter times to execution.
It's not a trade-off, it's valuable all the way down.
Treating high performance as a feature and low performance as a bug impacts everything we do and ignoring them for decades is how you get the rivers of garbage we're swimming in.
sgarland
> way too complicated for what they do
Notably, this is subjective. I’ve had devs tell me that joins (in SQL) are too complicated, so they’d prefer to just duplicate data everywhere. I get that skill is a spectrum, but it’s getting to the point where I feel like we’ve passed the floor, and need to firmly state that there are in fact some basic ideas that are required knowledge.
hinkley
If you’re implying that Docker is the slop, instead of an answer to the slop, I haven’t seen it.
01HNNWZ0MV43FF
Docker good actually
anymouse123456
nah - we'll look back on Docker the same way many of us are glaring at our own sins with OO these days.
austin-cheney
I don’t see what size of corporation has to do with performance or optimization. Almost never do I see larger businesses doing anything to execute more quickly online.
zelphirkalt
Too many cooks spoil the broth. If you got multiple people pushing agenda to use their favorite new JS framework, disregarding simplicity in order to chase some imaginary goal or hip thing to bolster their CV, it's not gonna end well.
sgarland
Depending on the physical distance, it can be much more than a few msec, as TFA discusses.
andrepd
> a corporation large enough will have a team of experienced SREs that know over which detail to obsess.
Ahh, if only. Have you seen applications developed by large corporations lately? :)
achenet
a corporation large enough to have a team of experienced SREs that know which details to obsess over will also have enough promotion-hungry POs and middle managers to tell them devs to add 50MB of ads and trackers in the web page. Maybe another 100MB for an LLM wrapper too.
:)
hinkley
Don’t forget adding 25 individual Google Tag Managers to every page.
hackerman_fi
The article has IMO two flawed arguments:
1. There is math for how long it takes to send even one packet over a satellite connection (~1600ms). It's a weak argument for the 14kb rule since there is no comparison with a larger website. 10 packets won't necessarily take 16 seconds.
2. There is a mention that images on a webpage are included in this 14kb rule. In what case are images inlined into a page's initial load? If this is a special case and 99.9% of images don't follow it, it should be mentioned at the very least.
throwup238
> In what case are images inlined to a page’s initial load?
Low resolution thumbnails that are blurred via CSS filters over which the real images fade in once downloaded. Done properly it usually only adds a few hundred bytes per image for above the fold images.
I don’t know if many bloggers do that, though. I do on my blog and it’s probably a feature on most blogging platforms (like Wordpress or Medium) but it’s more of a commercial frontend hyperoptimization that nudges conversions half a percentage point or so.
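A rough sketch of how such a placeholder can be generated at build time (assuming the sharp image library; the 16px width and JPEG quality are arbitrary choices, and the fade-in swap to the full-size image is left out):

    import sharp from "sharp";

    // Produce a tiny inline placeholder for an above-the-fold image. The
    // ~16px-wide JPEG is inlined as a data: URI (usually a few hundred bytes)
    // and blurred with CSS; the real image is loaded and faded in later.
    async function placeholder(path: string, alt: string): Promise<string> {
      const buf = await sharp(path)
        .resize({ width: 16 })
        .jpeg({ quality: 40 })
        .toBuffer();
      const uri = `data:image/jpeg;base64,${buf.toString("base64")}`;
      return `<img src="${uri}" data-src="${path}" alt="${alt}" style="filter: blur(12px)">`;
    }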
hinkley
Inlined SVG as well. It's a mess.
hsbauauvhabzb
Also the assumption that my userbase uses low latency satellite connections, and are somehow unable to put up with my website, when every other website in current existence is multiple megabytes.
ricardobeat
There was no such assumption, that was just the first example after which he mentions normal roundtrip latencies are usually in the 100-300ms range.
Just because everything else is bad, doesn't invalidate the idea that you should do better. Today's internet can feel painfully slow even on a 1Gbps connection because of this; websites were actually faster in the early 2000s, during the transition to ADSL, as they still had to cater to dial-up users and were very light as a result.
sgarland
> Just because everything else is bad, doesn't invalidate the idea that you should do better.
I get this all the time at my job, when I recommend a team do something differently in their schema or queries: “do we have any examples of teams currently doing this?” No, because no one has ever cared to try. I understand not wanting to be guinea pigs, but you have a domain expert asking you to do something, and telling you that they’ll back you up on the decision, and help you implement it. What more do you want?!
Alifatisk
I agree with the sentiment here, the thing is, I've noticed that the newer generations are using frameworks like Next.js as default for building simple static websites. That's their bare bone start. The era of plain html + css (and maybe a sprinkle of js) feels like it's fading away, sadly.
jbreckmckye
I think that makes sense.
I have done the hyper optimised, inline resource, no blocking script, hand minimised JS, 14kb website thing before and the problem with doing it the "hard" way is it traps you in a design and architecture.
When your requirements change all the minimalistic choices that seemed so efficient and web-native start turning into technical debt. Everyone fantasises about "no frameworks" until the project is no longer a toy.
Whereas the isomorphic JS frameworks let you have your cake and eat it: you can start with something that spits out compiled pages and optimise it to get performant _enough_, but you can fall back to thick client JavaScript if necessary.
fleebee
I think you're late enough for that realization that the trend already shifted back a bit. Most frameworks I've dealt with can emit static generated sites, Next.js included. Astro feels like it's designed for that purpose from the ground up.
austin-cheney
You have noticed that only just recently? This has been the case since jQuery became popular before 2010.
chneu
Arguably it's been this way since web 2.0 became a thing in like 2008?
zos_kia
Next.js bundles the code and aggressively minifies it, because their base use case is to deploy on lambdas or very small servers. A static website using next would be quite optimal in terms of bundle size.
the_precipitate
And you do know that the .exe format is wasteful; a .com file actually saves quite a few bytes if you can limit your executable's size to be smaller than 0xFF00 bytes (man, I am old).
cout
And the a.out format often saves disk space over ELF, despite duplicating code across executables.
simgt
Aside from latency, reducing resource consumption to the minimum required should always be a concern if we intend to have a sustainable future. The environmental impact of our network is not negligible. Given the snarky comments here, we clearly have a long way to go.
EDIT: some replies missed my point. I am not claiming this particular optimization is the holy grail, only that I'd have liked the added benefit of reduced energy consumption to be mentioned.
FlyingAvatar
The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.
I am all for efficiency, but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
schiffern
In that spirit I have a userscript, ironically called Youtube HD[0], that with one edit sets the resolution to 'medium' ie 360p. On a laptop it's plenty for talking head content (the softening is nice actually), and I only find myself switching to 480p if there's small text on screen.
It's a small thing, but as you say internet video is relatively heavy.
To reduce my AI footprint I use the udm=14 trick[1] to kill AI in Google search. It generally gives better results too.
For general web browsing the best single tip is running uBlock Origin. If you can master medium[2] or hard mode (which will require un-breaking/whitelisting sites) it saves more bandwidth and has better privacy.[3]
To go all-out on bandwidth conservation, LocalCDN[4] and CleanURLs[5] are good. "Set it and forget it," improves privacy and load times, and saves a bit of energy.
Sorry this got long. Cheers
[0] https://greasyfork.org/whichen/scripts/23661-youtube-hd
[1] https://arstechnica.com/gadgets/2024/05/google-searchs-udm14...
[2] https://old.reddit.com/r/uBlockOrigin/comments/1j5tktg/ubloc...
andrepd
I've been using uBlock in advanced mode with 3rd party frames and scripts blocked. I recommend it, but it is indeed a pain to find the minimum set of things you need to unblock to make a website work, involving lots of refreshing.
Once you find it for a website you can just save it though so you don't need to go through it again.
LocalCDN is indeed a nobrainer for privacy! Set and forget.
OtherShrezzing
> but optimizing everywhere is a recipe for using up the resources to actually optimize where it matters.
Is it? My front end engineer spending 90 minutes cutting dependencies out of the site isn’t going to deny YouTube the opportunity to improve their streaming algorithms.
josephg
It might do the opposite. We need to teach engineers of all stripes how to analyse and fix performance problems if we’re going to do anything about them.
molszanski
If you turn this into an open problem, without hypothetical limits on what a frontend engineer can do, it would become more interesting and more impactful in real life. That said, the engineer is a human being who could use that time in myriad other ways that would be more productive in helping the environment.
simgt
That's exactly it, but I fully expected whataboutism under my comment. If I had mentioned video streaming as a disclaimer, I'd probably have gotten crypto or Shein as counter "arguments".
Everyone needs to be aware that we are part of an environment that has limited resources beyond "money" and act accordingly, whatever the scale.
pyman
Talking about video streaming, I have a question for big tech companies: Why? Why are we still talking about optimising HTML, CSS and JS in 2025? This is tech from 35 years ago. Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site? The server could publish a link to the uncompressed source so anyone can inspect it, keeping the spirit of the open web alive. Do you realise how many years web developers have spent obsessing over this document-based legacy system and how to improve its performance? Not just years, their whole careers! How many cool technologies were created in the last 35 years? I lost count. Honestly, why are big tech companies still building on top of a legacy system, forcing web developers to waste their time on things like performance tweaks instead of focusing on what actually matters: the product.
ozim
I see you mistake html/css for what they were 30 years ago „documents to be viewed”.
HTML/CSS/JS is the only fully open stack, free as in beer, not owned by a single entity, and standardized by multinational standardization bodies, for building application interfaces that are cross-platform, and it does that excellently. Especially with Electron you can build native apps with HTML/CSS/JS.
There are actual web apps, not just „websites”, being built. Web apps are not HTML with some jQuery sprinkled around; they are actually heavy apps.
Naru41
The ideal HTML I have in mind is a DOM tree represented entirely in TLV binary -- and a compiled .so file instead of .js. And unpacked data to be used directly as C data structures. Zero copy, no parsing, (data validation is unavoidable but) that's certainly fast.
ahofmann
1. How does that help avoid wasting resources? It needs more energy and traffic.
2. Everything in our world is dwarves standing on the shoulders of giants. Ripping everything up and creating something completely new is, most of the time, an idea that sounds better than it really would be. Anyone who thinks otherwise is mostly too young to have seen this pattern.
hnlmorg
That’s already how it works.
The binary is a compressed artefact and the stream is a TLS pipe. But the principle is the same.
In fact videos streams over the web are actually based on how HTTP documents are chunked and retrieved, rather than the other way around.
01HNNWZ0MV43FF
> Why can't browsers adopt a system like video streaming, where you "stream" a binary of your site?
I'll have to speculate what you mean
1. If you mean drawing pixels directly instead of relying on HTML, it's going to be slower. (either because of network lag or because of WASM overhead)
2. If you mean streaming video to the browser and rendering your site server-side, it will break features like resizing the window or turning a phone sideways, and it will be hideously expensive to host.
3. It will break all accessibility features like Android's built-in screen reader, because you aren't going to maintain all the screen reader and braille stuff that everyone might need server-side, and if you do, you're going to break the workflow for someone who relies on a custom tweak to it.
4. If you are drawing pixels from scratch you also have to re-implement stuff like selecting and copying text, which is possible but not feasible.
5. A really good GUI toolkit like Qt or Chromium will take 50-100 MB. Say you can trim your site's server-side toolkit down to 10 MB somehow. If you are very very lucky, you can share some of that in the browser's cache with other sites, _if_ you are using the same exact version of the toolkit, on the same CDN. Now you are locked into using a CDN. Now your website costs 10 MB for everyone loading it with a fresh cache.
You can definitely do this if your site _needs_ it. Like, you can't build OpenStreetMap without JS, you can't build chat apps without `fetch`, and there are certain things where drawing every pixel yourself and running a custom client-side GUI toolkit might make sense. But it's like 1% of sites.
I hate HTML but it's a local minimum. For animals, weight is a type of strength, for software, popularity is a type of strength. It is really hard to beat something that's installed everywhere.
jbreckmckye
I feel this way sometimes about recycling. I am very diligent about it, washing out my cans and jars, separating my plastics. And then I watch my neighbour fill our bin with plastic bottles, last-season clothes and uneaten food.
extra88
At least you and your neighbor are operating on the same scale. Don't stop those individual choices but more members of the populace making those choices is not how the problem gets fixed, businesses and whole industries are the real culprits.
oriolid
> The vast majority of internet bandwidth is people streaming video. Shaving a few megs from a webpage load would be the tiniest drop in the bucket.
Is it really? I was surprised to see that surfing newspaper websites or Facebook produces more traffic per time than Netflix or Youtube. Of course there's a lot of embedded video in ads and it could maybe count as streaming video.
danielbln
Care to share that article? I find that hard to believe.
vouaobrasil
The problem is that a lot of people DO have their own websites for which they have some control over. So it's not like a million people optimizing their own websites will have any control over what Google does with YouTube for instance...
jychang
A million people is a very strong political force.
A million determined voters can easily force laws to be made which forces youtube to be more efficient.
I often think about how orthodoxical all humans are. We never think about different paths outside of social norms.
- Modern western society has weakened support for mass action to the point where it is literally an unfathomable "black swan" perspective in public discourse.
- Spending a few million dollars on TV ads to get someone elected is a lot cheaper than whatever Bill Gates spends on NGOs, and for all the money he spent it seems like aid is getting cut off.
- Hiring or acting as a hitman to kill someone to achieve your goal is a lot cheaper than the other options above. It seems like this concept, for better or worse, is not quite in the public consciousness currently. The 1960s 1970s era of assassinations have truly gone and past.
hnlmorg
It matters at web scale though.
Like how industrial manufacturing is the biggest carbon consumer and, compared to that, I'm just a drop in the ocean. But that doesn't mean I don't also have a responsibility to recycle, because the cumulative effect of everyone like me recycling quickly becomes massive.
Similarly, if every web host did their bit with static content, you’d still see a big reduction at a global scale.
And you're right, it shouldn't be the end of the story. However, that doesn't mean it's a wasted effort / irrelevant optimisation.
ofalkaed
I feel better about limiting the size of my drop in the bucket than I would feel about just saying my drop doesn't matter, even if it doesn't matter. I get my internet through my phone's hotspot with its 15gig a month plan, and I generally don't use the entire 15gigs. My phone and laptop are pretty much the only high tech I have; my audio interface is probably third in line and my oven is probably fourth (self cleaning). The furnace stays at 50 all winter long even when it is -40 out, and if it is above freezing the furnace is turned off. Never had a car, walk and bike everywhere including groceries and laundry, have only used motorized transport maybe a dozen times in the past decade.
A nice side effect of these choices is that I only spend a small part of my pay. Never had a credit card, never had debt, just saved my money until I had enough that the purchase was no big deal.
I don't really have an issue with people who say that their drop does not matter so why should they worry, but I don't understand it; it seems like they just needlessly complicate their life. Not too long ago my neighbor was bragging about how effective all the money he spent on energy efficient windows, insulation, etc. was; he saved loads of money that winter. Yet his heating bill was still nearly three times what mine was, despite him using a wood stove to offset it, with my house being almost the same size, barely insulated, and having 70 year old windows. I just put on a sweater instead of turning up the heat.
Edit: Sorry about that sentence, not quite awake yet and doubt I will be awake enough to fix it before editing window closes.
qayxc
It's not low-hanging fruit, though. While you try to optimise to save a couple of mWh in power use, a single search engine query uses 100x more and an LLM chat is another 100x of that. In other words: there's bigger fish to fry. Plus caching, lazy loading etc. mitigates most of this anyway.
vouaobrasil
Engineering-wise, it sometimes isn't. But it does send a signal that can also become a trend in society to be more respectful of our energy usage. Sometimes, it does make sense to focus on the most visible aspect of energy usage, rather than the most intensive. Just by making your website smaller and being vocal about it, you could reach 100,000 people if you get a lot of visitors, whereas Google isn't going to give a darn about even trying to send a signal.
marcosdumay
So, literally virtue signaling?
And no, a million small sites won't "become a trend in society".
qayxc
I'd be 100% on board with you if you were able to show me a single - just a single - regular website user who'd care about energy usage of a first(!) site load.
I'm honestly just really annoyed about this "society and environment" spin on advice that would have an otherwise niche, but perfectly valid, reason behind it (TFA: slow satellite network on the high seas).
This might sound harsh and I don't mean it personally, but making your website smaller and "being vocal about it" (whatever you mean by that) doesn't make an iota of difference. It also only works if your site is basically just text. If your website uses other resources (images, videos, 3D models, audio, etc.), the impact of first load is just noise anyway.
You can have a bigger impact by telling 100,000 people to drive an hour less each month and if just 1% of your hypothetical audience actually does that, you'd achieve orders of magnitude more in terms of environmental and societal impact.
victorbjorklund
On the other hand, it's kind of like saying we don't need to drive environmentally friendly cars because it is a drop in the bucket compared to container ships, etc.
simgt
Of course, but my point is that it's still a constraint we should have in mind at every level. Dupont poisoning public water with pfas does not make you less of an arsehole if you toss your old iPhone in a pond for the sake of convenience.
timeon
Sure, there are more resource-heavy places, but I think the problem is the general approach. Neglect of performance and our overall approach to resources brought us to these resource-heavy tools. It seems dismissive when people point to places where more cuts could be made and call it a day.
If we want to really fix the places with bigger impact, we need to change this approach in the first place.
qayxc
Sure thing, but it's not low-hanging fruit. The impact is so minuscule that the effort required is too high when compared to the benefit.
This is micro-optimisation for a valid use case (slow connections in bandwidth-starved situations), but in the real world, a single hi-res image, short video clip, or audio sample would negate all your text-squeezing, HTTP header optimisation games, and struggle for minimalism.
So for the vast majority of use cases it's simply irrelevant. And no, your website is likely not going to get 1,000,000 unique visitors per hour so you'd have a hard time even measuring the impact whereas simply NOT ordering pizza and having a home made salad instead would have a measurable impact orders of magnitude greater.
Estimating the overall impact of your actions and non-actions is hard, but it's easier and more practical to optimise your assets, remove bloat (no megabytes of JS frameworks), and think about whether you really need that annoying full-screen video background. THOSE are low-hanging fruit with lots of impact. Trying to trim down a functional site to <14kB is NOT.
quaintdev
LLM companies should show how much energy was consumed processing a user's request. Maybe people would think twice before generating AI slop.
vouaobrasil
Absolutely agree with that. I recently visited the BBC website the other day and it loaded about 120MB of stuff into the cache - for a small text article. Not only does it use a lot of extra energy to transmit so much data, but it promotes a general atmosphere of wastefulness.
I've tried to really cut down my website as well to make it fairly minimal. And when I upload stuff to YouTube, I never use 4K, only 1080P. I think 4K and 8K video should not even exist.
A lot of people talk about adding XYZ megawatts of solar to the grid. But imagine how nice it could be if we regularly had efforts to use LESS power.
I miss the days when websites were very small in the days of 56K modems. I think there is some happy medium somewhere and we've gone way past it.
iinnPP
You'll find that people "stop caring" about just about anything when it starts to impact them. Personally, I agree with your statement.
Since a main argument is seemingly that AI is worse, let's remember that AI is querying these huge pages as well.
Also that the 14kb size is less than 1% of the current average mobile website payload.
hiAndrewQuinn
Do we? Let's compare some numbers.
Creating an average hamburger requires an input of 2-6 kWh of energy, from start to finish. At 15¢ USD/kWh, this gives us an upper limit of about 90¢ of electricity.
The average 14 kB web page takes about 0.000002 kWh to serve. You would need to serve that web page about 1,000,000-3,000,000 times to create the same energy demands as a single hamburger. A 14 MB web page, which would be a pretty heavy JavaScript app these days, would need about 1,000 to 3,000.
I think those are pretty good ways to use the energy.
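The back-of-envelope arithmetic, using the assumed figures above:

    // Both inputs are the rough assumptions above, not measurements.
    const burgerKwh = [2, 6];   // energy to produce one hamburger
    const pageKwh = 0.000002;   // assumed energy to serve one 14 kB page

    // ~1,000,000 to 3,000,000 loads of a 14 kB page per hamburger:
    console.log(burgerKwh.map(kwh => kwh / pageKwh));
    // A 14 MB page costs ~1000x more, so ~1,000 to 3,000 loads per hamburger:
    console.log(burgerKwh.map(kwh => kwh / (pageKwh * 1000)));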
justmarc
Just wondering, how did you arrive at the energy calculation for serving that 14k page?
For a user's access to a random web page anywhere, assuming it's not on a CDN near the user, you're looking at ~10 routers/networks on the way involved in the connection. Did you take that into account?
justmarc
Slightly veering off topic but I honestly wonder how many burgers will I fry if I ask ChatGPT to make a fart app?
hombre_fatal
A tiny fraction of a burger.
ajsnigrutin
Now open an average news site, with 100s of requests, tens of ads, autoplaying video ads, tracking pixels, etc., using gigabytes of RAM and a lot of CPU.
Then multiply that by the number of daily visitors.
Without "hamburgers" (food in general), we die; reducing the size of useless content on websites doesn't really hurt anyone.
hiAndrewQuinn
Now go to an average McDonalds, with hundreds of orders, automatically added value meals, customer rewards, etc. consuming thousands of cows and a lot of pastureland.
Then multiply that by the number of daily customers.
Without web pages (information in general), we return to the Dark Ages. Reducing the number of hamburgers people eat doesn't really hurt anyone.
swores
If Reddit serves 20 billion page views per month, at an average of 5MB per page (these numbers are at least in the vicinity of being right), then reducing the page size by 10% would by your maths be worth 238,000 burgers, or a 50% reduction worth almost 1.2million burgers per month. That's hardly insignificant for a single (admittedly, very popular) website!
(In addition to what justmarc said about accounting for the whole network. Plus, between feeding them and the indirect effects of their contribution to climate change, I suspect you're being generous about the cost of a burger.)
lpapez
Being concerned about page sizes is 100% wasted effort.
Calculate how much electricity you personally consume in total browsing the Internet for a year. Multiply that by 10 to be safe.
Then compare that number to how much energy it takes to produce a single hamburger.
Do the calculation yourself if you do not believe me.
On average, we developers can make a bigger difference by choosing to eat salad one day instead of optimizing our websites for a week.
zigzag312
So, anyone serious about sustainable future should stop using Python and stop recommending it as introduction to programming language? I remember one test that showed Python using 75x more energy than C to perform the same task.
mnw21cam
I'm just investigating why the nightly backup of the work server is taking so long. Turns out Python (as conda, anaconda, miniconda, etc.) has dumped 22 million files across the home directories, and this takes a while to just list, let alone work out which files have changed and need archiving. Most of these are duplicates of each other, and files that should really belong to the OS, like bin/curl.
I myself have installed one single package, and it installed 196,171 files in my home directory.
If that isn't gratuitous bloat, then I don't know what is.
sgarland
Conda is its own beast tbf. Not saying that Python packaging is perfect, but I struggle to imagine a package pulling in 200K files. What package is it?
presentation
Or we can just commit to building out solar infrastructure and not worry about this rounding error anymore
ksec
Missing 2021 in the title.
I know it is not the exact topic, but sometimes I think we don't need the fastest response time but a consistent response time. Like every single page within the site being fully rendered in exactly 1s. Nothing more, nothing less.
sangeeth96
I think the advice is still very relevant though. Plus, the varying network conditions mentioned in the article would make it difficult, if not impossible, to guarantee a consistent response time. As someone with spotty cellular coverage, I can understand the pains of browsing when you're stuck with that.
ksec
Yes. I don't know how it could be achieved other than having JS render the whole thing and wait until the designated time before showing it all. And that time could depend on the network connection.
But this sort of goes against my no / minimal JS front end rendering philosophy.
mikl
How relevant is this now, if you have a modern server that supports HTTP/3?
HTTP/3 uses UDP rather than TCP, so TCP slow start should not apply at all.
gbuk2013
As per the article, QUIC (transport protocol underneath HTTP/3) uses slow start as well. https://datatracker.ietf.org/doc/id/draft-ietf-quic-recovery...
gsliepen
A lot of people don't realize that all these so-called issues with TCP, like slow start, Nagle, window sizes and congestion algorithms, are not there because TCP was badly designed, but rather because these are inherent problems you get when you want to create any reliable stream protocol on top of an unreliable datagram one. The advantage of QUIC is that it can multiplex multiple reliable streams while using only a single congestion window, which is a bit more optimal than having multiple TCP sockets.
One other advantage of QUIC is that you avoid some latency from the three-way handshake that is used in almost any TCP implementation. Although technically you can already send data in the first SYN packet, the three-way handshake is necessary to avoid confusion in some edge cases (like a previous TCP connection using the same source and destination ports).
gbuk2013
They also tend to focus on bandwidth and underestimate the impact of latency :)
Interesting to hear that QUIC does away with the 3WHS - it always catches people by surprise that it takes at least 4 x latency to get some data on a new TCP connection. :)
hulitu
> How relevant is this now
Very relevant. A lot of websites need 5 to 30 seconds or more to load.
throwaway019254
I have a suspicion that the 30 second loading time is not caused by TCP slow start.
ajross
Slow start is about saving small-integer-numbers of RTT times that the algorithm takes to ramp up to line speed. A 5-30 second load time is an order of magnitude off, and almost certainly due to simple asset size.
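A toy model of the ramp-up (assuming an initial congestion window of 10 packets, ~1460 bytes of payload per packet, and a window that doubles every round trip; real stacks differ in the details and this ignores the TCP/TLS handshakes):

    // How many round trips does classic slow start need to deliver a page?
    function roundTrips(pageBytes: number, initCwnd = 10, mss = 1460): number {
      let delivered = 0, cwnd = initCwnd, rtts = 0;
      while (delivered < pageBytes) {
        delivered += cwnd * mss;  // send a full window, wait for ACKs
        cwnd *= 2;                // the window doubles each round trip
        rtts++;
      }
      return rtts;
    }

    console.log(roundTrips(14 * 1024));   // 1 (fits in the initial window)
    console.log(roundTrips(15 * 1024));   // 2 (just over, so one extra round trip)
    console.log(roundTrips(1024 * 1024)); // 7 (a 1 MB page is a handful of RTTs, not 30 s)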
susam
I just checked my home page [1] and it has a compressed transfer size of 7.0 kB.
Not bad, I think! I generate the blog listing on the home page (as well as the rest of my website) with my own static site generator, written in Common Lisp [2]. On a limited number of mathematical posts [3], I use KaTeX with client-side rendering. On such pages, KaTeX adds a whopping 347.5 kB! Perhaps I should consider KaTeX server-side rendering someday! This has been a little passion project of mine since my university dorm room days. All of the HTML content, the common HTML template (for a consistent layout across pages), and the CSS are entirely handwritten. Also, I tend to be conservative about what I include on each page, which helps keep them small.
[1] https://susam.net/
[2] https://github.com/susam/susam.net/blob/main/site.lisp
[3] https://susam.net/tag/mathematics.html