
Critical CSS

123 comments · May 6, 2025

lelandfe

Nice one. Would be cool if this also handled responsiveness. The need to dedupe responsive critical styles has made me resort to manually editing all critical stylesheets I've ever made.

I also see that this brings in CSS variable definitions (sorry, ~custom properties~) and things like that. Since critical CSS's size matters so much, it might be worth giving an option to compile all of that down.

> Place your original non-critical CSS <link> tags just before the closing </body> tag

I don't recommend doing this: you still want the CSS downloaded urgently, critical CSS is a façade. Moving to the end of body means the good stuff is discovered late, and thus downloaded late (and will block render if the preloader[0] discovers it).

These days I'd recommend:

    <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">

    <noscript><link rel="stylesheet" href="styles.css"></noscript>
[0] https://web.dev/articles/preload-scanner

worble

    <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
Wouldn't this be blocked by a CSP that doesn't allow unsafe-inline?

jy14898

unsafe-hashes is a decent alternative
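
A rough sketch of how that could look, assuming the policy is delivered via a meta tag (the same value works in the Content-Security-Policy response header); the sha256 value is a placeholder computed over the exact handler text this.rel='stylesheet':

    <meta http-equiv="Content-Security-Policy"
          content="script-src 'self' 'unsafe-hashes' 'sha256-PLACEHOLDER_HANDLER_HASH'">
    <link rel="preload" href="styles.css" as="style" onload="this.rel='stylesheet'">
    <noscript><link rel="stylesheet" href="styles.css"></noscript>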

youngtaff

I wouldn’t use the JS hack to load CSS…

When the stylesheet loads and is applied to the CSSOM, it’s going to trigger layout and style calculations for the elements it applies to, maybe even the whole page

Browsers are pretty eager at fetching stylesheets even those at the bottom of the page

lelandfe

That stylesheet application was going to happen anyway; the difference now is that FCP will occur before it.

> Browsers are pretty eager at fetching stylesheets even those at the bottom of the page

Browsers begin fetching resources as they discover them. For a big enough document, that will mean resources placed low in the document are discovered late and suffer for it.

youngtaff

Sure, that work is going to happen, but often what you see is multiple stylesheets loaded using the async hack, which results in multiple style and layout calculations, as the browser can’t coalesce them: it doesn’t know that they’re stylesheets or when they will arrive

The whole philosophy of critical styles being those above the fold is a mistake in my view

Far better to adopt approaches like those recommended by Andy Bell that dramatically reduce stylesheet size

And do critical styles “correctly”, i.e. load those that are needed to render the initial page and load the ones that rely on interactions separately
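
A minimal sketch of that split, with hypothetical file names: a small render-blocking stylesheet for everything the initial page needs, plus interaction-only styles fetched the first time the interactive code runs, so they never compete with the initial render:

    <!-- small and render-blocking on purpose: only what the initial view needs -->
    <link rel="stylesheet" href="page.css">

    <!-- placed after the #menu-button markup, near the end of <body> -->
    <script>
      // hypothetical example: styles for a menu are fetched on first use
      document.querySelector('#menu-button').addEventListener('click', () => {
        const link = document.createElement('link');
        link.rel = 'stylesheet';
        link.href = 'menu.css';
        document.head.appendChild(link);
      }, { once: true });
    </script>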

larodi

The prefetch attribute and other HTTP header hints, combined with proper CDN setups, do almost the same, and would not require critical CSS to be rebuilt nonstop as the page develops. A properly configured CF is insanely fast.

todotask2

> HTTP header hints

I assume you mean either 103 Early Hints or Resource Hints in HTTP/1.1 and 2.0.
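
For reference, the Resource Hints variant is plain markup (a sketch; the host and file names are hypothetical), while 103 Early Hints sends the equivalent Link headers before the final response:

    <link rel="preconnect" href="https://cdn.example.com">
    <link rel="preload" href="/css/styles.css" as="style">
    <link rel="stylesheet" href="/css/styles.css">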

stevenpotts

You are right, I will add this option and probably replace the 'before body' option. The 'DOMContentLoaded' option has worked wonders for me; I even tested it on old phones and slower connections, and it's good enough for UX and Lighthouse.

Gabrys1

+1 on responsiveness

oneeyedpigeon

Feels like premature optimisation to me. Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile? Maybe with the most complex web apps, I guess, but for almost all cases, I would have thought writing clean CSS, HTML, and JavaScript would render this unnecessary or even counterproductive.

dan-bailey

Oh my god, yes, this is useful. I do some freelance dev work for a small marketing agency, and I inherit a lot of WordPress sites that show all the hallmarks of passing through multiple developers/agencies over the years, and the CSS and JavaScript are *always* crufty with years of accumulated bad practices. I'm eager to try this.

bawolff

> Are there really cases where the CSS is so complex or the page loads so many resources that this effort is worthwhile?

On the contrary, the more complex the CSS is or the more resources loaded, the less this would be worthwhile.

The thing I think they are trying to optimize is latency due to RTT. When you first request the HTML file, the browser needs to read it before knowing the next thing to request. This requires a round trip to the server, which has latency (pesky speed of light). The larger your (critical) CSS, the more expensive this optimisation is, so the less likely it is a net benefit.

Gabrys1

I would have paid good money for this tool ~12 years ago. We had a site with enormous amounts of CSS that had accumulated over the years, and it was really unclear which rules were and which weren't critical

korm

The mod_pagespeed filter "prioritize_critical_css" was released exactly 12 years ago in early May 2013. At least 3 more popular critical CSS tools were released the following year, integrating with Grunt, Gulp, and later Webpack.

acjohnson55

For many sites, this probably is a premature optimization. But for sites that live off of click-through, like news/media, getting the text on screen is critical. Bounce rate starts to go up and ad revenue drops as soon as page loads are anything less than "immediate", which is about 1 second. The full page can actually be quite heavy once all the ads, scripts, and media load.

We were doing this optimization more than a decade ago when I worked at HuffPost.

dimmke

Seriously. When I look at the modern state of front-end development, it's actually fucking bonkers to me. Stuff like Lighthouse has caused people to reach for optimizations that are completely absurd.

This might make an arbitrary number go up in test suites, at the cost of massively increasing build complexity and reducing ease of working on the project all for very minimal if any improvement for the hypothetical end user (who will be subject to much greater forces out of the developer's control like their network speed)

I see so much stuff like this, then regularly see websites that are riddled with what I would consider to be very basic user interface and state management errors. It's absolutely infuriating.

rglover

Yup. Give people a number or stat to obsess over and they'll obsess over it (while ignoring the more meaningful work like stability and fixing real, user-facing bugs).

Over-obsession with KPIs/arbitrary numbers is one of the side-effects of managerial culture that badly needs to die.

mediumsmart

It’s just a few meaningful numbers: 0 accessibility errors, an A+ from securityheaders, a flawless result on Webbkoll (5july.net), plus a below-1-second loading time on PageSpeed mobile. Once that has been achieved, obsessing over stabilizing a flaky bloat pudding while patching over bugs (aka features) that annoy any user will have died.

stevenpotts

I know... To be fair, I did test this for my use cases on older phones with throttled, slower connections and it did improve the UX, but I get what you're saying. I think it also depends on your target audience: who cares if your site is poorly graded by Lighthouse if your user base has high-end devices in places with great internet? Not even Google cares, since the Core Web Vitals show up in green

stevenpotts

Oh, writing clean CSS, HTML, and JS is THE WAY TO GO, but you might inherit a messy project, download a template, or even work on a project you coded poorly

leptons

>Feels like premature optimisation to me.

To me thinking about how CSS loads is task #1, but I probably have some unique needs.

We were losing clients due to our web offering scoring poorly on page speed tests. Page speed being part of how a page is ranked can affect SEO (depending on who you ask), so it is very important to our clients. It's not my job to explain how I think SEO works, it's my job to make our clients happy.

I had to design a whole new system to get page speed scores to 100% on Google Lighthouse, which many of our clients were using to test their site's performance. When creating a site optimized for performance, how the CSS and JS and everything loads needs to be thought about before implementing the pages. It can be pretty difficult to optimize these things after-the-fact.

We made pretty much everything on the page load in-line including JS and CSS, and the CSS for what displays "above the fold" loads above the HTML it styles. Everything "below the fold" gets loaded below the fold. No FOUC, nothing blocking the rendering of the page. No extra HTTP requests are made to load any of the content. A lot of the JS "below the fold" does not even get evaluated until it is scrolled into view, because that can also slow down page load speed.

I took all the advice Google Lighthouse was giving me, and implemented our pages in a way that satisfies it completely. It wasn't really that difficult but it required me changing my thinking about how to approach building websites.

We were coming from a system that we didn't control, where they decided to load all of the CSS for the entire website on every page, which was amounting to about 3 to 4 MB of CSS alone, and the Javascript was even worse. There was never an attempt to optimize that system from the start, and now many years later they can't seem to optimize it at all. I won't name that system because we still build on it, but it's a real problem for us when a client compares their SEO and page speed scores to their competitors and then they leave us for our competitors, which score only a bit better for page speed.

If performance is the goal, there is no such thing as premature optimization, it has to be thought about from the start. So far our clients have been very happy about their 100% page speed scores (100% even on mobile), and our competition can't come anywhere close, unless they put in the work and start thinking differently about the problem.

I actually tried the tool that is the subject of this post on my sites, and it wouldn't work - likely because there is nothing to optimize. We simply don't do any HTTP requests for CSS, and the CSS we need "above the fold" is already "above the fold". I tried it on one of the old pages and it did give a result, but I don't need it because we don't build pages like we used to anymore.
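
A minimal sketch of the "below-the-fold JS isn't evaluated until it scrolls into view" approach described above; the element id and script name are hypothetical:

    <!-- placed at the end of <body>, after the #below-fold section exists -->
    <script>
      // inject and evaluate the below-the-fold script only once its section nears the viewport
      const section = document.querySelector('#below-fold');
      new IntersectionObserver((entries, observer) => {
        if (entries[0].isIntersecting) {
          const s = document.createElement('script');
          s.src = '/below-fold.js';
          document.body.appendChild(s);
          observer.disconnect();
        }
      }, { rootMargin: '200px' }).observe(section);
    </script>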

todotask2

When I tested mine, I got the following:

Built on Astro web framework

HTML: 27.52KB uncompressed (6.10KB compressed)

JS: <10KB (compressed)

Critical CSS: 57KB uncompressed (7KB compressed) — tested using this site for performance analysis.

In comparison, many similar sites range from 100KB (uncompressed) to as much as 1MB.

The thing is, I can build clean HTML with no inline CSS or JavaScript. I also added resource hints (not Early Hints, since my Nginx setup doesn't support that out of the box), which slightly improve load times when combined with HTTP/2 and short-interval caching via Nginx. This setup allows me to hit a 100/100 performance score without relying on Critical CSS or inline JavaScript.

If every page adds 7KB, isn’t it wasteful—especially when all you need is a lightweight SPA or, better yet, more edge caching to reduce the carbon footprint? We don’t need to keep transmitting unnecessary data around the world with bloated HTML like Elementor for WordPress.

Why serve users unnecessary bloat? Mobile devices have limited battery life. It's not impossible to achieve a lightning-fast experience once you move away from shared hosting territory.

kijin

Yeah, it's a neat trick but kinda pointless. In a world with CDNs and HTTP/2, all this does is waste bandwidth in order to look slightly better in artificial benchmarks.

It might improve time to first paint by 10-20ms, but this is a webpage, not a first-person shooter. Besides, subsequent page loads will be slower.

aitchnyu

Yup, wherever we deviated from straightforward asset downloads to optimize something, we always ended up slower or buggy. Like manually downloading display images or using websockets to upload stuff. Turns out servers and browsers have had far more person-years of optimization put into them than I can match.

todotask2

And Critical CSS requires reducing the CSP (Content Security Policy), which I have already hardened almost entirely along with Permissions Policy.
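
Concretely, inlining a <style> block means style-src has to allow it in some form, either loosely or via a hash of the exact block contents (a sketch; the hash value is a placeholder, and the same directive can go in the response header instead of a meta tag):

    <!-- loose: allows any inline style -->
    <meta http-equiv="Content-Security-Policy" content="style-src 'self' 'unsafe-inline'">

    <!-- or, tighter: allow only the one inlined critical block, matched by its hash -->
    <meta http-equiv="Content-Security-Policy"
          content="style-src 'self' 'sha256-PLACEHOLDER_STYLE_HASH'">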

nashashmi

Imagine this: before serving the page, a filter seeks out the critical CSS, inserts it, and removes all CSS links, greatly improving page load times and reducing CDN load.

Edit: on second reading, it seems like you are saying that when another page from the same server with the same styles loads, the CSS would have to be reloaded, and this increases bandwidth in cases where a site visitor loads multiple pages. So yes, it is optimal for conditions where the referrer is external to the site.

robotfelix

It's worth noting that including Critical CSS in every page load isn't the only way to use it.

A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already) or only using the Critical CSS technique for pages that commonly come at the start of a session.

todotask2

> A lot of unnecessary bloat can be avoided by only including it when it looks like a user is visiting for the first time (and likely hasn't got the CSS files cached already

I’ve thought about that before but couldn’t figure out the ideal approach. Using a unique session cookie for non-logged in users isn’t feasible, as it could lead to memory or storage issues if a malicious actor attempts a DDoS attack.

I believe this approach also doesn’t work well for static pages, which are likely already hosted close to users.

One useful trick to keep in mind is that CSS content-visibility only applies in certain scenarios. One agency I came across was using an <iframe> for every section, which is a bad idea.

So my conclusion is that mobile-first CSS is generally more practical, along with a PWA, which I'm building now for a site that has lots of listings.
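
For reference, the content-visibility case mentioned above looks roughly like this (the class name and size estimate are hypothetical); it only skips rendering work for markup already in the page, it doesn't avoid downloading anything:

    <style>
      .listing-section {
        content-visibility: auto;           /* skip layout/paint while the section is off-screen */
        contain-intrinsic-size: auto 600px; /* placeholder height so scroll position stays stable */
      }
    </style>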

stevenpotts

I searched online for tools to extract the critical CSS of a website for one of my clients, but I couldn't find one that did the job, so I built this after using Puppeteer locally and decided to share the solution. It lets you specify how long to wait after page load before extracting the styles. I even found a paid one, but requested a refund after it didn't work.

Feedback welcome, it's free for now.

jefozabuss

What was the problem with something like https://www.npmjs.com/package/penthouse ?

robotfelix

It's worth noting that penthouse's last release is a few weeks shy of 3 years ago (https://github.com/pocketjoso/penthouse/releases/tag/v.2.3.3).

Given there seem to be few other Critical CSS tools out there, its utility in driving web performance, and the fact Google's web.dev recommended tool (https://github.com/addyosmani/critical) uses penthouse under the hood, I'm surprised there isn't more effort and/or sponsorship going into helping maintain it.

al_borland

FYI: While a bit of an edge case, as I don’t know why anyone would do this realistically… If a site without CSS is passed, it throws an error.

stevenpotts

interesting! thanks!

promiseofbeans

Is the code somewhere? This seems like it'd be really useful as a Vite/Astro plugin

cAtte_

yeah, doing this manual copy-paste process every time you change something would count as cruel and unusual punishment

stevenpotts

Hehe, I know... I should think about that. I got tons of visits but I'm unsure whether this is worthwhile, since it appears it's not that useful for most people?

indeyets

Is it the UI for penthouse lib? Settings look very similar :)

stevenpotts

It's based on penthouse. Honestly, the most "difficult" part of this was setting up Cloud Run with Docker to get Puppeteer to work and be able to wait whatever amount of time the user needed, plus the Netlify functions. I tried using tools like https://criticalcss.com/generate but they just didn't work, because of the lack of 'waiting' I guess.

lxe

This is a footgun. You'll get a very consistent flash of unstyled content. It's not just an aesthetics issue -- when the layout shifts in the middle of a page load, as your "non-critical" styles are applied, while the user is interacting with something, it will kill your usability.

stevenpotts

Yeap, I had to change some styles after applying this but managed to get CLS to 0 for the websites. I know this isn't useful for everyone, but it was really useful for me, as I have been using templates with tons of libraries for clients with limited budgets. I'm sure there are other use cases where this is useful

zaphodias

Isn't the whole point avoiding FOUC, while also avoiding blocking the rendering on CSS network requests?

lxe

Unless you're sure that the "non-critical" CSS doesn't cause layout shifts (i.e., it doesn't override any "critical" styles), you're gonna see layout shifts even on fast connections if you load some styles at the top of the document and then do a link rel at the bottom.

rtsil

The critical css should cover everything above the fold to avoid that visible reflow.

bawolff

I guess this just assumes that this is the first view of your page and no user has CSS resources cached?

Or maybe they are saying this would always be worth it?

I assume it'd be a trade-off between a number of factors. How many returning vs new visitors? Is the CSS served with proper cache-control headers, 103 Early Hints, and a CDN? How big is your critical CSS, and how much of your critical HTML does it push out of the initial congestion window?

stevenpotts

Yes, it's for the first page view, and it is a trade-off. The best approach is writing all the styles and code yourself, not using libraries, etc.

austin-cheney

When I was doing performance examinations from localhost I found that CSS was mostly inconsequential if written at least vaguely efficiently and requested as early as possible from the HTML. By completely removing CSS I might be able to save up to 7ms of load time, but that was extremely hard to tell because that was well within the variance between test intervals.

https://github.com/prettydiff/wisdom/blob/master/performance...

bawolff

Obviously trying to do an optimization designed to reduce the impact of latency between client <-> server is going to have no impact if you are testing on localhost where latency is already effectively zero.

That's not to say I think this optimization is necessarily worth it, just that testing on localhost is not a good test of this.

kqr

Hm. When I tried this on my site it retained a debugging element that is decidedly not required, but adds a lot of bytes to the CSS:

    body::after{display:none;content:"";background-image:url("data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' width='1' height='29'><rect style='fill: rgb(196,196,196);' width='1' height='0.25px' x='0' y='28'/></svg>");position:absolute;top:23px;left:0px;z-index:9998;opacity:1;width:1425px;height:3693px;background-size:auto 1.414rem;background-position-y:0.6rem}
(It lets me uncheck the "display: none" rule in the developer tools to get a baseline grid overlaid on the site to make sure things line up. They don't anymore because I forgot I had that in there until I saw it now!)

RadiozRadioz

I prefer a different approach: write your HTML in such a way that the page makes sense and is usable without CSS. It's also a good guiding star for your page's complexity; if your document markup is simple, sensible and meaningful, you're probably not overcomplicating your layout.

chipsrafferty

This doesn't really work for sites where reading text left to right, top to bottom is not the primary focus.

GavinAnderegg

Neat idea. I tried it on my site (https://anderegg.ca/) which already inlines its CSS, and got an error from the underlying library (https://www.npmjs.com/package/penthouse):

    {"error":true,"name":"Error","message":"css should not be empty","stack":"Error: css should not be empty\n at module.exports (/usr/src/app/node_modules/penthouse/lib/index.js:206:14)\n at file:///usr/src/app/server.js:153:31\n at process.processTicksAndRejections (node:internal/process/task_queues:95:5)"}

stevenpotts

Will check this case, thanks!

Brajeshwar

I’ve been away for quite a while, so just a loud thinking.

With tools such as PostCSS, and servers serving zipped styles across a CDN while maintaining a single request for the styles, does it really benefit a site to break up the styles these days?

Also, I’m going to assume, besides the core styles that run a website, anything that is loaded later can come in with its specific styles as part of the HTML/JS.

For the critical CSS thing, we used to kinda do it by hand, with some automation, more as a toolset to help us decide how much to include (inside the HTML itself, in a `<style>` tag) and then insert the stylesheet. But then, we always found it better to set a Stylesheet Budget and work with it.

a_gray

> serving zipped styles across CDN

Browser caches haven't been shared across domains for years, i.e. using a shared CDN is no faster than serving the file yourself (usually slower because of extra DNS lookups, but sometimes slightly faster if the CDN's edge is geographically closer and the DNS has already been looked up).

bigbuppo

The performance impact of CDNs are definitely a complicated matter and always have been. They aren't a magic solution to any problems unless you're exceeding the origin's available bandwidth, or are serving up something that should be cacheable but somehow can't live without whatever it is that Elementor does that makes it worth every request taking 75 seconds to complete.

weo3dev

Not a fan.

I'm waiting for the day developers realize the fallacy of sticking with pixels as their measurement for Things on the Internet.

With a deeper understanding of CSS, one would recognize that simply parsing it out for only the components "above the fold" (which, why are pixels being used here in such an assumptive manner?) completely misses what is being used in modern CSS today: global variables, content-centric declarations, units based on character widths, and so many other tools that would negate "needing" to do this in the first place.

stevenpotts

Yes, this is for specific use cases. I don't have a use for this on sites where I coded everything myself; it has been useful for static sites where I downloaded a template that uses libraries, and it's probably useful for people with similar cases.