
The appeal of serving your web pages with a single process

kevmo314

Wholly agree. Too often we think about scale far too early.

I've seen very simple services get bogged down in needing to be "scalable" so they're built so they can be spun up or torn down easily. Then a load balancer is needed. Then an orchestration layer is needed so let's add Kubernetes. Then a shared state cache is needed so let's deploy Redis. Then we need some sort of networking layer so let's add a VPC. That's hard to configure though so let's infra-as-code it with terraform. Then wow that's a lot of infrastructure so let's hire an SRE team.

Now nobody is incentivized to remove said infrastructure because now jobs rely on it existing so it's ossified in the organization.

And that's how you end up with a simple web server that suddenly exploded into costing millions a year.

AlotOfReading

In a former job, I wrote a static PWA to do initial provisioning for robots. A tech would load the page, generate a QR code, and put it in front of the camera to program the robot.

When I looked into having this static page hosted on internal infra, it would have also needed minimum two dedicated oncalls, terraform, LB, containerization, security reviews, SLAs, etc.

I gave up after the second planning meeting and put it on my $5 VPS with a letsencrypt cert. That static page is still running today, having outlived not only the production line, but also the entire company.

Aurornis

> When I looked into having this static page hosted on internal infra, it would have also needed minimum two dedicated oncalls, terraform, LB, containerization, security reviews, SLAs, etc.

In my experience there are two kinds of infrastructure or platform teams:

1) The friendly team trying to help everyone get things done with reasonable tradeoffs appropriate for the situation

2) The team who thinks their job is to make it as hard as possible for anyone to launch anything unless it satisfies their 50-item checklist of requirements and survives months of planning meetings where they try to flex their knowledge on your team by picking the project apart.

In my career it’s been either one or the other. I know it’s a spectrum and there must be a lot of room in the middle, yet it’s always been one extreme or the other for me.

stackskipton

Having worked as the Ops person at the second kind of place, most of the time it ends up like that because Ops becomes a dumping ground, so they throw up walls in a vain attempt to stem the tide. Application throwing error logs? Page out Ops. We made a mistake in our deploy? Oh well, page out Ops so they can help us recover. Security has found a vulnerability in a Java library? Sounds like an Ops problem to us.

roncesvalles

Also, I think people vastly overestimate how much uptime their application really needs and vastly underestimate how reliable a single VPS can be.

I currently have VPSes running on both low-end and big cloud providers that have been running for years with no downtime except when they restart for updates.

ajayvk

Having a single process web/app server simplifies things operationally. I am building https://github.com/claceio/clace, which is an application server for teams to deploy internal tools. It runs as a single process, which implements the webserver (TLS certs management, request routing, OAuth etc) as well as an app server for deploying apps developed in any language (managing container lifecycle, app upgrades through GitOps etc).
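A rough illustration of the single-process idea (this is not Clace's actual implementation, just a sketch using Python's standard library): one process can own the listening socket, the routing table, and the handlers, with no load balancer or orchestration layer in sight.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# All routing lives in this one process: a dict from path to handler.
ROUTES = {
    "/": lambda: "home",
    "/health": lambda: "ok",
}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        handler = ROUTES.get(self.path)
        if handler is None:
            self.send_error(404)
            return
        body = handler().encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep request logging quiet
        pass

# Bind to an ephemeral port and serve from a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/health").read()
server.shutdown()
```

A real single-process app server adds TLS termination, auth, and deploys on top, but the shape is the same: one process, one address, everything in memory.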

benoau

Shed a tear for Heroku: they made all of this go away such a long time ago, but ultimately squandered their innovation and the roughly decade-long lead they had over anyone else thinking in this fashion.

bigfatkitten

Salesforce is what happened to Heroku.

the__alchemist

Could you please clarify? I haven't noticed any impact to Heroku on my web applications; it... just works, anecdotally. They send periodic mandatory upgrade emails re the database and application stack, but they have been harmless so far, going back a decade.

benoau

They went from leading and pioneering horizontal scalability, database deployment, scaling, and orchestration to "quiet-quitting" 15 years ago and doing almost nothing ever since. Today they're barely worthy of mention in any discussion of any tech that solves these problems.

Lord_Zero

I guess it depends on what world you live in. For example, using ASP.NET Core, I just drop in this https://learn.microsoft.com/en-us/aspnet/core/performance/ra... and boom, I have rate limiting and I do not have to stress about threads or state or whatever.

geewee

That's almost certainly per server instance, though; there's no mention of any kind of synchronization across multiple instances. So if you e.g. run many small instances, or run the service as a lambda, I'd be surprised if it worked the way you expected.

colonCapitalDee

ASP.NET Core truly is a joy to work with

remram

It's easy to have two implementations of your rate-limiting thing, in-process and Redis. Or change your implementation when you need it. Just put a nice interface in front of it.

There is a cost to the network synchronisation, so you definitely want to scale vertically until you really must scale horizontally.
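A minimal sketch of that interface idea in Python (names and the token-bucket choice are mine, not from the thread): callers only see `allow()`, so an in-process implementation can later be swapped for a Redis-backed one with the same signature.

```python
import time
from typing import Protocol

class RateLimiter(Protocol):
    """The interface callers depend on; implementations are swappable."""
    def allow(self, key: str) -> bool: ...

class InProcessRateLimiter:
    """Token bucket kept in a plain dict -- fine for a single process."""
    def __init__(self, rate: float, burst: int):
        self.rate = rate      # tokens refilled per second
        self.burst = burst    # bucket capacity
        self.buckets: dict[str, tuple[float, float]] = {}

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        tokens, last = self.buckets.get(key, (float(self.burst), now))
        # Refill proportionally to elapsed time, capped at the burst size.
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[key] = (tokens, now)
            return False
        self.buckets[key] = (tokens - 1, now)
        return True

# A Redis-backed class exposing the same allow() method (e.g. built on
# INCR/EXPIRE or a Lua script) could replace this without touching callers.
limiter: RateLimiter = InProcessRateLimiter(rate=10, burst=2)
results = [limiter.allow("client-1") for _ in range(3)]  # third call exhausts the burst
```

The point of the Protocol is exactly remram's: you pay for network synchronization only on the day you actually need it.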

stana

Would love to explore this in Python. But would it be correct to assume a single-process service would not be as performant due to the GIL?

simonw

The GIL means you only get to use a single core for your code that's written in Python, but that limit doesn't hold for parts that use C extensions which release the GIL - and there are a lot of those.

The GIL is also on the way out: Python 3.13 already shipped the first builds of "free threading" Python and 3.14 and onwards will continue to make progress on that front: https://docs.python.org/3/howto/free-threading-python.html

And honestly, a Python web app running on a single core is still likely good for hundreds or even thousands of requests a second. The vast majority of web apps get a fraction of that.
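The "C extensions release the GIL" point is easy to demonstrate with the standard library: `zlib.compress` drops the GIL while deflating, so a thread pool can genuinely overlap compression work even on a GIL-enabled build. A small sketch:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Eight large, highly compressible payloads.
payloads = [bytes([i]) * 1_000_000 for i in range(8)]

# zlib.compress releases the GIL during the actual deflate, so these
# worker threads can run on multiple cores concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    compressed = list(pool.map(zlib.compress, payloads))

# Round-trip to confirm the parallel results are correct.
roundtripped = [zlib.decompress(c) for c in compressed]
```

The same applies to hashing, much of NumPy, database drivers, and anything else that spends its time in C rather than in the bytecode interpreter.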

nogul

I’ve been running essential production systems with Python's uvicorn and 1 worker reliably, mainly because I get to use global state and the performance is more than fine. It’s much better than having to store global state somewhere else. Just make sure you measure the usage and choose a different design when you notice it becoming a problem.
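A minimal sketch of that single-worker pattern, assuming a plain ASGI app (uvicorn would run it with something like `uvicorn app:app --workers 1`): with exactly one worker process, a module-level dict is a consistent place for state, because every request sees the same memory.

```python
import asyncio
import json

# Module-level state -- safe to rely on precisely because there is
# a single worker process, so every request sees this same dict.
STATE = {"hits": 0}

async def app(scope, receive, send):
    """A bare ASGI app that counts requests in global state."""
    assert scope["type"] == "http"
    STATE["hits"] += 1
    body = json.dumps(STATE).encode()
    await send({"type": "http.response.start", "status": 200,
                "headers": [(b"content-type", b"application/json")]})
    await send({"type": "http.response.body", "body": body})

# Exercise the app directly with a fake ASGI cycle -- no server needed.
async def call_once():
    sent = []
    async def send(msg):
        sent.append(msg)
    async def receive():
        return {"type": "http.request", "body": b""}
    await app({"type": "http"}, receive, send)
    return sent

for _ in range(3):
    messages = asyncio.run(call_once())
```

The moment you add a second worker, each process gets its own `STATE` and the counts diverge, which is exactly the trade being made.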

fsckboy

what was that superfast web server, opensource of some sort, from about 25 years ago, single process, single thread? it just raced around a loop taking care of many queued i/o streams

wmf

You might be thinking of Nginx.

Today I would not recommend single threading since it won't be able to use multiple cores.
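The loop fsckboy describes -- one thread multiplexing many queued I/O streams, the pattern servers like thttpd and early nginx popularized -- looks roughly like this with Python's `selectors` module. A sketch of the event-loop shape, not a production server:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# One non-blocking listening socket, registered with the selector.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data="accept")

def run_once():
    """One pass of the loop: service whichever sockets are ready."""
    for key, _ in sel.select(timeout=1):
        if key.data == "accept":
            conn, _ = key.fileobj.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ, data="client")
        else:
            data = key.fileobj.recv(4096)
            if data:
                key.fileobj.sendall(data.upper())  # "handle" the request
            else:
                sel.unregister(key.fileobj)
                key.fileobj.close()

# Drive the loop with one client to show it works.
client = socket.create_connection(listener.getsockname())
run_once()              # first pass: accepts the connection
client.sendall(b"ping")
run_once()              # second pass: reads and echoes, uppercased
reply = client.recv(4096)
```

A real server would keep per-connection write buffers and parse requests incrementally, but the core is the same: one thread, one `select` loop, many streams.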

j45

Ironically, in a good way - simple scales, complex fails.

tryauuum

and no race conditions whatsoever

Deebster

Well, unless you're using threads.

Or did I miss the sarcasm?

AStonesThrow

The filmmakers of “A Quiet Place” or the “Terminator” franchises should employ cks as a technical consultant

theideaofcoffee

Then why not go to the extreme and just build a static version and serve that? Why do you need any dynamic content at all? Then you won't need shared state, then you won't need a database, then you won't need 90% of the stuff that dynamically driven pages need. Be the l33t h4x0r you are and just go static, save the complexity for your build process, because that shows the true hacker spirit. Hell, you may even be able to wring out another few blog posts from that.

Also, why are people submitting every single post from this blog recently? Does this person actually do any work at UToronto, or is he just paid to write? There are -8000- links to various pages under this domain. I hope it's just a collective pseudonym like Nicolas Bourbaki and one person didn't write 8000 pages.

I'm desperate to use some of the insights from a navel-gazing university computing center in my infrastructure: IPv6 NAT (huh? what? What?!), custom config management driven by pathological NIH (I know precisely zilch about anything at utcc but I can already say with 100% confidence that your environment isn't special enough to do that), 'run more fibers', 'keep a list of important infrastructure contacts in case of outages', 'i just can't switch away from rc shell', and that's just in the last six months. On second thought, I'll just avoid all links to here in the future to save my sanity.

no_protocol

The traditional pattern seen here of serving pages under a hierarchy called `~cks` indicates this is the personal site of someone affiliated with the university. Unless otherwise noted, you should probably assume all the content is from "cks", not an army of dozens of coders.

DaSexiestAlive

Single-thread could be a thing if it's a full stack all sitting in a web browser, like Dioxus is going toward.

If a web browser is in a glorified chromebook like a 2025 Macbook Air, indeed there's a lot of breathing room. A lot of ram. Processing power. Cores. It's nice. I get that.

And then you can do off-line first: meaning use the cached local storage available to WASM apps.

Then whatever needs to go to the mother ship, then call web apis in the cloud.

That would, in theory, basically give power back from the "net PC theory of things" to the "fat client", if you ask the grey-haired nerds among you. And you would gain something.

But outside of a glorified chromebook like a 2025 Macbook Air--we have to remember that we are working with all kinds of web devices--everything from crap phones to satellite servers with terabytes of ram--so the scalability story as we have it isn't entirely wrong.

I have been to U of Toronto, very smart people. But honestly this is a troll piece. It doesn't go into any depth and it's one-sided. Unhelpful. I think U of Toronto's reputation would be better served by something more sophisticated than this asinine blog entry.