Serving a half billion requests per day with Rust and CGI
7 comments
simonw · July 7, 2025
I really like the code that accompanies this as an example of how to build the same SQLite-powered guestbook across Bash, Python, Perl, Rust, Go, JavaScript, and C: https://github.com/Jacob2161/cgi-bin
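To give a flavour of what those implementations look like, here is a minimal sketch of a CGI guestbook handler in Rust (to match the article). It is illustrative only: the table schema, database path, and use of the rusqlite crate are assumptions, not code taken from the linked repo.

```rust
// Minimal CGI guestbook sketch (illustrative; the repo's code may differ).
// Assumes the `rusqlite` crate and a writable database path.
use std::env;
use std::io::Read;

use rusqlite::{params, Connection};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Hypothetical database path; a real deployment would keep this
    // outside the web server's document root.
    let conn = Connection::open("/var/lib/guestbook/guestbook.db")?;
    conn.execute(
        "CREATE TABLE IF NOT EXISTS entries (id INTEGER PRIMARY KEY, body TEXT NOT NULL)",
        [],
    )?;

    // CGI passes the request method and body size through the environment
    // and the request body on stdin.
    if env::var("REQUEST_METHOD").ok().as_deref() == Some("POST") {
        let len: usize = env::var("CONTENT_LENGTH")?.parse()?;
        let mut raw = vec![0u8; len];
        std::io::stdin().read_exact(&mut raw)?;
        // A real handler would decode the form encoding; stored raw here
        // to keep the sketch short.
        let body = String::from_utf8_lossy(&raw).into_owned();
        conn.execute("INSERT INTO entries (body) VALUES (?1)", params![body])?;
    }

    // A CGI response is just headers, a blank line, then the body on stdout.
    print!("Content-Type: text/plain\r\n\r\n");
    let mut stmt = conn.prepare("SELECT body FROM entries ORDER BY id DESC LIMIT 20")?;
    let rows = stmt.query_map([], |row| row.get::<_, String>(0))?;
    for entry in rows {
        println!("{}", entry?);
    }
    Ok(())
}
```

The whole CGI contract is visible in that handful of lines: the request arrives via environment variables and stdin, and the response is whatever the process writes to stdout before it exits.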
Bluestein
This is a veritable Rosetta stone of a repo. Wow.
shrubble
In a corporate environment, for internal use, I often see egregiously overspecced VMs or machines for sites that see very low requests per second. One commercial monitoring app runs on K8s with 3 VMs of 128GB RAM each just to monitor 600 systems, which works out to roughly 640MB of RAM per monitored system (3 × 128GB / 600), basically just to poll each one every 5 minutes and draw some pretty graphs. Of course it has a complex app server integrated into the web server and so forth.
andrewstuart
How meaningful is “per day” as a performance metric?
diath
Not at all; it may be a useful marketing metric, but not a performance one. The average load doesn't matter when your backend can't handle the peaks.
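For scale: half a billion requests per day averages out to a bit under 5,800 requests per second, but the peak rate is what the backend actually has to survive. A rough back-of-the-envelope sketch in Rust, where the 3x peak-to-average factor is an assumed illustration rather than a figure from the article:

```rust
// Back-of-the-envelope conversion from a daily request count to request
// rates. The peak-to-average ratio is made up; real traffic shapes vary.
fn main() {
    let requests_per_day: f64 = 500_000_000.0;
    let seconds_per_day: f64 = 86_400.0;

    let average_rps = requests_per_day / seconds_per_day;
    // Hypothetical peak factor: many sites see peaks several times the mean.
    let assumed_peak_factor = 3.0;
    let assumed_peak_rps = average_rps * assumed_peak_factor;

    println!("average:      {:.0} req/s", average_rps);      // ~5787 req/s
    println!("assumed peak: {:.0} req/s", assumed_peak_rps);  // ~17361 req/s
}
```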
null
Honestly, I'm just trying to understand why people want to return to CGI. It's cool that you can fork+exec 5,000 times per second, but if you don't have to do that at all, isn't that significantly better? Plus, with FastCGI, it's trivial to give the application server and the web server separate privileges. The CGI model may still work fine, but it's an outdated execution model that we left behind for more than one reason, not just security or performance. I can absolutely see the appeal in a world where a lot of people are on cPanel shared hosting and the like, but in the modern era, when many are running unmanaged Linux VPSes, you may as well just set up another service for your application server.
Plus, honestly, even if you are relatively careful and configure everything perfectly correctly, having the web server execute stuff in a specific folder inside the document root just seems like a recipe for problems.
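To make the privilege-separation point concrete: in the persistent app-server model, one long-lived process runs as its own unprivileged user, outside the document root, and the web server only proxies to it. Below is a minimal Rust sketch of that shape; the socket path and the canned reply are hypothetical, and a real deployment would speak FastCGI or proper HTTP rather than ignoring the request.

```rust
// Sketch of the persistent app-server model contrasted with CGI above:
// one long-lived process, started by the init system as its own
// unprivileged user, listening on a Unix socket the web server proxies to.
use std::io::{Read, Write};
use std::os::unix::net::UnixListener;

fn main() -> std::io::Result<()> {
    // The socket, the binary, and any database it opens all live outside
    // the web server's document root, so none of them are fetchable.
    let socket_path = "/run/guestbook/app.sock";
    let _ = std::fs::remove_file(socket_path);
    let listener = UnixListener::bind(socket_path)?;

    for stream in listener.incoming() {
        let mut stream = stream?;
        // Read (and ignore) the proxied request, then answer with a fixed
        // response; a real server would parse HTTP or FastCGI here.
        let mut buf = [0u8; 4096];
        stream.read(&mut buf)?;
        stream.write_all(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")?;
    }
    Ok(())
}
```

The web server then only needs permission to connect to the socket; it never executes anything from its document tree, which is the separation being described.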