Remember FastCGI? (2021)
59 comments
April 8, 2025
knagy
I went down this rabbit hole a while ago[1], it's a fun experience to try out technologies that nowadays are considered unconventional or outdated.
[1]: https://deadlime.hu/en/2023/11/24/technologies-left-behind/
ryao
If you find yourself with a scripted language where processing HTTP requests might be too slow or unsafe, I can still see some utility for FastCGI. For most of the rest of us, HTTP won, just write little HTTP webservers.
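For a sense of scale, a "little HTTP webserver" in the sense above can be a few lines of Python stdlib. A sketch, not production-grade (handler and port are arbitrary):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Hello(BaseHTTPRequestHandler):
    def do_GET(self):
        # every GET gets the same fixed body; real apps would route on self.path
        body = b"hello\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# to run: HTTPServer(("", 8080), Hello).serve_forever()
```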
Without serious development effort, I would expect using an existing web server with FastCGI to be faster than writing your own web server. It is also more secure, as the FastCGI application can be run as a different user in a chroot, or in a namespace-based sandbox like a Docker container.
zokier
Using HTTP does not preclude having a reverse proxy. So the only difference between FastCGI and HTTP is really what protocol your proxy and your application use, and I don't really see why FastCGI would be major win there.
ryao
Given that the article’s comparison is between an HTTP server with FastCGI and writing a little HTTP server, the web server with FastCGI will scale better, contrary to what the article suggests. It would also likely be more secure. Reverse proxies are tangential to this.
usef-
I was thinking that.
Was FastCGI a child of a world where we had neither good library use ("import net/http"), nor (much) layering in front of the server (balancers / cdns / cloudflare etc). So it made sense to assume a production-level layer on the box itself was always needed?
I remember the vigorous discussions comparing security of Apache vs IIS etc
phire
Partly.
But I suspect it's more that CGI was the way things had always been done. They didn't even consider doing a reverse proxy. They asked the question "how do we make CGI faster" and so ended up with FastCGI.
Other developers asked the same question and ended up making mod_php (and friends), embedding the scripting language directly into the web server.
chasd00
Iirc most of the content was static html/css in those days. Running code on a request was rare so cgi was like a bolt on to a static content server. It was available but not the norm. Perl and php gradually made it the norm to run code on every request.
mdpye
I remember in the very early days as a hobbyist working with cgi perl scripts for forums or guest books where the script just edited the "static" content in place.
The script would write new html files for new posts and do "fun" (I mean, terrifying) string manipulation on the main index to insert links to posts etc. Sometimes they used comments with metadata to help "parse" pages which would see edits.
These both were, and definitely were not, "the days" :D
BobbyTables2
Sure beats maintaining a custom webserver written in C!
assimpleaspossi
This is exactly why my web dev company used FastCGI for all our development and would still today if I didn't retire it just two years ago.
Wherever we needed faster interaction, FastCGI did the job and allowed us to interface with anything in the backend including our C programs.
ryao
Why did you retire it?
assimpleaspossi
My biggest customer was a national restaurant chain that thought they should have it all in house. So far, a big mistake.
My second biggest customer went out of business. Which left us with a bunch of itty-bitty businesses that I'm just too tired to be chasing after.
myaccountonhn
I just this week implemented an SCGI server; it's even easier than FastCGI, and the performance difference is arguably negligible.
The parser ended up being 30 lines of code.
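The format really is that small: SCGI headers are NUL-separated key/value pairs wrapped in a netstring, followed by the raw body. A rough Python sketch of such a parser, assuming the whole request has already been read from the socket:

```python
def parse_scgi(data):
    # netstring wrapper: b"<len>:<headers>,<body>"
    length, _, rest = data.partition(b":")
    n = int(length)
    raw, remainder = rest[:n], rest[n + 1:]      # skip the trailing comma
    parts = raw.split(b"\x00")[:-1]              # NUL-terminated key/value pairs
    headers = {parts[i].decode(): parts[i + 1].decode()
               for i in range(0, len(parts), 2)}
    body = remainder[:int(headers.get("CONTENT_LENGTH", "0"))]
    return headers, body
```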
NoboruWataya
Those of us with an interest in Gemini (the small web protocol, not the AI thing) will have become familiar with CGI and its variants again, as they are commonly used to serve dynamic content over that protocol. Another variant is SCGI[0]. It doesn't seem to have ever been nearly as popular as FastCGI, but seems to be more commonly supported by Gemini servers due to its simplicity.
zokier
It's baffling that FastCGI was ever popular, considering how weird its premise is: let's take this well-known, simple, text-based standard protocol and convert it to our own custom binary protocol; that will somehow make writing services simpler?! You are still dealing with all the complexity of writing a daemon; the only difference is that instead of parsing HTTP you parse FastCGI.
Maybe today, with all the complexity of HTTP/2 and HTTP/3, it would make slightly more sense. But FastCGI was popular when all the world was just good old plain HTTP/1.
jagged-chisel
I (the user of the FastCGI library) don’t parse anything. I use the library. Whether it does HTTP or some custom protocol is irrelevant to my coding task. Might be relevant for performance, but who’s building anything big enough to worry about that? ;-)
BobbyTables2
I think it still makes sense if one wants all the fancy bells & whistles of Apache without reimplementing them in their own webserver.
In some sense, implementing a full webserver just because the world standardized on a horribly inefficient way of handling CGI is pretty silly.
We don’t rewrite “cp” and such just because we want to copy multiple files quickly…
Of course, if the sole purpose is just to handle CGI types of things, then the custom embedded webserver likely makes more sense. Apache is a horribly complex beast.
geenat
Rails, PHP, a lot of Python stuff (WSGI is CGI wrapped in a dict).
It's not just about separation of concerns, but also separation of crashes/bugs/issues. FastCGI servers can run for years without restarts.
Thread creation/teardown/sleeping has also gotten a lot faster in Linux.
https://php-fpm.org/about/ it may be old but PHP-FPM is still one of the best FastCGI servers from a pragmatic point of view... ex: the ability to gracefully hot reload code, stop and start workers without losing any queries... all in production.
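As a rough illustration of the knobs involved, a PHP-FPM worker pool is configured along these lines (the values here are illustrative, not recommendations):

```ini
[www]
listen = /run/php/php-fpm.sock
pm = dynamic               ; spawn workers on demand, within the bounds below
pm.max_children = 20
pm.start_servers = 4
pm.min_spare_servers = 2
pm.max_spare_servers = 8
pm.max_requests = 500      ; recycle a worker after N requests to contain leaks
```

Reloading the master process then cycles workers without dropping in-flight requests, which is the graceful-reload behavior mentioned above.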
tacker2000
PHP-FPM is powering a huge chunk of the web, starting with the large amount of Wordpress sites and then much more…
miggol
Similar protocols like ASGI for Python are still ubiquitous. I assume because they offer separation of concerns and as a result good performance.
You would think the separation argument would still hold true for compiled languages, even if performance is no longer as relevant.
zokier
I don't think interfaces like ASGI/WSGI are that similar to FastCGI. The crucial difference is that FastCGI is a protocol: something you need to do IO for and, crucially, something you need to parse. In contrast, ASGI and friends, by virtue of being in-process and specific to the language, can provide a true API. In many ways FastCGI is closer to HTTP than to ASGI.
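To illustrate the in-process point: on the WSGI side there are no bytes to parse at all; the "protocol" is just a Python calling convention. A minimal sketch:

```python
def app(environ, start_response):
    # environ is a plain dict the server already parsed for us;
    # no FastCGI records or HTTP framing in sight
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello " + environ.get("PATH_INFO", "/").encode()]
```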
miggol
That does make sense: an application server like Gunicorn talks ASGI with the application, but Gunicorn itself speaks plain old HTTP with the outside world (or reverse proxy). No more need for FastCGI inbetween.
That is in line with what the article is saying. Thanks for clarifying.
whalesalad
WSGI and FastCGI are not mutually exclusive. They talk to each other. They can both be part of the same stack.
lmz
There are also other modes/roles of FastCGI: Authorizer and Filter which may be interesting: https://fastcgi-archives.github.io/FastCGI_Specification.htm...
Not sure I've ever seen Filter in real life.
oso2k
Lots of people are forgetting the context in which FastCGI was conceived: servers were expensive uniprocessor machines, and enterprise servers with 2 or 4 sockets were extremely expensive. All processors at the time paid heavier penalties for context switching between processes and/or creating them. So the optimizations folks building web servers made were things like preforking the front end in Apache and/or preforking the middle end (FastCGI, WSGI, SCGI, Java app servers, et al). The database was Oracle, MySQL, or SQL Server, which was already multi-client. Also, in *nix land, the popular scripting languages of the time (Perl, PHP, bash) had a ways to go in reducing their startup times.
erincandescent
The thing (Fast)CGI has that HTTP proxying doesn't (and that lots of web frameworks/libraries a bit too tied to HTTP, like Go's net/http, don't) is the SCRIPT_NAME (path processed so far) / PATH_INFO (path left to handle) distinction.
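The distinction is easy to sketch: each time a router hands a request down to a sub-application, one consumed path segment moves from PATH_INFO to SCRIPT_NAME. A hypothetical WSGI-style helper (the function name is mine):

```python
def shift_path(environ):
    """Move one path segment from PATH_INFO to SCRIPT_NAME."""
    path = environ.get("PATH_INFO", "")
    if not path or path == "/":
        return None  # nothing left to route on
    segment, _, rest = path.lstrip("/").partition("/")
    environ["SCRIPT_NAME"] = environ.get("SCRIPT_NAME", "") + "/" + segment
    environ["PATH_INFO"] = "/" + rest if rest else ""
    return segment
```

After two shifts, a sub-app mounted at /app/users sees SCRIPT_NAME="/app/users" and PATH_INFO="/42", so it can both parse its own arguments and generate correct URLs back to itself.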
zoobab
Python has cgi-bin support, I made a proxy of the poor like this:
$ mkdir cgi-bin
$ cat > cgi-bin/proxy.sh <<'EOF'
#!/bin/bash
echo -e "Content-type: text/html\n"
curl -s -k http://www.zoobab.com -o -
EOF
$ chmod +x cgi-bin/proxy.sh
$ python -m http.server --cgi 8000 &
$ curl http://localhost:8000/cgi-bin/proxy.sh
You should get the html page of http://www.zoobab.com
smittywerben
Python removed the cgi module, calling it a "dead battery".
> cgi — Common Gateway Interface support Deprecated since version 3.11, removed in version 3.13.
ryao
That is irrelevant as he is talking about the Python http server that runs CGI executables, not using Python as a CGI executable. As for the latter, it is still an option:
smittywerben
Just pip uninstall pep 206 interface then connect execute this remote code repo that the community uses into your supply chain and pip install pep 206 that's the secure way to run the http test server in production. Like just break the source code that works by removing it because "just parse the syntax tree to override it that's the correct way" just pip install the cgi interface specification from the back alley it's better than paying Oracle to keep your code working because you can just override the interface the right way then lets pull out lincoln logs out because nobody can figure out how the fuck to install the real webserver or parse the AST of the test server to use their per-process GIL locked python interpreter and then when they've almost fixed that lets pull out the http test server too then we're ready to write some more code!
Signed, almost dead battery
at_a_remove
A shame, I used to use cgi all the time, back when I did web stuff. I wouldn't know what to do now, especially on IIS. Never did understand why I ought to want or need WSGI, other than I "ought to." Nor did I see how I was supposed to code against it, so I simply did cgi. It never raised a problem for me.
mxuribe
When i started dabbling in python back in the day, and when i had a need for cgi-bin sort of functionality (for a simple web front-end), i think it was the sunsetting of python cgi's popularity and the rise of WSGI... so there were fewer blog posts out there showing best practices around cgi, and more on wsgi... and to me it often felt like i was doing more unnecessary work via wsgi... but then, maybe i'm old and stodgy, and came from the php world prior to dabbling in python? Then again, even though i feel i am mentally sharper now than my younger self... it sure feels like back in the day i was able to be productive sooner... and nowadays there's just so much setup and harnesses to begin with.
whalesalad
The first startup I ever worked for used Python server pages, .psp file extension.
mhd
Sure, but does anyone remember SCGI?
dfox
In my opinion that was a step in the right direction, but the protocol design is truly weird. Somehow managing to come up with 4 different ways to encode the length of byte strings in an otherwise simple protocol is a remarkable achievement.
immibis
nginx supports four backend protocols out of the box: HTTP, FastCGI, SCGI, and static files.
Of these, if you get to pick one and the request isn't for a static file, SCGI is the obvious best choice.
You can also load extra plugin modules into nginx itself, of course, including one that puts a Lua interpreter inside nginx (mod_php-style).
FastCGI is a protocol you can use between a reverse proxy and a back end. It's better for this than HTTP because it passes proxy-generated metadata out of band from client-generated metadata, so you can't have vulnerabilities like fake X-Forwarded-For. It's also strictly defined so you can't have request parsing differences or request smuggling.
If you're currently using a reverse proxy, did you remember to make sure that your proxy always deletes X-Forwarded-For from the client, always adds its own, OR that the backend always ignores it? And you have to do this for each piece of metadata you expect to come from the proxy. With FastCGI this is not needed.
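Concretely, with nginx the proxy-generated values travel as FastCGI params rather than as headers the client could also supply; something along these lines (the location and socket path are illustrative):

```nginx
location /app/ {
    include fastcgi_params;                  # standard CGI-style variables
    fastcgi_param REMOTE_ADDR $remote_addr;  # out of band: no client header can override it
    fastcgi_pass unix:/run/app.sock;
}
```

The backend reads REMOTE_ADDR from the params, never from anything the client sent, so there is no header to forget to strip.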
I chose SCGI instead of FastCGI, though, since nginx doesn't support multiplexing and I don't use large request bodies. SCGI not supporting multiplexing makes it much simpler to write a back end. You just accept a new socket and fork for each request.
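"Accept and fork" really is the whole server loop. A minimal Unix-only sketch in Python (the socket path and handler are placeholders):

```python
import os
import socket

def serve(handler, path="/tmp/app.sock"):
    # one process per request: accept, fork, handle, exit (no multiplexing)
    srv = socket.socket(socket.AF_UNIX)
    srv.bind(path)
    srv.listen(16)
    while True:
        conn, _ = srv.accept()
        if os.fork() == 0:          # child: handle exactly one request
            handler(conn)
            conn.close()
            os._exit(0)
        conn.close()                # parent: drop its copy of the fd
```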
By the way, FastCGI wasn't designed as "binary HTTP" as implied by some sibling comments, but rather "CGI over a socket". It passes the environment variables the CGI program would have had, and multiplexes its stdin, stdout and even stderr. SCGI is the same but without multiplexing or stderr.
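The "CGI over a socket" framing shows up in the wire format: every FastCGI message starts with the same 8-byte header, and stdin/stdout/stderr are just record types multiplexed over one connection. A sketch of parsing one record, following the header layout in the FastCGI 1.0 spec:

```python
import struct

# 8-byte record header: version, type, requestId, contentLength,
# paddingLength, reserved (the trailing "x")
FCGI_HEADER = struct.Struct("!BBHHBx")
FCGI_STDIN, FCGI_STDOUT, FCGI_STDERR = 5, 6, 7  # stream record types

def parse_record(buf, offset=0):
    version, rtype, req_id, clen, plen = FCGI_HEADER.unpack_from(buf, offset)
    content = buf[offset + 8 : offset + 8 + clen]
    next_offset = offset + 8 + clen + plen  # padding keeps records aligned
    return rtype, req_id, content, next_offset
```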
Author complains about having to use a reverse proxy at all, which is fine for prototyping, but I have about 5 domains pointed at the same server, and multiple apps on some domains, so why wouldn't I use a reverse proxy to route those requests? And yes, I run the same nginx reverse proxy on my development machine for testing.