Show HN: I'm rewriting a web server written in Rust for speed and ease of use
63 comments
· October 21, 2025
Etheryte
I know it's not popular to care about these things these days, but please consider a different installation mechanism than curl piped into sudo bash. It's irresponsible and normalizes a practice that never should've happened.
maccard
I care about these things.
But this is overblown. What’s your threat model here? You’re downloading a random thing from the internet and executing it. 99% of people are on single-user machines, so root access barely matters: you’re screwed just by executing the thing if it’s malicious. Doing this is no worse than installing and running a random deb, or running npm install.
QuantumNomad_
They do offer other installation methods already.
Installation via package managers (Debian/Ubuntu), using repo provided by ferron
https://ferron.sh/docs/installation/debian
Installation as a Docker container
https://ferron.sh/docs/installation/docker
And more.
Etheryte
That's true, but that's not what's front and center. Curl-sudo-bash is the first thing you see on the site, all the other options are close to the bottom of the page. Defaults matter and people tend to use whatever is the first option presented to them unless they have a good reason to do otherwise.
olalonde
> using repo provided by ferron
This poses a similar security risk to executing the "curl-sudo-bash".
olalonde
Can we stop with this nonsense already? If you trust them enough to run their server code, why wouldn't you trust them with the installation script?
earthnail
Because untrustworthy websites can piggyback on the brand name.
"Download ffmpeg here: sudo bash -c ..."
And then the installation script from our malicious site installs ffmpeg just fine, plus some stuff you have no idea about. And you never know that you've just been hacked.
dns_snek
Can you repeat this mental exercise for every other installation method you can think of? e.g. distributing deb/rpm files, distributing AppImages, asking users to add your custom repository and signing key?
(Yes I know that the last one has built-in benefits for automatic updates but that's not going to protect you on initial installation and its benefits can be replicated in a more portable way in any other auto-update mechanism with a similar amount of effort)
((And if you have the patience to set up a custom repository, you can simplify the initial installation process using a "curl|bash" script))
QuantumNomad_
If you get your install instructions from an untrustworthy website, there’s nothing preventing them from telling you to use a third-party apt repository or ppa that gives you a malicious version of the thing.
There’s not really a difference between curl piped to bash and installing packages from a third-party package repository that the distro maintainers have no involvement with.
adastra22
I don’t trust them enough to run as root.
udev4096
But you have to. Nginx, Caddy, Traefik, etc. cannot run without root, and even where you can avoid it, it would be way more limiting.
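The root requirement above mostly comes down to binding the privileged ports 80/443: on Linux, ports below 1024 need root or the CAP_NET_BIND_SERVICE capability (which is what `setcap` grants in some distro packages). A minimal Rust sketch of the difference — the outcome of the `:80` bind depends on the privileges of whoever runs it:

```rust
use std::net::TcpListener;

fn main() {
    // Ports below 1024 are privileged on Linux: an unprivileged process
    // gets PermissionDenied here, which is why web servers traditionally
    // start as root (or are granted CAP_NET_BIND_SERVICE) to bind 80/443.
    let low = TcpListener::bind("127.0.0.1:80");
    println!("bind :80 as this user succeeded: {}", low.is_ok());

    // Any user can bind an unprivileged (high) port; :0 asks the OS
    // to pick a free one.
    let high = TcpListener::bind("127.0.0.1:0").expect("high port bind failed");
    println!("bound {}", high.local_addr().unwrap());
}
```

This is why some setups bind a high port as an unprivileged user and put the privileged bind behind a capability grant or a front proxy instead.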
alexnewman
Read how TLS works. Many parties can MITM. That’s why we sign applications.
dns_snek
If they can MITM the installation script delivered over HTTPS, they can also MITM the website delivered over HTTPS.
You can have 10 step instructions for users to add your PGP signing key and install your APT repository, but what difference does it make? None at all. A malicious website will copy your instructions and replace the signing key and the repository URL with their own.
selectnull
There is something funny going on in the benchmarking section. If you look at the charts, they don't benchmark the same servers across the 4 examples.
Each of the 4 charts has data for Ferron and Caddy, but then includes data for lighttpd, Apache, nginx and Traefik selectively, such that each chart shows exactly four servers.
That doesn't inspire confidence.
crote
It's also using their own benchmarking tool, rather than one of the dozens of existing tools. Doesn't mean they are cheating, but it is a bit suspicious.
troupo
> That doesn't inspire confidence.
The problems start even higher on the page, in the "The problem with popular web servers" section, which doesn't inspire confidence either.
From "nginx configs can become verbose" (because nginx is not "just" a web server [1]) to non-sequiturs like "Many popular web servers (including Apache and NGINX) are written in programming languages and use libraries that aren't designed for memory safety. This caused many issues, such as Heartbleed in OpenSSL"
[1] Sidetrack: https://x.com/isamlambert/status/1979337340096262619
Until ~2015, GitHub Pages hosted over 2 million websites on 2 servers with a multi-million-line nginx.conf, edited and reloaded per deploy. This worked incredibly well, with github.io ranking as the 140th most visited domain on the web at the time.
Nginx performance is fine (and probably that's why it's not included in the static page "benchmark")
sim7c00
It's funny he mentions unsafe code in Apache and nginx and then complains about an OpenSSL bug (one that's more than 10 years old, btw).
If this is a sense of the logic put into the application, no memory-safe language will save it from terrible bugs!
earthnail
Edit: just tried it for serving a fastapi. It's fantastic. Instant TLS via Let's Encrypt. There may be other webservers that are equally easy, but this one is certainly easier than Apache or nginx, which I've used so far. Love it.
--
Reach out to the guys at Kamal. They wrote their own reverse proxy because they thought Traefik was too complex, but they might be super happy about yours if Ferron is more powerful yet easy to configure because it might solve more of Kamal’s problems.
Not affiliated with Kamal at all, just an idea.
zsoltkacsandi
They wrote their proxy because the declarative configuration of the existing proxies does not fit into their deployment flow.
habibur
Looking at the graphs, it would have been better to market it as "just as performant as nginx and HAProxy" instead of "faster than all ...", while highlighting the simplicity as the added benefit on top.
amelius
But nginx has acquired a lot of features over the years, which has pros and maybe also cons.
ansc
Great to see! Would love to try it, but I depend on graceful updates of configuration (i.e. adding and removing backends primarily). I can't find anything about that. Is it supported, either through updating configs or through API?
mynewaccount00
> Security is imperative
> Install with sudo curl bash
hacker_homie
This is kinda funny, but what is a better alternative for new projects on Linux?
arccy
it's a rust project which tries to claim the ability to build static binaries, you should be able to just download the server binary.
natrys
Yes it seems the binaries are here: https://ferron.sh/download
I will say this, though: it's probably not rational to be okay with blindly running some opaque binary from a website, but then flip out when it comes to running an install script from the same people and domain behind the same software. From a security PoV I don't see how there should be any difference, but it's true that install scripts can be opinionated and litter your system by putting files in unwanted places, so there are nevertheless strong arguments outside of security.
gregoriol
Why not the usual package repositories and distribution by the official ones?
jraph
That's a slow process, and you need someone to do the packaging, either yourself or a volunteer, and this for each distro, which is not trivial to master and takes time. The "new" qualifier in the parent comment is key here.
Open build service [1] / openSUSE Build Service [2] might help a bit there though, providing a tool to automate packaging for different distributions.
PufPufPuf
Most Linux distributions won't package an unknown project. Chicken and egg problem. You could create your own PPA but that is basically the same as sudo curl bash in terms of security.
jagged-chisel
How’s that worse than downloading a random installer?
theandrewbailey
It's the Linux equivalent of downloading and running random binaries in Windows.
voidUpdate
*running as administrator
magackame
Gonna steal all your files, passwords and crypto as a regular user anyway?
tonyhart7
lmao
austin-cheney
I wrote my own web server from scratch last year for the exact same reasons: starting from scratch with Apache and NGINX is too painful for my needs.
Here are my learnings:
* TLS (HTTPS) can be easily enabled by default, but it requires certificates. This requires a learning curve for the application developer but can be automated away from the user.
* The TLS certs will not be trusted by default until they are added to the OS and browser trust stores. In most cases this can be fully automated. This is simplest on Windows, though Firefox still uses its own trust store. Linux requires a package to add certs to each browser trust store and sudo to add them to the OS store. Self-signed certs cannot be trusted on macOS through automation; the user has to add the certs to the keychain manually.
* Everything executes faster when WebSockets are preferred over HTTP. An HTTP server is not required to run a WebSocket server, allowing them to run in parallel. If the server listens for the WebSocket handshake message and determines the connection to instead be HTTP, it can support both WebSocket and HTTP on the same port.
* Complete user configuration and preferences for an HTTP or WebSocket server can be a tiny JSON object, including proxy and redirection support by a variety of addressable criteria. Traffic redirection should be identical for WebSockets and HTTP, both from the user's perspective and in the internal execution.
* The server application can come online in a fraction of a second. New servers coming online also take just milliseconds, aside from certificate creation.
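The WebSocket-or-HTTP point above comes down to inspecting the request head before dispatching: a WebSocket connection always opens as an HTTP/1.1 request carrying an `Upgrade: websocket` header (RFC 6455), so one listener can serve both protocols. A minimal Rust sketch of that check — not the commenter's actual code, and it assumes the full header block has already been read:

```rust
/// Returns true if the request head is a WebSocket upgrade handshake.
/// A WebSocket client always opens with an HTTP/1.1 request carrying
/// "Upgrade: websocket", so a single listener can peek at the headers
/// and dispatch to either a WebSocket or a plain HTTP handler.
fn is_websocket_handshake(head: &str) -> bool {
    head.lines().any(|line| {
        // Header names are case-insensitive, so normalize first.
        let line = line.to_ascii_lowercase();
        line.starts_with("upgrade:") && line.contains("websocket")
    })
}

fn main() {
    let ws = "GET /chat HTTP/1.1\r\nHost: example\r\nConnection: Upgrade\r\nUpgrade: websocket\r\nSec-WebSocket-Version: 13\r\n\r\n";
    let plain = "GET /index.html HTTP/1.1\r\nHost: example\r\n\r\n";
    assert!(is_websocket_handshake(ws));
    assert!(!is_websocket_handshake(plain));
    println!("dispatch check works");
}
```

In a real server you would peek the socket (or buffer the bytes you read) so the chosen handler still sees the complete request.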
yincong0822
That’s awesome — congrats on reaching the release candidate stage! I’m curious about the performance improvements you mentioned. Did you benchmark against other Go web servers like Caddy or fasthttp? Also really like that you’ve made automatic TLS the default — that’s one of those “quality of life” features that make a huge difference for users.
I’m working on an open-source project myself (AI-focused), and I’ve been exploring efficient ways to serve streaming responses — so I’d love to hear more about how your server handles concurrency or large responses.
k_bx
I was previously waiting for River https://github.com/memorysafety/river/ to take off. It's built on top of a library for reverse-proxying previously open-sourced by Cloudflare, but just like many other "grant-based" projects, it died when funding stopped.
I really like the spirit and simplicity of Ferron, will try it out when I have a chance. Been waiting to gradually throw out nginx for a while now; nothing has ticked all the checkboxes.
GrayShade
Hey! Sorry, I didn't get the chance to test it yet (like I promised when you launched), but can you say more about the rewrite? The title made me think you're porting it from Rust to another language :-).
Gepsens
Hey really cool. Am proficient in Rust if you need any help.
johnofthesea
Just for curiosity there is also: https://github.com/cablehead/http-nu
Which seems like interesting UX.
Hello! I got quite some feedback on a web server I'm building, so I'm rewriting the server to be faster and easier to use.
I (and maybe some other contributors?) have optimized the web server performance, especially for static file serving and reverse proxying (the last use case I optimized for very recently).
I also picked a different configuration format and specification, which I believe is easier to write.
Automatic TLS is also enabled by default out of the box; you don't even need to enable it manually, like you had to in the original server I was building.
Yesterday, I released the first release candidate of my web server's rewrite. I'm so excited about this. I have even seen some people serving websites with the rewritten web server, even while the rewrite was in beta.
Any feedback is welcome!