
Self-Hosting like it's 2025

232 comments · April 1, 2025

0xEF

I love the idea of self-hosting, especially since I keep a number of very tiny websites/projects going at any given time, so resources would not really be too much of an issue for me.

What stops me is security. I simply do not know enough about securing a self-hosted site on real hardware in my home and despite actively continuing to learn, it seems like the more I learn about it, the more questions I have. My identity is fairly public at this point, so if I say the wrong thing to the wrong person on HN or whatever, do I need to worry about someone much smarter than me setting up camp on my home network and ruining my life? That may sound really stupid to many of you, but this is the type of anxiety that stops the under-informed from trying stuff like this and turning to services like Akamai/Linode or DO that make things fairly painless in terms of setup, monitoring and protection.

That said, I'm 110% open to reading/watching any resources people have that help teach newbies how to protect their assets when self-hosting.

Brian_K_White

A few days after a remark on HN, while the thread was still active, I received a mysterious package I didn't order, sent through a weird drop-shipping service where the original sender is unknown and undiscoverable to you, the recipient. It didn't contain anything bad, just a single surgical mask (during COVID, a basically valueless item). The message was that they could find my home address. It was a pointless message, since I obviously don't hide my identity on HN. But it means you're not wrong to be careful, both in general and on HN in particular.

raphman

Hmm, my first guess would have been that you have been a target of "brushing" [1]. In a Reddit thread from 2020 [2], multiple people mention that they received surgical masks they did not order.

[1] https://www.bbb.org/article/news-releases/20509-amazon-brush... [2] https://www.reddit.com/r/tulsa/comments/hpe8s1/just_got_a_su...

Brian_K_White

Interesting! I'd never heard of that.

The package came from a US company in Texas, not China. Not directly, anyway; the mask could have been made anywhere, but the package didn't carry any of the extra mailing labels you see when something ships from China. And it never happened before, never happened again, and was literally only a single mask.

Still, it seems to fit anyway; the brushing descriptions vary a little in the details, and my example matches.

Or maybe it was still the HN guy, and this was just the method they used because they knew about it.

Anyway thank you.

0xEF

It's always scary, no matter how innocuous. I'm glad it did not escalate into something else for you!

Without getting too deep into it, there are some things I know how to do with computers that I probably shouldn't, so my thought is this: if I, a random idiot who just happened to learn a few things, can do X, then someone smarter than me who learned how to attack a target in an organized way probably has methods I cannot even conceive of, can do it more easily, and possibly without me even knowing. It's this weird vacillation between paranoia and prudence.

For me, it's really about acknowledging what I know I don't know. I do some free courses, muck about with security puzzles, etc, even try my own experiments on my own machines, but the more I learn, the more I realize I don't know. I suppose that's the draw! The problem is when you learn these things in an unstructured way, it's hard to piece it all together and feel certain that you have covered all your vulnerable spots.

fm2606

I'm right there with you, except at times I have thrown caution to the wind and made my sites available.

My current setup is to rent a cheap $5/month VPS running nginx. I then reverse SSH from my home to the VPS, with each app on a different port. It works great until my power goes out; when it comes back on, the apps stay unavailable. I haven't gotten the restart script to work 100% of the time.

But, I'd love to hear thoughts on security of reverse SSH from those that know.
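For the power-outage problem, a common fix is to let systemd supervise the tunnel via autossh so it reconnects on its own. A minimal sketch, assuming a hypothetical VPS hostname, user, and app port:

```ini
# /etc/systemd/system/reverse-tunnel.service
# Sketch: keep a reverse SSH tunnel to the VPS alive across reboots
# and network drops. "vps.example.com", user "deploy", and port 8080
# are placeholders for your own values.
[Unit]
Description=Reverse SSH tunnel to VPS
After=network-online.target
Wants=network-online.target

[Service]
User=tunnel
# -N: no remote command; -R: expose the local app port on the VPS
ExecStart=/usr/bin/autossh -M 0 -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 127.0.0.1:8080:127.0.0.1:8080 deploy@vps.example.com
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable it once with `systemctl enable --now reverse-tunnel` and the tunnel comes back after every reboot or network loss, no custom restart script needed.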

the_snooze

I do something similar with my home server, but with a WireGuard split tunnel. Much easier to set up and keep active all the time (i.e., on my phone).

Nginx handles proxying and TLSing all HTTP traffic. It also enforces access rules: my services can only be reached from my home subnet or VPN subnet. Everywhere else gets a 403.
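That allow-list is only a few lines of nginx per server block. A sketch, where both subnets and the upstream port are assumptions:

```nginx
server {
    listen 443 ssl;
    server_name service.example.com;

    # Only the home LAN and the WireGuard subnet get through;
    # both ranges are examples, substitute your own.
    allow 192.168.1.0/24;   # home subnet
    allow 10.8.0.0/24;      # VPN subnet
    deny  all;              # everyone else gets a 403

    location / {
        proxy_pass http://127.0.0.1:8096;
    }
}
```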

Karrot_Kream

Why not just have nginx listen on the Wireguard interface itself? That way you drop all traffic coming inbound from sources not on your Wireguard network and you don't even have to send packets in response nor let external actors know you have a listener on that port.

_mitterpach

Maybe try running your services in docker, I don't know how difficult that would be to implement for you, but if you run it in containers you can get it to start up after an outage pretty reliably.
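Surviving an outage with containers is mostly one line of restart policy; a hedged docker-compose sketch with placeholder names:

```yaml
# docker-compose.yml sketch: "restart: unless-stopped" makes Docker
# bring the container back up after a power outage, provided the
# Docker daemon itself is enabled to start on boot.
services:
  myapp:                      # placeholder service name
    image: myapp:1.2.3        # placeholder image and tag
    restart: unless-stopped
    ports:
      - "8080:8080"
```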

fm2606

Yeah, that is a good idea and as I have been doing a little bit of studying Kubernetes I thought about that too (overkill for sure).

cenamus

I suppose also no public IP on your home connection?

Because my new provider only provides CG-NAT, I've been using a cheap server, but actually having the server at home would be nice.

fm2606

Correct, there is no public IP address exposed to my home.

Right now my "servers" are Dell micro i5s. I've used RPi 3s and 4s in the past. My initial foray into self-hosting was on actual servers. Too hot, too noisy, and too expensive to run continuously for my needs, but I did learn a lot. I still do, even with the micros and Pis.

loughnane

Since my setup is for personal use I just use a VPN. My home router is running OPNsense and this setup wasn't too bad. I also pay my ISP for a static IP address.

https://docs.opnsense.org/manual/how-tos/wireguard-client.ht...

Then on my phone I just flick on the switch and can access all my home services. It's a smidge less convenient, but feels nice and secure.

nijave

Don't expose anything to the Internet. Use a tunneling tool (Tailscale et al) or VPN

diggan

You'll have a hard time hosting websites/projects meant for the public to view, if you don't allow public internet traffic :)

raxxorraxor

But you don't have as many security issues as well :)

arevno

We've been running production traffic via Cloudflare Tunnels for over a year with no problems. Ngrok and tailscale both run similar services, too.

crtasm

I think they want to host public websites.


raxxorraxor

I think a normal patched Debian/Ubuntu with ufw rules for ports 80/443 and 22, SSH key auth only, and a simple nginx configuration is still very safe.

Of course there can be security issues in your web server as well, but for a simple site this setup is learnable in an hour or two and you are ready to go.

You can hook that up on a Pi attached to your router, or pay a bit to have it hosted somewhere. A domain is perhaps $2-5, and a TLS cert you can get from Let's Encrypt.

No idea how to put everything into containers in a way that makes sense. I just run this quite often on small hosted machines elsewhere; I install everything manually because it takes five minutes if you have done it before.
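A sketch of that hour-or-two setup on a fresh Debian/Ubuntu box (assuming your SSH key is already installed with `ssh-copy-id`):

```sh
# Firewall: deny everything inbound except SSH and HTTP(S)
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# SSH: key-based auth only
sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' \
    /etc/ssh/sshd_config
sudo systemctl reload ssh

# Web server; drop your site config in /etc/nginx/sites-available/
sudo apt install nginx
```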

nosebear

I agree - I always wonder: should I go overkill and put everything in its own VM for separation? Is it OK to just use containers?

If using Podman, should I use rootless containers (which IMO suck because you can't do macvlan, so the container won't easily get its own IP on my home network)? Is it OK to just use rootful Podman with an idmapped user running without root privileges inside the container and drop all unnecessary capabilities? Should I set up another POSIX user first, such that breaking out of the container would in the end just yield access to an otherwise unused UID on the host?

If using systemd-nspawn, do all the above concerns about rootful / rootless hold? Is it a problem that one needs to run systemd-nspawn itself with root? The manpage itself mentions "Like all other systemd-nspawn features, this is not a security feature and provides protection against accidental destructive operations only.", so should I trust nspawn in general?

Or am I just being paranoid and everything should just be running YOLO-style with UID 1000 without any separation?

All of this makes me quite wary about running my paperless-ngx instance with all my important data next to my Vaultwarden with all of my passwords next to any Torrent clients or anything else with unencrypted open ports on the internet. Also keeping everything updated seems to be a full time job by itself.

UK-Al05

Isn't 95% of it just blocking every port except the service you want to expose, and then making sure everything is up to date and the service is built in a secure way?

WAFs etc. just hide the fact that the code in your service is full of holes.

sceptic123

What's the 5% that's not blocking ports for services you want to expose?

Ensuring your infra is built in a secure way is as important as ensuring your service is built in a secure way.

majewsky

Part of it is that you may get (D)DoSed and then your ISP may be any amount of pissed at you for taking on significant ingress traffic on a residential network.

bauerd

Last thing I need is Kubernetes at home

mrweasel

It is still my opinion that most businesses do not need Kubernetes, and neither does anyone self-hosting a service at home.

I can see running something in a Docker container, and while I'd advise against containers that ship with EVERYTHING, I'd also advise against using Docker Compose to spin up an ungodly number of containers for every service.

You shouldn't be running multiple instances of PostgreSQL, or anything for that matter, at home. Find a service that can be installed using your operating system's package manager and set everything to auto-update.

Whatever you're self-hosting, be it for yourself, family or a few friends, there's absolutely nothing wrong with SQLite, files on disk or using the passwd file as your authentication backend.

If you are self hosting Kubernetes to learn Kubernetes, then by all means go ahead and do so. For actual use, stay away from anything more complex than a Unix around the year 2000.

dailykoder

I shared this sentiment. But since I just host some personal fun projects and I got really lazy when it comes to self-hosting, I found great pleasure in creating the simplest possible Docker containers. It keeps the system super clean and easy to wipe and set up again. My databases are usually just mounted volumes that reside on the host system.

ohgr

If it wasn't for Kubernetes we'd need 1/3rd of our operations team. We're keeping unemployment down!

brulard

Is this a joke? I don't know much about Kubernetes, but I've heard from devops people it's quite helpful for bigger scale infrastructures.

vbezhenar

I'd love to use Kubernetes for my self hosting. The only problem is it's too expensive.

k8sToGo

How is it too expensive? If you want to use the eco system you can still use something like k3s


ndsipa_pomu

> You shouldn't be running multiple instances of Postgresql, or anything for that matter, at home.

It's not uncommon with self-hosting services using docker. It makes it easier to try out a new stack and you can mix and match versions of postgresql according to the needs of the software. It's also easier to remove if you decide you don't like the bit of software that you're trying out.

raphinou

Exactly, my first reaction was "I should write a blog post about why I still use Docker Swarm". I deploy to single-node swarms, and it's a zero-boilerplate solution. I had to migrate services to another server recently, and it was really painless. Why oh why doesn't Docker Swarm get more love (from its owners/maintainers and users)?

Edit: anyone actually interested in such a post?

gmm1990

I'd be interested. Might be a strange question, but I'll throw it out there: I have a hard time finding a good way to define my self-hosted infrastructure nodes and which containers can run on them. Have you run into this, or found a solution? Like, I want my database to run on my two beefier machines, but some of the other services could run on the mini PCs.

raphinou

I am running one-node swarms, so everything I deploy runs on the same node. But from my understanding you can apply labels to the nodes and limit the placement of containers. See here for an example (I am not affiliated with this site): https://www.sweharris.org/post/2017-07-30-docker-placement/

quectophoton

> I deploy to single node swarms, and it's a zero boiler plate solution.

Yup, it's basically like a "Docker Compose Manager" that lets you group containers more easily, since the manifest file format is basically Docker Compose's with just 1-2 tiny differences.

If there's one thing I would like Docker Swarm to have, it's to not have to worry about which node creates a volume; I just want the service to always be deployed with the same volume without having to think about it.

That's the one weakness I see for multi-node stacks, the thing that prevents it from being "Docker Compose but distributed". So that's probably the point where I'd recommend maybe taking a look at Kubernetes.

galbar

I just want to add that I also have a Docker Swarm running, with four small nodes for my personal stuff plus a couple of friends' companies.

No issues whatsoever and it is so easy to manage. It just works!

mfashby

I moved us off docker swarm to GKE some years back. The multi node swarm was quite unstable, and none of the big cloud providers offered managed swarm in the same way they offer managed k8s.

It's a shame I agree because it was nicely integrated with dockers own tooling. Plus I wouldn't have had to learn about k8s :)

StrLght

I am very interested. I tried to migrate to Swarm, got annoyed at incompatibility with tons of small Docker Compose things, and decided against that. I'd love to read about your setup.

opsdisk

Would love a blog post on how you're using Docker Swarm.

kiney

Bugs in its infancy are what killed Swarm for users.

ndsipa_pomu

Yep. I run a small swarm at work and have a 5-node RPi-4 swarm at home. Interested in why you'd run a single-node swarm instead of stand-alone docker.

import

Exactly. I am hosting 30+ services using docker compose and very happy. I don’t want to troubleshoot k8s in the early morning because home assistant is down and light dimmers are not working for some random k8s reason.

vbezhenar

Since I migrated our company to Kubernetes, I've almost stopped worrying about anything. It just works. I had many more troubles running a spaghetti of Docker containers and host-installed software on multiple servers; that setup broke every week or month. With Kubernetes I just press "update cluster" some Saturday evening once or twice a year and that's about it. Pretty smooth sailing.

seba_dos1

All my "smart home" stuff needs is mosquitto on an OpenWrt router and a bunch of cgi-bin scripts that can run anywhere. I already went through a phase of setting up tons of services, which ended up being turned off when something changed in my life (moving, replacing equipment, etc.), never to be resurrected because I couldn't be bothered to redo it without the novelty effect. I learned from that.

johnisgood

I am quite happy even without Docker, but I can see the appeal in some cases.

bitsandboots

I clicked expecting a list of cool things to self host. Instead I got a list of ways I would never want to host. Mankind invented BSD jails so that I do not have to tie myself in a knot of container tooling and abstraction.

Gud

Indeed. I run a setup as you mentioned, with the various daemons in their own jail. Super simple set up, easy to maintain.

Lord knows why people overcomplicate things with docker/kubernetes/etc.

seba_dos1

"apt-get install" tends to be enough once you stop chasing latest-and-greatest and start to appreciate things just running with low maintenance more.

ryandrake

Same here. I've had the same setup for decades: A "homelab" server on my LAN for internal hobby projects and a $5 VPS for anything that requires public access. On either of these, I just install the software I need through the OS's package manager. If I need a web server, I install it. If I need ssh, I install it. If I need nfs, I install it. I've never seen any reason to jump into containers or orchestration or any of that complex infrastructure. I know there are a lot of people very excited about adding all of that stuff into the mix, but I've never had a use case that prompted me to even consider it!

brulard

I have a very similar setup, with a homelab and a separate cheap VPS. In a similar manner I have all the services installed directly on the OS, but I'm starting to run into issues that have made me consider Docker. I run nginx with ~10 Node apps managed by PM2. While this works OK-ish, I'm not happy that if one of my apps needs some package installed (ffmpeg, ImageMagick, etc.), I have to install it for the whole server. Another problem is that I can easily run into compatibility problems if I upgrade Node.js, for example. And if there were a vulnerability in some node_modules package any of the projects use, the whole server would be compromised. I think Docker could be an easy solution to most of these problems.

alabastervlog

I only host 3rd party daemons (nothing custom) and only on my local network (plus Tailscale) so Docker’s great for handling package management and init, since I get up-to-date versions of a far broader set of services than Debian or ubuntu’s repos, clean isolation for easy management, and init/restarts are even all free. Plus it naturally documents what I need to back up (any “mounted” directories)

Docker lets my OS be boring (and lets me basically never touch it) while having up-to-date user-facing software. No "well, this new version fixes a bug that's annoying me, but it's not in Debian stable… do I risk a 3rd-party backport repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?"

I just use shell scripts to launch the services, one script per service. Run the script once, forget about it until I want to upgrade it. Modify the version in the script, take the container down and destroy it (easily automated as part of the scripts, but I haven’t bothered), run the script. Done, forget about it again until next time.
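That one-script-per-service pattern can look something like this; service name, image tag, and paths are all examples, not the commenter's actual setup:

```sh
#!/bin/sh
# jellyfin.sh (sketch) -- one script per service; run once, forget.
# Bump VERSION, re-run the script, done.
VERSION=10.9.7   # example tag, not necessarily current

# Destroy any old container before starting the new one.
docker rm -f jellyfin 2>/dev/null || true

# The mounted directories double as the backup checklist.
docker run -d \
  --name jellyfin \
  --restart unless-stopped \
  -p 8096:8096 \
  -v /tank/jellyfin/config:/config \
  -v /tank/media:/media:ro \
  jellyfin/jellyfin:"$VERSION"
```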

Almost all the commands I run on my server are basic file management, docker stuff, or zfs commands. I could switch distros entirely and hardly even notice. Truly a boring OS.

seba_dos1

> do I risk a 3rd party back port repo screwing up my system or other services, or upgrade the whole OS just to get one newer package, which comes with similar risks?

In these rare cases I usually just compile a newer deb package myself and let the package manager deal with it as usual. If there are too many dependencies to update or it's unusually complex, then it's container time indeed - but I didn't have to go there on my server so far.

Not diverging from the distro packages lets me not worry about security updates; Debian handles that for me.

skydhash

I totally agree! Containers are nice when your installation is ephemeral, deploying and updating several times in a short period. But using the package manager is as easy as it gets.

MortyWaves

There is a fairly well-known/popular blogger whose blog I was following because of their self-hosting/homelab/Nix adventures.

Then they decided to port everything to K8 because of overblown internet drama and I lost all interest. Total shame that a great resource for Nix became yet another K8 fest.

dailykoder

I guess it can be comfortable for some people.

But I just wanted to comment something similar. It probably depends heavily on how many services you self-host, but I have 6 services on my VPS and they are just simple podman containers that I run, some automatically, some manually. On top of that, a very simple nginx configuration (mostly just subdomains with reverse proxy), and that's it. I don't need an extra container for my nginx, I think (or is there a very good security reason? I have "nothing to hide" and/or lose, but still). My lazy brain thinks that as long as I keep nginx up to date with my package manager and my certbot running, I'll be fine.

shepherdjerred

I'm very happy with Kubernetes at home. Everything just works at this point, though it did take a fair bit of fiddling at first.

I think it's a great way to learn Kubernetes if you're interested in that.

vbezhenar

I'm slowly configuring my new VPS. Here's the approach I'm taking:

1. RHEL 9 with Developer Subscription. Installed dnf-automatic, set `reboot = when-changed`, so it's zero effort to reliably apply all updates with daily reboots. One or two minutes of downtime, not a big deal.

2. For services: podman with quadlets. It's an RH-flavoured replacement for docker-compose. Not sure if I like it, but I guess that's the "future", so I'm embracing it. Every service is a custom-built image with a common parent to reduce space waste (by reusing the base OS layer).
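A quadlet is just an INI file that Podman's systemd generator turns into a service unit. A sketch for one of the services mentioned, with placeholder paths:

```ini
# /etc/containers/systemd/vaultwarden.container (rootful), or
# ~/.config/containers/systemd/ for rootless.
[Unit]
Description=Vaultwarden

[Container]
Image=docker.io/vaultwarden/server:latest
# Bind only to loopback; nginx terminates TLS in front.
PublishPort=127.0.0.1:8080:80
Volume=/srv/vaultwarden:/data

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```

After `systemctl daemon-reload`, the generated `vaultwarden.service` starts and stops like any other unit.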

So far I want to run static http (nginx), vaultwarden, postfix and some webmail. May be more in the future.

This setup wastes a lot of disk space on image data, so expect to order a few more gigabytes of disk to pay for modern tech.

terminalbraid

I am going to continue to stan for dokku for hosting web apps, docker images included

https://dokku.com/

sujaldev

There's caprover too: https://caprover.com/

rubslopes

coolify.io is also a great open-source alternative if someone wants a web interface.

brulard

Anyone with experience to compare Coolify vs. Dokku? (and maybe something else?)

SJC_Hacker

Is that really self-hosting though?

Self hosting to me is, at the very least having physical access to the machines.

rubslopes

I think GP miswrote; Dokku does not host, it manages containers and makes deployment easier. It's like a self-hosted Heroku.

boxed

Yes, it's self hosting. You have access to the machine.

thebiglebrewski

Aw yes Dokku is still great in 2025! Hear hear!

oulipo

Dokploy is really nice

TrayKnots

I am actually worried about the self-hosting pandemic. We self-hosters will stop flying under the radar. I wonder how long it will take until our Matrix instances are required to be backdoored and our Immich instances are scanning our pictures with AI.

On an unrelated note, an article of how to rent a VPS in China would be interesting :)

anticrymactic

Isn't that the beauty of self-hosting? How could anything be enforced on user-controlled servers? Practically everything self-hosted is open source, and how would enforcing anything even work?

diggan

> How could anything be enforced on user-controlled servers?

New laws come to mind. If a government decides to try to outlaw encryption again, cloud/hosting companies located there wouldn't have a choice but to comply, or give up the business. The laws could also be written in such a way that individuals are responsible for complying, even self-hosters, and anyone using it anyway could be held legally responsible for its potential harms.

madeofpalk

The problem lies with people who are technical enough to self-host, but might not be confident enough to fork/make changes. Maybe you could switch services, but there's still just enough friction/soft-lock in to actually migrate.

You are right though, it gives significantly more control to users. It's just realising 100% of the benefits that might be trickier.

Aachen

Matrix server backdoors aren't really an issue though; it's the client where decryption happens. If clients aren't required to upload decrypted contents, you can always overlay an encryption protocol like OTR on top of any chat mechanism. I remember using it on MSN via Pidgin.

Don't worry about the servers. Worry about mandated software on the client

infecto

I suspect you would have trouble hosting long-term in China. I don't recall the specifics now, but IIRC every website hosted in China needs a special government ID, which requires getting approval. My memory is hazy, but it does feel like one of the poorer choices for hosting unless you live on the mainland. There are many better options in the world that neither restrict information nor require paperwork.

pjc50

> an article of how to rent a VPS in China would be interesting

Given that apparently it's quite difficult to even get a WeChat account without a national ID, I suspect that step 1 is "learn mandarin" and step 2 is "get a Chinese national ID".

vbezhenar

I didn't have any problems creating a WeChat account. Maybe I was lucky, I don't know; I just typed my phone number and it went pretty smoothly, like WhatsApp. I was also able to connect my Visa card. I did it in Kazakhstan and then was able to pay in China, no problems. Maybe they have an exception for Kazakhstan specifically; we recently got visa-free travel there.

thenthenthen

Also, your home modem/router is often tied to your ID, and then there is of course the firewall. IIRC you can get VPS hosting and an ICP code through Ali Cloud somewhat automagically. Agree it would be nice to give it a try some time.

TobTobXX

Did you try? It's a few years ago when I had to create one, but it was just as simple as WhatsApp (just a few more CAPTCHAs). And no VPNs or whatever, straight from a Swiss IP.

ciupicri

Why would you rent a VPS in China?

throwaway48476

Jurisdictional arbitrage.

Youden

I've been through just about everything to get where I am and I've ended up with Hashicorp Nomad and Consul with Traefik, managed by OpenTofu (open-source Terraform).

Things that haven't worked for me:

- Standalone Docker: Doesn't work great on its own. Containers often need to be recreated to modify immutable properties, like the specific image the container is running. To recreate the container, you need to store some state about how it _should_ work elsewhere.

- Quadlet: Too hard to manage clusters of services. Podman has subtle differences to Docker that occasionally cause problems and really tempting features (e.g. rootless) that cause more problems if you try to use them.

- Kubernetes: Waaaay too heavy. Even the "lightweight" distributions like k3s, k0s etc. embed large components of the official distribution, which are still heavy. Part of the embedded metric server for example periodically enumerates every single open file handle in every container. This leads to huge CPU spikes for a feature I don't care about.

With my setup now, I can more or less copy-paste a template into a new file, tweak some strings and have a HTTPS-enabled service available at https://thing.mydomain.mine. This works pretty painlessly even for services that need several volumes to maintain state or need several containers that work together.
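The copy-paste template presumably looks something like the following Nomad job; the Traefik tags and names here are assumptions about a typical Traefik-plus-Consul setup, not the author's actual config:

```hcl
job "thing" {
  datacenters = ["dc1"]

  group "thing" {
    network {
      port "http" { to = 8080 }
    }

    service {
      name = "thing"
      port = "http"
      # Traefik discovers the service via Consul and routes HTTPS to it
      tags = [
        "traefik.enable=true",
        "traefik.http.routers.thing.rule=Host(`thing.mydomain.mine`)",
        "traefik.http.routers.thing.tls.certresolver=letsencrypt",
      ]
    }

    task "thing" {
      driver = "docker"
      config {
        image = "ghcr.io/example/thing:1.0"  # placeholder image
        ports = ["http"]
      }
    }
  }
}
```

New services become a matter of copying this file, renaming "thing", and running `nomad job run`.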

JojoFatsani

Docker Compose is very suitable for the homelab scenario. I use it on my pi.

quickslowdown

Do you run a Nomad cluster? Or just on a single host? This is my desired state, I've set up Nomad a number of times but always get stuck in one place or another. I've gotten much further with Nomad than Kubernetes, but I've kind of always gone back to ol' faithful, writing a docker compose file and running everything that way.

Youden

Just a single host. The main thing that I couldn't figure out is how to turn off "bootstrap" mode, so I've just left it on.

Helmut10001

I tried Portainer once; it looked nice and had a lot of features... for which I had no use. I always found `docker compose` much easier to use: it is often just an alias and a tab away, whereas for Portainer I would have to open a browser tab and sometimes even touch my mouse!

Otherwise good article. If you want to go rootless (which you should!), Podman is the way to go; but Docker works rootless too, with some modifications [1]. I have found Docker rootless to be reliable and robust on both Debian and Ubuntu. It also solves permissions problems because your rootless user owns files inside and outside the container, whereas with rootful setups all files outside the container are owned by root, which can be a pain.

Also, you don't need Watchtower. Automatic `docker compose pull` can be set up using a standard crontab, see [2].
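The crontab approach is a one-liner per stack; a sketch with an example path and schedule:

```crontab
# /etc/cron.d/compose-update (sketch): pull newer images and restart
# changed containers nightly at 04:30. /srv/stack is a placeholder.
30 4 * * * root cd /srv/stack && docker compose pull --quiet && docker compose up -d
```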

[1]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...

[2]: https://du.nkel.dev/blog/2023-12-12_mastodon-docker-rootless...

kalaksi

Can I suggest Lightkeeper (I'm the maintainer): https://github.com/kalaksi/lightkeeper. I made it for my own needs, to simplify repetitive tasks and provide an efficient view; it has hotkeys and tries to stay "agile". You can drop to a terminal with a hotkey at any time.

kuon

If you self host, do not use containers and all those things.

Just use a static site generator like Zola or Hugo and rsync to a small VPS running Caddy or nginx. If you need something dynamic, there are many frameworks with few dependencies that you can also just rsync. Or use PHP; it's not that bad. Just restrict all locations except the public ones to your IP in the nginx config if you use something like WordPress, and you should be fine.
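The whole deploy step then fits in a tiny script; a sketch with placeholder host and paths:

```sh
#!/bin/sh
# deploy.sh (sketch): build the static site and push it to the VPS.
# Host, user, and paths are placeholders.
set -e
zola build   # or: hugo
rsync -avz --delete public/ deploy@vps.example.com:/var/www/mysite/
```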

If you have any critical stuff, create a zfs dataset and use that to backup to another VPS using zfs send, there are tools to make it easy, much easier than DB replication.
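The ZFS replication amounts to periodic snapshots plus incremental sends; a sketch with placeholder dataset and host names (tools like syncoid automate the incremental bookkeeping):

```sh
# Snapshot today's state of the app dataset
zfs snapshot tank/app@$(date +%Y%m%d)

# Send only the delta since the previous snapshot to the backup VPS;
# snapshot names here are illustrative.
zfs send -i tank/app@20250101 tank/app@20250102 | \
    ssh backup@vps2.example.com zfs receive -F tank/app-backup
```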

Aachen

What I'm reading is not to use containers for a web server, which makes sense because web servers have had vhosts since forever and you can host any number of sites on there independently already

But what about other services, like if you want a database server as well, a mail server, etc.?

I started using containers when I last upgraded hardware and while it's not as beneficial as I had hoped, it's still an improvement to be able to clone one, do a test upgrade, and only then upgrade the original one, as well as being able to upgrade services one by one rather than committing to a huge project where you upgrade the host OS and everything has to come with to the new major version

kuon

I manage about 500 servers. Critical services like DNS, mail, TFTP, monitoring, routing, firewall... all run OpenBSD in an N+1 configuration, and in 15 years we've had zero issues with that.

Now most servers are app servers, and they all run archlinux. We prepare images and we run them with PXE.

Both of those are out of scope for self-hosting.

But we also have about a dozen staging, dev, and playground servers. Those are just regular installs of Arch. We run Postgres, Redis, apps in many languages... For all that we use system packages and the AUR. DB upgrade? ZFS snapshot, then follow the Arch wiki's Postgres upgrade instructions; it takes a few minutes and there is downtime, but that's fine. You mess something up? ZFS rollback. You miss a single file? cd .zfs/snapshot and grab it. I get about 30 minutes of cumulative downtime per year on those machines. That's more than enough for any self-hosting.

We use arch because we try the latest "toys" on those. If you self host take an LTS distribution and you'll be fine.

skydhash

That’s when you favor stability and use an LTS OS. You can also isolate workload by using VMs. Containers is nice for the installation part, but the immutability can be a pain.

nijave

Containers are fine. Run them on a Linux host to save yourself some headaches

auxym

It seems you're talking about self-hosting a website or web app that you are developing for the public to use.

My vision of self-hosting is basically the opposite. I only self-host existing apps and services for my and my family's use. I have a TrueNAS box with a few disks, run Jellyfin for music and shows, a Nextcloud instance, a restic REST server for backing up our devices, etc. I feel like the OP is more targeted at this type of "self-hosting".

cullumsmith

Still running everything from my basement using FreeBSD jails and shell scripts.

Sacrificing some convenience? Probably. But POSIX shell and coreutils are the last truly stable interface. After ~12 years of doing this I got sick of tool churn.

Gud

Same. Why add the complexity of docker/kubernetes/?

FreeBSD and jails are so easy to maintain it's unbelievable.

bitsandboots

Not just stable - also easy to understand when, if ever, something goes wrong. There's very little magic and very few layers of complexity.

crivlaldo

This article motivated me to upgrade my hosting approach.

I've been running a DigitalOcean VPS for years hosting my personal projects. These include a static website, n8n workflows, and Umami analytics. I used manual Docker container management, Nginx, and manual Let's Encrypt certificate renewals. I was too lazy even to set up certbot.

I've migrated to a Portainer + Caddy setup. Now I have a UI for container management and automatic SSL certificate handling. It took about two hours.
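For comparison, the Caddy side of such a setup stays small because TLS is implicit; a sketch with example domains and upstream ports:

```Caddyfile
# Caddyfile: certificates are obtained and renewed automatically
# for every site listed here. Domains and ports are examples.
example.com {
    root * /srv/www
    file_server
}

umami.example.com {
    reverse_proxy 127.0.0.1:3000
}
```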

Thanks for bringing me to 2025!

smjburton

This is a great introduction to self-hosting, good job OP. As some of the other comments mentioned, discussion about self-hosted security and the importance of back-ups would be good to include. Also, you link to some great resources for discovering self-hosted applications, but it would be interesting to hear some of the software you enjoy self-hosting outside of core infrastructure. As I'm sure you're aware, self-hosters are always looking for new ideas. :)

pentagrama

As a non-developer, I find it difficult to step into the self-hosting world. What I recommend for people like me is the service PikaPods [1], which takes care of the hard part of self-hosting.

I have now switched from some SaaS products to self-hosted alternatives:

- Feed reader: Feedly to FreshRSS.

- Photo management/repository: Google Photos to Imich or Imgich (don't remember).

- Bookmarks: Mozilla's Pocket to Hoarder.

And so far, my experience has been simple and awesome!

[1] https://www.pikapods.com