My Self-Hosting Setup

72 comments · July 19, 2025

jauntywundrkind

> Relatively easy for family and friends to use

> This means keep one login per person, ideally with SSO, for as many services as I can

Truly an S-tier target. Incredibly hard, incredibly awesome.

I've said for a long time that Linux & open source is kind of a paradox. It goes everywhere, it speaks every protocol, but as a client, as an endpoint. The whole task of coordinating, of groupware-ing, of bringing networks together: that's all much harder, much less well defined.

Making the many systems work together, having directory infrastructure: that stuff is amazing. For years I assumed that someday I'd be running FreeIPA or some Windows-compatible directory service, but it sort of feels like maybe some OpenID-type world might finally be gelling into place.

mirdaki

Appreciate that! Simple login and access was certainly the hardest requirement to hit, but it can be the difference between people using something and not

And I agree with the feeling that open source is everywhere, up until a regular user picks up something. I think part of the paradox you mention is that every project is trying to work on their own thing, which is great, but also means there isn't a single entity pushing it all in one direction

But that doesn't mean we can't get to nice user experiences. Just in the self-hosting space, things have gotten way more usable in the last 5 years, both from a setup and usage perspective

Abishek_Muthian

I completely agree with the paradox; just yesterday I posted about how FOSS is not accessible to non-techies on my problem validation platform[1].

I've been wondering whether a platform that connects techies with non-techies could help solve that, something like a systems integrator for individuals.

[1] https://needgap.com/problems/484-foss-are-not-accessible-to-...

udev4096

It's not supposed to be. You put in the time and use your brain to understand the system. Even a non-techie can easily understand OIDC and OAuth2; it's not that hard.

Thorrez

As a techie, experienced in security, reading the OIDC spec... there are definitely some things I don't understand in there. I'm not sure the authors even understand what's going on.

On 2023-12-15 they published an update to OpenID Connect Core 1.0, called "errata set 2". Previously it said that to verify an ID Token in a token response, the client needs to:

> * If the ID Token contains multiple audiences, the Client SHOULD verify that an azp Claim is present.

> * If an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.

The new version is quite different. Now it says

> * If the implementation is using extensions (which are beyond the scope of this specification) that result in the azp (authorized party) Claim being present, it SHOULD validate the azp value as specified by those extensions.

> * This validation MAY include that when an azp (authorized party) Claim is present, the Client SHOULD verify that its client_id is the Claim Value.

So core parts of the security of the ID Token are being changed in errata updates. What was the old purpose of azp? What is the new purpose of azp? Hard to tell. Did all the OIDC implementations in existence change to follow the new errata update (which didn't update the version number)? I doubt it.

https://openid.net/specs/openid-connect-core-1_0.html

https://web.archive.org/web/20231214085702/https://openid.ne...

Or how about a more fundamental question: Why does the ID Token have a signature? What attack does that signature prevent? What use cases does the signature allow? The spec doesn't explain that.
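For concreteness, here is a minimal sketch of the client-side azp check as the pre-errata wording reads, operating on an already-decoded set of ID Token claims (the values are hypothetical and no particular OIDC library is assumed):

```python
def check_azp(claims: dict, client_id: str) -> None:
    """Sketch of the pre-errata-set-2 azp handling for an ID Token.

    `claims` is assumed to be the already-verified, decoded ID Token
    payload; `client_id` is this client's registered identifier.
    """
    aud = claims.get("aud")
    azp = claims.get("azp")

    # Old wording: if the ID Token contains multiple audiences, the
    # client SHOULD verify that an azp claim is present at all.
    if isinstance(aud, list) and len(aud) > 1 and azp is None:
        raise ValueError("multiple audiences but no azp claim")

    # Old wording: if azp is present, the client SHOULD verify that
    # its own client_id is the claim value.
    if azp is not None and azp != client_id:
        raise ValueError(f"azp {azp!r} does not match client_id {client_id!r}")


# Hypothetical example: multiple audiences, azp naming this client -> passes.
check_azp({"aud": ["client-a", "client-b"], "azp": "client-a"}, "client-a")
```

Under the errata set 2 wording, both of those steps become conditional on unspecified extensions, which is exactly the ambiguity described above.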

cycomanic

It's not really that hard, to be honest. If you aren't dead set on specific services but make SSO compatibility the main selection criterion, it's very feasible. I had very little experience when I set up my self-hosted system and got everything running very quickly using Caddy and Authentik. Alternatively, YunoHost is a very easy-to-use distribution that sets everything up with SSO.

mirdaki

Hey y'all, I know getting a setup that feels "right" can be a process. We all have different goals, tech preferences, etc.

I wanted to share my blog post walking through how I finally built a setup that I can just be happy with and use. It goes over my goals, requirements, tech choices, layout, and some specific problems I've resolved.

Where I've landed of course isn't where everyone else will, but I hope it can serve as a good reference. I’ve really benefited from the content and software folks have freely shared, and hope I can continue that and help others.

raybb

Did you come across or consider Coolify at any point? I've been using it for over a year and quite enjoy it for its Heroku-like ease of use and auto-deployments from GitHub.

https://coolify.io/

mirdaki

No I haven't heard of it before. I do like the idea though, especially for side projects. Thanks for sharing, I'll look more at it!

redrove

How are you finding Nix for the homelab? Every time I try it I just end up confused; maybe next time will be the charm.

The reason I ask is I homelab “hardcore”; i.e. I have a 25U rack and I run a small Kubernetes cluster and ceph via Talos Linux.

For various reasons, including me running k8s in the lab for about 7 years now, I've been itching to change, consolidate, and simplify, and every time I think about my requirements I somehow end up where you did: Nix and ZFS.

All those services and problems are very very familiar to me, feel free to ask me questions back btw.

mirdaki

I certainly didn't take to Nix the first few times I looked at it. The language itself is unusual and the error messages leave much to be desired. And the split around Flakes just complicates things further (though I do recommend using them; once set up, they're simple and the added reproducibility gives nice peace of mind).

But once I fully understood how its features really make it easy for you to recover from mistakes, and how useful the package options available from nixpkgs are, I decided it was time to dig in and figure it out. Looking at other folks' Nix configs on GitHub (especially for specific services you're wanting to use) is incredibly helpful (mine is also linked in the post).

I certainly don't consider myself a Nix expert, but the nice thing is you can do most things by using other examples and modifying them till you feel good about it. Then over time you just get more familiar with it and grow your skills.

Oh man, having a 25U rack sounds really fun. I have a moderate-size cabinet where I keep my server, desktop, a UPS, a 10-gig switch, and my little fanless Home Assistant box. What's yours look like?

I should add it to the article, but one of my anti-requirements was anything in the realm of high availability. It's neat tech to play with, but I can deal with downtime for most things if the trade-off is everything being much simpler. I've played a little bit with Kubernetes at work, but that is a whole ecosystem I've yet to tackle.

redrove

>The language itself is unusual and the error messages leave much to be desired. And the split around Flakes just complicates things further

Those are my chief complaints as well, actually. I never quite got to the point where I grasped how all the bits fit together. I understand the DSL (though the errors are cryptic, as you said), and flakes seemed to be recommended by everyone yet felt like an add-on that had been forgotten about (you needed to turn them on through some experimental flag, IIRC?).

I'll give it another shot some day, maybe it'll finally make sense.

>Oh man, having a 25U rack sounds really fun. I have a moderate-size cabinet where I keep my server, desktop, a UPS, a 10-gig switch, and my little fanless Home Assistant box. What's yours look like?

* 2 UPSes (one for networking one for compute + storage)

* a JBOD with about 400TB raw in ZFS RAID10

* a little intertech case with a supermicro board running TrueNAS (that connects to the JBOD)

* 3 to 6 NUCs depending on the usage, all running Talos, rook-ceph cluster on the NVMEs, all NUCs have a Sonnet Solo 10G Thunderbolt NIC

* 10 Gig unifi networking and a UDM Pro

* misc other stuff like a zima blade, a pikvm, shelves, fans, ISP modem, etc

I'm not necessarily thinking about downsizing, but the NUCs have been acting up and I've gotten tired of replacing them or their drives, so I thought I'd maybe build a new machine to rule them all in terms of compute; and if I only want one host, then k8s starts making less sense. Mini PCs are fine if you don't push them to the limit like I do.

I'm a professional k8s engineer I guess, so on the software side most of this comes naturally at this point.

MisterKent

I've been trying to switch my home cluster from Debian + K3s to Talos but keep running into issues.

What does your persistent storage layer look like on Talos? How have you found its hardware stability over the long term?

redrove

>What does your persistent storage layer look like on Talos?

Well, for its own storage: it's an immutable OS that you configure via a single YAML file; it automatically provisions appropriate partitions for you, or you can even install the ZFS extension and have it use ZFS (no ZFS on root, though).

For application/data storage there's a myriad of options to choose from[0]; after going back and forth a few times years ago with Longhorn and other solutions, I ended up at rook-ceph for PVCs and I've been using it for many years without any issues. If you don't have 10gig networking you can even do iSCSI from another host (or nvmeof via democratic-csi but that's quite esoteric).

>How have you found its hardware stability over the long term?

It's Linux so pretty good! No complaints and everything just works. If something is down it's always me misconfiguring or a hardware failure.

[0] https://www.talos.dev/v1.11/kubernetes-guides/configuration/...

esseph

Talos is the Linux kernel at heart, so.. just fine.

udev4096

Honestly, I personally like the combination of keepalived + Docker (Swarm if needed) + rsync for syncing config files. keepalived uses VRRP, which creates a floating IP. It's extremely lightweight and works like a charm. You won't even notice the downtime; the switch to the other server's IP is instant.

cess11

Keepalived is great. Learning about it was one of the best things I got from building HA-aiming infra at a job once.

colordrops

Hi! Really excited by your work! I'm working on a similar project built on NixOS and curious what you think.

My goal is to have a small, nearly zero-conf, Apple-device-like box that anyone can install by just plugging it into their modem and then going through a web-based installation. It's still very nascent, but I'm already running it at home. It is a hybrid router (think OPNsense/pfSense) plus app server (Nextcloud, Synology, YunoHost, etc.). All config is handled through a single Nix module. It automatically configures dynamic DNS, Let's Encrypt TLS certs, and subdomains for each app. It's got built-in ad blocking and Headscale.

I'm working on SSO at the moment. I'll take a look at your work and maybe steal some ideas.

The project is currently self-hosted in my closet:

https://homefree.host

meehai

Mine is much more barebones:

- one single machine
- nginx proxy
- many services on the same machine; some are internal, some are supposed to be public, all accessible via the web!
- internal ones have a humongous password for HTTP basic auth that I store in an external password manager (Firefox's built-in one)
- public ones are either fully public or use Google OAuth

I coded all of them from scratch, as that's the point of what I'm doing with homelabbing. You want images? Browsers can read them. Videos? Browsers can play them.

The hard part for me is the backend. The frontend is very much "90s HTML".
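As a hedged illustration of that kind of from-scratch service, a plain-stdlib Python file server is about as minimal as it gets (the directory and port here are made up, and authentication is assumed to be handled by the nginx proxy in front, as described above):

```python
#!/usr/bin/env python3
"""Minimal '90s-style media server: nginx terminates TLS and basic auth
in front of this, and the browser itself renders the images and videos."""

from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer

MEDIA_DIR = "/srv/media"  # hypothetical media directory
PORT = 8081               # hypothetical internal port, proxied by nginx

# SimpleHTTPRequestHandler already produces a plain HTML directory
# listing and serves files with sensible Content-Type headers.
handler = partial(SimpleHTTPRequestHandler, directory=MEDIA_DIR)

if __name__ == "__main__":
    # Bind to localhost only; the reverse proxy is the public entry point.
    with ThreadingHTTPServer(("127.0.0.1", PORT), handler) as server:
        server.serve_forever()
```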

mirdaki

Nice! I have a friend who is starting to program his infrastructure/services from scratch. It's a neat way to learn and make things fit well for your own needs

xyzzy123

Sometimes when I think about my home network, I think about it in terms of what will happen when I die and what I will be inflicting on my family as the ridiculous setups stop working. Or like, how much it would cost a police forensics team to try to make any sense of it.

I think "home labbing" fulfils much the same urge / need as the old guys (I hate to say it but very much mostly guys) met by creating hugely detailed scale model railways in their basement. I don't mean that in a particularly derogatory way, I just think some people have a deep need for pocket worlds they can control absolutely.

zeagle

I have given this a lot of thought. I assume the NAS and its Docker services won't just come back up for someone else trying to start everything. My offsite encrypted backup is probably not recoverable without hiring someone. So:

- I have an NTFS-formatted external USB drive to which cron copies a daily snapshot of changed data into a new folder: stuff like Paperless and a flat-file copy of the Seafile libraries. That data is small (<50GB), so duplication is cheap. In the event of death or dismemberment... that drive just needs to be plugged into another machine. There are also whole-library Seafile copies on our various laptops, without the incremental changes. If sync breaks... keep using your laptop. (A rough sketch of the daily-snapshot copy follows this list.)

- I've been meaning to put a small pc/rpi at a friend's place/work with a similar hard drive.

- The email domain is renewed for a decade and is hosted on iCloud for ease of renewal. That said, I'm not impressed that it bounces emails when storage fills up with family members' photos, which happens regularly, so I may switch back to Migadu.
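For what it's worth, here is a hedged sketch of that daily-snapshot copy in plain Python (the paths are hypothetical, and it copies everything rather than only changed files, which is fine when the data set is this small; in practice it would run from cron):

```python
#!/usr/bin/env python3
"""Sketch of the dated-snapshot copy onto an external drive described above."""

import shutil
from datetime import date
from pathlib import Path

SOURCES = [Path("/srv/paperless/export"), Path("/srv/seafile-flat")]  # hypothetical sources
DEST = Path("/mnt/usb-ntfs")                                          # the external NTFS drive

# One new folder per day, e.g. /mnt/usb-ntfs/2025-07-19
snapshot_dir = DEST / date.today().isoformat()
snapshot_dir.mkdir(parents=True, exist_ok=True)

for src in SOURCES:
    # Full copy into today's folder; at <50GB, duplicating daily is cheap.
    shutil.copytree(src, snapshot_dir / src.name, dirs_exist_ok=True)
```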

mirdaki

I think planning for what happens once you aren't there to manage the setup (whether it be a vacation, hospital stay, or death) is important. It's not something I specifically built to be easy, and I should think more on it.

The most important thing is being able to get important data off of it and having access to the credentials that facilitate that. You could set up something like Nextcloud to always sync important data onto other people's devices, to make part of that easier.

But I think another important aspect is getting folks invested in the services. I don't expect my partner to care about or use most of them, but she does know as much as I do about using and automating Home Assistant (the little we've done). Things like that should keep working because of how core they can become to our lives. It being a separate "appliance" and not a VM will also help with that.

But also that's a lot of hope and guessing. I think sitting down with whoever might be left with it and putting together a detailed plan is critical to any of that being successful

udev4096

Curious about your setup. Is it extremely unmanageable or have you gone out of your way to make it so?

xyzzy123

UniFi network; small Proxmox VMs for core services; big TrueNAS box for movies, storage, and "apps ecosystem" stuff like Minecraft servers; bare-metal 12-node k8s cluster on OPi5s for "research" (coz I do lots of k8s at work).

Each "stage" above is like incremental failure domains, unifi only keeps internet working, core vms add functionality (like unifi mgmt, rancher, etc), truenas is for "fun extras" etc. k8s lab has nothing I need to keep on it because distributed storage operators are still kind of explodey.

Like each part makes sense individually but when I look at the whole thing I start to question my mental health.

ffsm8

Let's explore the implied argument a lil:

Imagine the simplest possible deployment you've cooked up.

Now imagine explaining to your mother how to maintain it after you're dead, when she needs to access the files on the service you set up.

Usually, self-hosting is not particularly hard. It's just conceptually way beyond what the average Joe is able to do. (Not because they're not smart enough, but simply because they never learned to and won't learn now, because they don't want to form that skill set. And I'm not hating on boomers; you can make the same argument with your hypothetical kids or spouse. Parents are just an easy placeholder because you're biologically required to have them, which isn't the case for any other familial relationship.)

nothrabannosir

Why does it have to be a non-technical next of kin? Write down the details for a technically inclined person to follow, maybe a specific friend. Print "show this to X" at the top of the page. In the document, explain how to recover the necessary data and replace the setup with a standard one.

I assume most people know at least one person who would do this for them in the event of their death?

Aeolun

I think the pocket railways are a lot more comprehensible than my local network setup.

sandreas

Nice writeup, thank you. I've thought about running NixOS on my server, but currently I prefer Proxmox. There are projects combining NixOS and Proxmox, but I haven't tested them yet.

> My main storage setup is pretty simple. It's a ZFS pool with four 10TB hard drives in a RAIDZ2 data vdev with an additional 256GB SSD as a cache vdev. That means two hard drives can die without me losing that data. That gives me ~19TB of usable storage, which I'm currently using less than 10% of. Leaving plenty of room to grow.

I would question this when buying a new system rather than reusing a bunch of disks you already have lying around... a RAID-Z2 with four 10TB disks offers the same usable space as a RAID1 (mirror) with two 20TB disks. Since you don't need the space NOW, you could even go RAID1 with two 10TB disks and grow by swapping in two 20TB disks as soon as you need more. That would, in my opinion, be more cost-effective, since you only need to replace 2 disks instead of 4 to grow. It would take less time, and since prices per TB tend to drop over time, it could also save you a ton of money. I would also say that the ability to lose 2 disks won't save you from needing a backup somewhere...
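To put rough numbers on that comparison, here is a back-of-the-envelope sketch that ignores ZFS metadata overhead, reserved space, and TB-vs-TiB rounding, so the figures are ballpark only:

```python
# Ballpark usable-capacity comparison, ignoring ZFS overhead and TB/TiB rounding.

def raidz2_usable(n_disks: int, disk_tb: float) -> float:
    # RAIDZ2 spends two disks' worth of space on parity.
    return (n_disks - 2) * disk_tb

def mirror_usable(n_disks: int, disk_tb: float) -> float:
    # A two-way mirror stores every block twice.
    return n_disks * disk_tb / 2

print(raidz2_usable(4, 10))  # 4 x 10TB RAIDZ2 -> ~20 TB usable, any two disks can fail
print(mirror_usable(2, 20))  # 2 x 20TB mirror -> ~20 TB usable, one disk can fail
print(mirror_usable(2, 10))  # 2 x 10TB mirror -> ~10 TB usable, grow later by swapping both disks
```

Either layout lands at roughly the same usable capacity; the trade-off is two-disk fault tolerance versus only having to replace two drives when it's time to grow.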

mirdaki

Oh yeah, I don't think the way I went about it was necessarily the most cost-effective. I bought half of them on sale one year, didn't get around to setting things up, then bought the other two a year later on another sale once I finally got my server put together. I got them before I had my current plan in place. At one point I thought about having more services in a Kubernetes cluster or something, but dropped that idea.

Also agree, RAID isn't a replacement for a backup. I have all my important data on my desktop and laptop with plans for a dedicated backup server in the future. RAID does give you more breathing room if things go wrong, and I decided that was worth it

jancsika

Is there a homelab setup for an isolated LAN and "self-sufficient" devices?

I want to have a block of gunk on the LAN, and to connect devices to the LAN and be able to seamlessly copy that block to them.

Bonus: any gunk I bring home gets added to the block.

The first part works with Navidrome: I just connect over the LAN from my phone with Amperfy and check the box to cache the songs. Now my song gunk is synced to the phone before I leave home.

This obviously fits a different mindset. The author has a setup optimized for maximum conceivable gunk, whereas mine would need to be limited to the maximum gunk you'd want on the smallest device. (But I do like that constraint.)

Aeolun

I’ve got to appreciate putting the matrix server on Coruscant if nothing else :)

mirdaki

Thank you! The naming adds a little bit of extra fun to it.

dr_kiszonka

I am curious: what are some good-enough cheapskate self-hosting setups?

I want to self-host one of those FLOSS Pocket replacements, but I don't want to pay more than what these projects charge for hosting the software themselves (~$5). I am also considering self-hosting n8n. I don't have any sophisticated requirements. If it were possible, I would host it from my phone with a backup to Google Drive.

solraph

Any of the 1L PCs from Dell, HP, or Lenovo. They sip power (5-10 watts) and take up minimal space. I've got 6 or 7 VMs running on one, and it barely breaks 5% CPU usage.

See https://www.servethehome.com/introducing-project-tinyminimic... for a good list of reviews.

abeindoria

Seconded. A Dell OptiPlex Micro or HP ProDesk with a 7th- or 8th-gen i5 is approx $40-55 on eBay if you look. Works flawlessly.

mirdaki

Agree. If low cost and maximum value are your goal, grab a used one of these or a similar-speed laptop (and you sort of get battery backup in that case).

Really, any machine from the last decade will be enough, so if you or someone you know have something lying around, go use that

The two main points to keep in mind are power draw (older things are usually going to be worse here) and storage expandability options (you may not need much storage for your use case, though). Worst case, you can plug in a USB external drive, but bear in mind that the USB connection might be a little flaky.

pedro_caetano

As a former Firefox Pocket user, what are the replacements?

I've looked into Wallabag, but perhaps there are more I don't know about?

redrove

I would look up Intel N100 mini PCs. Extremely low power and fast enough (they've even got hardware decoding).

qmr

Used NUCs, Raspberry Pi / pi zero.

Any old PC with low idle power draw.

sgc

How are you securing taris? Where is your local network firewall? Which one are you using?

Why did you go with Nextcloud instead of using something more barebones, for example a restic server?

mirdaki

This article (https://xeiaso.net/blog/paranoid-nixos-2021-07-18/) walks through a lot of the steps I've done on all my NixOS systems

As for Nextcloud vs. a restic server, Nextcloud is heavier, but I do benefit from its extra features (like calendar and contact management), and I use a couple of apps (Memories for photos is quite nice). Plus it's much more family-friendly, which was a core requirement for my setup.

jjangkke

I'm using Proxmox but struggling to set up subnets and VMs.

Should I be using Terraform and Ansible?

I'm using Cursor to SSH in, and it constantly needs to run commands to get the "state" of the setup.

Basically, I'm trying to do what I used to do on AWS: set up VMs on a private network talking to each other, with one gateway dedicated to the internet connection, but this is proving to be extremely difficult with the bash scripts generated by Cursor.

If anyone can help me continue my journey with self-hosting instead of relying on AWS, that would be great.

mirdaki

I've found a lot of docs (Proxmox and TrueNAS are both guilty of this) assume you have existing domain or tool knowledge. I'd recommend checking out some videos from self-hosting YouTubers. They often explain more about what's actually happening than just what buttons to select.

Also, I found TrueNAS's interface a little more understandable. If Proxmox isn't clicking for you, you could give that a try.

sgc

> I'm using Proxmox but struggling to set up subnets and VMs

That is a pretty broad target. I would say start by setting up an OPNsense VM; from there you can do very little to start, just lock down your network so you can work in peace. But it can control your subnet traffic and host your Tailscale, DHCP server, AdGuard Home, etc.

As somebody who was quite used to hosting my own servers, before I first set up my homelab I thought Proxmox would be the heart of it. Actually, OPNsense is the heart of the network; Proxmox is much more in the background.

I think Proxmox + OPNsense is great tech and you should not be adding in Terraform and Ansible, but I am not sure that using Cursor is helping you. You need a really good grasp of what is going on if your entire digital life is going to be controlled centrally. I would lean heavily on the Proxmox tutorials and forums, and even more on the OPNsense tutorials and forums. Using Cursor for less important things afterwards, or to clarify a fine point every once in a while, would make more sense.

redrove

I agree Proxmox's default networking is lacking at best. If you have VLANs, want to do LACP, or need anything more advanced than a simple interface, you'll run into the limitations of the Proxmox implementation quite quickly.

I think the networking experience for hosts is one of the worst things about Proxmox.

ethan_smith

Try using Proxmox's web UI to create a Linux Bridge for each subnet, then attach VMs to appropriate bridges and configure a VM with two interfaces as your router between networks.

esseph

You don't need any scripts to do that.

Read the docs!

https://pve.proxmox.com/wiki/Network_Configuration#_choosing...

perelin

It's outside of the stated requirements because it's not fully open source, but https://www.cloudron.io/ made all my self-hosting pains go away.

zer00eyz

It's nice to see a home lab on HN. Hardware has become a lost art for many.

If you don't have a home lab, start one. Grab a 1L PC off of eBay. A ThinkCentre M720q or M920q with an i5 is a great place to start. It will cost you less than 200 bucks, and if you want to turn it into a NAS or an OPNsense box later, you can.

When it arrives, toss Proxmox on it and get your toys from the community scripts section... it will let you get set up on 'easy mode'. Fair warning: having a home lab is an addiction, and it will change how you look at development if you get into it deeply.

nathan_douglas

I credit homelabbing through my twenties with just about everything good that's happened to me in my career. I certainly didn't end up being moderately employable because I'm smart, charismatic, incisive, creative, lucky, educated, diligent, connected, handsome, sanitary, interesting, or thoughtful; no, it's because I have a tendency toward obsession, delusions of grandeur, and absolutely terrible impulse control.

So I started buying junk on eBay and trying to connect it together and make it do things, and the more frustrated I got, the less able I was to think about literally anything else, and I'd spend all night poking around on Sourceforge or random phpBBs trying to get the damn things to compile or communicate or tftp boot or whatever I wanted them to do.

The only problem was eventually I got good enough that I actually _could_ keep the thing running and my wife and kid and I started putting good stuff on my computers, like movies and TV shows and music and pictures and it started to actually be a big deal when I blew something up. Like, it wasn't just that I felt like a failure, but that I felt like a failure AND my kid couldn't watch Avatar and that's literally all he wanted to watch.

So now I have two homelabs, one that keeps my family happy and one that's basically the Cato to my Clouseau, a sort of infrastructural nemesis that will just actually try to kill me. Y'know, for fulfillment.

leovander

Not sure if it happens to most, but I have looped back around to not wanting to play sysadmin at home. Most of the stuff I have running I haven't updated in a while; luckily, since I own it and it's all internal, I don't need to worry about anyone taking away my locally hosted apps. Thank the IT gods for Docker Compose, and tools like Portainer that minimize the amount of fiddling around I have to do.

__turbobrew__

Same: I replaced the ISP router with my own and have a single box with storage and compute for running VMs and NFS, and that's it. The last thing I want to be doing on a Friday night is debugging why my home network is broken.