Deploy from local to production (self-hosted)

Nelkins

All of these projects lack server hardening. I think for most devs it would not be a great idea to just point at a server and let 'er rip. I have a pretty extensive cloud-init script I use for this when deploying to a VPS. I workshopped it by finding existing scripts and having a back and forth with Claude. Feel free to roast it :)

https://gist.github.com/NatElkins/20880368b797470f3bc6926e35...
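
For context, the skeleton of such a user-data file looks roughly like this (username and key are placeholders; the sudo policy is deliberately omitted here, see the discussion below):

  #cloud-config
  # minimal hardening sketch; a real script does much more
  users:
    - name: deploy
      groups: sudo
      shell: /bin/bash
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@laptop   # placeholder key
  ssh_pwauth: false      # disable SSH password auth
  package_update: true
  packages:
    - fail2ban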

wongarsu

There is a weird dynamic going on where defaults have become "good enough": SSH with public keys configured and password auth disabled, services defaulting to only listening on localhost, etc. But those improvements have also caused people to pay much less attention to server hardening, or to checking whether any of their services might have unsafe defaults.

The world is made much better by safer defaults, but they also lead to a degree of complacency.
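
As one concrete check for the second point, a quick audit of what is listening on non-loopback addresses:

  # anything listening beyond localhost?
  ss -tlnp | grep -vE '127\.0\.0\.1|\[::1\]'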

Wilder7977

I had a quick look; that "deploy" user can run any sudo command without a password? It's basically root at that point. I think that forcing a password (maybe with a lax timeout if you don't want to enter it so often) is a much better option. Correct me if I am wrong, but I also see that there are secrets in the file (e.g., Gmail SMTP creds). Make sure the file is protected against reads at a minimum. If those are your Gmail app credentials, they are pretty serious and obtainable by just reading the file (same goes for the Postfix config).
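
The password-with-timeout option is a two-line sudoers drop-in, something like this (the timeout value is just an example):

  # /etc/sudoers.d/deploy
  deploy ALL=(ALL:ALL) ALL               # password required (no NOPASSWD)
  Defaults:deploy timestamp_timeout=60   # cache credentials for 60 minutes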

klysm

I’ve had this argument so many times over the years, and usually the response comes down to security by obscurity because people won’t know the non-root username

Wilder7977

That, I guess, is relevant in the context of brute-force logins, which, given that you only use key auth, is not really something I would stress over. However, depending on what that user does, there might be vulnerable services running with its privileges, or there might be supply-chain vectors in the tools that user runs.

simpaticoder

Thank you for sharing, because I didn't know what cloud-init was until your post. I've done something similar, but packaged as a library of bash functions, designed to be called in arbitrary order. I cannot comment on the specific decisions you made in your file, but the fact that a declarative, compact, and standard solution exists for this problem is music to my ears. Out of curiosity, where did YOU learn of the existence of this feature?

shortsunblack

Cloud-init/cloud-config is a standard way to provision Linux hosts. It is slowly being outcompeted by Ignition and friends, though.

mdaniel

> It is slowly being outcompeted by Ignition and friends, though.

I hope not, because I lack enough foul language to describe my burning hatred for Ignition and all its cutesy campfire-related project codenames. Hate-red.

simpaticoder

Looks like it was invented by Canonical for AWS/EC2 in 2006 (!). It was then gradually adopted by other clouds over the next 10 years or so (GCP adopted in 2013, Azure a couple years later). Linode (Akamai Cloud now, I guess) adopted in 2023. Obligatory xkcd: https://xkcd.com/1053/

This got me wondering when and where I first heard about HTML, HTTP, Linux, UTF-8, or any number of things; how many things I've heard of once and never again; and how many important "standard" things I've never heard of at all.

codelion

Server hardening is definitely an often overlooked aspect... that gist looks comprehensive. I'm curious: have you benchmarked the performance impact of all those security measures? It's a trade-off, right? Some community members mentioned using CIS benchmarks as a starting point, then tailoring from there.

klysm

The security/performance tradeoff is hard, but I always try to keep in mind what the downside of either is. A small performance hit can definitely matter, but for most use cases a security hit will matter a lot more.

Timber-6539

I wouldn't put configuration values like no-new-privileges:true in the global Docker daemon config. Eventually you will find some app that breaks because of it, and you will spend hours troubleshooting if you do not remember this tiny detail.

Something also has to be said for simplicity and for avoiding redundant choices. For example, replacing systemd-timesyncd with chrony is not justified, and some of the recommended sysctl values may be redundant and already the default in the target OS.
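
If you do want no-new-privileges, setting it per service in the compose file avoids the global footgun; a sketch (service name and image are hypothetical):

  services:
    web:
      image: myapp:latest          # hypothetical service
      security_opt:
        - no-new-privileges:true   # scoped to this container only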

notarealllama

Thanks for this, I still provision with a bash script.

fragmede

Port 22 and UsePAM is interesting. Maybe set that to not 22, and don't use PAM unless you have a specific reason to. I didn't dig deep enough to see if you had one, but you're not setting a pam.conf.d file as far as I saw. There's more to pick apart, but if that's the best Claude can do, my job is safe for 30 more seconds.

hire a professional to secure your shit.
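
For reference, the suggested change is a few lines of sshd config, roughly this (port number is arbitrary; assumes an OpenSSH recent enough to support config drop-ins):

  # /etc/ssh/sshd_config.d/99-hardening.conf
  Port 2222                  # anything but the default 22
  UsePAM no                  # unless you actually use PAM features
  PasswordAuthentication no
  PermitRootLogin no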

computerfriend

Why not use PAM? Or is the issue the missing PAM hardening?

fragmede

Why use PAM when SSH has authentication built in? PAM is great if you have a reason to use it. I use it on my Mac so I can use my fingerprint for sudo. Turning on PAM implies you want to do something with it.

mediumsmart

I use rsync in a script. One line builds and the second line deploys, but that pushes to a server someone else has standing on the ground with a disk in it, which hosts the site I made on a local machine. If I self-host the production server in my flat, couldn't I just copy the folder without the internet, like from local to local?
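
The whole script is roughly this shape (the build command and paths will differ, of course):

  #!/usr/bin/env bash
  npm run build                                        # line 1: build (placeholder command)
  rsync -avz --delete dist/ user@host:/var/www/site/   # line 2: deploy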

rad_gruchalski

No need to even copy a folder, simply link it.

indigodaddy

This seems cool and all, but it's fairly trivial to docker compose whatever stuff/apps you want and install Caddy as the reverse proxy on the host (I normally don't run Caddy in a container, but it might be better to).

It looks like you have to set up docker compose files with airo in any case, so this just simplifies the Caddy part? But Caddy is so simple to begin with that I'm not sure of the point.

delduca

Docker Compose + Cloudflare Tunnels is my current setup: no need to deal with SSL or have a public IP address, and if you make use of Tailscale, you don't need any open ports, which is extremely secure and robust.
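
The tunnel side of that setup is just one more service in the compose file, something like this (the token comes from the Cloudflare dashboard):

  services:
    cloudflared:
      image: cloudflare/cloudflared:latest
      command: tunnel --no-autoupdate run --token ${TUNNEL_TOKEN}
      restart: unless-stopped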

ratorx

Does it even configure caddy? I can’t see how the caddy config could be generated from the env.yaml (unless it relies on the directory name etc for the path).

Seems like something that could have been solved with just docker compose (by setting a remote DOCKER_HOST). If you need “automatic” proxying, then traefik can do it off container labels.
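
For reference, traefik's label-based proxying looks roughly like this (service name and host are placeholders):

  services:
    app:
      image: myapp:latest   # hypothetical app
      labels:
        - traefik.enable=true
        - traefik.http.routers.app.rule=Host(`app.example.com`)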

timdorr

Nope, it just copies over a hand-made Caddyfile and restarts a docker container that you need to already be running: https://github.com/bypirob/airo/blob/1a827a76f2254e5ca4f4ba4...

This looks extremely barebones and makes a large number of assumptions in its current state. This is better as a Show HN after some more dev work has been completed.

notpushkin

Agreed – it’s a bit too early to publicize this. Lots of great alternatives discussed here though!

Maybe I should try to Show HN my Docker dashboard, https://lunni.dev/ :thinking:

globular-toast

Surprised it doesn't use caddy-docker-proxy to automatically route traffic to your compose setup. You could just do that in dev and have a very simple compose override for prod that changes the domain names etc.
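
With caddy-docker-proxy that looks something like this (names and image tag are illustrative):

  services:
    caddy:
      image: lucaslorentz/caddy-docker-proxy:ci-alpine
      ports: ["80:80", "443:443"]
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    app:
      image: myapp:latest   # hypothetical app
      labels:
        caddy: app.example.com
        caddy.reverse_proxy: "{{upstreams 8080}}"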

hn_rob

I have been using a makefile where each target executes a shell script snippet to build, push, or deploy containers. The problem is that a simple docker build doesn't recognize modified code files and uses cached layers. To pick up changes in the code I always have to build with --no-cache, which is inefficient. I wonder if Airo can detect changes in the code and rebuild only the image layers that need to be rebuilt.

globular-toast

This sounds like a misconfiguration on your part; I've never had this problem with Docker before. Are you sure it's not your makefile skipping something because you haven't made the docker bits phony targets? (If you're using make as a "script runner", everything should be a phony target. A sign that you're using the wrong tool, but I digress.)
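
To illustrate the pitfall (target and image names are hypothetical):

  # if a file or directory named "build" exists, a non-phony "build"
  # target is considered up to date and its recipe silently never runs
  .PHONY: build
  build:
  	docker build -t myapp:latest .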

scottydelta

I have been using Coolify on a bare-metal server and it has been such a great experience.

The ability to spin up services is crazy good and easy. I was able to evaluate n8n, windmill.dev, and Prefect in 3 hours because I could quickly set up these clusters and test them. And the final step was to compile my code as a Docker container and spin it up on Coolify within minutes, with a custom domain and SSL.

How is this different from, and easier than, a Coolify-style setup?

tharos47

How do you handle backups? I recently set up Coolify and installed the included WordPress. WordPress was broken by a failed module install; I wanted to restore a backup and didn't find a way to back up only one service/stack.

Compared to simple docker compose isolated in a VM/LXC container, it was not a particularly better experience.

I also wanted to use Cloudflare Tunnels instead of exposing the server on the internet, and it seems Coolify really prefers to work directly on the internet (lacking reverse proxy docs, ...).

replete

Coolify will be great; 6 months ago, however, I experienced so many bugs around Docker deployments that I just couldn't trust it for production. Hopefully the new hires will make continued progress, because there are some pretty great workflows possible. Even already, though, it is extremely cool to run your own platform.

chris_pie

There's also a long-standing issue of random spikes to 100% CPU that they don't seem to be able to fix.

jilles

I did the same. Really enjoyed Coolify but at some point there was too much magic.

I’m now using Dokku and can’t imagine using something else.

sameklund

Considering both of these right now. Why was Dokku so much better for you?

notpushkin

I’m working on Lunni – similar to Coolify, but centered around compose files (hopefully that’s less magic!): https://lunni.dev/

Would love to hear your thoughts if you give it a try!

kilroy123

Same here, but I've certainly run into some rough edges, especially with some of the more complicated one-click services.

Overall, I like it but I wish it was a bit more polished.

barrettshepherd

What’s the advantage of this over Kamal? (https://kamal-deploy.org/)

ngrilly

I like that it's a single-file executable. But Kamal offers much more, for example zero-downtime deployment.

oulipo

Also Dokploy, Dokku, Coolify and similar

ajayvk

Another option you could consider is a tool I have been building for deploying webapps across a team: https://github.com/claceio/clace. It has the usual deployment-related functionality like GitOps and TLS certs.

It has some unique features, like supporting OAuth authentication for the apps, a staging env for each app (code and config changes are staged before deployment), declarative updates, native support for hypermedia-based apps, etc.

notpushkin

This is so cool! The scope is a bit broad (it’s a build tool, DevOps tool and a hypermedia app framework?) – maybe you could split this into multiple projects and start a whole ecosystem?

I’m also working on a deployment tool, though not nearly as ambitious – it’s just a Docker dashboard based around Compose files: https://lunni.dev/ (though it does everything Traefik does, and GitOps is also definitely on the roadmap)

qudat

I just use a tunnel service to self-host web services; it works great and is cheaper than a VPS: https://tuns.sh

__jonas

I've got a kind of similar setup, but instead of using a registry I'm just pushing the images along with the compose file onto the server directly with rsync, to keep things even simpler. Would be nice to have a proper tool like this to automate that (I'm just using a bespoke shell script).
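
That registry-less flow is roughly the following (image name and paths are guesses):

  # save the image to a tarball, ship it with the compose file, load and restart
  docker save myapp:latest | gzip > myapp.tar.gz
  rsync myapp.tar.gz docker-compose.yml user@host:/srv/app/
  ssh user@host 'cd /srv/app && docker load -i myapp.tar.gz && docker compose up -d'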

zelifcam

Imagine if people just used the tools themselves instead of creating yet another layer in hopes of simplifying something that can already be done with a few lines of a bash script.

cdfuller

Agreed. This project isn't that upfront about being a wrapper around 4 commands: docker build, docker push, docker pull, and docker compose up.

MyOutfitIsVague

Whew, you're not joking. This whole thing is 156 lines of Go. I'd probably have just used a shell script for this kind of thing.
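
For comparison, those four commands as a script (registry and host are placeholders):

  #!/usr/bin/env bash
  set -euo pipefail
  docker build -t registry.example.com/myapp:latest .
  docker push registry.example.com/myapp:latest
  ssh user@host 'docker pull registry.example.com/myapp:latest \
    && docker compose -f /srv/app/docker-compose.yml up -d'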

xnyan

I know Ansible is not sexy or resource-efficient, but this would be a handful of lines in a single task.yml and it would work reliably out of the box. Previously the part that was too much effort to make reliable was bootstrapping the Python environment on the host, but uv has been a game changer (at least it has been for my team) in terms of efficiently and reliably ensuring the exact Python environment we want.
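
Something like this, assuming the community.docker collection is installed (paths and host group are placeholders):

  # deploy.yml -- rough sketch
  - hosts: web
    tasks:
      - name: Pull images and restart the stack
        community.docker.docker_compose_v2:
          project_src: /srv/app
          pull: always
          state: present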

0xbadcafebee

I think we can all agree that any Go program that just executes some other program, is way better than a shell script!

I mean, what if you needed to change the way it worked? With bash you'd have to open a text editor, change a line, and save the file! And on top of that you need to understand shell scripting!

With Go, you can set up your development environment, edit the source code, run the compiler, download the external dependencies, generate a new binary, and copy it to your server. And all this requires is learning the Go language and its development model. This is clearly more advanced, and thus better.

ianburrell

This is a perfect use for Make: have a target each for build, push, and deploy, then one that does them all together. The advantage is that you can run individual steps, and add one for building and testing locally.

Long scripts suck in a Makefile, but you can call external scripts for anything big.
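
A sketch of that layout (image and host are placeholders; recipe lines must start with a tab):

  IMAGE = registry.example.com/myapp:latest

  .PHONY: build push deploy
  build:
  	docker build -t $(IMAGE) .
  push: build
  	docker push $(IMAGE)
  deploy: push
  	ssh user@host 'docker compose pull && docker compose up -d'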

morcus

For small projects you can also add something like Watchtower to your compose file, and then you only need to build and push the image.

And I assume you want to build once to test your changes anyway, so you really only need to push.
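
Adding Watchtower is one extra service in the compose file, roughly:

  services:
    watchtower:
      image: containrrr/watchtower
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
      command: --interval 300   # check for new images every 5 minutes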

xandrius

Exactly. For over 4 years I've been using my trusty 10 lines of bash (most of which are confirmation prompts) to deploy in seconds with zero downtime. I should probably open-source it, lol.

Wilduck

I know you're joking a little, but I personally would love to see them! I'm very interested in how people manage simple deploys.

tbocek

Here is mine. I have a docker compose file locally, and this deploy.sh script deploys to my remote machine. That also means the remote machine builds the image. I have not found a good solution for secrets/env files yet:

  #!/usr/bin/env bash
  
  export DOCKER_HOST="ssh://username@host:port"
  docker compose up -d --build

polishdude20

My method: I push my code to master, then SSH to my server, git pull, and restart the server.

Etheryte

  while true; do git pull && sudo reboot; done
