Show HN: Canine – A Heroku alternative built on Kubernetes
131 comments
June 16, 2025
TheTaytay
First of all, I'm often looking for a better "Heroku-esque" experience on my own metal, so thank you! This looks neat!
Also, your docs on how K8s works look really good, and might be the most approachable docs I've seen on the subject. https://canine.gitbook.io/canine.sh/technical-details/kubern...
Question: I assumed when I read the pitch, that I could spin up a managed K8s somewhere, like in Digital Ocean, and use this somehow. But after reading docs and comments, it sounds like this needs to manage my K8s for me? I guess my question is: 1) When I spin up a "Cluster" on Hetzner, is that just dividing up a single machine, or is it a true K8s cluster that spans across multiple machines? 2) If I run this install script on another server, does it join the cluster, giving me true distributed servers to host the pods? 3) Is there a way to take an existing managed K8s and have Canine deploy to it?
czhu12
Yeah, so at the moment it kind of supports two options: 1. a single Hetzner VPS, or 2. an existing Kubernetes cluster.
I usually use #1 for staging / development apps, and then #2 for production apps. For #2, I manage the number of nodes on the Digital Ocean side, and kubernetes just magically reschedules my workload accordingly (also can turn on auto scaling).
I think the thing that you're getting at that is not supported is having Canine create a multi-node cluster directly within Hetzner.
There is a Terraform module to create a Kubernetes cluster on Hetzner, but it isn't currently integrated into Canine.
I'm not closed to trying it out; there were a few UI improvements I wanted to take a shot at first, but at the moment Canine assumes you have a cluster ready to go, or can walk you through a K3s installation on a single VPS.
https://github.com/kube-hetzner/terraform-hcloud-kube-hetzne...
TheTaytay
Oh! This is good news! I was not asking about K8s on Hetzner per se. I was asking if I could spin up a managed cluster (on Digital Ocean, etc) and use this on it. It sounds like I can, which is great! I think I missed that in the docs.
nwienert
First - I really want something like this to exist and be great, so best of luck. As of today I'd consider this or Dokploy (Docker Swarm is underrated).
Small feedback - your "Why you should NOT use Canine" section is actually a net negative for me. I thought it was cool that it might genuinely list downsides, but then you did a sarcastic thing that was annoying. I think you should just be frank: you'll have to purchase and manage servers, you'll be on the hook if they go down and have to get them back up, this is an early product made by one person, etc.
czhu12
Haha, well there goes my attempt to be different from the other landing pages out there. I'll take another stab, but appreciate the feedback!
harrisreynolds
Yeah... I like this "Why you should not use Canine" section too.
I was just on Posthog's site this morning and saw a similar section...
https://www.dropbox.com/scl/fi/rky248hgutwzzkzwhifxz/posthog...
nwienert
I'm all for doing it; the problem as-is is that there are real downsides to something like Canine, especially when it's super early like this. Posthog gets away with it because they aren't alpha (and have better humor, amongst other things).
I say keep it, just add some honesty there too.
1oooqooq
Please keep it. It is awesome (and it has to be said)! (But add the critical points too.)
dgellow
What’s the state of Docker Swarm? I stopped following years ago when it felt like the software had been abandoned by the Docker team.
vbezhenar
It is supported by Docker and not abandoned. I just checked the latest Docker Engine release notes and there are multiple fixes and enhancements. Certainly not as popular as Kubernetes, but it is there.
horsawlarway
Eh, it's not quite that simple.
What the person above you is thinking of is almost certainly "swarm classic" which is actually dead (see: https://github.com/docker-archive/classicswarm)
Docker does support a different "Swarm mode" style deployment configuration, which is functionally https://github.com/moby/swarmkit, and really feels much more like Kubernetes to me than the original docker swarm.
I'm... honestly not sure why you'd pick it as a solution over all the k8s tooling they've been doing instead. It feels like the same level of complexity, and the only benefit is that it's easier to configure than bare-metal k8s, but things like k3s and microk8s tackle that same space.
If anyone is really using the swarm mode for a production service, I'd love to hear different opinions, though!
chrisweekly
100% agreed on both points.
debarshri
We maintain a list of PaaS platforms out there in the wild - https://github.com/debarshibasak/awesome-paas
westurner
dokku is a minimal PaaS that can also run on a VPS. There's a dokku-scheduler-kubernetes: https://github.com/dokku/dokku-scheduler-kubernetes
But it doesn't have support for Helm charts.
Cloud computing architecture > Delivery links to SaaS, DaaS, PaaS, IaaS: https://en.wikipedia.org/wiki/Cloud_computing_architecture
Cloud-computing comparison: https://en.wikipedia.org/wiki/Cloud-computing_comparison
Category:Cloud_platforms: https://en.wikipedia.org/wiki/Category:Cloud_platforms
awesome-selfhosted has a serverless / FaaS category that just links to awesome-sysadmin > PaaS: https://github.com/awesome-selfhosted/awesome-selfhosted#sof...
kot-behemoth
I’ve recently started an open-source self-hosted data platform (https://github.com/kot-behemoth/kitsunadata) with Dokku being a great initial deployment mode. It’s mature, simple to get started and has tons of docs / tutorials.
I collected a bunch of links while learning it, and launched https://github.com/kot-behemoth/awesome-dokku, as there wasn’t an “awesome” list.
Hope it helps someone!
emilsedgh
https://dokku.com/docs/deployment/schedulers/k3s/
This is a more featureful version.
czhu12
Ah yeah I've been looking for these to submit to. Thanks, I'll submit a PR!
czhu12
Would also add -- this has been by far the funnest project I've ever built. Owning the "tech stack" from top to bottom is a super satisfying feeling.
- Rails app
- Canine infra
- Raspberry Pi server
- My own ISP

was a tech stack I managed to get an app running on, for some projects I've kicked around.
vanillax
Nitpick: Kubernetes doesn't run Docker containers. It runs containers that conform to the Open Container Initiative (OCI) spec. Docker is a licensed brand name.
cmckn
Another nit here: https://canine.gitbook.io/canine.sh/technical-details/kubern...
I know this is just a general description, but “10,000 servers” —> Kubernetes actually only claims support up to 5,000 nodes: https://kubernetes.io/docs/setup/best-practices/cluster-larg...
Plenty of larger clusters exist, but this usually requires extensive tuning (such as entirely replacing the API registry). And obviously the specific workload plays a large role. Kubernetes is actually quite far from supporting larger clusters out of the box, though most releases include some work in that direction.
cchance
Yep, I hate when I see Docker required. I don't run anything with Docker anymore, just Podman and containerd for the most part.
conqrr
Very cool. I've looked into doing something similar for self-hosting and have wanted something in between Docker and Kubernetes. Nomad seemed like a good fit, but it's still a tad more work than dead-simple Docker, and it lacks the ecosystem. I finally gave in to just using Docker and living with deployment downtime on upgrades, which is fine for a personal home server. But for production services, I wonder how much of K8s Canine really abstracts. Do I ever need to peek underneath the hood? I'm no k8s expert, but I wonder if there is simply no happy medium between these two.
psviderski
I'm actually building something in between Docker and Kubernetes: https://github.com/psviderski/uncloud. Like you I wanted that middle ground without the operational overhead. It's basically Docker-like CLI and Docker Compose with multi-machine and production capabilities but no control plane to maintain.
Still in active development but the goal is to keep it simple enough that you can easily understand what's happening at each layer and can troubleshoot.
conqrr
Looks promising and exactly what I want solved. Adding wireguard and Caddy is slick. How are you planning to go about Zero Downtime deploy? Maybe emulate Swarm?
psviderski
Thanks! For zero-downtime deploys, it does simple rolling updates one container at a time in a similar way k8s or swarm does it. It starts the new container alongside the old one, waits for it to become healthy, Caddy picks it up and updates its config, then removes the old one. The difference is that this process is driven by your CLI command (not a reconciliation loop in the cluster) so you get an instant feedback if something goes wrong.
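A minimal sketch of that ordering, simulated with plain dicts rather than real Docker and Caddy calls (all names here are hypothetical, not Uncloud's actual API):

```python
import time

def wait_until_healthy(container, timeout=5.0, poll=0.01):
    """Poll the container's health flag until it passes or we time out."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if container["healthy"]:
            return True
        time.sleep(poll)
    return False

def rolling_update(old_containers, start_new):
    """Replace containers one at a time: start the new one alongside the old,
    wait for it to become healthy, switch traffic (the proxy step), then stop
    the old one. If a replacement never becomes healthy, abort and keep the
    remaining old containers serving."""
    live = list(old_containers)
    for i, old in enumerate(old_containers):
        new = start_new()                  # new container runs alongside old
        if not wait_until_healthy(new):
            new["running"] = False         # drop the bad replacement
            return live                    # old containers still serving
        live[i] = new                      # proxy (e.g. Caddy) now routes here
        old["running"] = False             # only now stop the old container
    return live

# Tiny demo: two healthy old containers replaced one by one.
old = [{"name": "web-old-1", "running": True, "healthy": True},
       {"name": "web-old-2", "running": True, "healthy": True}]
counter = iter(range(1, 10))
def start_new():
    return {"name": f"web-new-{next(counter)}", "running": True, "healthy": True}

result = rolling_update(old, start_new)
print([c["name"] for c in result])  # -> ['web-new-1', 'web-new-2']
```

The key property is the ordering: an old container is only stopped after its replacement is both healthy and receiving traffic, so there is never a moment with zero healthy backends.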
stego-tech
I dig the concept! K8s is an amazing technology hampered by overwhelming complexity (flashback vibes to the early days of x86 virtualization), and thumbing through your literature it seems you’ve got a good grasp of the fundamentals everyone needs in order to leverage K8s in more scenarios - especially areas where PVE, Microcloud, or Cockpit might end up being more popular within (namely self-hosting).
I’ve got a spare N100 NUC at home that’s languishing with an unfinished Microcloud install; thinking of yanking that off and giving Canine a try instead!
czhu12
The part I found to be a little unwieldy at times was helm. It becomes a little unpredictable when you apply updates to the values.yaml file, which ones will apply, and which ones need to be set on start up. Also, some helm installations deploy a massive number of services, and it's confusing which ones are safe to restart when.
But, I've always found core kubernetes to be a delight to work with, especially for stateless jobs.
jitl
Helm is annoying. I’m thankful it makes software easier to install but it’s like being thankful for npm.
cyberpunk
I really don’t know where this complexity thing comes from anymore. Maybe back in the day when a k8s cluster was a 2-hour Kubespray run or something, but it’s now a single YAML file and an SSH key if you use something like RKE.
hombre_fatal
You are so used to the idiosyncrasies of k8s that you are probably blind to them. And you are probably so experienced with the k8s stack that you can easily debug issues so you discount them.
Not long ago, I was using Google Kubernetes Engine when DNS started failing inside the k8s cluster on a routine deploy that didn't touch the k8s config.
I hacked on it for quite some time before I gave up and decided to start a whole new cluster. At which point I decided to migrate to Linode if I was going to go through the trouble. It was pretty sobering.
Kubernetes has many moving parts that move inside your part of the stack. That's one of the things that makes it complex compared to things like Heroku or Google Cloud Run where the moving parts run in the provider's side of the stack.
It's also complex because it does a lot compared to pushing a container somewhere. You might be used to it, but that doesn't mean it's not complex.
esseph
Running large deployments on bare metal and managing the software and firmware lifecycle still has significant complexity. Modern tooling makes things much better - but it's not "easy".
The kubernetes iceberg is 3+ years old but still fairly accurate.
https://www.reddit.com/r/kubernetes/comments/u9b95u/kubernet...
vanillax
I was gonna echo this. K8s is rather easy to set up. Certificates, domains, CI/CD (Flux/Argo) are where some complexity comes in. If anyone wants to learn more, I have a video that I think is the most straightforward yet production-capable setup for hosting at home.
nabeards
Looks like your video is k3s. Just a heads up to others hoping for a k8s bare metal setup.
KomoD
I am interested in the video.
wesleychen
Can you send me the video?
xp84
A few years ago, I set up a $40 k8s "cluster" which consisted of a couple of nodes, at DigitalOcean, and I set it up using this tutorial: https://www.digitalocean.com/community/tutorials/how-to-auto...
I was able to create a new service and deploy it with a couple of simple, ~8-line ymls and the cluster takes care of setting up DNS on a subdomain of my main domain, wiring up Lets Encrypt, and deploying the container. Deploying the latest version of my built container image was one kubectl command. I loved it.
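For a sense of what "a couple of simple, ~8-line ymls" looks like, here is a sketch of a Deployment manifest of roughly that size (the image name, labels, and port are placeholder assumptions, not taken from the tutorial):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 2
  selector:
    matchLabels: {app: my-service}
  template:
    metadata:
      labels: {app: my-service}
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:latest
          ports: [{containerPort: 8080}]
```

Rolling out a newly built image is then a single command, e.g. `kubectl set image deployment/my-service my-service=registry.example.com/my-service:v2`.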
notnmeyer
I assume when people are talking about k8s complexity, it’s either more complicated scenarios, or they’re not talking about managed k8s.
Even then, though, it’s more that complex needs are complex, and not so much that k8s is the thing driving the complexity.
If your primary complexity is k8s, you are either doing it wrong or chose the wrong tool.
stego-tech
> or they’re not talking about managed k8s
Bingo! Managed K8s on a hyperscaler is easy mode, and a godsend. I’m speaking from the cluster admin and bare metal perspectives, where it’s a frustrating exercise in micromanaging all these additional abstraction layers just to get the basic “managed” K8s functions in a reliable state.
If you’re using managed K8s, then don’t @ me about “It’S nOt CoMpLeX” because we’re not even in the same book, let alone the same chapter. Hypervisors can deploy to bare metal and shared storage without much in the way of additional configuration, but K8s requires defining PVs, storage classes, network layers, local DNS, local firewalls and routers, etc, most of which it does not want to play nicely with pre-1.20 out of the box. It’s gotten better these past two years for sure, but it’s still not as plug-and-play as something like ESXi+vSphere/RHEL+Cockpit/PVE, and that’s a damn shame.
Hence why I’m always eager to drive something like Canine!
(EDIT: and unless you absolutely have a reason to do bare metal self-hosted K8s from binaries you should absolutely be on a managed K8s cluster provider of some sort. Seriously, the headaches aren’t worth the cost savings for any org of size)
reconnecting
Your website footer states `© 2024 Canine, Inc. MIT License.`
While displaying the current year on the webpage is not critical, the discrepancy between the Apache license (as listed on GitHub) and the MIT license (as listed on the website) is a more significant concern.
What is the actual one?
serial_dev
A Heroku alternative built on Kubernetes!? If I need to know what Kubernetes is, what Helm charts are, and whatnot, it's not really a Heroku alternative for me. I understand that for some, managing them is as basic as running "echo hello", but I don't want to even think about Kubernetes and Helm charts when I want something up and running very quickly.
czhu12
Yeah, that was the goal of Canine -- to never have to know that Kubernetes exists, but to still take advantage of its mature ecosystem. And then one day down the line, if you ever do need more powerful features, exposing the Kubernetes APIs directly.
rcarmo
I’m curious as to how storage and secrets are handled, since my recurring issue with Kubernetes is not deploying the containers or monitoring them but having a sane way to ensure a (re)deployed app or stack would use the same storage and/or multiple apps would put their data in consistent locations.
Also, having seen the demo video, it’s a happy-path thing (public repo, has Dockerfiles, etc.). What about private code and images?
znpy
I used Heroku many years ago and I have fond memories of it.
I think the landing page fails at answering the two most basic questions:
1. Can I deploy via a stupid simple “git push”?
2. Can I express what my workloads are via a stupid simple Procfile?
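(For context, a Heroku-style Procfile is just `process-type: command` lines; a hypothetical Rails example:)

```
web: bundle exec puma -C config/puma.rb
worker: bundle exec sidekiq
```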
matus_congrady
At https://stacktape.com, we're also in the same space. We're offering a Heroku-like experience on top of your own AWS account.
I like what you're doing. But, to be honest, it's a tough market. While the promise of $265 vs $4 might seem like a no-brainer, you're comparing apples to oranges.
- Your DX will most likely be far from Heroku's. Their developer experience has been refined by hundreds of thousands of developers. It's hard to think through everything, and you're very unlikely to get anywhere close once you go beyond simple use cases.
- A "single VM" setup is not really production-grade. You're lacking reliability, scalability, redundancy and many more features that these platforms have. It definitely works for low-traffic side-projects. But people or entities that actually have a budget for something like this, and are willing to pay, are usually looking for a different solution.
That being said, I wish you all the luck. Maybe things change in the AI-generated-apps era.
czhu12
Yeah, I agree with you, but I think that's why Kubernetes is maybe a good place to work from. It already has a massive API with a pretty large ecosystem, so at the base level, the `kubectl` developer experience is about as good as any could be. K8s also makes it reasonably easy to scale to massive clusters, with good resilience, without too much of a hiccup.
hardwaresofton
Hey, if you’re going to offer constructive feedback to a competitor, maybe don’t lead with a plug.
Hello HN!
I've been working on Canine for about a year now. It started when I was sick of paying the overhead of using stuff like Heroku, Render, Fly, etc to host some web apps that I've built. At one point I was paying over $400 a month for hosting these in the cloud. Last year I moved all my stuff to Hetzner.
For a 4GB machine, the cost of various providers:
- Heroku = $260
- Fly.io = $65
- Render = $85
- Hetzner = $4
(This problem gets a lot worse when you need > 4GB)
The only downside of using Hetzner is that there isn’t a super straightforward way to do stuff like:
- DNS management / SSL certificate management
- Team management
- Github integration
But I figured it should be easy to quickly build something like Heroku for my Hetzner instance. Turns out it was a bit harder than expected, but after a year, I've made some good progress.
The best part of Canine is that it also makes it trivial to host any Helm chart, and one is available for basically any open source project: everything from databases (e.g. Postgres, Redis) to random stuff like torrent tracking servers, VPN endpoints, etc.
Open source: https://github.com/czhu12/canine
Cloud hosted version: https://canine.sh