I ditched Docker for Podman
408 comments
· September 5, 2025 · t43562
nickjj
> On the plus side, any company I work for doesn't have to worry about licences. Win win!
Was this a deal breaker for any company?
I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.
If you have a dev team of 10 people and are extremely profitable to where you need licenses you'd end up paying $9 a year per developer for the license. So $90 / year for everyone, but if you have US developers your all-in payroll is probably going to be over $200,000 per developer or roughly $2 million dollars. In that context $90 is practically nothing. A single lunch for the dev team could cost almost double that.
To me that is a bargain, you're getting an officially supported tool that "just works" on all operating systems.
csours
Companies aren't monoliths, they're made of teams.
Big companies are made of teams of teams.
The little teams don't really get to make purchasing decisions.
If there's a free alternative, little teams just have to suck it up and try to make it work.
---
Also consider that many of these expenses are borne by the 'cost center' side of the house, that is, the people who don't make money for the company.
If you work in a cost center, the name of the game is saving money by cutting expenses.
If technology goes into the actual product, the cost for that is accounted for differently.
akerl_
The problem isn’t generally the cost, it’s the complexity.
You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.
weberc2
I'm of the opinion that large companies should be paying for the software they use regardless of whether it's open source or not, because software isn't free to develop. So assuming you're paying for the software you use, you still have the problem that you are subject to your internal procurement processes. If your internal procurement processes make it really painful to add a new seat, then maybe the processes need to be reformed. Open source only "fixes" the problem insofar as there's no enforcement mechanism, so it makes it really easy for companies to stiff the open source contributors.
nickjj
A large company that is buying licenses for tools has to deal with this for many different things. Docker is not unique here.
An IT department for a company of that size should have ironed out workflows and automated ways to keep tabs on who has what and who needs what. They may also be under various compliance requirements that expect due diligence to happen every quarter to make sure everything is legit from a licensing perspective.
Even if it's not automated, it's normal for a team to email IT / HR with new hire requirements. Having a list of tools that need licenses in that email is something I've seen at plenty of places.
I would say there's lots of other tools where onboarding is more complicated from a license perspective because it might depend on if a developer wants to use that tool and then keeping tabs on if they are still using it. At least with Docker Desktop it's safe to say if you're on macOS you're using it.
I guess I'm not on board with this being a major conflict point.
devjab
> You end up having to track who has it installed. Hired 5 more people this week? How many of them will want docker desktop? Oh, we’ve maxed the licenses we bought? Time to re-open the procurement process and amend the purchase order.
I don't quite get this argument. How is that different from any piece of software that an employee will want in any sort of enterprise setting? From an IT operations perspective it is true that Docker Desktop on Windows is a little more annoying than something like an Adobe product, because Docker Desktop users need their local user to be part of their local docker security group on their specific machine. Aside from that I would argue that Docker Desktop is by far one of the easiest developer tools (and do note that I said developer tools) to track licenses for.
In non-enterprise setups I can see why it would be annoying but I suspect that's why it's free for companies with fewer than 250 people and 10 million in revenue.
thinkingtoilet
Are you complaining about buying 5 licenses? It seems extremely easy to handle. It feels like sometimes people just want to complain.
almosthere
Everything is hard in a large company, and they have hired teams to manage procurement, so this is just you overthinking it.
maxprimer
Even large companies with thousands of developers have budgets to manage, and oftentimes when the CTO/CIO sees free as an option, that's all that matters.
ejoso
This math sounds really simple until you work for a company that is “profitable” yet constantly turning over every sofa cushion for spare change. Which describes most publicly traded companies.
It can be quite difficult to get this kind of money for such a nominal tool that has a lot of free competition. Docker was very critical a few years ago, but “why not use podman or containerd or…” makes it harder to stand up for.
troyvit
> I ask because the Docker Desktop paid license requirement is quite reasonable. If you have less than 250 employees and make less than $10 million in annual revenue it's free.
It is for now, but I can't think of a player as large as Docker that hasn't pulled the rug out from under deals like this. And for good reason: that deal is probably a loss leader, and if they want to continue they need to convert those free customers into paying ones.
dice
> Was this a deal breaker for any company?
It is at the company I currently work for. We moved to Rancher Desktop or Podman (individual choice, both are Apache licensed) and blocked Docker Desktop on IT's device management software. Much easier than going through finance and trying to keep up with licenses.
regularfry
Deal breaker for us too, now in my second org where that's been true.
It's not just that you need a licence now, it's that even if we took it to procurement, until it actually got done we'd be at risk of them turning up with a list of IP addresses and saying "are you going to pay for all of these installs, then?". It's just a stupid position to get into. The Docker of today might not have a record of doing that, but I wouldn't rule out them getting bought by someone like Oracle who absolutely, definitely would.
k4rli
Docker Desktop is also (imo) useless and helps users stay ignorant.
Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.
All the same stuff can easily be done from cli.
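For what it's worth, the image/container distinction the parent alludes to falls out of a couple of basic commands. A sketch (shown with `docker`; `podman` accepts the same syntax):

```shell
docker images                    # images: immutable templates (name, tag, layers)
docker ps -a                     # containers: instances created from those images
docker run -d --name web1 nginx  # one image...
docker run -d --name web2 nginx  # ...can back many independent containers
```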
com2kid
> Most Mac users I see using it struggle to see the difference between "image" and "container". Complete lack of understanding.
Because they just want their software package to run and they have been given some magic docker incantation that, if they are lucky, actually launches everything correctly.
The first time I used Docker I had so many damn issues getting anything to work I was put off of it for a long time. Heck even now I am having issues getting GPU pass through working, but only for certain containers, other containers it is working fine for. No idea what I am even supposed to do about that particular bit of joy in my life.
> All the same stuff can easily be done from cli.
If a piece of technology is being forced down a user's throat, users just want it to work and stay out of their way so they can get back to doing their actual job.
johnmaguire
I don't believe it's possible to run Docker on macOS without Docker Desktop (at least not without something like lima.) AFAIUI, Docker Desktop contains not just the GUI, but also the hypervisor layer. Is my understanding mistaken?
chuckadams
> struggle to see the difference between "image" and "container"
I'll take "Gatekeeping with Shibboleths" for $1000 Alex. Not everyone is a developer.
dakiol
I cannot run docker in macos without docker desktop. I use the cli to manage images, containers, and everything else.
j45
Not everyone uses software the same way.
Not everyone becomes a beginner to using software the same way or the one way we see.
jandrese
> Was this a deal breaker for any company?
It's not the money, it's the bureaucracy. You can't just buy software, you need a justification, a review board meeting, marketplace survey with explanations of why this particular vendor was chosen over others with similar products, sign off from the management chain, yearly re-reviews for the support contract, etc...
And then you need to work with the vendor to do whatever licensing hoops they need to do to make the software work in an offline environment that will never see the Internet, something that more often than not blows the minds of smaller vendors these days. Half the time they only think in the cloud and situations like this seem like they come from Mars.
The actual cost of the product is almost nothing compared to the cost of justifying its purchase. It can be cheaper to hire a full time engineer to maintain the open source solutions just to avoid these headaches. But then of course you get pushback from someone in management that goes "we want a support contract and a paid vendor because that's best practices". You just can't win sometimes.
Izmaki
None of your companies need to worry about licenses. Docker ENGINE is free and open source. Docker DESKTOP is a software suite that requires you to purchase a license to use in a company.
But Docker Engine, the core component which works on Linux, Mac and Windows through WSL2, that is completely and 1000% free to use.
xhrpost
From the official docs:
>This section describes how to install Docker Engine on Linux, also known as Docker CE. Docker Engine is also available for Windows, macOS, and Linux, through Docker Desktop.
https://docs.docker.com/engine/install/
I'm not an expert but everything I read online says that Docker runs on Linux so with Mac you need a virtual environment like Docker Desktop, Colima, or Podman to run it.
LelouBil
Docker Desktop will run a virtual machine for you. But you can simply install Docker Engine in WSL, or in a VM on a Mac, exactly like you would on Linux (though you may give up automatic port forwarding from the VM to your host).
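A minimal sketch of that route, assuming a stock Ubuntu guest (the convenience script is Docker's own; review it before piping to a shell):

```shell
# Inside the WSL2 (or VM) guest, not on the Windows/macOS host:
curl -fsSL https://get.docker.com | sh   # installs Docker Engine, no Desktop
sudo usermod -aG docker "$USER"          # optional: run docker without sudo
# re-open the shell for the group change to apply, then:
docker run --rm hello-world
```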
iainmerrick
If you're already paying for Macs, is paying for Docker Desktop really a big problem?
matsemann
If you've installed Docker on Windows you've most likely done that by using Docker Desktop, though.
GrantMoyer
Docker Engine without Docker Desktop is available through winget as "Docker CLI"[1].
[1]: https://github.com/microsoft/winget-pkgs/tree/master/manifes...
t43562
Right, we were using macs - same story.
firesteelrain
Podman is inside the Ubuntu WSL image. No need for docker at all
kordlessagain
This is not correct, at least when looking at my screen:
(base) kord@DESKTOP-QPLEI6S:/mnt/wsl/docker-desktop-bind-mounts/Ubuntu/37c7f28..blah..blah$ podman
Command 'podman' not found, but can be installed with:
sudo apt install podman
t43562
Those companies use docker desktop on their dev's machines.
connicpu
There's no need if all your devs use desktop Linux as their primary devices like we do where I work :)
Almondsetat
That's their completely optional prerogative
ac130kz
It works great until you need that one option from Docker Compose that is missing in Podman Compose (which is written in Python, for whatever reason, yeah...).
carwyn
You can use the real compose (Go) with Podman now. The Python clone is not your only option.
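A sketch of one common way to wire this up: enable Podman's Docker-compatible API socket and point the real (Go) Compose at it. Paths assume a systemd user session:

```shell
systemctl --user enable --now podman.socket    # Docker-compatible API socket
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"
docker compose up -d                           # real Compose, Podman backend
```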
goldman7911
You only have to worry about licences if you use Docker DESKTOP. Why not use RANCHER Desktop?
I have been using it for years. Tested it on Win11 and Linux Mint. I can even have a local Kubernetes.
vermaden
I ditched Docker and Podman for FreeBSD Jails :)
More here:
- https://vermaden.wordpress.com/2023/06/28/freebsd-jails-cont...
- https://vermaden.wordpress.com/2025/04/11/freebsd-jails-secu...
- https://vermaden.wordpress.com/2025/04/08/are-freebsd-jails-...
- https://vermaden.wordpress.com/2024/11/22/new-jless-freebsd-...
chuckadams
That's ... a lot of setup. Does FreeBSD have anything similar to containerd?
cheema33
Can you run MS SQL Server inside a FreeBSD jail? Or any of the thousands of other ready to run docker containers?
Whatever you gain by running FreeBSD comes at a high cost. And that high cost is keeping FreeBSD jails from taking over.
matrix12
Very distro specific however.
udev4096
How is that any different than running VMs on a linux host?
awoimbee
The main issue is podman support on Ubuntu. Ubuntu ships outdated podman versions that don't work out of the box. So I use podman v5, GitHub actions uses podman v3, and my coworkers on Ubuntu use docker. So now my script must work with old podman, recent podman and docker
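One way to cope in a script is to branch on the engine's major version rather than assuming one CLI. A small sketch; the version-string formats shown are what podman and docker currently print, but treat the parsing as an assumption:

```shell
# Extract the major version from an engine's --version output.
engine_major() {
  # Expects strings like "podman version 5.2.3" or
  # "Docker version 27.1.1, build 6312585" (assumed formats).
  printf '%s\n' "$1" | grep -oE '[0-9]+\.[0-9]+(\.[0-9]+)?' | head -n1 | cut -d. -f1
}

engine_major "podman version 3.4.4"                  # prints 3
engine_major "Docker version 27.1.1, build 6312585"  # prints 27
```

From there the script can gate newer flags (or fall back to docker-compatible behavior) on the reported major version.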
rsyring
Additionally, there aren't even any trusted repos out there building/publishing a .deb for it. The ones that I could find when I searched last were all outdated or indicated they were not going to keep moving forward.
I could get over this. But, IMO, it lends itself to asking the "why" question. Why wouldn't Podman make installing it easier? And the only thing that makes sense to me is that RedHat doesn't want their dev effort supporting their competitor's products.
That's a perfectly reasonable stance, they owe me nothing. But, it does make me feel that anything not in the RH ecosystem is going to be treated as a second-class citizen. That concerns me more than having to build my own debs.
dathinab
> Why wouldn't Podman make installing it easier?
What else can they do beyond having a package for every distro?
https://podman.io/docs/installation#installing-on-linux
Including instructions to build from source (including for Debian and Ubuntu):
https://podman.io/docs/installation#building-from-source
I don't know about this specific case, but Debian and/or Ubuntu having outdated software is a common Debian/Ubuntu problem which is nearly always caused by Debian/Ubuntu itself (funnily enough, software being outdated in Ubuntu doesn't mean it's outdated in Debian, and the other way around ;=) ).
rsyring
> What else can they do...
They can do what Docker and many other software providers do that are committed to cross OS functionality. They could build packages for those OSes. Example:
https://docs.docker.com/engine/install/ubuntu/#install-using...
The install instructions you link to are relying on the OS providers to build/package Podman as part of their OS release process. But that is notoriously out-of-date.
You could argue, "Not Podman's Problem", and, in one sense, you'd be right. But, again, it leads to the question "Why wouldn't they make it their problem like so many other popular projects have?" and I believe I answered that previously.
kiney
Debian trixie has Podman 5 packages in the official repos. Good chance that those work on Ubuntu.
gm678
Also on Ubuntu 25.04, which I updated a homeserver to despite it not being LTS, just for the easy access to Podman 5. Once Ubuntu 26.04 comes out, the pain described by some sibling comments should end. Podman 4 is a workable version, but 5.0 is where I'd say it really became a complete replacement for Docker and quadlets fully matured.
troyvit
This is my biggest problem too, and it's not just my problem but Podman's problem. Lack of name recognition is big for sure compared to Docker, but to me this version mismatch problem is higher on the list and more sure to keep Podman niche. Distros like Ubuntu always ship with older versions of software, it's sadly up to the maintainer to release newer versions, and Podman just doesn't seem interested in doing that. I don't know if it was their goal but it got me to use some RedHat derivative on my home server just to get a later version.
alyandon
Yeah, the lack of an official upstream .deb that is kept up to date (like the official Docker .deb repos) for Ubuntu really kills using podman for most of my internal use cases.
ramon156
One of the reasons I don't use Ubuntu/Debian is that it's just too damn slow with updates. I'm noticing that to this day it's still an issue.
Yes, I could use Flatpak on Ubuntu, but I feel like this is partly something Ubuntu/Debian should provide out of the box.
alyandon
LTS in general being slow to uptake new versions of software is a feature not a bug. It gives predictability at the cost of having to deal with older versions of software.
With Ubuntu at least, some upstreams publish official PPAs so that you aren't stuck on the rapidly aging versions that Canonical picks when they cut an LTS release.
Debian I found out recently has something similar now via "extrepo".
skydhash
I use debian specifically for things to be kept the same. Once I got things setup, I don’t really want random updates to come and break things.
rsyring
Ubuntu is committed to the Snap ecosystem and there is a lot of software that you can get from a snap if you need it to be evergreen.
ac130kz
That's an Ubuntu issue though, they ship lots of outdated software. Nginx, PHP, PostgreSQL, Podman, etc, the critical software that must be updated asap, even with stable versions they all require a PPA to be properly updated.
miki123211
I've been dealing with setting up Podman for work over the last week or so, and I wouldn't wish that on my worst enemy.
If you use rootless Podman on a Redhat-derived distribution (which means Selinux), along with a non-root user in your container itself, you're in for a world of pain.
Nextgrid
I've never seen the benefit of rootless.
Either the machine is a single security domain, in which case running as root is no issue, or it's not and you need actual isolation in which case run VMs with Firecracker/Kata containers/etc.
Rootless is indeed a world of pain for dubious security promises.
mbreese
One of the major use cases was multi-user HPC systems. Because they can be complicated, it’s not uncommon for bioinformatics data analysis programs to be distributed as containers. Large HPC clusters are multi-tenant by nature, so running these containers needs to be rootless.
There are existing tools that fill this gap (Singularity/Apptainer). But there is always friction when you have to use a specialized tool versus the default. For me, this is a core use case for rootless containers.
For the reduced feature set we need from containers in bioinformatics, rootless is pretty straightforward. You could get largely the same benefits from chroots.
Where I think the issues start is when you start to use networking, subuids, or other features that require root-level access. At this level, rootless becomes a tedious exercise in configuration that probably isn’t worth the effort. The problem is, the features I need will be different from the features you need. Satisfying all users in a secure way may not be worth it.
bbkane
I see your point but I wouldn't let the perfect be the enemy of the good.
If I just want to run a random Docker container, I'm grateful I can get at least "some security" without paying as much in setup/debugging/performance.
Of course, ideally I wouldn't have to choose and the thing that runs the container would be able to run it perfectly securely without me having to know that. But I appreciate any movement in that direction, even if it's not perfect.
pkulak
Rootless is nice because if you mount some directory in, all the files don't end up owned by root. You can get around that by custom building every image so the user has your user id, but that's a pain.
jwildeboer
Sure. Constructing the case to shoot yourself in the foot is not a big problem. But in reality things mostly just work. I’m happily running a bunch of services behind a (nginx) reverse proxy as rootless containers. Forgejo, the forgejo runner to build stuff, uptime-kuma and more on a bunch of RHEL10 machines with SELinux enabled.
preisschild
Do you do OCI/container builds inside your forgejo-runner container?
mfenniak
People having trouble getting this configured is a common issue for self-hosting Forgejo Runner. As a Forgejo contributor, I'm currently polishing up new documentation to try to support people with configuring this; here's the draft page: https://forgejo.codeberg.page/@docs_pull_1421/docs/next/admi...
(Should live at https://forgejo.org/docs/v12.0/admin/actions/docker-access/ once it is finished up, if anyone runs into the comment after the draft is gone.)
marcel_hecko
I have done the same. It's not too bad - just don't rely on LLMs to design your quadlets or systemd unit files. Read the docs for the exact podman version you use and it's pretty okay.
YorickPeterse
Meanwhile it works perfectly fine without any fuss on my two Fedora Silverblue setups. This sounds less like a case of "Podman is suffering by definition" and more a case of a bunch of different variables coming together in a less than ideal way.
prmoustache
How so? I have been using exclusively Podman on Fedora for the most part of the last 7 years or so.
goku12
That surprises me too. Podman is spearheaded by Redhat and Fedora/RHEL was one of the earliest distros to adopt it and phase out docker. Why wouldn't they have the selinux config figured out?
znpy
They have.
Most likely gp is having issues with volumes and hasn’t figured out how to mix the :z and :Z attribute to bind mounts. Or the containers are trying to do something that security-wise is a big no-no.
In my experience SELinux defaults have been much wiser than me, and every time I had issues I ended up learning a better way to do what I wanted to do.
Other than that… it essentially just works.
Insanity
We went through an org wide Docker -> Podman migration and it went _relatively_ smooth. Some hiccups along the way but nothing that the SysDev team couldn't overcome.
ThatMedicIsASpy
SELinux has good errors and all I usually need is :z and :Z on mounts
gm678
Can confirm, have been doing exactly what GP says is a world of pain with no problems as soon as I learned what `:z` and `:Z` do and why they might be needed.
A good reference answer: https://unix.stackexchange.com/questions/651198/podman-volum...
TL;DR: lowercase if a file from the host is shared with a container or a volume is shared between multiple containers. Uppercase in the same scenario if you want the container to take an exclusive lock on the volumes/files (very unlikely).
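Concretely, the labels go on the volume flag. A sketch (image name and paths are placeholders):

```shell
# Shared: relabel so any container can read it (e.g. config used by several)
podman run -v ./shared-config:/etc/app:z,ro registry.example/app
# Private: exclusive label for this one container's data directory
podman run -v ./data:/var/lib/app:Z registry.example/app
```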
sigio
I've set up a few podman machines (on Debian), and generally liked it. I've been struggling for 2 days now to get a k8s server up, but that's not giving me any joy. (It doesn't like our nftables setup.)
xrd
I love podman, and, like others have said here, it does not always work with every container.
I often try to run something using podman, then find strange errors, then switch back to docker. Typically this is with some large container, like gitlab, which probably relies on the entirety of the history of docker and its quirks. When I build something myself, most of the time I can get it working under podman.
This situation where any random container does not work has forced me to spin up a VM under incus and run certain troublesome containers inside that. This isn't optimal, but keeps my sanity. I know incus now permits running docker containers and I wonder if you can swap in podman as a replacement. If I could run both at the same time, that would be magical and solve a lot of problems.
There definitely is no consistency regarding GPU access in the podman and docker commands and that is frustrating.
But, all in all, I would say I do prefer podman over docker and this article is worth reading. Rootless is a big deal.
nunez
I presume that the bulk of your issues are with container images that start their PID 1s as root. Podman is rootless by default, so this causes problems.
What you can do if you don't want to use Docker and don't want to maintain these images yourself is have two Podman machines running: one in rootful mode and another in rootless mode. You can, then, use the `--connection` global flag to specify the machine you want your container to run in. Podman can also create those VMs for you if you want it to (I use lima and spin them myself). I recommend using --capabilities to set limits on these containers namespaces out of caution.
Podman Desktop also installs a Docker compatibility layer to smooth over these incompatibilities.
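Roughly, the two-machine setup looks like this (the connection names here are made up; `--connection` is a real global flag):

```shell
podman system connection list    # shows the configured machines/connections
podman --connection rootful run -d --name needs-root some/image
podman --connection rootless run -d --name unprivileged some/image
```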
xrd
This is terrific advice and I would happily upvote a blog post on this! I'll look into exactly this.
gorjusborg
> I love podman, and, like others have said here, it does not always work with every container.
Which is probably one of the motivations for the blog post. Compatibility will only be there once a large enough share of users use podman that it becomes something that is checked before publish.
firesteelrain
Weird, we run GitLab server and runners all on podman. Honestly I wish we would switch to putting the runners in k8s. But it works well. We use Traefik.
xrd
Yeah, I had it running using podman, but then had some weird container restarts. I switched back to docker and those all went away. I am sure the solution is me learning more and troubleshooting podman, but I just didn't spend the time, and things are running well in an isolated VM under docker.
That's good to know it works well for you, because I would prefer not to use docker.
dathinab
In my experience, (at least rootless) Podman enforces resource limits much more strictly.
We had some similar issues and they were due to containers running out of resources (mainly RAM/memory, by a lot, but only for a small amount of time). And it happens that rootless Podman correctly detected and enforced this, but non-rootless Docker (in that case on a Mac dev laptop) didn't detect these resource spikes, and hence things "happened to work" even though they shouldn't have.
k_roy
I use a lot of `buildx` stuff. It ostensibly works in podman, but in practice, I haven't had much luck
mrighele
> If your Docker Compose workflow is overly complex, just convert it to Kubernetes YAML. We all use Kubernetes these days, so why even bother about this?
I find that Kubernetes YAML is a lot more complex than Docker Compose. And while I do use Kubernetes, no, not everybody does.
esseph
Having an LLM function as a translation layer from docker compose to k8s yaml works really well.
On another note, podman can generate k8s yaml for you, which is a nice touch and easy way to transition.
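The generation side is a one-liner (`podman kube generate`, formerly `podman generate kube`), and the result can be replayed with `podman kube play`. A sketch:

```shell
podman run -d --name web -p 8080:80 nginx
podman kube generate web > web.yaml   # emits a Kubernetes Pod manifest
# on another host (or after removing the original container):
podman kube play web.yaml             # or feed web.yaml to kubectl apply -f
```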
politelemon
"Use an LLM" is not a solution. It's effectively telling you to switch your brain off and hope nothing goes wrong in the future. In reality things do go wrong, and any conversion should be done with a good understanding of the system involved.
hallway_monitor
While I agree with this concept, I don't think it is applicable here. Docker compose files and k8s yaml are basically just two different syntaxes, saying the same thing. Translating from one syntax to another is one of the best use cases for an LLM in my opinion. Like anything else you should read it and understand it after the machine has done the busy work.
SoftTalker
When things go wrong, you just ask the LLM about that too. It's 2025.
/s
IHLayman
You don’t need an LLM for this. Use `kubectl` to create a simple pod/service/deployment/ingress/etc, run `kubectl get -o yaml > foo.yaml` to bring it back to your machine in yaml format, then edit the `foo.yaml` file in your favorite editor, adding the things you need for your service, and removing the things you don’t, or things that are automatically generated.
As others have said, depending on an LLM for this is a disaster because you don’t engage your brain with the manifest, so you aren’t immediately or at least subconsciously aware of what is in that manifest, for good or for ill. This is how bad manifest configurations can drift into codebases and are persisted with cargo-cult coding.
[edit: edit]
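A related trick that skips the round-trip through the cluster: most `kubectl create` subcommands accept `--dry-run=client -o yaml`, so you can scaffold a manifest without creating anything. A sketch (names and image are placeholders):

```shell
kubectl create deployment web --image=nginx:1.27 \
  --dry-run=client -o yaml > deployment.yaml
kubectl create service clusterip web --tcp=80:80 \
  --dry-run=client -o yaml > service.yaml
```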
esseph
> You don't need an LLM for this
I guess that depends on how many you need to do
BTW, I'm talking about docker/compose files. kubectl doesn't have a conversion there. When converting from podman, it's super simple.
Docker would be wise to release their own similar tool.
Compose syntax isn't that complex, nor would it take advantage of many k8s features out of the box, but it's a good start for a small team looking to start to transition platforms
(Have been running k8s clusters for 5+ years)
hamdingers
This assumes everyone who wants to run containers via podman has kubectl and a running cluster to create resources in which is a strange assumption.
osigurdson
I don't know how to create a compose file, but I do know how to create a k8s yaml. Therefore, compose is more "complex" for me.
0_gravitas
This is a conflation of "Simple" and "Easy" (rather, "complex" and "hard"). 'Simple vs Complex' is more or less objective, 'Easy vs Hard' is subjective, and changes based on the person.
And of course, Easy =/= Simple, nor the other way around.
hamdingers
I'm a CKA and use docker compose exclusively in my homelab. It's simpler.
diarrhea
One challenge I have come across is mapping multi-UID containers to a single host user.
By default, root in the container maps to the user running the podman container on the host. Over the years, applications have adopted patterns where containers run as non-root users, for example www-data aka UID 33 (Debian) or just 1000. Those no longer map to your own user on the host, but to subordinate IDs. I wish there were an easy way to just say "ALL container UIDs map to a single host user". The uidmap and userns options did not work for me (crun failed to execute those containers).
I don’t see the use case for mapping to subordinate IDs. It means those files are orphaned on the host and do not belong to anyone, when used via volume mapping?
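For reference (the parent notes these options failed under crun in their setup), the form Podman documents for this case is `--userns=keep-id`, which since Podman 4.3 can also pin a specific in-container UID/GID to your host user. A sketch with placeholder image and paths:

```shell
# Map your host user to UID 33 (www-data) inside the container,
# so files it writes to the volume stay owned by you on the host.
podman run --userns=keep-id:uid=33,gid=33 \
  -v ./site:/var/www/html:Z registry.example/web
```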
mixedbit
If I understand things correctly, this is a Linux namespaces limitation, so tools like Docker or Podman will not be able to support such mapping without support from Linux. But I'm afraid the requirement for UIDs to be mapped 1:1 is fundamental. Otherwise, say two container users, 1000 and 0, are mapped to the same host user 1000: who then should be displayed in the container as the owner of a file that is owned by user 1000 on the host?
privatelypublic
Have you looked at idmapped mounts? I don't think it'll fix everything (only handles FS remapping, not kernel calls that are user permissioned)
diarrhea
I have not, thanks for the suggestion though.
A second challenge with the particular setup I’m trying is peer authentication with Postgres, running bare metal on the host. I mount the Unix socket into the container, and on the host Postgres sees the Podman user and permits access to the corresponding DB.
Works really well but only if the container user is root so maps natively. I ended up patching the container image which was the path of least resistance.
teekert
This. And then some way to just be “yourself” in the container as well. So logs just show “you”.
lights0123
ignore_chown_errors will allow mapping root to your user ID without any other mappings required.
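That's a storage option, set in `containers/storage.conf` (system-wide under `/etc/containers/`, per-user under `~/.config/containers/`). A sketch:

```toml
# storage.conf
[storage.options.overlay]
# Pretend in-container chown succeeded instead of failing when the
# target UID/GID has no mapping; files stay owned by your user.
ignore_chown_errors = "true"
```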
raquuk
The "podman generate systemd" command from the article is deprecated. The alternative are Podman Quadlets, which are similar to (docker-)compose.yaml, but defined in systemd unit files.
stingraycharles
Which actually makes a lot of sense, to hand over the orchestration / composing to systemd, since it’s not client <> server API calls (like with docker) anymore but actual userland processes.
Cyph0n
Yep. It works even better on a declarative distro like NixOS because you can define and extend your systemd services (including containers) from a single config.
Taking this further (self-plug), you can automatically map your Compose config into a NixOS config that runs your Compose project on systemd!
solarkraft
It totally does! On the con side, I find systemd unit files a lot less ergonomic to work with than compose files that can easily be git-tracked and colocated.
mariusor
What makes a systemd service less ergonomic? I guess it needs a deployment step to place it into the right places where systemd looks for them, but is there anything else?
broodbucket
With almost no documentation, mind
raquuk
I find the man page fairly comprehensive: https://docs.podman.io/en/latest/markdown/podman-systemd.uni...
tux1968
Is linking to a 404 page meant to highlight the lack of docs, or is there some mistake?
nunez
I started working at Red Hat this past year, so obviously all Podman, all day long. It's a super easy switch. I moved to using Containerfiles in my LinkedIn courses as well, if for no other reason than their more "open" naming convention!
Rootless works great, though there are some (many) images that will need to be tweaked out of the box.
Daemonless works great as well. You can still mount podman.sock like you can Docker's docker.sock, but systemd socket activation creates the UNIX socket and starts the service on the first connect(), which is a much better solution than a daemon holding the socket open persistently.
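Under the hood this is ordinary systemd socket activation; the socket unit podman ships looks roughly like this (paraphrased sketch, not verbatim):

```ini
# podman.socket (sketch): systemd owns the listening socket and
# only starts the matching podman.service on the first connection.
[Unit]
Description=Podman API Socket

[Socket]
# %t is the runtime dir, e.g. /run/user/1000 for a user unit
ListenStream=%t/podman/podman.sock
SocketMode=0660

[Install]
WantedBy=sockets.target
```

Enable it with `systemctl --user enable --now podman.socket`, and anything expecting a Docker-style API socket can point at the resulting path.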
The only thing that I prefer Docker for is Compose. Podman has podman-compose, which works well and is much leaner than the incumbent, but it's kind of a reverse-engineered version of Docker Compose that doesn't support the full spec. (I had issues with service conditions, for example).
mdaniel
> that doesn't support the full spec
I'd guess that's because "the spec" is more of a .jsonschema than a spec about what behaviors any random version should produce. And I say "version" because they say features "were introduced in version $foo", but they also now go out of their way to say that declaring a version in the file itself just produces a warning.
0xbadcafebee
If "security" is the reason you're switching to Podman, I have some bad news.
Linux gets a new privilege escalation exploit like once a month. If something can break out of the Docker daemon, it can break out of your own user account just fine. Running as a non-root user does not make you secure, regardless of whatever containerization feature claims to add security in your own user namespace. On top of all that, Docker has a rootless mode. https://docs.docker.com/engine/security/rootless/
The only things that will make your system secure are 1) hardening every component in the entire system, or 2) virtualization. No containers are secure. That's why cloud providers all use mini-VMs to run customer containers (e.g. AWS Fargate) or force the customer to manage their own VMs that run the containers.
dktalks
If you are on a Mac, I have been using OrbStack[1] and it has been fantastic. I spin up a few containers there, but my biggest use is just spinning up Alpine Linux and then running most of my Docker containers in there.
ghrl
I use OrbStack too and think it's great software, both for running containers and stuff like having a quick Alpine environment. However, I don't see the point of running Docker within Alpine. Wouldn't that defeat the optimizations they have done? What benefits do you get?
dktalks
Many Docker images are Alpine-based and optimized for it. You get the benefit of running them on Alpine itself.
dktalks
Setup is really easy once you install Alpine:

1. ssh orb (or the machine name if you have multiple)

2. sudo apk add docker docker-cli-compose (install Docker)

3. sudo addgroup <username> docker (add your user to the docker group)

4. sudo rc-update add docker default (start Docker on boot)
Bonus, add lazydocker to manage your docker containers in a console
1. sudo apk add lazydocker
classified
You mean, you let Docker containers run inside the OrbStack container, or how does that work?
dktalks
No, you don't run the Docker containers directly in OrbStack; you spin up an Alpine instance and run all the Docker containers on it.
The benefit is that Alpine has access to all your local and network drives, so you can use them. You can sandbox them as well. It's not a big learning curve; it's just a good VM with access to all drives but isolated to the local machine.
dktalks
And you can run Docker inside OrbStack too; it is really good. But most of my containers are optimized Alpine containers, so I prefer to run them on the OS they were built for, and the others in OrbStack.
Tajnymag
I've wanted to migrate multiple times. Unfortunately, it failed in multiple places.

Firstly, podman had much worse performance compared to docker on my small cloud VPS. Can't really go into details though.

Secondly, the development ecosystem isn't really fully there yet. Many tools utilizing Docker via its socket fail to work reliably with podman, either because the API differs or because of permission limitations. Sure, the tools could probably work around those limitations, but they haven't, and podman isn't a direct 1:1 drop-in replacement.
bonzini
> podman had a much worse performance compared to docker on my small cloud vps. Can't really go into details though.
Are you using rootless podman? Then network redirection is done using user-mode networking, which has two modes: slirp4netns, which is very slow, and pasta, which is newer and fast.
Docker is always set up from the privileged daemon; if you're running podman from the root user there should be no difference.
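If you need to pin the backend explicitly, rootless networking can be selected in `containers.conf` (the option name is real; pasta is already the default on recent Podman versions):

```ini
# ~/.config/containers/containers.conf (sketch)
[network]
# Rootless network command: "pasta" (fast, newer)
# or "slirp4netns" (older, noticeably slower)
default_rootless_network_cmd = "pasta"
```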
Tajnymag
Well, yes, but rootless is basically the main selling point of podman. Once you start using daemons and privileged containers, you can just keep using docker.
bonzini
No, the main selling point is daemonless. For example, you put podman in a systemd unit and you can stop/start with systemctl without an external point of failure.
Comparing root docker with rootless podman performance is apples to oranges. However, even for rootless, pasta does have good performance.
anilakar
SELinux-related permission errors are an endless nuisance with podman and quadlet. If you want to sandbox just about anything, it's easier to create a pod with full host permissions and the necessary /dev/ files mounted, running a simple program that exposes minimal functionality over an isolated container network.
Aluminum0643
Udica, plus maybe ausearch | audit2allow -C, makes it easy to generate SELinux policies for containers (works great for me on RHEL10-like distros)
https://www.redhat.com/en/blog/generate-selinux-policies-con...
seemaze
That's funny, podman had better performance and less resource usage on my resource-constrained system. I chalked it up to crun vs runc, though docker and podman both support configuring alternate runtimes. Plus no daemon.
To provide one contrary opinion to all the others saying they have a problem:
Podman rocks for me!
I find docker hard to use and full of pitfalls and podman isn't any worse. On the plus side, any company I work for doesn't have to worry about licences. Win win!