docker2exe: Convert a Docker image to an executable

Epskampie

> Requirements on the executing device: Docker is required.

arjvik

Good friend built dockerc[1] which doesn't have this limitation!

[1]: https://github.com/NilsIrl/dockerc

hnuser123456

That screenshot in the readme is hilarious. Nice project.

ecnahc515

Instead it requires QEMU!

remram

I can't tell what this does from the readme. Does it package a container runtime in the exe? Or a virtual machine? Something else?

vinceguidry

Looks like MacOS and Windows support is still being worked on.

ugh123

lol guy makes a fair point. Open source software suffers from this expectation that anyone interested in the project must be technical enough to be able to clone, compile, and fix the inevitable issues just to get something running and usable.

Hamuko

I'd say that a lot of people suffer from the expectation that just because I made a tool for myself and put it up on GitHub in case someone else would also enjoy it, I'm now obligated to provide support for them. Especially when the person in the screenshot is angry over the lack of a Windows binary.

dowager_dan99

Thank goodness; solving this "problem" for the general internet destroyed it. Your point seems to be someone else should do that for every stupid asshole on the web?

dheera

But will this run inside another docker container?

I normally hate things shipped as containers because I often want to use it inside a docker container and docker-in-docker just seems like a messy waste of resources.

vinceguidry

Docker in Docker is not a waste of resources; you just make the same container runtime the container is running on available to it. Really a better solution than a control plane like Kubernetes.
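
(The usual trick is to share the host's Docker socket rather than running a second daemon inside the container; a minimal sketch of that socket-sharing approach, sometimes called docker-outside-of-docker:)

    # hand the host daemon's socket to the container; the inner client talks to the outer daemon
    docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
    # inside that shell, `docker ps` lists the host's containers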

remram

Docker is not emulation so there's no waste of resources.

rcfox

Doesn't podman get around a lot of those issues?

harha_

Yeah, it feels like nothing but a little trick. Why would anyone want to actually use this? The exe simply calls docker; it can embed an image into the exe, but even then it first calls docker to load the embedded image.

jve

I see a use case. The other day I wished that I could pack CLI commands as docker containers and execute them as CLI commands and get return codes.

I haven't tried this stuff, but maybe this is something in that direction.
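
(docker run already propagates the container's exit status, so a thin wrapper gets you return codes for free; a quick check:)

    docker run --rm alpine sh -c 'exit 3'
    echo $?   # prints 3: the container's exit code becomes docker run's exit code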

lelanthran

> I see a use case. The other day I wished that I could pack CLI commands as docker containers and execute them as CLI commands and get return codes

I don't understand this requirement/specification; presumably this use-case will not be satisfied by a shell script, but I don't see how.

What are you wanting from this use-case that can't be done with a shell script?

matsemann

I do that for a lot of stuff. Got a bit annoyed with internal tools that were so difficult to set up (needed this exact version of global Python, expected this and that to be in the path, constantly needed to be updated and then stuff broke again). So I built a docker image instead where everything is managed, and when I need to update or change stuff I can do it from a clean slate without affecting anything else on my computer.

To use it, it's basically just scripts loaded into my shell. So if I do "toolname command args" it will spin up the container, mount the current folder and some config folders some tools expect, forward some ports, then pass the command and args to the container which runs them.

99% of the time it works smoothly. The annoying part is when some tool depends on some other tool on the host machine, for instance when it wants to do some git stuff. I then have to have git installed and my keys copied in as well.
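
(Roughly the kind of wrapper described above; the image name, port and config paths are made up for illustration:)

    # hypothetical shell function loaded into the shell's rc file
    toolname() {
      docker run --rm -it \
        -v "$PWD":/work -w /work \
        -v "$HOME/.config/toolname":/root/.config/toolname \
        -p 8000:8000 \
        internal/toolname:latest "$@"
    }
    # usage: toolname command args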

johncs

Basically the same as Python’s zipapps which have some niche use cases.

Before zipapp came out I built superzippy to do it. Needed to distribute some Python tooling to users in a university where everyone was running Linux on lab computers. Worked perfectly for it.

j45

Could be ease of use for end users who don't docker.

worldsayshi

But now you have two problems.

alumic

I was so blown away by the title and equally disappointed to discover this line.

Pack it in, guys. No magic today.

stingraycharles

Thank god there’s still this project that can build single executables that work on multiple OS’es, I’m still amazed by that level of magic.

cozyman

[flagged]

Hamuko

I feel like it's much easier to send a docker run snippet than an executable binary to my Docker-using friends. I usually try to include an example `docker run` and/or Docker Compose snippet in my projects too.
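
(Something like this at the top of a README goes a long way; the image name, port and mount here are made up for illustration:)

    docker run --rm -p 8080:8080 -v "$PWD/data":/data ghcr.io/example/mytool:latest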

drawfloat

Is there any alternative way of achieving a similar goal (shipping a container to non-technical customers that they can run as if it were an application)?

regularfry

It feels like there ought to be a way to wrap a UML kernel build with a container image. Never seen it done, but I can't think of an obvious reason why it wouldn't work.

mrbluecoat

See the dockerc comment above

dennydai

Just use shebang

https://news.ycombinator.com/item?id=38987109

#!/usr/bin/env -S bash -c "docker run -p 8080:8080 -it --rm \$(docker build --progress plain -f \$0 . 2>&1 | tee /dev/stderr | grep -oP 'sha256:[0-9a-f]*')"

cess11

That's bat guano insane, but I still like it more than TFA.

renewiltord

It's not that crazy. Another fun use is with a `uv run` shebang https://news.ycombinator.com/item?id=42855258

tomjakubowski

Note to others who might like to write long shebangs: the -S argument there to /usr/bin/env is load-bearing, and if you forget it weird stuff will happen, at least on most Linux systems. I wrote about it a few years ago, based on a true story. https://crystae.net/posts/two-shebang-papercuts/
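
(To illustrate, the two candidate first lines below behave very differently on most Linux systems:)

    #!/usr/bin/env -S bash -c "echo hi"   # -S makes env split the rest into "bash", "-c", "echo hi"
    #!/usr/bin/env bash -c "echo hi"      # without -S, the kernel hands env the whole string 'bash -c "echo hi"' as one argument, which it cannot exec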

rullopat

It's great for sending your 6 GB hello world exe to your friends I suppose

xandrius

The beauty of docker is that it is a reflection of how much someone cares about deployments: do you care about being efficient? You can use `scratch` or `X-alpine`. Do you simply not care and just want things to work? Always go for `ubuntu` and you're good to go!

You can have a full and extensive API backend in Go with a total image size of 5-6 MB.
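
(A rough sketch of how those images come about, assuming a Go module with a main package at the repository root:)

    cat > Dockerfile <<'EOF'
    # build stage: compile a static binary
    FROM golang:1.22-alpine AS build
    WORKDIR /src
    COPY . .
    RUN CGO_ENABLED=0 go build -ldflags='-s -w' -o /app .
    # final stage: nothing but the binary
    FROM scratch
    COPY --from=build /app /app
    ENTRYPOINT ["/app"]
    EOF
    docker build -t tiny-api .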

hereonout2

I've done both, from tiny scratch-based images with a single Go binary to full-fat Ubuntu-based things.

What is killing me at the moment is deploying Docker based AI applications.

The CUDA base images come in at several GB to start with, then typically a whole host of Python dependencies gets added, with things like PyTorch contributing almost a GB of binaries.

Typically the application code is tiny as it's usually just Python, but then you have the ML model itself. These can be many GB too, so you need to decide whether to add it to the image or mount it as a volume; regardless, it needs to make its way onto the deployment target.
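
(For the mount-as-a-volume route, something along these lines; the paths and the --model-path flag are hypothetical:)

    # keep the multi-GB weights outside the image and mount them read-only at run time
    docker run --rm --gpus all \
      -v /srv/models/my-model:/models:ro \
      my-ai-app:latest --model-path /models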

I'm currently delivering double-digit-GB docker images to different parts of my organisation, which raises eyebrows. I'm not sure of a way around it though; it's less a Docker problem and more an AI/CUDA issue.

Docker fits current workflows but I can't help feeling having custom VM images for this type of thing would be more efficient.

kevmo314

PyTorch essentially landed on the same bundling CUDA solution, so you're at least in good company.

endofreach

> You can have a full and extensive api backend in golang, having a total image size of 5-6MB.

So people are building docker "binaries" that depend on docker being installed on the host, to run a container inside a container on the host (or even better, on a non-Linux host, where all of that then runs in a VM)... just... to run a Go application that is... already compiled to a binary?

xandrius

Sure but a Docker setup is more than just running the binary. You have setup configs, env vars, external dependencies, and all executed in the same way.

Of course you can do it directly on the machine but maybe you don't need containers then.

In the same vein: people put stuff within a box, which is then put within another bigger box, inside a metal container, on top of another floating container. Why? Well, for some that's convenient.

anthk

Golang should not need docker. It's statically built.

hereonout2

Docker / containers are more than just that though. Using it allows your golang process to be isolated and integrated into the rest of your tooling, deployment pipelines, etc.

cik

It sounds like docker export and makeself combined. We already ship prebuilt containers to select customers exactly this way.
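
(Roughly like this, using docker save here so the image can be docker loaded on the other end; names are made up and makeself is assumed to be installed:)

    mkdir bundle
    docker save myapp:latest -o bundle/myapp.tar      # snapshot the image
    cat > bundle/run.sh <<'EOF'
    #!/bin/sh
    docker load -i myapp.tar                          # still requires Docker on the customer's machine
    exec docker run --rm -it myapp:latest "$@"
    EOF
    chmod +x bundle/run.sh
    makeself.sh bundle myapp.run "myapp" ./run.sh     # produces a self-extracting myapp.run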

aussieguy1234

On Linux, there would be little to no performance penalty to something like this, since Docker is just fancy chroot, reusing the same kernel as the host.

But not on other platforms, where Docker runs the containers inside a Linux VM.
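
(Easy to see on a Linux host: the container reports the host's kernel version:)

    uname -r                          # host kernel
    docker run --rm alpine uname -r   # same version string; the container shares the host kernel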

ransom1538

Ah finally. We have finished where we started.

blueflow

That was my first thought. Back in the day you gave your friends a stand-alone *.COM program on a floppy. We have come full circle on static linking.

kkapelon

This is just a simpler wrapper over the docker executable that you need to have installed anyway.

rietta

I remember thinking that the Visual Basic runtime was unacceptable bloat overhead and now this. Cool work though. Also reminds me of self extracting WinZip files.

sitkack

At some point in the future we will be nostalgic for the monstrosities of the present.

int_19h

"ChatGPT, execute this natural language description of what the program should do"

int_19h

I remember those times as well. There was an amusing (in retrospect) period in the late 90s and early 00s where one of the metrics for RAD tools of the day was how large a hello world type app is. Delphi was so popular then in part because it did very well on that metric: the baseline was on the order of 300 KB, if I remember correctly, and you could have fairly complicated apps under 1 MB. Visual Basic was decidedly meh on that count because between your EXE and MSVBVM60.DLL, it wouldn't fit on a single 1.44 MB floppy.

hda111

Why? It would be easier to embed both podman and the image in one executable to create a self-contained file. No docker needed.

nine_k

Tired: docker run.

Wired: docker2exe.

Inspired: AppImage.

(I'll show myself out.)

arjav0703

This is useful if you want to share your container (probably something that is prod-ready) with someone who knows nothing about docker. A use case would be: you built custom software for someone's business and they are the only ones using that particular container.

fifilura

Docker is mostly backend, but I wonder how far we are from universally executable native applications?

I.e. download this linux/mac/windows application to your windows/linux/mac computer.

Double-click to run.

Seems like all bits and pieces are already there, just need to put them together.

Piskvorrr

The devil is in the details.

What do you mean, "requires Windows 11"? What is even "glibc" and why do I need a different version on this Linux machine? How do I tell that the M4 needs an "arm64", why not a leg64 and how is this not amd64?

In other words, it's very simple in theory - but the actual landscape is far, FAR more fragmented than a mere "that's a windows/linux/mac box, here's a windows/linux/mac executable, DONE"

(And that's for an application without a GUI.)

fifilura

Yes, it is difficult, but difficult problems have been solved before.

With dependency management systems, docker, package managers.

macOS and Windows are closed source, and that is of course a problem; I guess the first demo would be a universally runnable Linux executable on Windows.

int_19h

> I guess the first demo would be universally runnable linux executable on Windows.

The other way around is easier, and already exists thanks to Wine and the ability of Linux kernel to register custom executable formats (https://docs.kernel.org/admin-guide/binfmt-misc.html)
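
(For example, the registration from the kernel docs that hands PE executables to Wine; this assumes Wine is installed at /usr/bin/wine and binfmt_misc is mounted:)

    # any file starting with the 'MZ' magic bytes gets run through Wine
    echo ':DOSWin:M::MZ::/usr/bin/wine:' | sudo tee /proc/sys/fs/binfmt_misc/register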

Piskvorrr

I have been trying. As I may not have been entirely clear the first time:

It's not that hard to wrap your python/java/whatever app in a polyglot executable that will run on your Linux box, on your Mac, and on your Windows box. Here's a much harder target: "I would like to take this to any of such boxes, of reasonably vanilla config, and get it to run there, or at least crawl. 'Start and catch fire' doesn't count, 'exit randomly' doesn't count." The least problematic way to do this is "assume Java", and even that is wildly unsuccessful (versions and configs and JVMs, oh my!). The second least problematic is "webpage" (unless you are trying to interact with any hardware).

The differences in boxes within an OS are often as large as differences across OSes. Docker was supposed to help with this by "we'll ship your box then," and while the idea works great, the assumption "there's already a working Docker, and/or you can just drop a working Docker" is...not great: you just push everything up a level of abstraction, yet end up with the original problem unsolved and unchanged. (There's an actual solution "ship the whole box, hardware and everything," but the downsides are obvious)

lucasoshiro

> universally executable native applications

To achieve that you'll need some kind of compatibility layer. Perhaps something like wine? Or WSL? Or a VM?

Then you'll have what we already have with the JVM and similar.

woodrowbarlow

https://justine.lol/ape.html -- αcτµαlly pδrταblε εxεcµταblε

this works for actual compiled code. no vm, no runtime, no interpreter, no container. native compiled machine code. just download and double-click, no matter which OS you use.
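
(for example, with the cosmopolitan toolchain; a minimal sketch assuming cosmocc is on PATH:)

    cosmocc -o hello.com hello.c   # links against cosmopolitan libc, emitting an Actually Portable Executable
    ./hello.com                    # the very same file also runs on macOS, Windows and the BSDs when copied there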

Piskvorrr

"Please note this is intended for people who don't care about desktop GUIs, and just want stdio and sockets without devops toil."

woodrowbarlow

cosmopolitan-libc has aspirations (but not concrete plans) to add SDL interfaces for all supported platforms. this would allow APE executables to compile in cross-platform UI toolkits like Qt.

ivewonyoung

How different would that be from Flatpak?

fifilura

Does it make Linux applications run on Windows or Mac?
