
Bunster: Compile bash scripts to self-contained executables

skulk

If you want portable shell-scripts that come with their dependencies bundled, Nix also has a solution: writeShellApplication[0] (and simpler ones like writeShellScript).

    writeShellApplication {
      name = "show-nixos-org";

      runtimeInputs = [ curl w3m ];

      text = ''
        curl -s 'https://nixos.org' | w3m -dump -T text/html
      '';
    }
writeShellApplication will call shellcheck[1] on your script and fail to build if there are any issues reported, which I think is the only sane default.

[0]: https://nixos.org/manual/nixpkgs/stable/#trivial-builder-wri...

[1]: https://www.shellcheck.net/

samtheprogram

So it compiles to a single executable that I can send to someone who isn’t on Nix?

Because if I wanted a portable shell script, I’d just write shell and check if something is executable in my path.

This just looks like Nix-only stuff that exists in an effort to be ultra declarative, and in order to use it you’d need to be on Nix.

skulk

There is nix-bundle (which I admittedly have never had a reason to use)

https://github.com/nix-community/nix-bundle

azeirah

Nix is the best.

If you're reading this and wondering how you can use this yourself:

You don't need NixOS at all. You can install Nix on any Linux-like system, including macOS.

rounce

Well, you're still leaning on Nix to provide the dependencies. All `writeShellApplication` does is prepend the `bin` directories of the provided `runtimeInputs` to `PATH`; it still just spits out a bash script, not a binary that includes bash, the script, and the other dependencies. I reckon it's quite possible to lean on Nix to produce an all-in-one binary, though.
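
Roughly, the file it produces looks something like this (store paths illustrative):

    #!/nix/store/<hash>-bash-5.2/bin/bash
    set -o errexit
    set -o nounset
    set -o pipefail

    # runtimeInputs become a PATH prefix; nothing is bundled.
    export PATH="/nix/store/<hash>-curl/bin:/nix/store/<hash>-w3m/bin:$PATH"

    curl -s 'https://nixos.org' | w3m -dump -T text/html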

skulk

As mentioned in another comment, there are ways to bundle Nix derivations into standalone run-on-any-Linux binaries: https://github.com/nix-community/nix-bundle

rounce

Thanks! I suspected something like this already existed, but I didn't find anything from 30s of web search.

johnvaluk

Is it possible to override shellcheck? It's a valuable tool that I use all the time, but it reports many false positives. It's not unusual for junior developers to introduce bugs in scripts because they blindly follow the output of shellcheck.

nerflad

A comment before the problematic line can specify options to shellcheck, e.g.

    # shellcheck disable=SC2086

which remain in effect for that block.

Of course, disabling the linter should be done with deliberation...
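
For instance, SC2086 flags unquoted expansions; when the word splitting is actually intended, a targeted disable keeps the rest of the script checked (a minimal sketch):

    #!/usr/bin/env bash
    flags="-l -h"
    # Splitting $flags into separate words is deliberate here.
    # shellcheck disable=SC2086
    ls $flags /tmp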

gchamonlive

I still haven't come around to using Nix in my daily workflow. My concerns are the high entry bar, obscure errors, and breaking changes, but also excessive storage use, either because that's just how it works or because I won't know how to manage the store well.

How's Nix these days? How long would you expect someone with years of Linux management experience (bash, Ansible, Terraform, you name it, either on-prem or in the cloud) to take to get comfortable with Nix? And what would be the best roadmap to start introducing Nix slowly into my workflow?

epic9x

Start by using home-manager in your current environment. Once you can modularize your own config, start building other systems with it. It's a very deep rabbit hole, and starting off as a replacement for managing your own dotfile scripts and the like is a great way to try it out without having to replace whole systems.
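
As a hedged illustration, a first `home.nix` that replaces a couple of dotfiles might look like this (the username, paths, and packages are placeholders):

    { config, pkgs, ... }:
    {
      home.username = "alice";
      home.homeDirectory = "/home/alice";
      home.stateVersion = "24.05";

      # Tools you'd otherwise install imperatively.
      home.packages = [ pkgs.ripgrep pkgs.jq ];

      # Home-Manager generates your git config from this.
      programs.git = {
        enable = true;
        userName = "Alice Example";
        userEmail = "alice@example.com";
      };
    }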

rounce

I'd say start even smaller, by making a simple `flake.nix` with a devShell output within a project and using it to manage the project's dependencies; that way you're experiencing it within a fairly constrained, opt-in environment. Nix is simple when you 'get it', but it can be quite overwhelming for someone new to it. Home-Manager is pretty big and has regions of complexity, and while it might be a good candidate for daily-driving Nix without running NixOS, IMO it's best to start really small.
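
Something like this is enough to get going (the system and packages are placeholders); `nix develop` then drops you into a shell with the declared dependencies on `PATH`:

    {
      inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";

      outputs = { self, nixpkgs }:
        let
          pkgs = nixpkgs.legacyPackages.x86_64-linux;
        in {
          devShells.x86_64-linux.default = pkgs.mkShell {
            # The project's dependencies, pinned by flake.lock.
            packages = [ pkgs.curl pkgs.jq pkgs.shellcheck ];
          };
        };
    }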

randall

omg i love nix so much.

Phlogistique

The README fails to address the elephant in the room, which is that shell scripts usually spend most of their time calling external commands; as far as I can tell, there is no documentation of which built-ins are supported.

That said, in a similar vein, you could probably create a bundler that takes a shell script and bundles it with busybox to create a static program.

mkesper

Busybox commands often don't support all the features you're used to, and can differ substantially if you depend on GNU extensions. https://www.busybox.net/about.html

zamalek

I assume this is what they are talking about here:

> Standard library: we aim to add first-class support for a variety of frequently used/needed commands as builtins. you no longer need external programs to use them.

That's not going to be an easy task; it would basically entail porting those commands to Go.
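
To give a sense of the scale, here is what a hypothetical `cat` builtin could look like in Go (illustrative only; this is not Bunster's actual code):

    package main

    import (
        "io"
        "os"
    )

    // builtinCat mimics `cat`: it concatenates the named files (or stdin
    // when no names are given) to stdout. A transpiler would call a
    // function like this instead of exec'ing /bin/cat.
    func builtinCat(stdin io.Reader, stdout io.Writer, args []string) error {
        if len(args) == 0 {
            _, err := io.Copy(stdout, stdin)
            return err
        }
        for _, name := range args {
            f, err := os.Open(name)
            if err != nil {
                return err
            }
            _, err = io.Copy(stdout, f)
            f.Close()
            if err != nil {
                return err
            }
        }
        return nil
    }

    func main() {
        if err := builtinCat(os.Stdin, os.Stdout, os.Args[1:]); err != nil {
            os.Stderr.WriteString("cat: " + err.Error() + "\n")
            os.Exit(1)
        }
    }

And that's just the trivial case: flags, locale behavior, and GNU extensions are where the real porting effort would go.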

nodesocket

I wondered this as well. How is something like "cat file.json | jq '.filename' | grep out.txt" implemented in Go?

beepbooptheory

I haven't looked at the code, but I assume this just takes care of things like pipes, loops, variables, conditionals, etc., and leaves the actual binaries like jq as stubs assumed to be there. It's abstracting the shell, not the programs you run in the shell.
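
Under that assumption, the pipeline above might transpile to something like this `os/exec` plumbing (a sketch, not Bunster's actual output):

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // cat file.json | jq '.filename' | grep out.txt
        // The pipe structure becomes Go plumbing; the programs stay
        // external and are resolved via PATH, just as in a shell.
        cat := exec.Command("cat", "file.json")
        jq := exec.Command("jq", ".filename")
        grep := exec.Command("grep", "out.txt")

        catOut, err := cat.StdoutPipe()
        if err != nil {
            os.Exit(1)
        }
        jq.Stdin = catOut
        jqOut, err := jq.StdoutPipe()
        if err != nil {
            os.Exit(1)
        }
        grep.Stdin = jqOut
        grep.Stdout = os.Stdout

        // Start all three before waiting, so the stages run
        // concurrently like a real shell pipeline.
        for _, c := range []*exec.Cmd{cat, jq, grep} {
            c.Stderr = os.Stderr
            if err := c.Start(); err != nil {
                os.Exit(1)
            }
        }
        cat.Wait()
        jq.Wait()
        if err := grep.Wait(); err != nil {
            os.Exit(1)
        }
    }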

mananaysiempre

Sure, but why is that an interesting goal? Historically, bash has had very good backwards compatibility, and it’s unlikely that you need new features anyway.

adamc

Right, but wouldn't an app built around creating a container with all the dependencies make more sense?

hezag

Disclaimer: the elephant in the room has nothing to do with ElePHPant, the PHP mascot.

shrx

It should be possible to run bash scripts on any system supported by jart's cosmopolitan library [1], which provides a platform-agnostic bash executable [2].

[1] https://justine.lol/cosmo3/

[2] https://cosmo.zip/pub/cosmos/bin/

mixedmath

I'm confronted with a similar problem frequently. I have a bash script that's slowly growing in complexity, and once bash scripts become sufficiently long, I find editing them later to be very annoying.

So instead, at some point I change the language entirely and write a utility in python/lua/c/whatever other language I want.

As time goes on, my limit for "sufficient complexity" to justify leaving bash and using something like python has dropped radically. Now I follow the rule that as soon as I do something "nontrivial", it should be in a scripting language.

As a side-effect, my bash scripting skills are worse than they once were. And now the scope of what I consider "trivial" is shrinking!

ComputerGuru

My problems with Python are startup time and packaging complexity (either dependency hell or a full-blown venv with pipx/uv). I've been rewriting shell scripts to either Makefiles (crazy, but it works, it's rigorous, and you get free parallelism) or Rust "scripts" [0], depending on their nature (number of outputs, number of command executions, etc.).

Also, using a better shell language can be a huge productivity (and maintenance and sanity) boon, making it much less “write once, read never”. Here’s a repo where I have a mix of fish-shell scripts with some converted to rust scripts [1].

[0]: https://neosmart.net/blog/self-compiling-rust-code/

[1]: https://github.com/mqudsi/ffutils
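
The "self-compiling" trick in [0] belongs to the family of sh/Rust polyglots; one well-known variant (illustrative, and the details may differ from the post) is:

    //usr/bin/env rustc "$0" -o "${0%.rs}" && exec "${0%.rs}" "$@"; exit $?

    // To Rust, the line above is just a comment. To sh, it compiles this
    // file and exec's the resulting binary; `exit $?` stops the shell from
    // ever reading the Rust source if compilation fails.
    fn main() {
        println!("hello from a rust \"script\"");
    }

Mark it executable and run `./hello.rs`; a shebang-less executable falls back to being interpreted by the shell.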

roelschroeven

I've often read that people have a problem with Python's startup time, but that's not at all my experience.

Yes, if you're going to import numpy or pandas or other heavy packages, that can be annoyingly slow.

But we're talking using Python as a bash script alternative here. That means (at least to me) importing things like subprocess, pathlib. In my experience, that doesn't take long to start.

    $ cat helloworld.py
    #!/usr/bin/env python3
    import subprocess
    from pathlib import Path
    print("Hello, world!\n")

    $ time ./helloworld.py
    Hello, world!

    real    0m0.034s
    user    0m0.016s
    sys     0m0.016s

34 milliseconds doesn't seem like a lot of time to me. If you're going to run it in a tight loop then yes, that's going to be annoying, but in interactive use I don't even notice delays as small as that.

As for packaging complexity: when using Python as a bash script alternative, I can mostly get by with using only stuff from the standard library. In that case, packaging is trivial. If I do need other packages then yes, that can be a major nuisance.

drdrey

once you start importing more packages, you easily end up with 100+ ms startup time

fieu

I have exactly the same issue. I maintain a project called discord.sh which sends Discord webhooks via pure Bash (and a little bit of jq and curl). At some point I might switch over to Go or C.

https://github.com/fieu/discord.sh

wiether

First of all, thank you for your work!

I've been using it daily for years now and it does exactly what I expect it to do.

Now I'm a little concerned by the end of your message because it could make its usage a bit trickier...

My main use case is to curl the raw discord.sh file from GitHub in a Dockerfile and put it in /usr/local/bin, so I can run _discord.sh_ anytime I need it. Mostly used for CI images.

The only constraint is to install jq if it's not already installed on the base image.
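
Concretely, the Dockerfile step looks something like this (the base image and branch are illustrative):

    FROM alpine:3.20
    # bash, curl and jq are the script's only dependencies.
    RUN apk add --no-cache bash curl jq \
     && curl -fsSL https://raw.githubusercontent.com/fieu/discord.sh/master/discord.sh \
          -o /usr/local/bin/discord.sh \
     && chmod +x /usr/local/bin/discord.sh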

Switching to Go or C would make the setup much harder, I'm afraid.

fieu

Thank you for using the project!

On the concern that it would be harder to set up: I think it would in fact be easier. You would simply curl the statically compiled Go or C binary into your path, which would alleviate the need for jq or curl to be installed alongside it.

I think the reason I haven’t made the switch yet is I like Bash (even though my script is getting pretty big), and in a way it’s a testament to what’s possible in the language. Projects like https://github.com/acmesh-official/acme.sh really show the power of Bash.

That and I think the project would need a name change, and discord.sh as a name gets the point across better than anything I can think of.

Imustaskforhelp

From what I can tell, it's possible to run this thing without installing Go, Rust, or C at all.

To quote from the page:

With scriptisto you can build your binary in an automatically managed Docker container, without having compilers installed on the host. If you build your binary statically, you will be able to run it on the host. There are a lot of images that help you build static binaries, starting from Alpine offering a MUSL toolchain, to more specialized images.

Find some docker-* templates via scriptisto new command.

Examples: C, Rust. No need to have anything but Docker installed!

Builds in Docker enabled by populating the docker_build config entry, defined as such: […]

Also, I'm watching the video again, because I last viewed it a looong time ago!

benediktwerner

Why would that make the setup harder? If they provide a statically-linked executable, you can just download and run it, without even the need to install jq or anything else. It's not like they'd provide Go code and ask you to compile it yourself. Go isn't Python.

maccard

I agree. My limit is pretty much: once you start branching or looping, it should be in another tool. If that seems low to you, that's the point.

bigstrat2003

I definitely agree. Bash is such an unpleasant language to work with, with so many footguns, that I reach for a language like Python as soon as I'm beyond 10 lines or so.

NoMoreNicksLeft

Yesterday, I had a problem where wget alone could do 98% of what I wanted. I could restrict which links it followed, but the files I needed to retrieve were a url parameter passed in with a header redirect at the end. I spent an hour relearning all the obscure stuff in wget to get that far. The python script is 29 lines, and it turns out I can just target a url that responds with json and dig the final links out of that. Usually though, yeh, everything starts as a bash script.

sammnaser

I don't see what problem this solves, especially in its current form only supporting Unix. Bash scripts are already portable enough across Unix environments; the headaches come from dependency versioning (e.g. Mac ships non-GNU awk, etc.). Except with this, when something breaks, I don't even get to debug bash (which is bad enough), but a binary compiled from Go that was transpiled from bash.

nightowl_games

One of the most critical elements of a shell script is that the source can be easily examined.

Bringing this into your system seems like a huge liability.

The syntax of shell scripts is terrible, but we write it to do simple things easily without needing more external tools.

git-bash on windows is generally good enough to do the kind of things most shell scripts do.

This tool feels like the worst of both worlds: bash syntax + external dependency.

koolba

Does it support eval?

Because then you could compile something like

    #!/usr/bin/env bash
    eval "$@"
And get a statically compiled bash!

Imustaskforhelp

What does this do, mate? (I tried to run it and it failed.)

koolba

It evaluates the arguments to the command as bash commands.

So if you save the file as foo.sh and add it to your PATH, you could run:

    $ foo.sh 'date ; ls ; foo=bar ; echo "Hello $foo"'
Or really anything you'd like, as the argument is treated as a bash script.

NOTE: The original comment had the #! of the shebang backwards (as !#) due to a typo.

mbreese

You’d need to pass in arguments…

All it does is evaluate the expression you pass in as arguments.

    ./evalme.sh echo hello world
The joke being that if you could transpile this evalme.sh script to a static binary, you’d effectively have a static version of bash itself (transpiled to Go).

epic9x

Portability and other constraints I've discovered with the shell have always been a sign that I need to reach for a different tool. Bash is so often a "glue" language whose accessibility and readability are its primary features, right after the immediate utility of whatever it's automating. Writing POSIX-compatible scripts is probably safer, and can be validated with projects like shellcheck.

That said - this is a neat project and I've seen plenty of "enterprise" use-cases where this kind of thing could be useful.

jonathaneunice

Ambitious.

Given the great diversity of shell scripting needed (even if just bash) across different variants of Linux and Unix and different platform versions, debugging the resulting transpiled executables is not something I'd be keen to take on. You'd want to be an expert in the Go ecosystem at minimum, and probably already committed to moving your utility programming into Go.

stabbles

A big advantage of shell scripts is that they're scripts and you can peek in the sources or run with `-x` to see what it does.

berbec

Seeing as they just implemented the if statement[0] two weeks ago, I'm going to hold off for a few more releases before testing.

[0]: https://github.com/yassinebenaid/bunster/pull/88

withinboredom

I think you’d have to say more. It looks quite sane to me.

vander_elst

Are there performance drawbacks, in particular with long pipelines (e.g. something like `cat | grep | sed | bc | paste | ...`)?

ComputerGuru

On the contrary: they're all run in parallel, and the (standard) output goes directly from one process to the next without being buffered by the shell. Unix process-creation overhead is very low compared to other systems; doing the same under, for example, Windows would be more expensive.

But if you have to run n processes, it's much better to run them in a single pipeline like that.

(Source: I’m a shell developer. Fish-shell ftw!)
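
A quick way to see the concurrency and streaming for yourself:

    # head exits after one line; yes is then killed by SIGPIPE,
    # so this returns immediately instead of running forever.
    yes | head -n 1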