Vet is a safety net for the curl | bash pattern
192 comments · July 24, 2025 · pxeger1
jerf
I've also seen really wonderfully-written scripts that, if you read them manually, allow you to change where whatever-it-is gets installed, what features it may have, optional integration with Python environments, or other things like that.
I at least skim all the scripts I download this way before I run them. There are all kinds of reasons to, ranging all the way from "is this malicious" to "does this have options they're not telling me about that I want to use".
A particular example is that I really want to know if you're setting up something that integrates with my distro's package manager or just yolo'ing it somewhere into my user's file system, and if so, where.
inetknght
> I've also seen really wonderfully-written scripts that
I'll take a script that passes `shellcheck ./script.sh` (or, any other static analysis) first. I don't like fixing other people's bugs in their installation scripts.
After that, it's an extra cherry on top to have everything configurable. Things that aren't configurable go into a container and I can configure as needed from there.
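A minimal sketch of that first pass, before anything runs (the URL is a placeholder):

  curl -fsSL https://example.com/install.sh -o install.sh
  shellcheck install.sh   # static analysis first
  less install.sh         # then read it yourself
  bash install.sh         # run only after both checks pass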
sim7c00
right? read before u run. if you can't make sense of it all, don't run. if you can make sense of it all, you're free to refactor it to your own taste :) saves some time usually. as you say, a lot are quite nicely written
groby_b
> read before u run
Lovely sentiment, not applicable when you actually work on something. You read your compiler/linker, your OS, and all libraries you use? Your windowing system? Your web browser? The myriad utilities you need to get your stuff done? And of course, you've read "Reflections on trusting trust" and disassembled the full output of whatever you compile?
The answer is "you haven't", because most of those are too complex for a single person to actually read and fully comprehend.
So the question becomes, how do you extend trust. What makes a shell script untrustworthy, but the executable you or the script install trustworthy?
AndyMcConachie
100% agree. The question of whether I should install lib-X for language-Y using Y's package management system or the distribution's package management system is unresolved.
Diti
It’s solved by Nix. Whichever package manager you choose (nixpkgs or pip or whatever), the derivation should have the same hash in the Nix store.
(Nix isn’t the solution for OP’s problems though – Nix packages are unsigned, so it’s basically backdoor-as-a-service.)
mingus88
My problem with it is that it encourages unsafe behavior.
How many times will a novice user follow that pattern until some jerk on Discord drops a curl|bash and gets hits?
IRC used to be a battlefield for these kinds of tricks, and we have legit projects like Homebrew training users that it's normal to raw-dog arbitrary code directly into your environment.
SkiFire13
What would you consider a safer behaviour for downloading programs from the internet?
mingus88
You are essentially asking what is safer than running arbitrary code from the internet, sight unseen, directly into your shell, and I guess my answer would be any other standard installation method!
The OS usually has guardrails and logging and audits for what is installed but this bypasses it all.
When you look at this from an attackers perspective, it’s heaven.
My mom recently got fooled by a scammer that convinced her to install remote access software. This curl pattern is the exact same vector, and it’s nuts to see it become commonplace
thewebguyd
Use your distro's package manager and repos first and foremost. Flatpak is also a viable alternative to distribution, and if enabled, comes along with some level of sandboxing at least.
"Back in the day" we cloned the source code and compiled ourself instead of distributing binaries & install scripts.
But yeah, the problem around curl | bash isn't the delivery method itself, it's the unsafe user behavior that generally comes along with it. It's the *nix equivalent of downloading an untrusted .exe from the net and running it, and there's no technical solution for educating users to be safe.
Safer behavior IMO would be to continue to encourage the use of immutable distros (Fedora Silverblue and others). RO /, user apps (mostly) sandboxed, and if you do need to run anything untrusted, it happens inside a distrobox container.
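One concrete shape for that last step, as a sketch (assumes distrobox is installed; the image name is illustrative):

  distrobox create --name scratch --image fedora:40   # throwaway environment
  distrobox enter scratch                             # run the untrusted installer in here
  distrobox rm scratch                                # discard the whole environment afterwards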
bawolff
Literally anything else.
Keep in mind that it's possible to detect when someone is piping curl into bash and serve the malicious code only in that case, which makes it very hard to catch.
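The usual countermeasure is to make the fetch indistinguishable from a plain download by saving to a file first and executing only afterwards; a minimal sketch:

  curl -fsSL https://example.com/install.sh -o install.sh   # server sees a complete, passive download
  less install.sh                                           # review offline
  bash install.sh                                           # run exactly what you just read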
codedokode
Software should run in a sandbox. Look at Android for example.
troupo
> My problem with it is that it encourages unsafe behavior.
Then why don't Linux distributions encourage safe behaviour? Why do you still need sudo permissions to install anything on most Linux systems?
> How many times will a novice user follow that pattern until some jerk on discord
I'm not a novice user and I will use this pattern because it's frankly easier and faster, especially when the current distro doesn't have some combination of things installed, or doesn't have certain packages, or...
keyringlight
I think a lot of this comes down to assumptions about the audience and something along the lines of "it's not a problem until it is". It's one aspect I wonder about with migrants from Windows, and all the assumptions or habits they bring with them. Microsoft has been trying to put various safety rails around users for the past 20 years, since they started taking security more seriously with XP, and that gets pushback every time they try to restrict or warn.
ChocolateGod
> Why do you still need sudo permissions to install anything on most Linux systems?
You don't with Flatpak or rootless containers; that's partially why they're being pushed so much.
They don't rely on setuid for it either
aragilar
Because you're making system-wide changes which affect more than just your user?
There are, and have been, distros that install per user, but at some level something needs to manage the hardware and the interfaces to it.
mingus88
I’m not a novice user anymore either, but I care about my security and privacy.
When I see a package from a repo, I have some level of trust. Same with a single binary from GitHub.
When I see a curl|bash I open it up and look at it. Who knows what the heck it is doing. It does not save me any time; in fact, it is a huge waste of time to wade through random shell scripts that follow a dozen different conventions, because shell is ugly.
Yes, you could argue an OS package runs scripts too, ones that are even harder to audit, but those are versioned and signed, and repos have maintainers and all kinds of things that some random HTTP GET will never support.
You don’t care? Cool. Doesn’t mean it’s good or safe or even convenient for me.
umanwizard
> Why do you still need sudo permissions to install anything on most Linux systems
Not guix :)
One of the coolest things about it.
IgorPartola
This exactly. You never know what it will do. Will it simply check that you have Python and virtualenv and install everything into a single directory? Or will it hijack your system by adding trusted remote software repositories? Will it create new users? Open network ports? Install an old version of Java it needs? Replace system binaries for “better” ones? Install Docker?
Operating systems already have standard ways of distributing software to end users. Use them! Sure, maybe it takes you a little extra time to do the one-off task of adding the ability to build Debian packages, RPMs, etc., but at least your software will coexist nicely with everything else. Or if your software is such a prima donna that it needs its own OS image, package it in a Docker container. But really, just stop trying to reinvent the wheel (literally).
stouset
Yes! What I really want from something like this is sandboxing the install process to give me a guaranteed uninstall process.
mjmas
tinycorelinux reinstalls its extensions into a tmpfs every boot, which works nicely. (and you can have different lists of extensions that get loaded)
hsbauauvhabzb
Why would you possibly want to remove my software?
ChocolateGod
This reminded me how, if you wanted to remove something like cPanel back in the day, your only real option was to just reinstall the whole OS.
1vuio0pswjnm7
Many times a day, both in scripts and interactively, I use a small program I refer to as "yy030" that filters URLs from stdin. It's a bit like "urlview" but uses less complicated regex and is faster. There is no third-party software I use that is distributed via "curl|bash", and in practice I do not use curl or bash; however, if I did, I might use yy030 to extract any URLs from install.sh, something like this:
curl https://example.com/install.sh|yy030
or:
curl https://example.com/install.sh > install.sh
yy030 < install.sh
Another filter, "yy073", turns a list of URLs into a simple web page. For example:
curl https://example.com/install.sh|yy030|yy073 > 1.htm
I can then open 1.htm in an HTML reader and select any file for download or processing by any program according to any file associations I choose, somewhat like "urlview". I do not use "fzf" or anything like that. yy030 and yy073 are small static binaries under 50k that compile in about 1 second.
I also have a tiny script that downloads a URL received on stdin. For example, to download the third URL from install.sh to 1.tgz
yy030 < install.sh|sed -n 3p|ftp0 1.tgz
"ftp" means the client is tnftp"0" means stdin
nikisweeting
This is always the beef that I've had with it. Particularly the lack of automatic updates and enforced immutable monotonic public version history. It leads to each program implementing its own non-standard self-updating logic instead of just relying on the system package managers. https://docs.sweeting.me/s/against-curl-sh
shadowgovt
Much of the reason `curl | bash` grew up in the Linux ecosystem is that the "single binary that just runs" approach isn't really feasible (1) because the various distros themselves don't adhere to enough of a standard to support it. Windows and MacOS, being mono-vendor, have a sufficiently standardized configuration that install tooling that just layers a new application into your existing ecosystem is relatively straightforward: they're not worrying about what audio subsystem you installed, or what side of the systemd turf war your distro landed on, or which of three (four? five?) popular desktop environments you installed, or whether your `/dev` directory is fully-populated. There's one answer for the equivalent of all those questions on Mac and Win, so shoving some random binary in there Just Works.
Given the jungle that is the Linux ecosystem, that bash script is doing an awful lot of compatibility verification and alternatives selection to stand up the tool on your machine. And if what you mean is "I'd rather they hand me the binary blob and I just hook it up based on a manifest they also provided..." Most people do not want to do that level of configuration, not when there are two OS ecosystems out there that Just Work. They understandably want their Linux distro to Just Work too.
(1) feasible traditionally. Projects like snap and flatpak take a page from the success Docker has had and bundle the executable with its dependencies so it no longer has to worry about what special snowflake your "home" distro is; it's carrying all the audio / system / whatever dependencies it relies upon with it. Mostly. And at the cost of having all these redundant tech stacks resident on disk and in memory, and only consolidatable if two packages are children of the same parent image.
fouc
I first encountered `curl | bash` in the macOS world, specifically with installing the worst package manager ever, Homebrew, which first came out in 2009. Since then it's spread.
I call it the worst because it doesn't support installing specific versions of libraries, doesn't support downgrading, etc. It's basically hostile and forces you to constantly upgrade everything, which invariably leads to breaking a dependency and wasting time fixing that.
These days I mostly use devbox / nix at the global level and mise (asdf compatible) at the project level.
ryandrake
Ironic, because macOS's package management system is supposed to be the simplest of all! Applications are supposed to just live in /Applications or ~/Applications, and you're supposed to be able to cleanly uninstall them by just deleting their single directory. Not all 3rd party developers seem to have gotten that memo, and you frequently see crappy and unnecessary "installers" in the macOS world.
There may be good or bad reasons why Homebrew can't use the standard /Applications pattern, but did they have to go with "curl | bash"?
tghccxs
Why is homebrew the worst? Do you have a recommendation for something better? I default to homebrew out of inertia but would love to learn more.
JoshTriplett
Statically link a binary with musl, and it'll work on the vast majority of systems.
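For example, with the musl-gcc wrapper (a sketch; assumes musl-tools is installed and hello.c exists):

  musl-gcc -static -O2 -o hello hello.c   # no dynamic loader or glibc version to depend on
  ldd hello                               # prints: not a dynamic executable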
> they're not worrying about what audio subsystem you installed
Some software solves this by autodetecting an appropriate backend, but also, if you use alsa, modern audio systems will intercept that automatically.
> what side of the systemd turf war your distro landed on
Most software shouldn't need to care, but to the extent it does, these days there's systemd and there's "too idiosyncratic to support and unlikely to be a customer". Every major distro picked the former.
> or which of three (four? five?) popular desktop environments you installed
Again, most software shouldn't care. And `curl|bash` doesn't make this any easier.
> or whether your `/dev` directory is fully-populated
You can generally assume the devices you need exist, unless you're loading custom modules, in which case it's the job of your modules to provide the requisite metadata so that this works automatically.
networked
You can also use vipe from moreutils:
curl -sSL https://example.com/install.sh | vipe | sh
This will open the output of the curl command in your editor and let you review and modify it before passing it on to the shell.
If it seems shady, clear the text. vet looks safer. (Edit: It has the diff feature and defaults to not running the script. However, it also doesn't display a new script for review by default.) The advantage of vipe is that you probably have moreutils available in your system's package repositories or already installed.
TZubiri
Huh.
Why not just use the tools separately instead of bringing in a third tool for this?
curl -o script.sh https://example.com/install.sh
cat script.sh
bash script.sh
What a concept
networked
What it comes down to is that people want a one-liner. Telling them they shouldn't use a one-liner doesn't work. Therefore, it is better to provide a safer one-liner.
This assumes that securing `curl | sh` separately from the binaries and packages the script downloads makes sense. I think it does. Theoretically, someone can compromise your site http://example.com with the installation script https://example.com/install.sh but not your binary downloads on GitHub. Reviewing the script lets the user notice that, for example, the download is not coming from the project's GitHub organization.
bawolff
If you are really paranoid you should use cat -v, as otherwise terminal control characters can hide the malicious part of the script.
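A self-contained demo of why (all commands here are harmless):

  printf 'echo PAYLOAD #\recho hello world\n' > demo.sh
  cat demo.sh      # the carriage return makes the terminal show only: echo hello world
  cat -v demo.sh   # shows: echo PAYLOAD #^Mecho hello world
  bash demo.sh     # actually runs the first command and prints: PAYLOAD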
panki27
At this point, the whole world is just a complexity Olympiad
adolph
Same, but less instead of cat so my fingers stay on the keyboard.
vet, vipe, etc. are kind of like kitchen single-taskers, like avocado slicer-scoopers. Surely some people get great value out of them, but a table knife works just fine for me and is useful in many task flows.
I'd get more value out of a cross-platform copy-paster so I'm not skip-stepping in my mind between pbpaste and xclip.
hsbauauvhabzb
Have you tried aliases?
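For the pbpaste/xclip case, a sketch of what such an alias could look like (the alias name is arbitrary; assumes xclip on Linux):

  # one name for "read the clipboard" on both macOS and X11
  if command -v pbpaste >/dev/null 2>&1; then
    alias clippaste='pbpaste'
  else
    alias clippaste='xclip -selection clipboard -o'
  fi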
jjgreen
Splendid idea, especially since "curl | bash" can be detected on the server [1] (which if compromised could serve hostile content to only those who do it)
[1] https://web.archive.org/web/20250622061208/http://idontplayd...
IshKebab
This is one of those theoretical issues that has absolutely no practical implications.
dgl
Here's an example of a phish actually using it: https://abyssdomain.expert/@filippo/114868224898553428 (also note "cat" is potentially another antipattern, less -U or cat -v is what you want).
IshKebab
Sure, so how many people do you think saw `echo "Y3Vy[...]ggJg==" | base64 -d | bash` and thought "hmm, that's suspicious, I'd better check what it's doing... Ah, it's curling another bash script. I'd better see what that script is. Downloads script. Ah I see, a totally legit script. All is well, I'll run the command!"
It's zero. Zero people. Nobody is competent enough to download and review a bash script and also not recognise this obvious scam.
They probably threw the pipe detection in just because they could (and because it's talked about so frequently).
falcor84
Yes, ... but if the server is compromised, they could also just inject malware directly into the binary that it's installing, right? As I see it, at the end of the day you're only safe if you're directly downloading a package whose hash you can confirm via a separate trusted source. Anything else puts you at the mercy of the server you're downloading from.
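That separate-trusted-source check is cheap when the project publishes digests; a minimal sketch (the URL and hash are placeholders):

  curl -fsSLO https://example.com/tool-1.0.tar.gz
  # the expected hash must come from somewhere other than the download server
  echo 'EXPECTED_SHA256  tool-1.0.tar.gz' | sha256sum -c -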
sim7c00
depending on what you run, one method might have more success than another. protections for malicious scripts vs. modified binaries are often different tools, or different components of the same tool, that can have varying degrees of success.
you could also use the script to fingerprint and beacon, to check if the target is worth it and what you might want to inject into said binary, if that's your pick.
still, i think i agree: if you're gonna trust a binary from that server or a script, it's potato potato...
check what you run before you run it, with whatever tools or skills you've got, and hope for the best.
if you go deepest into this rabbithole, you can't trust your hard disk or network card etc., so at some point it's just impossible to do anything. microcode patches, malicious firmwares, whatever.
for pragmatic reasons a line needs to be drawn. if you're paranoid, good luck and don't learn too much about cybersecurity, or you will need to build your own computer :p
baq
we've been curl | bashing software on windows since forever, it was called 'downloading and running an installer' and yes, there was the occasional malware. the solution to that was antivirus software. at this point even the younger hners should see how the wheel of history turns.
meanwhile, everyone everywhere is npm installing and docker running without second thoughts.
inanutshellus
> meanwhile, everyone everywhere is npm installing and docker running without second thoughts.
Well... sometimes like, say, yesterday [1], there's a second thought...
[1] https://www.bleepingcomputer.com/news/security/npm-package-is-with-28m-weekly-downloads-infected-devs-with-malware/
simonw
"the solution to that was antivirus software"
How well did that work out?
thewebguyd
> How well did that work out?
Classic old school antivirus? Not great, but did catch some things.
Modern EDR systems? They work extremely well when properly set up and configured across a fleet of devices as it's looking for behavior and patterns instead of just going off of known malware signatures.
maccard
My last job had a modern endpoint detection system running on it, and because of it my 7-year-old MacBook was as quick as my top-of-the-line i9. I have never seen software destroy a system's performance as much as Carbon Black, CrowdStrike, and Cortex do.
They’re also not exactly risk free - [0]
[0] https://en.m.wikipedia.org/wiki/2024_CrowdStrike-related_IT_...
panki27
If modern EDR systems are so great without relying on classical signature matching, then why are they still doing it? Why do they keep fetching "definition databases" as often as possible?
...because it's the only thing that somewhat works. From my personal experience, the heuristic and "AI-based" approaches lead to so many false positives that it's not even worth pursuing them.
The best AV remains, and will always be, common sense.
esafak
Great. It motivated me to drop-kick Windows and move to Linux and macOS.
nicce
Do you know how deeply integrated anti-virus is on macOS?
bongodongobob
As someone who manages 1000s of devices, great.
Cthulhu_
"everyone else" is using an app store that has (read: should have) vetted and reviewed applications.
tonymet
Windows has had ACLs and security descriptors for 20+ years. Linux is a superuser model.
Windows Store installs (about 75% of installs) are sandboxed and no longer need escalation.
The remaining privileged installs, which prompt with a UAC modal, are guarded by MS Defender for malicious patterns.
Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo can access almost all memory, get raw device access, and write anywhere on disk.
eredengrin
> Comparing sudo <bash script> to any Windows install is 30+ years out of date. sudo can access almost all memory, raw device access, and anywhere on disk.
They didn't say anything about sudo, so assuming global filesystem/memory/device/etc access is not really a fair comparison. Many installers that come as bash scripts don't require root. There are definitely times I examine installer scripts before running them, and sudo is a pretty big determining factor in how much examination an installer will get from me (other factors include the reputation of the project, past personal experience with it, whether I'm running it in a vm or container already, how I feel on the day, etc).
tonymet
Even comparing non-sudo / non-privileged installs, Windows OS & Defender have many more protections. Controlled Folder Access restricts access to most of the home directory, and Defender real-time protection is running during install and run. Windows stores secrets in the TPM, which isn't used on the Linux desktop. The surface area for malicious code is much smaller.
A bash script is only guarded by file-system permissions. All the sensitive content in the home directory is vulnerable. And an embedded sudo would mostly succeed.
ndsipa_pomu
At least with curl and bash, the code is human readable, so it's easy to inspect it as long as you have some basic knowledge of bash scripts.
fragmede
software running in docker's a bit more sandboxed than running outside of it, even if it's not bulletproof.
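A common shape for that, sketched (the image and script name are placeholders):

  # throwaway container: no network, only the current directory mounted
  docker run --rm -it --network none -v "$PWD:/work" -w /work ubuntu:24.04 bash install.sh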
johnfn
Am I missing something? Even if you do `vet foobar-downloader.sh` instead of `curl foobar-downloader.sh | bash`, isn't your next command going to be to execute `foobar` regardless, "blindly trusting" that all the source in the `foobar` repository isn't compromised, etc?
lr0
No, it says that it will show you the script first so you can review it. What I don't get is why you need a program for this; you can simply curl the script to a file, `cat` it, and review it.
simonw
It shows you the installation script but that doesn't help you evaluate if the binary that the script installs is itself safe to run.
dotancohen
Right, this tool does one thing - make it easy to see the script. Another tool does something else. That's kind of the UNIX Philosophy.
geysersam
Yes, but even if you inspect the code of the installation script, the program you just installed might still be compromised/malicious? It doesn't seem more likely that an attacker managed to compromise an installation script than that they managed to compromise the released binary itself.
loloquwowndueo
If you’re just going to run it blindly you don’t need vet. It’s not automatic; it just gives you a chance to review the script before running it.
jrm4
As an old-timer, going through this thread, I must say that there's just not enough hate for the whole Windows/Mac OS inclination to not want to let users be experimental.
Everyone here is sort of caught up in this weird middle ground, where you're expecting an environment that is both safe and experimental -- but the two dominant OSes do EVERYTHING THEY CAN to kill the latter, which, funny enough, can also make the former worse.
Do not forget, for years you have been in a world in which Apple and Microsoft do not want you to have any real power.
Galanwe
The whole point of "curl|bash" is to skip the dependency on package managers and install on a barebones machine. Installing a tool that allows you to install tools without an installation tool is...
chii
but then it needs to come with a curl|bash uninstall tool. Most of these install scripts are just half the story, and the uninstalling part doesn't exist.
ryandrake
Sadly, a great many 3rd party developers don't give a single shit about uninstallation, and won't lift a finger to do it cleanly, correctly and completely. If their installer/packager happens to do it, great, but they're not going to spend development cycles making it wonderful.
thewebguyd
This is why it's so upsetting over in Linux land how so many people are just itching to move away from distro package managers and package maintainers. Curl | bash is everywhere now because "packaging is hard" and devs can't be arsed to actually package their software for the OS they developed it for.
Like, yeah I get it - it's frustrating when xyz software you want isn't in the repos, but (assuming it's open source) you're also welcome to package it up for your distro yourself. We already learned lessons from Windows where installers and "uninstallers" don't respect you or your filesystem. Package managers solved this problem many, many years ago.
jrpear
For those install scripts which allow changing the install prefix (e.g. autoconf projects, though those involve a build step too), I've found GNU Stow to be a good solution to the uninstall situation. Install in `/usr/local/stow` or `~/.local/stow`, then have Stow set up symlinks to the final locations. Then uninstall with `stow --delete`.
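A sketch of that flow for a hypothetical package "foo", using the home-directory variant:

  ./configure --prefix="$HOME/.local/stow/foo"
  make && make install                       # everything lands under ~/.local/stow/foo
  cd ~/.local/stow && stow foo               # symlinks into ~/.local/bin, ~/.local/share, ...
  cd ~/.local/stow && stow --delete foo      # clean uninstall: just removes the links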
ndsipa_pomu
Most of the time I've seen curl|bash, it is to add a repository source to the package manager (debian/ubuntu).
nikisweeting
This is the only sane way to do it; curl|sh should just automate the package manager commands.
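For Debian/Ubuntu that usually boils down to something like this (a sketch with placeholder URLs; the signed-by keyring path follows current apt practice):

  curl -fsSL https://example.com/key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/example.gpg
  echo 'deb [signed-by=/usr/share/keyrings/example.gpg] https://example.com/apt stable main' | sudo tee /etc/apt/sources.list.d/example.list
  sudo apt update && sudo apt install example-tool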
totetsu
Isn't it better to run with firejail or bubblewrap to contain the changes to an overlayfs or whatever and see exactly what the script would do before running it?
smallerfish
Yes but it's clunky as hell. We need something like a curl x.sh | firejail --new, which prompts a) do you want overlayfs? b) do you want network isolation? c) do you want to allow home directory access?
And then, some equivalent for actually running whatever was installed. This would need to introspect what the installation script did and expose new binaries, which of course run inside the sandbox when invoked.
To move past the "| bash" lazy default, people need an easy to remember command. The complexity of the UI of these tools hinders adoption.
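Some of this is already expressible with today's flags; a minimal sketch, assuming firejail with overlay support:

  curl -fsSL https://example.com/install.sh -o install.sh
  firejail --overlay-tmpfs --net=none bash install.sh   # writes land in an overlay discarded on exit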
aezart
I think aside from any safety issues, another reason to prefer a deb or something over curl | bash is that it lets your package manager know what you're installing. It can warn you about unmet dependencies, it knows where all the individual components are installed, etc. When I see a deb I feel more confident that the software will play nicely with other stuff on the system.
mid-kid
In my experience, too many of these curl scripts are a bootstrap for another script or tarball which gets downloaded from somewhere else, and then downloads more stuff. Looking at just the main script tells you nothing. Consider for example the Rust install procedure: it downloads a rustup binary for bootstrapping, which then does the installation procedure and embeds itself into your system, and then downloads the actual compiler; you have no chance of verifying the whole chain, nor of really knowing what it changes until after the fact. Consider also systems like `pip`, which, through packages like puccinialin, do the same inscrutable installation procedure when a Rust-based Python package needs to be compiled.
Suffice it to say, it's best to avoid any of this and do it using the package manager, or manually. I only run scripts like this on systems that I otherwise don't care about, or in throwaway containers.
gchamonlive
I like that vet, which wraps the `curl | bash` pattern, can itself be installed via the `curl | bash` pattern, and that this is documented under https://github.com/vet-run/vet?tab=readme-ov-file#the-trusti....
I don't see it in Arch's AUR though. That would be my preferred install method. Maybe I'd take a look at it later if it's really not available there.
sgc
Given the conversations in this thread about the annoying package management that leads to so much use of curl | bash, I have a question: Which Linux distro is the least annoying in this regard? Specifically, I mean 1) packages are installed in a predictable location (not one of 3-5 legacy locations, or split between directories); 2) configuration files are installed in a predictable location; 3) packages are up to date; 4) there is a large selection of software in the repositories; 5) security is a priority for the distro maintainers; 6) it's not like pulling teeth if I want/need to customize my setup away from the defaults.
I have always used Debian / Ubuntu because I started with my server using them and have wanted to keep the same tech stack between server and desktop, and they have a large repository ecosystem. But the fragmentation and oftentimes byzantine layout is really starting to grind my gears. I sometimes find a couple overlapping packages installed, and it requires research and testing to figure out which one has priority (mainly networking...).
Certainly, there is no perfect answer. I am just trying to discover which ones come close.
GrantMoyer
Try Arch Linux. It hits all your points except maybe 5.
1. It symlinks redundant bin and lib directories to /usr/bin, and its packages don't install anything to /usr/local.
2. You can keep most config files in /etc or $XDG_CONFIG_HOME. Occasionally software doesn't follow the standards, but that's hardly the distro's fault.
3. Arch is bleeding edge
4. Arch repos are pretty big, plus there's the AUR, plus packaging software yourself from source or binaries is practically trivial.
5. Security is not the highest priority over usability. You can configure SELinux and the like, but they're not on by default. See https://wiki.archlinux.org/title/Security.
6. There are few defaults to adhere to on Arch. Users are expected to customize.
Elfener
I switched to NixOS to solve this sort of problem.
Configuration of system-wide things is done in the Nix language in one place.
It also has the most packages of any distro.
And I found packaging to be more approachable than other distros, so if something isn't packaged you can do it properly rather than just curl|bash-ing.
lima
The only distros that are cleanly customizable are declarative ones like NixOS or Guix.
speed_spread
I'm using Fedora Kinoite with distrobox. My development envs are containerized. This makes it easy to prevent tech stacks from interfering and also provides some security because any damage should be contained to the container. It does add initial overhead to the setup but once you get going it's great.
My problem with curl|bash is not that the script might be malicious - the software I'm installing could equally be malicious. It's that it may be written incompetently, or just not with users like me in mind, and so the installation gets done in some broken, brittle, or non-standard way on my system. I'd much rather download a single binary and install it myself in the location I know it belongs in.