
How I install personal versions of programs on Unix

bonoboTP

When I was a beginner, I was surprised (on university machines and clusters) that I couldn't install packages for myself, but that if I downloaded the source, I could compile and use almost anything by putting it in my home dir. I still don't quite get why we have to do this dance and can't just install from apt into home, but whatever. (Downloading the .deb file and unpacking it as an archive is also an option, though that's still quite a bit of manual effort.)
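
For example, a downloaded .deb can be unpacked without root; a rough sketch (the package name and paths are just examples):

    # unpack a .deb into the home directory, no root needed
    dpkg -x some-tool_1.0_amd64.deb "$HOME/some-tool"
    # the package's internal layout is preserved, so binaries land under usr/bin
    export PATH="$HOME/some-tool/usr/bin:$PATH"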

A great tool to manage these home-based installations is GNU Stow. In fact, I've written scripts that just take a tarball, compile it with the typical workflow (autotools or CMake), set the prefix and DESTDIR as needed, and then use Stow to put everything in place. Then if I want to "uninstall" something, I use `stow --delete`. It works well enough for most use cases, like installing a newer GCC or CMake than is available on a cluster, etc.
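
A minimal sketch of that workflow (package name and paths are placeholders):

    # build each package into its own directory under the stow dir
    tar xf foo-1.2.tar.gz && cd foo-1.2
    ./configure --prefix="$HOME/.local/stow/foo-1.2"
    make && make install
    # symlink its bin/, share/, etc. into ~/.local
    cd "$HOME/.local/stow" && stow --target="$HOME/.local" foo-1.2
    # and later, to "uninstall":
    stow --target="$HOME/.local" --delete foo-1.2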

tracnar

That's actually how the Nix package manager works: normal users can 'install', or even build, packages. It works because the installation doesn't really have any side effects beyond using some resources (disk space, network, CPU), which, as you point out, you could use anyway as a normal user.
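
For instance, assuming a multi-user Nix install, an unprivileged user can do something like this (the attribute and flake names depend on how Nix is set up):

    # installs into the user's own profile; no root required
    nix-env -iA nixpkgs.hello
    # or, with the newer flakes-style CLI:
    nix profile install nixpkgs#hello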

chazeon

Yeah, it used to be painful, but now that we have conda / uv, things are in much better shape.
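
For example (the package names are just illustrations):

    # both install entirely under the user's home directory
    uv tool install ruff            # a standalone CLI tool
    conda create -n toolbox cmake   # an environment with a newer CMake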

bonoboTP

That's a much narrower set of software though, compared to the breadth of distro package managers.

moffkalast

Well how else would we gradually drive people to madness if not by asking them their sudo password every five seconds?

nickelpro

Flying in the face of convention for no apparent reason. Use the normal FHS/XDG directories; it makes everything easier:

`/usr/local` for system level, non-managed packages

`~/.local` for user level, non-managed packages

`/opt` for system level "add-on" packages. Typically proprietary upstreams. Zoom, Discord, the proprietary builds of Chrome or VSC, etc.
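
A user-level, non-managed install then looks roughly like this (a sketch; many distros already put ~/.local/bin on PATH):

    ./configure --prefix="$HOME/.local"
    make && make install
    # if the distro doesn't already do it:
    export PATH="$HOME/.local/bin:$PATH"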

WhyNotHugo

Installing to /usr/local is not only simple and convenient—it's also what most build scripts do by default unless you specify otherwise.

For a lot of tools, I find that writing a port can be almost as little work as installing it manually. With the added bonus that the package manager will then track its dependencies, and that I can share the port with other users of the distro.

Alpine makes this particularly easy thanks to the simplicity of its APKBUILD files. The BSDs usually have a relatively simple recipe format too (although not as simple as APKBUILD or PKGBUILD, tbh).
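
For a sense of scale, a stripped-down APKBUILD is little more than a shell fragment along these lines (a hypothetical package; a real one also needs a maintainer line and checksums):

    pkgname=hello
    pkgver=2.12
    pkgrel=0
    pkgdesc="GNU hello"
    url="https://www.gnu.org/software/hello/"
    arch="all"
    license="GPL-3.0-or-later"
    source="https://ftp.gnu.org/gnu/hello/hello-$pkgver.tar.gz"

    build() {
        ./configure --prefix=/usr
        make
    }

    package() {
        make DESTDIR="$pkgdir" install
    }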

setheron

Too bad the search for libraries has to be consolidated. *cough* Nix *cough*

ciupicri

`/usr/local` and `/opt` FTW!

vaylian

You need root permissions to write to these locations.

For personal installations you should use $XDG_DATA_HOME: https://specifications.freedesktop.org/basedir-spec/latest/#... and then create symlinks to the binaries in $HOME/.local/bin.
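
A rough sketch of that layout (the app name is a placeholder):

    # install under the XDG data dir (defaults to ~/.local/share)
    PREFIX="${XDG_DATA_HOME:-$HOME/.local/share}/someapp"
    ./configure --prefix="$PREFIX" && make && make install
    # then expose only the entry point on PATH
    mkdir -p "$HOME/.local/bin"
    ln -s "$PREFIX/bin/someapp" "$HOME/.local/bin/someapp"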

hnlmorg

“should” is massively overstating things here.

For starters, it’s not a UNIX thing. It’s really more a Linux thing. Some Unixes have also adopted it, but not enough of them to argue that someone “should” be using it in reply to an article about Unix more generally.

Secondly, XDG stems from managing desktop applications. Granted, there’s no reason it can’t or shouldn’t be used for CLI tools too. But equally, there isn’t any reason why someone needs to use it. And on headless platforms, it makes zero sense to enforce XDG over any other Unix standard.

Lastly, XDG is surprisingly complicated. Your example of symlinks et al demonstrates just how much additional work you’re doing just to follow XDG here. If your CLI tool doesn’t spew dozens of config files into $HOME, then your recommendation isn’t any better than the one in the article.

I really want to like XDG, but like many things that originated from Linux, it’s far from an elegant solution to a simple problem, and it’s barely supported outside of Linux too. So I can’t blame anyone for deciding not to bother with it for their own personal software on their personal systems.


jmclnx

Yes, this is what I do:

* Linux: "/usr/local" or "/opt/local" depending on distro

* NetBSD: "/usr/local", NetBSD packages go into "/usr/pkg". I wish all other systems adopted pkgsrc :(

* OpenBSD: "/opt/local", OpenBSD packages go into /usr/local

If I'm on a system I do not "own", like AIX at work or SDF, I use $HOME/local.

zahlman

I had understood that /opt is intended to be inherently "local" in the /opt/local sense. What sort of things would you put in /opt but not /opt/local?

jmclnx

/opt/local is for programs/libraries I created, and for smaller programs where I downloaded the source and compiled it myself.

/opt would be for canned items. For example, if for some reason on Linux I want to run the latest Firefox or Thunderbird, I get the pre-built binaries and extract them into /opt. When I did that in the past, everything ended up in /opt/firefox and /opt/thunderbird.

Plus, many proprietary packages like to sit in /opt in their own directories.

zahlman

(Too late to fix the typo now, but I meant "the /usr/local sense".)

paulddraper

Yes, /opt/local is non-standard.

ajsnigrutin

Haven't used ubuntu/debian for years now...

There used to be a "checkinstall" tool: you'd ./configure, make, and then instead of "make install" run "checkinstall". This would do some fakeroot magic, install the package, and create a .deb package for it that could be apt-get removed later.
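
Roughly like so (the --pkgname/--pkgversion flags are optional; checkinstall prompts for any metadata it isn't given):

    ./configure && make
    # instead of `make install`:
    sudo checkinstall --pkgname=foo --pkgversion=1.2
    # later, remove it like any other package:
    sudo apt-get remove foo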

And IIRC, autoconf's ./configure uses /usr/local by default (if you don't specify --prefix).

glitchc

On my own systems, I use /opt for this purpose, including program installs and app images.

zahlman

Yep, I'm in this camp. In particular, since the bin/lib dichotomy doesn't work well for everything[1], I'll put things free-form in /opt as needed, and then symlink executable entry points at /usr/local/bin[2] (a sketch follows the footnotes). This is also more or less what pipx global installs (available since version 1.5.0) do (https://pipx.pypa.io/stable/installation/#-global-argument).

[1] e.g. if you want to install Python applications in isolated virtual environments, the venv has its own bin/ and lib/ internally - you wouldn't want to flatten that structure and have all those applications share top-level /usr/local/bin and /usr/local/lib, because that would involve manually destroying the venv structure and also defeating the entire purpose of them.

[2] although this is not completely smooth if, like me, you want to compile multiple versions of Python from source and then make venvs based off of them - see https://github.com/python/cpython/issues/106045 .
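
A sketch of that pattern (the tool name is a placeholder):

    # one isolated venv per application under /opt
    sudo python3 -m venv /opt/sometool
    sudo /opt/sometool/bin/pip install sometool
    # expose only the entry point system-wide
    sudo ln -s /opt/sometool/bin/sometool /usr/local/bin/sometool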

precompute

Same. /opt/zz for loose binaries, dedicated folders for the rest.

pomatic

The only time my home directory gets cleaned up (it is littered with random binaries) is when I get a new machine... It feels very wrong, but also quite cathartic at the same time!

NikkiA

I install them in ~/p/... and have a script that walks the directories under ~/p/: if one has a 'bin' subdirectory, it adds that to the path; otherwise it adds the directory itself to the path. (It also wipes existing ~/p/ paths before the search, so it can also act as a path updater.)

So after that it's `./configure --prefix=$HOME/p/`, `make install`, and `build-personal-path`.
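
A minimal version of such a script might look like this (my sketch of the idea, not their actual script; it has to be sourced, e.g. `. build-personal-path`, to affect the current shell):

    # drop any existing ~/p/ entries so this also works as an updater
    PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' | grep -v "^$HOME/p/" | paste -sd: -)
    # add each package: its bin/ if it has one, otherwise the dir itself
    for d in "$HOME"/p/*/; do
        if [ -d "${d}bin" ]; then
            PATH="${d}bin:$PATH"
        else
            PATH="${d%/}:$PATH"
        fi
    done
    export PATH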

notme43

I can't speak to their use case, but why not use Gentoo? Portage solves a lot of these problems with slotting, or by letting you roll your own ebuilds in a local repo. It seems cleaner and less effort, unless you need/like the Ubuntu userland.

pbhjpbhj

The lengths I've had to go to (on Windows, and on Ubuntu) to get games installed for multiple users is frankly ridiculous.

I feel like I'm always fighting against the OS just to share files amongst users.

I suppose the answer is probably a deduplicating fs like zfs/btrfs - although, do they dedupe across users? (That feels like an exploit route.)

hnlmorg

> I feel like I'm always fighting against the OS just to share files amongst users.

I’m not really sure what the issue you’re having is from what you’ve shared. But if the files have read permissions for the relevant users, groups, or even everyone, then it should “just work”.

Perhaps the issue was with folder permissions? Hidden ACLs? Or shortcut icons? (The way Linux handles application icons is vastly overcomplicated, in my personal opinion.)
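
For what it’s worth, the classic fix for sharing files is a dedicated group plus group-readable permissions, something like this (group name and path are examples):

    sudo groupadd gamers
    sudo usermod -aG gamers alice
    sudo chgrp -R gamers /opt/games
    # g+rX: group members can read files and traverse directories
    sudo chmod -R g+rX /opt/games
    # setgid on directories so newly created files inherit the group
    sudo find /opt/games -type d -exec chmod g+s {} +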

> I suppose the answer is probably a deduplicating fs like zfs/btrfs - although, do they dedupe across users (that feels like an exploit route).

Why would it be an exploit route? zfs and btrfs are CoW (copy-on-write) file systems, which means that if someone makes an edit to one copy, it wouldn’t change the other.

Maybe you’re thinking of hard links? Those aren’t limited to CoW file systems (they’re supported by NTFS and ext4, for example), and they could run into issues where one person updates someone else’s “copy”.

transfire

GoboLinux has you all beat.

jofla_net

great, now all i need is a tutorial on how to view said page. what's the magical whitelisted useragent? working as designed, i see

mubou

tl;dr: by putting them in the home directory

Everyone does this; it's pretty standard. Using XDG directories (~/.local/bin) is most common nowadays, but hey, you do you.

It does annoy me that cargo has its own bin directory in ~/.cargo, but I'm too lazy to set the env vars to move it.
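
For the record, it appears to be just one or two variables (worth double-checking against your cargo version):

    # relocate all of cargo's data, including its bin dir
    export CARGO_HOME="$HOME/.local/share/cargo"
    # or just redirect where `cargo install` puts binaries:
    export CARGO_INSTALL_ROOT="$HOME/.local"   # binaries go to ~/.local/bin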

ndegruchy

I use the `~/.local/bin` methodology for single binaries, too. Linux, macOS, doesn't matter. I try hard to not make my home directory look like a phonebook when I `ls -alh` it. Sometimes that's easier said than done. I also use `~/.local/opt` as a container for larger applications (like Emacs). While I don't have architecture distinctions, I also don't use multiple versions or platforms regularly.

kccqzy

I actually prefer ~/.cargo/bin because it's clear from the path where I installed it. I only put executables I wrote or compiled myself into ~/.local/bin. Each language-specific package manager has its own dedicated directory. The system package manager naturally installs to /usr/bin.
