Suckless.org: software that sucks less
245 comments · February 21, 2025
jclulow
The thing is, dynamic linking doesn't mean using LD_LIBRARY_PATH or building full-blown OS packages as the only way to find the correct libraries. There's a first-class facility for locating shared libraries: the -R flag, which embeds a RUNPATH/RPATH in the binary. The runtime link editor will use that path to locate shared libraries. You can make your binaries relocatable as well, by using $ORIGIN in the RPATH: this gets expanded at runtime to the path of the executable, so, e.g., $ORIGIN/../lib goes up one level from the bin/ directory containing the executable and into the adjacent lib/ directory for your software.
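For example, a minimal sketch assuming a GCC/clang-style driver and an install tree where bin/ and lib/ are siblings (myapp and libmylib are placeholder names):

    # the literal string $ORIGIN must reach the linker unexpanded, hence the quoting
    cc -o build/bin/myapp main.o -Lbuild/lib -lmylib -Wl,-rpath,'$ORIGIN/../lib'

    # at run time the dynamic linker expands $ORIGIN to the executable's directory,
    # so the binary finds build/lib/libmylib.so wherever the whole tree is moved
    readelf -d build/bin/myapp | grep -iE 'rpath|runpath'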
LD_LIBRARY_PATH is a debugging and software engineering tool, and shouldn't ever be part of shipped software.
fanf2
Build systems often make it a huge pain to get the right rpath. The chrpath tool makes it easy to fix the rpath after libtool got it wrong.
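For instance, a sketch (./myprog is a placeholder; note that chrpath can only write a path no longer than the one the linker originally embedded):

    chrpath -l ./myprog                    # show the current RPATH/RUNPATH
    chrpath -r '$ORIGIN/../lib' ./myprog   # rewrite it in place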
kazinator
If you know that the executables will be in a certain directory, such that ../lib relative to that is where the libraries are found, then you just make the rpath '$ORIGIN/../lib'. That's it; no guesswork.
Build systems that make it a huge pain to calculate an elaborate rpath, e.g. knowing what the sysroot will translate to on the actual target system or whatever, are just doing counterproductive nonsense.
By the way, I seem to recall that Yocto uses chrpath in order to get its own tools to reference its own copy of glibc that it provides (for itself, in the build environment).
ttyprintk
The suckless tools are recompile-to-reconfigure, so they don’t hit the length limitation of chrpath or fatal bugs in patchelf.
vlovich123
And the main advantage of doing all that work vs statically linking is? Don’t get me wrong - dynamically linking for dev builds makes a lot of sense to cut down on relink times. But I just don’t see it for distribution since doing that RPath work reduces the main argument for dynamic linking (i.e. the OS can patch the vulnerability for all installed packages without waiting for each to release).
kazinator
All what work? Just adding -R '$ORIGIN' when linking the program will cause it to look for shared libs in its own directory.
The program still looks for libc in the system directory.
The argument for this is that if you ship software that features multiple executables that share code, they can have it as shared libraries, and easily find it relative to their own location, just like on Windows.
Brian_K_White
They didn't claim it was better than static. That doesn't make it worse than static either. They are two different answers for two different problems.
It's simply that when you do want external but bundled neighboring libs, there is a good way to do it.
jimmaswell
There's definitely value in the static approach in some cases, but there are some downsides e.g. your utility will need to be recompiled and updated if a security vulnerability is discovered in one of those libraries. You also miss out on free bugfixes without recompiling.
If you require a library, you can specify it as a dependency in your dpkg/pacman/portage/whatever manifest and the system should take care of making it available. You shouldn't need to write custom scripts that trawl around for the library. Another approach could be to give your users a "make install" that sticks the libraries somewhere in /opt and adds it as the lowest priority ld_library_path as a last resort, maybe?
saidinesh5
> e.g. your utility will need to be recompiled and updated if a security vulnerability is discovered in one of those libraries. You also miss out on free bugfixes without recompiling.
This was the biggest pain point in deploying *application software* on Linux though. Distributions with different release cycles provide different versions of various libraries and expect your program to work with all of those combinations. The big, famous libraries like Qt and GTK might follow proper versioning, but for the smaller libraries from distro packages there is no such guarantee. Half of them don't even use semantic versioning.
Imagine distros swapping out the libraries you've actually tested your code against with their own versions, for "security fixes" or whatever the reason. That causes more problems than it fixes.
The custom start-up script was there to find the same XML library I'd used, inside the tarball I packaged the application in. Users could then extract that tarball wherever they needed - including /opt - run the script to start my application, and it ran as it should. IIRC we even used rpath for this.
viraptor
> Half of them don't even use semantic versioning.
This is a red herring. Distros existed before semantic versioning was defined and had to deal with those issues for ages. When packaging, you check for the behaviour changes in the package and its dependencies. The version numbers are a tiny indicator, but mostly meaningless.
wakawaka28
>Imagine distros swapping out the libraries you've actually tested out your code with with their libraries for "security fixes" or whatever the reason. That causes more problems than it fixes.
I don't believe that it causes more problems than it fixes. It's just that you didn't notice the problems being silently fixed!
There are issues related to different distros packaging different versions of libraries. But that's just an issue with trying to support different distros and/or their updates. There are tradeoffs with everything. Dynamic linking is more appropriate for things that are part of a distro, because it creates less turnover of packages when things update.
sidewndr46
I often refer to semantic versioning as "semanticless versioning". Everyone disagrees about what kind of change warrants incrementing which part of the version number.
EfficientDude
This is never ever a problem unless a developer insists on always using the most cutting-edge version of a library. There's no law that says you have to use the bleeding edge of every library when you make a program. Another issue these days is that library maintainers often add new features or delete old features without incrementing the major version number. In the olden days it was assumed that minor versions were for bug fixes that don't break compatibility, and when you wanted to change how the library works in a major way, you increment the major number.
Now a lot of stuff is continuously buggified, so there is no concept of stable and in-progress.
butterisgood
"Free bug fixes without compiling". I think YMMV.
It depends a lot on ABI/API stability and actual modularity of ... components. There's not always a guarantee of that.
Shared libraries add a lot of complexity to a system for the assumption that people can actually build modular code well in any language that can create a shared library. Sometimes you have to recompile because, while a #define might still exist, its value may have changed between versions, and chaos can ensue - including new, unexpected bugs - for free!
dgfitz
My current day job has probably 60 apps that depend on one shared library.
Static linking has its place, no doubt, but it should not be the norm.
delta_p_delta_x
Fun fact... Many Windows programs generally do some sort of 'hybrid static linking' (this is my own terminology) where programs are distributed with the `.dll` libraries just next to the binary. There is no concept of RPATH on Windows—the loader looks for dynamically-linked libraries in a fixed set of locations which includes the binary's directory.
Windows programs generally do link dynamically to core Windows libraries—which users are never expected to mess with anyway—and the C and C++ runtimes, but even these can be statically linked against with `cl.exe /MT`. Some programs even distribute newer versions of the C/C++ runtimes; that's where the famous Visual C++ Redistributables come from.
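As a sketch (app.c is a placeholder; both forms compile and link in one step):

    cl /MT app.c
    cl /MD app.c

The first folds the C/C++ runtime into app.exe; the second leaves it to the redistributable runtime DLLs.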
I agree, though—static linkage should be the default for end-user programs. I long for a time when Linux gets 'libc' redistributables and I can compile for an old-hat CentOS 6 distribution on a modern, updated, rolling-release Arch Linux without faffing with a Docker container.
torginus
Imo this is a much saner solution for a system that supports precompiled applications.
Every time I tried to get a third-party binary app running on Linux, I discovered that the vendor shipped half their dependencies as blobs and relied on the system for the other half - an incredibly brittle arrangement that breaks constantly.
The entry point usually is a script that sets LD_LIBRARY_PATH and then calls into the executable.
int_19h
> There is no concept of RPATH on Windows—the loader looks for dynamically-linked libraries in a fixed set of locations which includes the binary's directory.
This is not true - you can control the DLL path via manifests. There's also a "known DLLs" list in the registry which can globally redirect basically any DLL system-wide.
kazinator
Many programs link various libraries of theirs statically, but then dynamically link to the system C library.
For instance, GNU Make uses some of GNULib.
https://git.savannah.gnu.org/cgit/make.git (gl subdirectory)
sunshowers
For Rust, cargo zigbuild lets you compile against arbitrary old glibc. Probably can get that working with C as well.
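Roughly, a sketch of the idea (the version suffix on the target triple is what pins the glibc baseline; hello.c is a placeholder):

    # Rust: build against glibc 2.17 symbols regardless of the host distro
    cargo zigbuild --target x86_64-unknown-linux-gnu.2.17

    # the same trick for plain C, via zig's bundled clang
    zig cc -target x86_64-linux-gnu.2.17 -o hello hello.c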
GoblinSlayer
Compiling for CentOS 6 is a linking problem, and any linker lets you link with whatever you want; it's a matter of running the linker with the right arguments.
forrestthewoods
> I long for a time when Linux gets 'libc' redistributables and I can compile for an old-hat CentOS 6 distribution on a modern, updated, rolling-release Arch Linux without faffing with a Docker container.
The Linux linking model is so bad. So extremely very bad. Build systems should never rely on whatever garbage happens to be around locally. The glibc devs should be ashamed.
aqueueaqueue
There was a time when 200Mb was a big HDD. Can forgive for that reason.
torstenvl
It's true. I keep an Ubuntu 16 LTS VM around for making binary redistributables on Linux. It's the only way to ensure compatibility.
anacrolix
Noice
o11c
The problem is that static libraries are actually more likely to break across time in practice, since "the system" is more than just "syscalls".
For example, in places where a filesystem-related "standard" has changed, I have old static binaries that fail to start entirely, whereas the same-dated dynamic binaries just need you to bother to actually install their dependencies.
I am convinced that every argument in favor of static linking is because they don't know how to package-manager.
torginus
Nobody knows how to use the package manager. What happens in practice is that every single program uses the package versions the distro happens to ship with.
If you want a newer version, too bad - your OS doesn't ship it, so better luck in the next release. Or you can set up a private repo and either ship a binary that has the dependencies included (shipping half the userland with your audio player), or package the newer version of the library, which will unwittingly break half your system, if not today then surely at the next distro upgrade.
It speaks volumes about Linux package management woes that no vendor ships anything analogous to brew or chocolatey.
gugagore
I thought it spoke to windows and macOS woes that package management was a third-party concern.
What is the gap between e.g. `apt` and Homebrew?
7bit
> I am convinced that every argument in favor of static linking is because they don't know how to package-manager.
Which would be a fair reason. People who like to build things might just not want to also learn how to package stuff.
kelnos
It's not all-or-nothing, though. If you need a dependency that isn't widely available on distros, then statically link it. It's fine. No big deal. But if you actually care about being a responsible maintainer, make sure you follow new releases of that dependency, and release new versions of your app (with the new version of the dependency) if there's an important bug fix to it.
If you're linking to libX11 or libgtk or something else that's common, rely on the distro providing it.
I really don't get all the anti-shared-library sentiment. I've been using and developing software for Linux for a good 25 years now, and yes, I've certainly had issues with library version mismatches, but a) they were never all that difficult to solve, and b) I think the last time I had an issue was more than a decade ago.
While I think my experience is probably fairly typical, I won't deny that others can have worse experiences than I do. But I'm still not convinced statically linking everything (or even a hybrid approach where static linking is more common) would be an overall improvement.
int_19h
The sane thing here is to maintain a clear notion of what the "OS" is versus the "app", and use dynamic linking on that boundary, but not elsewhere. Which is more or less how Windows and macOS do things.
the-lazy-guy
Unrelated to suckless, there's a project (confusingly) named stal/IX: https://stal-ix.github.io/
It is also a statically linked Linux distribution. But its core idea is reproducible nix-style builds (including installing as many different versions/build configurations of any package), with less PL fluff (no fancy functional language - just some ugly jinja2/shell-style build descriptions; which in practice work amazingly well, because the underlying package/dependency model is very solid - https://stal-ix.github.io/IX.html).
It is very opinionated (just see this - https://stal-ix.github.io/STALIX.html), and a bit rough, but I was able to run it in VMs successfully. It would be amazing if it stabilizes one day.
snarfy
It's a trade-off.
Consider how dynamic linking libc works when a critical security bug is found and fixed. To update your system you update libc.so.
If it were statically linked, you need to update your whole distribution.
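You can see that boundary on any dynamically linked binary, e.g.:

    # every shared object listed here, libc.so.6 included, is resolved fresh at
    # program start, so one fixed libc benefits all of these programs at once
    ldd /bin/ls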
fc417fc802
I would be more accepting of the trade-off if it wasn't so brittle in practice.
Nix is much closer to a "good" dynamic linking solution IMO except that it makes it overly difficult to swap things out at runtime. I appreciate that the default is reproducible and guaranteed to correspond to the hash but sometimes I want to override that at runtime for various reasons. (It's possible this has changed since I last played with that tooling. It's been awhile.)
beefsack
I generally find with Nix and NixOS that I'm able to just use a dev shell to create little custom environments at runtime as needed. Another option is `mkOutOfStoreSymlink` if you want some dynamic config for some GUI you are running.
Depends on what you are trying to achieve though.
aqueueaqueue
The article touches on that. If "your whole distribution" is comparable to a Chrome in size? That gets updated.
skywal_l
Linus Torvalds agrees with you: https://youtu.be/Pzl1B7nB9Kc?feature=shared&t=65
Levitating
Also the stali approach to the filesystem hierarchy is inspiring: https://sta.li/filesystem/
> AppImages
AppImages require a large number of (obsolete) dependencies to run, making their portability practically worthless. Newer immutable distros like Aeon don't ship the necessary packages to run an AppImage.
lugu
It's been around ten years that my desktop has barely changed except a few pixels, thanks to dwm and dmenu. I am exaggerating a bit, but I love the stability that minimalism brings. If only they could make a pdf viewer...
homebrewer
zathura is close enough — it's very minimalistic and supports everything under the sun: pdf, djvu, comics, epub...
vq
Are you aware of Sioyek[0]? It's a PDF viewer with a fairly minimal UI and a focus on keyboard interaction.
[0]: https://sioyek.info
naasking
SumatraPDF: https://www.sumatrapdfreader.org/free-pdf-reader
No frills, super fast and small. Been using it on Windows for years.
evanjrowley
I recently downloaded SumatraPDF to open a PDF on Windows XP. Glad they still host a release that works on it. :)
Koshkin
Need wine (or something stronger) to run it on Linux.
SoftTalker
Thanks, I love finding little nuggets like this here.
BoingBoomTschak
Eh, it vendors an old version of mupdf. Very bad idea, considering that it's a C program/library handling a notoriously complex format often shared on the Internet.
Personally, I just use mupdf (which I sandbox through bubblewrap).
flubbergusto
Have any issues with mupdf? I find it suckless.
zamadatix
To the currently dead sibling comment by kjrfghslkdjfl (on the off chance they get to see this): mupdf is extremely cross platform. I felt that should have at least been mentioned before your comment reached being dead over that misunderstanding.
andrewflnr
That's, uh, not why the comment is dead.
fc417fc802
Seconding this. It's my default choice for many file formats, not just pdf. However it doesn't support jpegxl so in those cases I use Okular (very much not minimal but quite usable).
Imustaskforhelp
Yes, okular is just brilliant for PDFs, I love okular.
kjrfghslkdjfl
That's android only. He's talking about desktop.
I like it too though.
unclad5968
> Do not use for loop initial declarations
> Variadic macros are acceptable, but remember
Maybe my brain is too smooth, but I don't understand how for(int i = 0...) is too clever but variadic macros are not. That makes no sense to me.
astrobe_
That's a bad coding style document. There's no rationale given, except for a bunch of references at the top, which are clearly arguments from authority.
I think the no "loop initial declarations" is for consistency with "all declarations at the top". Other coding style guides favor "declarations as close as possible to first use", including guidelines for mission critical systems (if you resort to argument from authority I have some too...) [1].
As much as I like Suckless, this section is just pet peeves that can safely be ignored; unless you submit a patch to a project that aligns with it.
guenthert
> There's no rationale given
True, and it would indeed be desirable that it were. Here I go out on a limb and assume it's because someone got bitten by attempting to use the loop index outside the loop (common for search operations) while declaring the index both within and outside the loop - a bug (gcc and clang can warn about it using -Wshadow, which sadly isn't part of -Wall) which might easily occur when multiple people edit the code over a longer time-span.
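A sketch of the bug being described (the find() helper is hypothetical), and the flag that catches it:

    /* shadow.c - compile with "cc -Wall -c shadow.c": -Wall does not include
     * -Wshadow; adding -Wshadow warns that the inner i hides the outer one. */
    int find(const int *a, int n, int want)
    {
        int i;                        /* meant to carry the result out of the loop */
        for (int i = 0; i < n; i++)   /* inner declaration shadows the outer i */
            if (a[i] == want)
                break;                /* the outer i is never written */
        return i;                     /* bug: returns the uninitialized outer i */
    }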
celrod
I use -Wshadow personally. I highly recommend it. I think code that violates it (even if correct) is harder to understand.
flykespice
Their rationale is very inconsistent: it tells you to use C99, but you must place declarations at the top, and you can't use C++-style // comments (introduced in C99 as well).
Why not force C90 altogether then?
zzo38computer
At least in my opinion, "for loop initial declaration" is especially useful in macros (although many programs will never need such macros, they are especially useful when you do have them).
An example of such a macro is the following macro (the loop and the variable declaration will both be optimized out by the compiler; I have tested this):
    #define lpt_document() for(int lpt_document_=lpt_begin();lpt_document_;lpt_document_=(lpt_end(),0))
Another macro (which is part of an immediate mode UI implementation) is:

    #define win_form(xxx) for(win_memo win_mem=win_begin_();;win_step_(&win_mem,xxx))
sitkack
One can embrace minimalism without all of this https://news.ycombinator.com/item?id=25751853
indrora
Or the self-reich-ousness of folks like Suckless[0], like telling the OG author of radare (pancake, who is a very competent malware reverse engineer) that he's an idiot [1].
[0] https://tilde.team/~ben/suckmore/ [1] https://dev.suckless.narkive.com/mEex8nff/cannot-run-st#post...
Y_Y
I had a look at your links, and I think the assertion that the people involved in Suckless are National Socialists is insufficiently supported. I don't know these people outside their software, but if you're going to accuse someone of being part of some reviled political group I think you should have something stronger than "they went on a hike carrying torches around the time some extremist group had a march with torches".
sunshowers
Personally, I would avoid even the appearance of impropriety by simply never going on a hike carrying around torches. Headlamps are better anyway.
spacechild1
Let's have a look at the pictures in https://suckless.org/conferences/2017/
Their t-shirts in the first picture are based on a popular neo-nazi motif: https://media1.faz.net/ppmedia/aktuell/3215685722/1.10213069... (HKNKRZ = Hakenkreuz = swastika)
The second picture could be easily mistaken for a typical neo-nazi torchlight march. Keep in mind that suckless are German and Germany has a certain history with torchlight marches... Not to mention the guy with the "SCKLSS" t-shirt and camo pants.
A member once threw around the term "cultural marxism" which is a nazi term ("Kulturbolschewismus"). Again, these guys are German and certainly know the historic context.
One member even used to send e-mails from a server with the hostname "Wolfsschanze" (https://en.m.wikipedia.org/wiki/Wolf%27s_Lair).
To me it is obvious that they play with neo-nazi imagery. Now, maybe they are not really nazis and just immature edgelords, but the optics are still terrible. I'm from Austria, and after seeing these pictures I wouldn't touch the project with a ten foot pole.
tomtomtom777
Suckless has a beautiful coding philosophy and I wish all software was written with this in mind, but surely a window manager and X-menu aren't really the best showcases? These aren't the types of programs where complexity is the biggest enemy.
I'm not claiming I could write these tools as simply as they did, but surely the importance of these paradigms arises when actually complicated software is needed?
brianmurphy
You have to see what already exists in X11-land before judging that simple wasn't hard to do. (e.g. xterm)
imiric
The drama around this community is silly. I use these tools because I absolutely love their philosophy on software, and software alone. I couldn't care less what the authors' personal beliefs and political leanings are, or who they offended on IRC or social media.
I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways. Most annoying of all is that I can't do anything about it. I'm not going to spend days of my life digging into their source code to make the changes I want, nor spend time pestering the maintainers to make the changes for me.
So I ended up back at the st fork I've been using for years, which sucks... less. :) It consists of... 4,765 SLOC, of which I only understand a few hundred, but that's enough for my needs. I haven't touched the code in nearly 5 years, and the binary is that old too. I hope it compiles today, but I'm not too worried if it doesn't. This program has been stable and bug-free AFAICT for that long. I can't say that about any other program I use on a daily basis. Hmm, I suppose the GNU coreutils can be included there as well. But they also share a similar Unixy philosophy.
So, is this philosophy perfect? Far from it. But it certainly comes closer than any other approach at building reliable software. I've found that keeping complexity at bay is the most difficult, yet most crucial thing.[1]
Philpax
> I couldn't care less what the authors personal beliefs and political leanings are, or who they offended on IRC or social media.
I just don't really want to use or support software by people who, at best, think it's appropriate to joke about an ideology that wants me [0] dead, or at worst, actively subscribe to that ideology. There are some things that I'm not willing to look past.
[0]: non-white, non-straight, left of the political spectrum
ninjin
Having been on their mailing lists and IRC channel for over four years, I have seen maybe a handful of "edgy" comments that made me go "sigh" or "Ew!" and they are generally from two or so people that are on the fringe of the community. Yes, it is possible that this is some sort of elaborate trick, but they sure give the appearance of mostly a bunch of helpful folks that care deeply about their own code and projects while caring very little to police people and rather just ignore them.
Oh, there are also the edgelords occasionally lured in by Luke Smith's videos (who has never set foot in the community or contributed code while I have been around, and I am not sure if he ever did) who usually get laughed out of IRC after delivering an unhinged chanspeak rant.
kelnos
> ... and they are generally from two or so people that are on the fringe of the community.
How do the people at the center of the community react to this, though? If they are not condemning that sort of behavior, and possibly kicking people like that out of the community, then they are complicit at best, and tacitly approve at worst.
imiric
I get that, they're probably assholes. But if I limited my usage of software and consumption of art to only those not authored by assholes, I would probably have a less enjoyable and more boring existence. Not to mention exhausting.
I think it's possible to separate the art from the artist, and enjoy the art without being concerned about the artist's beliefs, and whether I disagree with them.
Also, you don't necessarily support them by using their software. The software is free to use by anyone, and you never have to interact with the authors in any way. Software is an amorphous entity. Unless they're using it to spread their personal beliefs, it shouldn't matter what that is. By choosing not to use free software, you're only depriving yourself.
But this is your own choice, of course, and I'm not saying it's wrong. Just offering a different perspective.
kelnos
> I get that, they're probably assholes.
I think you're setting up a too-general argument here. "Asshole" can encompass a huge variety of things, from "actively genocidal" to just "kinda annoying", and everything in between.
I'm pretty "mainstream" demographically (white, straight, cisgender), but if the developer of software I use said something like "all atheists should be shot", I would immediately stop using their software and find something else.
> By choosing not to use free software, you're only depriving yourself.
Sometimes making a statement means enduring some sort of disadvantage or hardship in return. In fact I think that's part of the point. If it doesn't cost me anything to stop supporting something I find offensive, then my (admittedly mild) protest doesn't really have much substance behind it.
In this particular case, there's nothing that the suckless folks have built that doesn't have alternatives that are also free software, so I don't think anyone who refuses to use suckless software is depriving themselves of free software.
Ferret7446
That would indeed be concerning if true; do you have a reference? Unfortunately, the vast majority of such claims I've found to be misconstrued which makes me skeptical (the boy keeps crying wolf).
econ
Then, when people are no longer allowed to talk about what (stupid shit) they believe, jokes will only be made behind people's backs.
Who should be on the committee that decides what we may talk and joke about, and how should the committee inform itself?
The new forbidden topics will be chosen from the set of topics people talk about, which gets smaller, stranger and more political. What people secretly believe will be much closer to the secret dialog while the public dialog floats away.
That people are saying things is the least of your concern.
Fascinating perspective though. It is much easier if one is more secure, talks easily or has a more mundane world view. Not something one can choose. Thicker skin, however.
Also interesting: if one didn't like the people running the lunchroom at the end of the street, or didn't like the visitors, you used to be able to go to some other place. Today they are all part of the same chain. We've lost a lot of freedom there.
GoatInGrey
It's honestly distressing how all of these violent ideologies are growing in popularity. Nazism, socialism, and whatever else should be thrown on the pile. If you're a queer black "executive" like myself, there are a lot of people that believe the world would be a better place with you dead.
It's getting to the point that I'm considering keeping myself ignorant of developers' beliefs for my own mental wellness.
T-Winsnes
Huh, socialism as a violent ideology was not on my bingo card for 2025
ihsw
Absence of obsession with identity politics is not the same as wanting you dead. I don't want my tax dollars funding your personal lifestyle choices, just as you wouldn't want yours funding mine.
Barrin92
> I'm not going to spend days of my life digging into their source code to make the changes I want
This is an odd thing to bring up though because that's quite literally the only way to make any changes to suckless software, editing source code in C.
The entire philosophy behind it is performative in many ways. There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
I tried their window manager out once and the only way to add some functionality through plugins is to apply source code patches, but there's no guarantee that the order doesn't mess things up, so you basically end up manually stitching pieces of code together for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
imiric
> This is an odd thing to bring up though because that's quite literally the only way to make any changes to suckless software, editing source code in C.
You're ignoring the part where the tools are often a fraction of the size and complexity of similar tools. I can go through a 5K SLOC program and understand it relatively quickly, even if I'm unfamiliar with the programming language or APIs. I can't do the same for programs 10 or 100x that size. The code is also well structured and documented IME, so changing it is not that difficult.
In practice, once you configure the program to your liking, you rarely have to recompile it again. Like I said, I'm using a 5 year old st binary that still works exactly how I want it to.
Maintaining a set of patches is usually not a major problem either. The patches are often small, and conflicts are rare, but easily fixable. Again, in my experience, which will likely be different from yours. Our requirements for how we want the software to work will naturally be different.
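In practice the whole loop is short. A sketch (the patch filename is just an example; install typically wants root):

    git clone https://git.suckless.org/dwm
    cd dwm
    patch -p1 < ../dwm-somepatch.diff   # apply a downloaded patch
    $EDITOR config.h                    # tweak keys, colours, rules
    make clean install                  # rebuild and install the single binary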
The madness you describe to me sounds like a feature. It intentionally makes it difficult to add a bunch of functionality to the software, which is also what keeps it simple.
Avshalom
>I can go through a 5K SLOC program and understand it relatively quickly
you already said in your first post that you can't understand it.
kombine
I have a small config for Kitty that does not require any patching and recompilation and can survive Kitty updates for years to come. I don't understand why I need to study the source code of my terminal emulator.
spit2wind
Has anyone created a walkthrough of the code?
I've looked before and not found anything, but it's a niche thing on an already niche thing.
zzo38computer
Sometimes, I have had to change software (although not from suckless, since I do not use any of their software) by modifying and recompiling it, to do what I wanted.
> There's nothing simple or "unbloated" about having to recompile a piece of software every time you want to make what should really be a runtime user configuration, and it makes an entire compiler toolchain effectively a dependency for even the most trivial config change.
It is true, but depending on the software, sometimes this is acceptable. (Some of the internet server software that I wrote (such as scorpiond) is configured in this way, in order to take advantage of compiler optimizations.)
For some other programs, some things will have to be configured at compile time (mostly things that probably don't need to be changed after making a package of this program in some package manager), although most things can be configured at run time and do not need to be configured at compile time.
> I tried their window manager out once and the only way to add some functionality through plugins is to apply source code patches, but there's no guarantee that the order doesn't mess things up, so you basically end up manually stitching pieces of code together for functionality that is virtually unrelated. It's actual madness from a complexity standpoint.
This is a valid criticism, and is why I don't do that for my own software. However, it is sometimes useful to make your own modifications to existing programs, but just applying sets of patches that do not necessarily match is the madness that you describe.
marrs
Not when you want to write your own patches, it isn't. I think the design of DWM could be improved to make patching easier, but it was a revelation to me when I discovered it: for the first time in my life, I was using open source software that was actually designed to be extended.
klaussilveira
> I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways.
After moving to a gigantic monitor and gigantic resolutions, my poor st fork was suffering. zutty was a great replacement for me: https://git.hq.sig7.se/zutty.git
imiric
Ah, interesting. Thanks for sharing!
9283409232
Whenever suckless comes up, I see more people saying "the drama is silly" than I do actual drama. I don't even know what drama people are talking about.
indrora
Short form:
* One of the lead devs' laptops is named after Hitler's hideout in the forest
* Their 2017 conference had a torchwalk that was a staple of Nazi youth camping (and heavily encouraged by the SS as a nationalism thing)
* Multiple of the core devs are just assholes to people on and offline.
* Most of the suckless philosophy is "It does barely what it needs to and it was built by us, so it's superior to what anyone else has written". A lot of it shows in dwm, dmenu, etc.
timewizard
I took dwm and made it my own 15 years ago. Hasn't really changed since. From my point of view they're not wrong.
Open Source Software used to be about individuality.
They're not actively campaigning to remove other window managers are they? That seems to be a feature of "community software" for whatever reason.
ivirshup
Not defending them, the Hitler laptop thing seems bad, but within Germany torchwalks are pretty normal and not Nazi associated. For example, there was one as part of a ceremony honoring Merkel as she left office.
asveikau
I like dwm, and dwm being pro-fascist would be disappointing to me.
At risk of putting myself out there, it shows how crazy things have gotten when neo-nazi sympathies are described as "just some political beliefs".
slithytoves
FRIGN is not a lead dev, he is simply a dev.
dijit
> One of the lead devs' laptops is named after Hitler's hideout in the forest
“The Wolf’s Lair” (but in German) sounds like it could plausibly be selected coincidentally.
There are a lot of IRC nerds who use wolves as part of a moniker, “Canis”, “Lupine” & “Aardwolf” spring immediately to mind.
deadbabe
In order to write highly opinionated software you have to be some kind of an asshole, otherwise other people wear you down with their own opinions.
imiric
I'm not sure how you missed it, since it comes up in practically every Suckless-related thread[1], including this one. The drama is mostly in social media and IRC circles, though it tends to spill over here as well.
[1]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
zzo38computer
> I couldn't care less what the authors personal beliefs and political leanings are, or who they offended on IRC or social media.
I agree. Such things are not relevant when considering whether to use their formats and programs and stuff like that.
What is relevant is their software and related stuff like that, and not their political leanings, etc. I do not agree with all of their ideas about computer software, although I agree with some of them.
Like them, I also don't like systemd, so I agree with them about not liking systemd.
I do use farbfeld, although I wrote all of the software for doing so by myself rather than using their software (although it should be interoperable with their software, and any other software that supports farbfeld (such as ImageMagick)). Also, I do not use farbfeld for disk files, but only with pipes. (My farbfeld utilities package also includes the only XPM encoder/decoder that I know of that supports some of the uncommon features, that most XPM encoders/decoders I know of are not compatible with or are not fully capable of.)
I may consider libzahl if I have a use for big integers, although I also might not need it. (I had written some dealing with big integers before; one program I wrote (asn1.c) that deals with big integers only converts between base 100 and base 128 in order to convert OIDs between text and binary format.)
However, I would also want software that can better handle non-Unicode text (so it is one of the things I try to write), which many programs don't do properly. This should mean that any code that deals with Unicode (if any) is bypassed when non-Unicode is used. Some programs should not need to support Unicode at all (including some that should not need to care about character encoding at all, or that do not deal with text, etc). (I had considered writing my own terminal emulator for this and other reasons.)
opan
I would highly recommend the "foot" terminal if you are on Wayland and can use it. I used urxvt, termite, and briefly alacritty in the past.
Imustaskforhelp
I use foot with a Catppuccin theme, oh it's so nice and cozy.
I use pure zsh with some plugins manually installed, plus the Luke Smith dotfiles, and the history part sometimes takes a while to load, but foot is just fast.
flubbergusto
> I recently spent a few hours evaluating different terminals. I went back to urxvt, tried Alacritty again, gave Ghostty a try, and spent quite some time configuring Kitty. After all this I found that they all suck in different ways
Last time I did the same (days not hours tho lol) I was somewhat surprised to find myself landing on xterm. After resolving a couple of gotchas (reliable font-resizing is somewhat esoteric; neovim needs `XTERM=''`; check your TERM) I have been very pleased and not looked back.
urxvt is OG but xterm sixel support is nice.
azthecx
It also makes for very "efficient" software. The amount of time Sent has saved me, with very minor styling modifications, makes it one of the best pieces of software I've ever used.
alkonaut
> surf is a simple web browser based on WebKit2/GTK+. It is able to display websites and follow links.
That’s… certainly a low bar for not sucking
clintonc
From the page about dwm:
> Because dwm is customized through editing its source code, it's pointless to make binary packages of it. This keeps its userbase small and elitist. No novices asking stupid questions.
...sucks less than what? :) Simple is good, but simpler does not necessarily mean better.
baruchel
DWM is obviously not competing with GNOME or KDE and is quite a niche window manager. However, by focusing on being a simple, hackable tool—rather than adding menus, settings, help pages, and so on—it remains reliable and easy to maintain. Each DWM user typically has their own set of carefully selected patches and can (re)compile/(re)install it as a single binary in just two or three minutes.
No one is forced to use it, but the overall experience is quite convincing.
marrs
It does in this case.
swfsql
I think what I miss is a suckless-style setup, but async and for everything (lots and lots of apps). To the point where each of those different apps is single-threaded, and in such a way that they all collaboratively share the same single thread. So I'm looking for a single app that has many library-apps inside of it, all brutalistic, async and single-threaded.
otabdeveloper4
Single threading sucks. It's 2025 and even low-end computers have dozens of hardware threads. I don't want to compute like it's 1995 anymore.
tombert
I agree, but there is a sort of beauty in programs that were written for absurdly slow hardware.
There was a thing on HN like seven years ago [1] that talked about how command line tools can be many times faster than Hadoop; the streams and pipelines are just so ridiculously optimized.
Obviously you're not going to replace all your Hadoop clusters with just Bash and netcat, and I'm sure there are many cases where Hadoop absolutely outperforms something cobbled together with a Bash script, but I still think it serves a purpose: because these tools were written for such tiny amounts of RAM and crappy CPUs, they perform cartoonishly fast on modern computers.
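The linked post is built around pipelines of roughly this shape (a sketch, not its exact command; games/*.pgn is a placeholder):

    # each stage is a separate process, so the pipeline keeps several cores busy
    # without any explicit parallelism in the script itself
    cat games/*.pgn | grep -F 'Result' | sort | uniq -c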
I don't like coding like it's 1995 either, and I really don't write code like that anymore; most of the stuff I write nowadays can happily assume several gigs of memory and many CPUs, but I still respect people that can squeeze every bit of juice out of a single thread and no memory.
[1] https://adamdrake.com/command-line-tools-can-be-235x-faster-...
otabdeveloper4
Single threading always makes it run slower though.
Also lots of 1995 assumptions lead to outrageously slow software if used today. Python in 1995 was only marginally slower than C++. It's orders of magnitude slower today.
swfsql
If it sucks we can call it suckmore then
ElectricalUnion
Is this reinventing cooperative multitasking + lack of process-based memory protection with more steps?
The reason why (almost) everyone migrated to preemptive multitasking + memory protection is that it only takes one piece of code behaving slightly differently from what the system/developer expected to bring the entire thing to a halt, either by simply being slower than expected, or by modifying state it's not supposed to.
LAC-Tech
I like that these guys are here. I definitely appreciate what they're doing.
But I think I like software that sucks a little bit. BSPWM with its config as shell commands to the bspc daemon is about right; re-compiling C code is a bit much.
xmichael909
Yah, I don't know, DWM looks like it kinda sucks imho...
SoftTalker
I've used awesome for years. Love it, and never really looked at anything else since I found it. It's based on a fork of dwm I guess, so maybe I would like dwm also.
marrs
Nah, it's great. It's the only thing that keeps me on Linux tbh.
snailmailstare
If the goal were not to suck at all a GUI would be the wrong choice of genre.
But I do hope the st buffer overflow fixes my st usage in builds..
jauntywundrkind
I used it for a year or two before I switched to wmii. It was pretty great... In like 1997 or so.
chjj
Heresy.
nicebyte
yeah no. I've mainlined dwm + dmenu all the way back in 200x, I've written tons of makefiles and have the scars to prove it.
These days I'm off of this minimalism crap. it looks good on paper, but never survives collision with reality [1] (funny that this post is on hn front-page today as well!).
[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
snailmailstare
I like these tools because they are minimalist.. I don't really care for the fact that they are C/make oriented and would rather help someone rewriting them in go or rust than show that I have a non minimal amount of scar tissue to work with a needlessly complicated past.
nicebyte
my comment isn't about things being written using c/make/whatever, it's precisely about the faulty assumption that complexity is needless.
snailmailstare
Oh, then I totally disagree (or don't understand why you would need to see a psychoanalysis of a blacksmith to evaluate their offerings). Many projects have places that need some complexity, configuration or advanced tools, but that doesn't imply the hardware store should stop selling average hammers or make you wade through an aisle of crap from providers like Peloton to see if they better meet your needs.
(I.e. show me where in the article he replaced a standard tool like the hammer or pot with a complex one customized to exactly what he wanted to solve or explain why that advanced tool wouldn't suck given that there's a lot more details than one would expect.)
skydhash
I just went back to fedora+gnome on my PCs from FreeBSD+(tiling wm). I think minimalism is good when your workflow is very focused and you already know the requirements for your stack. But if you have unexpected workflows coming in everyday, the maintenance quickly becomes a burden. Gnome may not be perfect, but it's quite nice as a baseline for a working environment.
yoyohello13
Same. I ran dwm for a long time. These days I just run Gnome. You can make it work very similarly to a tiling window manager, and all the random crap the world throws at you (printers, projectors, random other monitors, Java programs) "Just Works".
saidinesh5
The biggest impact suckless had on me was via their Stali Linux FAQ: https://sta.li/faq/
They've built an entirely statically linked user space for Linux. Until then I never questioned the default Linux "shared libraries for everything" approach and assumed that was the best way to deliver software.
Every little CLI tool I wrote at work - I used to create distro packages for them, or a tarball with a shell script that set LD_LIBRARY_PATH to find the correct version of the XML libraries etc. I used.
It didn't have to be this way. Dealing with distro versioning headaches or the finicky custom packaging of the libraries into that tarball, just to let the users run my 150 kB binary.
Since then I've mostly used static linking where I can, AppImages otherwise. I'm not developing core distro libraries. I'm just developing a tiny "app" my users need to use. I'm glad that with newer languages like Go etc., static linking is the default.
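(For what it's worth, a sketch of what that default looks like; the tool name and module path are placeholders:)

    # with cgo disabled the binary has no dynamic dependencies at all, so it can
    # be copied onto any Linux box of the same architecture and just run
    CGO_ENABLED=0 go build -o mytool ./cmd/mytool
    file mytool    # reports a statically linked executable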
Don't get me wrong. Dynamic linking definitely has its place. But by default our software deployment doesn't need to be this complicated.