Parallel ./configure
193 comments
April 25, 2025

iforgotpassword
The other issue is that people seem to just copy configure/autotools scripts over from older or other projects because either they are lazy or don't understand them enough to do it themselves. The result is that even with relatively modern code bases that only target something like x86, arm and maybe mips and only gcc/clang, you still get checks for the size of an int, or which header is needed for printf, or whether long long exists... And then the entire code base never checks the generated macros in a single place, uses int64_t and never checks for stdint.h in the configure script...
IshKebab
I don't think it's fair to say "because they are lazy or don't understand". Who would want to understand that mess? It isn't a virtue.
A fairer criticism would be that they don't have the sense to use a saner build system. CMake is a mess but even that is faaaaar saner than autotools, and probably more popular at this point.
smartmic
I took the trouble (and even spent the money) to get to grips with autotools in a structured and detailed way by buying a book [1] about it and reading as much as possible. Yes, it's not trivial, but autotools is not witchcraft either; as written elsewhere, it is a masterpiece of engineering. I have dealt with it without prejudice and since then I have been more of a fan of autotools than a hater. Anyway, I highly recommend the book, and yes, after reading it, I think autotools is better than its reputation.
xiaoyu2006
Autotools use M4 to meta-program a bash script that meta-programs a bunch of C(++) sources and generates C(++) sources that utilize meta-programming for different configurations; after which the meta-programmed script, again, meta-programs monolithic makefiles.
This is peak engineering.
1718627440
Yes, that sounds ridiculous, but it is that way so that the user can modify each intermediate step, which is the main selling point. As a user I really prefer that experience, which is why I as a developer put up with the nonsense of M4. (Which I think is more due to M4 being a macro language than to inherent language flaws.)
krior
Sounds like a headache. Is there a nice Python lib to generate all this M4-mumbo-jumbo?
knorker
autotools is the worst, except for all the others.
I'd like to think of myself as reasonable, so I'll just say that reasonable people may disagree with your assertion that cmake is in any way at all better than autotools.
IshKebab
Nope, autotools is actually the worst.
There is no way in hell anyone reasonable could say that Autotools is better than CMake.
NekkoDroid
> CMake is a mess but even that is faaaaar saner than autotools, and probably more popular at this point.
Having done a deep dive into CMake I actually kinda like it (really modern cmake is actually very nice, except the DSL but that probably isn't changing any time soon), but that is also the problem: I had to do a deep dive into learning it.
kazinator
Someone who doesn't want to understand a huge mess should probably not be bringing it into their project.
In software you sometimes have to have the courage to reject doing what others do, especially if they're only doing it because of others.
rollcat
This.
Simple projects: just use plain C. This is dwm, the window manager that spawned a thousand forks. No ./configure in sight: <https://git.suckless.org/dwm/files.html>
If you run into platform-specific stuff, just write a ./configure in simple and plain shell: <https://git.suckless.org/utmp/file/configure.html>. Even if you keep adding more stuff, it shouldn't take more than 100ms.
If you're doing something really complex (like say, writing a compiler), take the approach from Plan 9 / Go. Make a conditionally included header file that takes care of platform differences for you. Check the $GOARCH/u.h files here:
<https://go.googlesource.com/go/+/refs/heads/release-branch.g...>
(There are also some simple OS-specific checks: <https://go.googlesource.com/go/+/refs/heads/release-branch.g...>)
This is the reference Go compiler; it can target any platform, from any host (modulo CGO); later versions are also self-hosting and reproducible.
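(For illustration, a minimal sketch of what such a conditionally included platform header can look like; the names and the platform split are made up for this example, and this is not Go's actual u.h.)

    /* illustrative sketch only -- not the real Plan 9 / Go u.h */
    #ifndef U_H
    #define U_H

    #include <stdint.h>            /* assumes a C99 libc on every target */

    typedef int32_t  i32;
    typedef uint32_t u32;
    typedef int64_t  i64;
    typedef uint64_t u64;

    #if defined(_WIN64) || defined(__LP64__)
    typedef u64 uintptr;           /* pointer-sized integer, per data model */
    #else
    typedef u32 uintptr;
    #endif

    #endif /* U_H */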
Levitating
I want to agree with you, but as someone who regularly packages software for multiple distributions I really would prefer people using autoconf.
Software with custom configure scripts is especially dreaded amongst packagers.
Joker_vD
Why, again, does software in the Linux world have to be packaged for multiple distributions? On the Windows side, if you make an installer for Windows 7, it will still work on Windows 11. And to boot, you don't have to go through some Microsoft-approved package distribution platform and its approval process: you can, of course, but you don't have to; you can distribute your software by yourself.
knorker
Interesting that you would bring up Go. Go is probably the most head-desk language of all for writing portable code. Go will fight you the whole way.
Even plain C is easier.
You can end up needing a whole separate file just for OpenBSD, to work around the fact that some standard library parts have different types on different platforms.
So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.
You need a file for Linux 32-bit ARM (int32/int32), one for Linux 64-bit ARM (int64/int64), one for OpenBSD 32-bit ARM (int64/int32), etc. Maybe you can group them, but this is just one difference, so in the end you'll have to do one file per combination of OS and arch. And all you wanted was a pluggable "what's a Timeval?", something that all build systems solved a long time ago.
And then maybe the next release of OpenBSD they've changed it, so now you cannot use Go's way to write portable code at all.
So between autotools, cmake, and the Go method, the Go method is by far the worst option for writing portable code.
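(For contrast, a sketch of the C side of the same problem: the platform's own header supplies the field types, and a cast at the point of use absorbs the int32/int64 difference, so no per-OS/arch source files are needed. POSIX-only, for illustration.)

    /* sketch: plain C, same source for every POSIX platform */
    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval tv;            /* field types come from the OS header */
        if (gettimeofday(&tv, NULL) != 0)
            return 1;
        printf("%lld.%06lld\n",
               (long long)tv.tv_sec,  /* casts absorb int32 vs int64 fields */
               (long long)tv.tv_usec);
        return 0;
    }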
rollcat
I have specifically given an example of u.h defining types such as i32, u64, etc to avoid running a hundred silly tests like "how long is long", "how long is long long", etc.
> So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.
I assume you mean [syscall.Timeval]?
$ go doc syscall
[...]
Package syscall contains an interface to the low-level operating system
primitives. The details vary depending on the underlying system [...].
Do you have a specific use case for [syscall], where you cannot use [time]?
technion
There's a trending post right now for printf implemented in bare metal, and my first thought was "finally, all that autoconf code that checks for printf can handle the use case where it doesn't exist".
epcoa
> either they are lazy or don't understand them enough to do it themselves.
Meh, I used to keep printed copies of autotools manuals. I sympathize with all of these people and acknowledge they are likely the sane ones.
Levitating
I've had projects where I spent more time configuring autoconf than actually writing code.
That's what you get for wanting to use a glib function.
rbanffy
It’s always wise to be specific about the sizes you want for your variables. You don’t want your ancient 64-bit code to act differently on your grandkids’ 128-bit laptops. Unless, of course, you want to let the compiler decide whether to leverage higher precision types that become available after you retire.
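(A tiny sketch of the difference, assuming a C99 toolchain with <stdint.h> and <inttypes.h>:)

    #include <stdio.h>
    #include <stdint.h>
    #include <inttypes.h>

    int main(void)
    {
        long    a = 42;   /* width depends on platform and ABI (ILP32, LP64, ...) */
        int64_t b = 42;   /* exactly 64 bits everywhere */

        printf("long: %zu bytes, int64_t: %zu bytes\n", sizeof a, sizeof b);
        printf("b = %" PRId64 "\n", b);   /* portable format macro */
        return 0;
    }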
creatonez
Noticed an easter egg in this article. The text below "I'm sorry, but in the year 2025, this is ridiculous:" is animated entirely without Javascript or .gif files. It's pure CSS.
This is how it was done: https://github.com/tavianator/tavianator.com/blob/cf0e4ef26d...
o11c
Unfortunately it forgets to HTML-escape the <wchar.h> etc.
tavianator
Whoops! Forgot to do that when I switched from a ``` block to raw html
codys
I did something like the system described in this article a few years back. [1]
Instead of splitting the "configure" and "make" steps though, I chose to instead fold much of the "configure" step into the "make".
To clarify, this article describes a system where `./configure` runs a bunch of compilations in parallel, then `make` does stuff depending on those compilations.
If one is willing to restrict what the configure step can detect/do to writing header files (rather than affecting variables examined/used in a Makefile), then instead one can have `./configure` generate a `Makefile` (or in my case, a ninja file), and have both the "run the compiler to see what defines to set" step and the "run the compiler to build the executable" step happen in a single `make` or `ninja` invocation.
The simple way here results in _almost_ the same behavior: all the "configure"-like stuff running and then all the "build" stuff running. But if one is a bit more careful/clever and doesn't depend on the entire "config.h" for every "<real source>.c" compilation, then one can start to interleave the work perceived as "configuration" with that seen as "build". (I did not get that fancy)
tavianator
Nice! I used to do something similar, don't remember exactly why I had to switch but the two step process did become necessary at some point.
Just from a quick peek at that repo, nowadays you can write
#if __has_attribute(cold)
and avoid the configure test entirely. Probably wasn't a thing 10 years ago though :)
o11c
The problem is that the various `__has_foo` aren't actually reliable in practice - they don't tell you if the attribute, builtin, include, etc. actually works the way it's supposed to without bugs, or if it includes a particular feature (accepts a new optional argument, or allows new values for an existing argument, etc.).
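(A sketch of one common workaround, using the fallthrough attribute as an example: pair the __has_attribute probe with a compiler/version guess, since the probe alone only says the name is recognized. The GCC 7 cutoff here is an assumption for illustration, not something __has_attribute can tell you.)

    /* sketch: don't trust __has_attribute alone */
    #ifndef __has_attribute
    #  define __has_attribute(x) 0   /* older compilers: pretend "no" */
    #endif

    #if __has_attribute(__fallthrough__) && \
        (defined(__clang__) || (defined(__GNUC__) && __GNUC__ >= 7))
    #  define FALLTHROUGH __attribute__((__fallthrough__))
    #else
    #  define FALLTHROUGH ((void)0)  /* harmless no-op statement */
    #endif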
aaronmdjones
> #if __has_attribute(cold)
You should use double underscores on attribute names to avoid conflicts with macros (user-defined macros beginning with double underscores are forbidden, as identifiers beginning with double underscores are reserved):

    #if __has_attribute(__cold__)
    # warning "This works too"
    #endif

    static void __attribute__((__cold__))
    foo(void)
    {
        // This works too
    }
codys
yep. C's really come a long way with the special operators for checking if attributes exist, if builtins exist, if headers exist, etc.
Covers a very large part of what is needed, making fewer and fewer things need to end up in configure scripts. I think most of what's left is checking for the existence of items (types, functions) and their shape, as you were doing :). I can dream about getting a nice special operator to check for fields/functions; it would let us remove even more from configure time, but I suspect we won't get one, because that requires type resolution and none of the existing special operators do that.
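(For illustration, a sketch of the sort of checks that have already moved out of configure scripts and into the preprocessor; the nested #ifdef form is used because not every compiler defines __has_include or __has_builtin.)

    /* sketch: feature probes the preprocessor can do by itself now */
    #ifdef __has_include
    #  if __has_include(<threads.h>)
    #    define HAVE_C11_THREADS 1   /* libc ships C11 <threads.h> */
    #  endif
    #endif

    #ifdef __has_builtin
    #  if __has_builtin(__builtin_expect)
    #    define likely(x) __builtin_expect(!!(x), 1)
    #  endif
    #endif
    #ifndef likely
    #  define likely(x) (x)          /* fallback: no hint, same semantics */
    #endif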
mikepurvis
You still need a configure step for the "where are my deps" part of it, though both autotools and CMake would be way faster if all they were doing was finding, and not any testing.
throwaway81523
GNU Parallel seems like another convenient approach.
fmajid
It has no concept of dependencies between tasks, or doing a topological sort prior to running the task queue. GNU Make's parallel mode (-j) has that.
epistasis
I've spent a fair amount of time over the past decades to make autotools work on my projects, and I've never felt like it was a good use of time.
It's likely that C will continue to be used by everyone for decades to come, but I know that I'll personally never start a new project in C again.
I'm still glad that there's some sort of push to make autotools suck less for legacy projects.
monkeyelite
You can use make without configure. If needed, you can also write your own configure instead of using auto tools.
Creating a make file is about 10 lines and is the lowest friction for me to get programming of any environment. Familiarity is part of that.
viraptor
It's a bit of a balance once you get bigger dependencies. A generic autoconf is annoying to write, but rarely an issue when packaging for a distro. Most issues I've had to fix in nixpkgs were for custom builds unfortunately.
But if you don't plan to distribute things widely (or have no deps).. Whatever, just do what works for you.
edoceo
Write your own configure? For an internal project, where much is under domain control, sure. But for the 1000s of projects trying to be multi-platform and/or support flavours/versions - oh gosh.
monkeyelite
It depends on how much platform specific stuff you are trying to use. Also in 2025 most packages are tailored for the operating system by packagers - not the original authors.
Autotools is going to check every config from the past 50 years.
eqvinox
To extend on sibling comments:
autoconf is in no way, shape or form an "official" build system associated with C. It is a GNU creation and certainly popular, but not to a "monopoly" degree, and its share is declining. (Plain make, meson, and cmake are popular alternatives.)
tidwall
I've stopped using autotools for new projects. Just a Makefile, and the -j flag for concurrency.
psyclobe
cmake ftw
JCWasmx86
Or meson is a serious alternative to cmake (Even better than cmake imho)
torarnv
CMake also does sequential configuration AFAIK. Is there any work to improve on that somewhere?
OskarS
Meson and cmake in my experience are both MUCH faster though. It’s much less of an issue with these systems than with autotools.
aldanor
You mean cargo build
yjftsjthsd-h
... can cargo build things that aren't rust? If yes, that's really cool. If no, then it's not really in the same problem domain.
fmajid
And on macOS, the notarization checks for all the conftest binaries generated by configure add even more latency. Apple reneged on their former promise to give an opt-out for this.
fishgoesblub
Very nice! I always get annoyed when my fancy 16 thread CPU is left barely used as one thread is burning away with the rest sitting and waiting. Bookmarking this for later to play around with whatever projects I use that still use configure.
Also, I was surprised when the animated text at the top of the article wasn't a gif, but actual text. So cool!
andreyv
Autoconf can use cache files [1], which can greatly speed up repeated configures. With cache, a test is run at most once.
[1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...
fanf2
Sadly the cache files don’t record enough about the environment to be usable if you change configure options. They are generally unreliable.
SuperV1234
CMake also needs this, badly...
torarnv
Agreed! The CMake Xcode generator is extremely slow because not only is it running the configure tests sequentially, but it generates a new Xcode project for each of them.
rbanffy
I get the impression configure not only runs sequentially, but incrementally, where previous results can change the results of tests run later. Were it just sequential, running multiple tests as separate processes would be relatively simple.
Also, you shouldn’t need to run ./configure every time you run make.
fmajid
No, but if you are doing something like rebuilding a distro's worth of packages from source from scratch, the configure step starts to dominate. I build around 550, and it takes around 6 hours on a single node.
Most checks are common, so what can help is having a shared cache for all configure scripts so if you have 400 packages to rebuild, it doesn't check 400 times if you should use flock or fcntl. This approach is described here: https://jmmv.dev/2022/06/autoconf-caching.html
It doesn't help that autoconf is basically abandonware, with one forlorn maintainer trying to resuscitate it, but creating major regressions with new releases: https://lwn.net/Articles/834682/
rbanffy
> It doesn't help that autoconf is basically abandonware
A far too common tragedy of our age.
pdimitar
I don't disagree with that general premise but IMO autotools being (gradually?) abandoned is logical. It served its purpose. Not saying it's still not very useful in the darker shadows of technology but for a lot of stuff people choose Zig, Rust, Golang etc. today, with fairly good reasons too, and those PLs usually have fairly good packaging and dependency management and building subsystems built-in.
Furthermore, there really has to be a better way to do what autotools is doing, no? Sure, there are some situations where you only have some bare sh shell and nothing much else but I'd venture to say that in no less than 90% of all cases you can very easily have much more stuff installed -- like the `just` task runner tool, for example, that solves most of the problems that `make` usually did.
If we are talking in terms of our age, we also have to take into account that there's too much software everywhere! I believe some convergence has to start happening. There is such a thing as too much freedom. We are dispersing so much creative energy for almost no benefit of humankind...
moralestapia
> The purpose of a ./configure script is basically to run the compiler a bunch of times and check which runs succeeded.
Wait is this true? (!)
gdwatson
Historically, different Unixes varied a lot more than they do today. Say you want your program to use the C library function foo on platforms where it’s available and the function bar where it isn’t: You can write both versions and choose between them based on a C preprocessor macro, and the program will use the best option available for the platform where it was compiled.
But now the user has to set the preprocessor macro appropriately when he builds your program. Nobody wants to give the user a pop quiz on the intricacies of his C library every time he goes to install new software. So instead the developer writes a shell script that tries to compile a trivial program that uses function foo. If the script succeeds, it defines the preprocessor macro FOO_AVAILABLE, and the program will use foo; if it fails, it doesn’t define that macro, and the program will fall back to bar.
That shell script grew into configure. A configure script for an old and widely ported piece of software can check for a lot of platform features.
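(A minimal sketch of the pattern just described, assuming a configure script that defines HAVE_STRLCPY when its test program compiled and linked; strlcpy really is present on some libcs and absent on others, which is what makes it a typical target for such a check.)

    /* sketch: code written against a configure-provided feature macro */
    #include <string.h>

    #ifdef HAVE_STRLCPY
    #  define copy_str(dst, src, n) strlcpy((dst), (src), (n))
    #else
    /* fallback with the same contract for platforms without strlcpy */
    static size_t copy_str(char *dst, const char *src, size_t n)
    {
        size_t len = strlen(src);
        if (n > 0) {
            size_t m = (len < n - 1) ? len : n - 1;
            memcpy(dst, src, m);
            dst[m] = '\0';
        }
        return len;
    }
    #endif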
im3w1l
I'm not saying we should send everyone a docker container with a full copy of ubuntu, electron and foo.js whether they have foo in their c library or not, but maybe there is a middle ground?
moralestapia
I think this is a gigantic point in favor of interpreted languages.
JS and Python wouldn't be what they are today if you had to `./configure` every website you want to visit, lmao.
klysm
The closer and deeper you look into the C toolchains the more grossed out you’ll be
acuozzo
Hands have to get dirty somewhere. "As deep as The Worker's City lay underground, so high above towered the City of Metropolis."
The choices are:
1. Restrict the freedom of CPU designers to some approximation of the PDP11. No funky DSP chips. No crazy vector processors.
2. Restrict the freedom of OS designers to some approximation of Unix. No bespoke realtime OSes. No research OSes.
3. Insist programmers use a new programming language for these chips and OSes. (This was the case prior to C and Unix.)
4. Insist programmers write in assembly and/or machine code. Perhaps a macro-assembler is acceptable here, but this is inching toward C.
The cost of this flexibility is gross tooling to make it manageable. Can it be done without years and years of accrued M4 and sh? Perhaps, but that's just CMake and CMake is nowhere near as capable as Autotools & friends are when working with legacy platforms.
klysm
There is no real technical justification for the absolute shit show that is the modern C toolchain
Am4TIfIsER0ppos
Yes.
kazinator
I've implemented a configuration caching mechanism for myself (in one important project) which stores configuration artifacts in a cache directory, associated by the commit hash. It works as a git hook:
$ git bisect good
Bisecting: 7 revisions left to test after this (roughly 3 steps)
restored cached configuration for 2f8679c346a88c07b81ea8e9854c71dae2ade167
[2f8679c346a88c07b81ea8e9854c71dae2ade167] expander: noexpand mechanism.
The "restored cached configuration" message is from the git hook. What it's not saying is that it also saved the config for the commit it is navigating away from.I primed the cache by executing a "git checkout" for each of a range of commits.
Going forward, it will populate itself.
This is the only issue I would conceivably care about with regard to configure performance. When not navigating in git history, I do not often run configure.
Downstream distros do not care; they keep their machines and cores busy by building multiple packages in parallel.
It's not ideal because the cache from one host is not applicable to another; you can't port it. I could write an intelligent script to populate it, which basically identifies commits (within some specified range) that have touched the config system, and then assumes that for all in-between commits, it's the same.
The hook could do this. When it notices that the current sha doesn't have a cached configuration, it could search backwards through history for the most recent commit which does have it. If the configure script (or something influencing it) has not been touched since that commit, then its cached material can be populated for all in-between commits right through the current one. That would take care of large swaths of commits in a typical bisect session.
kazinator
The right way to do this is not to rely on the git hashes, but to hash the inputs into the configuration system (those that are in version control, not the implicit environmental inputs from the platform).
For instance, if the only input to the configuration system is the body of the configure script, then we hash that. That is then our key to the generated materials.
gorgoiler
On the topic* of having 24 cores and wanting to put them to work: when I were a lad the promise was that pure functional programming would trivially allow for parallel execution of functions. Has this future ever materialized in a modern language / runtime?
x = 2 + 2
y = 2 * 2
z = f(x, y)
print(z)
…where x and y evaluate in parallel without me having to do anything. Clojure, perhaps?

*And superficially off the topic of this thread, but possibly not.
gdwatson
Superscalar processors (which include all mainstream ones these days) do this within a single core, provided there are no data dependencies between the assignment statements. They have multiple arithmetic logic units, and they can start a second operation while the first is executing.
But yeah, I agree that we were promised a lot more automatic multithreading than we got. History has proven that we should be wary of any promises that depend on a Sufficiently Smart Compiler.
lazide
Eh, in this case not splitting them up to compute them in parallel is the smartest thing to do. Locking overhead alone is going to dwarf every other cost involved in that computation.
gdwatson
Yeah, I think the dream was more like, “The compiler looks at a map or filter operation and figures out whether it’s worth the overhead to parallelize it automatically.” And that turns out to be pretty hard, with potentially painful (and nondeterministic!) consequences for failure.
Maybe it would have been easier if CPU performance didn’t end up outstripping memory performance so much, or if cache coherency between cores weren’t so difficult.
maccard
I think you’re fixating on the very specific example. Imagine if instead of 2 + 2 it was multiplying arrays of large matrices. The compiler or runtime would be smart enough to figure out if it’s worth dispatching the parallelism or not for you. Basically auto vectorisation but for parallelism
snackbroken
Bend[1] and Vine[2] are two experimental programming languages that take similar approaches to automatically parallelizing programs: interaction nets[3]. IIUC, they basically turn the whole program into one big dependency graph, then the runtime figures out what can run in parallel and distributes the work to however many threads you can throw at it. It's also my understanding that they are currently both quite slow, which makes sense, as the focus has been on making `write embarrassingly parallelizable program -> get highly parallelized execution` work at all until recently. Time will tell if they can manage enough optimizations that the approach enables you to get reasonably performing parallel functional programs 'for free'.
[1] https://github.com/HigherOrderCO/Bend [2] https://github.com/VineLang/vine [3] https://en.wikipedia.org/wiki/Interaction_nets
chubot
That looks more like a SIMD problem than a multi-core problem
You want bigger units of work for multiple cores, otherwise the coordination overhead will outweigh the work the application is doing
I think the Erlang runtime is probably the best use of functional programming and multiple cores. Since Erlang processes are shared nothing, I think they will scale to 64 or 128 cores just fine
Whereas the GC will be a bottleneck in most languages with shared memory ... you will stop scaling before using all your cores
But I don't think Erlang is as fine-grained as your example ...
Some related threads:
https://news.ycombinator.com/item?id=40130079
https://news.ycombinator.com/item?id=31176264
AFAIU Erlang is not that fast an interpreter; I thought the Pony language was doing something similar (shared nothing?) with compiled code, but I haven't heard about it in a while.
juped
There's some sharing used to avoid heavy copies, though GC runs at the process level. The implementation is tilted towards copying between isolated heaps over sharing, but it's also had performance work done over the years. (In fact, if I really want to cause a global GC pause bottleneck in Erlang, I can abuse persistent_term to do this.)
fmajid
Yes, Erlang's zero-sharing model is what I think Rust should have gone for in its concurrency model. Sadly too few people have even heard of it.
chubot
That would be an odd choice for a low-level language ... languages like C, C++, and Rust let you use whatever the OS has, and the OS has threads
A higher-level language can be more opinionated, but a low-level one shouldn't straitjacket you.
i.e. Rust can be used to IMPLEMENT an Erlang runtime
If you couldn't use threads, then you could not implement an Erlang runtime.
steveklabnik
Very early on, Rust was like this! But as the language changed over time, it became less appropriate.
speed_spread
I believe it's not the language preventing it but the nature of parallel computing. The overhead of splitting up things and then reuniting them again is high enough to make trivial cases not worth it. OTOH we now have pretty good compiler autovectorization which does a lot of parallel magic if you set things right. But it's not handled at the language level either.
inejge
> …where x and y evaluate in parallel without me having to do anything.
I understand that yours is a very simple example, but a) such things are already parallelized even on a single thread thanks to all the internal CPU parallelism, b) one should always be mindful of Amdahl's law, c) truly parallel solutions to various problems tend to be structurally different from serial ones in unpredictable ways, so there's no single transformation, not even a single family of transformations.
fweimer
There have been experimental parallel graph reduction machines. Excel has a parallel evaluator these days.
Oddly enough, functional programming seems to be a poor fit for this because the fanout tends to be fairly low: individual operations have few inputs, and single-linked lists and trees are more common than arrays.
colechristensen
There have been Fortran compilers which have done auto-parallelization for decades; I think NVIDIA released a compiler that will take your code and do its best to run it on a GPU.
This works best for scientific computing code that runs through very big loops where there is very little interaction between iterations.