The other issue is that people seem to just copy configure/autotools scripts over from older or other projects, either because they're lazy or because they don't understand them well enough to write their own. The result is that even in relatively modern code bases that only target something like x86, ARM, and maybe MIPS, and only gcc/clang, you still get checks for the size of an int, or for which header is needed for printf, or for whether long long exists... And then the entire code base never checks the generated macros in a single place, uses int64_t everywhere, and never even checks for stdint.h in the configure script...
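For what it's worth, the fix being asked for here is small. A minimal sketch, assuming an autoconf setup where AC_CHECK_HEADERS([stdint.h]) defines HAVE_STDINT_H in config.h (the compat.h name is just illustrative):

/* compat.h -- the single central place that checks the generated macros.
 * HAVE_STDINT_H is what autoconf's AC_CHECK_HEADERS([stdint.h]) defines;
 * the rest of the code base includes this header instead of guessing. */
#include "config.h"

#ifdef HAVE_STDINT_H
#  include <stdint.h>
#else
#  error "this code base requires int64_t from <stdint.h>"
#endif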
I did something like the system described in this article a few years back. [1]
Instead of splitting the "configure" and "make" steps though, I chose to instead fold much of the "configure" step into the "make".
To clarify, this article describes a system where `./configure` runs a bunch of compilations in parallel, then `make` does stuff depending on those compilations.
If one is willing to restrict what configure can detect/do to writing header files (rather than setting variables examined/used in a Makefile), then one can have `./configure` generate a `Makefile` (or in my case, a ninja file), and the "run the compiler to see what defines to set" steps and the "run the compiler to build the executable" steps can all happen in a single `make` or `ninja` invocation.
The simple way here results in _almost_ the same behavior: all the "configure"-like stuff running and then all the "build" stuff running. But if one is a bit more careful/clever and doesn't depend on the entire "config.h" for every "<real source>.c" compilation, then one can start to interleave the work perceived as "configuration" with that seen as "build". (I did not get that fancy)
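A hypothetical sketch of that fancier variant in ninja syntax (the probe scripts and file names are invented for illustration; the real implementation is in the config_h tool linked at [1]): each feature probe is an ordinary build edge that writes one small header, so a source file that needs only that probe can start compiling as soon as it finishes, interleaving "configure" work with "build" work.

rule probe
  command = sh $in > $out
  description = probing $out

rule cc
  command = cc -Iconfig -c $in -o $out

# Each probe writes one tiny header fragment...
build config/have_stdint.h: probe probes/have_stdint.sh

# ...and each object depends only on the probes it actually uses, so
# this compile is unblocked the moment that single probe completes.
build main.o: cc main.c | config/have_stdint.h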
Noticed an easter egg in this article. The text below "I'm sorry, but in the year 2025, this is ridiculous:" is animated entirely without Javascript or .gif files. It's pure CSS.
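The article's actual stylesheet is linked elsewhere in the thread; as a purely hypothetical illustration of the general technique (class name and timing made up), plain text can be animated with nothing but a CSS keyframe rule:

/* Blink a terminal-style cursor using only CSS -- no JS, no .gif. */
@keyframes blink {
  50% { opacity: 0; }
}
.cursor {
  animation: blink 1s step-end infinite;
}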
And on macOS, the notarization checks for all the conftest binaries generated by configure add even more latency. Apple reneged on their former promise to give an opt-out for this.
On the topic* of having 24 cores and wanting to put them to work: when I were a lad the promise was that pure functional programming would trivially allow for parallel execution of functions. Has this future ever materialized in a modern language / runtime?
x = 2 + 2
y = 2 * 2
z = f(x, y)
print(z)
…where x and y evaluate in parallel without me having to do anything. Clojure, perhaps?
*And superficially off the topic of this thread, but possibly not.
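Clojure does get close with futures, though the parallelism is still marked explicitly rather than inferred from purity. A minimal sketch, with f as a stand-in for any combining function:

(defn f [a b] (+ a b))     ; stand-in combining function

(let [x (future (+ 2 2))   ; evaluated on a pool thread
      y (future (* 2 2))   ; evaluated on a pool thread
      z (f @x @y)]         ; deref blocks until both are done
  (println z))

(shutdown-agents)          ; let the JVM exit instead of waiting on the pool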
I get the impression configure not only runs sequentially, but incrementally: the results of earlier tests can change the results of tests run later. Were it just sequential, running multiple tests as separate processes would be relatively simple.
Also, you shouldn’t need to run ./configure every time you run make.
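Right: autoconf-generated Makefiles conventionally carry remaking rules so that the configure machinery re-runs only when its own inputs change. A simplified sketch of that standard pattern:

# Rebuild the Makefile only when its template or the recorded
# configure results change -- not on every `make` invocation.
Makefile: Makefile.in config.status
	./config.status

config.status: configure
	./config.status --recheck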
Very nice! I always get annoyed when my fancy 16 thread CPU is left barely used as one thread is burning away with the rest sitting and waiting. Bookmarking this for later to play around with whatever projects I use that still use configure.
Also, I was surprised when the animated text at the top of the article wasn't a gif, but actual text. So cool!
I actually think this is possible to improve if you have the autoconf input files. You could parse them to find all the checks you know can run in parallel, and run those concurrently.
As a user I highly appreciate ./configure for the --help flag, which usually tells me how to build a program with or without particular functionalities which may or may not be applicable to my use-case.
Why do we even need to run most of the things in ./configure? Why not just have a file in /etc, updated as packages are installed, that ./configure can read to learn various facts about the environment? Obviously it would still allow setting things with parameters and would still create a Makefile, but much faster.
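Autoconf does ship something along these lines: a site-wide file of precomputed answers that every configure script can read. A sketch using the real CONFIG_SITE mechanism and ac_cv_* cache variables (the specific values here are illustrative):

# /etc/config.site -- answers shared by every ./configure on this box
ac_cv_header_stdint_h=yes
ac_cv_sizeof_int=4

# Point configure at the site file; -C caches the remaining results locally:
CONFIG_SITE=/etc/config.site ./configure -C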
It is possible in theory to speed up existing configure scripts by switching the interpreter from /bin/sh to something that scans the script, splits it into independent blocks, and runs them in parallel.
[1]: https://github.com/codyps/cninja/tree/master/config_h
This is how it was done: https://github.com/tavianator/tavianator.com/blob/cf0e4ef26d...
It's likely that C will continue to be used by everyone for decades to come, but I know that I'll personally never start a new project in C again.
I'm still glad that there's some sort of push to make autotools suck less for legacy projects.
[1] https://www.gnu.org/savannah-checkouts/gnu/autoconf/manual/a...
Nice writeup though.
Is there any such previous work?
Wait, is this true? (!)
It's like systemd trading determinism for boot speed, when it takes 5 minutes to get through the POST.