
Wild – A fast linker for Linux


221 comments

January 24, 2025

pzmarzly

Ever since mold relicensed from AGPL to MIT (as part of mold 2.0 release), the worldwide need for making another fast linker has been greatly reduced, so I wasn't expecting a project like this to appear. And definitely wasn't expecting it to already be 2x faster than mold in some cases. Will keep an eye on this project to see how it evolves, best of luck to the author.

estebank

Note that Mold has no interest in becoming incremental, so there is a big reason there for another linker to exist. I find it kind of embarrassing that MS' linker has been incremental by default for decades, yet there's no production ready incremental linker on Linux yet.

jcelerier

OTOH even lld, fast but still slower than mold, is already incredibly faster than MS's linker even without the incrementality. Like, I'm routinely linking hundreds of megabytes in less than a second anyway; not sure incrementality is worth that much.

cyco130

Not a rhetorical question: Could it be that part of the speed difference is due to the file system speed? I was shocked when I saw how much modern(ish) Windows file systems were slower than modern(ish) Linux ones.

pjmlp

Additionally, the way precompiled headers are handled in Visual C++ and C++ Builder has always been much better than in traditional UNIX compilers, and now we have modules as well.

zik

The way precompiled headers work in C++ is a bit of an ugly hack. And worse, it's almost as slow as just compiling them all every time anyway.

paulddraper

It has to be a candidate for the biggest long-standing gap in build tooling ever.

bogwog

[flagged]

dang

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html

estebank

Yes, I missed a word. And I believe pretty much everybody else realized what I meant to say.

Feel free to point me in the direction of a production grade incremental compiler that can run on Linux, GNU or otherwise.

Thorrez

I'm pretty sure that's a typo, and "incremental" was meant to be included in that sentence.

bdhcuidbebe

Why so hostile? Have a break, go look at the clouds, they are beautiful today!

panzi

Why does AGPL Vs MIT matter for a linker?

usr1106

Hmm, my naive summary of AGPL is "If you run AGPL code in your web backend you are obliged to offer the backend source to everyone using a web client". No wonder it's explicitly forbidden at Google.

What does that mean for a linker? If you ship a binary linked with an AGPL linker you need to offer the source of the linker? Or of the program being linked?

nicoburns

In practice I think it's pretty much equivalent to the GPL for a linker. But I can understand why people in commercial settings are wary of this license.

account42

Instead of spreading FUD you could go read the AGPL.

cies

iirc the mold author wanted to make money off of it (and I don't blame him).

AGPL is avoided like the plague by big corps: same big corps are known for having money to pay for licenses and sometimes (yes, I look at you Amazon) being good at deriving value from FLOSS without giving back.

iirc AGPL was used so everyone can just use it, big biz is still compelled to buy a license. this has been done before and can be seen as one of the strategies to make money off FLOSS.

dspearson

Under what circumstances would commercial companies be required to buy a license?! If they provide Linking as a Service?

o11c

Corps want to be able to release and use tools that take away the freedoms that GPL-family licenses provide. Often this results in duplication of effort.

This is not theoretical; it happens quite frequently. For toolchains, in particular I'm aware of how Apple (not that they're unique in this) has "blah blah open source" downloads, but often they do not actually correspond with the binaries. And not just "not fully reproducible but close" but "entirely new and incompatible features".

The ARM64 saga is a notable example, which went on for at least six months (at least Sept 2013 to March 2014). Xcode 5 shipped with a closed-source compiler only for all that time.

oguz-ismail

So they donate money instead of code? The project somehow benefits from the switch to MIT?

zelcon

Corps don't want to have to release the source code for their internal forks. They could also potentially be sued for everything they link using it because the linked binaries could be "derivative works" according to a judge who doesn't know anything.

pwdisswordfishz

They don't have to release source for internal forks.

saagarjha

I think you should get new lawyers if this is their understanding of how software licenses work.

integricho

What is the status of Windows support in mold? Reading the GitHub issues leads to circular confusion: the author first planned to support it, then moved Windows support to the sold linker, but then sold got archived recently. So in the end there is no Windows support, or did I just misunderstand the events?

secondcoming

Maybe I'm holding it wrong, but mold isn't faster at all if you're using LTO, which you probably should be.

compiler-guy

Mold will be faster than LLD even using LTO, but all of its benefits will be absolutely swamped by the LTO process, which is, more or less, recompiling the entire program from high-level LLVM-IR. That's extremely expensive and dwarfs any linking advantages.

So the benefit will be barely noticeable. As another comment points out, LTO should only be used when you need a binary optimized to within an inch of its life, such as a release copy or a copy for performance testing.

paulddraper

Username checks out.

And factual.

0x457

I think we're talking about non-release builds here. In those, you don't want to use LTO, you just want to get that binary as fast as possible.

Arelius

Yeah, if your development process requires LTO you may be holding it wrong....

Specifically, if LTO is so important that you need to be using it during development, you likely have a very exceptional case, or you have some big architectural issues that are causing much larger performance regressions than they should be.

IshKebab

> if your development process requires LTO you may be holding it wrong....

Not necessarily. LTO does a very good job of dead code elimination which is sometimes necessary to fit code in microcontroller memory.

jcalvinowens

If you're debugging, and your bug only reproduces with LTO enabled, you don't have much of a choice...

benatkin

Being able to choose a middle ground between development/debug builds and production builds is becoming increasingly important. This is especially true when developing in the browser, when often something appears to be slow in development mode but is fine in production mode.

WebAssembly and lightweight MicroVMs are enabling FaaS with real time code generation but the build toolchain makes it less appealing, when you don't want it to take half a minute to build or to be slow.

josephg

> Yeah, if your development process requires LTO you may be holding it wrong....

I spent a few months doing performance optimisation work. We wanted to see how much performance we could wring out of an algorithm & associated data structures. Each day I’d try and brainstorm new optimisations, implement them, and then A/B test the change to see how it actually affected performance. To get reliable tests, all benchmarks were run in release mode (with all optimisations - including LTO - turned on).

benatkin

Agreed. Both fast and small are desirable for sandboxed (least authority) isomorphic (client and server) microservices with WebAssembly & related tech.

null

[deleted]

account42

You shouldn't be using LTO where incremental build times are a concern, i.e. for development builds.

And for release builds link time is hardly a concern.

easythrees

Wait a minute, it’s possible to relicense something from GPL to MIT?

prmoustache

Yes, if you are the only developer and never received nor accepted external contributions, or if you managed to get permission from every single person who contributed, or replaced their code with your own.

computably

> or if you managed to get permission from every single person who contributed

This makes it sound more difficult than it actually is (logistically); it's not uncommon for major projects to require contributors to sign a CLA before accepting PRs.

DrillShopper

Yes. Generally you need permissions from contributors (either asking them directly or requiring a contribution agreement that assigns copyright for contributions to either the author or the org hosting the project), but you can relicense from any license to any other license.

That doesn't extinguish the prior versions under the prior license, but it does allow a project to change its license.

satvikpendem

I looked at this before; is it ready for production? I thought not based on the readme, so I'm still using mold.

For those on macOS, Apple released a new linker about a year or two ago (which is why the mold author stopped working on their macOS version), and if you're using it with Rust, put this in your config.toml:

    [target.aarch64-apple-darwin]
    rustflags = [ 
        "-C",
        "link-arg=-fuse-ld=/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/ld",
        "-C",
        "link-arg=-ld_new",
    ]

dralley

No, the author is pretty clear that it shouldn't be used for production yet

satvikpendem

Great, I'll keep a look out but will hold off on using it for now.

brink

I don't even use mold for production. It's for development.

bla3

Isn't the new linker just the default these days? I'm not sure adding that has any effect.

newman314

Can you confirm that's still the right location for Sequoia?

I have the command line tools installed and I only have /usr/bin/ld and /usr/bin/ld-classic

satvikpendem

Then it'd be /usr/bin/ld; I believe my solution was from before they moved the linker.

saagarjha

/usr/bin/ld will correctly invoke the right linker; it's a stub that looks at your developer dir and re-execs.

kryptiskt

What would be refreshing would be a C/C++ compiler that did away with the intermediate step of linking and built the whole program as a unit. LTO doesn't even have to be a thing if the compiler can see the entire program in the first place. It would still have to save some build products so that incremental builds are possible, but not as object files, the compiler would need metadata to know of the origin and dependencies of all the generated code so it would be able to replace the right things.

External libs are most often linked dynamically these days, so they don't need to be built from source, so eliminating the linker doesn't pose a problem for non-open source dependencies. And if that's not enough letting the compiler also consume object files could provide for legacy use cases or edge cases where you must statically link to a binary.

dapperdrake

SQLite3 just concatenation everything together into one compilation unit. So, more people have been using this than probably know about it.

https://sqlite.org/amalgamation.html

jdxcode

I totally see the point of this, but still, you have to admit this is pretty funny:

> Developers sometimes experience trouble debugging the quarter-million line amalgamation source file because some debuggers are only able to handle source code line numbers less than 32,768 [...] To circumvent this limitation, the amalgamation is also available in a split form, consisting of files "sqlite3-1.c", "sqlite3-2.c", and so forth, where each file is less than 32,768 lines in length

yellowapple

That would imply that such debuggers are storing line numbers as not just 16-bit numbers (which is probably sensible, considering that source files longer than that are uncommon), but as signed 16-bit numbers. I can't fathom a situation where line numbers would ever be negative.

dapperdrake

*concatenates

Apologies for the typo. And now it is too late to edit the post.

almostgotcaught

[flagged]

nn3

> Secondly, if you think any compiler is meaningfully doing anything optimal ("whole program analysis") on a TU scale greater than say ~50kloc (ie ~10 files) relative to compiling individually you're dreaming.

That's wrong. gcc generates summaries of function properties and propagates those up and down the call tree, which for LTO is then built in a distributed way. It does much more than mere inlining, including advanced analyses like points-to analysis.

https://gcc.gnu.org/onlinedocs/gccint/IPA.html https://gcc.gnu.org/onlinedocs/gccint/IPA-passes.html

It scales to millions of lines of code because it's partitioned.

jcalvinowens

> if you think any compiler is meaningfully doing anything optimal ("whole program analysis") on a TU scale greater than say ~50kloc (ie ~10 files) relative to compiling individually you're dreaming.

You can build the Linux kernel with LTO: simply diff the LTO vs non-LTO outputs and it will be obvious you're wrong.

dapperdrake

SQLite3 may be a counter-example:

https://sqlite.org/amalgamation.html

ComputerGuru

There’s been a lot of interest in faster linkers spurred by the adoption and popularity of rust.

Even modest statically linked rust binaries can take a couple of minutes in the link stage of compilation in release mode (using mold). It's not a rust-specific issue but an amalgam of (usually) strictly static linking, advanced link-time optimizations enabled by llvm like LTO and bolt, and a general dissatisfaction with compile times in the rust community. Rust's (clinically) strong relationship with (read: dependency on) LLVM makes it the most popular language where LLVM link-time magic has been most heavily and universally adopted; you could face these same issues with C++, but there they'd be chalked up to your toolchain rather than the language.

I’ve been eyeing wild for some time as I’m excited by the promise of an optimizing incremental linker, but to be frank, see zero incentive to even fiddle with it until it can actually, you know, link incrementally.

pjmlp

C++ can be rather faster to compile than Rust, because some compilers do have incremental compilation, and incremental linking.

Additionally, the acceptance of binary libraries across the C and C++ ecosystem means that more often than not you only need to care about compiling your own application, and not the world, every time you clone a repo or switch development branch.

yosefk

compiling crates in parallel is fast on a good machine. OTOH managing C++ dependencies without a standard build & packaging system is a nightmare

pjmlp

Imagine if Linus needed a gaming rig to develop Linux...

And he also did not have cargo at his disposal.

No need to point out it is C instead, as they share common roots, including place of birth.

Or how we used to compile C++ between 1986 and the 2000s, mostly on single-core machines, developing games, GUIs and distributed computing applications in CORBA and DCOM.

sitkack

I solved this by using Wasm. Your outer application shell calls into Wasm business logic, only the inner logic needs to get recompiled, the outer app shell doesn't even need to restart.

ComputerGuru

I don’t think I can use wasm with simd or syscalls, which is the bulk of my work.

sitkack

I haven't used SIMD in Rust (or Wasm). Syscalls can be passed into the Wasm env.

https://doc.rust-lang.org/core/arch/wasm32/index.html#simd

https://nickb.dev/blog/authoring-a-simd-enhanced-wasm-librar...

Could definitely be more effort than it is worth just to speed up compilation.

SkiFire13

How is this different than dynamically linking the business logic library?

sitkack

Very similar, but Wasm has additional safety properties and affordances. I am trying to get away from dynamic libs as an app extension mechanism. It is especially nice when application extension is open to end users, they won't be able to crash your application shell.

https://wasmtime.dev/ https://github.com/bytecodealliance/wasmtime

ajb

2008: gold, a new linker intended to be faster than GNU ld

2015(?): lld, a drop-in replacement linker, at least 2x as fast as gold

2021: mold, a new linker, several times faster than lld

2025: wild, a new linker...

o11c

Rarely mentioned: all of these occur at the cost of not implementing a very large number of useful features used by real-world programs.

account42

Like ICF? Wait no, everyone supports that except GNU ld.

einpoklum

Can you name a few of these features, for those of us who don't know much about linking beyond the fact that it takes compiled object files and makes an executable (and maybe does LTO)?

kibwen

Presumably they're talking about linker scripts, and IMO if you're one of the vanishingly rare people who absolutely needs a linker script for some reason, then, firstly, my condolences, and secondly, given that 99.999% percent of users never need linker scripts, and given how much complexity and fragility their support adds to linker codebases, I'm perfectly happy to say that the rest of us can happily use fast and simple linkers that don't support linker scripts, and the other poor souls can keep using ld.

wolfd

I’m not sure if you’re intending to leave a negative or positive remark, or just a brief history, but the fact that people are still managing to squeeze better performance into linkers is very encouraging to me.

ajb

Certainly no intention to be negative. Not having run the numbers, I don't know if the older ones got slower over time due to more features, or if the new ones are squeezing out new performance gains. I guess it's also partly that big codebases scaled up so much over this period that there are gains to be had that weren't interesting before.

wolfd

Good question, I always wonder the same thing. https://www.phoronix.com/news/Mold-Linker-2024-Performance seems to show that the newer linkers still outperform their predecessors, even after maturing. But of course this doesn't show the full picture.

cbmuser

Gold is slated for removal from binutils for version 2.44.0, so it's officially dead.

saagarjha

Where is the effort going now? lld?

dundarious

For windows, there is also [The RAD Linker](https://github.com/EpicGamesExt/raddebugger?tab=readme-ov-fi...) though quite early days.

fuzztester

Related, and a good one, though old:

The book Linkers and Loaders by John Levine.

Last book in the list here:

https://www.johnlevine.com/books.phtml

I had read it some years ago, and found it quite interesting.

It's a standard one in the field.

He has also written some other popular computer books (see link above - pun not intended, but noticed).

shmerl

That looks promising. It's in Rust to begin with, with the goal of being fast and supporting incremental linking.

To use it with Rust, this can probably also work using gcc as the linker driver.

In project's .cargo/config.toml:

    [target.x86_64-unknown-linux-gnu]
    rustflags = ["-C", "link-arg=-fuse-ld=wild"]
Side note, but why does Rust need to plug into gcc or clang for that? Some missing functionality?

davidlattimore

Unfortunately gcc doesn't accept arbitrary linkers via the `-fuse-ld=` flag. The only linkers it accepts are bfd, gold, lld and mold. It is possible to use gcc to invoke wild as the linker, but currently to do that, you need to create a directory containing the wild linker and rename the binary (or a symlink) to "ld", then pass `-B/path/to/directory/containing/wild` to gcc.

As for why Rust uses gcc or clang to invoke the linker rather than invoking the linker directly - it's because the C compiler knows what linker flags are needed on the current platform in order to link against libc and the C runtime. Things like `Scrt1.o`, `crti.o`, `crtbeginS.o`, `crtendS.o` and `crtn.o`.

inkyoto

> It is possible to use gcc to invoke wild as the linker, but currently to do that, you need to create a directory containing the wild linker and rename the binary (or a symlink) to "ld", then pass `-B/path/to/directory/containing/wild` to gcc.

Instead of renaming and passing -B in, you can also modify the GCC «spec» file's «*linker» section to make it point to a linker of your choice, i.e.

  *linker:
  /scratch/bin/wild %{wild_options}
Linking options can be amended in the «*link_command» section.

It is possible to either modify the default «spec» file («gcc -dumpspecs») or pass your own along via «-specs=my-specs-file». I have found custom «spec» files to be very useful in the past.

The «spec» file format is documented at https://gcc.gnu.org/onlinedocs/gcc/Spec-Files.html

shmerl

Ah, good to know, thanks!

Maybe it's worth filing a feature request for gcc to have parity with clang for arbitrary linkers?

sedatk

Because the Rust compiler generates IR bytecode, not machine code.

shmerl

That's the reason to use LLVM as part of the Rust compiler toolchain, not to use gcc or clang as the linker driver?

sedatk

You're right, @davidlattimore seems to have answered that.

KerrAvon

I'm curious: what's the theory behind why this would be faster than mold in the non-incremental case? "Because Rust" is a fine explanation for a bunch of things, but doesn't explain expected performance benefits.

"Because there's low hanging concurrent fruit that Rust can help us get?" would be interesting but that's not explicitly stated or even implied.

davidlattimore

I'm not actually sure, mostly because I'm not really familiar with the Mold codebase. One clue is that I've heard that Mold gets about a 10% speedup by using a faster allocator (mimalloc). I've tried using mimalloc with Wild and didn't get any measurable speedup. This suggests to me that Mold is probably making heavier use of the allocator than Wild is. With Wild, I've certainly tried to optimise the number of heap allocations.

But in general, I'd guess just different design decisions. As for how this might be related to Rust - I'm certain that were Wild ported from Rust to C or C++, that it would perform very similarly. However, code patterns that are fine in Rust due to the borrow checker, would be footguns in languages like C or C++, so maintaining that code could be tricky. Certainly when I've coded in C++ in the past, I've found myself coding more defensively, even at a small performance cost, whereas with Rust, I'm able to be a lot bolder because I know the compiler has got my back.

menaerus

> Mold gets about a 10% speedup by using a faster allocator (mimalloc). I've tried using mimalloc with Wild and didn't get any measurable speedup

Perhaps it is worth repeating the experiment on heavy multi-million-line codebases, with jemalloc or mimalloc.

einpoklum

Rust is a perfectly fine language, and there's no reason you should not be able to implement fast incremental linking using Rust, so - I wish you success in doing that.

... however...

> code patterns that are fine in Rust due to the borrow checker, would be footguns in languages like C or C++,

That "dig" is probably not true. Or rather, your very conflation of C and C++ suggests that you are talking about the kind of code which would not be used in modern C++ of the past decade-or-more. While one _can_ write footguns in C++ easily, one can also very easily choose not to do so - especially when writing a new project.

panstromek

Tell me you don't have rust experience without telling me you don't have rust experience.

bjourne

What a coincidence. :) Just an hour ago I compared the performance of wild, mold, and (plain-old) ld on a C project I'm working on. 23 kloc and 172 files. Takes about 23.4 s of user time to compile with gcc+ld, 22.5 s with gcc+mold, and 21.8 s with gcc+wild. Which leads me to believe that link time shouldn't be that much of a problem for well-structured projects.

davidlattimore

It sounds like you're building from scratch. In that case, the majority of the time will be spent compiling code, not linking. The case for fast linkers is strongest when doing iterative development. i.e. when making small changes to your code then rebuilding and running the result. With a small change, there's generally very little work for the compiler to do, but linking is still done from scratch, so tends to dominate.

commandersaki

Yep, in my case I have 11 * 450MB executables that take about 8 minutes to compile and link. But for small iterative programming cycles using the standard linker with g++, it takes about 30 seconds to link (if I remember correctly). I tried mold and shaved 25% off that time, which didn't seem worth the change overall; I attempted wild a year ago but ran into issues, and will revisit at some point.

menaerus

Exactly. But also even in build-from-scratch use-case when there's a multitude of binaries to be built - think 10s or 100s of (unit, integration, performance) test binaries or utilities that come along with the main release binary etc. Faster linkers giving even a modest 10% speedup per binary will quickly accumulate and will obviously scale much better.

bjourne

True, I didn't think of that. However, the root cause here perhaps is fat binaries? My preferred development flow consists of many small self-contained dynamically linked libraries that executables link to. Then you only have to relink changed libraries and not executables that depend on them.

iknowstuff

So is this your preferred flow because of slow linkers?

wolf550e

The linker time is important when building something like Chrome, not small projects.

searealist

Fast linkers are mostly useful in incremental compilation scenarios to cut down on the edit cycle.

ndesaulniers

How about ld.lld?

1vuio0pswjnm7

"These benchmarks were run on David Lattimore's laptop (2020 model System76 Lemur pro), which has 4 cores (8 threads) and 42 GB of RAM."

https://news.ycombinator.com/item?id=33330499

NB. This is not to suggest wild is bloated. The issue, if any, is the software being developed with it and the computers of those who might use such software.

1vuio0pswjnm7

https://news.ycombinator.com/item?id=42896619

"... I have 16 GB of ram, I can't upgrade it..."

klibertp

Half in jest, but I'd think anybody coding in Rust already has 32GB of RAM...

(Personally, upgrading my laptop to 64GB at the expense of literally everything else was almost a great decision. Almost, because I really should have splurged on RAM and display instead of going all-in on RAM. The only downside is that cleaning up open tabs once a week became a chore, taking up the whole evening.)

sylware

The real issue is actually runtime ELF (and PE) which are obsolete on modern hardware architecture.

bmacho

What do you mean by this?

sylware

ELF(COFF) should now be only an assembler output format on modern large hardware architectures.

On modern large hardware architectures, for executable files/dynamic libraries, ELF(PE[+]) has overkill complexity.

I am personally using an executable file format of my own that I wrap into an "ELF capsule" on the Linux kernel. With position-independent code, you kind of only need memory-mapped segments (which dynamic libraries are, in this very format). I have two very simple partial linkers I wrote in plain and simple C, one for risc-v assembly, one for x86_64 assembly, which allow me to link into such an executable file some simple ELF object files (from binutils GAS).

There is no more centralized "ELF loader".

Of course, there are tradeoffs, but it's a billion times worth it given the acute simplicity of the format.

(I even have a little vm which allows me to interpret simple risc-v binaries on x86_64).

o11c

You're giving up a lot if you stop using a format that supports multiple mapping, relro, dynamic relocations, ...

ndesaulniers

Can it link the Linux kernel yet? Was a useful milestone for LLD.

davidlattimore

Not yet. The Linux kernel uses linker scripts, which Wild doesn't yet support. I'd like to add support for linker scripts at some point, but it's some way down the priority list.

oguz-ismail

Does it at least support -Ttext, -Tdata, etc.?