
“Why is the Rust compiler so slow?”

taylorallred

So there's this guy you may have heard of called Ryan Fleury who makes the RAD debugger for Epic. The whole thing is made with 278k lines of C and is built as a unity build (all the code is included into one file that is compiled as a single translation unit). On a decent Windows machine it takes 1.5 seconds to do a clean compile. This seems like a clear case study that compilation can be incredibly fast, and it makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.

lordofgibbons

The more your compiler does for you at build time, the longer it will take to build, it's that simple.

Go has sub-second build times even on massive codebases. Why? Because it doesn't do a lot at build time. It has a simple module system, a (relatively) simple type system, and leaves a whole bunch of stuff to be handled by the GC at runtime. It's great for its intended use case.

When you have things like macros, advanced type systems, and robustness guarantees at build time, then you have to pay for that.

duped

I think this is mostly a myth. If you look at Rust compiler benchmarks, while typechecking isn't _free_, it's also not the bottleneck.

A big reason that amalgamation builds of C and C++ can absolutely fly is that they aren't reparsing headers, and they generate exactly one object file, so the linker has almost no work to do.

Once you add static linking to the toolchain (in all of its forms) things get really fucking slow.

Codegen is also a problem. Rust tends to generate a lot more code than C or C++, so even once the compiler is done with most of its typechecking work, the backend and assembler have a lot of code to chew through.

treyd

Not only does it generate more code, the initially generated code before optimizations is also often worse. For example, heavy use of iterators means a ton of generics being instantiated and a ton of call code for setting up and tearing down call frames. This gets heavily inlined and flattened out, so in the end it's extremely well-optimized, but it's a lot of work for the compiler. Writing it all out classically with for loops and ifs is possible, but it's harder to read.
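To make the point concrete, here's a minimal sketch (my own example, not from the thread) of the two styles being compared. The iterator version instantiates generic adapter types and closures that LLVM must inline and flatten away; the loop version hands the backend roughly the code it will end up with anyway:

```rust
// Iterator style: instantiates Filter, Map, and closure types that the
// optimizer must inline away before the final tight loop emerges.
fn sum_squares_iter(xs: &[i64]) -> i64 {
    xs.iter()
        .filter(|&&x| x % 2 == 0)
        .map(|&x| x * x)
        .sum()
}

// Classic style: the backend sees more or less the final code directly,
// at the cost of being more verbose and (arguably) harder to read.
fn sum_squares_loop(xs: &[i64]) -> i64 {
    let mut total = 0;
    for &x in xs {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn main() {
    let xs = [1, 2, 3, 4];
    // Both compile down to equivalent optimized code; they differ in how
    // much work the compiler does to get there.
    assert_eq!(sum_squares_iter(&xs), sum_squares_loop(&xs));
    println!("{}", sum_squares_iter(&xs)); // 2*2 + 4*4 = 20
}
```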

fingerlocks

The Swift compiler is definitely bottlenecked by type checking. For example, as a language requirement, generic types are left more or less intact after compilation. They are type checked independently of how they are used. This is unlike C++ templates, which effectively copy-paste the resolved template body for every instantiation.

This has tradeoffs: increased ABI stability at the cost of longer compile times.

ChadNauseam

That the type system is responsible for Rust's slow builds is a common and enduring myth. `cargo check` (which just does typechecking) is actually usually pretty fast. Most of the build time is spent in the code generation phase. Some macros do cause problems as you mention, since the code that contains the macro must be compiled before the code that uses it, so they reduce parallelism.

tedunangst

I just ran cargo check on nushell, and it took a minute and a half. I didn't time how long it took to compile, maybe five minutes earlier today? So I would call it faster, but still not fast.

I was all excited to conduct the "cargo check; mrustc; cc" is 100x faster experiment, but I think at best, the multiple is going to be pretty small.

rstuart4133

> Most of the build time is spent in the code generation phase.

I can believe that, but even so, it's caused by the type system monomorphising everything. When you use qsort from libc, you are using pre-compiled code from a library. When you use slice::sort(), you get custom machine code compiled to suit your application. Thus there is a lot more code generation going on, and that is caused by the tradeoffs they've made with the type system.

Rust's approach gives you all sorts of advantages, like fast code and strong compile-time type checking. But it comes with warts too, like fat binaries, and a bug in slice::sort() can't be fixed by just shipping a new std dynamic library, because there is no such library. It's been recompiled, just for you.

FWIW, modern C++ (like boost) that places everything in templates in .h files suffers from the same problem. If Swift suffers from it too, I'd wager it's the same cause.
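The monomorphisation point above can be sketched in a few lines (an illustrative example of mine, not from the thread). slice::sort is generic over the element type, so every distinct type you sort produces its own specialized copy of the sort code, where C's qsort is one precompiled function taking a comparator through a function pointer:

```rust
fn main() {
    let mut ints = vec![3, 1, 2];
    let mut words = vec!["b".to_string(), "a".to_string()];
    let mut pairs = vec![(2, 'b'), (1, 'a')];

    ints.sort();  // one instantiation of the sort, specialized for i32
    words.sort(); // a second instantiation, specialized for String
    pairs.sort(); // a third, for (i32, char) -- and so on per type

    // Each of those is custom machine code generated for this binary;
    // there is no shared, precompiled sort routine being linked in.
    assert_eq!(ints, [1, 2, 3]);
    assert_eq!(pairs, [(1, 'a'), (2, 'b')]);
}
```

The upside is that each copy inlines the comparison and can be optimized for the concrete type; the downside is three sorts' worth of codegen instead of one.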

cogman10

Yes but I'd also add that Go specifically does not optimize well.

The compiler is optimized for compilation speed, not runtime performance. Generally speaking, it does well enough, especially because its use case is often applications where "good enough" is good enough (i.e., IO-heavy applications).

You can see that with "gccgo". Slower to compile, faster to run.

cherryteastain

Is gccgo really faster? Last time I checked, it looked abandoned (stuck at Go 1.18, no generics support) and was not really faster than the "actual" compiler.

Zardoz84

D compilers do more than any C++ compiler (metaprogramming, a better template system, and compile-time execution) and are hugely faster. Language syntax design has a role here.

dhosek

Because Rust and Swift are doing much more work than a C compiler would? The analysis necessary for the borrow checker is not free, likewise with a lot of other compile-time checks in both languages. C can be fast because it effectively does no compile-time checking of things beyond basic syntax, so you can call foo(char) with an int and other unholy things.

steveklabnik

The borrow checker is usually a blip on the overall graph of compilation time.

The overall principle is sound though: it's true that doing some work is more than doing no work. But the borrow checker and other safety checks are not the root of compile time performance in Rust.

kimixa

While the borrow checker is one big difference, it's certainly not the only thing the rust compiler offers on top of C that takes more work.

Stuff like inserting bounds checks puts more work on the optimization passes and codegen backend, as they simply have to deal with more instructions. And that then puts more symbols and larger sections in the input to the linker, slowing that down. Even if the frontend "proves" a check unnecessary, that analysis isn't free. Many of those features are related to "safety" due to the goals of the language. I doubt the syntax itself makes much of a difference, as the parser isn't normally high on the profiled times either.

Generally it provides stricter checks that are normally punted to a linter tool in the C/C++ world, and nobody has accused clang-tidy of being fast :P
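The bounds-check point is easy to see in a small sketch (my own example). Every slice index carries an implicit check that either panics or is proven away, and either way the backend sees more IR than the equivalent C:

```rust
// Indexed loop: each xs[i] emits a bounds check. The optimizer can often
// eliminate it (here, i < xs.len() is provable), but proving that is
// extra work at compile time, not zero cost.
fn sum_indexed(xs: &[u32]) -> u32 {
    let mut total = 0;
    for i in 0..xs.len() {
        total += xs[i]; // implicit bounds check at this index
    }
    total
}

// Iterator loop: no per-element index, so no per-element bounds check
// for the backend to reason about in the first place.
fn sum_iterated(xs: &[u32]) -> u32 {
    xs.iter().sum()
}

fn main() {
    let xs = [1, 2, 3];
    assert_eq!(sum_indexed(&xs), 6);
    assert_eq!(sum_iterated(&xs), 6);
}
```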

taylorallred

These languages do more at compile time, yes. However, I learned from Ryan's Discord server that he did a unity build in a C++ codebase and got similar results (just a few seconds slower than the C code). Also, you could see in the article that most of the time was being spent in LLVM and linking. With a unity build, you nearly cut out the link step entirely. Rust and Swift do some sophisticated things (Hindley-Milner inference, generics, etc.) but I have my doubts that those things cause the most slowdown.

drivebyhooting

That’s not a good example. A call like foo(int) is analyzed by the compiler and a type conversion is inserted. The language spec might be bad, but this isn’t letting the compiler cut corners.

jvanderbot

If you'd like the rust compiler to operate quickly:

* Make no nested types; these slow compile times a lot

* Include no crates, or ones that emphasize compiler speed

C is still v. fast though. That's why I love it (and Rust).

Thiez

This explanation gets repeated over and over again in discussions about the speed of the Rust compiler, but apart from rare pathological cases, the majority of time in a release build is not spent doing compile-time checks, but in LLVM. Rust has zero-cost abstractions, but the zero cost refers to runtime; sadly there's a lot of junk generated at compile time that LLVM has to work to remove. Which it does, very well, but at the cost of slower compilation.

vbezhenar

Is it possible to generate less junk? Sounds like the compiler developers took some shortcuts, which could be improved over time.

tptacek

I don't think it's interesting to observe that C code can be compiled quickly (so can Go, a language designed specifically for fast compilation). It's not a problem intrinsic to compilation; the interesting hard problem is to make Rust's semantics compile quickly. This is a FAQ on the Rust website.

vbezhenar

I encountered one project in the 2000s with a few dozen KLoC of C++. It compiled in a fraction of a second on an old computer. My hello-world code with Boost took a few seconds to compile. So it's not just about the language; it's about structuring your code and avoiding features with heavy compilation cost. I'm pretty sure you could write Doom with C macros and it wouldn't compile fast. I'm also pretty sure you can write Rust code in a way that compiles very fast.

taylorallred

I'd be very interested to see a list of features/patterns and the cost that they incur on the compiler. Ideally, people should be able to use the whole language without having to wait so long for the result.

vbezhenar

So there are a few distinctive patterns I observed in that project. Please note that many of these patterns are considered anti-patterns by many people, so I don't necessarily suggest using them.

1. Use pointers, and do not include the header file for a class if you only need a pointer to that class. I think that's a pretty established pattern in C++. So if you want to declare a pointer to a class in your header, you just write `class SomeClass;` instead of `#include "SomeClass.hpp"`.

2. Do not use the STL or IOStreams. That project used only libc and the POSIX API. I know the author really hated the STL and considered its inclusion in the standard a huge mistake.

3. Avoid templates unless absolutely necessary. Templates force you to write your code in header files, so it'll be parsed again for every include, compiled into multiple copies, etc. And even when you use templates, try to split the class into generic and non-generic parts, so some code can be moved from header to source. Generally prefer run-time polymorphism to compile-time generic polymorphism.

kccqzy

Templates as one single feature can be hugely variable. Its effect on compilation time can be unmeasurable. Or you can easily write a few dozen lines that will take hours to compile.

ceronman

I bet that if you take those 278k lines of code and rewrite them in simple Rust, without using generics, or macros, and using a single crate, without dependencies, you could achieve very similar compile times. The Rust compiler can be very fast if the code is simple. It's when you have dependencies and heavy abstractions (macros, generics, traits, deep dependency trees) that things become slow.
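One concrete way to write the "simple Rust" described above is to trade monomorphized generics for dynamic dispatch. A minimal sketch (function names are mine, purely illustrative):

```rust
use std::fmt::Display;

// Generic version: the compiler emits one specialized copy of this
// function body per concrete type it's called with (here: i32 and &str),
// multiplying codegen work as call sites accumulate.
fn render_generic<T: Display>(value: T) -> String {
    format!("value = {value}")
}

// dyn version: exactly one copy is compiled, dispatching through a
// vtable at runtime. Slightly slower calls, noticeably less codegen.
fn render_dyn(value: &dyn Display) -> String {
    format!("value = {value}")
}

fn main() {
    assert_eq!(render_generic(42), "value = 42");
    assert_eq!(render_generic("hi"), "value = hi");
    assert_eq!(render_dyn(&42), "value = 42");
    assert_eq!(render_dyn(&"hi"), "value = hi");
}
```

This is the same tradeoff the qsort-vs-slice::sort comparison elsewhere in the thread points at: shared code that dispatches at runtime versus specialized code generated per type.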

taylorallred

I'm curious about that point you made about dependencies. This Rust project (https://github.com/microsoft/edit) is made with essentially no dependencies, is 17,426 lines of code, and on an M4 Max it compiles in 1.83s debug and 5.40s release. The code seems pretty simple as well. Edit: Note also that this is 10k more lines than the OP's project. This certainly makes those deps suspicious.

MindSpunk

The 'essentially no dependencies' claim isn't entirely true. It depends on the 'windows' crate, which is Microsoft's auto-generated Win32 bindings. The 'windows' crate is huge, and leads to hundreds of thousands of LoC being pulled in.

There are some other dependencies in there that are only used when building for test/benchmarking, like serde, zstd, and criterion. You would need to be certain you're building only the library and not the test harness to be sure those aren't being built too.

90s_dev

I can't help but think the borrow checker alone would slow this down by at least 1 or 2 orders of magnitude.

steveklabnik

Your intuition would be wrong: the borrow checker does not take much time at all.

tomjakubowski

The borrow checker is really not that expensive. On a random example, a release build of the regex crate, I see <1% of time spent in borrowck. >80% is spent in codegen and LLVM.

FridgeSeal

Again, as has been often repeated and backed up with data: the borrow checker is a tiny fraction of a Rust app's build time; the biggest chunk of time is spent in LLVM.

Aurornis

> makes me wonder why other languages like Rust and Swift can't just do something similar to achieve similar speeds.

One of the primary features of Rust is the extensive compile-time checking. Monomorphization is also a complex operation, which is not exclusive to Rust.

C compile times should be very fast because it's a relatively low-level language.

On the grand scale of programming languages and their compile-time complexity, C code is closer to assembly language than modern languages like Rust or Swift.

js2

"Just". Probably because there's a lot of complexity you're waving away. Almost nothing is ever simple as "just".

pixelpoet

At a previous company, we had a rule: whoever says "just" gets to implement it :)

forrestthewoods

I wanted to ban “just” but your rule is better. Brilliant.

taylorallred

That "just" was too flippant. My bad. What I meant to convey is "hey, there's some fast compiling going on here and it wasn't that hard to pull off. Can we at least take a look at why that is and maybe do the same thing?".

steveklabnik

> "hey, there's some fast compiling going on here and it wasn't that hard to pull off. Can we at least take a look at why that is and maybe do the same thing?".

Do you really believe that nobody over the course of Rust's lifetime has ever taken a look at C compilers and thought about if techniques they use could apply to the Rust compiler?

maxk42

Rust is doing a lot more under the hood. C doesn't track variable lifetimes, ownership, types, generics, handle dependency management, or handle compile-time execution (beyond the limited language that is the preprocessor). The Rust compiler also makes intelligent (scary intelligent!) suggestions when you've made a mistake: it needs a lot of context to be able to do that.

The rust compiler is actually pretty fast for all the work it's doing. It's just an absolutely insane amount of additional work. You shouldn't expect it to compile as fast as C.

rednafi

I’m glad that Go went the other way around: compilation speed over optimization.

For the kind of work I do — writing servers, networking, and glue code — fast compilation is absolutely paramount. At the same time, I want some type safety, but not the overly obnoxious kind that won’t let me sloppily prototype. Also, the GC helps. So I’ll gladly pay the price. Not having to deal with sigil soup is another plus point.

I guess Google’s years of experience led to the conclusion that, for software development to scale, a simple type system, GC, and wicked fast compilation speed are more important than raw runtime throughput and semantic correctness. Given the amount of networking and large-scale infrastructure software written in Go, I think they absolutely nailed it.

But of course there are places where GC can’t be tolerated or correctness matters more than development speed. But I don’t work in that arena and am quite happy with the tradeoffs that Go made.

ode

Is Go still in heavy use at Google these days?

galangalalgol

That is exactly what Go was meant for, and there is nothing better than picking the right tool for the job. The only footgun I have seen people run into is that parallelism with mutable shared state through channels can be subtly and exploitably wrong. I don't feel like most people use channels like that, though? I use Rust because that isn't the job I have. I usually have to cram slow algorithms into slower hardware, and the problems are usually almost, but not quite, embarrassingly parallel.

AndyKelley

My homepage takes 73ms to rebuild: 17ms to recompile the static site generator, then 56ms to run it.

    andy@bark ~/d/andrewkelley.me (master)> zig build --watch -fincremental
    Build Summary: 3/3 steps succeeded
    install success
    └─ run exe compile success 57ms MaxRSS:3M
       └─ compile exe compile Debug native success 331ms
    Build Summary: 3/3 steps succeeded
    install success
    └─ run exe compile success 56ms MaxRSS:3M
       └─ compile exe compile Debug native success 17ms
    watching 75 directories, 1 processes

whoisyc

Just like every submission about C/C++ gets a comment about how great Rust is, every submission about Rust gets a comment about how great Zig is. Like a clockwork.

Edit: apparently I am replying to the main Zig author? Language evangelism is by far the worst part of Rust and has likely stirred up more anti Rust sentiment than “converting” people to Rust. If you truly care for your language you should use whatever leverage you have to steer your community away from evangelism, not embrace it.

qualeed

Neat, I guess?

This comment would be a lot better if it engaged with the posted article, or really had any sort of insight beyond a single compile time metric. What do you want me to take away from your comment? Zig good and Rust bad?

kristoff_it

I think the most relevant thing is that building a simple website can (and should) take milliseconds, not minutes, and that -- quoting from the post:

> A brief note: 50 seconds is fine, actually!

50 seconds should actually not be considered fine.

qualeed

As you've just demonstrated, that point can be made without even mentioning Zig, let alone copy/pasting some compile time stuff with no other comment or context. Which is why I thought (well, hoped) there might be something more to it than just a dunk attempt.

Now we get all of this off-topic discussion about Zig. Which I guess is good for you Zig folk... But it's pretty off-putting for me.

whoisyc's comment is extremely on point. As the VP of community, I would really encourage thinking about what they said.

ww520

Nice. Didn't realize zig build has --watch and -fincremental added. I was mostly using "watchexec -e zig zig build" for recompile on file changes.

Graziano_M

New to 0.14.0!

nicoburns

My non-static Rust website (includes an actual webserver as well as a react-like framework for templating) takes 1.25s to do an incremental recompile with "cargo watch" (which is an external watcher that just kills the process and reruns "cargo run").

And it can be considerably faster if you use something like subsecond[0] (which does incremental linking and hotpatches the running binary). It's not quite as fast as Zig, but it's close.

However, if that 331ms build above is a clean (uncached) build then that's a lot faster than a clean build of my website which takes ~12s.

[0]: https://news.ycombinator.com/item?id=44369642

AndyKelley

The 331ms time is mostly uncached. In this case the build script was already cached (must be re-done if the build script is edited), and compiler_rt was already cached (must be done exactly once per target; almost never rebuilt).

nicoburns

Impressive!

taylorallred

@AndyKelley I'm super curious what you think the main factors are that make languages like Zig super fast at compiling where languages like Rust and Swift are quite slow. What's the key difference?

AndyKelley

Basically, not depending on LLVM or LLD. The above is only possible because we invested years into making our own x86_64 backend and our own linker. You can see all the people ridiculing this decision 2 years ago https://news.ycombinator.com/item?id=36529456

unclad5968

LLVM isn't a good scapegoat. A C application equivalent in size to a Rust or C++ application will compile an order of magnitude quicker, and they all use LLVM. I'm not a compiler expert, but it doesn't seem right to me that the only possible path to quick compilation for Zig was a custom backend.

zozbot234

The Rust folks have cranelift and wild BTW. There are alternatives to LLVM and LLD, even though they might not be as obvious to most users.

steveklabnik

I'm not Andrew, but Rust has made several language design decisions that make compiler performance difficult. Some aspects of compiler speed come down to that.

One major difference is the way each project considers compiler performance:

The Rust team has always cared to some degree about this. But, from my recollection of many RFCs, "how does this impact compiler performance" wasn't a first-class concern. And that also doesn't really speak to a lot of the features that were basically implemented before the RFC system existed. So while it's important, it's secondary to other things. And so while a bunch of hard-working people have put in a ton of work to improve performance, they also run up against these more fundamental limitations at the limit.

Andrew has pretty clearly made compiler performance a first-class concern, and that's affected language design decisions. Naturally this leads to a very performant compiler.

coolsunglasses

I'm also curious because I've (recently) compiled more or less identical programs in Zig and Rust and they took the same amount of time to compile. I'm guessing people are just making Zig programs with less code and fewer dependencies and not really comparing apples to apples.

kristoff_it

Zig is starting to migrate to custom backends for debug builds (instead of using LLVM) plus incremental compilation.

All Zig code is built in a single compilation unit and everything is compiled from scratch every time you change something, including all dependencies and all the parts of the stdlib that you use in your project.

So you've been comparing Zig rebuilds that do all the work every time with Rust rebuilds that cache all dependencies.

Once incremental is fully released you will see instant rebuilds.

vlovich123

Zig isn’t memory safe though right?

pixelpoet

It isn't a lot of things, but I would argue that its exceptionally (heh) good exception handling model / philosophy (making it good, required, and performant) is more important than memory safety, especially when a lot of performance-oriented / bit-banging Rust code just gets shoved into Unsafe blocks anyway. Even C/C++ can be made memory safe, cf. https://github.com/pizlonator/llvm-project-deluge

What I'm more interested to know is what the runtime performance tradeoff is like now; one really has to assume that it's slower than LLVM-generated code, otherwise that monumental achievement seems to have somehow been eclipsed in very short time, with much shorter compile times to boot.

vlovich123

> Even C/C++ can be made memory safe, cf. https://github.com/pizlonator/llvm-project-deluge

> Fil-C achieves this using a combination of concurrent garbage collection and invisible capabilities (each pointer in memory has a corresponding capability, not visible to the C address space)

With significant performance and memory overhead. That just isn't the same ballpark that Rust is playing in although hugely important if you want to bring forward performance insensitive C code into a more secure execution environment.

kristoff_it

How confident are you that memory safety (or lack thereof) is a significant variable in how fast a compiler is?

ummonk

Zig is less memory safe than Rust, but more than C/C++. Neither Zig nor Rust is fundamentally memory safe.

Ar-Curunir

What? Zig is definitively not memory-safe, while safe Rust is, by definition, memory-safe. Unsafe Rust is not memory-safe, but you generally don't need to have a lot of it around.

echelon

Zig is a small and simple language. It doesn't need a complicated compiler.

Rust is a large and robust language meant for serious systems programming. The scope of problems Rust addresses is large, and Rust seeks to be deployed to very large scale software problems.

These two are not the same and do not merit an apples to apples comparison.

edit: I made some changes to my phrasing. I described Zig as a "toy" language, which wasn't the right wording.

These languages are at different stages of maturity, have different levels of complexity, and have different customers. They shouldn't be measured against each other so superficially.

ummonk

This is an amusing argument to make in favor of Rust, since it's exactly the kind of dismissive statement that Ada proponents make about other languages including Rust.

steveklabnik

Come on now. This isn't acceptable behavior.

(EDIT: The parent has since edited this comment to contain more than just "zig bad rust good", but I still think the combative-ness and insulting tone at the time I made this comment isn't cool.)

echelon

> but I still think the combative-ness and insulting tone at the time I made this comment isn't cool

Respectfully, the parent only offers up a Zig compile time metric. That's it. That's the entire comment.

This HN post about Rust is now being dominated by a cheap shot Zig one liner humblebrag from the lead author of Zig.

I think this thread needs a little more nuance.

ahartmetz

That person seems to be confused. Installing a single, statically linked binary is clearly simpler than managing a container?!

jerf

Also strikes me as not fully understanding what exactly docker is doing. In reference to building everything in a docker image:

"Unfortunately, this will rebuild everything from scratch whenever there's any change."

In this situation, with only one person as the builder, with no need for CI or CD or whatever, there's nothing wrong with building locally with all the local conveniences and just slurping the result into a docker container. Double-check any settings that may accidentally add paths if the paths have anything that would bother you. (In my case it would merely reveal that, yes, someone with my username built it and they have a "src" directory... you can tell how worried I am about both those tidbits by the fact I just posted them publicly.)

It's good for CI/CD in a professional setting to ensure that you can build a project from a hard drive, a magnetic needle, and a monkey trained to scratch a minimal kernel onto it, and bootstrap from there, but personal projects don't need that.

linkage

Half the point of containerization is to have reproducible builds. You want a build environment that you can trust will be identical 100% of the time. Your host machine is not that. If you run `pacman -Syu`, you no longer have the same build environment as you did earlier.

If you now copy your binary to the container and it implicitly expects there to be a shared library in /usr/lib or wherever, it could blow up at runtime because of a library version mismatch.

scuff3d

Thank you! I got a couple minutes in and was confused as hell. There is no reason to do the builds in the container.

Even at work, I have a few projects where we had to build a Java uber jar (all the dependencies bundled into one big fat jar), and when we need it containerized we just copy the jar in.

I honestly don't see much reason to do builds in the container unless there is some limitation in my CI/CD pipeline where I don't have access to the necessary build tools.

vorgol

Exactly. I immediately thought of the grug brain dev when I read that.

hu3

From the article, the goal was not to simplify, but rather to modernize:

> So instead, I'd like to switch to deploying my website with containers (be it Docker, Kubernetes, or otherwise), matching the vast majority of software deployed any time in the last decade.

Containers offer many benefits. To name some: process isolation, increased security, standardized logging and mature horizontal scalability.

adastra22

So put the binary in the container. Why does it have to be compiled within the container?

hu3

That is what they are doing. It's a 2 stage Dockerfile.

First stage compiles the code. This is good for isolation and reproducibility.

Second stage is a lightweight container to run the compiled binary.

Why is the author being attacked (by multiple comments) for not making things simpler, when that was never claimed as the goal? They are modernizing it.

Containers are good practice for CI/CD anyway.

dwattttt

Mightily resisting the urge to be flippant, but all of those benefits were achieved before Docker.

Docker is a (the, in some areas) modern way to do it, but far from the only way.

a3w

Increased security compared to bare hardware, lower than VMs. Also, lower than Jails and RKT (Rocket) which seems to be dead.

eeZah7Ux

> process isolation, increased security

no, that's sandboxing.

MangoToupe

I don't really consider it to be slow at all. It seems about as performant as any other language of this complexity, and it's far faster than the 15-minute C++ and Scala build times I'd place in the same category.

randomNumber7

Since C++ templates are Turing-complete, it's pointless to complain about the compile times without considering the actual code :)

namibj

Incremental compilation good. If you want, freeze the initial incremental cache after a single fresh build to use for building/deploying updates, to mitigate the risk of intermediate states gradually corrupting the cache.

Works great with docker: upon new compiler version or major website update, rebuild the layer with the incremental cache; otherwise just run from the snapshot and build newest website update version/state, and upload/deploy the resulting static binary. Just set so that mere code changes won't force rebuilding the layer that caches/materializes the fresh clean build's incremental compilation cache.

null

[deleted]

ozgrakkurt

Rust compiler is very very fast but language has too many features.

The slowness is because everyone has to write code with generics and macros in Java Enterprise style in order to show they are smart with Rust.

This is really sad to see but most libraries abuse codegen features really hard.

You have to write a lot of things manually if you want fast compilation in rust.

Compilation speed of code just doesn’t seem to be a priority in general with the community.

aquariusDue

Yeah, for application code in my experience the more I stick to the dumb way to do it the less I fight the borrow checker along with fewer trait issues.

Refactoring seems to take about the same time too so no loss on that front. After all is said and done I'm just left with various logic bugs to fix which is par for the course (at least for me) and a sense of wondering if I actually did everything properly.

I suppose maybe two years from now we'll have people suggesting avoiding generics and tempering macro usage. These days most people have heard the advice about not stressing over cloning and unwrapping (though expect is much better imo) on the first pass, more or less.

Something something shiny tool syndrome?

adastra22

As a former C++ developer, claims that rust compilation is slow leave me scratching my head.

eikenberry

Which is one of the reasons why Rust is considered to be targeting C++'s developers. C++ devs already have the Stockholm syndrome needed to tolerate the tooling.

MyOutfitIsVague

Rust's compilation is slow, but the tooling is just about the best that any programming language has.

GuB-42

How good is the debugger? "edit and continue"? Hot reload? Full IDE?

I don't know enough Rust, but I find these aspects are seriously lacking in C++ on Linux, and it is one of the few things I think Windows has it better for developers. Is Rust better?

galangalalgol

Also, modern C++ with value semantics is more functional than many of the other languages people might come to Rust from, and that keeps the borrow checker from being as annoying. If people are used to making webs of stateful classes with references to each other, the borrow checker is horrific, but that is because that design pattern is horrific if you multithread it.

zozbot234

> Stockholm syndrome

A.k.a. "Remember the Vasa!" https://news.ycombinator.com/item?id=17172057

MobiusHorizons

Things can still be slow in absolute terms without being as slow as C++. The issues with compiling C++ are incredibly well understood and documented. It is one of the worst languages on earth for compile times. Rust doesn’t share those language level issues, so the expectations are understandably higher.

const_cast

Rust shares pretty much every language-level issue C++ has with compile times, no? Monomorphization explosion, Turing-complete compile-time macros, a complex type system.

int_19h

But it does share some of those issues. Specifically, while Rust generics aren't as unstructured as C++ templates, the main burden is actually from compiling all those tiny instantiations, and Rust monomorphization has the same exact problem responsible for the bulk of its compile times.

shadowgovt

I thoroughly enjoy all the work C does on encapsulation, reducing compilation to separate compile and link steps... only to have C++ come along and undo almost all of it through the simple expedient of requiring templates for everything.

Oops, changed one template in one header. And that impacts.... 98% of my code.

kenoath69

Where is Cranelift mentioned

My 2c on this: I nearly ditched Rust for game development due to the compile times. In digging, it turned out that LLVM is very slow regardless of opt level. Indeed, it's what the Jai devs have been saying.

So Cranelift might be relevant for OP, I will shill it endlessly, took my game from 16 seconds to 4 seconds. Incredible work Cranelift team.

BreakfastB0b

I participated in the most recent Bevy game jam and the community has a new tool that came out of Dioxus called subsecond which as the name suggests provides sub-second hot reloading of systems. It made prototyping very pleasant. Especially when iterating on UI.

https://github.com/TheBevyFlock/bevy_simple_subsecond_system

jiehong

I think that’s what zig team is also doing to allow very fast build times: remove LLVM.

norman784

Yes, Zig author commented[0] that a while ago

[0] https://news.ycombinator.com/item?id=44390972

norman784

Nice, I checked a while ago and there was no support for macOS aarch64, but it seems that now it is supported.

lll-o-lll

Wait. You were going to ditch rust because of 16 second build times?

metaltyphoon

Over time that adds up when your coding consists of REPL like workflow.

sarchertech

16 seconds is infuriating for something that needs to be manually tested like does this jump feel too floaty.

But it’s also probable that 16 seconds was fairly early in development and it would get much worse from there.

smcleod

I've got to say, when I come across an open source project and realise it's in Rust, I flinch a bit, knowing how incredibly slow the build process is. It's certainly been one of the deterrents to learning it.

duped

A lot of people are replying to the title instead of the article.

> To get your Rust program in a container, the typical approach you might find would be something like:

If you have `cargo build --target x86_64-unknown-linux-musl` in your build process you do not need to do this anywhere in your Dockerfile. You should compile and copy into /sbin or something.

If you really want to build in a Docker image, I would suggest using `cargo --target-dir=/target ...`, then running with `docker run --mount type=bind,...` and copying out of the bind mount into /bin or wherever.

aappleby

you had a functional and minimal deployment process (compile copy restart) and now you have...