Rust compiler performance

(kobzol.github.io)

Comments

jtrueb 12 June 2025
A true champion

> when I started contributing to Rust back in 2021, my primary interest was compiler performance. So I started doing some optimization work. Then I noticed that the compiler benchmark suite could use some maintenance, so I started working on that. Then I noticed that we don’t compile the compiler itself with as many optimizations as we could, so I started working on adding support for LTO/PGO/BOLT, which further led to improving our CI infrastructure. Then I noticed that we wait quite a long time for our CI workflows, and started optimizing them. Then I started running the Rust Annual Survey, then our GSoC program, then improving our bots, then…

norir 12 June 2025
Compiler performance must be considered up front in language design. It is nearly impossible to fix once the language reaches a certain size without it being a priority. I recently saw here the observation that one can often get a 2x performance improvement through optimization, but 10x requires redesigning the architecture.

Rust can likely never be rearchitected without causing a disastrous schism in the community, so it seems probable that compilation will always be slow.

jplusequalt 12 June 2025
Having worked on large-scale C++ codebases, and thus being used to long compilation times, I'm surprised that this is the hill many C++ devs would die on with regard to their dislike of Rust.
kristoff_it 23 hours ago
> Speaking of DoD, an additional thing to consider is the maintainability of the compiler codebase. Imagine that we swung our magic wand again, and rewrote everything over the night using DoD, SIMD vectorization, hand-rolled assembly, etc. It would (possibly) be way faster, yay! However, we do not only care about immediate performance, but also about our ability to make long-term improvements to it.

This is an unfortunate hyperbole from the author. There's a lot of distance between DoD and "hand-rolled assembly", and thinking it's fair to put them in the same bucket to justify the maintainability argument is just going to hurt the Rust project's ability to make a better compiler for its users.

You know what helps a lot in making software maintainable? A faster development loop. Zig has invested years into this, and both users and the core team itself have started enjoying the fruits of that labor.

https://ziglang.org/devlog/2025/#2025-06-08

Of course everybody is free to choose their own priorities, but I find the reasoning flawed and I think that it would ultimately be in the Rust project's best interest to prioritize compiler performance more.

juliangmp 18 hours ago
>[...] this will depend on who you ask, e.g. some C++ developers don’t mind Rust’s compilation times at all, as they are used to the same (or worse) build times

Yeah, pretty much. C++ is a lot worse when you consider the time spent in practice versus what compilation benchmarks show. In most C++ projects I've seen or worked on, there were one or more code generators in the toolchain, which slowed things down a lot.

And it looks even more dire when you want to add clang-tidy into the mix. It can take like 5 solid minutes to lint even small projects.

When I work in Rust, the overall speed of the toolchain (and the language server) is an absolute blessing!

adrian17 12 June 2025
> On this benchmark, the compiler is almost twice as fast than it was three years ago.

I think the cause of the public perception issue could be a variant of Wirth's law: the size of an average codebase (and its dependencies) might be growing faster than the compiler's ability to compile it is improving?

ruuda 11 hours ago
The Rust ecosystem is getting slower faster than the compiler is getting faster. Libraries grow to add features, they add dependencies. Individually the growth is not so bad, and justified by features or wider platform support. But they add up, and especially dependencies adding dependencies act as a multiplier.

I started writing a post about this many years ago, but never finished it. I took a few slow-changing projects of mine that had a pinned Rust compiler, and then updated both the compiler and dependencies to the latest versions. Invariably, everything got slower to compile, even though the compiler update in isolation made things faster!

jadbox 12 June 2025
Not related to the article, but after years of using Rust, I still find it a pain in the ass. While it may be a good choice for OS development, high frequency trading, medical devices, vehicle firmware, finance software, or working on device drivers, it feels way overkill for most other general domains. On the other hand, I learned Zig and Go both over a weekend and find they run almost as fast and don't suffer from memory issues (at least not as much as, say, Java or C++).
kjuulh 8 hours ago
Could Rust be faster? Yes. But honestly, for our use case, shipping tools, services, libraries and what have you in production, it is plenty fast. That said, Rust definitely falls off a cliff once you get to a very large workspace (I'd say past 100k lines of code it begins to snowball), but you can design yourself out of that unless you build truly massive apps.

Incremental builds don't disrupt my feedback loop much, except when paired with building for multiple targets at once, e.g. Leptos, where both a wasm and a native build are run. Incremental builds do, however, eat up a lot of space, a comical amount even. I had a 28GB target/ folder yesterday from working a few hours on a Leptos app.

One recommendation: definitely upgrade your CI workers. Rust benefits a lot from larger workers than the default GitHub Actions runners, for example.

Compiling a fairly simple app (though one that includes DuckDB, which needs to be compiled) took 28 minutes on the default runners, but on a 32x machine we're down to around 3 minutes, which is fast enough that it doesn't disrupt our feedback loop.
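For reference, a minimal sketch of what the larger-runner setup can look like in GitHub Actions (the runner label and cache action here are illustrative, not our actual config):

    # .github/workflows/ci.yml (sketch)
    jobs:
      build:
        # a larger hosted or self-hosted runner; the label depends on your org/plan
        runs-on: ubuntu-latest-32-cores
        steps:
          - uses: actions/checkout@v4
          - uses: dtolnay/rust-toolchain@stable
          - uses: Swatinem/rust-cache@v2   # caches ~/.cargo and target/ between runs
          - run: cargo build --release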

kunley 18 hours ago
The article is fine and has a lot of good points, but it tries to avoid the main issue like the plague. So I will say it here:

The slowness comes mainly from LLVM.

lrvick 7 hours ago
If anyone wants to feel better about compile times for their rust programs, try full source bootstrapping the rust compiler itself. Took about 2 days on 64 cores until very recently (thanks to mrustc 0.74). Now only 7 hours!
Animats 23 hours ago
This isn't a huge problem. My big Rust project compiles in about a minute in release mode. Failed compiles with errors only take a few seconds. That's where most of the debugging takes place. Once it compiles, it usually works the first time.
baalimago 8 hours ago
Seems to me that Rust has hit bedrock.

If there's no tangible solution to this design flaw today, what will happen to it in 20 years? My expectation is that the number of dependencies will increase, as will the complexity of the Rust ecosystem at large, which will make the compilation times even worse.

vlovich123 12 June 2025
I wonder how much value there is in skipping LLVM in favor of having an optimizing JIT linked in instead. For release builds it would get you a reasonable proxy, if it optimized decently, while still retaining better debuggability.
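For what it's worth, rustc already ships an experimental non-LLVM backend (Cranelift), aimed at faster debug builds rather than a JIT, which is the closest existing thing to skipping LLVM. A rough sketch of enabling it on a nightly toolchain (exact keys may differ):

    # install the Cranelift backend component on nightly
    rustup component add rustc-codegen-cranelift-preview --toolchain nightly

    # .cargo/config.toml -- opt in to the unstable profile option
    [unstable]
    codegen-backend = true

    # Cargo.toml -- use Cranelift for debug builds only
    [profile.dev]
    codegen-backend = "cranelift"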

I wonder if the JVM as an initial target might be interesting given how mature and robust their JIT is.

baalimago 8 hours ago
Why hasn't Rust been forked by some bigger company that has the time and resources to specialize it into something which fits better into a professional market? Yes, I'm saying that low compilation time -> a fast development round-trip is a requirement for the professional market.
bnolsen 17 hours ago
Compilation speed makes Go nice. Zig should end up being king here, depending on comptime use (i.e. the lack of operator overloading can be overcome by using comptime to parse formula strings for things like geometric algebra).
johnfn 21 hours ago
> First, let me assure you - yes, we (as in, the Rust Project) absolutely do care about the performance of our beloved compiler, and we put in a lot of effort to improve it.

I'm probably being ungrateful here, but here goes anyway. Yes, Rust cares about performance of the compiler, but it would likely be more accurate to say that compiler performance is, like, 15th on the list of things they care about, and they'll happily trade off slower compile times for one of the other things.

I find posts about Rust like this one, where they say "ah, of course we care about perf, look, we got the compile times on a somewhat nontrivial project to go from 1m15s to 1m09s", somewhat underwhelming - I think they miss the point. For me, I basically only care if compile times are virtually instantaneous. For example, Vite scales to a million lines and can hot-swap my code changes instantaneously. This is where the productivity benefits come in.

Don't just trust me on it. Remember this post[1]?

> "I feels like some people realize how much more polish could their games have if their compile times were 0.5s instead of 30s. Things like GUI are inherently tweak-y, and anyone but users of godot-rust are going to be at the mercy of restarting their game multiple times in order to make things look good. "

[1]: https://loglog.games/blog/leaving-rust-gamedev/#compile-time...

daxfohl 12 June 2025
Maybe these features already exist, but I'd like a way to: 1) Type check without necessarily building the whole thing. 2) Run a unit test, only building the dependencies of that test. Do these exist or are they remotely feasible?
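For what it's worth, the closest existing equivalents seem to be these (a sketch; the package and test names are placeholders):

    # 1) type-check the whole workspace without doing codegen
    cargo check --workspace

    # 2) build and run a single test target; only that package's
    #    dependency closure gets compiled
    cargo test -p my_crate --lib some_test_name

The granularity is per-crate rather than per-test, though, so a test in a big crate still pulls in the whole crate.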
panstromek 12 June 2025
The original title is:

Why doesn't Rust care more about compiler performance?

synthos 21 hours ago
Regarding AVX: could Rust be compiled with different symbols that target different x64 instruction sets, and then at runtime choose the symbol set that is the most performant for that architecture?
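At the library level this already exists: runtime feature detection plus per-function target features. A minimal x86_64-only sketch (the function names are made up):

    // compile this one function with AVX2 enabled, regardless of the
    // baseline target features of the rest of the binary
    #[target_feature(enable = "avx2")]
    unsafe fn sum_avx2(xs: &[f32]) -> f32 {
        xs.iter().sum()
    }

    fn sum_scalar(xs: &[f32]) -> f32 {
        xs.iter().sum()
    }

    pub fn sum(xs: &[f32]) -> f32 {
        // dispatch at runtime based on what the CPU actually supports
        if std::arch::is_x86_feature_detected!("avx2") {
            unsafe { sum_avx2(xs) } // safe: we just checked AVX2 is available
        } else {
            sum_scalar(xs)
        }
    }

Whether rustc itself should do this in its hot paths is mostly a question of dispatch overhead and code size versus the actual speedup.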
littlestymaar 12 June 2025
In my experience working on medium-sized Rust projects (hundreds of thousands of LoC, not millions), incremental compilation and mold pretty much solved the problem in practice. I still occasionally code on my 13-year-old laptop when traveling, and compilation time is fine even there (for cargo check and debug builds, that is; I barely ever compile in release mode locally).
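For anyone who hasn't tried it, the mold setup is only a couple of lines (a Linux/x86_64 sketch; it assumes a clang that understands -fuse-ld=mold):

    # .cargo/config.toml
    [target.x86_64-unknown-linux-gnu]
    linker = "clang"
    rustflags = ["-C", "link-arg=-fuse-ld=mold"]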

What's painful is compiling from scratch, and particularly the fact that every other week I need to run cargo clean and do a full rebuild to get things working. IMHO this is a much bigger annoyance than raw compiler speed.

dboreham 12 June 2025
I'd vote for filesystem space utilization to be worked on before performance.
superkuh 6 hours ago
The biggest problem with the Rust compiler is not its speed in compiling. It's that rustc from 3 months ago can't compile most Rust code written today. And don't tell me that cargo versioning fixes this, it doesn't. The very improvements we are celebrating here, which are very real and appreciated, are part of this problem. Rust is young and Rust changes very, very fast. I think it'll be a great language in a decade, when it's no longer just used by bleeding-edge types and has a target that stands still for more than a few months.
jmyeet 12 June 2025
I'm a big fan of Rust but there are definitely warts that are going to be difficult to cure [1]. This is 5 years old now but I believe it's still largely relevant.

It is a weird hill to die on for C/C++ devs though, given that header files and templates create massive compile-time issues that really can't be solved.

Google is known for having infrastructure for compiling large projects. They use Blaze (open-sourced as Bazel) to define hermetic builds, then use large systems to cache object graphs (for compilation units) and compiled objects, because Google has some significant monoliths that would take a very long time to compile from scratch.

I wonder what this kind of infrastructure can do for a large Rust project.
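The closest off-the-shelf analogue in the Rust world is probably a shared compiler cache such as sccache, which keeps compiled artifacts in a local or remote backend (a sketch; nothing here resembles Google's actual scale):

    # route rustc invocations through sccache so compiled artifacts are
    # cached locally, or in a shared backend like S3/Redis for CI
    export RUSTC_WRAPPER=sccache
    cargo build --release
    sccache --show-stats    # inspect cache hit rates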

[1]:https://www.pingcap.com/blog/rust-compilation-model-calamity...

ModernMech 3 hours ago
The biggest thing that's happened in recent times to improve Rust compiler performance was the introduction of the Apple M-series chips. On my x86 machine it'll take maybe 10 minutes for a fresh build of my project, but on my Apple machine that's down to less than a minute, even on the lower-end Mac Mini. For incremental builds it only takes a few seconds. I'm fine with this amount of compilation time for what it buys me, and I don't feel it slows me down any, because (and I know this sounds like cope) it gives me a minute to breathe and collect my thoughts. Sometimes I find that debugging a problem while actively coding in an interactive REPL is different from debugging offline.

I'm not sure why, but the way I would explain it is that when you're debugging in an interactive REPL you always get fast incremental results, but you may be going down an unproductive rabbit hole and spinning your tires. When I hit that compile button, I'm able to take a step back and maybe see the problem from another angle. Still, I prefer a short development loop, but I do think you lose something from it.