This is one of the (several?) things that make me very worried about Rust long-term. I love the language, and reach for it even when it sometimes isn't the most appropriate thing. But reading some of the made-up syntax in the "Removing Coherence" section makes my head hurt.
When I used to write Scala, I accepted the fact that I don't have a background in type/set/etc. theory, and that there were some facets of the language that I'd probably never understand, and some code that others had written that I'd probably never understand.
With a language like Rust, I feel like we're getting there. Certain GAT syntaxes sometimes take some time for me to wrap my head around when I encounter them. Rust feels like it shouldn't be a language where you need to have some serious credentials to be able to understand all its features and syntax.
On the other end we have Go, which was explicitly designed to be easy to learn (and, unrelatedly, I don't like for quite a few reasons). But I was hoping that we could have a middle ground here, and that Rust could be a fully-graspable systems-level language.
Then again, for more comparison, I haven't used C++ since before they added lambdas. I wonder if C++ has some hairy concepts and syntax today on par with Rust's more difficult parts.
Tangent to the topic: One of the great things about Go is that the Go team's goal is to have a great developer experience. As a result, they try to fold common third-party library functionality (mux, zap) into the standard library. For example, they offered an http server, but community packages offered conveniences it lacked. The Go team used those libraries as a reference for what people wanted, and added performant and simple http routing to the standard library[1].
From that link:
> We made these changes as part of our continuing effort to make Go a great language for building production systems. We studied many third-party web frameworks, extracted what we felt were the most used features, and integrated them into net/http. Then we validated our choices and improved our design by collaborating with the community in a GitHub discussion and a proposal issue. Adding these features to the standard library means one fewer dependency for many projects. But third-party web frameworks remain a fine choice for current users or programs with advanced routing needs.

[1]: https://go.dev/blog/routing-enhancements
I used Rust for ~14 months and released one profitable SaaS product built entirely in Rust (actix-web, sqlx, askama).
I won't be using Rust moving forward. I do like the language but it's complicated (hard to hold in your head). I feel useless without the LSP and I don't like how taxing the compiler and LSP are on my system.
It feels really wasteful to burn CPU and spin up fans every time I save a file. I find it hard to justify using 30+ GB of memory to run an LSP and compiler. I know those are tooling complaints and not really the fault of the language, but they go hand in hand. I've tried using a ctags-based workflow using vim's built-in compiler/makeprg, but it's less than ideal.
I also dislike the crates.io ecosystem. I hate how crates.io requires a GitHub account to publish anything. We are already centralized around GitHub and Microsoft, why give them more power? There's an open issue on crates.io to support email based signups but it has been open for a decade.
I think a lot of developers look at Typescript and come away thinking that a static type system is something you can retrofit onto any language. These devs ask why anyone would still want to use a dynamically typed language, as though static typing is something that can be had for free. But the reality is that a robust type system ends up profoundly shaping the design of a language, and introduces these sorts of thorny design questions, with each option bringing its own tradeoffs and limitations.
We want our languages to make it easy to write correct programs. And we want our languages to make it hard to write incorrect programs. And trying to have both at once is very difficult.
Let's say you have one library with a type, and you want to define a custom serialization for it. In that case, you can write a newtype wrapper, and then you just implement Serialize and Deserialize for TypeWithDifferentSerialization.
This covers most occasional cases where you need to work around the orphan rule. And semantically, it's pretty reasonable: if a type behaves differently, then it really isn't the same type.
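A minimal compilable sketch of that workaround, with inline modules standing in for the separate crates and a toy `to_json` method standing in for serde's real (more involved) API:

```rust
// "Upstream" crate: defines the type but no serialization.
mod upstream {
    pub struct Pair {
        pub a: i32,
        pub b: i32,
    }
}

// "Serialization" crate: defines the trait (think serde::Serialize).
mod ser {
    pub trait Serialize {
        fn to_json(&self) -> String;
    }
}

use ser::Serialize;

// Our crate owns this newtype, so the orphan rule allows the impl,
// even though both the inner type and the trait are foreign.
struct TypeWithDifferentSerialization(upstream::Pair);

impl Serialize for TypeWithDifferentSerialization {
    fn to_json(&self) -> String {
        format!("{{ \"a\": {}, \"b\": {} }}", self.0.a, self.0.b)
    }
}

fn main() {
    let wrapped = TypeWithDifferentSerialization(upstream::Pair { a: 1, b: 2 });
    println!("{}", wrapped.to_json());
}
```

With real serde you'd implement `serde::Serialize` for the wrapper the same way; the boilerplate is the wrapping and unwrapping at every use site.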
The alternative is to have a situation where you have library A define a data type, library B define an interface, and library C implement the interface from B for the type from A. Very few languages actually allow this, because you run into the problem where library D tries to do the same thing library C did, but does it differently. There are workarounds, but they add complexity and confusion, which may not be worth it.
I feel like encapsulation and composition are in strong tension, and this is one place where it boils over.
I've written a decent bit of Rust, and am currently messing around with Zig. So the comparison is pretty fresh on my mind:
In Rust, you can have private fields. In Zig all fields are public. The consequences are pretty well shown with how they print structs:
In Rust, you derive Debug, which is a macro that implements the Debug trait at the definition site. In Zig, the printing function uses reflection to enumerate the provided struct's fields, and creates a print string based on that. So Rust has the display logic at the definition site, while Zig has the logic at the call site.
It's similar with hash maps: in Rust you derive/implement the Hash and PartialEq trait, in Zig you provide the hash and eq function at the call site.
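The Rust side of that comparison, concretely: the derives at the definition site fix the printing, hashing, and equality behavior once, and every call site inherits it.

```rust
use std::collections::HashMap;

// Definition site: the author decides how the type is printed,
// hashed, and compared, by deriving the traits here.
#[derive(Debug, Hash, PartialEq, Eq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    // Call sites just use the behavior baked in at the definition.
    let p = Point { x: 1, y: 2 };
    println!("{:?}", p); // prints: Point { x: 1, y: 2 }

    // HashMap uses the derived Hash/Eq; no per-call-site functions.
    let mut seen: HashMap<Point, &str> = HashMap::new();
    seen.insert(p, "labeled");
    assert_eq!(seen.get(&Point { x: 1, y: 2 }), Some(&"labeled"));
}
```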
Each one has pretty stark downsides:
Zig - since everything is public, you can't guarantee that your invariants are valid. Anyone can mess around with your internals.
Rust - once a field is private (which is the convention), nobody else can mess with the internals. This means outside modules can't access internal state, so if the API is bad, you're pretty screwed.
Honestly, I'm not sure if there is a way to resolve this tension.
EDIT: one more thought: Zig vs Rust also shows up with how object destruction is handled. In Rust you implement a Drop trait, so each object can only have one way to be destroyed. In Zig you use defer/errdefer, so you can choose what type of destructor runs, but this also means you can mess up destruction in subtle ways.
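For the destruction point, the Rust half looks like this: the single teardown path is attached to the type, and callers can't pick a different one.

```rust
struct Connection {
    name: String,
}

// Exactly one destructor per type, declared at the definition site.
impl Drop for Connection {
    fn drop(&mut self) {
        // Runs automatically when the value goes out of scope;
        // a caller cannot substitute a different teardown path.
        println!("closing {}", self.name);
    }
}

fn main() {
    let _c = Connection { name: String::from("db") };
    println!("doing work");
    // `_c` is dropped here, so "closing db" prints after "doing work".
}
```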
I’m not convinced that the problem is actually a problem. Suppose someone writes a type PairOfNumbers with a couple fields. The author did not define a serialization. You use it in another type and want to serialize it as:
{ "a": 1, "b": 2 }
I use it and want to serialize it as:
[ 1, 2 ]
What we’re doing is fine. You should get your serialization and I should get mine. But if either of us declares, process-wide, that one of us has determined the One True Serialization of PairOfNumbers, I think we are wrong.
Sure, maybe current Rust and current serde make it awkward to declare non-global serializers, but that doesn’t mean that coherence is a mistake. And IMHO coherence and the orphan rules have majorly contributed to the quality of the ecosystem.
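Both serializations can in fact coexist today via wrappers, with neither one claiming global ownership. A sketch using `std::fmt::Display` as a stand-in for a serialization trait, so it runs without serde:

```rust
use std::fmt;

// The upstream type: its author defined no serialization.
struct PairOfNumbers {
    a: i32,
    b: i32,
}

// Your serialization, scoped to your wrapper: { "a": 1, "b": 2 }
struct AsObject<'p>(&'p PairOfNumbers);
impl fmt::Display for AsObject<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{{ \"a\": {}, \"b\": {} }}", self.0.a, self.0.b)
    }
}

// My serialization, scoped to my wrapper: [ 1, 2 ]
struct AsArray<'p>(&'p PairOfNumbers);
impl fmt::Display for AsArray<'_> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "[ {}, {} ]", self.0.a, self.0.b)
    }
}

fn main() {
    let p = PairOfNumbers { a: 1, b: 2 };
    // Both coexist in one process; neither is "the" serialization.
    println!("{}", AsObject(&p));
    println!("{}", AsArray(&p));
}
```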
To me, the correct solution to the problem of being tied to one ecosystem crate for utility features like serialization or logging is reflection / comptime. The problem is not the orphan rule, it’s that Rust needs reflection a lot more than a dynamically-typed language does, and it should have been added a long time ago. (It’s in development now, but it will most likely be years before it ships in a stable version.)
coherence rules are one of those things that seem annoying until you maintain a library others depend on. orphan rules saved me from some nasty diamond dependency situations. the frustration is real when you just want to impl Display for some foreign type though — end up with a newtype wrapper every time and hate it.
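The newtype dance in question, for anyone who hasn't hit it: `Duration` below is just an arbitrary foreign type picked for illustration.

```rust
use std::fmt;
use std::time::Duration;

// Duration (foreign type) + Display (foreign trait): the orphan rule
// forbids `impl fmt::Display for Duration` in our crate, so we wrap it.
struct HumanDuration(Duration);

impl fmt::Display for HumanDuration {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}.{:03}s", self.0.as_secs(), self.0.subsec_millis())
    }
}

fn main() {
    // Every call site pays the wrapping cost.
    println!("{}", HumanDuration(Duration::from_millis(1500))); // 1.500s
}
```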
"Note that nonbinary crates still obey the orphan rules."
I find it slightly humorous that this sentence contains three words which would be understood completely differently by the majority of the English-speaking population.
I’m not sure I fully understand but this seems to be the kind of problem that Ocaml functors solve. You program against an interface (signature) and you supply a concrete implementation (structure) when you want to run it. You can use different implementations in different parts of your application.
So maybe do something similar in Rust by expanding how you import and export modules?
I don’t think Rust needs this; Rust has done great for the last decade with the coherence rules it has. I am glad to not have to worry about this, and to not have to worry about any of the downstream problems (like linker errors) that coherence structurally eliminates.
I don't think explicit naming of impls is wise. They will regularly be TraitImpl or similar and add no real value. If you want to distinguish traits, perhaps force them to be within separate modules and use mod_a::mod_b::<Trait for Type> syntax.
> An interesting outcome of removing coherence and having trait bound parameters is that there becomes a meaningful difference between having a trait bound on an impl or on a struct:

This seems unfortunate to me.
The Rust ecosystem does a lot of what I like to call "inverse dependency injection".
If a Rust library needs support for TLS, typically that library implements a feature for each existing TLS backend, and keeps first-class integration with each one. The obvious thing would be to have a TLS trait, and have each TLS library implement that trait (i.e.: dependency injection).
Because of the orphan rule, such a trait would likely have to be declared in a small self-contained library, and each TLS library would implement that trait. I don't see any obvious impediment (aside from the fact that all TLS implementations would have to expose the same API and set of behaviours), but for some reason, the Rust ecosystem has taken the path of "every library has first-class integration with every possible provider".
This makes it really tricky to build libraries which rely on other libraries which rely on a TLS library, because your consumers can't easily select, for example, which TLS implementation to use. Libraries end up having lots of feature flags which just propagate feature flags to their dependencies.
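A sketch of what such a shared trait crate might look like. To be clear, no such standard crate exists; the trait, its methods, and the `FakeTls` backend below are all made up for illustration, with an in-memory stream so the example runs without networking.

```rust
use std::io::{self, Cursor, Read, Write};

// Hypothetical shared "tls-core" trait (invented for this sketch).
pub trait TlsConnector {
    type Stream: Read + Write;
    fn connect(&self, domain: &str) -> io::Result<Self::Stream>;
}

// A toy backend. A real one (rustls, native-tls, ...) would implement
// the trait for its own connector type, which the orphan rule permits:
// each library owns its type, and the trait lives in the shared crate.
pub struct FakeTls;

impl TlsConnector for FakeTls {
    type Stream = Cursor<Vec<u8>>;
    fn connect(&self, domain: &str) -> io::Result<Self::Stream> {
        Ok(Cursor::new(format!("connected to {domain}").into_bytes()))
    }
}

// A higher-level library is generic over the backend, so the final
// consumer picks the implementation (classic dependency injection),
// with no feature-flag plumbing in the intermediate layers.
pub fn greet<T: TlsConnector>(tls: &T, domain: &str) -> io::Result<String> {
    let mut stream = tls.connect(domain)?;
    let mut buf = String::new();
    stream.read_to_string(&mut buf)?;
    Ok(buf)
}

fn main() {
    println!("{}", greet(&FakeTls, "example.com").unwrap());
}
```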
Re: specialization and the comptime/reflection initiative
Since they allow observing whether a trait is implemented in the current crate, they would probably become unsound if impls could be declared in downstream crates. They are a partial solution, but they also make other solutions harder to implement soundly (and vice versa).
I've been worried about this before, and the problem is real. I don't know who maintains serde, but if that gets hacked it's gonna be an epic supply chain attack.
Great write-up of a problem that I'm glad Golang sidesteps.
The problem with this is that it's systemic and central to Rust's trait-based ecosystem composition.
Go has a version of it, but it's much smaller and more local. In Go, consumer-defined structural interfaces remove most of the pressure that causes the producer-led Rust problem in the first place.
Note the use case - someone wants to have the ability to replace a base-level crate such as serde.
When something near the bottom needs work, should there be a process for fixing it, which is a people problem? Or should there be a mechanism for bypassing it, which is a technical solution to a people problem? This is one of the curses of open source.
The first approach means that there will be confrontations which must be resolved. The second means a proliferation of very similar packages.
This is part of the life cycle of an open source language. Early on, you don't have enough packages to get anything done, and are grateful that someone took the time to code something.
Then it becomes clear that the early packages lacked something, and additional packages appear. Over time, you're drowning in cruft. In a previous posting, I mentioned ten years of getting a single standard ISO 8601 date parser adopted, instead of six packages with different bugs.
Someone else went through the same exercise with Javascript.
Go tends to take the first approach, while Python takes the second. One of Go's strengths is that most of the core packages are maintained and used internally by Google. So you know they've been well-exercised.
Between Github and AI, it's all too easy to create minor variants of packages. Plus we now have package supply chain attacks. Curation has thus become more important. At this point in history, it's probably good to push towards the first approach.
As a non-Rust person, how real are the problems in this article? Do they show up in the real world, or is it just an edge case? I only program in C17, C++ as C with classes, and C#. Can anyone give me a good read on what traits even are?
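For a C++ reader: a trait is roughly an interface, like an abstract base class (for dynamic dispatch) or a concept (for generics), except implementations live outside the type's definition. A minimal example:

```rust
// A trait is a set of methods a type promises to provide.
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

// Implementations are separate blocks, not part of the struct...
impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Area for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// ...and code can accept anything implementing the trait, either via
// dynamic dispatch (&dyn Area, like a virtual call) or generics.
fn print_area(shape: &dyn Area) {
    println!("area = {:.2}", shape.area());
}

fn main() {
    print_area(&Circle { r: 1.0 }); // area = 3.14
    print_area(&Square { s: 2.0 }); // area = 4.00
}
```

The article's "orphan rule" says you may only write `impl Trait for Type` if your crate defines the trait or the type; you can't implement someone else's trait for someone else's type.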
Similar to but not exactly the same as named impls, I'd really like to see a language handle this by separating implementing a trait from making a particular existing implementation the implicit default. Orphan rules can apply to the latter, but they can be overridden in a local scope by any choice of implementation.
This is largely based on a paper I read a long time ago on how one might build a typeclass/trait system on top of an ML-style module system. But, I suspect such a setup can be beneficial even without the full module system.
Language changes could help for sure. There’s a library implementation we can use right now though: https://facet.rs/ Basically a derive macro for reflection. Yeah it’s one (more) trait to derive on all your types but then users can use that to do reflection or pretty printing or diffing or whatever they want.
It is fundamentally difficult to have an “ecosystem”.
Would much rather see a bunch of libraries that implement everything for a given use case like web-dev, embedded etc.
Unfortunately this is hard to do in rust because it is hard to implement the low level primitives.
A language’s goal should be to make building things easier, imo. It should be simple to build a serde or a tokio.
From what I have seen in Rust, people tend to over-engineer a single library to the absolute limit instead of just building a bunch of libraries and moving on.
As an example, if it is easy to build a btreemap then you don’t have to have a bunch of traits from a bunch of different libraries pre-implemented on it. You can just copy it, adapt it a bit and move on.
Then you can have a complete thing that gives you everything you need to write a web server and it just works
I never understood why Rust couldn't figure this shit out. Scala did.
> If a crate doesn’t implement serde’s traits for its types then those types can’t be used with serde as downstream crates cannot implement serde’s traits for another crate’s types.
You are allowed to do this in Scala.
> Worse yet, if someone publishes an alternative to serde (say, nextserde) then all crates which have added support for serde also need to add support for nextserde. Adding support for every new serialization library in existence is unrealistic and a lot of work for crate authors.
You can easily autoderive a new typeclass instance. With Scala 3, that would be:
    trait Hash[A]:
      extension (a: A) def hash: Int

    trait PrettyPrint[A]:
      extension (a: A) def pretty: String

    // If you have Hash for A, you automatically get PrettyPrint for A
    given autoDerive[A](using h: Hash[A]): PrettyPrint[A] with
      extension (a: A) def pretty: String = s"<#${a.hash.toHexString}>"
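For comparison, the Rust analogue of that auto-derivation is a blanket impl, which coherence does permit because we own the `PrettyPrint` trait (a hypothetical trait invented for this sketch):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical trait, analogous to the Scala PrettyPrint above.
trait PrettyPrint {
    fn pretty(&self) -> String;
}

// Blanket impl: anything implementing Hash gets PrettyPrint for free.
// Legal because our crate owns PrettyPrint, even though the Hash-
// implementing types may come from anywhere.
impl<T: Hash> PrettyPrint for T {
    fn pretty(&self) -> String {
        let mut h = DefaultHasher::new();
        self.hash(&mut h);
        format!("<#{:x}>", h.finish())
    }
}

fn main() {
    println!("{}", 42u32.pretty());
    println!("{}", "hello".pretty());
}
```

What Rust forbids is the downstream crate adding such an impl for an upstream trait, which is exactly the serde/nextserde situation the article describes.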
> Here we have two overlapping trait impls which specify different values for the associated type Assoc.

Scala catches this too:
    trait Trait[A]:
      type Assoc

    object A:
      given instance: Trait[Unit] with
        type Assoc = Long
      def makeAssoc: instance.Assoc = 0L

    object B:
      given instance: Trait[Unit] with
        type Assoc = String
      def dropAssoc(a: instance.Assoc): Unit =
        val s: String = a
        println(s.length)

    @main def entry(): Unit =
      B.dropAssoc(A.makeAssoc)
      // error: Found: Playground.A.instance.Assoc
      //        Required: Playground.B.instance².Assoc²
Also it's sort of amazing how few people modify their tools and remove the objectionable bits.
For example, Java. Checked exceptions. Everyone hates checked exceptions. They're totally optional. Nothing in the JVM talks about a checked exception. Patch javac, comment out the checked exception checker, and compile. Nothing goes wrong. You can write Java and not deal with checked exceptions.
Likewise, you can modify rustc and make it not enforce the orphan rule.
Too many people treat their tools as black boxes and their warts as things they must tolerate and not things they can fix with their own two hands without anybody's permission.
Does the author expect incumbents to relax language rules that grant exorbitant privilege to incumbents? The orphan rule is up there with error handling in ways Rust is a screwed up language that appeals to people who don't know what they're missing. Other systems languages aren't weak in these ways.
I will never stop hating on the orphan rule, a perfect summary of what’s behind a lot of Rust decisions. Purism and perfectionism at the cost of making a useful language; no better way to torpedo your ecosystem and make adding dependencies really annoying for no reason. Like not even a --dangerously-disable-the-orphan-rule flag, just no concessions here.
An incoherent Rust
(boxyuwu.blog) | 237 points by emschwartz | 23 March 2026 | 148 comments
Comments
So they're finally rediscovering OCaml!