Torvalds: You can avoid Rust as a C maintainer, but you can't interfere with it
94 comments
· February 22, 2025
Muromec
>I find it distressing that you are complaining about new users of your code, and then you keep bringing up these kinds of complete garbage arguments. Honestly, what you have been doing is basically saying "as a DMA maintainer I control what the DMA code is used for". And that is not how any of this works.
I appreciate that his anger is still there, it's just worded differently, in a more modern (softer) attack. Is this still how we control developers in 2025? Yes! Deal with it or go fork it yourself. The day this goes away is the day Linux begins to die a death of 1,000,000 cuts in code quality.
p0w3n3d
> to be at least somewhat nice nowadays
That's a huge sacrifice where he's concerned, which we must appreciate. But to be honest, I have to agree with his point of view.
The golden rule when developing projects is to stick to one technology (or the fewest possible); otherwise you'll end up with software for which you need to hire developers in several different languages, or accept developers who won't be experts in some of them. I am working on a project that, up until a year ago, had been partly written in Scala. All the Java developers who didn't know Scala were doomed to either learn it painfully (through errors and mistakes) or just ignore tasks touching that part of the system.
lambdaone
You're right that this is generally a golden rule. But rules can have exceptions, and this seems to be one of them; the Linux kernel is now so large and complex, and C so obviously outdated now, that it's worth the pain to start writing drivers in Rust. And because of the modularity of the kernel, and the care taken to make Rust binary-compatible with C, this looks to be actually practical, as individual subsystems will be either entirely Rust or entirely C, particularly when new drivers are involved.
Joel_Mckay
"C so obviously outdated now" lol...
People could have spawned another kernel branch focused around Rust, but we know the predictable outcome:
https://en.wikipedia.org/wiki/Second-system_effect
Thus novelty bias hijacks people's ability to reason, and everyone gets upset as the polyglot codebase rots apart.
Polyglot projects by their very nature inject unstable dependencies into the build tree. This also makes the core feature of bootstrapping Linux on new hardware more difficult.
If someone says it is about learning yet another language, then they are disregarding people's valid concerns. The kernel should eventually be forked, with or without foundation support, if people refuse to isolate this extensive, hype-driven refactoring mission in its own branch.
YMMV =3
GiorgioG
I've been doing this for over 20 years and it's the first time I've heard of this "golden rule". I guess we've all been doing it wrong... writing our backends (pick your poison), frontends (TS/JS) and queries (SQL) in a variety of languages forever.
josefx
And if you look at how that mess started out, you had cross-site scripting on the frontend because HTML allowed you to inject more JavaScript from everywhere, and SQL injection on the backend because you had to translate your input from one language to another with tools that went out of their way to interpret data as commands.
The modern web is a gigantic mess with security features hacked on top of everything to make it even remotely secure, and the moment it hit the desktop thanks to Electron we had cross-site scripting attacks that allowed everyone to read local files from a plugin description page. If anything, it is the ultimate proof of how bad things can go.
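A minimal sketch of that data-as-commands problem, in Rust (the table name and input are invented for illustration; real code would go through a database library's parameterized queries rather than printing strings):

    fn main() {
        // Hostile input crafted to escape the string literal (hypothetical).
        let user_input = "x' OR '1'='1";

        // Interpolating the value straight into SQL text: the quote in the
        // input changes the *structure* of the query, so data becomes commands.
        let injected = format!("SELECT * FROM users WHERE name = '{}'", user_input);
        println!("{injected}");
        // -> SELECT * FROM users WHERE name = 'x' OR '1'='1'  (matches every row)

        // A parameterized query keeps the SQL text fixed and ships the value
        // separately; libraries such as rusqlite or sqlx expose this pattern.
        let fixed_sql = "SELECT * FROM users WHERE name = ?1";
        let params = [user_input];
        println!("{fixed_sql} with params {params:?}");
    }

The point is not the library API but the boundary: once the value travels separately from the query text, the receiving language can no longer misread it as syntax.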
jeroenhd
I've mostly seen language mixing in frontend. Backends seem to end up either being completely ported to a new (compatible) language, or the experimental new languages get ported back. Perhaps frontend developers are just more versatile because they have to be, with frameworks and the base spec constantly shifting under their feet.
Even many backend devs seem to shy away from things like SQL because they're not too comfortable with it. Which isn't bad per se; it's very easy to make a small mistake in a query that crushes the database. Just a personal observation of mine.
thegrim33
At my last job at a FAANG we had an Android app in Kotlin, and in all their wisdom the management decided to jump on the hip new thing, React Native, and start coding new/certain features in React Native.
Multiple years later, what was the state of things? We had a portion of the codebase in Kotlin with dedicated native/Kotlin developers, and a portion of the codebase in RN with dedicated RN/JS developers.
Any time there's a bug it's a constant shuffle between the teams over who owns it, which part of the code, native or JS, the bug is coming from, and who's responsible for it. A lot of the time nobody even knows, because each team is only familiar with part of the app now.
The teams silo themselves apart. Each team tries its best to hold on to the codebase: the native team tries to prevent the JS team from making the whole thing JS, and the JS team tries to convert as much to JS as possible. The native team argues why JS features aren't good, the JS team argues the benefits over writing in native. Constant back and forth.
Now no team has a holistic view of how the app works. There are massive chunks of the app that some other team owns and maintains in some other language. The ability to have developers "own" the app, know how it works, and have a holistic understanding of the whole product rapidly drops.
Every time there's a new feature there's an argument about whether it should be native or RN. The native team points out performance and look-and-feel concerns, the RN team points out code-sharing and rapid-development benefits. Constant back and forth. Usually whoever has the most persuasive managers wins, rather than whatever has the most technical merit.
Did we end up with a better app with our new setup, compared to one app, written in one language, with a team of developers that develop and own and know the entire app? No, no I don't think so.
Feels like a pretty close parallel to the Rust/C situation.
qchris
While I think your points about some of the difficulties that arise in multi-language/framework projects are fair, I sort of roll my eyes whenever someone frames Rust as something like the "hip new thing".
The Linux kernel's first "release" was in 1991, hit 1.0 in 1994, and arguably had its first modern-ish release in 2004 with the 2.6 kernel. Rust's stable 1.0 release was in 2015, a decade ago. There are people in the workforce now who were in middle school when Rust was first released. Since then, it has seen 85 minor releases and three follow-on editions, and it has both built a community of developers and gotten institutional buy-in from large orgs for business-critical code.
Even if you take the 1991 date as the actual first release, Rust as a stable language has existed for nearly a third of Linux's public development history (and of course it had a number of years of development prior to that). In that framing, I think it's a little unfair to put it in the "hip new thing" box.
ratorx
Other than the choice problem of deciding what language to build new features in (which needs a clear policy), I don’t see why maintaining a mixed language codebase HAS to be terrible.
In my current job, also at a FAANG, my team (albeit an SRE team, not a dev team) owns moderately sized codebases in C++, Go, Python and a small amount of Java. There are people “specialised” in each language, but everyone is generally competent enough to at least read and vaguely understand code in the other languages.
Now of course sometimes the issue is in the special semantics of the language and you need someone specialised to deal with it, but there’s also a large percentage that is logic problems anyone should be able to spot, or minor changes anyone can make.
The key problem in the situation you described seems to be the dysfunction of teams arguing for THEIR side, versus viewing the choice of language as any other technical decision that should be made with the bigger picture in mind. I think this partly stems from unclear leadership on how to evaluate the decision. Ideally you’d have guidance on whether to prioritise rapid development or consistency, and you’d make your language choice based on that.
As your codebase scales beyond a certain point, siloing is pretty inevitable, and it is better to focus on building a tree of systems and who is responsible for what. However, that doesn’t excuse anyone, especially the leads, for caring ONLY about their own system. Someone needs to understand things approximately, at least well enough to isolate problems between the various connected systems, even if they don’t specialise in all of them.
wffurr
Is that really a “golden rule”?
I have worked on lots of cross-language codebases. While it’s extremely useful to have experts in each language or part, one can meaningfully contribute to parts written in other languages without being an expert. Certainly programmers on the level of kernel developers should readily be able to learn the basics of Rust.
There are lots of use cases for shared business logic or rendering code with platform-specific wrapper code, e.g. a C++ or Rust core with Swift, Kotlin, and TypeScript wrappers. Lots of high-level languages have a low-level API for fast implementations, like CPython, Ruby FFI, etc. And the other way around, lots of native-code engines have scripting APIs for Lua, Python, etc.
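As a rough sketch of that "core plus wrappers" pattern (the function name and signature are invented for illustration), a Rust core can expose a C ABI that Swift, Kotlin/JNI, or Node/wasm wrappers then bind to:

    // Hypothetical shared business-logic function, exported with a stable C ABI
    // so per-platform wrappers can link against the compiled library.
    #[no_mangle]
    pub extern "C" fn core_price_with_tax(price_cents: i64, tax_permille: i64) -> i64 {
        price_cents + price_cents * tax_permille / 1000
    }

    fn main() {
        // Called directly here just to show it behaves like ordinary Rust;
        // on other platforms the wrappers would reach it through FFI.
        assert_eq!(core_price_with_tax(10_000, 200), 12_000);
        println!("ok");
    }

The wrappers then stay thin: they translate platform types at the boundary while the logic lives in one place.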
w0m
I don't know if it's a golden rule or just common sense, applied when applicable.
If our testing framework is in Python, writing a wrapper so you can code the tests for your feature in Perl because you're more comfortable with it is the Wrong way to do it, imo.
But if writing a Fluentd plugin in Ruby solves a significant problem in the same infra, the additional language could be worth it.
Everything is about tradeoffs.
pseudocomposer
I’d argue that the number of languages is less critical than how well-supported/stable the chosen languages/frameworks are, and whether the chosen tools offer good DX and UX. In simple terms… a project using 5 very well-supported languages/frameworks (say, C, Rust, Java, Python, modern React/TS) is a lot better off than one with 3 obscure/constantly-shifting ones (say, Scala, Flutter, Groovy).
Anyway, I’m a bit of a Rust fanboy, and would generally argue that its use in kernel and other low-level applications is only a net benefit for everyone, and doesn’t add much complexity compared to the rest of these projects. But I could also see a 2030 version of C adding a borrow checker and more comparable macro features, and Rust just kind of disappearing from the scene over time, and its use in legacy C projects being something developers have to undo over time.
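For readers wondering what "adding a borrow checker" would actually buy, here is a minimal sketch (a hypothetical example, not tied to any kernel code) of the aliasing bug Rust's checker rejects at compile time and C happily accepts:

    fn main() {
        let mut scores = vec![1, 2, 3];
        let first = &scores[0]; // shared borrow of an element

        // Uncommenting the next line fails to compile (error E0502): `push`
        // needs unique access and may reallocate, which would leave `first`
        // dangling -- the same pattern a C realloc silently allows.
        // scores.push(4);

        println!("{first}");    // the borrow ends after its last use...
        scores.push(4);         // ...so mutation is fine afterwards
    }

A hypothetical "C 2030 with a borrow checker" would have to enforce something equivalent: track which pointers alias which objects and reject mutations while a read-only alias is still live.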
chamomeal
Is that possible? Could C add a borrow checker? Honest question, I have no idea how anything works
monideas
Direct link to Linus' email: https://lkml.org/lkml/2025/2/20/2066
macspoofing
As a C maintainer, you should care how the other side of the interface is implemented even if you're not actively involved in writing that code. I don't think it is reasonable, for software quality reasons, to have a policy where a maintainer can simply pretend the other side doesn't exist.
bluGill
That puts far too many chefs in the kitchen and, worse(!), dilutes your time and understanding of the part of the code you know well. You need to trust your fellows in other areas of the code to make good decisions without you, and focus on what you know. Let other people do their own job without micromanaging them. Spend your time in your own lane.
Sometimes the other team proves incompetent and you are forced to do their job, but that is an unusual case. So trusting other teams to do their job well (which includes letting them try something you don't like) is a good rule.
MontagFTB
The API is the contract boundary. As long as it is well documented and satisfies its postconditions, it can be implemented in anything. Computing thrives on layers of abstraction like this.
tux1968
That's up to the maintainer; if they don't have any knowledge of Rust, then it's better they don't get involved anyway. They're still responsible for designing the best possible C interface to their subsystem, which is what most of the kernel will be interacting with. It puts the burden firmly on the shoulders of the Rust advocates, who believe the task is manageable.
As for your concern about code quality, it's the exact same situation that already exists today. The maintainer is responsible for his code, not for the code that calls it. And the Rust code is just another user.
dralley
Sure, and that's ideal for the maintainers that are willing to do that (and there are several), but for the C devs that just don't care and can't be forced to care, this is a pragmatic compromise. Not everyone has to be involved on both sides.
bena
You should care that it is usable, but how they use it should not concern you. If someone wants to use the USB driver to interface with a coin motor to build vibrating underwear, then that's none of your business. Your concern is whether your driver works to spec and can be interfaced with.
So if someone wants to write software in Rust that just uses the DMA driver, that should be fine. Linus is entirely in the right.
zubspace
It's an interesting discussion. There's always a divide when you slowly migrate from one thing to another.
What makes this interesting is that the difference between C code and Rust code is not something you can just ignore. You will lose developers who simply don't want to, or can't, spend the time to get into the intricacies of a new language. And you will temporarily have a codebase where two worlds collide.
I wonder how in retrospect they will think about the decisions they made today.
Sharlin
Most likely Rust will stay strictly on the driver side for several years still. It's a very natural Schelling fence for now, and the benefits are considerable, both in improving driver quality and making it less intimidating to contribute to driver code. It will also indirectly improve the quality of core code and documentation by forcing the many, many underspecified and byzantine API contracts to be made more rigorous (and hopefully simplified). This is precisely one of the primary things that have caused friction between RfL and the old guard: there are lots and lots of things you just "need to know" in order to soundly call many kernel APIs, and that doesn't square well with trying to write safe(r) Rust abstractions over them.
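A toy illustration of that last point (all names are invented, not a real kernel interface): an informal C rule like "only call this with the bus lock held" can be turned into something the Rust compiler checks, which is exactly why the underlying contract has to be nailed down first.

    // Hypothetical lock whose guard doubles as proof that the lock is held.
    struct BusLock;
    struct BusGuard<'a>(&'a BusLock);

    impl BusLock {
        fn lock(&self) -> BusGuard<'_> {
            // A real abstraction would actually acquire the lock here.
            BusGuard(self)
        }
    }

    // The informal "must hold the lock" rule becomes a parameter the caller
    // cannot conjure out of thin air.
    fn frob_device(_proof: &BusGuard<'_>, value: u32) -> u32 {
        value.wrapping_add(1)
    }

    fn main() {
        let bus = BusLock;
        let guard = bus.lock();
        println!("{}", frob_device(&guard, 41)); // 42
    }

Writing such a wrapper forces someone to state, precisely, what the C side actually requires; that is where much of the current friction comes from.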
Tyr42
An example of the latter: drm_sched
12345hn6789
This is Hector Martin, a former contributor who threatened social-media attacks on kernel maintainers if they did not accept Rust into the kernel.
bayindirh
I don't think converting completely to Rust is attainable. I guess some older or closer-to-the-metal parts will stay in C, but parts seeing more traffic and evolution will become more rusty over time, and both will have their uses and their islands inside the codebase.
gccrs will allow the whole thing to be built with the GCC toolchain in a single swoop.
If banks are still using COBOL and FORTRAN here and there, this is the most probable outcome in my eyes.
leonheld
> I guess some older or more closer to the metal parts will stay in C
I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or to have a very good idea of how an optimizing compiler will optimize things.
This reminds me I need to do some non-trivial embedded project with Rust to see how it behaves in that regard. I'm not sure if the abstraction gets in the way.
bayindirh
After writing some non-trivial and performance-sensitive C/C++ code, you develop a feeling for how that code behaves on the real metal. I have that kind of intuition, for example. I never had to dive down to the level of generated ASM, but I can get ~80% of theoretical IPC just by minding what I'm doing in C++ (minimal branching, biasing branches towards a certain side, etc.).
So I think if you do the same thing with Rust, you'll develop that intuition as well.
I have a friend who writes embedded Rust, and he said it's not as smooth as C yet. I think Rust has finished the first 90% of its maturing, and has the other 90% left.
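A small Rust sketch of the "biasing branches towards a certain side" idea mentioned above (function names invented): the stable #[cold] attribute marks the rarely taken path, hinting the optimizer to lay out and predict for the hot path.

    // Marking the rarely taken path cold keeps it out of the hot code layout.
    #[cold]
    #[inline(never)]
    fn handle_rare_error(status: u32) -> u64 {
        eprintln!("unexpected device status: {status}");
        0
    }

    fn process(status: u32, payload: u64) -> u64 {
        if status == 0 {
            payload * 2               // expected, hot path
        } else {
            handle_rare_error(status) // rare, cold path
        }
    }

    fn main() {
        println!("{}", process(0, 21)); // 42
    }

As in C++, this is a hint rather than a guarantee; the compiler remains free to arrange the code as it sees fit.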
xg15
> I suppose the biggest reason is that C programmers are more likely than not trained to kinda know what the assembly will look like in many cases, or have a very good idea of how an optimizer compiler will optimize things
This is the only way Hellwig's objection makes any kind of sense to me. Obviously, intra-kernel module boundaries are not REST APIs where providers and clients are completely separated from each other. Here I imagine that both the DMA module and its API consumers are compiled together into a monolithic binary, so if assumptions about the API consumers change, this could affect how the module itself is compiled.
the__alchemist
I've done a non-trivial embedded project in Rust (quadcopter firmware). The language doesn't get in the way, but I had to write my own tooling in many areas.
flir
Is there a layer where C is the sweet spot? Something too high-level for ASM, and too low-level for Rust? (not my area, so genuine question).
chippiewill
Many people still have the mistaken belief that C is trivial to map to assembly instructions and thus has an advantage over C++ and Rust in areas where understanding that mapping is important. But in practice the importance is overstated, and modern C compilers are so capable of optimising at high optimisation levels that many C developers would be surprised at what is produced if they looked much further than small snippets.
Like half the point of high-level systems languages is to be able to express the _effects_ of a program and let a compiler work out how to implement them efficiently (C++ famously calls this the as-if rule: the compiler can do just about anything to optimise, so long as the program behaves, in terms of observable effects, as if the optimisation hadn't been performed; C works the same way). I don't think there are really any areas left, from a language perspective, where C is more capable than C++ or Rust at that. If the produced code must work in a very specific way, then in all cases you'll need to drop into assembly.
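To make the as-if idea concrete, here is a small sketch in Rust (the same principle applies in C and C++): the compiler is free to replace the loop with a closed-form multiplication, because the returned value is the only thing a caller can observe.

    // Optimizers commonly rewrite this into (n * (n + 1)) / 2; under the
    // as-if principle that's legal because no observable behaviour changes.
    fn sum_to(n: u64) -> u64 {
        let mut total = 0u64;
        for i in 1..=n {
            total += i;
        }
        total
    }

    fn main() {
        println!("{}", sum_to(1_000)); // 500500 either way
    }
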
The thing Rust really still lacks is maturity in embedded settings, and by that I mostly mean toolchains for embedded targets being fiddly to use (or nonexistent), and some useful abstractions not existing for safe Rust in those settings (though it's not like those exist in C to begin with).
pjmlp
That is the exact reason it was created in the first place: a portable macro assembler for UNIX. It should have stayed there, leaving room for other stuff in userspace, like Perl/Tcl/... on UNIX, or Limbo on Inferno. As the UNIX authors revised their ideas of what UNIX v3 should look like, there was already a first attempt with Alef on UNIX v2, a.k.a. Plan 9.
Or even C++, which many forget was also born at Bell Labs in the UNIX group. The main reason was that Bjarne Stroustrup never wanted to repeat his Simula-to-BCPL downgrade ever again; C with Classes was originally designed for a distributed-computing research project at Bell Labs on UNIX, and Stroustrup certainly wasn't going to repeat that previous experience, this time with C instead of BCPL.
bayindirh
Directly programming hardware with bit-banging, shifts, bitmasks and whatnot. Too cumbersome to do in large swaths in ASM, too low-level for Rust or even for C++.
Plus, for that kind of thing you have "deterministic C" styles which guarantee things will be done your way, all day, every day.
For everyone answering: this is what I understood by chatting with people who write Rust in amateur and professional settings. It doesn't come from a "Rust is bad" bias or anything. The general consensus was that C is closer to the hardware and handles hardware quirks better, because you can do the "seemingly dangerous" things the hardware needs in order to initialize successfully. Older hardware is finicky, just remember that. Also, for anyone wondering: I'll start learning Rust the day gccrs becomes usable. I'm not a fan of LLVM, and have no problem with Rust itself.
lambdaone
Rust is a systems programming language by design; bit-banging is totally within its remit, and I can't think of anything in the kernel that Rust can't do but that C could. If you want really, really tight control of exactly which machine instructions get generated, you would still have to go to assembler anyway, in either Rust or C.
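A hedged sketch of what register bit-banging looks like in Rust (the register layout and bit are invented; a real driver would take them from the datasheet or device tree), using volatile accesses so the compiler cannot elide or reorder the MMIO:

    use core::ptr::{read_volatile, write_volatile};

    const ENABLE_BIT: u32 = 1 << 3; // hypothetical "enable" bit of a control register

    // Read-modify-write of a memory-mapped register through a raw pointer.
    // Caller must ensure `reg` points at a valid, mapped register.
    unsafe fn set_enable(reg: *mut u32) {
        let value = read_volatile(reg);
        write_volatile(reg, value | ENABLE_BIT);
    }

    fn main() {
        // Stand-in for a real register so the sketch runs on a host machine.
        let mut fake_reg: u32 = 0;
        unsafe { set_enable(&mut fake_reg) };
        assert_eq!(fake_reg, ENABLE_BIT);
        println!("register now {:#06x}", fake_reg);
    }
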
SSLy
maybe generic implementations of crypto primitives and math kernels.
Havoc
There was always going to be some kicking and screaming over this, tbh. This strikes me as a reasonable middle ground.
baq
It's reasonable, but calling it 'middle ground' when it's purely common sense is very generous.
mort96
Well it's a middle ground between two other realistic extremes, those being "subsystem maintainers must understand and support the Rust bindings to their APIs" and "subsystem maintainers can veto the introduction of Rust bindings to their APIs".
Joel_Mckay
Yup, they should have isolated the hype in a RustyLinux branch.
https://en.wikipedia.org/wiki/Second-system_effect
The outcome is well known... lol =3
homarp
previous discussion https://news.ycombinator.com/item?id=43123104
SeanLuke
I get the feeling that, no matter how slowly Linus goes, this is going to lead to a split. If Linus eventually pushes Rust through, the old guard will fork to a C-only version, and that won't be good.
jvillasante
[flagged]
cosmicradiance
Who's taking the baton?
knowknow
In what way?
perching_aix
Can't wait.
lrsa1218
[flagged]
ykonstant
There is nothing ambiguous here; if anything, Torvalds is simply enforcing common sense: Rust devs cannot be divas, and C devs cannot be saboteurs.
If anything, the whole kerfuffle is astounding for the lack of common sense, and sense of camaraderie, among those kernel devs. It should not take a dictator to enforce the obvious, but in this case it seems like it does.
i80and
How is this an ambiguous stance? "Subsystem maintainers don't have to allow Rust in, but other subsystems can and will build their own bindings to your code" seems fairly clear-cut.
dralley
It's made even less ambiguous by a later follow-up
https://lore.kernel.org/rust-for-linux/2cbxfvvsau5sobm3zo5ds...
blueflow
Are you displeased with Linus' leadership because he made the decision you want him to make?
bayindirh
Care to elaborate for the uninitiated?
You can see that Linus actually makes an effort to be at least somewhat nice nowadays, while still sticking to pragmatic technical decisions.