Oxidizing Ubuntu: adopting Rust utilities by default
237 comments
·March 18, 2025
hansvm
vlmutolo
I think that regardless of what references you have, Rust frees values at the end of their lexical “scope”.
For example, in the linked code below, x is clearly unused past the first line, but its “Drop” implementation executes after the print statement at the end of the function.
The takeaway is that if you want a value to drop early, just explicitly `drop` it. The borrow checker will make sure you don't have any dangling references.
https://play.rust-lang.org/?version=stable&mode=debug&editio...
In general, I think "lifetimes" only exist in the context of the borrow checker and have no influence on the semantics of Rust code. The language was designed so that the borrow checker pass could be omitted and everything would compile and run identically.
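A minimal sketch of the behavior described above (the playground link is truncated, so this is a stand-in rather than the original snippet):

```rust
struct Noisy(&'static str);

impl Drop for Noisy {
    fn drop(&mut self) {
        println!("dropping {}", self.0);
    }
}

fn main() {
    let x = Noisy("x");
    let y = Noisy("y");
    drop(x); // x's Drop runs here, before the print below
    println!("end of main");
    // y's Drop runs here when the scope ends -- after the print,
    // even though y was never used after its declaration
}
```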
steveklabnik
This is correct.
mmoskal
The vec pattern has some advantages though, in particular you can often get away with 32 bit indices (instead of 64 bit pointers), making it a little more cache-friendly. I did it for regex ASTs that are supposed to be hash-consed, so never need to die until the whole matcher dies [0].
A more contrived example is Earley items in a parser that point inside a grammar rule (all rules are in a flat vec) and back into the previous parser state - so I have 2 u32 offsets. If I had pointers, I would be tempted to have a pointer to the grammar rule, an index inside of it, and a pointer to the previous state [1], so probably 3x the space.
In both cases, pointers would be easier but slower. Annoying that Rust doesn't really let you make the choice though...
[0] https://github.com/microsoft/derivre/blob/main/src/hashcons.... [1] https://github.com/guidance-ai/llguidance/blob/main/parser/s...
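For readers unfamiliar with the pattern, here is a generic sketch of the index-as-pointer approach being described (a hypothetical node pool, not the derivre/llguidance code): links are u32 indices into a Vec instead of 64-bit pointers, and nothing is freed until the whole pool goes away.

```rust
// Hypothetical node pool: links are u32 indices into `nodes` instead of
// pointers/Boxes, so each link is half the size on 64-bit targets and
// cyclic/back references don't fight the borrow checker. The trade-off is
// that no individual node is freed until the whole pool is dropped.
struct Node {
    value: u64,
    next: Option<u32>, // index of the next node, if any
}

struct Pool {
    nodes: Vec<Node>,
}

impl Pool {
    fn new() -> Self {
        Pool { nodes: Vec::new() }
    }

    // Push a node and return its index: the "pointer" other nodes store.
    fn push(&mut self, value: u64, next: Option<u32>) -> u32 {
        let idx = self.nodes.len() as u32;
        self.nodes.push(Node { value, next });
        idx
    }

    fn get(&self, idx: u32) -> &Node {
        &self.nodes[idx as usize]
    }
}

fn main() {
    let mut pool = Pool::new();
    let a = pool.push(1, None);
    let b = pool.push(2, Some(a)); // b -> a
    let mut cur = Some(b);
    while let Some(i) = cur {
        let node = pool.get(i);
        println!("{}", node.value);
        cur = node.next;
    }
} // the whole pool (and every node in it) is freed here
```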
hansvm
By all means. It's a pattern I use all the time, not just in Rust (often getting away with much less than 32-bit indices). You mentioned this and/or alluded to it, but my core complaints are:
1. You don't have much choice in the matter
2. When implementing that strategy, the default (happy path coding) is that you have almost zero choice in when the underlying memory is reclaimed
It's a pattern that doesn't have to be leaky, but especially when you're implementing it in Rust to circumvent borrow-checker limitations, most versions I've seen have been very leaky, not even supporting shrink/resize/reset/... operations to try to at least let the user manually be more careful with leaks.
baq
You should probably use something like https://docs.rs/generational-arena/latest/generational_arena... when you want to do this.
I understand the problem isn’t that the tools exist, it’s that there are Rust users who are not aware of the concept and its evolution over the years, which I’d argue is not a uniquely Rust issue.
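For anyone unfamiliar, a quick sketch of what using it looks like (assuming the crate's basic Arena::new / insert / get / remove API; `generational-arena` would need to be added to Cargo.toml):

```rust
use generational_arena::Arena;

fn main() {
    let mut arena = Arena::new();

    let a = arena.insert("alpha");
    let b = arena.insert("beta");

    assert_eq!(arena.get(a), Some(&"alpha"));

    // Unlike a plain Vec indexed by raw integers, entries can be freed
    // early, and a stale index is detected (generation mismatch) instead
    // of silently aliasing whatever reuses the slot.
    arena.remove(a);
    assert_eq!(arena.get(a), None);
    assert_eq!(arena.get(b), Some(&"beta"));
}
```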
MeetingsBrowser
> not recognizing the implications of an extra reference here and there when the language frees things based on static detections of being unused
Maybe I’m misunderstanding, but isn’t the point of the borrow checker to throw errors if a reference “outlives” the memory it references?
How can an extra reference extend the lifetime?
hansvm
Rust (most of the time, I'm not arguing about failure modes at all right now, let's pretend it's perfect) drops items when you're done with them. If you use the item longer, you have a longer implicit lifetime. The memory it references will also be claimed for longer (the reference does not outlive its corresponding memory).
You only fix that by explicitly considering lifetimes as you write your code -- adding in concrete lifetimes and letting the compiler tell you when you make a mistake (very hard to do as a holistic strategy, so nobody does), or just "git gud" (some people do this, but it takes time, and it's not the norm; you can code at a very high level without developing that particular skill subset, with the nearly inevitable result of "leaky" rust code that's otherwise quite nice).
littlestymaar
I don't know where you got this idea but this is wrong.
> you use the item longer, you have a longer implicit lifetime. The memory it references will also be claimed for longer
This is only true at the stack frame level, you cannot extend a variable beyond that, and there's no difference with a GC in that particular case (a GC will never remove an element for which there's a variable on the stack pointing towards it)
> You only fix that by explicitly considering lifetimes as you write your code
Lifetime parameters don't alter the behavior, and they are either mandatory if the situation is ambiguous or entirely redundant. And it only works at function boundaries, where lifetime extension cannot happen anyway. (See above)
Please stop spreading bullshit criticism that has no grounding in reality (Rust has real defects like everything else, but spreading nonsense isn't OK).
worik
> drops items when you're done with them.
That is what GC languages do, too.
You can get serious resource leaks in any language if you are careless
What Rust guarantees is that you cannot dereference a null pointer.
humanfromearth9
Why doesn't anyone rewrite some of these tools in a language that can be compiled to a native binary by GraalVM and benefit from all of Java's security guarantees? Would it be too slow? Don't the advantages outweigh the disadvantages?
jeroenhd
Why go for Java when you can go for .NET? Or Go? .NET seems to perform on par and seems to produce smaller executable, and Go seems to be faster in general.
Personally I don't really care what language common tools are written in (except for programs written in C(++), but I'll gladly use those after they've had a few years to catch most of the inevitable memory bugs).
I think the difference is that there aren't many languages with projects that actually end up writing full suite replacements. There's a lot of unexpected complexity hidden within simple binaries that need to be implemented in a compatible way so scripts don't explode at you, and that's pretty tedious work. I know of projects in Rust and Zig that intend to be fully compatible, but I don't know of any in Java+GraalVM or Go. I wouldn't pick Zig for a distro until the language hits 1.0, though.
If these projects do exist, someone could probably compare them all in a compatibility and performance matrix to figure out which distribution is the fastest+smallest, but I suspect Rust may just end up winning in both areas there.
nicoburns
Why use Java when you can use Rust? In all seriousness, Rust is a joy to work with for these kinds of tools, which typically don't have complex lifetimes or ownership semantics.
On top of that you get better performance and probably smaller binaries. But I would pick Rust over Java for CLI tools just on the strengths of the language itself.
dpe82
You could do similar with the existing C by compiling it to WASM and then compiling that to machine code.
neonsunset
Because it’s not a good platform for this at all.
jvanderbot
The claim was "safe", not GC equivalent or memory minimal. You're right, all things have tradeoffs and borrow checker is no different.
kortilla
The claim was “leaks are hard to code by accident”. I agree with gp that this is false.
Preventing leaks is explicitly not a goal of rust and making lifetimes correct often involves giving up and making unnecessary static lifetimes. I see this all the time in async rpc stream stuff.
lifthrasiir
I think leaks are indeed harder to code by accident in non-async Rust, which was the original setting Rust was developed for. I wouldn't say it is absolutely hard (that really depends on the architectural complexity), but there seems to be some truth in that claim.
littlestymaar
Leaks are as hard to do in Rust as with a GC though.
That is, they aren't impossible and you'll eventually have to fight against one nasty one in your job, but it's far better than without a GC or borrowck.
haileys
When a type is 'static, it doesn’t mean it’s leaked for the lifetime of the program. It just means it owns everything it needs and doesn’t borrow anything. It will still free everything it owns when it is freed.
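A small illustration of the point (a made-up Config type, not from any particular codebase): a type with no borrows satisfies a 'static bound, yet it is still dropped as soon as its owner goes away.

```rust
// Config owns all of its data (no borrowed references), so it satisfies a
// 'static bound -- but that says nothing about when it is freed.
struct Config {
    name: String,
    entries: Vec<String>,
}

fn requires_static<T: 'static>(_value: &T) {}

fn main() {
    let cfg = Config {
        name: "example".to_string(),
        entries: vec!["a".to_string(), "b".to_string()],
    };
    requires_static(&cfg); // compiles: Config satisfies 'static
    println!("{} has {} entries", cfg.name, cfg.entries.len());
    drop(cfg); // ...and its memory is still released right here
    println!("cfg is already gone");
}
```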
pjmlp
Unless one is writing something where pauses are a no-go, even a tiny µs, I don't see a reason for rushing to affine type systems, or variations thereof.
CLI applications are exactly a case where it doesn't matter at all; see Inferno and Limbo.
littlestymaar
I genuinely don't understand why this is the top comment, as it is almost complete BS and the author confuses the behavior of the borrow checker with that of a GC (and ironically claims a GC would solve the problem, when in fact the problem doesn't exist outside of a GC's world)
hansvm
There's no confusion between what a borrow checker and a GC do. The borrow checker enforces safety by adding constraints to the set of valid programs so that it can statically know where to alloc/dealloc, among other benefits. A GC dynamically figures out what memory to drop. My claim is that those Rust constraints force you to write your code differently than you otherwise would (since unsafe is frowned upon, this looks like hand-rolled pointers using vecs as one option fairly frequently), which is sometimes good, but for more interesting data structures it encourages people to write leaks which wouldn't exist if you were to write that same program without those constraints and thus wouldn't exist in a GC language.
littlestymaar
> There's no confusion between what a borrow checker and a GC do
There is, with Rust you cannot extend the lifetime of an object by keeping a reference to it, when this is exactly the kind of thing that will happen (and cause leaks) with a GC.
> this looks like hand-rolled pointers using vecs as one option fairly frequently
Emphasis mine. Idk where you got the idea that was something frequent, but while it is an option that's on the table in order to manage data with cyclic references and is indeed used by a bunch of crates doing this kind of stuff, it's never something you'd have to use in practice.
(I say that as someone who's been writing Rust for ten years in every possible set-up, from embedded to front-end web, and who's been teaching Rust at university).
steveklabnik
The borrow checker does not change when things are dropped.
Fruitmaniac
Memory leaks are easy to code by accident in Java, so it must be even worse in Rust.
ZoomZoomZoom
> This is not symbolic of any pointed move away from GNU components - it's literally just about replacing coreutils with a more modern equivalent. Sure, the license is different, and it's a consideration, but it's by no means a driver in the decision making.
Sorry, don't believe this one bit. I'm very thankful for everything Canonical/Ubuntu did about 20 years ago, but no thanks. This comes from someone who loves Rust and what it made possible. However, freedoms are too important to not treat anything that looks like an attack on them as such.
imiric
Ubuntu lost its way a long time ago. Pushing Snap by default, ads in the Unity Dash search, ads in the terminal... Unity itself was a misstep, and around the time the distro started going downhill for me.
I don't follow it much these days, but nothing really surprises me from Canonical anymore. They leech off free software just like any other corporation.
saghm
Whatever happened to the whole debate around Ubuntu including ZFS modules by default in Ubuntu? At the time it originally got proposed, I feel like I remember basically everyone other than Canonical agreeing that this wasn't allowed by the licenses of Linux and ZFS, but they did it anyways, and from what I can tell, they basically got away with it?
Unless I'm remembering it wrong, I'm honestly not surprised that they might just be less worried about licensing in general after that; maybe this is the software licensing equivalent of "too big to fail"?
xoa
>Whatever happened to the whole debate around Ubuntu including ZFS modules by default in Ubuntu? At the time it originally got proposed, I feel like I remember basically everyone other than Canonical agreeing that this wasn't allowed by the licenses of Linux and ZFS
I think your recollection here is pretty colored, or else you were only reading stuff from one particular bubble. My own recollection is that there were very compelling arguments to the contrary, ones that I agreed with. Like, foundational to the concept of civil court cases is "standing" which is almost invariably down to a party suffering damages. Courts aren't a venue (at least in the US in general) for solving debates of abstract philosophy, that's what civil society is about, they're for applying the state's general monopoly on violence in order to rectify harms. Party A claims they were damaged by Party B (statutory, actual or both) in a way that violates the law. The remedy if they win is generally that they are made whole via money as best as possible or, rarely, via actual performance by Party B.
But if open source license GPL code is distributed with open source license CDDL code, with full access to the source of both available, who is losing money on that? Who is getting damaged? What rights are being lost? Who has standing to sue over it? The idea that it's banned at all isn't tested legally anyway afaik, but more fundamentally if there is no harm then there simply isn't any court case. Say you, saghm, decided tomorrow that you were really steamed about ZFS being included with Ubuntu and decided to get going on your lawsuit on Monday: when the court asks "how much money did that cost you and what do you want us to do about it", what would your answer be, even putting aside "what's your theory of law on this one"?
>and from what I can tell, they basically got away with it?
Well, that would be expected if there just isn't any case there right?
mustache_kimono
> from what I can tell, they basically got away with it?
Or they correctly interpreted the law?
Few of us care to admit how this "legal consensus" is mostly BS. Lots of evidence to the contrary, but, like politics these days, it sure is an appealing fantasy?
nosrepa
Ubuntu stopped being Ubuntu when it stopped being brown.
queuebert
I just realized this today. When trying to upgrade an EOL-ed Ubuntu machine using 'do-release-upgrade', it completely shat the bed. Now I'm in search of an alternative distro that has good GPU support, rolling releases, no systemd. Maybe OpenSUSE?
If there are any SV billionaires out there, can you fund CUDA on OpenBSD please? :-P
yjftsjthsd-h
No systemd really limits the options (including that OpenSUSE uses it, so I'm not sure how to read your comment). Maybe Artix Linux? That's rolling release and systemd-free, though I don't know about graphics drivers. I would have suggested Pop!_OS, which has an awful name but is basically Ubuntu without the bad stuff and with excellent graphics drivers, but it has systemd and isn't rolling, so YMMV.
anthk
You can run Trisquel MATE, but you need to set up the proprietary kernel and CUDA on your own. You can use the Xanmod kernel, the headers, and the proprietary NV installer, but you will be less free. With Intel and OpenCL (or whatever Intel uses today), you might lose performance, but you gain compatibility and you are not bound to Intel x86, in case you want to try Power9-based workstations further down the line.
agalush
I don't know about the GPU support, and it is not a rolling release, but Devuan is a systemd-free Debian derivative you might be interested in
Y_Y
I use Nonguix with an RTX 2080 and am very happy with that. Similarly to Nix, it's marvellous, but not for the faint of heart.
sanderjd
Why don't you believe it?
From my perspective, it seems eminently plausible. Who cares about the licensing?
But I'm interested in what leads you to the opposite conclusion.
kartoffelmos
> Who cares about the licensing?
Well, for one, the author of the Rust-based utils cared enough to change it (or rather, to not re-adopt the GPL, but IMO that's the same thing). Why shouldn't we care about the licensing?
sanderjd
Yeah I was being tongue in cheek. (Or just overly dismissive I guess.)
I know people do care about licensing, but it's also plausible that it isn't what any given project is most focused on.
superb_dev
Which freedoms are being attacked?
globular-toast
The GPL ensures the software will always be free. "Permissive" licences don't. Permissive licences permit anyone to create a derivative version that isn't free. No source code, and it will be under copyright.
If there's one thing people need to understand it's that corporations will always strive to take as much as they can and give back as little as they can. It's incredibly naive to think they won't do it again after that's exactly what every corporation has done for the past several centuries. Give an inch and they'll take a mile. You can't seriously think companies like Microsoft and Oracle aren't already salivating at the thought of the community rewriting GNU/Linux in a permissive licence?
It's not that this particular case is "freedoms being attacked". It's that freedoms are constantly and relentlessly under attack and we can't let our guard down for one moment. GPL is our protection and voluntarily stepping out of it would be suicide.
viraptor
You mean the Microsoft which not that long ago released the proprietary .NET code under MIT? It's not all going one way, even in large corps. Windows also ships with WSL now - what would they even gain now from a non-GNU Linux?
endgame
Remember when Red Hat tried to lock away the source code to subscribers only? I think they would have preferred not to provide source at all, but the enormous base of GPL code entangled with everything made that impractical.
I think we're really going to miss having the fundamental parts of the system available under strong copyleft, and it will be very hard to go back.
LeFantome
Red Hat is probably the biggest single provider of GPL software in the world. They consistently choose the GPL for software they author and release. They founded and fund the Fedora Project themselves explicitly to be a community driven distro independent of their commercial efforts. They provide the full source to their products even though a huge percentage of the code is MIT, BSD, or Apache and does not require them to. They provide everything required to produce the "aggregate" full product even though the GPL does not require them to.
I will go on record now with the prediction that, if uutils catches on, Red Hat will be one of the last companies to move away from GNU. They are probably the largest contributor to glibc and GCC.
What evidence do you have that they would have "preferred not to provide source at all"? Because there is a mountain of evidence otherwise.
As an individual user, Red Hat will give you a free license to their flagship product. Then they will tell you how to download every line of code for it. I don't have a license. I do not use Red Hat (other than occasionally using the also totally free RHEL9 container).
"Remember when Red Hat tried to lock away the source code to subscribers only?"
I do not remember them changing the policy you are talking about. If you think they did, it highlights how overblown the response was at the time. The biggest impact of the Red Hat change is that Alma Linux is now a better project that can actually innovate and contribute.
stouset
Everything in uutils is under the MIT license. The only thing you're missing is the "viral" nature of the GPL. Nobody in the BSD world seems to be particularly suffering under the thumb of unreleased forks of `chown` and `mkdir`.
zifpanachr23
Red Hat has been more consistently pro GPL and pro open source and invests more money into it than any other company.
Them putting the downloads of their enterprise OS behind a login screen (student and solo developer licenses can STILL be got for free) is something I 100% understand and am kind of sympathetic about.
I know that's out of line with how some people define free software, but I've always had ethical issues with the way people would redistribute Red Hat's work for free. Just because it's legal doesn't make it ethical.
ddulaney
The last 2 paragraphs of this interview with David Chisnall really made me think differently about that: https://lobste.rs/s/ttr8op/lobsters_interview_with_david_chi...
In particular:
> I think the GPL has led to fairly noticeable increase in the amount of proprietary software in the world as companies that would happily adopt a BSDL component decide to create an in-house proprietary version rather than adopt a GPL’d component.
It also aligns with my experience: my company couldn’t find an LZO compression library that wasn’t GPL’d, so the decision was between implementing one in-house or cutting the feature. We ended up restricting use of the feature to in-house use only, but opening up our core source code was never an option.
If there had been a permissive license option available, we would’ve likely donated (as we do to several other dependencies), and would’ve contributed any fixes back (because that’s easier to explain to customers than “here’s our patched version”).
bigstrat2003
That isn't something Canonical could do just because the software has a permissive license instead of a copyleft license. They can do that if they own the copyright (which I can't imagine they do), but if they own the copyright they can do anything they want and nobody has a say in it. So again it has nothing to do with license.
The absolute most that Canonical can do is start making a fork of the software where they don't distribute the source code. But in that case, the original code is still right there. No freedom is lost at any point.
Kbelicius
User freedoms. Those don't exist under the so called "more permissive" licenses.
immibis
I see no problem with Rust. The problem here is the licensing. The new project is proprietary-compatible.
IshKebab
So what? There's zero interest in proprietary extensions to coreutils. This isn't Linux or GCC; they're just basic command line utilities.
Also anyone that really cared could use the BSD versions anyway.
bitwize
[flagged]
metroholografix
Stallman has done more to safeguard our freedoms than all the critics and corporations that are happy to exploit his work put together.
He’s also been extremely prescient and decades ahead of his time. He stood his ground against relentless attacks where so many others have sold out and betrayed whatever morals they thought they had.
knowknow
These comments are frightening since they throw away decades' worth of work and stability for whatever is being advocated for right now. How do we know that we can trust Rust developers to actively maintain a project when they seem eager to follow whatever the most current thing is? It's the same situation with the Asahi Linux lead dev who quit at temporary pushback. There's no faith that they will actually be committed to it.
kouteiheika
> when they seem eager to follow whatever the most current thing is?
Rust hit 1.0 *10 years ago*. How many more years will it take for people to stop constantly insinuating that people only use it because of the hype, and not simply because it's a vastly better language than C?
PaulDavisThe1st
How much of the GNU Project was "presented by" Stallman? What would your threshold be where, if the figure was larger, using it would be unacceptable, but if smaller, it would be OK ?
desumeku
[flagged]
johnny22
I am not that concerned about more permissively licensed versions of what are effectively commodities (like coreutils). I am still very interested in keeping something like the kernel as GPL since it doesn't have a real substitute.
blueflow
The uutils project has the right goals - 1:1 compatibility with GNU coreutils to the extent that any difference in functionality is a bug.
The first comment on LWN is about a bug in the more(1) from uutils. I checked out that code, found some oddities (like doing stat() on a path to check if it exists, right before open()ing it) and went to check against how GNU coreutils does it.
Turns out coreutils does not do it at all, because more(1) is from util-linux. ta-dam.
LeFantome
I think this is an excellent point that a lot of people are missing.
There are a lot of GNU utils. For now, Ubuntu is only taking on the ones that are actually in coreutils. Those are the ones that uutils considers "production ready".
secondcoming
> > Klode said that it was a bad idea to allow users to select between Rust and non-Rust implementations on a per-command level, as it would make the resulting systems hard to support.
Wouldn't this imply that these aren't actually 1:1 replacements?
zamadatix
It would imply not all of the replacements are complete and/or bug free yet, and the article gives examples of just that earlier in the text, but it would not imply the replacements lack the goal of being 1:1 replacements like GP said.
This would seem to be one of the driving factors in the creation of oxidizr: to allow testing how ready the components are to be drop in replacements with easy recourse. You can read more about that in this section of the linked blog post by jnsgruk https://discourse.ubuntu.com/t/carefully-but-purposefully-ox...
blueflow
uutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.
-- https://github.com/uutils/coreutils?tab=readme-ov-file#goals
codeguro
>The uutils project has the right goals
Their goal is to _replace GPL'd code_. It's a subtle attack on Free Software by providing corporations a workaround so they don't have to abide by the terms of the license.
Nobody seriously doubts the efficiency or safety of the GNU Coreutils. They've been battle tested for over 30 years and whatever safety or optimization risks are left are negligible.
01HNNWZ0MV43FF
That's funny because just today I was reading the Rust docs for std::fs::File and saw a warning about that exact TOCTOU error, and had to fix my code to not hit it
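For reference, a generic sketch of the kind of fix involved (hypothetical code, not the commenter's): skip the path-based existence check and instead open the file, branch on the error, and query metadata through the already-open handle.

```rust
use std::fs::File;
use std::io::ErrorKind;

fn main() -> std::io::Result<()> {
    // Racy pattern: check the path, then open it. The file can be swapped
    // (e.g. for a symlink) between the two calls:
    //
    //     if Path::new("foo").exists() { let f = File::open("foo")?; ... }
    //
    // Instead, just open it and branch on the error, and ask for metadata
    // through the already-open handle (fstat) rather than the path (stat).
    match File::open("foo") {
        Ok(f) => {
            let meta = f.metadata()?;
            println!("opened foo: {} bytes", meta.len());
        }
        Err(e) if e.kind() == ErrorKind::NotFound => eprintln!("foo: no such file"),
        Err(e) => return Err(e),
    }
    Ok(())
}
```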
Starlevel004
std::fs is generally not a good API, given its complete lack of directory handles.
cWave
the more you know
samtheprogram
Not sure why you are downvoted but I’m curious why. I appreciated this tidbit, thanks.
jgtrosh
And? How does that do it?
IshKebab
I checked. It fopen's the file and then fstat's it. So it isn't vulnerable to TOCTOU.
However the TOCTOU is completely benign here. It's just an extra check before Rust opens the file so if you were to try to "exploit" it the only thing that would happen is you get a different error message.
oguz-ismail
> if you were to try to "exploit" it the only thing that would happen is you get a different error message
Can't reproduce this. If I do
sudo strace -e inject=stat:delay_exit=30s:when=2 ./coreutils more foo
on one terminal, and
rm foo
ln -s /etc/passwd foo
on another, I can see the contents of /etc/passwd on the first one.
zoogeny
I hate to admit it, because I don't particularly like Rust, but I'm slowly coming around to the idea it should replace things.
The major change is a recent comment I made where I was musing about a future where LLMs actually write most of the code. That isn't a foregone conclusion, but the recent improvements in coding LLMs suggest it isn't as far off a future as I once considered.
My thought was simple: if I am using an LLM to do the significant portion of generating code, how does that change what I think about the programming language I am using? Do the criteria I use to select the language change?
Upon reflection, Rust is probably the best candidate. It has strictness in ways that other languages do not have. And if the LLM is paying most of the cost of keeping the types and lifetimes in order, what do I care if the syntax is ugly? As long as I can read the output of the LLM and verify it (code review it) then I actually want the strictness. I want the most statically analyzable code possible since I have low trust in the LLM. The fact that Rust is also, ahem, blazingly fast, is icing on the cake.
As an aside to this aside, I was also thinking about modular kernels, like Minix. I wonder if there is a world where we take a userland like the one Ubuntu is trying and apply it to Minix. And then slowly replace the os modules with ones written in Rust. I think the modularity of something like Minix might be an advantage, but that might just be because I am completely naïve.
kortilla
The rust produced by LLMs is quite bad. It’s overly verbose (misses combinators) and often subtly wrong (swallows errors on result types when it shouldn’t). A single errant collect or clone call can destroy your performance and LLMs sprinkle them for no reason.
Unless you are experienced in rust, you have zero ability to catch the kind of mistakes LLMs make producing rust code.
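As a toy illustration of the "errant collect" point (not taken from any LLM output), both functions below compute the same sum, but the first allocates an intermediate Vec it never needed:

```rust
// Both functions compute the same sum; the first allocates an intermediate
// Vec for no reason -- the kind of "errant collect" described above.
fn sum_even_with_collect(xs: &[u64]) -> u64 {
    let evens: Vec<u64> = xs.iter().copied().filter(|x| x % 2 == 0).collect();
    evens.iter().sum()
}

fn sum_even(xs: &[u64]) -> u64 {
    // Stays lazy: no intermediate allocation.
    xs.iter().copied().filter(|x| x % 2 == 0).sum()
}

fn main() {
    let xs = [1, 2, 3, 4, 5, 6];
    assert_eq!(sum_even_with_collect(&xs), sum_even(&xs));
    println!("{}", sum_even(&xs));
}
```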
sroussey
I just had an LLM (something 3.7) propose a 100 line implementation and after I stared at things for a while, I reduced it to one. I’m sure I’m in the minority of not just accepting the 100 line addition.
andai
Could you elaborate? What were those lines doing?
realusername
> Unless you are experienced in rust, you have zero ability to catch the kind of mistakes LLMs make producing rust code.
I'd say it's on par with other languages then... LLMs are roughly 95% correct on code generation, but that's not nearly enough for using them.
And spending time finding which 5% is looking good but actually wrong is a frustrating experience.
Those programs make different kinds of mistakes than humans do, and I find them much harder to spot.
_factor
It’s interesting it took you this long to substitute “junior coder” with LLM. The implicit safety is just as applicable to teams of human error prone devs.
Covenant0028
I've had similar thoughts as well. Rust or C would also be the ideal candidate from an energy consumption point of view, as they consume way less energy than Python, which is the language that LLMs most often default to in my experience.
However it's unlikely LLMs will generate a lot of Rust code. All these LLMs are generating is the most likely next token, based on what they've been trained on. And in their training set, it's likely there is massively more Python and JS code out there than Rust, simply because those are way more popular languages. So with Rust, it's more likely to hallucinate and make mistakes than with Python where the path is much better trodden
phanimahesh
However, it is much easier to statically analyse Rust, and Rust has compile-time validation that an interpreted language like Python lacks. This makes it easier for an agentic LLM to produce working code via a write-compile-fix loop in Rust than in Python.
In my experience LLMs are not particularly bad with Rust compared to Python, though I've only toyed with them minimally.
saghm
While I'm lucky enough not to ever have had to spend a while trying to refactor/fix LLM-produced code, I've definitely had a much easier time trying to do major refactors to fix larger issues with human-written Rust code than any other language I've tried it with. The compiler obviously doesn't catch everything, but the types of things it _can_ catch seem to be very common to encounter when trying to make cascading changes to a codebase to fix the types of issues I've seen refactors attempt to address, and my instinct is that these might also be the types of things that end up needing to be fixed in LLM-produced code.
Elsewhere in this thread someone mentioned LLMs producing poor performing code that does stuff like collect iterators far too often and pointed out that fixing this sort of thing often requires significant Rust expertise, and I don't disagree with that. However, my impression is that it still ends up being less work for an experienced Rust programmer to claw back some memory from poor code than for someone similarly experienced in something other than Rust working in a code base in their own language. I've seen issues when three or four Go engineers spend weeks trying to track down and reduce memory overhead of slices due to the fact that it's not always easy to tell whether a given slice owns its own memory on the heap or is referencing a different one; looking for everywhere that `collect` is called (or another type known to heap allocate) is comparatively a lot simpler. Maybe Go being a lot simpler than Rust could make LLMs produce code that's so much better that it makes it easier for humans to debug and fix, but at least for some types of issues, my experience has been the opposite with Go code written by humans, so I wouldn't feel particularly confident about making that prediction at this point in time.
jdright
In practice, that is not what happens. I've been doing AI-assisted Rust for some time, and it is very convincing that this is the way. I expect it to be basically fully automated in 6 months to a year.
Rust has tons of code out there, and quality code at that. Different from JS or Python, which have an abundance of low-quality to pure-garbage code.
m00dy
same here. I think rust + llm combo is unbeatable.
alextingle
This is the worst kind of busy-work. Rewriting something for the sake of it is terrible practice.
There's a lot of refreshing energy amongst Rust coders eager to produce better, more modern command-line tools. That's amazing, and a real benefit to everyone. But rewriting simple, decades-old, perfectly functional tools, with the explicit goal of not improving them is just a self-absorbed hobby. And a dangerous one at that - any new code will contain bugs, and the best way to avoid that is not to write code if you don't have to.
immibis
It's not for the sake of it - it's to have MIT-licensed (able to be made proprietary) coreutils. The GPL is a thorn in the side of anyone who wishes to EEE it.
hexo
This is far from OK. The entire point is to have the GPL, not to run away from it. Again Canonical shows they didn't get the point. Anyway I'm staying on GNU coreutils as I see no benefit and zero reason to switch.
IshKebab
That isn't the motivation. It's about "resilience, performance and maintainability".
I doubt they'll get noticeably better performance (the main GNU tools are already very optimised). I'm not sure they really lack in resilience. And I don't think memory safety is a big factor here.
Maintenance is definitely a big plus though. It will be much nicer to work on modern Rust code than ancient C.
codeguro
If you look a bit deeper, this project actually cares deeply about its license, and is going out of its way to choose the license it is using, ignore complaints, and avoid ending up GPL.
https://www.youtube.com/watch?v=5qTyyMyU2hQ
In an interview with FOSS Weekly, Sylvestre Ledru (the main developer, who curiously has a background working on Debian and Firefox, before ending up getting seduced by the clang/LLVM ecosystem), firmly states "it is not about security".
immibis
To clarify, you think the obvious benefit that is something several organizations obviously want isn't the motivation, but improvements to 3 things that probably won't be improved are the real motivations?
psd1
How much maintenance do the tools need currently, and how does it compare to the effort?
I am surprised by the implication.
TazeTSchnitzel
toybox and the BSDs already exist for those wanting permissively licensed utilities, the GPL is not a major motivator.
immibis
They do, but they're minimal versions, not one-for-one compatible with the GNU versions.
Every proprietary Linux OS (such as most Android phones) uses busybox, which puts them at a slight disadvantage, because they can't handle using GPLv3 coreutils. Now they'll use the MIT-licensed drop-in replacement coreutils.
panick21_
Yeah because non GPL coreutils don't exist.
johnisgood
Amazing, another reason for avoiding Ubuntu.
Many of these utilities have had logic errors in them; you can find them in the GitHub issues. sudo, for example, allowed some kind of bypass in a way I cannot remember.
And I bet you they are not a replacement for GNU utilities, i.e. they have fewer features, are possibly not optimized either, and perhaps even have different (or fewer) flags/options.
I have written a lot of Bash scripts, and I wonder how well (if at all) they would work on Ubuntu with the Rust utilities.
Almondsetat
What an ignorant comment. The uutils README on GitHub explicitly states in the first section that every discrepancy wrt GNU utilities is considered a bug.
johnisgood
It is a comment coming from pragmatism, not ignorance.
At any rate, feel free to look around here: https://github.com/uutils/coreutils/issues?q=is%3Aissue%20st...
It is NOT a replacement of GNU coreutils AT ALL, as of yet.
Granted under "Goals" it says "uutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.", but please look around the opened (and closed) issues (since they should tell you about the major, yet simple logic bugs that has occurred). It is definitely not ready, and I am not sure when and if it will be ready.
FWIW the README also says: "some options might be missing or different behavior might be experienced.".
Future will tell, but right now, it is an extremely bad idea to replace GNU coreutils with Rust's uutils. Do you think otherwise? Elaborate as to why if so.
lolinder
Your first sentence is totally unnecessary and relies on an uncharitable reading of what they said. It's entirely possible that they knew that a goal of the project was to treat every discrepancy as a bug and were indicating that they were of the opinion that there were plenty of bugs left and no evidence that they would ever be all squashed.
Also, while we're slinging the README around, here's the first paragraph:
> uutils coreutils is a cross-platform reimplementation of the GNU coreutils in Rust. While all programs have been implemented, some options might be missing or different behavior might be experienced.
The "ignorant comment" above was actually just pointing out the thing that the developers thought was second-most-important for people to know—right after the fact that it's a reimplementation.
yawaramin
> uutils aims to be a drop-in replacement for the GNU utils. Differences with GNU are treated as bugs.
> some options might be missing or different behavior might be experienced.
Both of these statements cannot be true at the same time.
steveklabnik
They trivially can: "aims to be" means "in the future they will be this way," not "they are presently this way." That's what "differences are bugs" means, that you may currently experience incorrect behavior, but that it is intended to be fixed in the future.
bravetraveler
Canonical making light of licensing again, no surprise here.
This has ulterior motive written all over it. Best outcome: nothing happens.
jvsgx
I have a question: why do they need their utilities to be written in Rust? Most utilities do not have to face the network, and most don't need to have root privileges. Heck, most of them are one-shot programs and the programmer doesn't even need to free memory; they could exit and return all the memory space back to the OS.
grandiego
> Most utilities do not have to face the network
True, but code from those utilities may eventually be used in the network (for example, through copied functionality and shared libraries). Also, a creative pipeline may actually involve them (think of the Unix philosophy.)
Eventually plain C has to die or be relegated to unavoidable places, as in the assembly cases, even if Rust is not the best alternative after all.
zifpanachr23
This is based on a common fallacy that people believe about CVEs vs. what actually gets exploited. I would go back and read the papers that got you so aggressively on board the memory safety train one more time and see if you can't detect motivated reasoning.
lolinder
If you have an argument to make, make it. This isn't a classroom and you're not an instructor, assigning homework to people who disagree with you isn't an effective argumentation technique.
treyd
What is the fallacy, specifically?
k_bx
Because things need to keep evolving; we see great new ideas coming from tools which add progress bars, colorful output, knowledge of the .git structure, etc. The current state of things is quite stalled, in my opinion.
kh_hk
It's funny that git structure support is often cited as a good feature to have, baked into a tool that should outlive git itself.
klooney
Because people take input from the Internet and run it through them.
fulafel
Directly facing the network is just one way that a program may come to process untrusted inputs.
jauntywundrkind
My long hope is that some day, someone starts to replace the perl-based Debian infrastructure with Rust (or really, anything).
I did a decent bit of mod_perl, love & respect perl, but here in 2025, it's pretty terrifying to me that Debian's main dependency is a pretty sizable perl runtime. That there seems to be very little that will budge this. The whole ecosystem is built atop a language that very few people have interest in, that has much less interest & activity than other modern alternatives.
It's fine if we start switching the user env over to more popular Rust-based utilities. But what I really want is for the OS itself to modernize, to start getting away from its ancient, seemingly unshakeable legacy. Nothing against Perl, but I'd love to see this family of OSes move beyond being Perl-only.
kiney
Other distributions have non-Perl-based replacements for basically all of the system utilities that Debian writes in Perl, but as things currently stand, the Debian versions written in Perl are still unmatched.
stefan_
All you are doing is setting people up for more permanent bash-dash disasters. Isn't there something more useful you could be doing with your time? Say making drag&drop work in snaps (another unforced disaster)?
OsrsNeedsf2P
The "security" in Wayland, Snaps and Flatpaks are starting to irk me. Macros aren't working, clipboard forgets my copy, drag n' drop is flimsy - AppImages like Cursor can't even open on vanilla Ubuntu anymore. These "features" no one asked for should be opt-in.
Etheryte
In a way they are opt-in, as in you opt in by using Ubuntu and opt out by using something else.
porridgeraisin
Yep, exactly. The whole thing is just theater. I hate it. I know I can just use X11 (and I do), but the already small appetite for developing for linux will be stretched even more thin in the coming years and we will probably only have Wayland apps. The bright side however, is that Wayland could make itself so utterly garbage that people just ignore making apps for it (I'm not counting apps targeting just kde/gnome, which anyway diverge far enough from wayland protocols to warrant being mentioned separately). So we would have X11/KDE/Gnome support for linux apps. That's probably the best case scenario.
surajrmal
Do you think app permissions in Android and iOS apps are not helpful? There isn't really a reason that the permission models make sense on mobile but not desktop. Desktop applications are inherently more trustworthy. The fact that the security features get in your way is more of a product finesse problem, which plagues open source projects in general. The technology is not conceptually flawed.
jeroenhd
> Desktop applications are inherently more trustworthy
Why? If 30 years of Windows being the most internet-connected OS has proven anything, it's that 99% of desktop executables should be deleted before they make it through the disk write cache.
Mobile platforms show that mandatory sandboxes are a massive bonus to preventing malware. Desktop operating systems refuse to go that route because of the risk of breaking existing tools, which is why nobody bothers to actually use sandboxing APIs, which is why many third party sandboxing attempts keep breaking.
There's no reason why desktops can't exchange files through virtual file systems or dedicated sharing APIs like on mobile. My calculator doesn't need camera access and my music player can do without biometrics. The desktop is stuck in the 1990's permission model (or 1970s, for Unix-likes) because that's what people have gotten used to.
My experience with the Steam Deck is that Flatpak sandboxing on a read-only system image works absolutely fine. Things would work better if desktop applications would bother standardising and following existing standards, but the technology works. Sure, it's not good enough if you're a Linux kernel developer, but the 0.01% of users who are affected by those limitations shouldn't limit everyone else.
Some desktop users may think themselves to be too smart to get hacked, especially on Linux, but their desktops are not that different from smartphones.
knowitnone
I agree, they should rewrite the kernel in Rust
worik
> they should rewrite the kernel in Rust
I adore Rust
Please don't!
Can we get on with innovating on the backs of what went before, not reinvent it...
beanjuiceII
yea that's called a completely different project
Animats
Has someone built Busybox in Rust yet? That would be good for embedded.
chiffaa
Technically uutils/coreutils should still suffice for this goal, as it can build into a single-binary tool à la busybox (iirc that's the default actually)
lifeinthevoid
Busybox has very basic implementations of the tools though, to keep the size in check.
> Of course they are considered problematic in Rust.
> And leaks are hard to code by accident in Rust.
> ...
I enjoyed this gem and its descendants from the comments. What I see instead, commonly, even in big Rust projects, is that it's easy to accidentally define something with a longer lifetime than you intend. Some of that happens accidentally (not recognizing the implications of an extra reference here and there when the language frees things based on static detections of being unused). Much more of it happens because the compiler fights against interesting data structures -- e.g., the pattern of allocating a vec as pseudo-RAM, using indices as pseudo-pointers, and never freeing anything till the container itself is unused.
There's nothing wrong with those techniques per se, but the language tends to paint you into a bit of a corner if you're not very good and very careful, so leaks are a fact of life in basically every major Rust project I've seen not written by somebody like BurntSushi, even when that same sort of project would not have a leak in a GC language.