
Oxidizing Ubuntu: adopting Rust utilities by default

hansvm

> Of course they are considered problematic in Rust.

> And leaks are hard to code by accident in Rust.

> ...

I enjoyed this gem and its descendants from the comments. What I see instead, commonly, even in big Rust projects, is that it's easy to accidentally define something with a longer lifetime than you intend. Some of that happens accidentally (not recognizing the implications of an extra reference here and there, when the language frees things based on static detection of when they are last used). Much more of it happens because the compiler fights against interesting data structures -- e.g., the pattern of allocating a vec as pseudo-RAM, using indices as pseudo-pointers, and never freeing anything until the container itself is unused.

There's nothing wrong with those techniques per se, but the language tends to paint you into a bit of a corner if you're not very good and very careful, so leaks are a fact of life in basically every major Rust project I've seen not written by somebody like BurntSushi, even when that same sort of project would not have a leak in a GC language.
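For readers who haven't met the pattern: below is a minimal sketch of the Vec-as-pseudo-RAM, indices-as-pseudo-pointers approach described above (all names are hypothetical, not from any project mentioned in this thread). Note that nothing is reclaimed until the arena itself is dropped, which is the leak-like behaviour being discussed.

```rust
// Minimal sketch of the "Vec as pseudo-RAM, indices as pseudo-pointers" pattern.
// All names here are hypothetical; this is not code from any project above.

#[derive(Clone, Copy)]
struct NodeId(u32); // a pseudo-pointer: just an index into the arena

struct Node {
    value: i64,
    next: Option<NodeId>, // "pointer" to another node, no borrow checker involved
}

struct Arena {
    nodes: Vec<Node>, // the pseudo-RAM; individual nodes are never freed
}

impl Arena {
    fn new() -> Self {
        Arena { nodes: Vec::new() }
    }

    fn alloc(&mut self, value: i64, next: Option<NodeId>) -> NodeId {
        let id = NodeId(self.nodes.len() as u32);
        self.nodes.push(Node { value, next });
        id
    }

    fn get(&self, id: NodeId) -> &Node {
        &self.nodes[id.0 as usize]
    }
}

fn main() {
    let mut arena = Arena::new();
    let tail = arena.alloc(2, None);
    let head = arena.alloc(1, Some(tail));
    // Nodes stay allocated until `arena` itself is dropped, even if they
    // become unreachable from any live NodeId -- hence the "leaky" feel.
    println!("{} -> {}", arena.get(head).value, arena.get(tail).value);
}
```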

humanfromearth9

Why doesn't anyone rewrite some of these tools in a language that can be compiled to a native binary by GraalVM and benefit from all of Java's security guarantees? Would it be too slow? Don't the advantages outweigh the downsides?

dpe82

You could do similar with the existing C by compiling it to WASM and then compiling that to machine code.

KeplerBoy

Rust is trendy, Oracle is the opposite.

Of course this isn't a very nuanced or clever take, but it's certainly part of the truth.

mmoskal

The vec pattern has some advantages though, in particular you can often get away with 32 bit indices (instead of 64 bit pointers), making it a little more cache-friendly. I did it for regex ASTs that are supposed to be hash-consed, so never need to die until the whole matcher dies [0].

A more contrived example is Earley items in a parser that point inside a grammar rule (all rules are in a flat vec) and back into the previous parser state - so I have two u32 offsets. If I had pointers, I would be tempted to have a pointer to the grammar rule, an index inside of it, and a pointer to the previous state [1], so probably 3x the space.

In both cases, pointers would be easier but slower. It's annoying that Rust doesn't really let you make the choice, though...

[0] https://github.com/microsoft/derivre/blob/main/src/hashcons.... [1] https://github.com/guidance-ai/llguidance/blob/main/parser/s...
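A rough illustration of the space argument (hypothetical stand-in types, not the actual derivre or llguidance structures): two u32 offsets pack into 8 bytes, while the pointer-based equivalent on a 64-bit target lands around 24 bytes.

```rust
// Hypothetical layouts illustrating the size argument above.

#[allow(dead_code)]
struct GrammarRule; // stand-in for a real rule type

#[allow(dead_code)]
struct CompactItem {
    rule_offset: u32, // offset into a flat Vec of grammar rules
    prev_state: u32,  // offset into the previous parser state
}

#[allow(dead_code)]
struct PointerItem<'a> {
    rule: &'a GrammarRule,     // pointer to the grammar rule
    dot: usize,                // index inside the rule
    prev: &'a PointerItem<'a>, // pointer to the previous state
}

fn main() {
    // On a typical 64-bit target this prints 8 vs 24 bytes.
    println!("compact: {} bytes", std::mem::size_of::<CompactItem>());
    println!("pointer: {} bytes", std::mem::size_of::<PointerItem<'static>>());
}
```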

hansvm

By all means. It's a pattern I use all the time, not just in Rust (often getting away with much less than 32-bit indices). You mentioned this and/or alluded to it, but my core complaints are:

1. You don't have much choice in the matter

2. When implementing that strategy, the default (happy path coding) is that you have almost zero choice in when the underlying memory is reclaimed

It's a pattern that doesn't have to be leaky, but especially when you're implementing it in Rust to circumvent borrow-checker limitations, most versions I've seen have been very leaky, not even supporting shrink/resize/reset/... operations to at least let the user manage memory manually.
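As a sketch of the kind of manual escape hatch being asked for, here is what shrink/reset-style operations on an arena could look like (hypothetical API, not taken from any existing crate):

```rust
// Hypothetical additions to an arena type, giving the caller manual control
// over when memory is reclaimed instead of waiting for the arena to drop.
#[allow(dead_code)]
struct Arena<T> {
    items: Vec<T>,
}

#[allow(dead_code)]
impl<T> Arena<T> {
    /// Drop all items but keep the capacity for reuse.
    fn reset(&mut self) {
        self.items.clear();
    }

    /// Drop all items and return the memory to the allocator.
    fn reset_and_shrink(&mut self) {
        self.items.clear();
        self.items.shrink_to_fit();
    }

    /// Truncate back to a caller-chosen watermark, invalidating later indices.
    fn truncate(&mut self, len: usize) {
        self.items.truncate(len);
    }
}

fn main() {
    let mut arena = Arena { items: Vec::new() };
    arena.items.extend(0..1_000_000);
    arena.reset_and_shrink(); // memory goes back to the allocator here, not at scope end
}
```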

MeetingsBrowser

> not recognizing the implications of an extra reference here and there when the language frees things based on static detections of being unused

Maybe I’m misunderstanding, but isn’t the point of the borrow checker to throw errors if a reference “outlives” the memory it references?

How can an extra reference extend the lifetime?

hansvm

Rust (most of the time, I'm not arguing about failure modes at all right now, let's pretend it's perfect) drops items when you're done with them. If you use the item longer, you have a longer implicit lifetime. The memory it references will also be held for longer (the reference does not outlive its corresponding memory).

You only fix that by explicitly considering lifetimes as you write your code -- adding in concrete lifetimes and letting the compiler tell you when you make a mistake (very hard to do as a holistic strategy, so nobody does), or just "git gud" (some people do this, but it takes time, and it's not the norm; you can code at a very high level without developing that particular skill subset, with the nearly inevitable result of "leaky" rust code that's otherwise quite nice).
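A small illustration of that point about implicit lifetimes: nothing here is unsafe or wrong, the allocation simply stays live as long as something still uses a reference into it.

```rust
fn main() {
    let big = vec![0u8; 100_000_000]; // ~100 MB

    let first = &big[0]; // an "extra reference here and there"

    do_a_lot_of_other_work();

    // Because `first` (and therefore `big`) is still used down here, the
    // 100 MB cannot be reclaimed until after this point. Read the value
    // earlier and drop the reference, and `big` could be freed before the
    // expensive work above runs.
    println!("{first}");
}

fn do_a_lot_of_other_work() {
    // stand-in for long-running code that doesn't need `big` at all
}
```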

worik

> drops items when you're done with them.

That is what GC languages do, too.

You can get serious resource leaks in any language if you are careless

What Rust guarantees is that you cannot dereference a null pointer.

jvanderbot

The claim was "safe", not GC-equivalent or memory-minimal. You're right, all things have tradeoffs and the borrow checker is no different.

kortilla

The claim was “leaks are hard to code by accident”. I agree with gp that this is false.

Preventing leaks is explicitly not a goal of rust and making lifetimes correct often involves giving up and making unnecessary static lifetimes. I see this all the time in async rpc stream stuff.

lifthrasiir

I think leaks are indeed harder to code by accident in non-async Rust, which was the original setting Rust was designed for. I wouldn't say it is absolutely hard (that really depends on the architectural complexity), but there seems to be some truth in that claim.

haileys

When a type is 'static, it doesn’t mean it’s leaked for the lifetime of the program. It just means it owns everything it needs and doesn’t borrow anything. It will still free everything it owns when it is freed.
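A quick illustration using plain std types: String satisfies a 'static bound because it owns its data and borrows nothing, yet it is dropped as soon as its owner is done with it.

```rust
use std::thread;

fn main() {
    // String is 'static in the "owns everything, borrows nothing" sense,
    // which is why it satisfies the 'static bound that thread::spawn requires.
    let owned = String::from("hello");

    let handle = thread::spawn(move || {
        println!("{owned}");
        // `owned` is dropped (and its heap buffer freed) right here, when the
        // closure finishes -- long before the program exits. 'static describes
        // how long it *may* live, not how long it actually lives.
    });

    handle.join().unwrap();
}
```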

blueflow

The uutils project has the right goals - 1:1 compatibility with GNU coreutils to the extent that any difference in functionality is a bug.

The first comment on LWN is about a bug in the more(1) from uutils. I checked out that code, found some oddities (like doing stat() on a path to check if it exists, right before open()ing it) and went to check against how GNU coreutils does it.

Turns out coreutils does not do it at all, because more(1) is from util-linux. ta-dam.

LeFantome

I think this is an excellent point that a lot of people are missing.

There are a lot of GNU utils. For now, Ubuntu is only taking on the ones that are actually in coreutils. Those are the ones that uutils considers "production ready".

01HNNWZ0MV43FF

That's funny because just today I was reading the Rust docs for std::fs::File and saw a warning about that exact TOCTOU error, and had to fix my code to not hit it
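For context, the TOCTOU pattern in question looks roughly like this (hypothetical snippet): checking for existence and then opening leaves a window in which the file can appear, vanish, or be replaced, whereas opening directly and matching on the error has no such window.

```rust
use std::fs::File;
use std::io::ErrorKind;
use std::path::Path;

fn main() -> std::io::Result<()> {
    let path = Path::new("input.txt");

    // TOCTOU-prone: the file can change between the check and the open.
    if path.exists() {
        let _f = File::open(path)?;
    }

    // Better: just open it and handle the error; there is no window to race.
    match File::open(path) {
        Ok(_f) => { /* use the file */ }
        Err(e) if e.kind() == ErrorKind::NotFound => { /* handle "missing" here */ }
        Err(e) => return Err(e),
    }

    Ok(())
}
```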

secondcoming

> Klode said that it was a bad idea to allow users to select between Rust and non-Rust implementations on a per-command level, as it would make the resulting systems hard to support.

Wouldn't this imply that these aren't actually 1:1 replacements?

zamadatix

It would imply that not all of the replacements are complete and/or bug-free yet (the article gives examples of just that earlier in the text), but it would not imply the replacements lack the goal of being 1:1 replacements, as GP said.

This would seem to be one of the driving factors in the creation of oxidizr: to allow testing how ready the components are to be drop-in replacements, with easy recourse. You can read more about that in this section of the linked blog post by jnsgruk: https://discourse.ubuntu.com/t/carefully-but-purposefully-ox...

samtheprogram

Not sure why you are downvoted but I’m curious why. I appreciated this tidbit, thanks.

cWave

the more you know

jgtrosh

And? How does that do it?

ZoomZoomZoom

> This is not symbolic of any pointed move away from GNU components - it's literally just about replacing coreutils with a more modern equivalent. Sure, the license is different, and it's a consideration, but it's by no means a driver in the decision making.

Sorry, don't believe this one bit. I'm very thankful for everything Canonical/Ubuntu did about 20 years ago, but no thanks. This comes from someone who loves Rust and what it made possible. However, freedoms are too important to not treat anything that looks like an attack on them as such.

imiric

Ubuntu lost its way a long time ago. Pushing Snap by default, ads in the Unity Dash search, ads in the terminal... Unity itself was a misstep, and that was around the time the distro started going downhill for me.

I don't follow it much these days, but nothing really surprises me from Canonical anymore. They leech off free software just like any other corporation.

saghm

Whatever happened to the whole debate around Ubuntu including ZFS modules by default? At the time it originally got proposed, I feel like I remember basically everyone other than Canonical agreeing that this wasn't allowed by the licenses of Linux and ZFS, but they did it anyways, and from what I can tell, they basically got away with it?

Unless I'm remembering it wrong, I'm honestly not surprised that they might just be less worried about licensing in general after that; maybe this is the software licensing equivalent of "too big to fail"?

mustache_kimono

> from what I can tell, they basically got away with it?

Or they correctly interpreted the law?

Few of us care to admit how this "legal consensus" is mostly BS. Lots of evidence to the contrary, but, like politics these days, it sure is an appealing fantasy?

queuebert

I just realized this today. When trying to upgrade an EOL-ed Ubuntu machine using 'do-release-upgrade', it completely shat the bed. Now I'm in search of an alternative distro that has good GPU support, rolling releases, no systemd. Maybe OpenSUSE?

If there are any SV billionaires out there, can you fund CUDA on OpenBSD please? :-P

yjftsjthsd-h

No systemd really limits the options (including that OpenSUSE uses it, so I'm not sure how to read your comment). Maybe Artix Linux? That's rolling-release and systemd-free, though I don't know about graphics drivers. I would have suggested Pop!_OS, which has an awful name but is basically Ubuntu without the bad stuff and with excellent graphics drivers, but it has systemd and isn't rolling, so YMMV.

bschmidt991

[flagged]

superb_dev

Which freedoms are being attacked?

endgame

Remember when Red Hat tried to lock away the source code to subscribers only? I think they would have preferred not to provide source at all, but the enormous base of GPL code entangled with everything makes that impractical.

I think we're really going to miss having the fundamental parts of the system available under strong copyleft, and it will be very hard to go back.

LeFantome

Red Hat is probably the biggest single provider of GPL software in the world. They consistently choose the GPL for software they author and release. They founded and fund the Fedora Project themselves explicitly to be a community driven distro independent of their commercial efforts. They provide the full source to their products even though a huge percentage of the code is MIT, BSD, or Apache and does not require them to. They provide everything required to produce the "aggregate" full product even though the GPL does not require them to.

I will go on record now with the prediction that, if uutils catches on, Red Hat will be one of the last companies to move away from GNU. They are probably the largest contributor to glibc and GCC.

What evidence do you have that they would have "preferred not to provide source at all"? Because there is a mountain of evidence otherwise.

As an individual user, you can get a free license to Red Hat's flagship product, and they will tell you how to download every line of code for it. I don't have a license; I do not use Red Hat (other than occasionally using the RHEL9 container, which is also totally free).

"Remember when Red Hat tried to lock away the source code to subscribers only?"

I do not remember them changing the policy you are talking about. If you think they did, it highlights how overblown the response was at the time. The biggest impact of the Red Hat change is that Alma Linux is now a better project that can actually innovate and contribute.

stouset

Everything in uutils is under the MIT license. The only thing you're missing is the "viral" nature of the GPL. Nobody in the BSD world seems to be particularly suffering under the thumb of unreleased forks of `chown` and `mkdir`.

zifpanachr23

Red Hat has been more consistently pro GPL and pro open source and invests more money into it than any other company.

Them putting the downloads of their enterprise OS behind a login screen (student and solo-developer licenses can STILL be obtained for free) is something I 100% understand and am kind of sympathetic about.

I know that's out of line with how some people define free software, but I've always had ethical issues with the way people would redistribute Red Hat's work for free. Just because it's legal doesn't make it ethical.

ddulaney

The last 2 paragraphs of this interview with David Chisnall really made me think differently about that: https://lobste.rs/s/ttr8op/lobsters_interview_with_david_chi...

In particular:

> I think the GPL has led to fairly noticeable increase in the amount of proprietary software in the world as companies that would happily adopt a BSDL component decide to create an in-house proprietary version rather than adopt a GPL’d component.

It also aligns with my experience: my company couldn’t find an LZO compression library that wasn’t GPL’d, so the decision was between implementing one in-house or cutting the feature. We ended up restricting use of the feature to in-house use only, but opening up our core source code was never an option.

If there had been a permissive license option available, we would’ve likely donated (as we do to several other dependencies), and would’ve contributed any fixes back (because that’s easier to explain to customers than “here’s our patched version”).

bigstrat2003

That isn't something Canonical could do just because the software has a permissive license instead of a copyleft license. They can do that if they own the copyright (which I can't imagine they do), but if they own the copyright they can do anything they want and nobody has a say in it. So again, it has nothing to do with the license.

The absolute most that Canonical can do is start making a fork of the software where they don't distribute the source code. But in that case, the original code is still right there. No freedom is lost at any point.

sanderjd

Why don't you believe it?

From my perspective, it seems eminently plausible. Who cares about the licensing?

But I'm interested in what leads you to the opposite conclusion.

bitwize

[flagged]

metroholografix

Stallman has done more to safeguard our freedoms than all the critics and corporations that are happy to exploit his work put together.

He’s also been extremely prescient and decades ahead of his time. He stood his ground against relentless attacks where so many others have sold out and betrayed whatever morals they thought they had.

knowknow

These comments are frightening, since they throw away decades' worth of work and stability for whatever is being advocated for right now. How do we know that we can trust Rust developers to actively maintain a project when they seem eager to follow whatever the most current thing is? It's the same situation with the Asahi Linux lead dev who quit at temporary pushback. There's no faith that they will actually be committed to it.

kouteiheika

> when they seem eager to follow whatever the most current thing is?

Rust hit 1.0 *10 years ago*. How many more years will it take for people to stop constantly insinuating that people only use it because of the hype, and not simply because it's a vastly better language than C?

desumeku

This post reads like parody, but unfortunately people really believe this. I hope the young "nonbinary folx" have fun enjoying their Rust replacements for ls, while they neglect learning the language that actually powers the Linux kernel and corporations exploit their MIT licensing for their own benefit.

PaulDavisThe1st

How much of the GNU Project was "presented by" Stallman? What would your threshold be where, if the figure was larger, using it would be unacceptable, but if smaller, it would be OK ?

johnny22

I am not that concerned about more permissively licensed versions of what are effectively commodities (like coreutils). I am still very interested in keeping something like the kernel under the GPL, since it doesn't have a real substitute.

zoogeny

I hate to admit it, because I don't particularly like Rust, but I'm slowly coming around to the idea it should replace things.

The major change came from a recent comment I made where I was musing about a future where LLMs actually write most of the code. That isn't a foregone conclusion, but the recent improvements in coding LLMs suggest it isn't as far off a future as I once considered.

My thought was simple: if I am using an LLM to do a significant portion of the code generation, how does that change what I think about the programming language I am using? Do the criteria I use to select the language change?

Upon reflection, Rust is probably the best candidate. It has strictness in ways that other languages do not. And if the LLM is paying most of the cost of keeping the types and lifetimes in order, what do I care if the syntax is ugly? As long as I can read the output of the LLM and verify it (code-review it), I actually want the strictness. I want the most statically analyzable code possible, since I have low trust in the LLM. The fact that Rust is also, ahem, blazingly fast is icing on the cake.

As an aside to this aside, I was also thinking about modular kernels, like Minix. I wonder if there is a world where we take a userland like the one Ubuntu is trying and apply it to Minix. And then slowly replace the os modules with ones written in Rust. I think the modularity of something like Minix might be an advantage, but that might just be because I am completely naïve.

Cloudef

LLM truly provides the garbage in, garbage out experience

null

[deleted]

kortilla

The rust produced by LLMs is quite bad. It’s overly verbose (misses combinators) and often subtly wrong (swallows errors on result types when it shouldn’t). A single errant collect or clone call can destroy your performance and LLMs sprinkle them for no reason.

Unless you are experienced in rust, you have zero ability to catch the kind of mistakes LLMs make producing rust code.
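Two of those failure modes in miniature (hypothetical code in the style an LLM might emit, with fixes alongside): a silently discarded Result, a needless collect, and a needless clone.

```rust
// Hypothetical examples of the failure modes described above; not taken from
// any real LLM transcript.
use std::fs;

fn main() {
    // 1. Swallowed error: `let _ =` silently discards a Result that matters.
    let _ = fs::write("out.txt", "data"); // a write failure vanishes here
    // Handle it instead (or propagate with `?` in a function returning Result):
    fs::write("out.txt", "data").expect("failed to write out.txt");

    // 2. Needless collect: materializing a Vec just to consume it immediately.
    let evens: Vec<u32> = (0..1_000_000).filter(|n| n % 2 == 0).collect();
    let slow_count = evens.len();
    // The iterator can be consumed directly, with no intermediate allocation:
    let fast_count = (0..1_000_000u32).filter(|n| n % 2 == 0).count();
    assert_eq!(slow_count, fast_count);

    // 3. Needless clone: cloning an owned value only to borrow it.
    let name = String::from("ubuntu");
    let a = shout(&name.clone()); // the clone buys nothing
    let b = shout(&name);
    assert_eq!(a, b);
}

fn shout(s: &str) -> String {
    s.to_uppercase()
}
```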

sroussey

I just had an LLM (something 3.7) propose a 100-line implementation, and after I stared at things for a while, I reduced it to one line. I'm sure I'm in the minority for not just accepting the 100-line addition.

andai

Could you elaborate? What were those lines doing?

realusername

> Unless you are experienced in rust, you have zero ability to catch the kind of mistakes LLMs make producing rust code.

I'd say it's on par with other languages then... LLMs are roughly 95% correct on code generation, but that's not nearly enough to rely on them.

And spending time finding which 5% looks good but is actually wrong is a frustrating experience.

Those programs make different kinds of mistakes than humans do, and I find them much harder to spot.

Diederich

Is this deficiency likely to persist long-term as LLMs grow more powerful?

queuebert

I think one issue is that there is less extant Rust code to train on. As more Rust is written and published, LLMs should in theory get better.

_factor

It's interesting that it took you this long to substitute "junior coder" with "LLM". The implicit safety is just as applicable to teams of error-prone human devs.

bschmidt731

[flagged]

Covenant0028

I've had similar thoughts as well. Rust or C would also be the ideal candidates from an energy-consumption point of view, as they consume far less energy than Python, which is the language LLMs most often default to in my experience.

However, it's unlikely LLMs will generate a lot of Rust code. All these LLMs are generating is the most likely next token, based on what they've been trained on. And in their training set, there is likely massively more Python and JS code than Rust, simply because those are far more popular languages. So with Rust, an LLM is more likely to hallucinate and make mistakes than with Python, where the path is much better trodden.

phanimahesh

However, it is much easier to statically analyse Rust, and Rust has compile-time validation, unlike an interpreted language like Python. This makes it easier to produce code via a write-compile-fix loop from an agentic LLM in Rust than in Python.

In my experience, LLMs are not particularly bad with Rust compared to Python, though I've only toyed with them minimally.

saghm

While I'm lucky enough not to ever have had to spend a while trying to refactor/fix LLM-produced code, I've definitely had a much easier time trying to do major refactors to fix larger issues with human-written Rust code than any other language I've tried it with. The compiler obviously doesn't catch everything, but the types of things it _can_ catch seem to be very common to encounter when trying to make cascading changes to a codebase to fix the types of issues I've seen refactors attempt to address, and my instinct is that these might also be the types of things that end up needing to be fixed in LLM-produced code.

Elsewhere in this thread someone mentioned LLMs producing poor performing code that does stuff like collect iterators far too often and pointed out that fixing this sort of thing often requires significant Rust expertise, and I don't disagree with that. However, my impression is that it still ends up being less work for an experienced Rust programmer to claw back some memory from poor code than for someone similarly experienced in something other than Rust working in a code base in their own language. I've seen issues when three or four Go engineers spend weeks trying to track down and reduce memory overhead of slices due to the fact that it's not always easy to tell whether a given slice owns its own memory on the heap or is referencing a different one; looking for everywhere that `collect` is called (or another type known to heap allocate) is comparatively a lot simpler. Maybe Go being a lot simpler than Rust could make LLMs produce code that's so much better that it makes it easier for humans to debug and fix, but at least for some types of issues, my experience has been the opposite with Go code written by humans, so I wouldn't feel particularly confident about making that prediction at this point in time.

jdright

In practice, that is not what happens. I've been doing AI-assisted Rust for some time, and it is very convincing that this is the way. I expect it to be basically fully automated within 6 months to a year.

Rust has tons of code out there, and quality code at that, unlike JS or Python, which have an abundance of code ranging from low quality to pure garbage.

m00dy

same here. I think rust + llm combo is unbeatable.

jauntywundrkind

My long hope is that some day, someone starts to replace the perl-based Debian infrastructure with Rust (or really, anything).

I did a decent bit of mod_perl, and I love and respect Perl, but here in 2025 it's pretty terrifying to me that Debian's main dependency is a pretty sizable Perl runtime, and that there seems to be very little that will budge this. The whole ecosystem is built atop a language that very few people have interest in, and that has much less interest and activity than other modern alternatives.

It's fine if we start switching the user env over to more popular Rust-based utilities. But what I really want is for the OS itself to modernize, to start getting away from its ancient, seemingly unshakeable legacy. Nothing against Perl, but I'd love to see this family of OSes move beyond being Perl-only.

bschmidt803

[flagged]

jvsgx

I have a question: why do they need their utilities to be written in Rust? Most utilities do not have to face the network, and most don't need root privileges. Heck, most of them are one-shot programs where the programmer doesn't even need to free memory; they could just exit and return all their memory to the OS.

k_bx

Because things need to keep evolving; we see great new ideas coming from tools that add progress bars, colorful output, knowledge of the .git structure, etc. The current state of things is quite stalled, in my opinion.

kh_hk

It's funny that git structure support is often cited as a good feature to have, baked into a tool that should outlive git itself.

grandiego

> Most utilities do not have to face the network

True, but code from those utilities may eventually be used in the network (for example, through copied functionality and shared libraries). Also, a creative pipeline may actually involve them (think of the Unix philosophy.)

Eventually plain C has to die or be relegated to unavoidable places, as with assembly, even if Rust is not the best alternative after all.

zifpanachr23

This is based on a common fallacy that people believe about CVEs vs. what actually gets exploited. I would go back and read the papers that got you so aggressively on board the memory safety train one more time and see if you can't detect motivated reasoning.

nukem222

GNU is free to adopt better tools than C/C++. I adore GNU, but I refuse to support blatant and obstinate idiocy. Using the license to extort people into supporting C/C++ is just scummy behavior. What the fuck were you doing the last ten years that this wasn't a GNU project? Instead we got fucking systemd and a dozen more knockoffs of the shittiest OS on earth.

Animats

Has someone built Busybox in Rust yet? That would be good for embedded.

chiffaa

Technically uutils/coreutils should still suffice for this goal, as it can build into a single-binary tool à la busybox (IIRC that's actually the default).
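For anyone unfamiliar with the busybox approach: a multicall binary dispatches on the name it was invoked as (argv[0]), so one executable can be symlinked as ls, cat, echo, and so on. A generic sketch of the idea follows; this is hypothetical and not how uutils actually structures its dispatcher.

```rust
// Generic sketch of argv[0] dispatch in a busybox-style multicall binary.
use std::env;
use std::path::Path;
use std::process::ExitCode;

fn main() -> ExitCode {
    let mut args = env::args();
    let argv0 = args.next().unwrap_or_default();
    // The applet name is whatever the binary (or symlink) was invoked as.
    let applet = Path::new(&argv0)
        .file_name()
        .and_then(|n| n.to_str())
        .unwrap_or("");

    match applet {
        "true" => ExitCode::SUCCESS,
        "false" => ExitCode::FAILURE,
        "echo" => {
            println!("{}", args.collect::<Vec<_>>().join(" "));
            ExitCode::SUCCESS
        }
        other => {
            eprintln!("unknown applet: {other}");
            ExitCode::FAILURE
        }
    }
}
```

Installed as symlinks (for example, `ln -s multicall echo`), each name then behaves like the corresponding tool while sharing one binary on disk.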

lifeinthevoid

Busybox has very basic implementations of the tools though, to keep the size in check.

bschmidt991

[flagged]

worik

This is on brand for Canonical.

Bleeding edge beta (alpha?) software put in the core of their system

Reminds me of when they switched Gnome out for Ubuntu One, no way back if you did 'aptitude full-upgrade'.

It was a buggy leaky pos

Again, dear friends...

stefan_

All you are doing is setting people up for more permanent bash-dash disasters. Isn't there something more useful you could be doing with your time? Say making drag&drop work in snaps (another unforced disaster)?

OsrsNeedsf2P

The "security" in Wayland, Snaps and Flatpaks are starting to irk me. Macros aren't working, clipboard forgets my copy, drag n' drop is flimsy - AppImages like Cursor can't even open on vanilla Ubuntu anymore. These "features" no one asked for should be opt-in.

surajrmal

Do you think app permissions in Android and iOS apps are not helpful? There isn't really a reason that the permission models make sense on mobile but not on desktop; desktop applications are not inherently more trustworthy. The fact that the security features get in your way is more of a product-finesse problem, which plagues open source projects in general. The technology is not conceptually flawed.

Etheryte

In a way they are opt-in, as in you opt in by using Ubuntu and opt out by using something else.

porridgeraisin

Yep, exactly. The whole thing is just theater. I hate it. I know I can just use X11 (and I do), but the already-small appetite for developing for Linux will be stretched even thinner in the coming years, and we will probably only have Wayland apps. The bright side, however, is that Wayland could make itself so utterly garbage that people just ignore making apps for it (I'm not counting apps targeting just KDE/GNOME, which anyway diverge far enough from the Wayland protocols to warrant being mentioned separately). So we would have X11/KDE/GNOME support for Linux apps. That's probably the best-case scenario.

knowitnone

I agree, they should rewrite the kernel in Rust

worik

> they should rewrite the kernel in Rust

I adore Rust

Please don't!

Can we get on with innovating on the backs of what came before, rather than reinventing it...

beanjuiceII

Yeah, that's called a completely different project.

black_13

[dead]