
Build It Yourself


160 comments · January 24, 2025

palata

I like Rust-the-language, but I hate the Rust-dependencies situation. Honestly I like C++ a lot better for that. People will complain that "it's hard to add a dependency to a C++ project", but I actually see it as a feature. It forces you to think about whether or not it is worth it.

In C++, I control my dependencies. In Rust I get above 100 so quickly I just give up. In terms of security, let's be honest: I have no clue what I am shipping.

Also, Rust has no ABI compatibility and no culture of shared libraries (I guess it wouldn't be practical anyway given the number of libraries one needs). But that just destroys the OS package distribution model: when I choose a Linux distribution, I choose to trust those who build it. Say Canonical has a security team that tries to minimize the security issues in the packages that Ubuntu provides. Rust feels a lot more like Python in that sense, where anyone can push anything to PyPI.

fxtentacle

How is Debian / Ubuntu secure?

It's signed by a maintainer. And maintainers are vetted. You trust Debian/Ubuntu to only allow trustworthy people to sign packages.

How are Docker / Python / Rust secure? I don't know any of the people who created my docker images, PyPi packages, or Rust crates.

Yes.

We're basically back to sending around EXE and DLL files in a ZIP. It's just that now we call it a container and proudly start it as root.

BTW, I agree with the author of the article: Sometimes you're best off just merging dependency source code. It used to be called "vendoring" and was a big thing in Rails / Ruby. The big advantage is that you're not affected by future malicious package takeovers. But you can still merge security patches from upstream, if you choose to do that.

KronisLV

> How are Docker / Python / Rust secure? I don't know any of the people who created my docker images, PyPi packages, or Rust crates.

I know who created the Docker images, because I'm the person who built them!

A lot of the time you can build your images either from scratch, or based on the official base images like Alpine or Ubuntu/Debian or some of the RPM base images, not that much different than downloading an ISO for a VM. With a base, you can use apk/apt/dnf to get whatever packages you want if you trust that more, just remember to clean up the package cache so it's not persisted in the layers (arguably wastes space). For most software, it actually isn't as difficult as it might have initially seemed.
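A minimal sketch of that approach (the base image and package below are just examples):

    # Start from a base you already trust.
    FROM debian:bookworm-slim
    # Install distro packages and clean the apt cache in the same RUN step,
    # so the cache is never persisted into a layer.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
        && rm -rf /var/lib/apt/lists/*
    COPY myapp.jar /opt/myapp.jar
    CMD ["java", "-jar", "/opt/myapp.jar"]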

As an alternative, you can also look for vaguely trustworthy parties that have a variety of prepackaged images available and you can either borrow their Dockerfiles or just trust their images, for example, https://bitnami.com/stacks/containers and https://hub.docker.com/u/bitnami

Most likely you have to have trust somewhere in the mix, for example, I'm probably installing the Debian/Ubuntu packaged JDK instead of compiling mine in most cases, just because that's more convenient.

Also, rootless containers are pretty cool! People like Podman and other such solutions a lot. You can even do some user-namespace remapping with Docker if there are issues with how the containers expect to be run (https://docs.docker.com/engine/security/userns-remap/), or, if you have to use Docker and need something a bit more serious, you can try rootless mode: https://docs.docker.com/engine/security/rootless/

woodruffw

I don’t follow this reasoning: you might trust this distribution packager to be honest, but this doesn’t stop them from honestly packaging malicious code. It’s unlikely that the typical distribution packager is reviewing more than a minority of the code in the packages they’re accepting, especially between updates.

There are significant advantages to the distribution model, including exact provenance at the point of delivery. But I think it’s an error to treat it as uniquely trustworthy: it’s the same code either way.

WhyNotHugo

"Trust" as two meanings in English:

- You can trust someone as in "think they're honest and won't betray me".

- You can trust someone as in "think they are competent and won't screw-up".

In this case, we trust distribution packagers in both ways. Not only do we trust that they are non-malicious, we also trust that they won't clumsily package code without even skimming through it.

palata

It's not only the packager. Some distros have an actual security team. That does not mean they audit everything, but it's vastly better than getting random code from PyPi.

tonyhart7

You use an in-house solution??? Anyone who wants to build something minimal and dependency-free has a valid concern, I agree with that.

But what if people just don't want that??? It gives people jobs and things to do (lol, this is serious).

Also, there are certain things, like crypto, that are better to get from a third-party library than to develop in house.

liontwist

> How is Debian / Ubuntu secure?

You’re also forgetting process isolation and roles which provide strong invariants to control what a given package can do.

No such guarantees exist for code you load into your process.

akerl_

The vetting process for open source maintainers has very little overlap with the vetting process for “is this person trustworthy”.

This is true for individual libraries and also for Linux distros.

jvanderbot

So, the feature of "Here's a set of easily pulled libraries" is an anti-feature because it makes it easy to pull supporting libraries? I suspect this is actually about developers and not Rust or JS. Developers choose what dependencies to pull. If nobody pulled a dependency it would wither and die. There are a lot of dependencies for most libraries because most developers prefer to use dependencies to build something.

But I digress. If we're talking build systems, nobody is forcing you to use crates.io with cargo; it just makes that easy. You can use path-based dependencies just like with CMake, vcpkg, or Conan, or you can DIY the library.

Even with crates.io, nobody is stopping you from pinning versions if you want to avoid churn; it's just easy to get the latest.
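For example, both of those are a line each in Cargo.toml (the version and path below are illustrative):

    [dependencies]
    # Exact pin: `=` stops cargo from resolving to anything newer.
    terminal-size = "=0.3.0"
    # Path dependency: no registry involved at all.
    mylib = { path = "../mylib" }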

It's easy to build software on existing software in Rust. If you don't like the existing software or the rate it changes don't blame Cargo. Just do it the way you like it.

a-french-anon

> because it makes it easy to pull supporting libraries?

No, because it's used as an excuse for the lack of a large(r) standard library. It's the equivalent of "the bazaar/free market will solve it!".

You basically end up with an R5RS level of fragmentation and cruft outside your pristine small language; something that the Scheme community decided is not fun, which prompted the idea of R6RS/R7RS-large. Yeah, it's hard to make a good, large and useful stdlib, but punting it to the outside world isn't a proper long-term solution either.

It's really a combination of factors.

materielle

Standard library omissions aren't arbitrary.

For almost any functionality missing in the standard library, you could point to 2-3 popular crates that solve the problem in mutually exclusive ways, for different use cases.

Higher level languages like Go or Python can create “good enough” standard libraries, that are correct for 99% of users.

Rust is really no different than C or C++ in this regard. Sure, C++ has a bigger standard library. But half of it is “oh don’t use that because it has foot guns for this use case, everyone uses this other external library anyways”.

The one big exception here is probably async. The language needs a better way for library writers to code against a generic runtime implementation without forcing a specific runtime onto the consumer.

WhyNotHugo

> If we're talking build system, nobody is forcing you to use Crates.io with cargo, they just make it easy to.

Using cargo with distributed dependencies (e.g. using git repositories) has several missing features, like resolving the latest semver-compatible version, etc. Not only is it _easier_ to use cargo with crates.io, it's harder to use with anything else because of missing or incomplete features.
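To be concrete, a git dependency has to be pinned by hand to a rev or tag, since cargo won't pick the newest semver-compatible version out of a git history (the crate name and URL below are hypothetical):

    [dependencies]
    some-lib = { git = "https://github.com/example/some-lib", tag = "v1.4.2" }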

> You can use path-based dependencies just like CMake/VCPkg,Conan, or you can DIY the library.

Have you tried to do this? Cargo is a "many things in one" kind of tool; compiling a Rust library (e.g. a dependency) without it is a pain. If cargo had instead been multiple programs that each do one thing, it might be easier to opt out of it for regular projects.

jvanderbot

Compared to ... cmake? vckpg? conan?

I have never had a good experience with those. However, using

    mydep = { path = "../mydep" }

in cargo has never been an issue.

And I hate to say it, but path-based deps are much easier in C++/C than other "make the build system do a coherent checkout" options like those mentioned above. So we're at worst at parity for this use case, IMHO and subject to my own subjectivity, of course.

johnnyjeans

yes a large component of it is about developers. if developers were perfect beings, we wouldn't need rust in the first place.

nindalf

Rust may not be what you want to write, but it's what you want your coworkers to write.

chrisco255

[flagged]

palata

> They also prefer languages with buffer overflow and use-after-free errors.

Bad faith? My first sentence clearly says that I like the language, not the dependency situation.

johnnyjeans

he literally said he likes rust as a programming language, so no. also it's not "optional" when it's the de-facto standard in the language. you lock yourself out of the broader tooling ecosystem. no language server, no libraries (because they all use cargo), etc. oftentimes you run into ergonomic problems with a language's module system because it's been hacked together in service of the enormous combo build/dependency management system rather than vice versa. you're running so far against the grain you might as well not use the language.

this kind of passive-aggressive snark whenever someone leverages this very valid criticism is ridiculous

SkiFire13

> People will complain that "it's hard to add a dependency to a C++ project"

The way I see it, the issue is that it's hard to add a dependency _in such a way that nobody will have issues building your project with it_. This is problematic because even if you manage to make it work on your machine, it may not work on some potential user's or contributor's.

> But that just destroys the OS package distribution model: when I choose a Linux distribution, I choose to trust those who build it.

Distros still build Rust packages from sources and vendor crate dependencies in their repos. It's more painful because there are usually more dependencies with more updates, but this has nothing to do with shared libraries.

palata

> The way I see it the issue is that it's hard to add a dependency _in such a way that no people will have issues building your project with it_.

From my point of view, if it's done properly I can just build/install the dependency and use pkgconfig. Whenever I have a problem, it's because it was done wrong. Because many (most?) developers can't be arsed to learn how to do it properly; it's easier to just say that dependency management in C++ sucks.

a-french-anon

That's what I say when I talk about this with my colleagues: having a package manager from the start that greatly lowers the friction of adding or publishing yet another package deprives a language's ecosystem of something very useful: a good measure of natural selection.

Add to that a free-for-all, no-curation repository situation like PyPI, npm, or cargo, together with a too-small standard library, and prepare to suffer.

XorNot

I wonder how much of this is just the move away from shared libraries.

In the .NET space NuGet certainly makes it easy to add dependencies, but dependencies do seem to be overall fewer, and the primary interesting difference I'd note is that a dependency is in fact its own DLL file - to the extent that it's a feature that you can upgrade and replace one by dropping in a new file or changing configuration.

It strikes me that we'd perhaps see far less churn like this if more languages were back to having shared libraries and ABI compatibility as a first class priority. Because then the value of stable ABIs and more limited selections of upgrades would be much higher.

a-french-anon

The quest for performance makes macros, monomorphisation/specialization and LTO too attractive for simple dynamic linking to remain the norm, unfortunately. And in a way, I understand, a Stalin/MLton style whole-program optimizing compiler certainly is worth it when you have today's computing power.

lolinder

There's a corollary here to "build it yourself", which is "vet it yourself". Cargo, npm, and pip all default to a central registry which you're encouraged to trust implicitly, but we've seen time and time again that central registries do not adequately protect against broken or malicious code that causes major security flaws downstream. Each of these ecosystems trains its developers to use hundreds of dependencies—far more than they can personally vet—with the understanding that someone else must surely have done so, even though we've seen over and over again that the staff of these registries can't actually keep up and that even long-running and popular projects can suddenly become insecure.

I'd like to see an ecosystem develop that provides a small amount of convenience on top of plain old vendoring. No central repository to create a false sense of security. No overly clever version resolution scheme to hide the consequences of libraries depending on dozens of transitive dependencies. Just a minimal version resolution algorithm and a decentralized registry system—give each developer an index of their own libraries which they maintain and make downstream developers pick and choose which developers they actually trust.

Maybe a bit like Maven if Maven Central didn't exist?

oersted

This frankly sounds like a rationalization for an aesthetic preference. It is undeniable that being able to easily build on top of others' hard work is an enormous advantage in any domain.

Duplicate work should only happen if you are confident that your requirements are significantly different, and that you can deliver an implementation as good as a team that has likely focused on the problem for much longer and has acquired a lot more know-how.

It is true that such an attitude might be justifiable for certain security or performance critical applications, you might need end-to-end control. But even in those cases, the argument for trusting yourself by default over focused and experienced library authors is dubious.

Either way, a good dependency manager opens the door for better auditing, analysis and tagging of dependencies, so that such critical requirements can be properly guaranteed, again probably better than you can do yourself.

Ygg2

> In C++, I control my dependencies. In Rust I get above 100 so quickly I just

Just don't add dependencies, it's that simple. If you have enough time to control C++ dependencies, you can control them in Rust as well.

lolinder

That's not an answer when the entire ecosystem is built around the idea of adding lots of dependencies to do stuff. I don't need no dependencies, I'd like to live in a world where I can add two or three. But if the culture is so far gone that those two or three transitively import 20 each, I don't have that as an option—it's all or nothing.

Ygg2

> That's not an answer when the entire ecosystem is built around the idea of adding lots of dependencies to do stuff.

Again. Don't add dependencies. Just don't. Vendor it yourself. Or write it yourself. Absolutely nothing is forcing you to use the dependencies except your own desire to save time.

Cargo is giving you the option to A) save your own time B) minimize dependencies. You choose A) and blame Cargo.

Joker_vD

> it figures out your terminal dimensions. The underlying APIs it uses have effectively been stable since the earliest days of computing terminals—what, 50 years or so?

No, they haven't been stable, not really. The TIOCGWINSZ ioctl has never been standardized to my knowledge, and it has many different names on different Unixes and BSDs. The tcgetwinsize() function only got in POSIX in 2024, and this whole thing has really sad history, honestly [0], and that's before we even get to the Windows side of things.

[0] https://news.ycombinator.com/item?id=42039401
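For reference, the Linux core of it is small; here is a sketch with the libc crate, error handling elided (other Unixes and Windows each need their own variant, which is exactly the sad history in question):

    use libc::{ioctl, winsize, STDOUT_FILENO, TIOCGWINSZ};

    fn terminal_size() -> Option<(u16, u16)> {
        let mut ws = winsize { ws_row: 0, ws_col: 0, ws_xpixel: 0, ws_ypixel: 0 };
        // TIOCGWINSZ fills the winsize struct for the given tty fd.
        let rc = unsafe { ioctl(STDOUT_FILENO, TIOCGWINSZ, &mut ws) };
        if rc == 0 { Some((ws.ws_col, ws.ws_row)) } else { None }
    }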

horsawlarway

This was vaguely my takeaway from the article: it's not that his replacements are simpler because they're better or made by him. They're simpler because they're only handling his use-cases.

Sometimes - that's fine.

Sometimes - that's making his software worse for folks who have different use-cases, or are running on systems he doesn't understand or use himself.

The real value of a library, even with all those dependencies (and to be clear, I disagree that 3 or 4 dependencies for a library that runs across Windows/Linux is "all that many", especially when his platform-specific implementation still uses at least 1), is that it turns out even relatively simple problems have a wealth of complexity to them. The person whose job it is to write that library is going to be more experienced in the subject domain than you (at least in the good cases) and they can deal with it. Most importantly, they can deal with your unknown unknowns: the places where you don't even have the experience to know you're missing information.

liontwist

> They're simpler because they're only handling his use-cases.

This is a major part of the thesis of no dependencies. General code is bad code. It's slow, branchy, complex, filled with mutexes, NaN checks, etc. Read "The Old New Thing" to see the extreme.

When you have a concrete goal you can apply assumptions that simplify the problem space.

A good example of this was Casey's work on a fast terminal. At first all the naysaying was "production terminals are really hard because you have to handle fonts and internationalization, accessibility, etc." Indeed those problems suck, but he used a general Windows API to render a concrete representation of the character set on demand, and the rest was simple.

horsawlarway

> General code is bad code.

For whom?

I think most times, as a user of software, I almost always prefer to have something that solves my problem, even if it's got some rough edges or warts. That's what general code is - stuff that solves a problem for lots of people.

Would I prefer a tool that solves exactly my problem in the best way possible? Yeah, sure. Do I want to pay what that costs in money, time or attention? Usually no. The general purpose tool is plenty good enough to solve the problem now and let me move on.

The value I get from solving the problem isn't really tied to how optimally I solve the problem. If I can buy a single hammer that drives all the nails I need today - that's a BETTER solution for me than spending 10 hours speccing out the ideal hammer for each nail and each hand that might hold it, much less paying for them all.

I'll have already finished if I just pick the general purpose hammer, getting my job done and providing value.

---

So to your terminal example - I think you're genuinely arguing for more general code here.

There's performance in making a terminal run at 6k fps. It's an art. It's clearly a skill and I can respect it. Sounds like it's an edge case that dude wants, so I'm in favor of trying to make the terminal faster (and more general).

But... I also don't give a flying fuck for anything I do. Printing 1gb of text to the terminal is useless to me from a value perspective (it's nearly 1000 full length novels of text, I can't read that much in a year if it was all I did, so going from 5 minutes to 5 seconds is almost meaningless to me).

The sum total of the value I see from that change is "maybe once or twice a year when I cat a long file by mistake, I don't have to hit ctrl-c".

I also genuinely fail to understand how this guy gets meaningful value from printing 1gb of text to a terminal that quickly either... even the fastest of speed readers are still going to be SO MANY orders of magnitude slower to process that, and anything else he might want to do with that text is already plenty fast - copying it to a new file? already fast. Searching it? fast. Deleting it? fast. Editing it? fast.

So... I won't make any comment on why this case is slow or the discussion around it (I haven't read it; it sounds like it could be faster, and they made a lot of excuses not to solve his specific edge case). All I'll say is your argument sure sounds like adding an edge case that nearly no one has, thereby making the terminal more general.

Any terminal I wrote for myself sure as fuck wouldn't be as fast as that because I don't have the rendering experience he has, and my use case doesn't need it at all.

jvanderbot

100%. If OP is willing to maintain a Rust crate that takes in no dependencies and can determine terminal size on any platform I choose to build for, then I will gladly use it.

OTOH, if minimizing dependencies is important for a very specific project, then the extra work of implementing the functionality falls on that project. It will be simpler because it must only support one project. It may not receive critical compatibility updates or security updates also.

It does not fall on the community to go and remove dependencies from all their battle tested crates that are in common use. I think anyone and everyone would choose a crate with fewer over more dependencies. So, go make them?

the_mitsuhiko

> If OP is willing to maintain a rust crate that takes in no dependencies and can determine terminal size on any platform I choose to build for, then I will gladly use your crate.

I already mentioned this on Twitter, but not a lot of people work this way. I don't have to point you any further than my sha1-smol crate. It was originally published under the sha1 name and was the one that the entire ecosystem used. As rust-crypto became more popular, there were demands that the name be used for rust-crypto instead.

I have given up the name, moved the crate to sha1-smol. It has decent downloads, but it only has 40 dependents vs. >600 for sha1. Data would indicate that people don't really care all that much about it.

(Or the sha1 crate is that much better than sha1-smol, but I'm not sure if people actually end up noticing the minor performance improvements in practice)

the_mitsuhiko

It's not standardized, but those calls do not change. The Windows calls in particular are guaranteed ABI-stable since they are compiled into a lot of binaries. There are definitely issues with ioctl, but the changes landing in terminal-size or any of the dependencies that caused all these releases are entirely unrelated to ioctl/TIOCGWINSZ constants/the winsize struct. That code hasn't changed.

mrweasel

In this case the terminal-size crate just calls rustix's tcgetwinsize, which in turn just calls the libc tcgetwinsize. So I suppose you could save yourself a whole bunch of dependencies by just doing the same yourself. The only cost is Windows support.

Whether this particular API has been stable, or at least reasonably defined, for 50 or 25 years is a detail, because the dependency doesn't even pretend to deal with that, and the function is unlikely to change or be removed in the near future.

Joker_vD

> If this particular API has been stable

Well, it hasn't. The tcgetwinsize() was proposed (under this name) in 2017 and was standardized only in 2024. So it's an API less than ten years old, which is missing from lots of libc implementations, see e.g. [0]. Before its appearance, you had to mess with doing ioctls and hope your libc exposed the TIOCGWINSZ constant (which glibc by default didn't).

[0] https://www.gnu.org/software/gnulib/manual/html_node/tcgetwi...

mrweasel

I had to check the Rustix implementation again, because that would indicate that terminal-size wouldn't work on a number of operating systems. However, Rustix also uses TIOCGWINSZ in its tcgetwinsize implementation.

titzer

Terminals are a good example of something that seems really simple but is a major PITA because of too many different vendors in the early days, and no industry standard emerged. What is the closest thing? VT100? VT102? I mostly write raw to those, but stuff like terminal size and various other features like raw (non-cooked) mode are crappy and require ioctl's and such. Frankly, it sucks.

...but the libraries suck even more! If you don't want to link against ncurses then may God have mercy on your soul.
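To illustrate the ioctl/termios dance, here is raw mode on a Unix sketched with the libc crate (error handling omitted):

    use std::mem;

    fn with_raw_mode(f: impl FnOnce()) {
        unsafe {
            let mut orig: libc::termios = mem::zeroed();
            libc::tcgetattr(libc::STDIN_FILENO, &mut orig);

            let mut raw = orig;
            libc::cfmakeraw(&mut raw); // disable echo, canonical mode, etc.
            libc::tcsetattr(libc::STDIN_FILENO, libc::TCSANOW, &raw);

            f(); // read single keystrokes here

            // Restore the cooked mode the shell expects.
            libc::tcsetattr(libc::STDIN_FILENO, libc::TCSANOW, &orig);
        }
    }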

Joker_vD

Last summer I toyed with writing an "async prompt" à la Erlang's shell, with output scrolling and line editing (see e.g. [0] for an example of what I am talking about), but it is so bloody difficult to do correctly, especially when the input spans several lines and there are full-width characters on the screen, that I abandoned it.

[0] https://asciinema.org/a/s2vmkOfj6XtJkQDzeM6g2RbPZ

jumpkick

I recently revived a web app I wrote in 2006, my first startup. It was a social media site focused on media sharing. A pretty simple LAMP stack for the time. PHP 5, MySQL 3.2, but it has all of your typical (for the time) social media features. I revived this app because I wanted some hands-on time with new CI/CD tech that I don't get to use at my day job, so I'm working to extremely over-engineer the app's deployment process as a learning project. I could have used Wordpress or some other Hello World app, but this is a lot more fun.

I had written nearly all of the PHP from scratch. I wrote libraries for authentication/authorization, templating, form processing etc. I used one PEAR library for sending email. The frontend was vanilla HTML and there was barely any JavaScript to speak of. We used Flash for media playback. In other words, myself and my small team built nearly all of it ourselves. This was just how you did most things in 2006.

It only took me about an hour to get the 19-year old app up and running. I had to update the old PHP mysql drivers to mysqli, and update the database schema and some queries to work in MySQL 8 (mostly wrapping now-reserved words with backticks and adjusting column defaults which are now more strict). The only thing that didn't work was the Flash.

An hour to revive an app from 2006. Contrast this with my day job, wherein we run scores of Spring Boot apps written in Java 8 that have pages of vulnerabilities from tens of dozens of dependencies, which are not easy to update because updating one library necessitates updating many other libraries, and oh my goodness, the transitive dependencies. It's a nightmare, and because of this we only do the bare minimum of work to update the most critical vulnerabilities. There's no real plan to update everything because it's just too tall of an order.

And the funny thing is, if you compare what this PHP app from 2006 did, which had truly, barely any dependencies, to what these Spring Boot apps do, there is not a lot of difference. At the end of the day, it's all CRUD, with a lot more enterprise dressing and tooling around it.

skydhash

Go and the C/Linux world have sold me on the fat-library philosophy. You bring in a library to solve a problem in a domain, then add your specific bits. You don't go and bring in a dependency for each item on your checklist. Yes, there may be duplicated effort, but the upgrade path is way easier.

ok123456

Most of that new CI/CD tech is standard now precisely because of all the complexity added by maintaining third-party dependencies and constant changes in the runtime environment. This isn't a problem for the most part for an old LAMP application deployed by scp.

999900000999

I agree 100% .

Even though NodeJS is largely responsible for my career, NPM has given me more trauma than my messed up childhood.

Imagine you're a new programmer, you're working on a brand new app to show to all your friends. But you want to add a new dependency, it doesn't like all the other dependencies, cool you say I'll just update them. Next thing you know absolutely nothing works, Babel is screaming at you.

No worries, you'll figure something out. Next thing you know you're staring at open git issues where basic things literally don't work. Expo, for example, has an open issue where a default new React Native project just won't build for Android.

It's like no one cares half the time, and in that case the solution isn't even in the node ecosystem, it's somewhere in the Android ecosystem. It's duct tape all the way down. But this can also inspire confidence if a billion dollar project can ship non-functional templates, then why do I have imposter syndrome when my side projects don't work half the time!

abound

This was something that surprised me about the Rust ecosystem, coming from Go. Even a mature Go project (e.g. some business' production web backend) may only have 10-20 dependencies including the transitive ones.

As noted in this post, even a small Rust project will likely have many more than that, and it's virtually guaranteed if you're doing async stuff.

No idea how much of it is cultural versus based on the language features, e.g. in Go interfaces are implicitly satisfied, no need to import anything to say you implement it.

Cyph0n

For Rust and Go in particular, the difference is in the standard library. The Rust stdlib is (intentionally) small.

chris_overseas

Agreed, and the small stdlib is one of the main reasons for this problem. I understand the reasoning why it's small, but I wish more people would acknowledge the (IMHO at least) rather large downside this brings, rather than just painting it as a strictly positive thing. The pain of dealing with a huge tree of dependencies, all of different qualities and moving at different trajectories, is very real. I've spent a large part of the last couple of days fighting exactly this in a Rust codebase, which is hugely frustrating.

palata

What is the reason to keep it small? Genuinely interested, I actually don't understand.

Embedded systems maybe?

chikere232

It might be a bad choice on Rust's part.

IMO they should over time fold whatever ends up being the de-facto choice for things into the standard library. Otherwise this will forever be a barrier to entry, and a constant churn as ever-new fashionable libraries to do the same basic thing pop up.

You don't need a dozen regex libraries, you just need one that's stable, widely used and likely to remain so.

burntsushi

> You don't need a dozen regex libraries, you just need one that's stable, widely used and likely to remain so.

That is the case today. Virtually everyone uses `regex`.

There are others, like `fancy-regex`. But those would still exist even if `regex` were in std. And then it would actually suck, because then `fancy-regex` couldn't share dependencies with `regex`, which it does today. Because of that sharing, you get a much smoother migration experience, where you know that if your regexes are valid with `regex`, they'll work the same way in `fancy-regex`.

A better example might be datetime handling, of which there are now 3 general-purpose libraries one can reasonably choose from. But it would have been an unmitigated disaster if we (I am on libs-api) had just added the first datetime library that arose in the ecosystem to std.

palata

Agreed.

> and a constant churn as ever new fashionable libraries

Isn't that the situation in Javascript? I don't work in Javascript but to me it feels like people migrate to a new cool framework every 2 months.

sesm

I would expect crates like `stdlib-terminal` and `stdlib-web-api` in that case.

Honestly, something feels off with Rust trying to advertise itself for embedded: no stdlib and encouraged stack allocation, but then married to LLVM (which doesn't have good embedded target support) and with panics in the language.

Building a C++ replacement for a browser engine rewrite and building a C replacement for embedded have different and often conflicting design constraints. It seems like Rust is a C++ replacement with extra unnecessary constraints of a C replacement.

palata

I often wonder about this: obviously Rust is fashionable, and many people push to use it everywhere. But in a ton of situations, there are modern memory-safe languages (Go, Swift, Kotlin, Scala, Java, ...) that are better suited.

To me Rust is good when you need the performance (e.g. computer vision) and when you don't want a garbage collector (e.g. embedded). So really, a replacement for C/C++. Even though it takes time because C/C++ have a ton of libraries that may not have been ported to Rust (yet).

Anyway, I guess my point is that Rust should focus on the problem it solves. Sometimes I feel like people try to make it sound like a competitor to those other memory-safe languages and... even though I like Rust as a language, it's much easier to write Go, Swift or Kotlin than Rust (IMHO).

jitl

One big reason is because Go has a very nice complete standard library, and Rust really does not.

Things built into the Go language or its standard library that need a dependency in Rust:

- green threads

- channels

- regular expressions

- http client

- http server

- time

- command line flags

- a logger

- read and write animated GIFs

I don’t love the Go language, but it’s the leader for tooling and standard library, definitely the best I’ve used.

drrotmos

Which may or may not be fine in a Go binary that runs on a modern desktop CPU, but what if your code is supposed to run on say an ESP32-C3 with a whopping 160 MHz RISC-V core, 400 KB of RAM and maybe 2 MB of XIP flash storage?

You could of course argue that that's why no-std exists in Rust, or that your compiler might optimize out the animated GIF routines, but personally, I'd argue that in this context it is bloat: while it could occasionally be useful, it could just as easily be a third-party library.

jitl

It’s the same as in C, Rust, or any other programming language I’ve ever used. If you don’t use a library, it doesn’t end up linked in your executable. Don’t want to animate GIFs on your microcontroller, then you don’t write `import “image/gif”` in your source file.

For a microcontroller sized runtime, there’s https://tinygo.org/

I think the lack of strong standard library actually leads to more bloat in your program in the long run. Bloat is needing to deal with an ecosystem that has 4 competing packages for time, ending up with all 4 installed because other libraries you need didn’t agree, and then you need ancillary compatibility packages for converting between the different time packages.

chikere232

Does anyone use the full standard library for embedded targets? I've not seen it done in C; Java has a special embedded edition, Python has MicroPython, and Rust seems to usually use no_std, but I might be wrong there.

It seems like a bad reason to constrain the regular standard library

7bit

I hate that. I don't want dependencies for serialization or logging. But you need them, and now you have to choose which of the dozen logging crates to use.

As a beginner this is horrible, because everybody knows serde, but I had to learn that serde is the de facto standard, and that is not easy because, coming from other languages, it sounds like the second-best choice. And it's like that with most Rust crates.

burntsushi

This is why things like https://blessed.rs exist. Although since they're unofficial, their discoverability is also likely a problem.

petecorreia

Go's vast standard library helps a lot with keeping dependency numbers down

lionkor

Exactly this. You want logging in Rust? You will need at least `log` plus another logger crate, for example `env_logger`, and maybe the `dotenvy` crate to read `.env` files automatically; you already have 3 direct dependencies plus all the transitive ones.

In Go: https://pkg.go.dev/log
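For comparison, the Rust side of that setup ends up looking roughly like this once `log` and `env_logger` are in Cargo.toml:

    fn main() {
        // `log` is only a facade; a backend such as env_logger has to be
        // installed explicitly, typically configured via RUST_LOG.
        env_logger::init();
        log::info!("listening on port {}", 8080);
    }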

cpursley

> It's 2025 and it's faster for me to have ChatGPT or Cursor whip up a dependency free implementation of these common functions

I sort of stumbled upon this myself and am coming around to this viewpoint, especially after the dependency hell of a big React app.

And there are also the SaaS/third-party service dependencies to consider. Many of them are common patterns and already-solved problems that LLMs can clone quickly.

macNchz

I’ve definitely become much more likely to start with small internal utility functions implemented by AI before adding a library than I would have been in the past—it’s quite effective for contained problems, I can have the AI write much more complete/robust implementations than I would myself, and if I do eventually decide to add a library I have a natural encapsulation: I can change the implementation of my own functions to use the library without necessarily having to touch everywhere it’s being used. Makes it easy to test a couple of different libraries as well, when the time comes.

nostradumbasp

Love the thesis statement. There is a lot of hidden cost in allowing abstractions from other libraries to be exposed over your own. I'm not here to say that should never be done but the future costs really need to be balanced at the decision point.

If the encapsulating package churns over its design, or changes its goals, it creates instability. Functionality is lost or broken apart when previously it was the simplest form of guarantee in software engineering in existence. It also deters niche but expert owners who aren't career OSS contributors from taking part in an ecosystem. "I made a clean way to do X!" being followed up by weeks of discussion, negotiation, and politics so that "X fits under Y because maybe people like Z" is inefficient and wasteful of everyone's time.

If there's one thing I've learned in my life it's that the simplest things survive the longest. Miniliths and monoliths should be celebrated way more often. Rust isn't alone in this, by the way; I've seen it across languages. I've often seen OSS communities drive hard for atomistic-sized packages, and I often wonder if it's mostly for flag-planting and ownership-transfer purposes rather than to benefit the community that actually uses these things.

bluGill

Worse, sometimes the upstream is complex enough that you don't want to do it yourself - then the upstream quits maintaining their project. I have in my company some open source projects that we still use that haven't been touched upstream since 2012, but either there is no replacement or the replacement is so different that it isn't worth the effort to upgrade. Fortunately none of these are projects where I worry about security issues; I'm just annoyed by the lack of support. But if they faced the network I'd be concerned (and we do have security people who would force a change).

adrianN

Software that hasn’t been touched in ten years and still does the job is about as ideal as a dependency can be.

palata

I tend to agree, but it may have downsides too: it may do the job and still have serious security issues. And if nobody is looking at it, there's no reason you'd ever learn about those security issues.

WhyNotHugo

In theory yes.

If a ten-year-old dependency is written in Python, though, it's likely not going to work any more due to changes in the build system, stdlib, etc.

bluGill

But does it? If the software is a spell checker for a language I don't know I will have no idea if it is any good.

adrianN

The same can be true for the dependency that releases weekly updates.

qup

Maybe it's even actively maintained!

xnorswap

Or they bait-and-switch freedom.

So they start off with an open-source Free solution, and then later switch to a paid-for model and abandon the Free version. This is particularly painful when it's a part of your system that's key to security.

You're left wondering whether you should just pay the ransom, switch to a different solution entirely, or gamble and leave it on an old unpatched version.

( Looking at you, IdentityServer )

Either way you regret ever going with them.

hitchstory

The other extreme of this is:

* Bad abstractions which just stick around forever. There are some examples of this in UNIX which would never be invented in the way they are today but nonetheless aren't going anywhere (e.g. signal handling). This isn't good.

* Invent all of your own wheels. This isn't good either.

There's a balance that needs to be struck between all of these 3 extremes.

chikere232

I know it's just an example, but if you're on Linux there's signalfd(), which turns signals into I/O so you can handle them in an epoll() loop or whatever way you like doing I/O.

We can't remove the old way of course, as that would break things, but that doesn't stop improvements
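A minimal Linux-only sketch with the libc crate (error handling elided): block the signal, then receive it as plain I/O:

    use std::mem;

    fn main() {
        unsafe {
            let mut mask: libc::sigset_t = mem::zeroed();
            libc::sigemptyset(&mut mask);
            libc::sigaddset(&mut mask, libc::SIGINT);
            // The signal must be blocked so it's delivered via the fd
            // instead of a handler.
            libc::sigprocmask(libc::SIG_BLOCK, &mask, std::ptr::null_mut());

            let fd = libc::signalfd(-1, &mask, 0);

            let mut info: libc::signalfd_siginfo = mem::zeroed();
            // Blocks until SIGINT arrives; under epoll this would be just
            // another readable fd in the event loop.
            libc::read(
                fd,
                &mut info as *mut _ as *mut libc::c_void,
                mem::size_of::<libc::signalfd_siginfo>(),
            );
            println!("got signal {}", info.ssi_signo);
        }
    }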

IshKebab

Sometimes removing the old way is the improvement though. E.g. adding an alternative to symlinks doesn't help if symlinks are still allowed.

zelphirkalt

You need capable engineering to build it yourself. If you've only got engineers who have only ever reached for libraries in ecosystems like NPM or PyPI, you will find them hard-pressed to develop solutions for many things themselves, especially if those are supposed to be solutions that stand the test of time and have the flexibility they need. It takes a lot of practice to "avoid programming yourself into a corner".

Another thing I noticed is that one can often easily do better than existing libraries. In one project I implemented a parser for a markdown variant that has some metadata at the top of the file. Of course I wrote a little grammar, not even a screen of code, and just like that I had a parser. But I did not expect the badness of the frontend library parsing the same file. That one broke when you had hyphens in metadata identifiers. At first I was confused why it could not deal with that. Then it turned out that it directly used the metadata identifiers as object member names ... Instead of using a simple JSON object, they had, knowingly or unknowingly, chosen to artificially limit the choice of names and to break things when there are hyphens, as in "something-something". In the end my parser was abandoned, people arguing that they would have to "maintain" it. Well, it just worked and could easily be adapted to grammar changes. There was nothing difficult to understand about it either, if you had just a little knowledge about parsers. Sounds incredible, but apparently no one on the team except me had ever written a parser using a parser generator library before.

And like that, there are many other examples.

TZubiri

A metric that I would like to focus on is dependency depth. We had this with the OSI model way back, but at this point it seems that anything beyond layer 7 just gets bucketed into 8+.

We need to know if a dependency is level 1 or level 2 or level 3 or 45. And we need to know what the deepest dependency in our project is. I might be naive, but I think we should strive to reduce the depth of the dependency graph, and maybe aim for 4 or 5 layers deep for a web app, tops.
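As a toy sketch of the metric (the graph below is hypothetical; real input could come from `cargo metadata`, and real dependency graphs are acyclic, so the recursion terminates):

    use std::collections::HashMap;

    // Length of the deepest chain below `node` in a dependency graph.
    fn max_depth(graph: &HashMap<&str, Vec<&str>>, node: &str) -> usize {
        graph
            .get(node)
            .into_iter()
            .flatten()
            .map(|&dep| 1 + max_depth(graph, dep))
            .max()
            .unwrap_or(0)
    }

    fn main() {
        let graph = HashMap::from([
            ("app", vec!["serde", "terminal-size"]),
            ("terminal-size", vec!["rustix"]),
            ("rustix", vec!["libc"]),
        ]);
        // app -> terminal-size -> rustix -> libc: depth 3
        println!("max depth: {}", max_depth(&graph, "app"));
    }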

themk

I've often thought the same. I would love a dependency manager that not only surfaced this information, but required you to declare upfront what level your library is.

I think it would rein in the bloat.

semanser

I'm actually working on a linter for dependencies that checks all your dependencies against 15+ rules: https://github.com/DepshubHQ/depshub

It's true that dependency-free software is very rare these days. The most obvious reason is that people don't want to "reinvent the wheel". While this is a 100% valid reason, sometimes people simply forget what they are building and for whom. Extensive usage of dependencies is just one form of over-engineering. Some engineering teams even plan their work and features around the new shiny thing.

The problem of dependencies is massive these days, and most companies are focusing on producing more and more code instead of helping people manage what they already have.