Build It Yourself

262 comments · January 24, 2025

palata

I like Rust-the-language, but I hate the Rust-dependencies situation. Honestly I like C++ a lot better for that. People will complain that "it's hard to add a dependency to a C++ project", but I actually see it as a feature. It forces you to think about whether or not it is worth it.

In C++, I control my dependencies. In Rust I get above 100 so quickly I just give up. In terms of security, let's be honest: I have no clue what I am shipping.

Also, Rust does not have ABI compatibility, and there is no culture of shared libraries (I guess it wouldn't be practical anyway given the number of libraries one needs). But that just destroys the OS package distribution model: when I choose a Linux distribution, I choose to trust those who build it. Say Canonical has a security team that tries to minimize the security issues in the packages that Ubuntu provides. Rust feels a lot more like Python in that sense, where anyone can push anything to PyPI.

fxtentacle

How is Debian / Ubuntu secure?

It's signed by a maintainer. And maintainers are vetted. You trust Debian/Ubuntu to only allow trustworthy people to sign packages.

How are Docker / Python / Rust secure? I don't know any of the people who created my docker images, PyPI packages, or Rust crates.

Yes.

We're basically back to sending around EXE and DLL files in a ZIP. It's just that now we call it a container and proudly start it as root.

BTW, I agree with the author of the article: Sometimes you're best off just merging dependency source code. It used to be called "vendoring" and was a big thing in Rails / Ruby. The big advantage is that you're not affected by future malicious package takeovers. But you can still merge security patches from upstream, if you choose to do that.

KronisLV

> How are Docker / Python / Rust secure? I don't know any of the people who created my docker images, PyPI packages, or Rust crates.

I know who created the Docker images, because I'm the person who built them!

A lot of the time you can build your images either from scratch or based on official base images like Alpine, Ubuntu/Debian, or some of the RPM base images, not much different from downloading an ISO for a VM. With a base, you can use apk/apt/dnf to get whatever packages you want if you trust that more; just remember to clean up the package cache so it's not persisted in the layers (where it just wastes space). For most software, it actually isn't as difficult as it might have initially seemed.
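
To make that concrete, here's a minimal sketch of that kind of Dockerfile (base image and package are placeholders; the point is that the install and the cache cleanup happen in the same layer):

    # Hypothetical example: your own image on top of an official base.
    FROM debian:bookworm-slim
    # Install what you need and clean the apt cache in the same RUN step,
    # so the cache is never persisted into a layer.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends openjdk-17-jre-headless \
        && rm -rf /var/lib/apt/lists/*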

As an alternative, you can also look for vaguely trustworthy parties that have a variety of prepackaged images available and you can either borrow their Dockerfiles or just trust their images, for example, https://bitnami.com/stacks/containers and https://hub.docker.com/u/bitnami

Most likely you have to have trust somewhere in the mix, for example, I'm probably installing the Debian/Ubuntu packaged JDK instead of compiling mine in most cases, just because that's more convenient.

Also, rootless containers are pretty cool! People do like Podman and other solutions a lot. You can even do some remapping with Docker if there are issues with how the containers expect to be run (https://docs.docker.com/engine/security/userns-remap/), or if you have to use Docker and need something a bit more serious, you can try rootless mode (https://docs.docker.com/engine/security/rootless/).
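
For reference, the remapping from the first link is turned on in the daemon config; a minimal sketch of /etc/docker/daemon.json per that doc, where "default" tells Docker to create and use a dockremap user:

    {
      "userns-remap": "default"
    }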

woodruffw

I don’t follow this reasoning: you might trust this distribution packager to be honest, but this doesn’t stop them from honestly packaging malicious code. It’s unlikely that the typical distribution packager is reviewing more than a minority of the code in the packages they’re accepting, especially between updates.

There are significant advantages to the distribution model, including exact provenance at the point of delivery. But I think it’s an error to treat it as uniquely trustworthy: it’s the same code either way.

WhyNotHugo

"Trust" as two meanings in English:

- You can trust someone as in "think they're honest and won't betray me".

- You can trust someone as in "think they are competent and won't screw-up".

In this case, we trust distribution packagers in both ways. Not only do we trust that they are non-malicious, we also trust that they won't clumsily package code without even skimming through it.

palata

It's not only the packager. Some distros have an actual security team. That does not mean they audit everything, but it's vastly better than getting random code from PyPI.

liontwist

> How is Debian / Ubuntu secure?

You’re also forgetting process isolation and roles which provide strong invariants to control what a given package can do.

No such guarantees exist for code you load into your process.

rad_gruchalski

> It's signed by a maintainer. And maintainers are vetted. You trust Debian/Ubuntu to only allow trustworthy people to sign packages.

> How are Docker / Python / Rust secure? I don't know any of the people who created my docker images, PyPI packages, or Rust crates.

Me neither. But the same goes for those from Debian/Ubuntu. In fact, I don't know anyone who vets those who sign and publish packages either. What I do know is that I can build my own images from container files, and then I'm back to installing those apparently trusted packages from Debian/Ubuntu.

> We're basically back to sending around EXE and DLL files in a ZIP. It's just that now we call it a container and proudly start it as root.

I don’t get your point. And what’s an rpm or deb? You also potentially run stuff as root… sudo apt install -y… post install scripts…

tonyhart7

You use an in-house solution??? Any of these guys that want to build something minimal and dependency-free have a valid concern, I agree with that.

but what if people just don't want that??? it gives people jobs and things to do (lol, this is serious)

also, there are certain things that are better to use a third-party library for than to develop in house, like crypto.

akerl_

The vetting process for open source maintainers has very little overlap with the vetting process for “is this person trustworthy”.

This is true for individual libraries and also for Linux distros.

LinXitoW

I genuinely don't know what people with this opinion work on that they can so easily choose to completely re-invent the wheel in every project, just because they're afraid of hypothetical dangers, most of which are still present in private re-implementations (like bugs).

I cannot think of a single project where NIH syndrome would've been a net positive. Even dependencies that aren't "essential" are super helpful in saving time.

When you recreate parts of an "optional" dependency for every single project, how do you find the time to fix all the extra bugs and edge cases, and to support different versions of lower-level dependencies and different platforms?

palata

> that they can so easily choose to completely re-invent the wheel in every project

Nobody, and I mean 0, chooses to completely re-invent the wheel in every project. I understand why you would find this weird. I don't start every project with nand gates.

But the tendency is to use a library to check if a number is even or odd. It is a gradient, and you have to choose what you write and when you rely on a dependency. If you need to parse an XML file, or run HTTP requests, most likely you should find a library for it. But personally, if I can write it in 1-2 days, I always do it instead of adding a dependency. And I'm pretty sure it's worth it.

> how do you find the time to fix all the extra bugs, the edge cases, the support for different versions of lower level dependencies, of different platforms

If you take e.g. C++, maintaining your dependencies takes time. If you support multiple platforms, you probably need to (cross-)compile the dependencies. You should probably update them too (or at least check for security vulnerabilities), and updating dependencies brings its own lot of issues (also in Rust: most libraries are 0.x.x and can break the API whenever they please).

Then, if you miss a feature in the library, you're pretty much screwed: learning the codebase to contribute your feature may be a lot of work, and it may not be merged. Same for a bug (debugging a codebase that is not yours is harder). If you end up forking the dependency and having to live with a codebase you did not write, most likely it's better to write it yourself.

The people advocating the "not re-inventing the wheel because NIH is really bad" philosophy seem to assume that libraries are always good. Now if you have a team of good developers, it may well be that the code they write is better than the average library out there. So if they write their features themselves, you end up with less but better code to maintain, and the people who understand that code work for you. Doesn't that sound good?

pas

> if I can write it in 1-2 days, I always do it

Woah, that's a lot of days.

caseyohara

> Even dependencies that aren't "essential" are super helpful in saving time.

They might save time up front, but over the lifetime of a long-lived project, my experience is dependencies end up costing more time. Dependencies are a quintessential but often overlooked form of tech debt.

If you only work on short-lived projects or build-it-and-move-on type contract work where speed matters more than quality, sure, go nuts with dependencies. But if you care about the long term maintainability of a project, actively reducing dependencies as close to zero as possible is one of the best things you can do.

IshKebab

I think it probably depends heavily on the dependency (heh).

Would you reimplement a regex engine yourself? I hope not. Left-pad? Obviously yes. I don't think you can have blanket advice that dependencies should be avoided.

I suspect even quite simple dependencies are still a time saver in the long run. Probably people prefer reimplementing them because writing code is much more fun than updating dependencies, even if it actually takes longer and isn't as good.

a-french-anon

That's what I tell my colleagues: having a package manager from the start that greatly lowers the friction of adding or publishing yet another package deprives a language's ecosystem of something very useful: a good measure of natural selection.

Add to that a free-for-all, no-curation repository situation like PyPI, npm, or crates.io, together with a too-small standard library, and prepare to suffer.

XorNot

I wonder how much of this is just the move away from shared libraries.

In the .NET space NuGet certainly makes it easy to add dependencies, but dependencies do seem to be overall fewer, and the primary interesting difference I'd note is that a dependency is in fact its own DLL file, to the extent that it's a feature that you can upgrade and replace them by dropping in a new file or changing configuration.

It strikes me that we'd perhaps see far less churn like this if more languages were back to having shared libraries and ABI compatibility as a first class priority. Because then the value of stable ABIs and more limited selections of upgrades would be much higher.

a-french-anon

The quest for performance makes macros, monomorphisation/specialization and LTO too attractive for simple dynamic linking to remain the norm, unfortunately. And in a way, I understand, a Stalin/MLton style whole-program optimizing compiler certainly is worth it when you have today's computing power.

jvanderbot

So, the feature of "Here's a set of easily pulled libraries" is an anti-feature because it makes it easy to pull supporting libraries? I suspect this is actually about developers and not Rust or JS. Developers choose what dependencies to pull. If nobody pulled a dependency it would wither and die. There are a lot of dependencies for most libraries because most developers prefer to use dependencies to build something.

But I digress. If we're talking build systems, nobody is forcing you to use crates.io with cargo; they just make it easy to. You can use path-based dependencies just like with CMake, vcpkg, or Conan, or you can DIY the library.

Even with crates.io, nobody is stopping you from pinning versions if you want to avoid churn; they just make it easy to get the latest.
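
For example, a sketch of what pinning looks like in Cargo.toml (crate names and versions here are placeholders):

    [dependencies]
    # Exact pin: `cargo update` will never move past this version.
    serde = "=1.0.200"
    # Default (caret) requirement: any semver-compatible upgrade is allowed.
    rand = "0.8"

For applications, Cargo.lock already pins the full transitive tree; the `=` requirement additionally constrains what a future `cargo update` may pick.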

It's easy to build software on existing software in Rust. If you don't like the existing software or the rate it changes don't blame Cargo. Just do it the way you like it.

a-french-anon

> because it makes it easy to pull supporting libraries?

No, because it's used as an excuse for a lack of large(r) standard library. It's the equivalent of "the bazaar/free market will solve it!".

You basically end up with a R5RS level of fragmentation and cruft outside your pristine small language; something that the Scheme community decided is not fun and prompted the idea of R6RS/R7RS-large. Yeah, it's hard to make a good, large and useful stdlib, but punting it to the outside world isn't a proper long-term solution either.

It's really a combination of factors.

materielle

Standard library omissions aren’t there just because.

For almost any functionality missing in the standard library, you could point to 2-3 popular crates that solve the problem in mutually exclusive ways, for different use cases.

Higher level languages like Go or Python can create “good enough” standard libraries, that are correct for 99% of users.

Rust is really no different than C or C++ in this regard. Sure, C++ has a bigger standard library. But half of it is “oh don’t use that because it has foot guns for this use case, everyone uses this other external library anyways”.

The one big exception here is probably async. The language needs a better way for library writers to code against a generic runtime implementation without forcing a specific runtime onto the consumer.

pie_flavor

And, what, because it's named `std` it'll be magically better? Languages with giant stdlibs routinely have modules rot away because code maintenance doesn't get any easier, like Python. There are plenty of crates on crates.io made by the Rust project developers, I trust the RustCrypto guys pretty much the same amount, and merging the two teams together wouldn't solve any problems.

WhyNotHugo

> If we're talking build system, nobody is forcing you to use Crates.io with cargo, they just make it easy to.

Using cargo with distributed dependencies (e.g. git repositories) is missing several features, like resolving the latest semver-compatible version, etc. Not only is it _easier_ to use cargo with crates.io, it's harder to use with anything else because of missing or incomplete features.
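
For comparison, a sketch of a distributed (git) dependency in Cargo.toml; the URL is a placeholder, and note that you pin a branch, tag, or rev yourself because there is no registry to resolve semver against:

    [dependencies]
    # Hypothetical git dependency, pinned to a single tag.
    foo = { git = "https://github.com/example/foo", tag = "v1.2.3" }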

> You can use path-based dependencies just like CMake/VCPkg,Conan, or you can DIY the library.

Have you tried to do this? Cargo is a "many things in one" kind of tool; compiling a Rust library (e.g. a dependency) without it is a pain. If cargo had instead been multiple programs that each do one thing, it might be easier to opt out of it for regular projects.

jvanderbot

Compared to ... cmake? vckpg? conan?

I have never had a good experience with those. However, using

    [dependencies]
    mydep.path = "<path>"

in Cargo has never been an issue.

And I hate to say it, but path-based deps are much easier in C++/C than the other "make the build system do a coherent checkout" options mentioned above. So we're at worst at parity for this use case, IMHO and subject to my own subjectivity, of course.

croemer

Why would you want to automatically resolve the latest semver-compatible version from git if you care about security? That's worse than crates.io, where published versions are immutable, whereas git allows tags to be edited.

> No only is it _easier_ to use cargo with crates.io, it's harder to use with anything else because of missing or incomplete features.

You're saying the same thing twice here: yes it's easier to use cargo with crates.io. It follows immediately that it's harder to use without crates.io. That doesn't make your argument stronger.

Here's an article on how to use cargo without crates: https://thomask.sdf.org/blog/2023/11/14/rust-without-crates-...

johnnyjeans

yes a large component of it is about developers. if developers were perfect beings, we wouldn't need rust in the first place.

nindalf

Rust may not be what you want to write, but it's what you want your coworkers to write.

chrisco255

[flagged]

palata

> They also prefer languages with buffer overflow and use-after-free errors.

Bad faith? My first sentence clearly says that I like the language, not the dependency situation.

johnnyjeans

he literally said he likes rust as a programming language, so no. also it's not "optional" when it's the de-facto standard in the language. you lock yourself out of the broader tooling ecosystem. no language server, no libraries (because they all use cargo), etc. oftentimes you run into ergonomic problems with a language's module system because it's been hacked together in service of the enormous combo build/dependency management system rather than vice versa. you're running so far against the grain you might as well not use the language.

this kind of passive-aggressive snark whenever someone leverages this very valid criticism is ridiculous

SkiFire13

> People will complain that "it's hard to add a dependency to a C++ project"

The way I see it the issue is that it's hard to add a dependency _in such a way that no people will have issues building your project with it_. This is problematic because even if you manage to make it work on your machine it may not work on some potential user or contributor's.

> But that just destroys the OS package distribution model: when I choose a Linux distribution, I choose to trust those who build it.

Distros still build Rust packages from sources and vendor crate dependencies in their repos. It's more painful because there are usually more dependencies with more updates, but this has nothing to do with shared libraries.

palata

> The way I see it the issue is that it's hard to add a dependency _in such a way that no people will have issues building your project with it_.

From my point of view, if it's done properly I can just build/install the dependency and use pkgconfig. Whenever I have a problem, it's because it was done wrong. Because many (most?) developers can't be arsed to learn how to do it properly; it's easier to just say that dependency management in C++ sucks.

pdimitar

Taking pride in being willing to take the longer and more error-prone and tedious path makes me wonder if you're not just flexing here.

There are thousands of tools and methodologies screaming for our attention to "be arsed to learn to do them properly".

They're not owed that attention; they have to earn it. And statistically and historically speaking, most have failed to do so.

You may choose to interpret this as you belonging to a small elite group of intellectuals who "are arsed to learn to do stuff properly" -- that's your right.

I choose to interpret it as "Cargo solves a real problem that has wasted numerous hours of my time in the past". And it wasn't because, for the third time, "I wasn't arsed to learn to do it properly", it's because nobody followed basic protocol and good practices to make the legendary "proper way" work reliably.

You're likely living and working in a bubble. It's much worse than the Wild West out there, man. Any tool that reduces the drudgery and allows for quicker doing of the annoying parts is a net positive.

lolinder

There's a corollary here to "build it yourself", which is "vet it yourself". Cargo, npm, and pip all default to a central registry which you're encouraged to trust implicitly, but we've seen time and time again that central registries do not adequately protect against broken or malicious code that causes major security flaws downstream. Each of these ecosystems trains its developers to use hundreds of dependencies—far more than they can personally vet—with the understanding that someone else must surely have done so, even though we've seen over and over again that the staff of these registries can't actually keep up and that even long-running and popular projects can suddenly become insecure.

I'd like to see an ecosystem develop that provides a small amount of convenience on top of plain old vendoring. No central repository to create a false sense of security. No overly clever version resolution scheme to hide the consequences of libraries depending on dozens of transitive dependencies. Just a minimal version resolution algorithm and a decentralized registry system—give each developer an index of their own libraries which they maintain and make downstream developers pick and choose which developers they actually trust.

Maybe a bit like Maven if Maven Central didn't exist?

pdimitar

Thought about it many times but never had the time or energy to tackle something as big and new. You?

patrick451

I agree. It's really nice how few dependencies get pulled into a typical C++ project. When I started playing with Rust, I was shocked at how many dependencies got pulled in just to build hello world. I'm just not interested in adopting the npm left-pad model.

pdimitar

While Rust projects unquestionably pull in more dependencies, there's zero of the left-pad culture in the community. It's simply pragmatism. And the OP is wrong about the terminal functionality, by the way; stuff does actually still change there (sadly, though not very often).

No point reacting to something that is almost disinformation.

Ygg2

> In C++, I control my dependencies. In Rust I get above 100 so quickly I just

Just don't add dependencies, it's that simple. If you have enough time to control C++ dependencies, you can control them in Rust as well.

lolinder

That's not an answer when the entire ecosystem is built around the idea of adding lots of dependencies to do stuff. I don't need no dependencies, I'd like to live in a world where I can add two or three. But if the culture is so far gone that those two or three transitively import 20 each, I don't have that as an option—it's all or nothing.

Ygg2

> That's not an answer when the entire ecosystem is built around the idea of adding lots of dependencies to do stuff.

Again. Don't add dependencies. Just don't. Vendor it yourself. Or write it yourself. Absolutely nothing is forcing you to use the dependencies except your own desire to save time.
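
For what it's worth, cargo ships a vendoring command: `cargo vendor` copies every dependency into a local ./vendor directory, and you then point builds at the snapshot from .cargo/config.toml, roughly like this:

    # After running `cargo vendor`, build from the local snapshot:
    [source.crates-io]
    replace-with = "vendored-sources"
    [source.vendored-sources]
    directory = "vendor"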

Cargo is giving you the option to A) save your own time B) minimize dependencies. You choose A) and blame Cargo.

Joker_vD

> it figures out your terminal dimensions. The underlying APIs it uses have effectively been stable since the earliest days of computing terminals—what, 50 years or so?

No, they haven't been stable, not really. The TIOCGWINSZ ioctl has never been standardized to my knowledge, and it has many different names on different Unixes and BSDs. The tcgetwinsize() function only got into POSIX in 2024, and this whole thing has a really sad history, honestly [0], and that's before we even get to the Windows side of things.

[0] https://news.ycombinator.com/item?id=42039401
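
To illustrate, here's a minimal sketch of the classic ioctl approach using the `libc` crate (Linux-flavored and Unix-only; as the parent says, the constant's name and availability vary across platforms, which is exactly the problem):

    // Minimal sketch: query the terminal size via the TIOCGWINSZ ioctl.
    // Assumes a Unix-like target and the `libc` crate; no Windows support.
    fn terminal_size() -> Option<(u16, u16)> {
        let mut ws: libc::winsize = unsafe { std::mem::zeroed() };
        let rc = unsafe { libc::ioctl(libc::STDOUT_FILENO, libc::TIOCGWINSZ, &mut ws) };
        if rc == 0 {
            Some((ws.ws_col, ws.ws_row)) // (columns, rows)
        } else {
            None // not a tty, or the ioctl isn't supported
        }
    }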

horsawlarway

This was vaguely my take away from the article: It's not that his replacements are simpler because they're better or made by him. They're simpler because they're only handling his use-cases.

Sometimes - that's fine.

Sometimes - that's making his software worse for folks who have different use-cases, or are running on systems he doesn't understand or use himself.

The real value of a library, even with all those dependencies (and to be clear, I disagree that 3 or 4 dependencies for a library that runs across Windows/Linux is "all that many", especially when his platform-specific implementation still uses at least 1), is that it turns out even relatively simple problems have a wealth of complexity to them. The person whose job it is to write that library is going to be more experienced in the subject domain than you (at least in the good cases), and they can deal with it. Most importantly, they can deal with your unknown unknowns: the places you don't even have the experience to know you're missing information.

liontwist

> They're simpler because they're only handling his use-cases.

This is a major part of the thesis of no dependencies. General code is bad code. It's slow, branchy, complex, filled with mutexes, NaN checks, etc. Read "The Old New Thing" to see the extreme.

When you have a concrete goal you can apply assumptions that simplify the problem space.

A good example of this was Casey's work on a fast terminal. At first all the naysaying was "production terminals are really hard because you have to handle fonts and internationalization, accessibility, etc." Indeed those problems suck, but he used a general Windows API to render a concrete representation of the char set on demand, and the rest was simple.

horsawlarway

> General code is bad code.

For whom?

I think most times, as a user of software, I almost always prefer to have something that solves my problem, even if it's got some rough edges or warts. That's what general code is - stuff that solves a problem for lots of people.

Would I prefer a tool that solves exactly my problem in the best way possible? Yeah, sure. Do I want to pay what that costs in money, time or attention? Usually no. The general purpose tool is plenty good enough to solve the problem now and let me move on.

The value I get from solving the problem isn't really tied to how optimally I solve the problem. If I can buy a single hammer that drives all the nails I need today - that's a BETTER solution for me than spending 10 hours speccing out the ideal hammer for each nail and each hand that might hold it, much less paying for them all.

I'll have already finished if I just pick the general purpose hammer, getting my job done and providing value.

---

So to your terminal example - I think you're genuinely arguing for more general code here.

There's performance in making a terminal run at 6k fps. It's an art. It's clearly a skill and I can respect it. Sounds like it's an edge case that dude wants, so I'm in favor of trying to make the terminal faster (and more general).

But... I also don't give a flying fuck for anything I do. Printing 1gb of text to the terminal is useless to me from a value perspective (it's nearly 1000 full length novels of text, I can't read that much in a year if it was all I did, so going from 5 minutes to 5 seconds is almost meaningless to me).

The sum total of the value I see from that change is "maybe once or twice a year when I cat a long file by mistake, I don't have to hit ctrl-c".

I also genuinely fail to understand how this guy gets meaningful value from printing 1gb of text to a terminal that quickly either... even the fastest of speed readers are still going to be SO MANY orders of magnitude slower to process that, and anything else he might want to do with that text is already plenty fast - copying it to a new file? already fast. Searching it? fast. Deleting it? fast. Editing it? fast.

So... I won't make any comment on why this case is slow or the discussion around it (I haven't read it; it sounds like it could be faster, and they made a lot of excuses not to solve his specific edge case). All I'll say is your argument sure sounds like adding an edge case that nearly no one has, thereby making the terminal more general.

Any terminal I wrote for myself sure as fuck wouldn't be as fast as that because I don't have the rendering experience he has, and my use case doesn't need it at all.

jvanderbot

100%. If OP is willing to maintain a Rust crate that takes in no dependencies and can determine terminal size on any platform I choose to build for, then I will gladly use that crate.

OTOH, if minimizing dependencies is important for a very specific project, then the extra work of implementing the functionality falls on that project. It will be simpler because it must only support one project. It may not receive critical compatibility updates or security updates also.

It does not fall on the community to go and remove dependencies from all their battle tested crates that are in common use. I think anyone and everyone would choose a crate with fewer over more dependencies. So, go make them?

the_mitsuhiko

> If OP is willing to maintain a rust crate that takes in no dependencies and can determine terminal size on any platform I choose to build for, then I will gladly use your crate.

I already mentioned this on Twitter, but not a lot of people work this way. I don't have to point you further than my sha1-smol crate. It was originally published under the sha1 name and was the one that the entire ecosystem used. As rust-crypto became more popular, there were demands that the name be used for rust-crypto instead.

I have given up the name, moved the crate to sha1-smol. It has decent downloads, but it only has 40 dependents vs. >600 for sha1. Data would indicate that people don't really care all that much about it.

(Or the sha1 crate is that much better than sha1-smol, but I'm not sure if people actually end up noticing the minor performance improvements in practice)

the_mitsuhiko

It's not standardized, but those calls do not change. The Windows calls in particular are guaranteed ABI-stable since they are compiled into a lot of binaries. There are definitely issues with ioctl, but the changes landing in terminal-size or any of the dependencies that caused all these releases are entirely unrelated to the ioctl/TIOCGWINSZ constants/winsize struct. That code hasn't changed.

mrweasel

In this case the terminal-size crate just calls Rustix's tcgetwinsize, which in turn just calls the libc tcgetwinsize. So I suppose you could save yourself a whole bunch of dependencies by just doing the same yourself. The only cost is Windows support.

Whether this particular API has been stable, or at least reasonably defined, for 50 or 25 years is a detail, because the dependency doesn't even pretend to deal with that, and the function is unlikely to change or be removed in the near future.

Joker_vD

> If this particular API has been stable

Well, it hasn't. tcgetwinsize() was proposed (under this name) in 2017 and was standardized only in 2024. So it's less than a 10-year-old API, which is missing from lots of libc implementations, see e.g. [0]. Before its appearance, you had to mess with doing ioctls and hope your libc exposed the TIOCGWINSZ constant (which glibc by default didn't).

[0] https://www.gnu.org/software/gnulib/manual/html_node/tcgetwi...

mrweasel

I had to check the Rustix implementation again, because that would indicate that terminal-size wouldn't work on a number of operating systems. However, Rustix also uses TIOCGWINSZ in its tcgetwinsize implementation.

titzer

Terminals are a good example of something that seems really simple but is a major PITA because of too many different vendors in the early days, and no industry standard emerged. What is the closest thing? VT100? VT102? I mostly write raw to those, but stuff like terminal size and various other features like raw (non-cooked) mode are crappy and require ioctl's and such. Frankly, it sucks.

...but the libraries suck even more! If you don't want to link against ncurses then may God have mercy on your soul.

Joker_vD

Last summer I toyed with writing an "async prompt" a la Erlang's shell, with output scrolling and line editing (see e.g. [0] for an example of what I am talking about), but it is so bloody difficult to do correctly, especially when the input spans several lines and there are full-width characters on the screen, that I abandoned it.

[0] https://asciinema.org/a/s2vmkOfj6XtJkQDzeM6g2RbPZ

jumpkick

I recently revived a web app I wrote in 2006, my first startup. It was a social media site focused on media sharing. A pretty simple LAMP stack for the time. PHP 5, MySQL 3.2, but it has all of your typical (for the time) social media features. I revived this app because I wanted some hands-on time with new CI/CD tech that I don't get to use at my day job, so I'm working to extremely over-engineer the app's deployment process as a learning project. I could have used Wordpress or some other Hello World app, but this is a lot more fun.

I had written nearly all of the PHP from scratch. I wrote libraries for authentication/authorization, templating, form processing etc. I used one PEAR library for sending email. The frontend was vanilla HTML and there was barely any JavaScript to speak of. We used Flash for media playback. In other words, myself and my small team built nearly all of it ourselves. This was just how you did most things in 2006.

It only took me about an hour to get the 19-year old app up and running. I had to update the old PHP mysql drivers to mysqli, and update the database schema and some queries to work in MySQL 8 (mostly wrapping now-reserved words with backticks and adjusting column defaults which are now more strict). The only thing that didn't work was the Flash.

An hour to revive an app from 2006. Contrast this with my day job, wherein we run scores of Spring Boot apps written in Java 8 that have pages of vulnerabilities from tens of dozens of dependencies, which are not easy to update because updating one library necessitates updating many other libraries, and oh my goodness, the transitive dependencies. It's a nightmare, and because of this we only do the bare minimum of work to update the most critical vulnerabilities. There's no real plan to update everything because it's just too tall of an order.

And the funny thing is, if you compare what this PHP app from 2006 did, which had truly, barely any dependencies, to what these Spring Boot apps do, there is not a lot of difference. At the end of the day, it's all CRUD, with a lot more enterprise dressing and tooling around it.

skydhash

Go and the C Linux world have sold me on the fat-library philosophy. You bring in a library to solve a problem in a domain, then add your specific bits. You don't go and bring in a dependency for each item on your checklist. Yes, there may be duplicated effort, but the upgrade path is way easier.

9dev

That works until you’re sitting between two fat libraries with overlapping, but incompatible concerns. It’s tradeoffs all the way down.

HumblyTossed

> That works until you’re sitting between two fat libraries with overlapping, but incompatible concerns. It’s tradeoffs all the way down.

This reads like the dev equivalent of manager speak.

ok123456

Most of that new CI/CD tech is standard now precisely because of all the complexity added by maintaining third-party dependencies and constant changes in the runtime environment. This isn't a problem for the most part for an old LAMP application deployed by scp.

madduci

While I might agree with you on some points, I don't want to spend time reinventing the wheel and introducing bugs into it.

Take for example standard communication message formats like FHIR or HL7. You definitely don't want to implement all the definitions for the standard, which is already complicated.

Writing cryptographic functions yourself is also typically a shot in your own foot, as all these years of critical security findings have proved.

We live in a time where you want to actually solve a business problem, by focusing on the problem and not on how the solution is built. With the advent of AI this is even more critical, since so much code feels stitched together blindly.

Spending time on developing everything yourself might give you a good shot in the long run, but first you need to survive the competition, who may have already captured the market by using fast, throw-away code at the beginning.

fm2606

> my day job, wherein we run scores of Spring Boot apps written in Java 8 that have pages of vulnerabilities from tens of dozens of dependencies, which are not easy to update because updating one library necessitates updating many other libraries, and oh my goodness, the transitive dependencies.

At my job we have a fairly strict static analysis policy and starting in April it is going to get even more strict.

Have you looked at https://docs.openrewrite.org/ to automatically upgrade your dependencies?

I just migrated from Java 8, Spring Boot2 and Swagger to Java 17, Spring Boot 3.3 and OpenApi 3. It was pretty painless.

Now, I still have to update some dependencies and transitive dependencies, but the biggest hurdles were taken care of by the migrations.

pie_flavor

An important point here is that the transitive dependency issue completely does not exist in Rust. If you upgrade a crate to a version which upgrades its public dependency, i.e. it uses it in its APIs and you need to interact with it to interact with those APIs, then you obviously need to upgrade your copy of the subdependency at the same time. But private transitive dependencies are totally irrelevant unless they link to C libraries. You can have as many semver-incompatible versions of a crate in the same dependency tree as you want, and even depend on multiple versions directly if you need to. No Java-style sweeping upgrades are ever needed, just upgrade the one thing with the vulnerability. (I believe C# has the same feature, though it's a little more baroque about it.)
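
A sketch of what that looks like via Cargo's dependency renaming (the versions here are just examples):

    [dependencies]
    # Two semver-incompatible versions of the same crate, side by side.
    rand = "0.9"
    rand_old = { package = "rand", version = "0.8" }

In code, the two show up as `rand` and `rand_old`, and Cargo compiles each independently.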

kgilpin

I’m curious if the OpenRewrite project has any value to you in keeping your Java stuff up to date?

(I’m not affiliated with it; just curious about strategies for upgrading and maintaining apps that use big frameworks.)

999900000999

I agree 100%.

Even though NodeJS is largely responsible for my career, NPM has given me more trauma than my messed up childhood.

Imagine you're a new programmer working on a brand new app to show all your friends. You want to add a new dependency, but it doesn't like all the other dependencies; cool, you say, I'll just update them. Next thing you know absolutely nothing works and Babel is screaming at you.

No worries, you'll figure something out. Next thing you know you're staring at open git issues where basic things literally don't work. Expo, for example, has an open issue where a default new React Native project just won't build for Android.

It's like no one cares half the time, and in that case the solution isn't even in the node ecosystem, it's somewhere in the Android ecosystem. It's duct tape all the way down. But this can also inspire confidence: if a billion-dollar project can ship non-functional templates, then why do I have imposter syndrome when my side projects don't work half the time!

pie_flavor

This is a problem for Node, but it isn't a problem for Rust. You don't need any dependencies to 'like' other dependencies. You can have all the versions at the same time and nothing breaks.

strawhatguy

Certainly nodejs and npm getting out of hand was a wakeup call for me, about 15 years ago.

I started to view it as a personal failure when I brought in that first dependency, which meant needing all of the packaging and organization just to use that first package.json, rather than just having plain JS loaded from a script tag.

999900000999

I have a side project right now where I tried my hardest to use anything aside from NodeJS.

Even though it's just for my personal consumption, and I doubt more than five or six people will ever see it, I ended up having to use NodeJS and HTML because it's simply the best solution.

Imagine an alternate timeline where Google decided to offer other languages as first-class citizens in the browser. Could you imagine Golang or Dart? No compiling down to JS, just running natively.

strawhatguy

Netscape (Eich) wrote JS, and specifically Microsoft wrote XMLHttpRequest, which allowed Google to exist. And supposedly WASM is to be that thing, so any language can compile down to it. Hasn't totally worked out, I guess. A browser would have to take the leap to be WASM-only, and compile even JS to it.

I'm doing backend work mostly right now. If I start some other project, I'll probably try using htmx and tailwindcss as my two js script tags, and the rest will be controlled from the backend.

BrendanEich

See https://news.ycombinator.com/item?id=42838110 (downthread). Polyglot VMs are hard to pull off and always have a _primus inter pares_ (C# for .NET, Java for the JVM, JS for the JS+wasm "Web VM").

abound

This was something that surprised me about the Rust ecosystem, coming from Go. Even a mature Go project (e.g. some business' production web backend) may only have 10-20 dependencies including the transitive ones.

As noted in this post, even a small Rust project will likely have many more than that, and it's virtually guaranteed if you're doing async stuff.

No idea how much of it is cultural versus based on the language features, e.g. in Go interfaces are implicitly satisfied, no need to import anything to say you implement it.

jitl

One big reason is that Go has a very nice, complete standard library, and Rust really does not.

Things built into the Go language or its standard library that need a dependency in Rust:

- green threads

- channels

- regular expressions

- http client

- http server

- time

- command line flags

- a logger

- read and write animated GIFs

I don’t love the Go language, but it’s the leader for tooling and standard library, definitely the best I’ve used.

7bit

I hate that. I don't want dependencies for serialization or logging. But you need them, and now you have to choose which of the dozen logging crates you want.

As a beginner this is horrible, because everybody knows serde, but I have to learn that serde is the de facto standard, and that is not easy because, coming from other languages, it sounds like the second-best choice. And that goes for most Rust crates.

burntsushi

This is why things like https://blessed.rs exist. Although since they're unofficial, their discoverability is also likely a problem.

drrotmos

Which may or may not be fine in a Go binary that runs on a modern desktop CPU, but what if your code is supposed to run on say an ESP32-C3 with a whopping 160 MHz RISC-V core, 400 KB of RAM and maybe 2 MB of XIP flash storage?

You could of course argue that that's why no-std exists in Rust, or that your compiler might optimize out the animated GIF routines, but personally, I'd argue that in this context it is bloat, and that, while it could occasionally be useful, it could just as easily be a third-party library.

jitl

It’s the same as in C, Rust, or any other programming language I’ve ever used. If you don’t use a library, it doesn’t end up linked in your executable. Don’t want to animate GIFs on your microcontroller, then you don’t write `import “image/gif”` in your source file.

For a microcontroller sized runtime, there’s https://tinygo.org/

I think the lack of a strong standard library actually leads to more bloat in your program in the long run. Bloat is needing to deal with an ecosystem that has 4 competing packages for time, ending up with all 4 installed because other libraries you need didn't agree, and then needing ancillary compatibility packages for converting between the different time packages.

chikere232

Does anyone use the full standard library for embedded targets? I've not seen it done in C; Java has a special embedded edition, Python has MicroPython, and Rust seems to usually use no_std, but I might be wrong there.

It seems like a bad reason to constrain the regular standard library

Cyph0n

For Rust and Go in particular, the difference is in the standard library. The Rust stdlib is (intentionally) small.

chris_overseas

Agreed, and the small stdlib is one of the main reasons for this problem. I understand the reasoning why it's small, but I wish more people would acknowledge the (IMHO at least) rather large downside this brings, rather than just painting it as a strictly positive thing. The pain of dealing with a huge tree of dependencies, all of different qualities and moving at different trajectories, is very real. I've spent a large part of the last couple of days fighting exactly this in a Rust codebase, which is hugely frustrating.

palata

What is the reason to keep it small? Genuinely interested, I actually don't understand.

Embedded systems maybe?

chikere232

It might be a bad choice on rust's part.

IMO they should over time fold whatever ends up being the de-facto choice for things into the standard library. Otherwise this will forever be a barrier to entry, and a constant churn as ever newer fashionable libraries to do the same basic thing pop up.

You don't need a dozen regex libraries, you just need one that's stable, widely used and likely to remain so.

burntsushi

> You don't need a dozen regex libraries, you just need one that's stable, widely used and likely to remain so.

That is the case today. Virtually everyone uses `regex`.

There are others, like `fancy-regex`. But those would still exist even if `regex` was in std. But then actually it would suck, because then `fancy-regex` can't share dependencies with `regex`, which it does today. And because of that, you get a much smoother migration experience where you know that if your regexes are valid with `regex`, they'll work the same way in `fancy-regex`.

A better example might be datetime handling, of which there are now 3 general purpose libraries one can reasonably choose. But it would have been an unmitigated disaster if we (I am on libs-api) had just added the first datetime library to std that arose in the ecosystem.

palata

Agreed.

> and a constant churn as ever new fashionable libraries

Isn't that the situation in Javascript? I don't work in Javascript but to me it feels like people migrate to a new cool framework every 2 months.

sesm

I would expect crates like `stdlib-terminal` and `stdlib-web-api` in that case.

Honestly, something feels off with Rust trying to advertise itself for embedded: no stdlib and encouraging stack allocation, but then married to LLVM (which doesn't have good embedded target support) and with panics in the language.

Building a C++ replacement for a browser engine rewrite and building a C replacement for embedded have different and often conflicting design constraints. It seems like Rust is a C++ replacement with extra unnecessary constraints of a C replacement.

palata

I often wonder about this: obviously Rust is fashionable, and many people push to use it everywhere. But in a ton of situations, there are modern memory-safe languages (Go, Swift, Kotlin, Scala, Java, ...) that are better suited.

To me Rust is good when you need the performance (e.g. computer vision) and when you don't want a garbage collector (e.g. embedded). So really, a replacement for C/C++. Even though it takes time because C/C++ have a ton of libraries that may not have been ported to Rust (yet).

Anyway, I guess my point is that Rust should focus on the problem it solves. Sometimes I feel like people try to make it sound like a competitor to those other memory-safe languages and... even though I like Rust as a language, it's much easier to write Go, Swift or Kotlin than Rust (IMHO).

petecorreia

Go's vast standard library helps a lot with keeping dependency numbers down

lionkor

Exactly this. You want logging in Rust? You will need at least `log` and another logger crate, for example `env_logger`, and maybe the `dotenvy` crate to read `.env` files automatically; you already have 3 direct dependencies plus all the transitive ones.
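
A minimal sketch of that setup (assuming `log` and `env_logger` are declared in Cargo.toml; the `dotenvy` part is left out):

    // Typical minimal Rust logging: the `log` facade plus `env_logger`.
    fn main() {
        env_logger::init(); // reads the RUST_LOG environment variable
        log::info!("hello from two dependencies");
    }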

In Go: https://pkg.go.dev/log

LinXitoW

The built-in "logging" in Go is barely more than a fancy Println. For example, where are the levels, like DEBUG and WARN?

nostradumbasp

Love the thesis statement. There is a lot of hidden cost in allowing abstractions from other libraries to be exposed over your own. I'm not here to say that should never be done, but the future costs really need to be balanced at the decision point.

If the encapsulating package churns over its design or changes its goals, it creates instability. Functionality is lost or broken apart when previously it was the simplest form of guarantee in software engineering in existence. It also deters niche but expert owners who aren't career OSS contributors from taking part in an ecosystem. "I made a clean way to do X!" being followed up by weeks of discussion, negotiation, and politics so that "X fits under Y because maybe people like Z" is inefficient and wasteful of everyone's time.

If there's one thing I've learned in my life, it's that the simplest things survive the longest. Miniliths and monoliths should be celebrated way more often. Rust isn't alone in this, by the way; I've seen this across languages. I've often seen OSS communities drive hard for atomistic-size packages, and I often wonder if it's mostly for flag-planting and ownership-transfer purposes rather than to benefit the community that actually uses these things.

cpursley

> It's 2025 and it's faster for me to have ChatGPT or Cursor whip up a dependency free implementation of these common functions

I sort of stumbled upon this myself and am coming around to this viewpoint. Especially after dependency hell of a big react app.

And there's also the saas/3rd party services dependencies to consider. Many of them are common patterns and already solved problems that LLMs can clone quickly.

macNchz

I’ve definitely become much more likely to start with small internal utility functions implemented by AI before adding a library than I would have been in the past—it’s quite effective for contained problems, I can have the AI write much more complete/robust implementations than I would myself, and if I do eventually decide to add a library I have a natural encapsulation: I can change the implementation of my own functions to use the library without necessarily having to touch everywhere it’s being used. Makes it easy to test a couple of different libraries as well, when the time comes.

dhagrow

This is deliciously ironic for me. For those who aren't aware, Armin is the original author of Flask, the Python web framework. Around the same time, there was a very similar library called Bottle. They were almost identical in functionality, but while Flask became very popular, I always preferred to stick with Bottle because it was a single file with no dependencies, which meant it was very easy to just copy into your project.

It also made it very easy to hack around with, and I got to the point where I understood the entire thing. I did couple it with Gevent for the server and websockets, but I was able to put together some heavy-lifting projects that way.

I still feel a strong impulse to use it for small web projects. Sadly, it didn't keep up with a lot of the more modern practices that Python has introduced over the years, so it does feel a bit dated now.

zelphirkalt

You need capable engineering to build it yourself. If you only have engineers who have only ever reached for libraries in ecosystems like NPM or PyPI, you will find them hard-pressed to develop solutions for many things themselves, especially solutions that stand the test of time and have the flexibility they need. It takes a lot of practice to "avoid programming yourself into a corner".

Another thing I noticed is that one can often easily do better than existing libraries. In one project I implemented a parser for a markdown variant that has some metadata at the top of the file. Of course I wrote a little grammar, not even a screen of code, and just like that, I had a parser. But I did not expect how bad the frontend library parsing the same file would be. That one broke when you had hyphens in metadata identifiers. At first I was confused about why it could not deal with that. Then it turned out that it directly used the metadata identifiers as object member names ... Instead of using a simple JSON object, they had knowingly or unknowingly chosen to artificially limit the choice of names and to break things when there are hyphens, like in "something-something". In the end my parser was abandoned, people arguing that they would have to "maintain" it. Well, it just worked and could easily be adapted to grammar changes. There was nothing difficult to understand about it either, if you had just a little knowledge of parsers. Sounds incredible, but apparently no one on the team except me had ever written a parser using a parser generator library.

And like that, there are many other examples.

gwbas1c

> But when you end up using one function, but you compile hundreds, some alarm bell should go off.

About a year ago I ran a project to update 3rd party dependencies.

One of the dependencies was a rich math library, full of all kinds of mathematical functions.

I did a little bit of digging, and we were only using one single method, to find the median of a list.

I pointed the engineer to the Wikipedia page and told him to eliminate the dependency and write a single method to perform the mathematical operation.

---

But, IMO, the real issue isn't using 3rd party dependencies: It's that we need a concept of pulling in a narrow slice of a library. If I just need a small part of a giant library, why do I have to pull in the whole thing? I think I've heard someone propose "microframeworks" as a way to do this.

saghm

In Rust, a library can define "features" that can be conditionally enabled or disabled when depending on it, which gives a built-in way to customize how much of the library is actually included. Tokio is a great example of this; people might be surprised to learn that the total number of direct dependencies that are required by tokio is only two[1]; everything else is optional!

Unfortunately, it doesn't seem like people are super diligent about looking into the default feature set that's used by their dependencies and proactively trimming that down. It doesn't help that the syntax for pulling in extra features is less verbose than removing optional but default ones (which requires both specifying "no-default-features" and then manually adding every one of the default features that you do still want back to the list of ones you pull in), and it _really_ doesn't help that the only way for libraries to expose the ability to prune unneeded features from their own dependencies to the users who inherit them is by manually making their own feature that maps to the features of every single one of their own dependencies. For example, if you're writing a library with five dependencies, and every one of them has one required dependency and four optional ones, giving the users of your library full control over what transitive features they pull in would mean making 20 features in your own library mapping to each of those transitive features, and that's not even counting the ones that you'd want to make for your own code in order to be a good citizen and not force downstream users to include all of your own code.
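
For example, a hedged sketch of the opt-out syntax being described (the crate and feature names are placeholders):

    [dependencies]
    # Opt out of the default feature set, then add back only what you use.
    somecrate = { version = "1", default-features = false, features = ["json", "tls"] }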

More and more I'm coming to the opinion that the ergonomics around features being so much worse for trying to cut down on the bloat is actually the catalyst for a lot of the issues around compile times in Rust. It doesn't seem to be super widely discussed when things like this come up though, so maybe it's time that I try to write a blog post or something with my strong feelings on this so at least I'll have something to point to assuming the status quo continues indefinitely.

[1]: https://github.com/tokio-rs/tokio/blob/ee19b0ed7371b069112b9...

ebiester

That's how we get things like leftpad in the JS ecosystem.

On one side, I think if we had a good system of trust, that's not a problem.

And part of me likes the idea of something like Shadcn - you like a component? Copy it into your library. However, if there ends up being a vulnerability, you have no idea if you are affected.

For some code, that's not a problem. For other code, we truly depend on having as many eyes as possible on it.

cluckindan

Wow. For those who don’t know, here’s a pseudocode implementation:

    Median(list) {
      // assumes 0-based indexing
      let len = length(list)
      if len % 2 == 0 {
        let x = len/2 - 1
        return (list[x] + list[x+1]) / 2
      }
      return list[len/2]
    }
Note: assumes the list is already sorted.

Managed to resist calling is_odd there!

gpm

In most cases I'd probably use nearly this. I note that it contains a bug due to integer overflow if naively translated to most languages.

But if I have a big enough list that I care about space usage (I don't want to make a copy that I sort and then throw away), or speed (I care about O(n) vs O(n log(n))) I'd be looking for a library before implementing my own.

Here are the relevant algorithms if you really want to implement your own fast median code though: https://cs.stackexchange.com/questions/1914/find-median-of-u...

cluckindan

I use that math textbook algorithm in production to produce a median from a list which has a bounded size and is already sorted by the db, though that bound could technically grow to INT_MAX if someone managed to make that many requests in five minutes. Not very likely. :-)

TZubiri

A metric that I would like to focus in is dependency depth. We had this with the OSI model way back, but at this point it seems that anything beyond layer 7 just gets bucketed into 8+.

We need to know if a dependency is level 1 or level 2 or level 3 or 45. And we need to know what the deepest dependency on our project is. I might be naive, but I think we should strive to reduce the depth of the dependency graph and maybe aim for like 4 or 5 layers deep for a web app, tops.

themk

I've often thought the same. I would love a dependency manager that not only surfaced this information, but required you to declare upfront what level your library is.

I think it would rein in the bloat.

bluGill

Worse, sometimes the upstream is complex enough that you don't want to do it yourself, and then the upstream quits maintaining their project. I have in my company some open source projects that we still use that haven't been touched upstream since 2012, but either there is no replacement, or the replacement is so different that it isn't worth the effort to upgrade. Fortunately none of these are projects where I worry about security issues; I'm just annoyed by the lack of support. But if they faced the network I'd be concerned (and we do have security people who would force a change).

adrianN

Software that hasn’t been touched in ten years and still does the job is about as ideal as a dependency can be.

palata

I tend to agree, but it may have downsides too: it may do the job and still have serious security issues. If you don't know what it does, you have no way to know about the security issues.

WhyNotHugo

In theory yes.

Although if a ten-year-old dependency is written in Python, it's likely not going to work any more due to changes in the build system, stdlib, etc.

bluGill

But does it? If the software is a spell checker for a language I don't know, I will have no idea if it is any good.

adrianN

The same can be true for the dependency that releases weekly updates.

qup

Maybe it's even actively maintained!

xnorswap

Or they bait-and-switch freedom.

So they start off with an open source Free solution, and then later switch to a paid-for model and abandon the Free version. This is particularly painful when it's a part of your system that's key to security.

You're left wondering if you should just pay the ransom, switch to a different solution entirely, or gamble and leave it on an old unpatched version.

(Looking at you, IdentityServer.)

Either way you regret ever going with them.

rad_gruchalski

Or fork it and maintain yourself. Wanted to write it yourself? Now is the perfect opportunity.

csomar

> I'm just annoyed by the lack of support

You know you can hire someone to support them right?