
Supply chain attacks are exploiting our assumptions

tharne

This is something I've never totally understood when it comes to Rust's much loved memory safety vs. C's lack of memory safety. When it comes to replacing C code with Rust, aren't we just trading memory risk for supply chain risk?

Maybe one is more important than the other, I don't know. All the languages I use for work or hobbies are garbage collected and I'm not a security professional. But it does seem like the typical Rust program, with its massive number of "cargo add"s, is an enormous attack surface.

bluGill

Supply chain attacks have always existed. Because C doesn't have a package manager, it made them slightly harder, in that a dependency wouldn't be automatically updated, while in Rust it can be. However, this difference is very slight - on Linux, many people use libraries from a package manager that updates them when there is a new release, and it wouldn't be hard to get a bad update into a package (xz did exactly that).

If you have packages that don't come from a package manager - Windows installs, phone installs, Snap, Docker, Flatpak, and likely more - you have a different risk: a library may not have been updated, so you are vulnerable to a known flaw.

There is no good/easy answer to supply chain risk. It is slightly different in Rust because you can take the latest release if you want (though there is plenty of ability to stay with an older release if you prefer), but this doesn't move the needle on overall risk.

MattPalmer1086

It's rare not to use open source libraries no matter the language. Maybe C code tends to use fewer, I don't know.

This doesn't prove anything of course, but the only High severity vulnerability I had in production this year was a C library. And the vulnerability was a buffer overflow caused by lack of memory safety.

So I don't think it's a simple trade off of one sort of vuln for another. Memory safety is extremely important for security. Supply chain attacks also - but using C won't defend you from those necessarily.

immibis

There's no canonical package manager or packaging convention for C and C++ libraries, since they predate that sort of thing. As a result, there's a lot more friction to using dependencies, and people tend to use fewer of them. Common OS libraries are fair game, and there are some large, widely used libraries like Boost, but it's extremely unusual for a C or C++ project to pull in 20+ very small libraries. A chunk of functionality has to be quite big and useful before it overcomes the friction of making it a library.

alganet

"Not invented here" syndrome may actually not be a syndrome.

thewebguyd

Agree. We took NIH too far.

You don't need to pull in a library for every little function, that's how you open yourself up to supply chain risk.

The left-pad fiasco, for example. Left-pad was 11 lines of code. Literally no reason to pull in an external dependency for that.
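For scale, the whole behavior fits comfortably in a few lines of Rust (a hypothetical stand-in for illustration, not the actual npm package):

```rust
/// Pad `s` on the left with `pad` until it is at least `width`
/// characters long - the entirety of what left-pad did.
fn left_pad(s: &str, width: usize, pad: char) -> String {
    let len = s.chars().count();
    if len >= width {
        return s.to_string();
    }
    // Build the padding prefix, then append the original string.
    let mut out: String = std::iter::repeat(pad).take(width - len).collect();
    out.push_str(s);
    out
}

fn main() {
    assert_eq!(left_pad("7", 3, '0'), "007");
    assert_eq!(left_pad("abc", 2, ' '), "abc"); // already wide enough
    println!("ok");
}
```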

Rust is doomed to repeat the same mistakes because it also has an incredibly minimal standard library. So now we get micro-crates for simple string utilities, or scopeguard, which itself is under ~400 LoC; if you don't need everything in scopeguard, a much simpler RAII guard can be written yourself for your own project.
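As a sketch of that point, the core of what the scopeguard crate provides - run a closure when a value is dropped - can be hand-rolled in a dozen lines (this is an illustrative minimal version, not the crate's actual implementation):

```rust
use std::cell::Cell;

/// Minimal hand-rolled scope guard: runs a closure on drop.
struct ScopeGuard<F: FnMut()> {
    on_drop: F,
}

impl<F: FnMut()> Drop for ScopeGuard<F> {
    fn drop(&mut self) {
        (self.on_drop)();
    }
}

fn main() {
    let cleaned = Cell::new(false);
    {
        // Cleanup runs automatically when the guard leaves scope,
        // including on early return or panic unwinding.
        let _guard = ScopeGuard { on_drop: || cleaned.set(true) };
        assert!(!cleaned.get()); // still inside the scope
    }
    assert!(cleaned.get()); // guard dropped, closure ran
    println!("ok");
}
```

The real crate adds conveniences (a `defer!` macro, guards that own a value), but the Drop-based mechanism above is the whole trick.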

The industry needs to stop being terrified of writing functionality that already exists elsewhere.

udev4096

Instead of securing the "chain", we should isolate every library we import and run it in a sandbox. We should adopt the model of QubesOS, which follows security by isolation. There are lots of native sandboxing mechanisms in the Linux kernel: Bubblewrap, Landlock, gVisor and Kata (containers, not native), microVMs, namespaces (user, network), etc.

criemen

> we should instead isolate every library we import and run it under a sandbox

I don't see how that'd be possible. Often we want the library to do useful things for the application, in the context of the application. What would incentivize developers to specify more fine-grained permissions per library than the union of everything their application requires?

I see more use in sandboxing entire applications and giving them more selective access than "the entire user account" like we do these days. This is maybe more how smartphone operating systems work than desktop computers.

immibis

In languages without ambient I/O capabilities, it's not as hard as it sounds if you're used to languages with them. Suppose the only way you can write a file is if I give you a handle to that file - then I know you aren't going to write any other files. Of course, main() receives a handle from the OS to do everything.

If I want you to decode a JPEG, I pass you an input stream handle and you return an output memory buffer; because I didn't give you any other capabilities I know you can't do anything else. Apart from looping forever, presumably.
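In Rust terms, the discipline looks something like the following. Note this is only a sketch of the style, not an enforcement mechanism: Rust's standard library still gives every crate ambient access to `std::fs` and friends, which is exactly what a capability-safe language would remove. The "decoding" here is a stand-in (byte inversion), not a real JPEG decoder:

```rust
use std::io::{Cursor, Read, Result};

/// Capability-style decoder: it receives only a readable handle and
/// returns a buffer. The signature advertises everything it needs -
/// it has no handle with which to touch files, sockets, or anything else.
fn decode(input: &mut dyn Read) -> Result<Vec<u8>> {
    let mut raw = Vec::new();
    input.read_to_end(&mut raw)?;
    // Stand-in for real decoding: invert each byte.
    Ok(raw.iter().map(|b| !b).collect())
}

fn main() -> Result<()> {
    // The caller decides exactly which capability to hand over -
    // here an in-memory cursor rather than a real file.
    let mut stream = Cursor::new(vec![0x00u8, 0xFF]);
    let out = decode(&mut stream)?;
    assert_eq!(out, vec![0xFF, 0x00]);
    println!("ok");
    Ok(())
}
```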

It still requires substantial discipline because the easiest way to write anything in this hypothetical language is to pass the do-everything handle to every function.

See also the WUFFS project: https://github.com/google/wuffs - where things like I/O simply do not exist in the language, and therefore, any WUFFS library is trustworthy. However, it's not a general-purpose language - it's designed for file format parsers only.

criemen

Fair enough, it makes more sense in a, say, Haskell-style pure functional language. Instead of getting a general IO monad, you pass in more restricted functionality.

Still, it'd be highly painful. Would it be worth the trade-off to prevent supply chain attacks?

whytevuhuni

I don't know what the next programming language after Rust will look like, but it will definitely have built-in effects and capabilities.

It won't fix everything (see TARmageddon), but left-pad-rs's build.rs file should definitely not be installing a sudo alias in my .bashrc file that steals my password when I cargo build my project.

darrenf

Can't help but think that Perl's tainted mode (which is > 30yrs old) had the right idea, and it's a bit strange how few other languages wanted to follow its example. Quoting `perldoc perlsec`:

You may not use data derived from outside your program to affect something else outside your program--at least, not by accident. All command line arguments, environment variables, locale information (see perllocale), results of certain system calls ("readdir()", "readlink()", the variable of "shmread()", the messages returned by "msgrcv()", the password, gcos and shell fields returned by the "getpwxxx()" calls), and all file input are marked as "tainted". Tainted data may not be used directly or indirectly in any command that invokes a sub-shell, nor in any command that modifies files, directories, or processes, with the following exceptions: [...]
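The same idea can be approximated in a typed language with a newtype wrapper: external input arrives "tainted" and the type system refuses to let it flow onward until it passes an explicit validation step. A hypothetical Rust sketch (the `Tainted` type and its alphanumeric check are invented for illustration, not from any crate):

```rust
/// Wrapper for data from outside the program. The inner String is
/// private, so the only way to extract it is through validate().
struct Tainted(String);

impl Tainted {
    fn from_external(s: String) -> Self {
        Tainted(s)
    }

    /// The only way out: validation produces clean data or an error.
    fn validate(self) -> std::result::Result<String, &'static str> {
        if !self.0.is_empty() && self.0.chars().all(|c| c.is_ascii_alphanumeric()) {
            Ok(self.0)
        } else {
            Err("tainted data failed validation")
        }
    }
}

fn main() {
    // Shell metacharacters never reach a command line by accident.
    let bad = Tainted::from_external("input;rm".to_string());
    assert!(bad.validate().is_err());

    let good = Tainted::from_external("report42".to_string());
    assert_eq!(good.validate().unwrap(), "report42");
    println!("ok");
}
```

Unlike Perl's runtime tracking, this is opt-in and only covers data you remember to wrap, but the compiler does the enforcement from there.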

bluGill

I hope you are right, but I fear there is no way to make such a thing usable. You likely end up with complex permissions that nobody understands, so you just "accept all", or with programs whose legitimate actions fall under the same permission as the evil thing you want to block.

marcosdumay

> but fear that there is no way to make such a thing that is usable

A function's declaration declares every action it can take on your system, and any change adding new actions is a breaking change in the library.

We've known how to do this for ages. What we don't have is a good abstraction to let the compiler check the declarations and transform the actions into higher-level ones as they go up the stack.

cesarb

> Instead of securing the "chain", we should instead isolate every library we import and run it under a sandbox.

Didn't we have something like that in Java more than a decade ago? IIRC, you could, for instance, restrict which classes could do things like opening a file or talking to the network.

It didn't quite work, and was abandoned. Turns out it's hard to sandbox a library; the exposed surface ended up being too large, and there were plenty of sandbox escapes.

> There are lots of native sandboxing in linux kernel. Bubblewrap, landlock, gvisor and kata (containers, not native), microVMs, namespaces (user, network), etc

What all of these have in common is that they isolate processes, not libraries. If you could isolate each library in a separate process without killing performance with IPC costs, you could use them. One example is desktop thumbnailers, which parse untrusted data and can use sandboxes to protect against bugs in the image and video codec libraries they use.

yupyupyups

If there is a kernel-level feature to throw sections of a process's memory into other namespaces, then yes, that may work. If you mean running a Xen hypervisor for sqlite.so, then no thanks.