
Demonstrably Secure Software Supply Chains with Nix

beardedwizard

The bummer about a lot of supply chain work is that it does not address the attacks we see in the wild, like xz, where malicious code was added at the source and then attested all the way through.

There are gains to be had through these approaches, like inventory, but nobody has a good approach to stopping malicious code entering the ecosystem through the front door, and attackers find this much easier than tampering with artifacts after the fact.

kuruczgy

Actually, this is not quite true: in the xz hack, part of the malicious code was in generated files present only in the release tarball.

When I personally package stuff using Nix, I go out of my way to build everything from source as much as possible. E.g. if some repo contains checked-in generated files, I prefer to delete and regenerate them. It's nice that Nix makes adding extra build steps like this easy. I think most of the time the motivation for having generated files in repos (or release tarballs) is the limitations of various build systems.

throwawayqqq11

Your preference to compile your backdoors yourself does not really fix the problem of malicious code supply.

I have this vague idea to fingerprint the relevant AST down to all syscalls and store it in a lock file to have a better chance of detection. But this isn't a true fix either.
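A minimal sketch of that idea, assuming Python's `ast` module stands in for whatever parser the real ecosystem would need, and the lock-file shape is made up:

```python
import ast
import hashlib
import json

def ast_fingerprint(source: str) -> str:
    """Hash a normalized AST dump, ignoring formatting and comments."""
    tree = ast.parse(source)
    # Dumping without attributes drops line/column info, so
    # whitespace- and comment-only edits do not change the hash.
    dump = ast.dump(tree, include_attributes=False)
    return hashlib.sha256(dump.encode()).hexdigest()

def write_lock(files: dict[str, str], path: str = "ast.lock") -> None:
    """Record a fingerprint per source file, lock-file style."""
    lock = {name: ast_fingerprint(src) for name, src in files.items()}
    with open(path, "w") as f:
        json.dump(lock, f, indent=2, sort_keys=True)

# A comment-only change leaves the fingerprint intact:
a = ast_fingerprint("x = do_syscall(1)\n")
b = ast_fingerprint("x = do_syscall(1)  # reviewed\n")
assert a == b
```

Any change to actual logic would flip the stored hash, which is the detection signal; the open question below is whether that buys anything over hashing the raw text.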

kuruczgy

Yes, you are right: what I am proposing is not a solution by itself, it's just a way to be reasonably confident that _if you audit the code_, that's going to be the actual logic running on your computer.

(I don't get the value of your AST checksumming idea over just checksumming the source text, which is what almost all distro packages do. I think the number of changes that alter the source text but not the AST is negligible. If the code (and AST) is changed, you have to audit the diff no matter what.)
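For comparison, the source-text checksum that distro packages record is just a byte-level hash, so any change at all shows up, including formatting-only edits that would leave an AST untouched (a minimal sketch; the inputs are illustrative):

```python
import hashlib

def source_checksum(data: bytes) -> str:
    """Byte-exact hash, as recorded in distro package definitions."""
    return hashlib.sha256(data).hexdigest()

# Whitespace-only edits change the text checksum, even though
# they would leave an AST fingerprint unchanged:
assert source_checksum(b"x = 1\n") != source_checksum(b"x  =  1\n")
```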

The more interesting question that does not have a single good answer is how to do the auditing. In almost all cases right now the only metric you have is "how much you trust upstream"; in very few cases is actually reading through all the code and changes viable. I like to look at how upstream does their auditing of changes, e.g. how they do code review and how clean their VCS history is (so that _if_ you discover something fishy in the code, there is a clean audit trail of where that piece of code came from).

XorNot

The xz attack did hit Nix though. The problem is no one is inspecting the source code, which is still true with Nix, because everyone writes auto-bump scripts for their projects.

If anyone was serious about this issue, we'd see way more focus on code signing and trust systems which are transferable: i.e. GitHub has no provision to let anyone sign specific lines of a diff or a file to say "I am staking my reputation that I inspected this with my own eyeballs".
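A trust system like that could amount to signing a hash of the exact lines reviewed. A minimal sketch, assuming an HMAC key stands in for a real reviewer signing key and the attestation record shape is made up:

```python
import hashlib
import hmac
import json

def attest_lines(reviewer: str, key: bytes, path: str,
                 start: int, lines: list[str]) -> dict:
    """Sign a claim: 'I eyeballed exactly these lines of this file.'"""
    digest = hashlib.sha256("\n".join(lines).encode()).hexdigest()
    claim = {"reviewer": reviewer, "file": path,
             "start_line": start, "sha256": digest}
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return claim

def verify(claim: dict, key: bytes, lines: list[str]) -> bool:
    """Check both the signature and that the lines are unchanged."""
    body = {k: v for k, v in claim.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        claim["signature"],
        hmac.new(key, payload, hashlib.sha256).hexdigest())
    ok_lines = body["sha256"] == hashlib.sha256(
        "\n".join(lines).encode()).hexdigest()
    return ok_sig and ok_lines
```

A real system would use asymmetric signatures so anyone can verify a reviewer's attestation without holding their key; HMAC here just keeps the sketch within the standard library.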

yencabulator

I think a big part of the push is just being able to easily & conclusively answer "are we vulnerable or not" when a new attack is discovered. Exhaustive inventory already is huge.

tough

i read somewhere go has a great tool for this that statically checks usage of the specific vulnerable functions, not whole package deps
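This sounds like Go's govulncheck, which only reports a vulnerability if the vulnerable symbol is actually reachable from your code, rather than flagging every dependency on the affected module. The idea can be sketched in Python's `ast` module (the library and symbol names are made up):

```python
import ast

def calls_symbol(source: str, module: str, func: str) -> bool:
    """Statically check whether source calls module.func(...)."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and node.func.attr == func
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id == module):
            return True
    return False

# Depending on a package is not the same as calling its vulnerable API:
src = "import badlib\nbadlib.safe_helper()\n"
assert not calls_symbol(src, "badlib", "vulnerable_parse")
```

The real tool works on Go call graphs, so it also catches indirect reachability, which a per-file syntactic scan like this cannot.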

XiZhao

I run a sw supply chain company (fossa.com) -- agree that there are a lot of low-hanging gains like inventory still around. There is a shocking amount of very basic but invisible surface area that leads to downstream attack vectors.

From a company's PoV -- I think you'd have to just assume all 3rd party code is popped and install some kind of control step given that assumption. I like the idea of reviewing all 3rd party code as if it's your own, which is now possible with some scalable code review tools.

nyrikki

Those projects seem to devolve into boil-the-ocean style efforts and tend to be viewed as intractable and thus ignorable.

In the days when everything was HTTP, I used to set a proxy variable and have the proxy save all downloaded assets to compare later; today I would probably blacklist the public CAs and do an intercept, just for the data on what is grabbing what.

FedRAMP was defunded and is moving forward with a GOA-style agile model. If you have the resources, I would highly encourage you to participate in the conversations.

The timelines are tight and they are trying to move fast, so look into their GitHub discussions and see if you can move it forward.

There is a chance to make real changes but they need feedback now.

https://github.com/FedRAMP

sollewitt

Valuably, you also get demonstrable _insecure_ status: half the pain for our org with log4j was figuring out where it was in the stacks, and at which versions. This kind of accounting is really valuable when you're trying to figure out if and where you are affected.

niam

> it offers integrity and reproducibility like no other tool (btw. guix also exists)

This rubs me the wrong way. They acknowledge that alternative tools exist, but willfully use the wrong-er statement in pursuit of a vacuous marketing idiom.

XorNot

This still doesn't fix the "trusting trust" attack, which Guix actually can, and Guix can bootstrap sideways to build other distros.

It also doesn't do anything which regular packaging systems don't (Nix does have some interesting qualities, security ain't one of them): i.e. that big list of dependencies isn't automatic in any way, someone had to write it, which in turn makes it exactly the same as any other packaging system's build-deps.

gitroom

Hard agree on the pain of tracking all this - been there. Respect for the grind to actually lock this stuff down.

tucnak

The laborious lengths to which people will go simply to not use Guix.

Zambyte

I also use Guix. Quickly skimming the article, I don't see anything that jumps out that Guix does all that differently. What are you suggesting?