
XZ Utils Backdoor Still Lurking in Docker Images

DiabloD3

During my former stint as the company hatrack, I made the call to ban Docker from anywhere inside the company.

_Docker_ is a security hazard, and anything it touches is toxic.

Every single package, every single dependency, that has an actively exploited security flaw is being exploited in the Docker images you're using, unless you built them yourself, with brand new binaries. Do not trust anyone except official distro packages (unless you're on Ubuntu, then don't trust them either).

And if you're going to do that... just go to _actual_ orchestration. And if you're not going to do that, because orchestration is too big for your use case, then just roll normal actual long lived VMs the way we've done it for the past 15 years.
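If you do go the build-it-yourself route, the rough shape is an official distro base plus a full package upgrade at build time; a minimal sketch, where the tag is just a placeholder:

docker build -t mybase:rebuilt - <<'EOF'
FROM debian:stable-slim
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*
EOF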

cortesoft

> Every single package, every single dependency, that has an actively exploited security flaw is being exploited in the Docker images you're using, unless you built them yourself, with brand new binaries.

I don't quite understand what you mean by this part.

kccqzy

Seems a bit drastic? You can ban images built by others, but self-built images are mostly fine. Auditing and making rules about where dependencies come from is necessary, but banning the tool itself is overkill.

And I'm not sure there's any dichotomy between long lived VMs and Docker. For small scale use cases, just provision long lived VMs and then pull new images every time your developers decide to release. The images can then be run as systemd units.
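Roughly, with a unit name, registry, and port that are all placeholders, the "run it under systemd" part can be as small as this sketch:

cat >/etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
# pull whatever the developers last released, then (re)start it
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStartPre=/usr/bin/docker pull registry.example.com/myapp:latest
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:8080 registry.example.com/myapp:latest
ExecStop=/usr/bin/docker stop myapp
Restart=always

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now myapp

A restart of the unit is then the whole "deploy" step.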

timbotron

I can understand criticism of Docker specifically from a "requires root and a daemon" perspective (rootless, daemonless container runtimes exist), but this is such an odd take; using outdated software is completely unrelated to whether or not you use containers. Why would long-lived VMs be better if they're also using old versions of software?
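For example, assuming podman is installed, the same kind of image runs rootless and daemonless as an ordinary user:

podman run --rm -it docker.io/library/debian:stable-slim bash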

RVuRnvbM2e

This is hyperbolic.

Unpatched long-lived VMs are much more cumbersome to fix than an outdated Docker image. And good luck reproducing your long-lived VM with years of mutation-via-patch.

jiggawatts

> Then just roll normal actual long lived VMs the way we've done it for the past 15 years.

This is easy to say if your wallet exploded because it had too much money in it, and if you don't care about the speed of operations.

Just today I'm investigating hosting options for migrating a legacy web farm with about 200 distinct apps to the cloud.

If I put everything into one VM image, then patching/upgrades and any system-wide setting changes become terrifying. The VM image build itself takes hours because this is 40 GB of apps, dependencies, frameworks, etc... There is just no way to "iterate fast" on a build script like that. Packer doesn't help.

Not to mention that giant VM images are incompatible with per-app DevOps deployment automation. How does developer 'A' roll back their app in a hurry while developer 'B' is busy rolling theirs out?

Okay, sure, let's split this into an image-per-app and a VM scale set per app. No more conflicts, each developer gets their own pipeline and image!

But now the minimum cost of an app is 4x VMs because you need 2x in prod, 1x each in test and development (or whatever). I have 200 apps, so... 800 VMs. With some apps needing a bit of scale-out, let's round this up to 1,000 VMs. In public clouds you can't really go below $50/VM/mo so that's an eye-watering $50,000 per month to replace half a dozen VMs that were "too cheap to meter" on VMware!

Wouldn't it be nicer if we could "somehow" run nested VMs with a shared base VM disk image that was thin-cloned so that only the app-specific differences need to be kept? Better yet, script the builds somehow to utilise VM snapshots so that developers can iterate fast on app-specific build steps without having to wait 30 minutes for the base platform build steps each time they change something.

Uh-oh, we've reinvented Docker!
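For what it's worth, that "shared base plus thin clone" idea is exactly what image layers already give you; a rough sketch, with the image names and paths as placeholders:

docker build -t app-a:dev -f- . <<'EOF'
FROM debian:stable-slim
# platform layer: built once, cached, and shared across every app image
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*
# only this thin app-specific layer changes between iterations
COPY . /srv/app-a
EOF

Only the final COPY layer is rebuilt when developer 'A' changes something; the base and platform layers stay cached and shared.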

charcircuit

When deploying to a VM you don't need to build a new image. If set up right, you can just copy the updated files over and then trigger a reload or restart of the service. Different teams' services live in different directories and don't conflict.
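In the simplest case that flow is just a copy and a restart; something like this, with the host, path, and unit name all placeholders:

rsync -az --delete ./build/ deploy@app-host:/srv/team-a/myservice/
ssh deploy@app-host 'sudo systemctl restart myservice'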

twunde

This is much more viable than it was in the past with the advent and adoption of nvm, pyenv, etc., but the limiting factor becomes system dependencies. The typical example from yesteryear was upgrading OpenSSL, but inevitably you'll find that some dependency silently auto-updates a system dependency, or requires a newer version that forces an OS upgrade.

dima55

Important nitpick: this wasn't reported to the "Debian maintainers". In DEBIAN this was fixed long ago. The problem persists in, and was reported to, people who work with Docker images, which primarily means people who don't want to use Debian the normal way and don't benefit from many of the Debian niceties.

jchw

The summary of what they did on the page is largely accurate. I mean, the repository on GitHub that cooks the official Docker Debian images is indeed primarily maintained by a Debian maintainer who is a member of many different Debian teams, even if it is not an official artifact of Debian. And the problem is fixed in Docker, too, but it sounds like the issue is that they'd like the old Docker images with the backdoored packages to be removed.

And sure, you definitely lose some niceties of Debian when you run it under Docker. You also lose some niceties of Docker when you don't.

RVuRnvbM2e

I don't understand the point of this article. Container images are literally immutable packaged filesystems, so old versions of affected packages are sitting in old Docker images for every CVE ever patched in Debian.

How is this news?
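Checking what an image you still reference actually ships is a one-liner anyway; substitute your own tag or digest (upstream 5.6.0 and 5.6.1 were the backdoored xz releases):

docker run --rm debian:stable-slim dpkg-query -W liblzma5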

BobbyTables2

Vulnerabilities are one thing. Many container images in development/testing are never actually exposed to anything hostile…

Active backdoors are quite another…

jchw

I'm not saying this isn't an issue, but I do wonder how many of these containers that contain the backdoor can feasibly trigger it. Wouldn't you need to run OpenSSH in the container? It's not unheard of, but it's atypical.
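A crude way to find out is to ask the image directly, assuming it has a POSIX shell (the image name here is a placeholder):

docker run --rm --entrypoint sh some-image:tag -c 'command -v sshd || echo "no sshd in this image"'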

creatonez

This headline is so egregiously sensationalist.

The XZ backdoor never made it to Debian stable. It is "still lurking in docker images" because Debian publishes unstable testing images, under a tag that is segregated from the stable release tags. You can find vulnerable containers for literally any vulnerability you can imagine by searching for the exact snapshot where things went wrong.

And then downstream projects, if they choose to, can grab those images and create derivatives.

Basing your images on an experimental testing version of Debian and then never updating them is an obvious mistake. Whether XZ is backdoored is almost irrelevant at that point; the image is already rotting.

> Upon discovering this issue, Binarly immediately notified the Debian maintainers and requested removal, but the affected images remain in place.

It is generally considered inappropriate to remove artifacts from an immutable repository just because they contain a vulnerability. This wasn't even done for vulnerable Log4j versions in Maven repositories, despite Log4Shell being one of the most potent vulnerabilities in history. It would just break reproducible builds and make it harder to piece together evidence related to the exploit.

Analemma_

I have a feeling a lot of users just reflexively upvote any story about security vulnerabilities without checking if the contents have any meat at all. It's a well-intentioned heuristic, but unfortunately it's easily exploited in practice, because there are a whole bunch of C- and D-list security consultancy firms who use blogspam about exaggerated threats to get cheap publicity.

This post is a classic example and should've been buried quickly as such. You wouldn't upvote a LinkedIn "look at what MyCorp has been up to!" post from a sales associate at MyCorp; a lot of this infosec stuff is no different.

torgoguys

I'm the one who submitted this link. (I have zero affiliation with the authors.) What you say is fair enough, but I thought the article was an interesting data point nonetheless. In particular, I found it interesting that a vulnerability 1) that was only published for a tiny window, 2) of very high potential severity, and 3) with SO MUCH publicity surrounding it could still be lingering where you might accidentally grab it. The threat isn't giant here, but I saw it as just today's reminder to keep shields up.

lmm

> The XZ backdoor never made it to Debian stable. It is "still lurking in docker images" because Debian publishes unstable testing images, under a tag that is segregated from the stable release tags. You can find vulnerable containers for literally any vulnerability you can imagine by searching for the exact snapshot where things went wrong.

To a first approximation, nothing ever makes it into Debian stable. Anyone working in an actively developed ecosystem uses the thing they pretend is an "experimental testing version". It's a marketing strategy similar to how everything from Google used to be marked as "beta".

djkoolaide

Given my understanding of Debian, I don't believe this can be attributed to a "marketing strategy."

lmm

It probably evolved as such rather than being deliberately planned, but the end result is the same.

burnt-resistor

Not Debian images in particular, but zillions of derived images lacking updates. This is one of the many problems with running other people's "community provided", un-curated, pre-baked "golden master" images, now old garbage, rather than properly using patched and maintained systems. Apparent convenience, with a failure to audit.

sugarpimpdorsey

Most Docker images have zero security anyway. Who cares if someone has a key to the back door when the front door and garage are unlocked (and running as root of course)?
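Locking the front door is at least cheap, though; for instance, dropping root and capabilities at run time (the uid and image name are placeholders):

docker run --rm --user 10001:10001 --cap-drop ALL --read-only myimage:tag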

LeoPanthera

Devs should consider migrating from xz to lzip, an LZMA container format that improves on xz in multiple ways:

https://www.nongnu.org/lzip/xz_inadequate.html

lifthrasiir

Not only is it irrelevant in the context of Docker images, but lzip is also not that superior to xz; the linked post only covers minor concerns, and both the lzip and xz containers are substantially simpler than the actual meat: the LZMA bitstream format.

Analemma_

That might be true, but it's not really relevant to this post: stale Docker images with vulnerabilities lingering on Docker Hub can happen to any software package.