Why did containers happen?

alphazard

Containers (meaning Docker) happened because cgroups and namespaces were arcane and required lots of specialized knowledge to create what most of us can intuitively understand as a "sandbox".
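
For a sense of how arcane: a minimal sketch of roughly what a container runtime wires up before starting your process, assuming Linux with cgroup v2 and root (illustrative commands and paths, not Docker's actual implementation):

    mkdir /sys/fs/cgroup/demo                         # cgroup v2: create a group
    echo 100M > /sys/fs/cgroup/demo/memory.max        # cap its memory
    echo $$ > /sys/fs/cgroup/demo/cgroup.procs        # move this shell into it
    unshare --pid --uts --mount --net --fork /bin/sh  # then enter fresh namespaces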

cgroups and namespaces were added to Linux in an attempt to add security to a design (UNIX) which has a fundamentally poor approach to security (shared global namespace, users, etc.).

It's really not going all that well, and I hope something like SEL4 can replace Linux for cloud server workloads eventually. Most applications use almost none of the Linux kernel's features. We could have very secure, high performance web servers, which get capabilities to the network stack as initial arguments, and don't have access to anything more.

Drivers for virtual devices are simple; we don't need Linux's vast driver support for cloud VMs. We essentially need a virtual ethernet device driver for SEL4, a network stack that runs on SEL4, and a simple init process that loads the network stack with capabilities for the network device, and loads the application with a capability to the network stack. Make building an image for that as easy as compiling a binary, and you could eliminate maybe tens of millions of lines of complexity from the deployment of most server applications. No Linux, no Docker.

Because SEL4 is actually well designed, you can run a sub kernel as a process on SEL4 relatively easily. Tada, now you can get rid of K8s too.

tliltocatl

Containers and namespaces are not about security. They are about not having singleton objects at the OS level. It would have been called virtualization if that word weren't so overloaded already. There is a big difference here that somehow everyone misses: a bypassable security mechanism is worse than useless, but a bypassable virtualization mechanism is still useful. It is useful to have a separate root filesystem just for this program, even if a malicious program can still detect that it's not the true root.

As for SEL4 - it is so elegant because it leaves all the difficult problems to the upper layer (coincidentally making them much more difficult).

alphazard

> As for SEL4 - it is so elegant because it leaves all the difficult problems to the upper layer (coincidentally making them much more difficult).

I completely buy this as an explanation for why SEL4 for user environments hasn't (and probably will never) take off. But there's just not that much to do to connect a server application to the network, where it can access all of its resources. I think a better explanation for the lack of server side adoption is poor marketing, lack of good documentation, and no company selling support for it as a best practice.

frumplestlatz

The lack of adoption is because it’s not a complete operating system.

Using sel4 on a server requires complex software development to produce an operating environment in which you can actually do anything.

I’m not speaking ill of sel4; I’m a huge fan, and things like its take-grant capability model are extremely interesting and valuable contributions.

It’s just not a usable standalone operating system. It’s a toolkit for purpose-built appliances, or something that you could, with an enormous amount of effort, build a complete operating system on top of.

bombcar

Is that why containers started? I seem to recall them taking off because of dependency hell, back in the weird time when easy virtualization wasn't readily available to everyone.

Trying to get the versions of software you needed to use all running on the same server was an exercise in fiddling.

mbreese

I think there were multiple reasons why containers started to gain traction. If you ask 3 people why they started using containers, you're likely to get 4 answers.

For me, it was avoiding dependencies and making it easier to deploy programs (not services) to different servers w/o needing to install dependencies.

I seem to remember a meetup in SF around 2013 where Docker (was it still dotCloud back then?) described easier deployment of services as a primary use case.

I'm sure for someone else, it was deployment/coordination of related services.

dabockster

The big selling points for me were what you said about simplifying deployments, but also the fact that a container uses significantly less resource overhead than a full-blown virtual machine. Containers really only work if your code works in user space and doesn't need anything super low level (e.g. the TCP network stack), but as long as you stay in user space it's amazing.

alphazard

Yes, totally agree that's a contributor too. I should expand: by namespaces I mean user, network, and mount-table namespaces. The initial contents of those are something you have to provide when creating the sandbox. Most of that is small enough to be shipped around in a JSON file, but the initial contents of a mount table require filesystem images to be useful.

ctkhn

On a personal level, that's why I started using them for self-hosting. At work, I think the simplicity of scaling from a pool of resources is a huge improvement over having to provision a new device. I'm currently on an on-prem team, and even moving to Kubernetes without going to the cloud would solve some of the more painful operational problems that page us or send us into meetings with our prod support team.

chasd00

IIRC full virtualization was expensive (VMware) and paravirtualization was pretty heavyweight and slow (Xen). I think Docker was like a user-friendlier cgroups and everyone loved it. I can't remember the name, but there was a "web hosting company in a box" software package that relied heavily on LXC and probably was some inspiration for containerization too.

edit: came back in to add the reference to LXC; it's been probably two decades since I've thought about that.

tptacek

This makes sense if you look at containers as simply a means to an end of setting up a sandbox, but not really much sense at all if you think of containers as a way to make it easy to get an arbitrary application up and running on an arbitrary server without altering host system dependencies.

ianburrell

I suspect that containers would have taken off even without isolation. I think the important innovation of Docker was the image. It let people deploy a consistent version of their software, or download outside software.

All of the hassle of installing things was captured in the Dockerfile, and since it ran in a container, the result was more reliable.

tptacek

I agree: I think the container image is what matters. As it turns out, getting more (or less) isolation given that image format is not a very hard problem.

zellyn

Agreed. There was a point where I thought AMIs would become the unit of open-source deployment packaging, and I think Docker filled that niche in a cloud-agnostic way.

zellyn

ps I still miss the alternate universe where Kenton won the open source deployment battle :-)

orbifold

It would be great if we got "kernel independent" Nvidia drivers. I have some experience with bare-metal development and it really seems like most of what an operating system provides could be provided in a much better way as a set of libraries that make specific pieces of hardware work, plus a very good "build" system.

ants_everywhere

> Because SEL4 is actually well designed, you can run a sub kernel as a process on SEL4 relatively easily. Tada, now you can get rid of K8s too.

k8s is about managing clusters of machines as if they were a single resource. Hence "Borg", the name of its predecessor.

AFAIK, this isn't a use case handled by SEL4?

alphazard

The K8s master is just a scheduling application. It can run anywhere, and doesn't depend on much (just etcd). The kubelet (which runs on each node) is what manages the local resources. It has a plugin architecture, and when you include one of each necessary plugin, it gets very complicated. There are plugins for networking, containerization, storage.

If you are already running SEL4 and you want to spawn an application that is totally isolated, or even an entire sub-kernel, it's no different from spawning a process on UNIX. There is no need for the containerization plugins on SEL4. Additionally, the isolation for the storage and networking plugins would be much better on SEL4, and wouldn't even really require additional specialized code. A reasonable init system would be all you need to wire up isolated components that provide storage and networking.

Kubernetes is seen as this complicated and impressive piece of software, but it's only impressive given the complexity of the APIs it is built on. Providing K8s functionality on top of SEL4 would be trivial in comparison.

ants_everywhere

I understand what you're saying, and I'm a fan of SEL4. But isolation isn't one of the primary points of k8s.

Containerization is, after all, as you mentioned, a plugin. As is network behavior. These are things that k8s doesn't have a strong opinion on beyond compliance with the required interface. You can switch container plugins and barely notice the difference. The job of k8s is to have control loops that manage fleets of resources.

That's why containers are called "containers". They're for shipping services around like containers on boats. Isolation, especially security isolation, isn't (or at least wasn't originally) the main idea.

You manage a fleet of machines and a fleet of apps. k8s is what orchestrates that. SEL4 is a microkernel -- it runs on a single machine. From the point of view of k8s, a single machine is disposable. From the point of view of SEL4, the machine is its whole world.

So while I see your point that SEL4 could be used on k8s nodes, it performs a very different function than k8s.

Eikon

    make tinyconfig 
can get you pretty lean already.

https://archive.kernel.org/oldwiki/tiny.wiki.kernel.org/

chatmasta

My headcanon is that Docker exists because Python packaging and dependency management were so bad that dotCloud had no choice but to invent some porcelain on top of Linux containers, just to provide a pleasant experience for deploying Python apps.

ecnahc515

Sure, they were definitely using Docker for their own applications, but dotCloud was itself a PaaS, so they were trying to compete with Heroku and similar offerings, which had buildpacks.

The problem is/was that buildpacks aren't as flexible and only work if the buildpack exists for your language/runtime/stack.

frumplestlatz

Pretty much this; systems with coherent isolated dependency management, like Java, never required OS-level container solutions.

They did have what you could call userspace container management via application servers, though.

drowsspa

NodeJS, Ruby, etc. also have this problem, as does Go with cgo. So the problem is binary dependencies on C/C++ code, with make, configure, autotools, etc. The whole C/C++ compilation story is such a mess that, almost five decades in, inventing containers was pretty much the only sane way of tackling it.

Java at least uses binary dependencies very rarely, and they usually have the decency to bundle the compiled dependencies... But it seems Java and Go just saw the writing on the wall and mostly reimplement everything. I did have problems with the Snappy compression in the Kafka libraries, for instance.

skydhash

The issue is cross-platform package management without proper hooks for the platforms themselves. That may be OK if the library is pure, but as soon as you have bindings to another ecosystem (C/C++ in most cases), it should be user-configurable instead of the provider doing the configuration with post-install scripts and other hacky stuff.

If you look at most projects in the C world, they only provide the list of dependencies and some build config (Makefile/Meson/CMake/...). But the latter is more of a sample, and if your platform is uncommon or differs from the developer's, you have the option to modify it (which is what most distros and ports systems do).

But good luck doing that with the sprawling dependency tree of modern package managers, where there are multiple copies of the same library inside the same project, just because.

IshKebab

Exactly this, but not just Python. The traditional way most Linux apps work is that they are splayed over your filesystem with hard-coded references to absolute paths, and they expect you to provide all of their dependencies for them.

Basically the Linux world was actively designed to make apps difficult to distribute.

dabockster

> Basically the Linux world was actively designed to make apps difficult to distribute.

It has "too many experts", meaning that everyone has too much decision making power to force their own tiny variations into existing tools. So you end up needing 5+ different Python versions spread all over the file system just to run basic programs.

tacker2000

The author suggests that Docker doesn't help development and that devs just spin up databases, but I have to disagree with that, and I'm pretty sure I'm not the only one.

All my projects (primarily web apps) use docker compose, which configures multiple containers (php/python/node runtime, nginx server, database, scheduler, etc.) and runs as a dev environment on my machine. The source code is mounted as a volume. The same compose file is then also used for deployment to the production server (with minor changes that remove debug settings, for example).
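
Concretely, the flow can look like this (file names are illustrative; the override mechanism is standard docker compose behavior, where later -f files override earlier ones):

    docker compose up -d                                        # dev: all services, source mounted
    docker compose -f compose.yaml -f compose.prod.yaml up -d   # prod: same base file plus overrides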

This approach has worked well for me as a solo dev creating web apps for my clients.

It has also enabled extreme flexibility in the stacks that I use; I can switch dev environments easily and quickly.

figassis

> I was always surprised someone didn't invent a tool for ftping to your container and updating the PHP

We thought of it, and were thankful it was not obvious to our bosses, because lord forbid they make it the standard process and we'd be right back where we started: long-lived images, filesystem changes, hacks, and managing containers like pets.

aPoCoMiLogin

It happened because the dependency story (system & application) was terrible. Running the same app on a different distribution/kernel/compiler/etc. was hard. There were solutions like Vagrant, but they were heavy and the DX wasn't there.

spullara

Because dependencies on Unix are terrible for some languages that assume things are installed globally.

BandButcher

"The compute we are wasting is at least 10x cheaper, but we have automation to waste it at scale now."

So much this. Keep it simple, stupid (muah).

hedgehog

Containers happened because running an ad network and search engine means serving a lot of traffic for as little cost as possible, and part of keeping the cost down is bin packing workloads onto homogeneous hardware as efficiently as possible.

https://en.wikipedia.org/wiki/Cgroups

(arguably FreeBSD jails and various mainframe operating systems preceded Linux containers but not by that name)

cbdumas

What does the 'ad network and search engine' have to do with it? Wouldn't any organization who serves lots of traffic have the same cost cutting goals you mentioned?

wmf

It's an oblique way to say that Linux cgroups and namespaces were developed by Google.

hedgehog

Yes, to expand: Both search and ads mean serving immense amounts of traffic and users while earning tiny amounts of revenue per unit of each. The dominant mid-90s model of buying racks of Sun and NetApp gear, writing big checks to Oracle, etc, would have been too expensive for Google. Instead they made a big investment in Linux running on large quantities of commodity x86 PC hardware, and building software on top of that to get the most out of it. That means things like combining workloads with different profiles onto the same servers, and cgroups kind of falls out of that.

Other companies like Yahoo, WhatsApp, and Netflix also leaned on a strong understanding of how to be efficient on cheap hardware. Notably, all three were FreeBSD users, at least in their early days.

guigar

I love this sentence about DevOps: "Somehow it seems easier for people to relate to technology than culture, and the technology started working against the culture."

jcelerier

For me the main reason to use containers is "one-line install any Linux distro userspace". So much simpler than installing a dozen VirtualBox boxes to test $APP on various versions of Ubuntu, Debian, NixOS, Arch, Fedora, SUSE, CentOS, etc.
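
Something like this, once per distro (image tags and the test script are placeholders):

    docker run --rm -it -v "$PWD":/src -w /src ubuntu:24.04 ./run-tests.sh
    docker run --rm -it -v "$PWD":/src -w /src fedora:41 ./run-tests.sh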

kccqzy

Yeah nowadays we have the distrobox(1) command. Super useful. But certainly that's not why containers happened.
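
For anyone who hasn't tried it, usage is roughly (image tag illustrative):

    distrobox create --image ubuntu:24.04 --name ubuntu-box
    distrobox enter ubuntu-box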

blu3h4t

You can laugh or not, but it's because they never finished GNU/Hurd :D

rshnotsecure

Fascinating documentary on Kubernetes for those who have 50 minutes. Gives more background to the "Container Wars". The filmmakers also have documentaries on the history of Python, Argo, etc.

Some highlights:

- How far behind Kubernetes was at the time of launch. Docker Swarm was significantly simpler to use, and the Apache Mesos scheduler could already handle 10,000 nodes (and was being used by Netflix).

- Red Hat's early contributions were key, despite its semi-competing OpenShift project.

- The decision to open source K8s came down to one brief meeting at Google. Many of the senior engineers attended remotely from Seattle, not bothering to fly out because they thought their request to go open source was going to get shut down.

- A brief part at the end where Kelsey Hightower talks about what he thinks might come after Kubernetes. He mentions, and I thought this was very interesting... serverless making a return. It really seemed like serverless would be "the thing" in 2016-2017, but containers were too powerful. Maybe now with Knative, or some future fusing of container orchestration + K8s?

[1] - https://youtu.be/BE77h7dmoQU

btreecat

I feel that's going to be more interesting than this video. The speaker is very unpracticed.