Notes from a talk at LinuxCon/ContainerCon Europe 2016.
Scott McCarty is Senior Strategist for Containers at Red Hat. He describes himself as a curmudgeon and old-school naysayer when it comes to hyped technologies. The argument for Docker has always been that the big value comes from standardization. But that is not really new: packaging systems like RPM, DEB and others already did that, and it did not solve everything. He likes to illustrate the contrast between traditional application delivery and containers with a shipping analogy:
- Traditionally, applications are loaded into containers and onto ships at the dock.
- With containers, they are loaded already at the factory.
In the “RPM world”, package archives plus configuration management are the way to go. Of course there is virtualization and a lot of tooling around it, but DevOps still has to manage the “last mile”. There are a lot of things left to do, and applying them can take a long time. With containers this work is done ahead of time, so the burden moves further up the chain, logistically but also chronologically.
In the current hype, there is a common adoption pattern or journey around container tooling:
- Evaluate some piece of technology
- Experiment with it
- Realize quick wins
- Take inventory of existing applications
- Decide on the technology
There are a lot of things in flux: OCI, Kubernetes, image format, Pivotal. Everything is exciting and people have epiphany after epiphany – what else can be put into containers? And then suddenly … security.
“Containers don’t contain”
When you move containers around, you run an entire user space on another kernel – wow! But are all kernels the same? An example from his own experience: they used a version of Ubuntu with an SELinux-enabled kernel to build images, but when they deployed those images on a CentOS system without the SELinux code paths, things naturally blew up.
Yum and other package managers need root access. He asked the audience who exposes root privileges to application developers, and got visibly uneasy when one audience member declared they give it to all developers. Scott was willing to compromise: maybe senior developers need access for some troubleshooting, but overall it is not encouraged.
He elaborated on how namespaces work in the Linux kernel, the difference between clone() and fork(), and which data structures are shared or not.
runC was mentioned as the actual subcomponent inside Docker that fires up the container.
Then there are cgroups, SELinux and seccomp. All of this can protect containers very well – much better than standard installations. But there are still risks. For example, for a long time it was possible to escalate from root inside a container to root privileges on the host even though a user namespace was used. This was never fixed because in the normal world it would never be an issue. Other such things might lurk, but this is very specific, and not specific to Docker.
Tame Your Fear
He recounted his experience in 1998, administering a web server at the university while majoring in Computer Science and Anthropology. The server he maintained was shared among different groups; you could see all the other users’ processes. Process isolation was rocket science back then. Nowadays, with virtual machines, you would never share processes with other users. However, there are scenarios where process isolation is enough: High-Performance Computing workloads (Aurora, Condor, etc.) or large web farms.
What counts as “enough” due diligence depends very much on the context and the actual workloads involved. Security is always a trade-off: what is good enough?
To combat container fears, he proposes the tenancy scale, which is also described in one of his blog posts. He encourages us to “stop comparing virtual machines and containers and start thinking about how we can use them together to achieve enough isolation to meet a given workload’s integrity requirements.”
Mitigate risk by:
- Limiting root access
- Using seccomp
- Using SELinux sVirt
- Running read-only containers
- Auditing (use auditd to monitor configuration file changes?)
- Not delivering configuration as part of the image: set environment variables, use a secrets service, or mount configuration into the container
- Epiphany: containers actually provide a lot more control over security. Containers can be locked down beyond what is convenient with traditional process management (in VMs or on bare metal).
- There are amazing business benefits to containers. In 1999, immutable infrastructure was already all the rage: Scott and his colleagues tried to run web servers off of CD-ROMs to get read-only servers. Now this is actually feasible and accessible for everyone.
- Linux containers share the kernel.