So you’re thinking about jumping into container technologies and everywhere you look suggests that you “just use Docker”? Though Docker is quite popular and an excellent container runtime in many ways, it may or may not be the best fit depending on what applications you are attempting to containerize. You’ve probably heard of different container runtimes but don’t really know the differences between them or why you would pick one over the other. Before diving deep into the advantages and disadvantages of each, let’s zoom out a bit and define the two main classes of runtimes.
In general, a container is a semi-isolated execution environment. A container is managed by the OS kernel running on the host system and typically has its own isolated memory, CPU, storage, process table, and networking interfaces, or some subset of these.
This definition implies nothing of what you decide to run within the container. Various container runtimes are opinionated about what is executed within a container, but it’s important to remember that the kernel constructs that make up a container (namespaces and cgroups) do not care about this, only the container runtimes do. There are two ways to use containers: as an application container or a system container.
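You can see these kernel constructs directly on any Linux host, no container runtime required, because namespaces are simply properties of a process. A minimal sketch, assuming a Linux system with procfs mounted:

```shell
# Each symlink here names one namespace this shell occupies (pid, net, mnt, ...).
ls /proc/self/ns

# readlink shows the namespace's inode ID; two processes that print the same
# ID for a given namespace are sharing it. That sharing (or lack of it) is
# all a "container" is from the kernel's point of view.
readlink /proc/self/ns/pid
```

A containerized process would show different IDs than its host for the namespaces its runtime unshared; the kernel does not know or care whether the process inside is an init system or a single web server.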
Application containers are intended to be lean and include only your application and its runtime dependencies. By doing this, the container runtime can scale your application more efficiently by simply running multiple instances. This works well for running any load balanced service, such as microservices.
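To make "lean" concrete, here is a hypothetical Dockerfile sketch for a small Python service (the `app.py` and `requirements.txt` names are illustrative, not from this article):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
# One foreground process per container; the runtime scales the service by
# starting more containers from this image, not more processes inside one.
CMD ["python", "app.py"]
```

Note there is no init system, cron, or SSH daemon in the image, only the application and what it needs to run.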
System containers have fewer constraining opinions; in many ways you can use them just like you would a VM. In my opinion they reduce the complexity of containerizing a legacy application. I would especially use a system container for applications that are not load balanced or cannot inherently distribute work across multiple instances; going to the further effort of splitting such an application across multiple application containers would probably yield little initial benefit without refactoring.
Containerizing legacy applications currently running on separate servers into system containers could allow you to increase application density per server without modifying the application, often with an order of magnitude more density than VMs allow. For this reason alone, this is a good first step to take, even if your eventual goal is to break the legacy application down across multiple application containers.
With either approach, there are anti-patterns that suggest that you are probably using the wrong container type.
One of the more prolific anti-patterns in the application container realm is the use of supervisord (or similar) to run multiple sibling processes within a single application container. This typically indicates that you should either use a system container with a real init system or break up your application further into more application containers.
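To make the anti-pattern concrete, a supervisord configuration like this hypothetical one crams several unrelated sibling services into one application container (the program names and paths are invented for illustration):

```ini
; supervisord.conf -- multiple siblings inside a single application container
[supervisord]
nodaemon=true

[program:web]
command=/usr/local/bin/web-server

[program:worker]
command=/usr/local/bin/queue-worker

[program:cron]
command=/usr/sbin/crond -f
```

Each `[program:x]` section here is a candidate to become either its own application container or a service managed by a real init inside a system container.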
A common anti-pattern for system containers is running a single service, or only short-lived services, within the system container. While system container runtimes can certainly handle these types of services, running a full init system you aren't taking advantage of is likely a waste of time and CPU cycles.
I hope this article has informed and illuminated a few options for you as you dive into container solutions! It was intended as a brief, high-level overview of container types. In later posts we'll explore each of these runtimes (LXD/LXC, systemd-nspawn, Docker, and rkt) by containerizing a meaningful application with each.