In April, Docker announced a $95 million Series D round of funding. This is one of many events over the past year that have demonstrated how the industry is shifting toward Linux containers (LXC) for deploying online services. Even the giants of cloud computing, including Amazon, Google, Microsoft, Red Hat, IBM and VMware, are pushing toward containerization. With the market leaning in the direction of containers, let’s take a deeper look at what they are, their history and current developments.
Container technology is not new. In fact, the first example of isolating workloads was probably FreeBSD’s jail mechanism, introduced around 2000, which built on the much older chroot implementation. Next came Oracle Solaris 10 and IBM AIX, whose containers were unique in that running instances could be moved between systems. However, at that time containers never got a chance to be at the forefront, perhaps because VMware took the lead with its groundbreaking server consolidation technologies. Today, developers are well aware of the opportunities unleashed by running applications in containers.
What Is a Linux Container (LXC)?
Linux containers are a lightweight virtualization mechanism for running isolated workloads. Each container runs its own init process, filesystem and network stack, all provided by the host operating system (OS) running on the hardware. In contrast to VMs, Linux containers share a single kernel, using namespaces to give each container its own isolated view of processes, mounts, network interfaces and users.
In addition, the cgroup (control group) functionality enables resource management, including setting limits and priorities for the CPU, memory, disk I/O and other resources allocated to containerized workloads. Containers consume very few system resources compared to VMs, which require a full OS copy for each isolated instance. Containers therefore help consolidate resources, make containerized workloads portable and ultimately optimize the underlying host’s infrastructure utilization.
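The proportional-share idea behind cgroup CPU weighting can be sketched in a few lines of Python. This is only a toy model (the function and container names are illustrative, and the real kernel scheduler is far more sophisticated): each container receives CPU time in proportion to its share weight, similar to the cgroup `cpu.shares` setting.

```python
# Toy model of cgroup-style proportional CPU sharing.
# NOTE: illustrative only -- the real kernel scheduler is far more involved.

def allocate_cpu(shares, total_cpu=100.0):
    """Split total_cpu (in percent) across containers by share weight,
    the way cgroup cpu.shares weights CPU time between groups."""
    total_shares = sum(shares.values())
    return {name: total_cpu * weight / total_shares
            for name, weight in shares.items()}

# A container weighted at 1024 gets twice the CPU of one weighted at 512:
alloc = allocate_cpu({"db": 1024, "archiver": 512, "updater": 512})
# db -> 50.0, archiver -> 25.0, updater -> 25.0
```

Note that shares differ from hard limits: under contention the 1024:512 weights yield a 2:1 split, but when only one container is busy it is free to use the idle capacity.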
Containers vs. VMs
The Two Types of Linux Containers
- Operating system (OS) containers share the host operating system’s kernel, making them more agile than VMs. Because containers are generally created from “golden” templates and images, the OS environment configuration can be kept identical across all OS containers.
- Application containers package application services together with their dependencies, such as required libraries. Application containers have a base layer that is common to all containers built from the same image, plus an added layer for each build step. When a container runs, all of its layers are combined into a single filesystem view, and the application runs as a single process.
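The way layers combine at run time can be modeled with a small Python sketch. This is a deliberate simplification (real engines use union filesystems such as overlayfs, and the paths and contents here are invented for the example): each layer maps paths to contents, and upper layers shadow lower ones.

```python
# Toy model of combining image layers into a container's filesystem view.
# NOTE: illustrative only -- container engines use union filesystems
# (e.g. overlayfs); the paths and contents below are made up.

def merge_layers(layers):
    """Stack layers bottom-up; files in upper layers shadow lower ones."""
    view = {}
    for layer in layers:    # first element is the bottom (base) layer
        view.update(layer)  # later (upper) layers win on conflicts
    return view

base = {"/usr/lib/libssl.so": "v1.0", "/etc/app.conf": "defaults"}
app = {"/opt/app/run.sh": "#!/bin/sh ...", "/etc/app.conf": "tuned"}

fs = merge_layers([base, app])
# The app layer's /etc/app.conf shadows the base layer's copy.
```

Because the base layer is shared, many containers built from the same image reuse it on disk; only each container’s upper layers are unique.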
As mentioned above, containers differ from VMs because they share a kernel with the OS, which comes with four main benefits:
- Density: You can host more containers than VMs on a single host, because only one OS is running, which is not the case when running multiple VMs on the same machine. As a result, containers achieve higher utilization of the underlying hardware.
- Speed: Along the same lines, starting a new container can take less than a second, whereas starting a VM means waiting an additional few minutes for the full virtual system (including its OS) to boot.
- Low management overhead: Containers tend to carry lower management overhead, since only a single OS needs patching, security updates and bug fixes. Once the host OS is up to date, all containers benefit automatically. With VMs, each instance’s OS must be maintained separately.
- Portability: Encapsulating an application and its configuration simplifies migration. This capability greatly facilitates DevOps, including automatically creating new application instances throughout the development and delivery lifecycle. It also enables multi-cloud deployment, with benefits such as reduced vendor lock-in and cross-cloud disaster recovery configurations.
In the past, it was common to build monolithic, self-contained applications, which modern practice considers bulky and unwieldy. For better flexibility, developers now favor a microservices approach, in which applications can be sculpted to the needs of a business yet easily restructured. Containers dovetail with microservices-based applications because containers are modular by design.
For a more practical example, consider a database that needs to perform maintenance tasks such as archiving or updates. If these processes consume a large portion of the available compute power, access to the database can slow down. However, if the database runs in its own container, separate from the containers running the maintenance programs, you can give it higher priority and a fixed percentage of CPU and memory. This ensures that access to the database is never affected by other concurrently running tasks.
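The fixed-percentage guarantee for the database can be sketched with another toy allocator. The names and numbers are illustrative, and in practice the enforcement is done by cgroup limits (for example, via flags such as `docker run --cpus` and `--memory`): the database gets a reserved slice, and the maintenance containers split only what remains.

```python
# Toy model of reserving a fixed CPU slice for a database container.
# NOTE: illustrative only -- real enforcement is done by cgroup limits.

def allocate_with_reservation(db_pct, maint_shares, total_cpu=100.0):
    """Guarantee db_pct percent to the database; split the remaining CPU
    among the maintenance containers by share weight."""
    remainder = total_cpu - db_pct
    total_shares = sum(maint_shares.values())
    alloc = {name: remainder * w / total_shares
             for name, w in maint_shares.items()}
    alloc["db"] = db_pct
    return alloc

# The database keeps 50% no matter how busy archiving and updates get:
alloc = allocate_with_reservation(50.0, {"archiver": 1, "updater": 1})
```

The design point is the asymmetry: the maintenance containers compete with each other for the leftover capacity, but they can never eat into the database’s reserved slice.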
Still Not a Full-Blown VM
There are limitations to using Linux containers. First, while the shared kernel brings great benefits, it also means you cannot run Windows and Linux containers side by side on the same host. In addition, although containers are isolated from one another, they can still see some data through the host, so there remains a chance that the shared kernel will be exploited, leading to performance degradation or security breaches in other containers running on the same host. This risk can be mitigated with security practices such as limiting root privileges and keeping both the host and the individual containers patched.
Great Market Trend
The market attention that container technology is receiving is growing at an exponential rate. In May this year, DevOps.com conducted a survey in which more than 94% of enterprise respondents reported that they had investigated container technology over the past 12 months. According to the same survey, Docker was the container of choice for the overwhelming majority (over 90%) of respondents. Meanwhile, Docker revealed that it had logged 300 million downloads for its hosted Docker Hub offering, that 15 Fortune 50 companies are now testing its Docker Hub Enterprise offering, and that more than 1,200 open source developers have contributed to the Docker platform.
However, the actual transition to Linux containers will not happen overnight. Along those lines, the DevOps.com survey also reported that only 38% of respondents were using containers in actual production environments. Although they see containers as the future, large organizations still face the challenge of restructuring their legacy applications into modern distributed systems. In such cases, continuing to use VMs, coupled with advanced technologies such as pre/post-copy live migration, may be the only option.
Every new technology has its own path to evolution, and containers are no exception. Taking the next step requires gradual adoption. Smart private cloud solutions today help by allowing containers and VMs to run side by side in the same data center in an effective and efficient manner. The decision to use containers or VMs comes down to a choice of deployment methodology, weighing both options’ capabilities and pitfalls.