The main benefits of containerized applications are increased developer productivity and natural support for microservices. Companies therefore use containers for development and testing, as well as for shipping and deploying applications to production. The popularity of containers has grown significantly in recent years, and as IT organizations run more and more containers, better management and orchestration are required.
In this article we will take a look at Kubernetes, arguably the most popular platform for orchestrating containerized applications across clusters of hosts. We will also guide you through the installation steps. So if you want to know more about Kubernetes and how to install it, start here.
The containerized deployment model took the best features of the traditional and virtualized models. Specifically, containers are extremely lightweight and fast: the performance of containerized applications is very close to that of native applications running on a bare operating system. At the same time, containers are distributable, portable, and scalable, and provide resource isolation just as virtual machines do. The isolation guarantees that processes inside the container cannot see processes or resources outside the container.
The key difference between a container and a virtual machine is that all containers share the same kernel of the host system.
However, sharing the same kernel also has its disadvantages. The main one is that a containerized application must be compatible with the kernel of the host: you cannot run a Windows container on a Linux host. Also, the isolation between the host and the container is not as strong as with a traditional hypervisor, and occasionally a process running in a container may escape into the kernel space of the host.
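A quick illustration (assuming a working Docker installation): compare the kernel version reported on the host with the one reported inside a container. They are identical, because the container has no kernel of its own.

```shell
# Kernel version as reported by the host
uname -r

# Kernel version reported inside an Alpine Linux container --
# the same string, since the container shares the host's kernel
docker run --rm alpine uname -r
```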
Arguably, Docker is the most popular tool to build containers and run them on GNU/Linux, OS X, and Windows operating systems. Docker was released in 2013. On Linux it leverages control groups (cgroups) and namespaces. In addition to Docker, there are other tools to run containerized applications, such as Linux Containers (LXC) (released in 2008) and rkt (released in 2014). Docker can also use LXC as one of its execution engines. There are other operating-system-level virtualization implementations for GNU/Linux and other operating systems, such as chroot (released in 1982), FreeBSD jail (released in 2000), and Solaris Zones (released in 2005).
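To get a feel for the kernel primitives Docker builds on, the unshare utility from util-linux can create namespaces directly. This is only a sketch and requires root privileges:

```shell
# Start a process in new PID and mount namespaces (requires root).
# Inside the namespace, ps sees only its own processes -- the same
# isolation mechanism that container runtimes such as Docker use.
sudo unshare --pid --fork --mount-proc ps aux
```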
Container orchestration defines how containers are coordinated across clusters of multiple nodes when complex containerized applications are deployed. This covers not only the initial deployment but also the ongoing management of multiple containers as one entity for the purposes of scaling, availability, and so on.
There are several container orchestration tools for building a private cluster, such as Docker Swarm, Mesosphere Marathon, and Kubernetes. There are also container orchestration services in the public cloud, such as Amazon EC2 Container Service (ECS), Azure Container Service (ACS), and Google Container Engine (built on Kubernetes). In this article we will focus on Kubernetes only.
Kubernetes is an open source platform to deploy and manage containerized applications across clusters of hosts. Originally designed by Google, Kubernetes was released in 2014 under Apache License 2.0.
A Kubernetes cluster contains one master node and one or more regular nodes.
A node (previously known as a minion) is a physical server or virtual machine that is managed by Kubernetes. Every node runs a container runtime (for example, Docker Engine), kubelet (responsible for starting, stopping, and managing individual containers at the request of the control plane), and kube-proxy (responsible for networking and load balancing).
The master node runs the Kubernetes control plane, which consists of different processes, such as the API server (provides a JSON-over-HTTP API), the scheduler (selects nodes to run containers on), the controller manager (runs controllers, see below), and etcd (a distributed, consistent key-value store holding the cluster configuration).
A Kubernetes cluster can be managed via the Kubernetes Dashboard, a web UI running on the master node. The cluster can also be managed via the command line tool kubectl, which can be installed on any machine able to reach the API server running on the master node. This tool can manage several Kubernetes clusters by specifying a context defined in its configuration file.
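For example, switching between clusters with kubectl could look like this (the context names here are hypothetical; kubectl reads them from its configuration file, typically ~/.kube/config):

```shell
# List the contexts defined in the kubectl configuration file
kubectl config get-contexts

# Make "minikube" the active context for subsequent commands
kubectl config use-context minikube

# Run a one-off command against another cluster without switching
kubectl --context=production get nodes
```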
A Kubernetes pod is the basic unit of scheduling in Kubernetes. It represents one or more containers that should be placed (scheduled) on the same node. Pods can be managed manually or by controllers.
A Kubernetes controller manages a set of pods and makes sure that the cluster is in the specified state. There are different controller types; for example, a replication controller is responsible for running the specified number of pod replicas across the cluster.
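For example, assuming a deployment named hello-minikube already exists (the name is illustrative), scaling it changes the desired state, and the controller starts or stops pods until the actual state matches:

```shell
# Declare that three replicas are desired; the controller
# creates or removes pods until three copies are running
kubectl scale deployment hello-minikube --replicas=3

# Verify: three pods should eventually be listed as Running
kubectl get pods
```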
A Kubernetes service is a set of pods that work together, for example a web server pod and a database pod. By default, a service is not accessible from outside the cluster, but it can be exposed so that external clients can access it.
How-to: Kubernetes Installation via Minikube
Minikube is a tool that runs a single-node Kubernetes cluster in a virtual machine. It can be used on GNU/Linux or OS X and requires VirtualBox, KVM (Linux), xhyve (OS X), or VMware Fusion (OS X) to be installed on your computer. Minikube creates a new virtual machine running GNU/Linux, then installs and configures Docker and Kubernetes inside it.
In the following instructions we will use Minikube to install a single-node Kubernetes cluster on a machine with 64 bit GNU/Linux (Debian or Ubuntu) and KVM. Refer to the Minikube documentation if you want to use an alternative configuration.
First, install the kubectl command line tool locally:
$ curl -Lo kubectl \
    https://storage.googleapis.com/kubernetes-release/release/v1.3.0/bin/linux/amd64/kubectl \
    && chmod +x kubectl \
    && sudo mv kubectl /usr/local/bin/
In the next step we will install the KVM driver:
$ sudo curl -L \
    https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm \
    -o /usr/local/bin/docker-machine-driver-kvm
$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm
Then install Minikube itself:

$ curl -Lo minikube \
    https://storage.googleapis.com/minikube/releases/v0.6.0/minikube-linux-amd64 \
    && chmod +x minikube \
    && sudo mv minikube /usr/local/bin/
Now let's start the Minikube cluster:

$ minikube start --vm-driver=kvm
Starting local Kubernetes cluster...
Kubernetes is available at https://192.168.42.213:8443.
Kubectl is now configured to use the cluster.
The Kubernetes cluster is up and running. Let's do a simple deployment using an existing image:
$ kubectl run hello-minikube \
    --image=gcr.io/google_containers/echoserver:1.4 \
    --port=8080
deployment "hello-minikube" created
$ kubectl expose deployment hello-minikube --type=NodePort
service "hello-minikube" exposed
Check that the pod is up and running:
$ kubectl get pod
NAME                              READY     STATUS    RESTARTS   AGE
hello-minikube-2433534028-ouxw8   1/1       Running   0          4m
The "STATUS" field should contain "Running". If it contains "ContainerCreating", wait a moment and repeat the last command. Now check that our service works:
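Instead of repeating the command manually, you can watch the pod status until it changes (the pod name below is the one from the output above):

```shell
# Stream status changes as they happen (press Ctrl+C to stop)
kubectl get pod --watch

# If the pod stays in a non-Running state, inspect its events
kubectl describe pod hello-minikube-2433534028-ouxw8
```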
$ curl $(minikube service hello-minikube --url)
CLIENT VALUES:
client_address=172.17.0.1
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://192.168.42.213:8080/

SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001

HEADERS RECEIVED:
accept=*/*
host=192.168.42.213:31759
user-agent=curl/7.35.0
BODY:
-no body in request-
Execute the following command to open the Kubernetes Dashboard in your web browser:
$ minikube dashboard
To stop the cluster (shut down the virtual machine and preserve its state), execute the following command:
$ minikube stop
Stopping local Kubernetes cluster...
Stopping "minikubeVM"...
To start the cluster again and restore it to the previous state, execute the following command:
$ minikube start
To delete the cluster (delete the virtual machine and its state), execute the following command:
$ minikube delete
One of the main use cases for containers is the hybrid cloud, which supports the mobility of workloads between private and public clouds. As container technologies take the lead in this evolving market, so grows the need to control and orchestrate how those workloads move. Backed by Google, Kubernetes seems to be taking that lead.