Application Containers: A Practical HowTo Guide

Overview

We have by now all heard plenty about Linux containers, and for good reason. Containers change the way applications are operated and allow us to deploy applications at unprecedented speed. Containers pick up where Virtual Machines left off: at the application layer. In this article we will focus on the journey to a container-driven world and explore the phases along the way.

Container Rules

Before beginning our journey it is important to understand basic container rules:

  • A container should run one and only one application process
  • Containers are immutable; if something needs to change, the container is thrown away and re-created
  • Containers are insulated from one another, but not isolated to the same degree as Virtual Machines
  • Containers share the same Linux kernel

Application Discovery

The first phase involves identifying applications. Not all applications are ideal candidates for containers. Similar to cloud infrastructure such as OpenStack, containers require a certain application design. An application should exhibit the following characteristics:

  • Application functionality should be broken into components
    • Components should be standalone services with no dependencies on other components
    • All services should communicate with one another using external RESTful APIs
  • Application state changes should be handled using message buses or a distributed key/value store
  • Applications must scale horizontally, not vertically
  • Heavy components like databases should run on bare metal or in Virtual Machines that can scale vertically

For the purpose of this article I chose to containerize an application that displays these characteristics. Integra is an integration, automation and orchestration platform. It exposes application capabilities through providers, which are standalone micro-services with a RESTful frontend. The Integra reactor is the brain and allows automation architects to build workflows from the capabilities exposed by providers. There are providers for applications, databases, hypervisors, storage systems and much more. The idea behind Integra is to automate everything with no compromises. Integra sees no difference between backup, provisioning or other common tasks: everything is a workflow that can be automated using a standard toolset.

[Figure: Integra architecture]

Since Integra is composed of many services, it is important to place each service in its own container. This means the reactor and every provider, even the CLI, gets its own container. Each container will need to provide all dependencies required to run its application, including exposing ports. Each service or component of an application should use its own unique port. For Docker alone this doesn't matter as much, but once we get into Kubernetes and pods it becomes very important, since containers within a pod share the same IP address.
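Using the images and ports that appear later in this article, the one-container-per-service split might look like the sketch below. The docker run commands are printed rather than executed so the sketch can be tried without a Docker daemon; the container names are illustrative.

```shell
#!/bin/sh
# One container per Integra service, each on its own unique port.
# Prints the docker run command for each service instead of executing it.
start_service() {
    name=$1; image=$2; port=$3
    # -d detaches; -p maps the unique host port to the container port
    echo "docker run -d --name $name -p $port:$port $image"
}

start_service integra-reactor integra/reactor        8080
start_service integra-aws     integra/aws-provider   9771
start_service integra-azure   integra/azure-provider 9772
```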

Running Application in Container

Running applications in containers is not that much different from running them outside of one. A container runs a single command, so typically we create a small start script to launch the application. Below is the start script run-integra.sh I am using for the Integra reactor:

#!/bin/sh
# Kill any previously running reactor instance
pgrep -f "rest-1.0.2-uber.jar" | awk '{system("kill " $1)}'
# exec so the JVM replaces the shell and receives container signals,
# picking up the JVM options set via ENV in the Dockerfile
exec /usr/bin/java $JAVA_OPTS -jar /integra/rest-1.0.2-uber.jar
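The exec on the last line is deliberate: exec replaces the shell with the Java process instead of forking a child, so inside a container the JVM ends up as PID 1 and receives the SIGTERM that docker stop sends. A minimal demonstration of that behavior, runnable on any system with a POSIX shell:

```shell
#!/bin/sh
# Print the shell's PID, then exec a new shell that prints its own PID.
# Because exec reuses the process, both lines show the same PID.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
first=$(echo "$pids" | head -n1)
second=$(echo "$pids" | tail -n1)
if [ "$first" = "$second" ]; then
    echo "exec kept the same PID"
fi
```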

Before building our application container it is important to test and ensure things are working. Below are the commands I used to test the Integra reactor.

# docker pull debian
# docker run -i -t debian /bin/bash

At this point we are inside the container running debian as the base OS.

root@b4a37b0d8040:/# apt-get update
root@b4a37b0d8040:/# apt-get install -y openjdk-7-jre

We have now installed the application dependencies and can test the Integra reactor inside the container. Next we need to copy the application JAR to the container using the container id (long format).

# docker ps
CONTAINER ID   IMAGE           COMMAND       CREATED         STATUS
b4a37b0d8040   debian:latest   "/bin/bash"   5 minutes ago   Up 5 minutes
# docker inspect -f '{{.Id}}' b4a37b0d8040
b4a37b0d8040a0d624b7edc264425693a7e0e50444f72d09a54209ff6461b377
# cp rest-1.0.2-uber.jar /var/lib/docker/devicemapper/mnt/b4a37b0d8040a0d624b7edc264425693a7e0e50444f72d09a54209ff6461b377/rootfs

Now that we have copied the JAR file from the host OS into the container, we can run the Integra reactor and make sure it works. Newer Docker releases also provide the docker cp command for copying files into a container without touching the storage backend directly.

root@b4a37b0d8040:/# java -jar rest-1.0.2-uber.jar

Finally we are ready to build our Docker application container!

Building Docker Image

Docker provides a standard for packaging containers. While container technology has been around for a long time in both Unix and Linux, the tooling and portability that Docker provides are certainly game-changing. Docker uses a Dockerfile to define the container image. A Docker image is a grouping of layers. In our example we have essentially three layers: the base OS (Debian), the required dependencies (Java) and our application (JAR). Besides providing software layering, a Dockerfile also lets us expose application ports, set run-time environment parameters and execute standard OS commands. Below is the Dockerfile used to build the Integra reactor.

# vi Dockerfile
# Integra Reactor
# VERSION 0.0.1
FROM debian
MAINTAINER Keith Tenzer <maintainer@domain.com>

LABEL Description="This image is used to start the Integra Rest Server" Vendor="Emitrom" Version="1.02"

RUN apt-get update && apt-get install -y openjdk-7-jre

RUN mkdir /integra 
COPY rest/* /integra/ 
COPY run-integra.sh / 
RUN chmod -R 755 /integra 
RUN chmod 755 /run-integra.sh

ENV JAVA_OPTS="-Xms512m -Xmx1152m -XX:MaxPermSize=256m -XX:MaxNewSize=256m"

EXPOSE 8080 8443

CMD ["/run-integra.sh"]
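One related detail: docker build sends the entire directory to the Docker daemon as its build context, so a .dockerignore file next to the Dockerfile keeps builds fast by excluding files the image does not need. The entries below are purely hypothetical examples of what a build directory might contain:

```
# .dockerignore (hypothetical entries)
.git
*.log
target/
```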

Once we are ready we can build our Docker image. The docker build command creates the image and makes it available in our local image store.

# docker build -t integra/reactor:v1.0.2 .
# docker images
REPOSITORY          TAG       IMAGE ID        CREATED         VIRTUAL SIZE
integra/reactor     v1.0.2    233be1b0b05f    16 hours ago    523.4 MB

Sharing Docker Images

Docker provides a public registry called Docker Hub and in addition allows us to run our own private registry in order to share trusted images internally. Once a Docker image is created it can be pushed to a registry using the docker push command. To allow Docker to communicate with an insecure private registry, the docker daemon must be started with the --insecure-registry option as follows:

docker -d --insecure-registry kubernetes.lab.com:5000 &

Tags allow for versioning of Docker images. A special tag called latest is used as the default whenever a tag is not specified. For example, above we issued the command 'docker run -i -t debian /bin/bash'. Since a tag was not specified, docker run used latest. The most current version of a Docker image should also be tagged with 'latest'.

docker tag integra/reactor:v1.0.2 kubernetes.lab.com:5000/integra/reactor:v1.0.2
docker tag integra/reactor:v1.0.2 kubernetes.lab.com:5000/integra/reactor:latest

Pushing Docker Image to Private Registry

docker push kubernetes.lab.com:5000/integra/reactor

Pushing Docker Image to Docker Hub

# docker push integra/reactor
Please login prior to push:
Username: integra
Password: *******
Email: maintainer@domain.com

To see all images for the user integra we can go directly to Docker Hub or run a command.

[Screenshot: Integra repositories on Docker Hub]

# docker search integra | grep "^integra/"

Running Docker Images

We have built our Docker image for the Integra reactor and shared it via Docker Hub or a private registry. At this point anyone can run the application on any system running Docker using two simple commands.

# docker pull integra/reactor
# docker run -i -t -d integra/reactor

This is the power of containers and the portability of Docker. Think of how you would normally deploy such an application without containers. With Docker, your app can run on any system running Docker, anywhere.

Running Docker Images in Kubernetes

Running vanilla Docker is great for development or test environments, but if we want to run this application in production there are a few things missing. First, we don't have any mechanism to orchestrate or handle deploying our application on multiple hosts. Next, the target application contains many services, each being its own container, and connecting them together is quite a bit of work. There is no abstraction around services; a container may be temporary but an application service certainly is not. Finally, we have no management around reliability or horizontal scaling. These are the gaps that Google's Kubernetes fills. For more information on setting up Kubernetes read this article.

Kubernetes creates an abstraction around containers called a pod. A pod contains one or more tightly coupled containers. In this case, if we want a holistic deployment of Integra with all its providers, the reactor and the CLI, we can encapsulate the entire application in a Kubernetes pod. Once we have a pod, independent services and replication policies can be created; Kubernetes handles all this automatically. Below is an example of a multi-container pod configuration.

# vi integra-all.json
{
   "apiVersion": "v1beta1",
   "desiredState": {
      "manifest": {
         "containers": [
            {
               "image": "integra/reactor",
               "name": "integra-reactor",
               "ports": [
                  {
                     "containerPort": 8080,
                     "hostPort": 8080,
                     "protocol": "TCP"
                  }
               ]
            },
            {
               "image": "integra/aws-provider",
               "name": "integra-aws",
               "ports": [
                  {
                     "containerPort": 9771,
                     "hostPort": 9771,
                     "protocol": "TCP"
                  }
               ]
            },
            {
               "image": "integra/azure-provider",
               "name": "integra-azure",
               "ports": [
                  {
                     "containerPort": 9772,
                     "hostPort": 9772,
                     "protocol": "TCP"
                  }
               ]
            }
         ],
         "id": "integra-all",
         "restartPolicy": {
            "always": {}
         },
         "version": "v1beta1",
         "volumes": null
      }
   },
   "id": "integra-all",
   "kind": "Pod",
   "labels": {
      "name": "integra-all"
   },
   "namespace": "default"
}

We can issue the following command in Kubernetes to deploy our pod:

# kubectl create -f integra-all.json
# kubectl get pods
POD            IP             CONTAINER(S)      IMAGE(S)                         HOST                 LABELS             STATUS
integra-all    10.100.77.3    integra-reactor   integra/reactor:latest           atomic01.lab.com/    name=integra-all   Running
                              integra-aws       integra/aws-provider:latest
                              integra-azure     integra/azure-provider:latest

Next we need to expose the Integra reactor running on port 8080 as a service. The nice thing about pods is that all containers within a pod communicate using the same IP address. This makes it much simpler to configure communication between related application services.

# vi integra-all-svc.json

{
   "apiVersion": "v1beta1",
   "containerPort": 8080,
   "id": "integra-reactor-svc",
   "kind": "Service",
   "labels": {
      "name": "integra-reactor-svc"
   },
   "port": 8080,
   "publicIPs": [
      "10.10.1.114","10.10.1.115","10.10.1.116"
   ],
   "selector": {
      "name": "integra-all"
   }
 }

We can create the service and we are done!

# kubectl create -f integra-reactor-svc.json

Finally, in order to test, we can make a simple HTTP request to the Integra reactor.

# curl -u admin:integra http://10.10.1.114:8080/rest
<appInfo>
   <name>Integra</name>
   <version>1.0.2</version>
   <buildTimestamp>20150420-2208</buildTimestamp>
</appInfo>
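For scripted smoke tests, the version can be extracted from that XML reply with standard shell tools. The sketch below works against the sample response shown above so it runs without a live reactor; in practice the response variable would come from the curl command.

```shell
#!/bin/sh
# Pull <version> out of the reactor's XML reply.
# In practice: response=$(curl -u admin:integra http://10.10.1.114:8080/rest)
response='<appInfo><name>Integra</name><version>1.0.2</version></appInfo>'
version=$(printf '%s' "$response" | sed -n 's/.*<version>\(.*\)<\/version>.*/\1/p')
echo "Integra version: $version"
```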

Summary

In this article we explored how to prepare applications to run inside containers using the Docker platform. We saw how to build Docker images and use a Docker registry to share them. Finally, we observed how to run containers under Docker and Kubernetes. It doesn't stop there though; the story gets even better. In a future article I will discuss the need for PaaS and how PaaS can leverage these underlying technologies to provide even more value. In today's world IT is all about speed and innovation: if you don't have speed you cannot innovate, and if you cannot innovate you will perish.
Happy Containerizing!

(c) 2015 Keith Tenzer
