OpenShift Enterprise v3 Lab Configuration: Innovate Faster, Deliver Sooner

Overview

OSE_LOGO

OpenShift Enterprise v3 by Red Hat is about building and running next-generation applications. Looking around, we see startups in virtually every market segment turning the competitive landscape upside down. Companies like Netflix, Spotify and Uber have pushed the incumbents to the brink of extinction and overtaken entire industries in a very short period of time. How have they been able to rival incumbents 100 times their size? The answer is simple: by bringing innovation to market faster, much faster. Complacency and the weight of previous successes make change very challenging for incumbents; it is much easier for a startup to innovate than for an established company carrying a degree of legacy. OpenShift v3 levels the playing field and provides organizations the appropriate tooling to rapidly reduce their time-to-market.

OpenShift v3 allows organizations to deliver innovation faster by:

  • Maximizing time developers actually spend developing
  • Enabling efficient clean hand-offs between Dev & Ops (DevOps)
  • Automating development pipelines and continuous integration / delivery
  • Increasing speed of innovation through more frequent experimentation
  • Providing state-of-the-art enterprise grade container infrastructure

In this article we will look at how to set up an OpenShift lab environment and get started on the journey to faster innovation cycles.

Pre-Configuration Steps

OpenShift requires a master and one or more nodes. In this lab we will configure one master and one node. Install RHEL or CentOS 7.1 on two systems and configure the hostname as well as networking accordingly. On both systems run the following steps:

# subscription-manager repos --disable="*"
# subscription-manager repos --enable="rhel-7-server-rpms"
# subscription-manager repos --enable="rhel-7-server-extras-rpms"
# subscription-manager repos --enable="rhel-7-server-optional-rpms"
# subscription-manager repos --enable="rhel-7-server-ose-3.0-rpms"
# yum install wget git net-tools bind-utils iptables-services bridge-utils
# yum install python-virtualenv
# yum install gcc
# yum install httpd-tools
# yum install docker
# yum update

Once all the packages are installed, it is important to configure Docker so that it allows insecure registry communication on the local network only.

# vi /etc/sysconfig/docker
OPTIONS=--selinux-enabled --insecure-registry 192.168.122.0/24
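The Docker daemon only reads this file at startup, so after changing the options restart the service. A minimal sketch, assuming Docker is managed by systemd as on RHEL/CentOS 7:

```shell
# systemctl enable docker
# systemctl restart docker
```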

Set up SSH access from the master to the node.

# ssh-keygen
# ssh-copy-id -i .ssh/id_rsa.pub root@ose3-node.lab.com
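To confirm passwordless access works before running the installer, execute a command on the node from the master; it should complete without a password prompt:

```shell
# ssh root@ose3-node.lab.com hostname
```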

Install OpenShift Enterprise v3

At this point both the master and node are prepared. We can now begin the install of OpenShift Enterprise v3. From the master run the following command:

# sh <(curl -s https://install.openshift.com/ose)

Note: if internet access is not available you can download the installer and run it locally on the master host.

https://install.openshift.com/portable/oo-install-ose.tgz

Configure OpenShift Enterprise v3

Once the installer completes, an OpenShift master and node will exist. Now we can begin the main configuration. By default OpenShift will use HTPasswd authentication. This is of course only recommended for lab or test environments; for production environments you will want to connect to LDAP or an identity management system. On the master we can edit /etc/openshift/master/master-config.yaml and configure authentication.

# vi /etc/openshift/master/master-config.yaml
identityProviders:
- name: my_htpasswd_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: HTPasswdPasswordIdentityProvider
    file: /root/users.htpasswd
routingConfig:
  subdomain: lab.com

Next we need to create a standard user. OpenShift Enterprise creates the system:admin account for default administration.

# htpasswd -c /root/users.htpasswd admin

Optionally we can give the newly created admin user OpenShift cluster-admin permissions.

# oadm policy add-cluster-role-to-user cluster-admin admin
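We can verify the new account by logging in as admin (using the password set via htpasswd) and checking the identity reported by the server:

```shell
# oc login -u admin
# oc whoami
```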

Configure Docker Registry

OpenShift uses the Docker registry for storing Docker container images. Anytime you build or change an application configuration, a new Docker image is created and pushed to the registry. Each node can access this registry. You can and should use persistent storage for the registry. In this example we will use a host mountpoint on the node. The Docker registry runs as a container in the default namespace, which only OpenShift admins can access.

On the node create a directory for the registry:

# mkdir /images

On the master, log in using the system:admin account, switch to the default project and create a Docker registry.

# oc login
Username: system:admin
# oc project default
# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"registry"}}' | oc create -f -
# oc edit scc privileged
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:registry
# oadm registry --service-account=registry --config=/etc/openshift/master/admin.kubeconfig --credentials=/etc/openshift/master/openshift-registry.kubeconfig --images='registry.access.redhat.com/openshift3/ose-${component}:${version}' --mount-host=/images
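Once oadm registry returns, the registry pod and its service should appear in the default project (docker-registry is the default service name created by oadm registry):

```shell
# oc get pods
# oc get svc docker-registry
```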

Create Router

OpenShift v3 uses Open vSwitch for its software-defined network. A router is needed to provide isolation, proxy and load-balancing capabilities. The router, like the Docker registry, also runs in a container. Using the commands below we can create a router in the default namespace.

# echo '{"kind":"ServiceAccount","apiVersion":"v1","metadata":{"name":"router"}}' | oc create -f -
# oc edit scc privileged
users:
- system:serviceaccount:openshift-infra:build-controller
- system:serviceaccount:default:registry
- system:serviceaccount:default:router
# oadm router router-1 --replicas=1 --credentials='/etc/openshift/master/openshift-router.kubeconfig' --images='registry.access.redhat.com/openshift3/ose-${component}:${version}'
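As with the registry, we can check that the router deployed successfully and note which node it landed on, since that is where the DNS wildcard must point (router-1 here is the name passed to oadm router above):

```shell
# oc get pods
# oc describe svc router-1
```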

Configure DNS

OpenShift v3 requires a working DNS environment in order to handle URL resolution. The requirement is a DNS wildcard that points to the router, i.e. the public IP of the node where the router container is running. In our example we have created a local DNS server that acts as a forwarder for the 192.168.122.0/24 network. In addition we have implemented a DNS wildcard that points to our node's public or physical IP, where the router container is running.

# yum install bind-utils bind
# systemctl enable named
# systemctl start named
# vi /etc/named.conf
options {
    listen-on port 53 { 192.168.122.1; };
    forwarders {
        10.38.5.26;
    };
};
zone "lab.com" IN {
    type master;
    file "/var/named/dynamic/lab.com.zone";
    allow-update { none; };
};
# vi /var/named/dynamic/lab.com.zone
$ORIGIN lab.com.
$TTL 86400
@ IN SOA dns1.lab.com. hostmaster.lab.com. (
 2001062501 ; serial
 21600 ; refresh after 6 hours
 3600 ; retry after 1 hour
 604800 ; expire after 1 week
 86400 ) ; minimum TTL of 1 day
;
;
 IN NS dns1.lab.com.
dns1 IN A 192.168.122.1 
 IN AAAA aaaa:bbbb::1
ose3-master IN A 192.168.122.60
ose3-node1 IN A 192.168.122.61
* 300 IN A 192.168.122.61
;
;
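After editing named.conf and the zone file, restart named and verify that both the static records and the wildcard resolve. The hostname myapp below is an arbitrary example; any name under lab.com should return the node's IP thanks to the wildcard record:

```shell
# systemctl restart named
# dig +short @192.168.122.1 ose3-master.lab.com
# dig +short @192.168.122.1 myapp.lab.com
```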

Install and Configure GitLab

In most cases you will probably want to configure a local Git server. This is of course optional; in this example we are using the public GitHub service, however you could just as easily use an internal Git server. You can set up a GitLab server on the OpenShift v3 master. For demos GitLab is recommended, since it is much easier to install and configure.

# yum install curl openssh-server
# systemctl enable sshd
# systemctl start sshd
# firewall-cmd --permanent --add-service=http
# systemctl reload firewalld
# curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.rpm.sh | bash
# yum install gitlab-ce
# gitlab-ctl reconfigure

Once the above steps are complete you can access GitLab by connecting to the host through a browser. The default credentials are:

Username: root
Password: 5iveL!fe

Using OpenShift v3

At this point we should have a functioning OpenShift v3 environment. We can now build and deploy applications. Here we will see how to deploy a MySQL database with scaling and build a Ruby hello-world application from GitHub.

Deploying MySQL database

Though using the OpenShift CLI or API is certainly possible, let us at this point use the UI. To log in to the UI, open a browser and point it at the IP of the OpenShift v3 master, for example: https://ose3-master.lab.com:8443/console/. Create a new project for hosting containers. In OpenShift v3 each project maps to a namespace in Kubernetes.

OSE_PROJECT_CREATE

Under the demo project deploy a MySQL database by selecting “create” or “getting started”. Make sure you add a label; this is explained later.

OSE_MYSQL_CREATE

Once an application is created we see the status in the Overview.

OSE_MYSQL_CREATED

Each time an application is deployed we have a deployer container and the running container. Once the deployment is complete the deployer container is deleted and we just have the running container. The “oc get pods” command shows us all pods within the namespace. A pod is a Kubernetes construct meaning one or more Docker containers that share a deployment template. Pods run on nodes; grouping containers within pods is a way to ensure certain containers are co-located.

# oc get pods
NAME            READY  REASON     RESTARTS  AGE
mysql-1-deploy  1/1    Running    0         8s
mysql-1-rz165   0/1    Running    0         5s

For every application deployed, OpenShift will also create a replication controller and service. These are also Kubernetes constructs. A replication controller is used for auto-scaling and determines how many instances of a given pod should exist.

# oc get rc
CONTROLLER  CONTAINER(S)  IMAGE(S)                              SELECTOR                REPLICAS
mysql-1     mysql         .../openshift3/mysql-55-rhel7:latest  deployment=mysql-1,...  1

The service creates a stable address for the application and handles dynamic routing to the individual pods. This is handled by the kube-proxy layer and the OpenShift routing layer.

# oc get services
NAME   LABELS                                        SELECTOR    IP(S)          PORT(S)
mysql  demo=mysql,template=mysql-ephemeral-template  name=mysql  172.30.76.121  3306/TCP

When creating applications it is very important to always define labels. Labels are applied to pods, replication controllers and services. When deleting an application, it is much easier to reference the label than to delete the individual components manually.

# oc delete all --selector="demo=mysql"
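Before deleting anything, the same selector can be used with “oc get” to review exactly which resources carry the label:

```shell
# oc get all --selector="demo=mysql"
```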

OpenShift v3 also supports auto-scaling. This capability leverages Kubernetes replication controllers. First we need to identify the replication controller using the “oc get rc” command. We can automatically scale our application by changing the number of replicas. In this example we will scale from one to three MySQL databases.

# oc scale --replicas=3 rc mysql-1
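A few seconds later the additional pods should be visible, and the replication controller will report the desired count of three:

```shell
# oc get pods
# oc get rc mysql-1
```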

Upon scaling MySQL, we can quickly see the results in the UI.

OSE_MYSQL_SCALING

Building Ruby Hello World Application

So far we have seen how to provision application components such as databases or middleware in seconds. We have also observed how we can effortlessly scale these components. In the following example, we will build our own application code in OpenShift v3. OpenShift will provide the Ruby runtime environment and automatically build, as well as launch, a container with our hello-world code from GitHub. OpenShift utilizes a technology called “Source-to-Image” (S2I) that efficiently builds the container. Instead of rebuilding the entire container image each time, S2I is able to reuse previous builds and only change the application layer within the image. Docker images are immutable, so any change always requires building a new image. Without OpenShift and S2I this is a wasteful, time-consuming process.
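The same build flow can also be driven from the CLI instead of the UI. A minimal sketch, assuming the demo project is selected; oc new-app infers the build configuration name (here ruby-hello-world) from the repository:

```shell
# oc new-app https://github.com/ktenzer/ruby-hello-world
# oc start-build ruby-hello-world
# oc get builds
```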

To build our application select “create” from the project page in the OpenShift UI. Enter the URL to the GitHub repository https://github.com/ktenzer/ruby-hello-world and select “next”.

OSE_RUBY_CREATE_1

OpenShift asks us for the application build runtime. In this case we will select Ruby 2.0 since this is in fact a Ruby application.

OSE_RUBY_CREATE_2

In the final step we can provide any custom details about the build configuration and of course add a label.

OSE_RUBY_CREATE_3

OpenShift will create a container with Ruby 2.0 and our code from GitHub. It will also complete any required build steps. The end result is a complete application build: a running application inside a Docker container. Our application can now be automatically tested using Jenkins or other continuous-delivery tools, and if the tests pass it can be automatically rolled out to production. Think about how much faster you can make code available to your customers with OpenShift. By selecting the URL for the Ruby hello-world application we can also access the application directly.
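The application can also be reached from the command line through the router. The exact hostname depends on the route OpenShift generated; with the lab.com wildcard configured earlier it would look something like:

```shell
# curl http://ruby-hello-world-demo.lab.com/
```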

OSE_RUBY_RUNNING

 

OSE_RUBY_APP

Troubleshooting

In this section we will go through some basic troubleshooting steps for OpenShift v3. In order to get logs we first need the pod name. Using the “oc get pods” command, we can get a list of pods.

# oc logs ruby-hello-world-1-65lgf
You might consider adding 'puma' into your Gemfile.
[2015-08-03 08:19:03] INFO WEBrick 1.3.1
[2015-08-03 08:19:03] INFO ruby 2.0.0 (2013-11-22) [x86_64-linux]
[2015-08-03 08:19:03] INFO WEBrick::HTTPServer#start: pid=1 port=8080
10.1.0.4 - - [03/Aug/2015 08:49:15] "GET / HTTP/1.1" 200 2496 0.0117
[2015-08-03 08:49:45] ERROR Errno::ECONNRESET: Connection reset by peer
 /opt/rh/ruby200/root/usr/share/ruby/webrick/httpserver.rb:80:in `eof?'
 /opt/rh/ruby200/root/usr/share/ruby/webrick/httpserver.rb:80:in `run'
 /opt/rh/ruby200/root/usr/share/ruby/webrick/server.rb:295:in `block in start_thread'

Beyond looking at a pod's logs we can also access journald for docker, openshift-master and openshift-node. Using the journalctl commands below we can follow the current log messages for the major OpenShift components.

# journalctl -f -l -u docker
# journalctl -f -l -u openshift-master
# journalctl -f -l -u openshift-node

Issue 1: Pod shows as pending and scheduled but never gets deployed on a node.

This problem can occur if the node docker image cache gets out of sync. In order to resolve this issue perform the following steps on the node:

# systemctl stop docker
# rm -rf /var/lib/docker/*
# reboot

Summary

In this article we have seen how to deploy an OpenShift Enterprise v3 lab environment and how to use OpenShift to deploy and build applications. This is just the tip of the iceberg, of course. In a world where speed and agility become increasingly important, it is clear that container infrastructure will become the future platform for running applications. You simply can’t argue with being able to start 60 containers in the time it takes to start a single VM. Google deploys over two billion containers a week, and everything you do from Google mail to search runs in a container. Containers are enterprise ready and it is time to start understanding how to take advantage of this technology. OpenShift Enterprise v3 provides a platform for building and running applications on container infrastructure, enabling organizations to innovate faster and bring that innovation to market sooner. Don’t let your organization be overtaken by the next startup! If you found this article informative or helpful, please share your thoughts.

Happy OpenShifting!

(c) 2015 Keith Tenzer

10 thoughts on “OpenShift Enterprise v3 Lab Configuration: Innovate Faster, Deliver Sooner”

    • Well, autoscaling isn’t exactly a precise term; it is used rather broadly from what I have experienced. Maybe an example of what you consider autoscaling would be appropriate? There are many levels where autoscaling applies, imho. From the application perspective OpenShift provides autoscaling, in that compute resources and incoming requests are dynamically scaled (more containers running on more container hosts). What is missing in my example is an automated trigger as opposed to a manual one; maybe that is what you mean? This can certainly be done in OpenShift, though I didn’t specify it. Here is an example using VMs in OpenStack, autoscaling a simple HTTP server based on a trigger… yes, CPU usage is not the best trigger, it is only an example.

      https://keithtenzer.com/2015/10/05/auto-scaling-applications-with-openstack-heat/


  1. Hi,
    I followed installation steps as mentioned and when I try to execute the step “oadm policy add-cluster-role-to-user cluster-admin admin” it gives an error saying “Error: couldn’t read version from server: Get https://master.hostname.local:8443/api: Unknown Host see ‘oadm policy add-cluster-role-to-user -h’ for help.


    • Check your DNS: nslookup master.hostname.local. You need working DNS and you need A record for openshift master and any nodes in DNS in addition to wildcard for your application domain in OpenShift. This is also documented in the guide.

      99% of OpenShift problems are a result of not properly working DNS.


  2. Here are my DNS records defined in the name server.

    $TTL 604800
    @ IN SOA ns1.devspace.local. admin.devspace.local. (
    3 ; Serial
    604800 ; Refresh
    86400 ; Retry
    2419200 ; Expire
    604800 ) ; Negative Cache TTL
    ;
    ; name servers – NS records
    IN NS ns1.devspace.local.

    ; name servers – A records
    ns1.devspace.local. IN A 192.168.100.101

    ; 192.168.100.0/22 – A records
    *.devspace.local. IN A 192.168.100.110
    master.devspace.local. IN A 192.168.100.110
    node01.devspace.local. IN A 192.168.100.111
    node02.devspace.local. IN A 192.168.100.112

