OpenShift Enterprise 3.4: all-in-one Lab Environment

In this article we will set up an OpenShift Enterprise 3.4 all-in-one configuration.

OpenShift has several different roles: masters, nodes, etcd and load balancers. An all-in-one setup means running all services on a single system. Since we are only using a single system, a load balancer or HAProxy won't be configured. If you would like to read more about OpenShift, I recommend the official OpenShift documentation.


Configure a VM with the following:

  • RHEL 7.3
  • 2 CPUs
  • 4096 MB RAM
  • 30GB disk for OS
  • 25GB disk for docker images
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.4-rpms"
# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
# yum update -y
# yum install -y atomic-openshift-utils
# yum install atomic-openshift-excluder atomic-openshift-docker-excluder
# atomic-openshift-excluder unexclude
# yum install -y docker
# vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
Configure docker storage to use the second disk (the device name, here /dev/vdb, may differ on your VM):

# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
EOF
# docker-storage-setup
# systemctl enable docker
# systemctl start docker
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@ose3-master
# vi /etc/hosts
<ip-address>   ose3-master
# systemctl reboot

Install OpenShift.

Here we are enabling the ovs-subnet SDN and setting authentication to use htpasswd. This is the most basic configuration, as we are doing an all-in-one setup. For actual deployments you would want multiple masters, dedicated nodes and separate nodes for handling etcd.

Edit the Ansible inventory file (/etc/ansible/hosts) and replace its contents with the following:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

deployment_type=openshift-enterprise
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
ose3-master

# host group for nodes, includes region info
[nodes]
ose3-master openshift_schedulable=True
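Before running the installer, it can help to confirm Ansible can actually reach the host over SSH. This quick sanity check is my addition, not part of the original walkthrough:

```shell
# ping all hosts in the inventory over SSH; each should reply "pong"
ansible all -i /etc/ansible/hosts -m ping
```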

Run Ansible playbook to install and configure OpenShift.

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml
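Once the playbook finishes, a quick sanity check (a sketch, assuming the default service names for OpenShift 3.4) is to confirm the node registered and the master and node services are running:

```shell
# the all-in-one host should show up with STATUS "Ready"
oc get nodes

# master and node run as systemd services on an RPM install
systemctl status atomic-openshift-master atomic-openshift-node
```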

Configure OpenShift

Create local admin account and enable permissions.

[root@ose3-master ~]# oc login -u system:admin -n default
[root@ose3-master ~]# htpasswd -c /etc/origin/master/htpasswd admin
[root@ose3-master ~]# oadm policy add-cluster-role-to-user cluster-admin admin
[root@ose3-master ~]# oc login -u admin -n default
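The htpasswd file stores one user per line in user:hash format. If the htpasswd binary (from httpd-tools) is not handy, an equivalent Apache MD5 hash can be generated with openssl; the developer username, redhat password and xyz salt below are just examples:

```shell
# generate an Apache MD5 (apr1) password hash, the format htpasswd writes
hash=$(openssl passwd -apr1 -salt xyz redhat)

# an htpasswd entry is simply user:hash; append this to the htpasswd file
echo "developer:$hash"
```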

Configure the OpenShift image registry. Images are stored in the registry; when you build an application, the resulting image is pushed to the registry and referenced by an image stream. This enables S2I (Source-to-Image) and allows for fast build times.

[root@ose3-master ~]# oadm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig

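You can check that the registry came up before moving on (assuming the default docker-registry name in the default project):

```shell
# the registry runs as a deployment and pod in the "default" project
oc get dc docker-registry -n default
oc get pods -n default
```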
Configure the OpenShift router. The router is basically an HAProxy that sends incoming service requests to the node where the pod is running.

[root@ose3-master ~]# oadm router router --replicas=1 \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router
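To exercise the registry, an S2I build and the router end to end, you can deploy a sample application. The project name is illustrative; ruby-hello-world is one of the standard OpenShift example repositories:

```shell
# create a project and deploy a sample Ruby app via S2I from source
oc new-project demo
oc new-app https://github.com/openshift/ruby-hello-world.git

# follow the build, then expose the generated service through the router
oc logs -f bc/ruby-hello-world
oc expose service ruby-hello-world
```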


In this article we have seen how to configure an OpenShift 3.4 all-in-one lab environment. We have also seen how the installation and configuration can be adapted through the Ansible inventory and playbooks. This environment is intended to be a lab, and as such no best practices are given in regards to OpenShift. If you have any feedback please share.

Happy OpenShifting!

(c) 2017 Keith Tenzer

5 thoughts on “OpenShift Enterprise 3.4: all-in-one Lab Environment”

  1. Hi Keith, I’ve seen a lot of all-in-one implementations of OpenShift and it’s a bit confusing. Can you do a compare and contrast of different AiO builds such as the one listed here, the oc cluster way and the Minishift way, etc.? Also, which one is better and easier to implement?

    Another suggestion: can you make a tutorial for a multi-node setup and a sample developer’s “workshop”, such as creating image streams, S2I, metrics/auto-scaling, etc.? That would be very helpful.

    Thanks and keep up the good work!


    • Hi Jasper,

      It really depends on your goals. What I have documented is a complete running OpenShift environment. In order to build a multi-node environment you simply change the Ansible inventory, adding masters and nodes, and re-run Ansible. If you really want to understand OpenShift from the operations side then this is the best way to go.
      The “oc cluster” method runs OpenShift as docker containers. This is the easiest way to just get an environment up, but there is a lot you cannot do. You can’t deploy or make changes using Ansible, many of the operational parts don’t work, and it is not a supported way to run OpenShift; it is just for testing and learning the dev-related aspects of the platform.
      Minishift seems to be a full deployment of OpenShift, however it deploys the VM and installs OpenShift for you, so again you don’t get to experience Ansible. It is also designed to run only on a single VM. My method runs on a single VM but you can easily expand it. I have not used Minishift.

      I would say “oc cluster” is easiest, then Minishift, and then my method. As far as learning OpenShift, if you are just interested in the development side then “oc cluster” or Minishift are fine. If you are also interested in the ops side and really learning about OpenShift deployment, updates, etc., then I would recommend the method I outlined; it is the same as deploying OpenShift in real-world environments.

      Hope this helps



  2. Under the “Install OpenShift” heading, but before the portion describing “run ansible-playbook” – what is the intent of that block of text? It looks like it could be part of a playbook, but I’m uncertain as to what to do with it. Can you please help?


    • Hi Brian,

      That is the Ansible inventory file; you need to “vi /etc/ansible/hosts” and replace its contents with the text block from the blog. The inventory file, using group vars, is how OpenShift is configured.



      • Thanks Keith! I was able to follow these instructions along with the OpenShift 3.5 Installation Guide to get my all-in-one setup working. I had to perform one simple modification: instead of configuring the registry and router, those are already done. However, I had to set the region label to infra for everything to work properly (oc label node region=infra).

