OpenShift Enterprise 3.4: all-in-one Lab Environment



In this article we will set up an OpenShift Enterprise 3.4 all-in-one configuration.

OpenShift has several different roles: masters, nodes, etcd and load balancers. An all-in-one setup means running all services on a single system. Since we are only using a single system, no load balancer or HAProxy will be configured. If you would like to read more about OpenShift, I can recommend the following:


Configure a VM with the following:

  • RHEL 7.3
  • 2 CPUs
  • 4096 MB RAM
  • 30GB disk for OS
  • 25GB disk for docker images
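Before registering the system, it can help to sanity-check the VM against the sizing above. A minimal sketch (thresholds taken from the list; the script and its messages are my own, not part of the installer):

```shell
# Sanity-check the VM against the lab sizing (assumed thresholds: 2 CPUs, ~4 GB RAM)
cpus=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
echo "CPUs: ${cpus}"
echo "MemTotal: ${mem_kb} kB"
if [ "${cpus}" -ge 2 ] && [ "${mem_kb}" -ge 3500000 ]; then
    echo "sizing: ok"
else
    echo "sizing: below recommended lab specs"
fi
```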
# subscription-manager repos --disable="*"
# subscription-manager repos \
    --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ose-3.4-rpms"
# yum install wget git net-tools bind-utils iptables-services bridge-utils bash-completion
# yum update -y
# yum install -y atomic-openshift-utils
# yum install atomic-openshift-excluder atomic-openshift-docker-excluder
# atomic-openshift-excluder unexclude
# yum install -y docker
# vi /etc/sysconfig/docker
OPTIONS='--selinux-enabled --insecure-registry 172.30.0.0/16'
# cat <<EOF > /etc/sysconfig/docker-storage-setup
DEVS=/dev/vdb
VG=docker-vg
EOF
# docker-storage-setup
# systemctl enable docker
# systemctl start docker
# ssh-keygen
# ssh-copy-id -i /root/.ssh/id_rsa.pub root@ose3-master
# vi /etc/hosts
<ip-address> ose3-master
# systemctl reboot

Install OpenShift.

Here we are enabling the ovs-subnet SDN and setting authentication to use htpasswd. This is the most basic configuration, as we are doing an all-in-one setup. For actual deployments you would want multiple masters, dedicated nodes and separate nodes for handling etcd.
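ovs-subnet is the default SDN plugin in openshift-ansible, so no extra variable is strictly required; to pin it explicitly, you could add the following to the inventory's [OSEv3:vars] section:

```ini
# Optional: pin the SDN plugin explicitly (ovs-subnet is the default)
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
```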

# vi /etc/ansible/hosts

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=root

# If ansible_ssh_user is not root, ansible_become must be set to true
#ansible_become=true

deployment_type=openshift-enterprise

# uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]

# host group for masters
[masters]
ose3-master

# host group for nodes, includes region info
[nodes]
ose3-master openshift_schedulable=True

Run Ansible playbook to install and configure OpenShift.

# ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml

Configure OpenShift

Create local admin account and enable permissions.

[root@ose3-master ~]# oc login -u system:admin -n default
[root@ose3-master ~]# htpasswd -c /etc/origin/master/htpasswd admin
[root@ose3-master ~]# oadm policy add-cluster-role-to-user cluster-admin admin
[root@ose3-master ~]# oc login -u admin -n default
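The htpasswd utility comes from the httpd-tools package. If it is not installed, a compatible entry can also be generated with openssl, which supports the same APR1 (Apache MD5) scheme that htpasswd uses by default. A hypothetical user and password are shown; the resulting line can be appended to /etc/origin/master/htpasswd:

```shell
# Generate an htpasswd-compatible entry for user 'admin' using openssl APR1
HASH=$(openssl passwd -apr1 'redhat123')
echo "admin:${HASH}"
```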

Configure the OpenShift image registry. Built images are stored in the registry. When you build an application, your application code is combined with a builder image and the result is pushed to the registry as a new image. This enables S2I (Source-to-Image) and allows for fast build times.

[root@ose3-master ~]# oadm registry --service-account=registry \
    --config=/etc/origin/master/admin.kubeconfig

Configure the OpenShift router. The OpenShift router is essentially an HAProxy that forwards incoming service requests to the node where the pod is running.

[root@ose3-master ~]# oadm router router --replicas=1 \
    --credentials='/etc/origin/master/openshift-router.kubeconfig' \
    --service-account=router


In this article we have seen how to configure an OpenShift 3.4 all-in-one lab environment. We have also seen how the installation and configuration can be adapted through the Ansible playbook. This environment is intended for a lab, and as such no OpenShift best practices are implied. If you have any feedback, please share.

Happy OpenShifting!

(c) 2017 Keith Tenzer