OpenStack Multiple Node Configurations


Node Types

OpenStack can be deployed in a single-node or multi-node configuration. For the purposes of this post I am going to assume you understand OpenStack basics and have at least done a basic single-node installation using RDO or another installer. If not, please refer to this post, which covers the basics. OpenStack is of course a collection of loosely coupled projects, each defining a set of services. A node is nothing more than a grouping of OpenStack services running on bare metal, in a container or in a virtual machine. Grouping services into nodes is what enables horizontal scaling and high availability (HA). There are four node types in OpenStack: controller, compute, network and storage.

Controller Node

The controller node is the control plane for the OpenStack environment. The control plane handles identity (keystone), dashboard (horizon), telemetry (ceilometer), orchestration (heat) and the networking server (neutron).
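
On a deployed controller you can see this grouping directly. The check below is a minimal illustration, assuming a Red Hat style install where the service units are prefixed openstack- and neutron-:

  • List the control-plane service units
  • #systemctl list-units --type=service "openstack-*" "neutron-*"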

Compute Node

The compute node runs a hypervisor (KVM, ESX, Hyper-V or XenServer). It handles the compute service (nova), telemetry (ceilometer) and the Open vSwitch agent (neutron).
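
On the compute node itself you can quickly confirm that hardware virtualization is available for KVM and that the expected agents are running (a quick sanity check; service names assume a Red Hat style install):

  • Check for Intel VT-x or AMD-V CPU flags (a non-zero count means KVM can use hardware acceleration)
  • #egrep -c '(vmx|svm)' /proc/cpuinfo
  • Check the nova and neutron agents
  • #systemctl status openstack-nova-compute neutron-openvswitch-agent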

Network Node

The network node runs the networking services (neutron): the L3, metadata, DHCP and Open vSwitch agents. It handles all networking between the other nodes as well as tenant networking and routing, providing services such as DHCP and floating IPs that allow instances to connect to public networks. Neutron sits on top of Open vSwitch, using either the ml2 or the openvswitch plugin. With Open vSwitch, Neutron builds three network bridges: br-int, br-tun and br-ex. The br-int bridge connects all instances on a node. The br-tun bridge carries tunneled tenant traffic (GRE or VXLAN) between nodes over the hypervisor's physical NIC. The br-ex bridge connects instances to external (public) networks using floating IPs. The br-tun and br-int bridges exist on both compute and network nodes; br-ex exists only on network nodes.
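
You can list these bridges on any node running Open vSwitch; a compute node should show br-int and br-tun, while the network node also shows br-ex:

  • #ovs-vsctl list-br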

Storage Node

The storage node runs the storage services. It handles the image service (glance), block storage (cinder), object storage (swift) and, in the future, shared file storage (manila). Typically a storage node runs one type of storage service: object, block or file. Glance should run on the node that provides the backing storage for images (cinder or swift), since it typically benefits from being co-located with its storage backend. NetApp, for example, provides a storage backend that lets images be cloned on the array itself instead of copied over the network.
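
As a minimal sketch of that pairing, a simple file-backed Glance store is configured in /etc/glance/glance-api.conf as shown below (these are the stock file-store options; a NetApp or other vendor backend would use its own driver settings instead):

  • #vi /etc/glance/glance-api.conf
  • default_store=file
    filesystem_store_datadir=/var/lib/glance/images/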

Multi-node Configurations

While single-node configurations are acceptable for small environments, testing or POCs, most production environments will require a multi-node configuration for various reasons. As mentioned, multi-node configurations group similar OpenStack services and provide scalability as well as the possibility of high availability. One of the great things about OpenStack is its architecture: every service is decoupled, and all communication between services happens through RESTful API endpoints. This is the model architecture for cloud. The advantage is tremendous flexibility in how to build a multi-node configuration. While a few standards have emerged, many more variations are possible, so we are not stuck with a rigid deployment model. The standard multi-node OpenStack deployments are two-node, three-node and four-node configurations.
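
This decoupling is easy to see on a running cloud: every service registers its API endpoint in Keystone, and all inter-service communication goes through those endpoints. With admin credentials sourced you can list them (a quick illustration):

  • #keystone service-list
  • #keystone endpoint-list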

Two-node OpenStack Configuration

The two-node configuration has a controller and a compute node. Here we can easily scale out compute nodes. Most likely we would run just one controller node, or we could set up an active/passive HA configuration for the controller using Pacemaker. Below is an illustration of a two-node OpenStack configuration:

[Diagram: two-node OpenStack architecture]
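
As a minimal sketch of the active/passive option, Pacemaker can float a virtual IP between two controller nodes so that clients always reach the active controller through one address (the VIP address below is an assumption for this environment):

  • #pcs resource create controller-vip ocf:heartbeat:IPaddr2 ip=192.168.2.210 cidr_netmask=24 op monitor interval=30s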

Three-node OpenStack Configuration

The three-node configuration has a controller, a compute and a network node. Here we can easily scale out compute or network nodes. Most likely we would run just one controller node, or we could set up an active/passive HA configuration for the controller using Pacemaker. In addition we could set up an active/passive configuration for the network node to achieve HA, as well as scale it horizontally depending on resource requirements. Below is an illustration of a three-node OpenStack configuration:

[Diagram: three-node OpenStack architecture]

Four-node OpenStack Configuration

The four-node configuration has a controller, a compute, a network and a storage node. Here we can easily scale out compute, network and storage nodes. Most likely we would run just one controller node, or we could set up an active/passive HA configuration for the controller using Pacemaker. In addition we could set up active/passive configurations for the network and storage nodes to achieve HA, as well as scale them horizontally depending on resource requirements. Below is an illustration of a four-node OpenStack configuration:

[Diagram: four-node OpenStack architecture]

Three-node OpenStack Installation

Before we start the installation we need to provision three nodes running RHEL (Red Hat Enterprise Linux). The nodes can be bare metal, containers or virtual machines. In my environment I created three virtual machines running RHEL 7 under RHEV 3.4 (Red Hat Enterprise Virtualization). If you are interested in how to set up RHEV you can get more information here. Below are the steps to deploy a three-node OpenStack installation using RHEL 7 and the latest Red Hat OpenStack distribution:

Steps to perform on each node

  • Install RHEL 7 and enable one NIC interface (eth0)
  • Register subscription
  • #subscription-manager register
  • List available subscriptions
  • #subscription-manager list --available
  • Attach a specific subscription (the pool id is listed in above command)
  • #subscription-manager attach --pool=8a85f9814a7ea2ec014a813b19433cc8
  • Clear existing repositories and enable correct ones to grab latest Red Hat OpenStack distro
  • #subscription-manager repos --disable="*"
  • #subscription-manager repos --enable=rhel-7-server-rpms
  • #subscription-manager repos --enable=rhel-7-server-optional-rpms
  • #subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms

or

  • #subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms
  • Install required packages
  • #yum install -y yum-plugin-priorities yum-utils
  • #yum-config-manager --setopt="rhel-7-server-openstack-5.0-rpms.priority=1" --enable rhel-7-server-openstack-5.0-rpms

or

  • #yum-config-manager --setopt="rhel-7-server-openstack-6.0-rpms.priority=1" --enable rhel-7-server-openstack-6.0-rpms
  • #yum update -y
  • #yum install -y openstack-packstack
  • Set up the hostname (shown here for the controller; use ostack-cmp.openstack and ostack-net.openstack on the other nodes)
  • #hostname ostack-ctr.openstack
  • #vi /etc/hostname
  • ostack-ctr.openstack
  • #vi /etc/sysconfig/network
  • HOSTNAME=ostack-ctr.openstack
    GATEWAY=192.168.2.1
  • Set up the hosts file if DNS resolution is not configured
  • #vi /etc/hosts
  • 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.2.205 ostack-ctr ostack-ctr.openstack
    192.168.2.206 ostack-cmp ostack-cmp.openstack
    192.168.2.207 ostack-net ostack-net.openstack
  • Configure eth0 network interface
  • #vi /etc/sysconfig/network-scripts/ifcfg-eth0
  • TYPE=Ethernet
    BOOTPROTO=none
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_PEERDNS=yes
    IPV6_PEERROUTES=yes
    IPV6_FAILURE_FATAL=no
    NAME=eth0
    UUID=6c53462f-f735-48c0-ae85-c0ec61a53688
    ONBOOT=yes
    HWADDR=00:1A:4A:DE:DB:CC
    IPADDR0=192.168.2.205
    PREFIX0=24
    GATEWAY0=192.168.2.1
    DNS1=192.168.2.1
    NM_CONTROLLED=no

Note: ensure you set the MAC address (HWADDR) correctly. You can find the MAC address using the "ip a" command. Also adjust IPADDR0 for each node (192.168.2.205 controller, 192.168.2.206 compute, 192.168.2.207 network) to match the hosts file above.

  • Disable NetworkManager and enable the legacy network service (the interface configs above set NM_CONTROLLED=no, so the network service should handle networking at boot)
  • #systemctl disable NetworkManager
  • #systemctl enable network
  • Reboot the node
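
After the reboot it is worth confirming that the static network configuration came up and that the OpenStack repository is active before continuing (a quick sanity check; output will vary per node):

  • #ip a show eth0
  • #yum repolist enabled | grep openstack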

Steps to perform on controller node

  • Generate the default answers file
  • #packstack --gen-answer-file=/root/answer_file.txt
  • Update the following in answers file
  • CONFIG_CONTROLLER_HOSTS=192.168.2.205
    CONFIG_COMPUTE_HOSTS=192.168.2.206
    CONFIG_NETWORK_HOSTS=192.168.2.207
    CONFIG_STORAGE_HOST=192.168.2.205
    CONFIG_HORIZON_SSL=y
    CONFIG_PROVISION_DEMO=n
  • Install OpenStack using packstack answers file
  • #packstack --answer-file=/root/answer_file.txt
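
When packstack completes it writes a keystonerc_admin credentials file to /root on the controller. Sourcing it and listing the compute services and network agents is a quick way to confirm that all three nodes registered correctly:

  • #source /root/keystonerc_admin
  • #nova service-list
  • #neutron agent-list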

Network Node Configuration

In order to connect to an existing physical network, eth0 must be added as a port to the Open vSwitch br-ex bridge. The steps below should be performed on the network node:

  • Update network config for eth0
  • #vi /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:1A:4A:DE:DB:9C
    TYPE=OVSPort
    DEVICETYPE=ovs
    OVS_BRIDGE=br-ex
    ONBOOT=yes

Note: ensure that the MAC address (HWADDR) is correct.

  • Update network config for br-ex
  • #vi /etc/sysconfig/network-scripts/ifcfg-br-ex
  • DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.168.2.207
    NETMASK=255.255.255.0
    GATEWAY=192.168.2.1
    DNS1=192.168.2.1
    ONBOOT=yes
  • Add eth0 to br-ex bridge and restart networking
  • #ovs-vsctl add-port br-ex eth0 ; systemctl restart network.service
  • Verify Open vSwitch configuration on network node (eth0 should be connected to br-ex)
  • #ovs-vsctl show
    348cc676-f177-4ee3-a522-8f02aeb4dcd6
     Bridge br-int
        fail_mode: secure
        Port br-int
           Interface br-int
              type: internal
     Bridge br-tun
        Port patch-int
           Interface patch-int
              type: patch
              options: {peer=patch-tun}
        Port "gre-1"
           Interface "gre-1"
              type: gre
              options: {in_key=flow, local_ip="192.168.2.207", out_key=flow, remote_ip="192.168.2.206"}
        Port br-tun
           Interface br-tun
              type: internal
     Bridge br-ex
        Port "eth0"
           Interface "eth0"
        Port br-ex
           Interface br-ex
              type: internal
     ovs_version: "2.1.3"

Networking Configuration

Public networks allow instances to connect to existing external networks. This is achieved by allocating floating IPs from an existing external network that is shared across tenants. Private networks are tenant networks that provide complete isolation for all instances within a tenant.

Create Private Network

  • #neutron net-create private
  • #neutron subnet-create private 10.0.0.0/24 --name private_subnet

Create Public Network

  • #neutron net-create public --shared --router:external=True
  • #neutron subnet-create public 192.168.2.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=192.168.2.220,end=192.168.2.240 --gateway=192.168.2.1

Create Router

  • #neutron router-create router1
  • #neutron router-interface-add router1 private
  • #neutron router-gateway-set router1 public

Note: make sure you source the /root/keystonerc_admin file first, otherwise the neutron commands will not work.
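
You can confirm the router wiring with the following commands (a quick check):

  • #neutron router-show router1
  • #neutron router-port-list router1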

Once we have created the private and public networks we should see the network topology below. We can connect instances to just the private network, or to both networks by allocating a floating IP.

[Diagram: network topology with router1 connecting the private and public networks]
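
For example, to give an instance external connectivity, allocate a floating IP from the public pool and associate it with the instance (the instance name and IP below are assumptions for illustration):

  • #neutron floatingip-create public
  • #nova add-floating-ip myinstance 192.168.2.221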

Summary

This guide covered the very basics of multi-node OpenStack deployments and networking. From here you should hopefully be able to deploy your own multi-node OpenStack configurations using Red Hat RDO. I really recommend setting up multi-node environments; it is the best way to understand and learn how the different OpenStack projects interact with one another. In addition, if you would like to do everything from scratch without RDO or another distro, you can follow the complete manual guide here. I hope this guide has been interesting and useful. I always appreciate comments and feedback, so please leave some.

Happy Stacking!

(c) 2015 Keith Tenzer