OpenStack Multiple Node Configurations

Node Types

OpenStack can be deployed in a single-node or multi-node configuration. For the purposes of this post I am going to assume you understand OpenStack basics and have at least done a basic installation on a single node using RDO or another installer. If not, please refer to this post, which covers the basics. OpenStack is of course a collection of loosely coupled projects that define services. A node is nothing more than a grouping of OpenStack services that run on bare metal, in a container or in a virtual machine. The purpose of a node is to provide horizontal scaling and high availability (HA). There are four possible node types in OpenStack: controller, compute, network and storage.

Controller Node

The controller node is the control plane for the OpenStack environment. The control plane handles identity (keystone), dashboard (horizon), telemetry (ceilometer), orchestration (heat) and the networking server (neutron).
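
Once the environment is deployed, a quick way to confirm which services are actually running on the controller is the openstack-status utility; this is just a sketch and assumes the openstack-utils package is available in your repositories:

  • #yum install -y openstack-utils
  • #openstack-status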

Compute Node

The compute node runs a hypervisor (KVM, VMware ESXi, Hyper-V or XenServer). The compute node handles the compute service (nova), telemetry (ceilometer) and the neutron Open vSwitch agent.
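
To verify that a compute node has registered with the control plane after installation, you can list the nova services from the controller; a minimal check, assuming the admin credentials file created by the installer:

  • #source /root/keystonerc_admin
  • #nova service-list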

Network Node

The network node runs the networking services (neutron). It runs the neutron agents for L3, metadata, DHCP and Open vSwitch. The network node handles all networking between the other nodes as well as tenant networking and routing. It provides services such as DHCP and floating IPs that allow instances to connect to public networks. Neutron sits on top of Open vSwitch, using either the ML2 or the legacy openvswitch plugin. Using Open vSwitch, neutron builds three network bridges: br-int, br-tun and br-ex. The br-int bridge connects all instances. The br-tun bridge carries tunneled tenant traffic between nodes over the hypervisor's physical NIC. The br-ex bridge connects instances to external (public) networks using floating IPs. Both the br-tun and br-int bridges exist on compute and network nodes. The br-ex bridge exists only on network nodes.
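
You can check which bridges exist on a given node with ovs-vsctl. On a network node you should see all three bridges, while a compute node will only show br-int and br-tun; for example:

  • #ovs-vsctl list-br
  • br-ex
    br-int
    br-tun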

Storage Node

The storage node runs storage services. It handles the image service (glance), block storage (cinder), object storage (swift) and, in the future, shared file storage (manila). Typically a storage node runs one type of storage service: object, block or file. Glance should run on nodes providing storage services for images (cinder or swift), since it typically benefits from running on the same node as its backing storage service. NetApp, for example, provides a storage backend that allows images to be cloned on the array itself instead of being copied over the network.
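
Once the storage services are installed, you can confirm they are registered by listing the cinder services and glance images from any node with the admin credentials sourced; a minimal check:

  • #cinder service-list
  • #glance image-list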

Multi-node Configurations

While single-node configurations are acceptable for small environments, testing or POCs, most production environments will require a multi-node configuration for various reasons. As mentioned, multi-node configurations group similar OpenStack services and provide scalability as well as the possibility of high availability. One of the great things about OpenStack is the architecture. Every service is decoupled and all communication between services is done through RESTful API endpoints. This is a model architecture for cloud. The advantage is that we have tremendous flexibility in how to build a multi-node configuration. While a few standards have emerged there are many more possible variations, and in the end we are not locked into a rigid deployment model. The common patterns for deploying multi-node OpenStack are two-node, three-node and four-node configurations.
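
As a simple illustration of this API-driven architecture, every service can be reached directly over HTTP. For example, requesting a token from keystone (Identity v2.0, which these releases use by default) is just a REST call; the controller IP and password below are placeholders for your own values:

  • #curl -s -X POST http://192.168.2.205:5000/v2.0/tokens -H "Content-Type: application/json" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "<admin password>"}}}'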

Two-node OpenStack Configuration

The two-node configuration has a controller and a compute node. Here we can easily scale out compute nodes. Most likely we would run just one controller node, or we could set up an active/passive HA configuration for the controller node using Pacemaker. Below is an illustration of a two-node OpenStack configuration:

[Image: openstack_multinode_2_architecture]

Three-node OpenStack Configuration

The three-node configuration has a controller, compute and network node. Here we can easily scale out compute or network nodes. Most likely we would run just one controller node, or we could set up an active/passive HA configuration for the controller node using Pacemaker. In addition, we could also set up an active/passive configuration for the network node to achieve HA as well as horizontal scaling depending on resource requirements. Below is an illustration of a three-node OpenStack configuration:

[Image: openstack_multinode_3_architecture]

Four-node OpenStack Configuration

The four-node configuration has a controller, compute, network and storage node. Here we can easily scale out compute, network and storage nodes. Most likely we would run just one controller node, or we could set up an active/passive HA configuration for the controller node using Pacemaker. In addition, we could also set up active/passive configurations for the network and storage nodes to achieve HA as well as horizontal scaling depending on resource requirements. Below is an illustration of a four-node OpenStack configuration:

[Image: openstack_multinode_4_architecture]
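
For the active/passive controller configurations mentioned above, a minimal Pacemaker sketch is to place a virtual IP in front of the controller services and let Pacemaker move it between two controller hosts. The host names and IP below are examples only, and this assumes the pcs tooling from the RHEL High Availability add-on:

  • #pcs cluster setup --name openstack-ha ostack-ctr1 ostack-ctr2
  • #pcs cluster start --all
  • #pcs resource create controller-vip ocf:heartbeat:IPaddr2 ip=192.168.2.210 cidr_netmask=24 op monitor interval=30s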

Three-node OpenStack Installation

Before we start the installation we will need to provision three nodes running RHEL (Red Hat Enterprise Linux). The nodes can be bare metal, containers or virtual machines. In my environment I created three virtual machines running RHEL 7 under RHEV 3.4 (Red Hat Enterprise Virtualization). If you are interested in how to set up RHEV you can get more information here. Below are the steps to deploy a three-node OpenStack installation using RHEL 7 and the latest Red Hat OpenStack distribution:

Steps to perform on each node

  • Install RHEL 7 and enable one NIC interface (eth0)
  • Register subscription
  • #subscription-manager register
  • List available subscriptions
  • #subscription-manager list --available
  • Attach a specific subscription (the pool id is listed in above command)
  • #subscription-manager attach --pool=8a85f9814a7ea2ec014a813b19433cc8
  • Clear existing repositories and enable correct ones to grab latest Red Hat OpenStack distro
  • #subscription-manager repos --disable=*
  • #subscription-manager repos --enable=rhel-7-server-rpms
  • #subscription-manager repos --enable=rhel-7-server-optional-rpms
  • #subscription-manager repos --enable=rhel-7-server-openstack-5.0-rpms

or

  • #subscription-manager repos --enable=rhel-7-server-openstack-6.0-rpms
  • Install required packages
  • #yum install -y yum-plugin-priorities yum-utils
  • #yum-config-manager --setopt="rhel-7-server-openstack-5.0-rpms.priority=1" --enable rhel-7-server-openstack-5.0-rpms

or

  • #yum-config-manager --setopt="rhel-7-server-openstack-6.0-rpms.priority=1" --enable rhel-7-server-openstack-6.0-rpms
  • #yum update -y
  • #yum install -y openstack-packstack
  • Set up hostname
  • #hostname ostack-ctr.openstack
  • #vi /etc/hostname
  • ostack-ctr.openstack
  • #vi /etc/sysconfig/network
  • HOSTNAME=ostack-ctr.openstack
    GATEWAY=192.168.2.1
  • Set up hosts file if DNS resolution is not configured
  • #vi /etc/hosts
  • 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.2.205 ostack-ctr ostack-ctr.openstack
    192.168.2.206 ostack-cmp ostack-cmp.openstack
    192.168.2.207 ostack-net ostack-net.openstack
  • Configure eth0 network interface
  • #vi /etc/sysconfig/network-scripts/ifcfg-eth0
  • TYPE=Ethernet
    BOOTPROTO=none
    DEFROUTE=yes
    IPV4_FAILURE_FATAL=no
    IPV6INIT=no
    IPV6_AUTOCONF=yes
    IPV6_DEFROUTE=yes
    IPV6_PEERDNS=yes
    IPV6_PEERROUTES=yes
    IPV6_FAILURE_FATAL=no
    NAME=eth0
    UUID=6c53462f-f735-48c0-ae85-c0ec61a53688
    ONBOOT=yes
    HWADDR=00:1A:4A:DE:DB:CC
    IPADDR0=192.168.2.205
    PREFIX0=24
    GATEWAY0=192.168.2.1
    DNS1=192.168.2.1
    NM_CONTROLLED=no

Note: ensure you set the MAC address (HWADDR) correctly. You can find the MAC address using the “ip a” command.

  • Disable NetworkManager
  • #systemctl disable NetworkManager
  • Reboot the node
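
Before moving on, it is worth sanity-checking each node: confirm the hostname, that the OpenStack repository is enabled, and that the nodes can resolve and reach each other. For example:

  • #hostnamectl status
  • #yum repolist enabled | grep openstack
  • #ping -c 3 ostack-cmp.openstack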

Steps to perform on controller node

  • Generate the default answers file
  • #packstack --gen-answer-file=/root/answer_file.txt
  • Update the following in the answers file (a scripted alternative is shown after this list)
  • CONFIG_CONTROLLER_HOSTS=192.168.2.205
    CONFIG_COMPUTE_HOSTS=192.168.2.206
    CONFIG_NETWORK_HOSTS=192.168.2.207
    CONFIG_STORAGE_HOST=192.168.2.205
    CONFIG_HORIZON_SSL=y
    CONFIG_PROVISION_DEMO=n
  • Install OpenStack using packstack answers file
  • #packstack --answer-file=/root/answer_file.txt
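
If you prefer to script the answers-file changes rather than editing the file by hand, the same values can be set with sed; the IP addresses are the ones used in this environment, so substitute your own:

  • #sed -i 's/^CONFIG_CONTROLLER_HOSTS=.*/CONFIG_CONTROLLER_HOSTS=192.168.2.205/' /root/answer_file.txt
  • #sed -i 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=192.168.2.206/' /root/answer_file.txt
  • #sed -i 's/^CONFIG_NETWORK_HOSTS=.*/CONFIG_NETWORK_HOSTS=192.168.2.207/' /root/answer_file.txt
  • #sed -i 's/^CONFIG_STORAGE_HOST=.*/CONFIG_STORAGE_HOST=192.168.2.205/' /root/answer_file.txt
  • #sed -i 's/^CONFIG_HORIZON_SSL=.*/CONFIG_HORIZON_SSL=y/' /root/answer_file.txt
  • #sed -i 's/^CONFIG_PROVISION_DEMO=.*/CONFIG_PROVISION_DEMO=n/' /root/answer_file.txt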

Network Node Configuration

In order to connect to an existing physical network, eth0 must be added as a port on the Open vSwitch br-ex bridge. The below steps should be performed on the network node:

  • Update network config for eth0
  • #vi /etc/sysconfig/network-scripts/ifcfg-eth0
    DEVICE=eth0
    HWADDR=00:1A:4A:DE:DB:9C
    TYPE=OVSPort
    DEVICETYPE=ovs
    OVS_BRIDGE=br-ex
    ONBOOT=yes

Note: ensure that the MAC address (HWADDR) is correct

  • Update network config for br-ex
  • #vi /etc/sysconfig/network-scripts/ifcfg-br-ex
  • DEVICE=br-ex
    DEVICETYPE=ovs
    TYPE=OVSBridge
    BOOTPROTO=static
    IPADDR=192.168.2.207
    NETMASK=255.255.255.0
    GATEWAY=192.168.2.1
    DNS1=192.168.2.1
    ONBOOT=yes
  • Add eth0 to br-ex bridge and restart networking
  • #ovs-vsctl add-port br-ex eth0 ; systemctl restart network.service
  • Verify Open vSwitch configuration on network node (eth0 should be connected to br-ex)
  • #ovs-vsctl show
    348cc676-f177-4ee3-a522-8f02aeb4dcd6
     Bridge br-int
        fail_mode: secure
        Port br-int
           Interface br-int
              type: internal
     Bridge br-tun
        Port patch-int
           Interface patch-int
              type: patch
              options: {peer=patch-tun}
        Port "gre-1"
           Interface "gre-1"
              type: gre
              options: {in_key=flow, local_ip="192.168.2.207", out_key=flow, remote_ip="192.168.2.206"}
        Port br-tun
           Interface br-tun
              type: internal
     Bridge br-ex
        Port "eth0"
           Interface "eth0"
        Port br-ex
           Interface br-ex
               type: internal
     ovs_version: "2.1.3"
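
After restarting the network service it is also worth confirming, from the controller, that all neutron agents on the network node report as alive:

  • #source /root/keystonerc_admin
  • #neutron agent-list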

Networking Configuration

Public networks allow instances to connect to existing external networks. This is achieved by allocating floating IPs from existing external networks that are shared across tenants. Private networks are tenant networks that provide complete isolation for the instances within a tenant.

Create Private Network

  • #neutron net-create private
  • #neutron subnet-create private 10.0.0.0/24 --name private_subnet

Create Public Network

  • #neutron net-create public --shared --router:external=True
  • #neutron subnet-create public 192.168.2.0/24 --name public_subnet --enable_dhcp=False --allocation-pool start=192.168.2.220,end=192.168.2.240 --gateway=192.168.2.1

Create Router

  • #neutron router-create router1
  • #neutron router-interface-add router1 private_subnet
  • #neutron router-gateway-set router1 public

Note: make sure you source the /root/keystonerc_admin file, otherwise the neutron commands will not work

Once we have created the private and public networks we should see the below network topology. In addition, we can connect instances to just the private network, or to both networks by allocating a floating IP.

[Image: openstack_router]
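
To actually attach an instance to the private network and reach it from the external network, one approach is to boot the instance on the private network and then associate a floating IP from the public pool. The image, flavor and instance names below are just examples, and the floating IP you associate should be the one returned by floatingip-create:

  • #neutron net-list
  • #nova boot --flavor m1.tiny --image cirros --nic net-id=<private network id> test-instance
  • #neutron floatingip-create public
  • #nova floating-ip-associate test-instance 192.168.2.221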

Summary

This guide covered the very basics of multi-node OpenStack deployments and networking. From here you should hopefully be able to deploy your own OpenStack multi-node configurations using Red Hat RDO. I really recommend setting up multi-node environments; it is the best way to understand and learn how the different OpenStack projects interact with one another. In addition, if you would like to do everything from scratch without RDO or another distro, you can follow the complete manual guide here. I hope this guide has been interesting and useful. I always appreciate comments and feedback so please leave some.

Happy Stacking!

(c) 2015 Keith Tenzer

20 thoughts on “OpenStack Multiple Node Configurations”

  1. Hi Keith,
    I was curious about the sizes of the VMs you created for the nodes?
    Thanks, looking forward to building this lab this week while studying for the Red Hat OS cert.

    • Hi Stephen,

      I would create 50GB VMs for the nodes assuming you use LVM for the cinder backend. For a lab I would use LVM storage and enable thin provisioning. I used to do 30GB but several times ran out of room, and extending storage is a bit of work unless you are using a cinder backend. If you think you are going to need a lot of space for cinder volumes then I would set up a backend via NFS; that is the most flexible. I also typically use ephemeral instances and try to do all the customizations via cloud-init. Good luck with the Red Hat OSP cert!

    • For using RDO (packstack) you only run the installer from one node. It uses puppet to install components on the other nodes. Use the --gen-answer-file option to create the answers file. Then you can specify hostnames for controller, network and compute. They can be the same system or different systems. Use the --answer-file option to pass in your customized answers file.

      Hope this helps!

  2. Great tutorial, just a quick question: if I want to use 2 compute nodes instead, how do I set that up in the answer file? Or, if I went ahead with just one compute node setup, how can I add another compute node after the setup? Thank you.

    • Hi Jasper,

      You simply need to add the compute IPs to the compute hosts defined in the answers file. For example:
      CONFIG_COMPUTE_HOSTS=compute1-IP,compute2-IP

      You can then re-run packstack and it should update the environment, adding the additional compute nodes.

      Just a word of caution: packstack is a very basic installer, and if you start changing your OpenStack configuration, especially networking, re-running packstack may not work. This is why, if you are going to run a production OpenStack environment, you should move away from packstack / RDO and use Red Hat OpenStack Platform. A key component of the OSP platform is OSP director, which handles life-cycle, upgrades, configuration of Ceph and lots more.

      Hope this helps!

      Keith

        • Thanks for the reply Keith. I’m getting the error “Unable to find subnet with name ‘private’” when doing this command – # neutron router-interface-add router1 private – should it be private_subnet instead? I went ahead and used private_subnet.

        Also noticed in my network topology – Router1 under interfaces – router_interface is ACTIVE but router_gateway is DOWN.

        Here is my setup:

        3 ESXi vms
        – controller: 10.1.1.70/24
        – compute: 10.1.1.71/24
        – network: 10.1.1.72/24

        GW / DNS1 = 10.1.1.2

        Under Access & Security, I setup a new group – newsec with rules Ingress ports 22, 443, 80 remote 0.0.0.0/0, ICMP is also added in Ingress.

        My issue is that I cannot ping the instance that I created (which has an IP address 10.1.1.221) from any of my machines on the 10.1.1 network, also true from the cirros instance when pinging the GW 10.1.1.2.

        Appreciate the help.

        -jasper

  3. Hi Keith, I can ping/ssh the launched instance from the network node but not from the controller or compute nodes. How can I access the launched instance (via the floating IP) from my existing external network? Thanks.

    • I have not tried but this should work. If anything doesn’t work it would be due to some changes in the RDO installer, but the steps after install should work 100%. Let me know if you encounter any issues.

      • Hi Keith,

        Wanted to use 2 NICs (eth0 = Floating IP, eth1= Openstack internal) using vxlan with this multi-node install, what changes should I make in my answers file? Also, I wanted to try this using Mitaka, will this instruction work as well? Thanks.

      • Yes, things should work fine using Mitaka-based RDO.

        I haven’t tried splitting traffic on two interfaces using RDO; I am not sure this is possible. Looking at the answer file it isn’t. It looks like you can separate private and public compute host traffic, but this is not OpenStack internal. If you really want to do this I would look at RHEL OSP director.

        Regards,

        Keith

      • I have three systems in my lab; all are i5 machines with 4 GB RAM and a 500 GB hard disk. I installed Ubuntu 16 LTS on all three machines, connected the three machines with a 6-port switch, and the switch is connected to the college router. I am confused about how to proceed further. Please help me in physically connecting these systems.
