HOWTO: OpenStack Deployment using TripleO and the Red Hat OpenStack Director
Overview
In this article we will look at how to deploy an OpenStack cloud using TripleO, the upstream project behind the Red Hat OpenStack Director. Regardless of which distribution you are using, OpenStack is essentially OpenStack: everyone works from the same code base. The main differences between distributions are which OpenStack projects they include, how they are supported and how they are deployed. Every distribution has its own OpenStack deployment tool, and while deployments naturally differ based on the support decisions each distribution makes, many distributions have created their own proprietary installers. Shouldn't the OpenStack community unite around a common installer? What would be better than using OpenStack to deploy OpenStack? Why should OpenStack administrators have to learn separate proprietary tooling? Why should we be creating unnecessary vendor lock-in around OpenStack's deployment tooling? And installing OpenStack is one thing, but what about upgrades and life-cycle management?
This is the promise of TripleO! The TripleO (OpenStack on OpenStack) project was started to solve these problems and bring unification around OpenStack deployment and, eventually, life-cycle management. It has been a long journey, but the first distribution is finally built on TripleO. Red Hat Enterprise Linux OpenStack Platform 7 has shifted away from Foreman/Puppet and is now based largely on TripleO. Red Hat is bringing its expertise and learning from years of OpenStack deployments and contributing heavily to TripleO.
TripleO Concepts
Before getting into the weeds, we should understand some basic concepts. First, TripleO uses OpenStack to deploy OpenStack. It mainly utilizes Ironic for provisioning and Heat for orchestration; under the hood, Puppet is used for configuration management. TripleO first deploys an OpenStack cloud that is used to deploy other OpenStack clouds. This is referred to as the undercloud. An OpenStack cloud environment deployed from the undercloud is known as an overcloud. The networking requirement is that all systems share a non-routed provisioning network, since TripleO uses PXE to boot and install the initial OS image (bootstrap). A node can take on different roles: in addition to controller and compute, there are roles for Cinder, Ceph or Swift storage. Ceph storage is integrated out of the box, and since most OpenStack deployments use Ceph this is an obvious advantage.
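To make this concrete: once the undercloud is installed (later in this article), you can list the OpenStack services it runs itself, which are the same Nova, Ironic, Heat, Glance and Neutron services any cloud would run. This is just a quick illustration; the exact service list varies by release.

[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack service list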
Environment
In this environment we have the KVM hypervisor host (a laptop), the undercloud (a single VM) and the overcloud (1 x controller, 1 x compute). The undercloud and overcloud are all VMs running on the KVM hypervisor host. The KVM hypervisor host is on the 192.168.122.0/24 network and has an IP of 192.168.122.1. The undercloud runs on a single VM connected to the 192.168.122.0/24 (management) and 192.168.126.0/24 (provisioning) networks, and has an IP address of 192.168.122.90 (eth0). The overcloud is on the 192.168.126.0/24 (provisioning) and 192.168.125.0/24 (external) networks. This is a very simple network configuration; in a real production environment the overcloud would use many more networks.
Deploying Undercloud
In this section we will configure the undercloud. Normally you would deploy OpenStack nodes on bare metal, but since this setup is designed to run on a laptop or in a lab, we are using KVM virtualization. Before beginning, install RHEL or CentOS 7.1 on your KVM hypervisor.
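Since the undercloud and overcloud are themselves VMs, the overcloud nodes end up running nested under KVM. Before going further, it is worth checking that the laptop's CPU exposes hardware virtualization and that nested KVM is enabled (a quick check assuming an Intel CPU; on AMD look at the kvm_amd module and the svm flag instead). The second command should print Y.

ktenzer# egrep -c '(vmx|svm)' /proc/cpuinfo
ktenzer# cat /sys/module/kvm_intel/parameters/nested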
Disable NetworkManager.
undercloud# systemctl stop NetworkManager
undercloud# systemctl disable NetworkManager
Enable port forwarding.
undercloud# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
undercloud# sysctl -p /etc/sysctl.conf
Ensure hostname is static.
undercloud# hostnamectl set-hostname undercloud.lab.com
undercloud# systemctl restart network
Register with subscription manager and enable the appropriate repositories for RHEL.
undercloud# subscription-manager register
undercloud# subscription-manager list --available
undercloud# subscription-manager attach --pool=8a85f9814f2c669b014f3b872de132b5
undercloud# subscription-manager repos --disable=*
undercloud# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-7.0-rpms --enable=rhel-7-server-openstack-7.0-director-rpms
Perform a yum update and reboot the system.
undercloud# yum update -y && reboot
Install facter and ensure hostname is set properly in /etc/hosts.
undercloud# yum install facter -y
undercloud# ipaddr=$(facter ipaddress_eth0)
undercloud# echo -e "$ipaddr\t\tundercloud.lab.com\tundercloud" >> /etc/hosts
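A quick sanity check that the static hostname now resolves locally:

undercloud# getent hosts undercloud.lab.com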
Install TripleO packages.
undercloud# yum install python-rdomanager-oscplugin -y
Create a stack user.
undercloud# useradd stack
undercloud# echo "redhat" | passwd stack --stdin
undercloud# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
undercloud# chmod 0440 /etc/sudoers.d/stack
undercloud# su - stack
Determine the network settings for the undercloud. At minimum you need two networks: one for provisioning and one external network for the overcloud. In this case we have the undercloud provisioning network 192.168.126.0/24 and the overcloud external network 192.168.125.0/24.
[stack@undercloud ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@undercloud ~]$ vi ~/undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.10
undercloud_admin_vip = 192.168.126.11
local_interface = eth1
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.120
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
discovery_iprange = 192.168.126.130,192.168.126.150
[auth]
Install the undercloud.
[stack@undercloud ~]$ openstack undercloud install
#############################################################################
instack-install-undercloud complete.
The file containing this installation's passwords is at /home/stack/undercloud-passwords.conf.
There is also a stackrc file at /home/stack/stackrc.
These files are needed to interact with the OpenStack services, and should be secured.
#############################################################################
Verify undercloud.
[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack catalog show nova
+-----------+------------------------------------------------------------------------------+
| Field     | Value                                                                        |
+-----------+------------------------------------------------------------------------------+
| endpoints | regionOne                                                                    |
|           |   publicURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a   |
|           |   internalURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a |
|           |   adminURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a    |
|           |                                                                              |
| name      | nova                                                                         |
| type      | compute                                                                      |
+-----------+------------------------------------------------------------------------------+
Deploying Overcloud
As mentioned, the overcloud is a separate cloud from the undercloud; they share no resources other than the provisioning network. The terms "over" and "under" sometimes confuse people into thinking the overcloud sits on top of the undercloud from a networking perspective. That is of course not the case: the two clouds sit side by side, and "over" and "under" really describe a logical relationship between them. We will do a minimal overcloud deployment of 1 x controller and 1 x compute.
Create a directory for storing the images. These are the images used by Ironic to provision an OpenStack node.
[stack@undercloud]$ mkdir ~/images
Download images from https://access.redhat.com/downloads/content/191/ver=7/rhel---7/7/x86_64/product-downloads and copy to ~/images.
[stack@undercloud images]$ ls -l
total 2307076
-rw-r-----. 1 stack stack  61419520 Oct 12 16:11 deploy-ramdisk-ironic-7.1.0-39.tar
-rw-r-----. 1 stack stack 155238400 Oct 12 16:11 discovery-ramdisk-7.1.0-39.tar
-rw-r-----. 1 stack stack 964567040 Oct 12 16:12 overcloud-full-7.1.0-39.tar
Extract image tarballs.
[stack@undercloud ~]$ cd ~/images
[stack@undercloud images]$ for tarfile in *.tar; do tar -xf $tarfile; done
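After extraction, the images directory should contain the overcloud disk image along with kernel and ramdisk files in addition to the tarballs (roughly overcloud-full.qcow2, overcloud-full.vmlinuz, overcloud-full.initrd plus the deploy and discovery kernel/ramdisk pairs; exact file names may vary by release):

[stack@undercloud images]$ ls -l ~/images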
Upload images to Glance.
[stack@undercloud ~]$ openstack overcloud image upload --image-path /home/stack/images
[stack@undercloud ~]$ openstack image list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 31c01b42-d164-4898-b615-4787c12d3a53 | bm-deploy-ramdisk      |
| e38057f6-24f2-42d1-afae-bb54dead864d | bm-deploy-kernel       |
| f1708a15-5b9b-41ac-8363-ffc9932534f3 | overcloud-full         |
| 318768c2-5300-43cb-939d-44fb7abca7de | overcloud-full-initrd  |
| 28422b76-c37f-4413-b885-cccb24a4611c | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+
Configure DNS for the undercloud. The undercloud system is connected to the 192.168.122.0/24 network, which provides DNS.
[stack@undercloud]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                       |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 532f3344-57ed-4a2f-b438-67a5d60c71fc |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.120"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+
[stack@undercloud ~]$ neutron subnet-update 532f3344-57ed-4a2f-b438-67a5d60c71fc --dns-nameserver 192.168.122.1
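To confirm the nameserver was applied (a quick check):

[stack@undercloud ~]$ neutron subnet-show 532f3344-57ed-4a2f-b438-67a5d60c71fc | grep dns_nameservers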
Since we are in a nested virtual environment, it is necessary to tweak some timeouts.
[stack@undercloud ~]$ sudo su -
undercloud# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-config --set /etc/ironic/ironic.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-service restart nova
undercloud# openstack-service restart ironic
undercloud# exit
Create the provisioning and external networks on the KVM hypervisor host. Ensure that NAT forwarding and DHCP are enabled on the external network. The provisioning network should be non-routable with DHCP disabled; the undercloud will handle DHCP services for the provisioning network.
[ktenzer@ktenzer ~]$ cat > /tmp/external.xml <<EOF
<network>
  <name>external</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.125.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.125.2' end='192.168.125.254'/>
    </dhcp>
  </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/external.xml
[ktenzer@ktenzer ~]$ virsh net-autostart external
[ktenzer@ktenzer ~]$ virsh net-start external
[ktenzer@ktenzer ~]$ cat > /tmp/provisioning.xml <<EOF
<network>
  <name>provisioning</name>
  <ip address='192.168.126.254' netmask='255.255.255.0'>
  </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/provisioning.xml
[ktenzer@ktenzer ~]$ virsh net-autostart provisioning
[ktenzer@ktenzer ~]$ virsh net-start provisioning
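Verify that both networks are active and set to autostart (a quick check):

[ktenzer@ktenzer ~]$ virsh net-list --all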
Create VM hulls in KVM using virsh on hypervisor host. You will need to change the disk path to suit your needs.
ktenzer# cd /home/ktenzer/VirtualMachines
ktenzer# for i in {1..2}; do qemu-img create -f qcow2 -o preallocation=metadata overcloud-node$i.qcow2 60G; done
ktenzer# for i in {1..2}; do virt-install --ram 4096 --vcpus 4 --os-variant rhel7 --disk path=/home/ktenzer/VirtualMachines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-node$i --cpu SandyBridge,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done
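At this point the two overcloud VMs should be defined but shut off; Ironic will power them on over ssh during provisioning. A quick check on the hypervisor:

ktenzer# virsh list --all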
Enable access on the KVM hypervisor host so that Ironic can control the VMs.
ktenzer# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
Copy the stack user's ssh key from the undercloud system to the KVM hypervisor host.
undercloud$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@192.168.122.1
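It is worth verifying that the stack user on the undercloud can now drive libvirt on the hypervisor without a password, since this is exactly what Ironic's pxe_ssh driver will be doing behind the scenes:

[stack@undercloud ~]$ virsh -c qemu+ssh://stack@192.168.122.1/system list --all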
Save the MAC addresses of the VMs' interfaces on the provisioning network. Ironic needs to know which MAC addresses a node has on the provisioning network.
[stack@undercloud ~]$ for i in {1..2}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5};'; done > /tmp/nodes.txt
[stack@undercloud ~]$ cat /tmp/nodes.txt
52:54:00:44:60:2b
52:54:00:ea:e7:2e
Create a JSON file for the Ironic baremetal node configuration. In this case we are configuring two nodes, which are of course the virtual machines we already created. The pm_addr IP is set to the IP of the KVM hypervisor host.
[stack@undercloud ~]$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 1p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 2p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF
Validate JSON file.
[stack@undercloud ~]$ curl -O https://raw.githubusercontent.com/rthallisey/clapper/master/instackenv-validator.py
[stack@undercloud ~]$ python instackenv-validator.py -f instackenv.json
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
DEBUG:__main__:Baremetal IPs are all unique.
DEBUG:__main__:MAC addresses are all unique.
--------------------
SUCCESS: instackenv validator found 0 errors
Add the nodes to Ironic.
[stack@undercloud ~]$ openstack baremetal import --json instackenv.json
List newly added baremetal nodes.
[stack@undercloud ~]$ openstack baremetal list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None          | power off   | available       | False       |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None          | power off   | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
Enable the nodes for baremetal provisioning and inspect the kernel and ramdisk images.
[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ ironic node-show cd620ad0-4563-44a5-8078-531b7f906188 | grep -A1 deploy
| driver_info | {u'ssh_username': u'stack', u'deploy_kernel': u'50125b15-9de3-4f03-bfbb- |
|             | 76e740741b68', u'deploy_ramdisk': u'25b55027-ca57-4f15-babe-             |
|             | 6e14ba7d0b0c', u'ssh_key_contents': u'-----BEGIN RSA PRIVATE KEY-----    |
[stack@undercloud ~]$ openstack image show 50125b15-9de3-4f03-bfbb-76e740741b68
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | 061e63c269d9c5b9a48a23f118c865de     |
| container_format | aki                                  |
| created_at       | 2015-10-12T10:22:38.000000           |
| deleted          | False                                |
| disk_format      | aki                                  |
| id               | 50125b15-9de3-4f03-bfbb-76e740741b68 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bm-deploy-kernel                     |
| owner            | 2ad8c320cf7040ef9ec0440e94238f58     |
| properties       | {}                                   |
| protected        | False                                |
| size             | 5027584                              |
| status           | active                               |
| updated_at       | 2015-10-12T10:22:38.000000           |
+------------------+--------------------------------------+
[stack@undercloud ~]$ openstack image show 25b55027-ca57-4f15-babe-6e14ba7d0b0c
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | eafcb9601b03261a7c608bebcfdff41c     |
| container_format | ari                                  |
| created_at       | 2015-10-12T10:22:38.000000           |
| deleted          | False                                |
| disk_format      | ari                                  |
| id               | 25b55027-ca57-4f15-babe-6e14ba7d0b0c |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bm-deploy-ramdisk                    |
| owner            | 2ad8c320cf7040ef9ec0440e94238f58     |
| properties       | {}                                   |
| protected        | False                                |
| size             | 56355601                             |
| status           | active                               |
| updated_at       | 2015-10-12T10:22:40.000000           |
+------------------+--------------------------------------+

Ironic at this point only supports IPMI booting, and since we are using VMs we need to use the pxe_ssh driver. The following is a workaround to allow that to work.
[stack@undercloud ~]$ sudo su -
undercloud# cat << EOF > /usr/bin/bootif-fix
#!/usr/bin/env bash
while true; do find /httpboot/ -type f ! -iname "kernel" ! -iname "ramdisk" ! -iname "*.kernel" ! -iname "*.ramdisk" -exec sed -i 's|{mac|{net0/mac|g' {} +; done
EOF
undercloud# chmod a+x /usr/bin/bootif-fix
undercloud# cat << EOF > /usr/lib/systemd/system/bootif-fix.service
[Unit]
Description=Automated fix for incorrect iPXE BOOTIF
[Service]
Type=simple
ExecStart=/usr/bin/bootif-fix
[Install]
WantedBy=multi-user.target
EOF
undercloud# systemctl daemon-reload
undercloud# systemctl enable bootif-fix
undercloud# systemctl start bootif-fix
undercloud# exit
Create a new flavor for the baremetal nodes and set the boot option to local.
undercloud$ openstack flavor create --id auto --ram 4096 --disk 58 --vcpus 4 baremetal
undercloud$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" baremetal
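To confirm the flavor properties were applied (a quick check):

undercloud$ openstack flavor show baremetal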
Perform introspection on the baremetal nodes. This discovers each node's hardware properties so the nodes can later be matched to flavors and roles.
[stack@undercloud ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: 79f2a51c-a0f0-436f-9e8a-c082ee61f938
Starting introspection of node: 8ba244fd-5362-45fe-bb6c-5f15f2949912
Waiting for discovery to finish...
Discovery for UUID 79f2a51c-a0f0-436f-9e8a-c082ee61f938 finished successfully.
Discovery for UUID 8ba244fd-5362-45fe-bb6c-5f15f2949912 finished successfully.
Setting manageable nodes to available...
Node 79f2a51c-a0f0-436f-9e8a-c082ee61f938 has been set to available.
Node 8ba244fd-5362-45fe-bb6c-5f15f2949912 has been set to available.
To check the progress of introspection:
[stack@undercloud ~]$ sudo journalctl -f -l -u openstack-ironic-discoverd -u openstack-ironic-discoverd-dnsmasq
List the Ironic baremetal nodes. Nodes should be available if introspection worked.
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None          | power on    | available       | False       |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None          | power on    | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
Deploy overcloud.
[stack@undercloud ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Overcloud Endpoint: http://192.168.126.119:5000/v2.0/
Overcloud Deployed
Check the Heat resource list to monitor the progress of the overcloud deployment.
[stack@undercloud ~]$ heat resource-list -n 5 overcloud
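The deployment takes quite a while. One convenient way to follow it (a simple sketch) is to watch the resource list and filter out everything that has already completed, so only pending or failed resources remain:

[stack@undercloud ~]$ watch -n 30 "heat resource-list -n 5 overcloud | grep -v COMPLETE"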
Once the OS install is complete on the baremetal nodes, you can follow the progress of the OpenStack overcloud configuration.
[stack@undercloud ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 507d1172-fc73-476b-960f-1d9bf7c1c270 | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.103 |
| ff0e5e15-5bb8-4c77-81c3-651588802ebd | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.102 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
[stack@undercloud ~]$ ssh heat-admin@192.168.126.102
overcloud-controller-0$ sudo -i
overcloud-controller-0# journalctl -f -u os-collect-config
Deploying using the OpenStack Director UI
The overcloud deployment can also be done using the UI. You can even do the preliminary configuration using the CLI and run the deployment from the UI.
We can see exactly what OpenStack services will be configured in the overcloud.
Deployment status is shown, and in the UI it is also possible to see when baremetal nodes have been completely provisioned.
Deployment details are available in the deployment log.
Once deployment is complete using the UI, the overcloud must be initialized.
Upon completion the overcloud is available and can be accessed.
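For example, from the undercloud you can source the overcloudrc file, which the deploy command writes to the stack user's home directory, and query the overcloud directly (a minimal sketch):

[stack@undercloud ~]$ source ~/overcloudrc
[stack@undercloud ~]$ nova service-list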
Summary
In this article we have discussed how OpenStack distributions have taken a proprietary mindset toward their deployment tools, and the need for an OpenStack community-sponsored upstream project responsible for deployment and life-cycle management. That project is TripleO, and Red Hat is the first distribution to ship a deployment tool based on it. Using OpenStack to deploy OpenStack benefits not only the entire community but also administrators and end users. Finally, we have seen how to deploy both the undercloud and the overcloud using TripleO and the Red Hat OpenStack Director. Hopefully you found this article informative and useful. I would be very interested in hearing your feedback on this topic, so please share.
Happy OpenStacking!
(c) 2015 Keith Tenzer