HOWTO: OpenStack Deployment using TripleO and the Red Hat OpenStack Director


Overview

In this article we will look at how to deploy an OpenStack cloud using TripleO, the upstream project behind the Red Hat OpenStack Director. Regardless of which distribution you use, OpenStack is essentially OpenStack: everyone works from the same code base. The main differences between distributions are which OpenStack projects are included, how the distribution is supported and how it is deployed. Every distribution ships its own deployment tool, and deployments naturally differ because of the support decisions each vendor makes. However, many distributions have created proprietary installers. Shouldn't the OpenStack community unite around a common installer? What could be better than using OpenStack to deploy OpenStack? Why should OpenStack administrators have to learn separate proprietary tooling, and why should we create unnecessary vendor lock-in around deployment? Installing OpenStack is one thing, but what about upgrades and life-cycle management?

This is the promise of TripleO! The TripleO (OpenStack on OpenStack) project was started to solve these problems and unify OpenStack deployment as well as, eventually, life-cycle management. It has taken quite some time and been a journey, but the first distribution based on TripleO has finally arrived: Red Hat Enterprise Linux OpenStack Platform 7 has shifted away from Foreman/Puppet and is now based largely on TripleO. Red Hat is bringing its expertise from years of OpenStack deployments and contributing heavily to TripleO.

TripleO Concepts

Before getting into the weeds, we should understand some basic concepts. First, TripleO uses OpenStack to deploy OpenStack. It mainly uses Ironic for provisioning and Heat for orchestration; under the hood, Puppet handles configuration management. TripleO first deploys an OpenStack cloud whose sole job is to deploy other OpenStack clouds. This cloud is referred to as the undercloud. The OpenStack environment deployed from the undercloud is known as the overcloud. The main networking requirement is that all systems share a non-routed provisioning network, since TripleO uses PXE to boot and install the initial OS image (bootstrap). A node can take on different roles: in addition to controller and compute, there are roles for Cinder, Ceph or Swift storage. Ceph integration is built in, and since most OpenStack deployments use Ceph this is an obvious advantage.
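Once an undercloud is up, this division of labor is easy to see for yourself. As a rough illustration (using commands that appear later in this article; output will vary), Ironic tracks the nodes being provisioned while Heat models the overcloud as a stack:

[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ ironic node-list    # nodes Ironic can provision
[stack@undercloud ~]$ heat stack-list     # the overcloud shows up as a Heat stack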

Environment

In this environment we have a KVM hypervisor host (a laptop), the undercloud (a single VM) and the overcloud (1 x controller, 1 x compute). The undercloud and overcloud VMs all run on the KVM hypervisor host. The hypervisor host sits on the 192.168.122.0/24 network with the IP 192.168.122.1. The undercloud VM is connected to the 192.168.122.0/24 management network and the 192.168.126.0/24 provisioning network, with the address 192.168.122.90 on eth0. The overcloud is on the 192.168.126.0/24 (provisioning) and 192.168.125.0/24 (external) networks. This is a very simple network configuration; a real production environment would use many more networks in the overcloud.

[Diagram: OSP 7 network architecture]

Deploying Undercloud

In this section we will configure the undercloud. Normally you would deploy OpenStack nodes on bare metal, but since this walkthrough is designed to run on a laptop or in a lab, we are using KVM virtualization. Before beginning, install RHEL or CentOS 7.1 on your KVM hypervisor.

Disable NetworkManager.

undercloud# systemctl stop NetworkManager
undercloud# systemctl disable NetworkManager

Enable IP forwarding.

undercloud# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
undercloud# sysctl -p /etc/sysctl.conf
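To confirm the setting is active, query the kernel directly:

undercloud# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1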

Ensure hostname is static.

undercloud# hostnamectl set-hostname undercloud.lab.com
undercloud# systemctl restart network

Register with subscription manager and enable the appropriate repositories for RHEL.

undercloud# subscription-manager register
undercloud# subscription-manager list --available
undercloud# subscription-manager attach --pool=8a85f9814f2c669b014f3b872de132b5
undercloud# subscription-manager repos --disable=*
undercloud# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-7.0-rpms --enable=rhel-7-server-openstack-7.0-director-rpms
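As a quick sanity check that only the intended repositories remain enabled:

undercloud# yum repolist enabled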

Perform yum update and reboot system.

undercloud# yum update -y && reboot

Install facter and ensure the hostname is set properly in /etc/hosts.

undercloud# yum install facter -y
undercloud# ipaddr=$(facter ipaddress_eth0)
undercloud# echo -e "$ipaddr\t\tundercloud.lab.com\tundercloud" >> /etc/hosts

Install TripleO packages.

undercloud# yum install python-rdomanager-oscplugin -y 

Create a stack user.

undercloud# useradd stack
undercloud# echo "redhat" | passwd stack --stdin
undercloud# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack undercloud# chmod 0440 /etc/sudoers.d/stack
undercloud# su - stack

Determine the network settings for the undercloud. At minimum you need two networks: one for provisioning and one external network for the overcloud. In this case we have the undercloud provisioning network 192.168.126.0/24 and the overcloud external network 192.168.125.0/24.

[stack@undercloud ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@undercloud ~]$ vi ~/undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.10
undercloud_admin_vip = 192.168.126.11
local_interface = eth1
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.120
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
discovery_iprange = 192.168.126.130,192.168.126.150
[auth]

Install the undercloud.

[stack@undercloud ~]$ openstack undercloud install
#############################################################################
instack-install-undercloud complete.
The file containing this installation's passwords is at /home/stack/undercloud-passwords.conf.
There is also a stackrc file at /home/stack/stackrc.
These files are needed to interact with the OpenStack services, and should be secured.
#############################################################################

Verify undercloud.

[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack catalog show nova
+-----------+------------------------------------------------------------------------------+
| Field     | Value                                                                        |
+-----------+------------------------------------------------------------------------------+
| endpoints | regionOne                                                                    |
|           | publicURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a     |
|           | internalURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a   |
|           | adminURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a      |
|           |                                                                              |
| name      | nova                                                                         |
| type      | compute                                                                      |
+-----------+------------------------------------------------------------------------------+
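Beyond the Nova catalog entry, listing the registered services is a reasonable smoke test that the undercloud components (Ironic, Heat, Glance, Neutron and so on) came up; the exact output depends on your release:

[stack@undercloud ~]$ openstack service list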

Deploying Overcloud

The overcloud is, as mentioned, a separate cloud from the undercloud; they share nothing but the provisioning network. The terms over and under sometimes mislead people into thinking the overcloud sits on top of the undercloud from a networking perspective. That is of course not the case: the clouds sit side by side, and the terms really describe a logical relationship between them. We will do a minimal overcloud deployment, 1 x controller and 1 x compute.

Create a directory for storing the deployment images. These are the images Ironic uses to provision the OpenStack nodes.

[stack@undercloud]$ mkdir ~/images

Download the images from https://access.redhat.com/downloads/content/191/ver=7/rhel---7/7/x86_64/product-downloads and copy them to ~/images.

[stack@undercloud images]$ ls -l
total 2307076
-rw-r-----. 1 stack stack 61419520 Oct 12 16:11 deploy-ramdisk-ironic-7.1.0-39.tar
-rw-r-----. 1 stack stack 155238400 Oct 12 16:11 discovery-ramdisk-7.1.0-39.tar
-rw-r-----. 1 stack stack 964567040 Oct 12 16:12 overcloud-full-7.1.0-39.tar

Extract image tarballs.

[stack@undercloud ~]$ cd ~/images
[stack@undercloud images]$ for tarfile in *.tar; do tar -xf $tarfile; done

Upload images to Glance.

[stack@undercloud ~]$ openstack overcloud image upload --image-path /home/stack/images
[stack@undercloud ~]$ openstack image list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 31c01b42-d164-4898-b615-4787c12d3a53 | bm-deploy-ramdisk      |
| e38057f6-24f2-42d1-afae-bb54dead864d | bm-deploy-kernel       |
| f1708a15-5b9b-41ac-8363-ffc9932534f3 | overcloud-full         |
| 318768c2-5300-43cb-939d-44fb7abca7de | overcloud-full-initrd  |
| 28422b76-c37f-4413-b885-cccb24a4611c | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+

Configure DNS for the undercloud's provisioning subnet. The undercloud system is connected to the 192.168.122.0/24 network, which provides DNS.

[stack@undercloud ~]$ neutron subnet-list
+--------------------------------------+------+------------------+---------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                        |
+--------------------------------------+------+------------------+---------------------------------------------------------+
| 532f3344-57ed-4a2f-b438-67a5d60c71fc |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.120"} |
+--------------------------------------+------+------------------+---------------------------------------------------------+
[stack@undercloud ~]$ neutron subnet-update 532f3344-57ed-4a2f-b438-67a5d60c71fc --dns-nameserver 192.168.122.1
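To verify the nameserver was applied (substitute your own subnet UUID):

[stack@undercloud ~]$ neutron subnet-show 532f3344-57ed-4a2f-b438-67a5d60c71fc | grep dns_nameservers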

Since we are in a nested virtual environment, it is necessary to increase some timeouts.

[stack@undercloud ~]$ sudo su -
undercloud# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-config --set /etc/ironic/ironic.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-service restart nova 
undercloud# openstack-service restart ironic
undercloud# exit
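If you want to double-check the values before moving on, openstack-config can read them back as well (run as root, like the commands above):

undercloud# openstack-config --get /etc/nova/nova.conf DEFAULT rpc_response_timeout
undercloud# openstack-config --get /etc/ironic/ironic.conf DEFAULT rpc_response_timeout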

Create the provisioning and external networks on the KVM hypervisor host. Ensure NAT forwarding and DHCP are enabled on the external network. The provisioning network should be non-routable with DHCP disabled; the undercloud will provide DHCP services on the provisioning network.

[ktenzer@ktenzer ~]$ cat > /tmp/external.xml <<EOF
<network>
   <name>external</name>
   <forward mode='nat'>
      <nat> <port start='1024' end='65535'/>
      </nat>
   </forward>
   <ip address='192.168.125.1' netmask='255.255.255.0'>
      <dhcp> <range start='192.168.125.2' end='192.168.125.254'/>
      </dhcp>
   </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/external.xml
[ktenzer@ktenzer ~]$ virsh net-autostart external
[ktenzer@ktenzer ~]$ virsh net-start external
[ktenzer@ktenzer ~]$ cat > /tmp/provisioning.xml <<EOF
<network>
   <name>provisioning</name>
   <ip address='192.168.126.254' netmask='255.255.255.0'>
   </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/provisioning.xml
[ktenzer@ktenzer ~]$ virsh net-autostart provisioning
[ktenzer@ktenzer ~]$ virsh net-start provisioning
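Both networks should now be listed as active and set to autostart:

[ktenzer@ktenzer ~]$ virsh net-list --all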

Create the VM hulls (empty virtual machines) in KVM using virsh on the hypervisor host. You will need to change the disk path to suit your environment.

ktenzer# cd /home/ktenzer/VirtualMachines
ktenzer# for i in {1..2}; do qemu-img create -f qcow2 -o preallocation=metadata overcloud-node$i.qcow2 60G; done
ktenzer# for i in {1..2}; do virt-install --ram 4096 --vcpus 4 --os-variant rhel7 --disk path=/home/ktenzer/VirtualMachines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-node$i --cpu SandyBridge,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done

Enable access on KVM hypervisor host so that Ironic can control VMs.

ktenzer# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

Copy the stack user's ssh key from the undercloud system to the KVM hypervisor host.

undercloud$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@192.168.122.1
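Since Ironic's pxe_ssh driver will power the VMs on and off over this channel, it is worth testing the connection the same way Ironic will use it:

[stack@undercloud ~]$ virsh -c qemu+ssh://stack@192.168.122.1/system list --all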

Save the MAC addresses of the VMs' provisioning network interfaces. Ironic needs to know which MAC addresses a node has on the provisioning network.

[stack@undercloud ~]$ for i in {1..2}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5};'; done > /tmp/nodes.txt
[stack@undercloud ~]$ cat /tmp/nodes.txt
52:54:00:44:60:2b
52:54:00:ea:e7:2e

Create a JSON file for the Ironic baremetal node configuration. In this case we are configuring two nodes, which are of course the virtual machines we created earlier. The pm_addr IP is set to the IP of the KVM hypervisor host.

[stack@undercloud ~]$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 1p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 2p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF
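One caveat worth mentioning: jq only accepts valid JSON, and ~/.ssh/id_rsa contains literal newlines, so the $(cat ...) substitutions above can trip it up. If jq rejects the file, one workaround (a sketch, not part of the original recipe) is to fold the key onto a single \n-escaped line first and substitute a variable instead:

[stack@undercloud ~]$ key=$(awk '{printf "%s\\n", $0}' ~/.ssh/id_rsa)

Then reference "$key" in place of the $(cat ~/.ssh/id_rsa) substitutions in the heredoc.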

Validate the JSON file.

[stack@undercloud ~]$ curl -O https://raw.githubusercontent.com/rthallisey/clapper/master/instackenv-validator.py
[stack@undercloud ~]$ python instackenv-validator.py -f instackenv.json
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
DEBUG:__main__:Baremetal IPs are all unique.
DEBUG:__main__:MAC addresses are all unique.

--------------------
SUCCESS: instackenv validator found 0 errors

Add the nodes to Ironic.

[stack@undercloud ~]$ openstack baremetal import --json instackenv.json

List newly added baremetal nodes.

[stack@undercloud ~]$ openstack baremetal list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None          | power off   | available       | False       |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None          | power off   | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+

Enable the nodes for baremetal provisioning and inspect the deploy kernel and ramdisk images.

[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ ironic node-show cd620ad0-4563-44a5-8078-531b7f906188 | grep -A1 deploy

| driver_info | {u'ssh_username': u'stack', u'deploy_kernel': u'50125b15-9de3-4f03-bfbb- |
| | 76e740741b68', u'deploy_ramdisk': u'25b55027-ca57-4f15-babe- |
| | 6e14ba7d0b0c', u'ssh_key_contents': u'-----BEGIN RSA PRIVATE KEY----- |
[stack@undercloud ~]$ openstack image show 50125b15-9de3-4f03-bfbb-76e740741b68
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | 061e63c269d9c5b9a48a23f118c865de     |
| container_format | aki                                  |
| created_at       | 2015-10-12T10:22:38.000000           |
| deleted          | False                                |
| disk_format      | aki                                  |
| id               | 50125b15-9de3-4f03-bfbb-76e740741b68 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bm-deploy-kernel                     |
| owner            | 2ad8c320cf7040ef9ec0440e94238f58     |
| properties       | {}                                   |
| protected        | False                                |
| size             | 5027584                              |
| status           | active                               |
| updated_at       | 2015-10-12T10:22:38.000000           |
+------------------+--------------------------------------+
[stack@undercloud ~]$ openstack image show 25b55027-ca57-4f15-babe-6e14ba7d0b0c
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | eafcb9601b03261a7c608bebcfdff41c     |
| container_format | ari                                  |
| created_at       | 2015-10-12T10:22:38.000000           |
| deleted          | False                                |
| disk_format      | ari                                  |
| id               | 25b55027-ca57-4f15-babe-6e14ba7d0b0c |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bm-deploy-ramdisk                    |
| owner            | 2ad8c320cf7040ef9ec0440e94238f58     |
| properties       | {}                                   |
| protected        | False                                |
| size             | 56355601                             |
| status           | active                               |
| updated_at       | 2015-10-12T10:22:40.000000           |
+------------------+--------------------------------------+
Ironic at this point only supports IPMI for booting nodes; since we are using VMs, we rely on the pxe_ssh driver instead. The following workaround makes booting with pxe_ssh work by rewriting the BOOTIF parameter in the generated iPXE configuration.

[stack@undercloud ~]$ sudo su -
undercloud# cat << EOF > /usr/bin/bootif-fix
#!/usr/bin/env bash

while true;
        do find /httpboot/ -type f ! -iname "kernel" ! -iname "ramdisk" ! -iname "*.kernel" ! -iname "*.ramdisk" -exec sed -i 's|{mac|{net0/mac|g' {} +;
done
EOF

undercloud# chmod a+x /usr/bin/bootif-fix
undercloud# cat << EOF > /usr/lib/systemd/system/bootif-fix.service
[Unit]
Description=Automated fix for incorrect iPXE BOOTIF

[Service]
Type=simple
ExecStart=/usr/bin/bootif-fix

[Install]
WantedBy=multi-user.target
EOF

undercloud# systemctl daemon-reload
undercloud# systemctl enable bootif-fix
undercloud# systemctl start bootif-fix
undercloud# exit

Create a new flavor for the baremetal nodes and set the boot option to local.

undercloud$ openstack flavor create --id auto --ram 4096 --disk 58 --vcpus 4 baremetal
undercloud$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" baremetal
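Confirm the flavor and its properties before moving on:

undercloud$ openstack flavor show baremetal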

Perform introspection on the baremetal nodes. This discovers their hardware properties, which are used for scheduling and for matching nodes to roles.

[stack@undercloud ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: 79f2a51c-a0f0-436f-9e8a-c082ee61f938
Starting introspection of node: 8ba244fd-5362-45fe-bb6c-5f15f2949912
Waiting for discovery to finish...
Discovery for UUID 79f2a51c-a0f0-436f-9e8a-c082ee61f938 finished successfully.
Discovery for UUID 8ba244fd-5362-45fe-bb6c-5f15f2949912 finished successfully.
Setting manageable nodes to available...
Node 79f2a51c-a0f0-436f-9e8a-c082ee61f938 has been set to available.
Node 8ba244fd-5362-45fe-bb6c-5f15f2949912 has been set to available.

To check the progress of introspection:

[stack@undercloud ~]$ sudo journalctl -f -l -u openstack-ironic-discoverd -u openstack-ironic-discoverd-dnsmasq

List the Ironic baremetal nodes. Nodes should be available if introspection worked.

[stack@undercloud ~]$ ironic node-list 
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None          | power on    | available       | False       |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None          | power on    | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+

Deploy overcloud.

[stack@undercloud ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Overcloud Endpoint: http://192.168.126.119:5000/v2.0/
Overcloud Deployed

Check the status of the Heat resources to monitor the progress of the overcloud deployment.

[stack@undercloud ~]$ heat resource-list -n 5 overcloud
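Deployment takes a while. A simple way to watch it converge, for example, is to poll for resources that are not yet complete (adjust the interval to taste):

[stack@undercloud ~]$ watch -n 30 "heat resource-list -n 5 overcloud | grep -v COMPLETE"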

Once the OS installation completes on the baremetal nodes, you can follow the progress of the OpenStack overcloud configuration.

[stack@undercloud ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+-------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                |
+--------------------------------------+------------------------+--------+------------+-------------+-------------------------+
| 507d1172-fc73-476b-960f-1d9bf7c1c270 | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.103|
| ff0e5e15-5bb8-4c77-81c3-651588802ebd | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.102|
+--------------------------------------+------------------------+--------+------------+-------------+-------------------------+
[stack@undercloud ~]$ ssh heat-admin@192.168.126.102
overcloud-controller-0$ sudo -i
overcloud-controller-0# journalctl -f -u os-collect-config
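When the deploy command returns, it also writes an overcloudrc credentials file to the stack user's home directory on the undercloud. Back on the undercloud, sourcing it is a quick way to confirm the new cloud answers on its own endpoint:

[stack@undercloud ~]$ source ~/overcloudrc
[stack@undercloud ~]$ openstack service list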

Deploying using the OpenStack Director UI

The overcloud deployment can also be performed from the UI. You can even do the preliminary configuration using the CLI and then run the deployment from the UI.

[Screenshot: OSP 7 Director initialization]

We can see exactly what OpenStack services will be configured in the overcloud.

[Screenshot: OSP 7 Director deployment]

Deployment status is shown, and in the UI it is also possible to see when the baremetal nodes have been completely provisioned.

[Screenshot: OSP 7 Director deployment progress]

Deployment details are available in the deployment log.

[Screenshot: OSP 7 Director deployment log]

Once deployment is complete using the UI, the overcloud must be initialized.

[Screenshot: OSP 7 Director overcloud initialization]

Upon completion the overcloud is available and can be accessed.

[Screenshot: OSP 7 Director deployment complete]

Summary

In this article we discussed how OpenStack distributions have taken a proprietary mindset with regard to their deployment tools, and why the community needs an upstream project responsible for deployment and life-cycle management. That project is TripleO, and Red Hat is the first distribution to ship a deployment tool based on it. Using OpenStack to deploy OpenStack benefits not only the entire community but also administrators and end users. Finally, we saw how to deploy both the undercloud and the overcloud using TripleO and the Red Hat OpenStack Director. Hopefully you found this article informative and useful. I would be very interested in hearing your feedback on this topic, so please share.

Happy OpenStacking!

(c) 2015 Keith Tenzer

