HOWTO: OpenStack Deployment using TripleO and the Red Hat OpenStack Director


Overview

In this article we will look at how to deploy an OpenStack cloud using TripleO, the upstream project behind the Red Hat OpenStack Director. Regardless of which OpenStack distribution you are using, OpenStack is essentially OpenStack: everyone works from the same code base. The main differences between distributions are which OpenStack projects are included, how the distribution is supported and how it is deployed. Every distribution has its own deployment tool, and while deployments naturally differ based on the support decisions each distribution makes, many distributions have created proprietary installers. Shouldn't the OpenStack community unite around a common installer? What could be better than using OpenStack to deploy OpenStack? Why should OpenStack administrators have to learn separate proprietary tooling, and why should we create unnecessary vendor lock-in around deployment? Installing OpenStack is one thing, but what about upgrades and life-cycle management?

This is the promise of TripleO! The TripleO (OpenStack on OpenStack) project was started to solve these problems and bring unification to OpenStack deployment and, eventually, life-cycle management. It has been a long journey, but the first distribution is finally shipping on TripleO: Red Hat Enterprise Linux OpenStack Platform 7 has shifted away from Foreman/Puppet and is now based largely on TripleO. Red Hat is bringing the expertise it has gained over the past years of OpenStack deployments and is contributing heavily to TripleO.

TripleO Concepts

Before getting into the weeds, we should understand some basic concepts. First, TripleO uses OpenStack to deploy OpenStack: it mainly uses Ironic for provisioning and Heat for orchestration, with Puppet handling configuration management under the hood. TripleO first deploys an OpenStack cloud that is used to deploy other OpenStack clouds; this is referred to as the undercloud. The OpenStack cloud environment deployed from the undercloud is known as the overcloud. The main networking requirement is that all systems share a non-routed provisioning network, and TripleO uses PXE to boot and install the initial OS image (bootstrap). Nodes can take on different roles: in addition to controller and compute, you can have nodes for Cinder, Ceph or Swift storage. Ceph storage is integrated out of the box, and since most OpenStack deployments use Ceph this is an obvious advantage.

Environment

In this environment we have the KVM hypervisor host (a laptop), the undercloud (a single VM) and the overcloud (1 x controller, 1 x compute). The undercloud and overcloud are all VMs running on the KVM hypervisor host. The KVM hypervisor host is on the 192.168.122.0/24 network and has the IP 192.168.122.1. The undercloud runs on a single VM connected to the 192.168.122.0/24 (management) network and the 192.168.126.0/24 (provisioning) network, and has the IP address 192.168.122.90 (eth0). The overcloud is on the 192.168.126.0/24 (provisioning) and 192.168.125.0/24 (external) networks. This is a very simple network configuration; in a real production environment many more networks would be used in the overcloud.

OSP_7_Network_Architecture

Deploying Undercloud

In this section we will configure the undercloud. Normally you would deploy OpenStack nodes on bare metal, but since this setup is designed to run on a laptop or in a lab, we are using KVM virtualization. Before beginning, install RHEL or CentOS 7.1 on the KVM hypervisor host and create a RHEL/CentOS 7.1 VM for the undercloud; unless noted otherwise, the commands below prefixed with undercloud# are run inside the undercloud VM.

Disable NetworkManager.

undercloud# systemctl stop NetworkManager
undercloud# systemctl disable NetworkManager

Enable port forwarding.

undercloud# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
undercloud# sysctl -p /etc/sysctl.conf
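
To verify that forwarding is now active:

undercloud# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1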

Ensure hostname is static.

undercloud# hostnamectl set-hostname undercloud.lab.com
undercloud# systemctl restart network

Register to subscription manager and enable appropriate repositories for RHEL.

undercloud# subscription-manager register
undercloud# subscription-manager list --available
undercloud# subscription-manager attach --pool=8a85f9814f2c669b014f3b872de132b5
undercloud# subscription-manager repos --disable=*
undercloud# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-7.0-rpms --enable=rhel-7-server-openstack-7.0-director-rpms

Perform yum update and reboot system.

undercloud# yum update -y && reboot

Install facter and ensure hostname is set properly in /etc/hosts.

undercloud# yum install facter -y
undercloud# ipaddr=$(facter ipaddress_eth0)
undercloud# echo -e "$ipaddr\t\tundercloud.lab.com\tundercloud" >> /etc/hosts
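
Verify the name now resolves locally:

undercloud# getent hosts undercloud.lab.com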

Install TripleO packages.

undercloud# yum install python-rdomanager-oscplugin -y 

Create a stack user.

undercloud# useradd stack
undercloud# echo "redhat" | passwd stack --stdin
undercloud# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack undercloud# chmod 0440 /etc/sudoers.d/stack
undercloud# su - stack

Determine the network settings for the undercloud. At a minimum you need two networks: one for provisioning and one external network for the overcloud. In this case we have the undercloud provisioning network 192.168.126.0/24 and the overcloud external network 192.168.125.0/24.

[stack@undercloud ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@undercloud ~]$ vi ~/undercloud.conf
 [DEFAULT]
 local_ip = 192.168.126.1/24
 undercloud_public_vip = 192.168.126.10
 undercloud_admin_vip = 192.168.126.11
 local_interface = eth1
 masquerade_network = 192.168.126.0/24
 dhcp_start = 192.168.126.100
 dhcp_end = 192.168.126.120
 network_cidr = 192.168.126.0/24
 network_gateway = 192.168.126.1
 discovery_iprange = 192.168.126.130,192.168.126.150
 [auth]

Install the undercloud.

[stack@undercloud ~]$ openstack undercloud install
#############################################################################
instack-install-undercloud complete.
The file containing this installation's passwords is at /home/stack/undercloud-passwords.conf.
There is also a stackrc file at /home/stack/stackrc.
These files are needed to interact with the OpenStack services, and should be secured.
#############################################################################

Verify undercloud.

 [stack@undercloud ~]$ source ~/stackrc
 [stack@undercloud ~]$ openstack catalog show nova
 +-----------+------------------------------------------------------------------------------+
 | Field | Value |
 +-----------+------------------------------------------------------------------------------+
 | endpoints | regionOne                                                                    |
 |           | publicURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a     |
 |           | internalURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a   |
 |           | adminURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a      |
 |           |                                                                              |
 | name      | nova                                                                         |
 | type      | compute                                                                      |
 +-----------+------------------------------------------------------------------------------+
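
If the catalog entry looks correct, you can optionally list the full service catalog as well (assuming your openstack client version provides the catalog list subcommand):

[stack@undercloud ~]$ openstack catalog list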

Deploying Overcloud

The overcloud is, as mentioned, a separate cloud from the undercloud; they do not share any resources other than the provisioning network. The terms "over" and "under" sometimes confuse people into thinking the overcloud sits on top of the undercloud from a networking perspective. This is of course not the case; in reality the two clouds sit side by side, and "over" and "under" refer only to the logical relationship between them. We will do a minimal overcloud deployment: 1 x controller and 1 x compute.

Create a directory for storing the deployment images. These are the images Ironic uses to provision the overcloud nodes.

[stack@undercloud]$ mkdir ~/images

Download images from https://access.redhat.com/downloads/content/191/ver=7/rhel—7/7/x86_64/product-downloads and copy to ~/images.

[stack@undercloud images]$ ls -l
total 2307076
-rw-r-----. 1 stack stack 61419520 Oct 12 16:11 deploy-ramdisk-ironic-7.1.0-39.tar
-rw-r-----. 1 stack stack 155238400 Oct 12 16:11 discovery-ramdisk-7.1.0-39.tar
-rw-r-----. 1 stack stack 964567040 Oct 12 16:12 overcloud-full-7.1.0-39.tar

Extract image tarballs.

[stack@undercloud ~]$ cd ~/images
[stack@undercloud ~]$ for tarfile in *.tar; do tar -xf $tarfile; done
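
After extraction the directory should also contain the unpacked kernel, ramdisk and overcloud image files (for example overcloud-full.qcow2) alongside the tarballs:

[stack@undercloud images]$ ls -l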

Upload images to Glance.

[stack@undercloud ~]$ openstack overcloud image upload --image-path /home/stack/images
[stack@undercloud ~]$ openstack image list
 +--------------------------------------+------------------------+
 | ID | Name |
 +--------------------------------------+------------------------+
 | 31c01b42-d164-4898-b615-4787c12d3a53 | bm-deploy-ramdisk |
 | e38057f6-24f2-42d1-afae-bb54dead864d | bm-deploy-kernel |
 | f1708a15-5b9b-41ac-8363-ffc9932534f3 | overcloud-full |
 | 318768c2-5300-43cb-939d-44fb7abca7de | overcloud-full-initrd |
 | 28422b76-c37f-4413-b885-cccb24a4611c | overcloud-full-vmlinuz |
 +--------------------------------------+------------------------+

Configure DNS on the provisioning subnet managed by the undercloud. The undercloud system is connected to the 192.168.122.0/24 network, where 192.168.122.1 provides DNS.

[stack@undercloud]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id | name | cidr | allocation_pools |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 532f3344-57ed-4a2f-b438-67a5d60c71fc | | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.120"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+
[stack@undercloud ~]$ neutron subnet-update 532f3344-57ed-4a2f-b438-67a5d60c71fc --dns-nameserver 192.168.122.1
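
Verify the nameserver was added to the subnet:

[stack@undercloud ~]$ neutron subnet-show 532f3344-57ed-4a2f-b438-67a5d60c71fc | grep dns_nameservers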

Since we are in a nested virtual environment, it is necessary to increase some timeouts.

undercloud# sudo su -
undercloud# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-config --set /etc/ironic/ironic.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-service restart nova 
undercloud# openstack-service restart ironic
undercloud# exit

Create the provisioning and external networks on the KVM hypervisor host. Ensure that NAT forwarding and DHCP are enabled on the external network. The provisioning network should be non-routed with DHCP disabled; the undercloud will handle DHCP services for the provisioning network.

[ktenzer@ktenzer ~]$ cat > /tmp/external.xml <<EOF
<network>
   <name>external</name>
   <forward mode='nat'>
      <nat> <port start='1024' end='65535'/>
      </nat>
   </forward>
   <ip address='192.168.125.1' netmask='255.255.255.0'>
      <dhcp> <range start='192.168.125.2' end='192.168.125.254'/>
      </dhcp>
   </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/external.xml
[ktenzer@ktenzer ~]$ virsh net-autostart external
[ktenzer@ktenzer ~]$ virsh net-start external
[ktenzer@ktenzer ~]$ cat > /tmp/provisioning.xml <<EOF
<network>
   <name>provisioning</name>
   <ip address='192.168.126.254' netmask='255.255.255.0'>
   </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/provisioning.xml
[ktenzer@ktenzer ~]$ virsh net-autostart provisioning
[ktenzer@ktenzer ~]$ virsh net-start provisioning
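
Verify that both virtual networks are defined, active and set to autostart:

[ktenzer@ktenzer ~]$ virsh net-list --all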

Create the VM hulls in KVM on the hypervisor host using virt-install and virsh. You will need to change the disk path to suit your environment.

ktenzer# cd /home/ktenzer/VirtualMachines
ktenzer# for i in {1..2}; do qemu-img create -f qcow2 -o preallocation=metadata overcloud-node$i.qcow2 60G; done
ktenzer# for i in {1..2}; do virt-install --ram 4096 --vcpus 4 --os-variant rhel7 --disk path=/home/ktenzer/VirtualMachines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-node$i --cpu SandyBridge,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done
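
Verify that both VMs were defined (they should show as shut off at this point):

ktenzer# virsh list --all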

Enable access on KVM hypervisor host so that Ironic can control VMs.

ktenzer# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF

Copy the stack user's SSH public key from the undercloud system to the KVM hypervisor host (the stack user must also exist on the hypervisor host).

undercloud$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@192.168.122.1
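
If the stack user on the undercloud does not yet have an SSH key pair, generate one first and rerun the command above. Afterwards, confirm that the undercloud can control libvirt on the hypervisor remotely, since this is exactly what Ironic's pxe_ssh driver will do:

undercloud$ [ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
undercloud$ virsh -c qemu+ssh://stack@192.168.122.1/system list --all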

Save the MAC addresses of the VMs' provisioning-network interfaces. Ironic needs to know which MAC addresses a node has on the provisioning network.

[stack@undercloud ~]$ for i in {1..2}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5};'; done > /tmp/nodes.txt
[stack@undercloud ~]$ cat /tmp/nodes.txt
52:54:00:44:60:2b
52:54:00:ea:e7:2e

Create a JSON file for the Ironic baremetal node configuration. In this case we are configuring two nodes, which are of course the virtual machines we already created. The pm_addr IP is set to the IP of the KVM hypervisor host.

[stack@undercloud ~]$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 1p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 2p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF
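
Note that the private key contains newlines, which are not valid inside a JSON string. If jq or the later baremetal import rejects the file, one workaround (based on the Red Hat solution referenced in the comments below) is to escape the newlines before substituting the key, for example:

[stack@undercloud ~]$ ssh_key=$(awk '{printf "%s\\n", $0}' ~/.ssh/id_rsa)

Then use "$ssh_key" in place of "$(cat ~/.ssh/id_rsa)" in the heredoc above.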

Validate JSON file.

[stack@undercloud ~]$ curl -O https://raw.githubusercontent.com/rthallisey/clapper/master/instackenv-validator.py
[stack@undercloud ~]$ python instackenv-validator.py -f instackenv.json
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
DEBUG:__main__:Baremetal IPs are all unique.
DEBUG:__main__:MAC addresses are all unique.

--------------------
SUCCESS: instackenv validator found 0 errors

Add the nodes to Ironic.

[stack@undercloud ~]$ openstack baremetal import --json instackenv.json

List newly added baremetal nodes.

[stack@undercloud ~]$ openstack baremetal list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None | power off | available | False |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None | power off | available | False |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+

Enable the nodes for baremetal provisioning and inspect the kernel and ramdisk deploy images.

[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ ironic node-show cd620ad0-4563-44a5-8078-531b7f906188 | grep -A1 deploy

| driver_info | {u'ssh_username': u'stack', u'deploy_kernel': u'50125b15-9de3-4f03-bfbb- |
| | 76e740741b68', u'deploy_ramdisk': u'25b55027-ca57-4f15-babe- |
| | 6e14ba7d0b0c', u'ssh_key_contents': u'-----BEGIN RSA PRIVATE KEY----- |
[stack@undercloud ~]$ openstack image show 50125b15-9de3-4f03-bfbb-76e740741b68
+------------------+--------------------------------------+
| Field | Value |
+------------------+--------------------------------------+
| checksum | 061e63c269d9c5b9a48a23f118c865de |
| container_format | aki |
| created_at | 2015-10-12T10:22:38.000000 |
| deleted | False |
| disk_format | aki |
| id | 50125b15-9de3-4f03-bfbb-76e740741b68 |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | bm-deploy-kernel |
| owner | 2ad8c320cf7040ef9ec0440e94238f58 |
| properties | {} |
| protected | False |
| size | 5027584 |
| status | active |
| updated_at | 2015-10-12T10:22:38.000000 |
+------------------+--------------------------------------+
[stack@undercloud ~]$ openstack image show 25b55027-ca57-4f15-babe-6e14ba7d0b0c
+------------------+--------------------------------------+
| Field | Value |
+------------------+--------------------------------------+
| checksum | eafcb9601b03261a7c608bebcfdff41c |
| container_format | ari |
| created_at | 2015-10-12T10:22:38.000000 |
| deleted | False |
| disk_format | ari |
| id | 25b55027-ca57-4f15-babe-6e14ba7d0b0c |
| is_public | True |
| min_disk | 0 |
| min_ram | 0 |
| name | bm-deploy-ramdisk |
| owner | 2ad8c320cf7040ef9ec0440e94238f58 |
| properties | {} |
| protected | False |
| size | 56355601 |
| status | active |
| updated_at | 2015-10-12T10:22:40.000000 |
+------------------+--------------------------------------+

Ironic's standard drivers expect IPMI for power management; since we are using VMs, we use the pxe_ssh driver instead, and the following workaround is needed so that the iPXE BOOTIF parameter is set correctly in this setup.

[stack@undercloud ~]$ sudo su -
undercloud# cat << EOF > /usr/bin/bootif-fix
#!/usr/bin/env bash

while true;
        do find /httpboot/ -type f ! -iname "kernel" ! -iname "ramdisk" ! -iname "*.kernel" ! -iname "*.ramdisk" -exec sed -i 's|{mac|{net0/mac|g' {} +;
done
EOF

undercloud# chmod a+x /usr/bin/bootif-fix
undercloud# cat << EOF > /usr/lib/systemd/system/bootif-fix.service
[Unit]
Description=Automated fix for incorrect iPXE BOOTIF

[Service]
Type=simple
ExecStart=/usr/bin/bootif-fix

[Install]
WantedBy=multi-user.target
EOF

undercloud# systemctl daemon-reload
undercloud# systemctl enable bootif-fix
undercloud# systemctl start bootif-fix
undercloud# exit

Create a new flavor for the baremetal nodes and set the boot option to local.

undercloud$ openstack flavor create --id auto --ram 4096 --disk 58 --vcpus 4 baremetal
undercloud$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" baremetal
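
To confirm the flavor and its capabilities were set correctly:

undercloud$ openstack flavor show baremetal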

Perform introspection on the baremetal nodes. This will discover the hardware characteristics of each node so that roles and flavors can be matched.

[stack@undercloud ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: 79f2a51c-a0f0-436f-9e8a-c082ee61f938
Starting introspection of node: 8ba244fd-5362-45fe-bb6c-5f15f2949912
Waiting for discovery to finish...
Discovery for UUID 79f2a51c-a0f0-436f-9e8a-c082ee61f938 finished successfully.
Discovery for UUID 8ba244fd-5362-45fe-bb6c-5f15f2949912 finished successfully.
Setting manageable nodes to available...
Node 79f2a51c-a0f0-436f-9e8a-c082ee61f938 has been set to available.
Node 8ba244fd-5362-45fe-bb6c-5f15f2949912 has been set to available.

To check the progress of introspection:

[stack@undercloud ~]$ sudo journalctl -f -l -u openstack-ironic-discoverd -u openstack-ironic-discoverd-dnsmasq

 

List the Ironic baremetal nodes. Nodes should be available if introspection worked.

[stack@undercloud ~]$ ironic node-list 
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None | power on | available | False |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None | power on | available | False |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+

Deploy overcloud.

[stack@undercloud ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Overcloud Endpoint: http://192.168.126.119:5000/v2.0/
Overcloud Deployed
 

Check the status of the Heat resources to monitor the overcloud deployment.

[stack@undercloud ~]$ heat resource-list -n 5 overcloud
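
You can also watch the overall stack status until it reaches CREATE_COMPLETE:

[stack@undercloud ~]$ heat stack-list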

Once the OS install is complete on the baremetal nodes, you can follow the progress of the OpenStack overcloud configuration.

[stack@undercloud ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+-------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                |
+--------------------------------------+------------------------+--------+------------+-------------+-------------------------+
| 507d1172-fc73-476b-960f-1d9bf7c1c270 | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.103|
| ff0e5e15-5bb8-4c77-81c3-651588802ebd | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.102|
+--------------------------------------+------------------------+--------+------------+-------------+-------------------------+
[stack@undercloud ~]$ ssh heat-admin@192.168.126.102
overcloud-controller-0$ sudo -i
overcloud-controller-0# journalctl -f -u os-collect-config
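
Once the deployment has finished, an overcloudrc file should have been written to the stack user's home directory on the undercloud (the default behavior of the deploy command); source it for a quick sanity check against the overcloud:

[stack@undercloud ~]$ source ~/overcloudrc
[stack@undercloud ~]$ nova service-list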

Deploying using the OpenStack Director UI

The overcloud deployment can also be done using the UI. You can even do the preliminary configuration using the CLI and then run the deployment from the UI.

OSP_7_Director_INitialize

We can see exactly what OpenStack services will be configured in the overcloud.

OSP_7_director_deploy_2

Deployment status is shown, and in the UI it is also possible to see when the baremetal nodes have been completely provisioned.

OSP_7_DIrector_Progress_2

Deployment details are available in the deployment log.

OSP_7_DIRECTOR_deployment_log

Once deployment is complete using the UI, the overcloud must be initialized.

OSP_Director_Initialize

Upon completion the overcloud is available and can be accessed.

OSP_7_director_deploy_complete

Summary

In this article we have discussed how OpenStack distributions have taken a proprietary approach to their deployment tools, and the need for an OpenStack community-sponsored upstream project responsible for deployment and life-cycle management. That project is TripleO, and Red Hat is the first distribution to ship a deployment tool based on it. Using OpenStack to deploy OpenStack benefits not only the entire community but also administrators and end users. Finally, we have seen how to deploy both the undercloud and the overcloud using TripleO and the Red Hat OpenStack Director. Hopefully you found this article informative and useful. I would be very interested in hearing your feedback on this topic, so please share.

Happy OpenStacking!

(c) 2015 Keith Tenzer

55 thoughts on "HOWTO: OpenStack Deployment using TripleO and the Red Hat OpenStack Director"

  1. Hi Keith, Not sure if I am missing anything, but the article seems to be a little confusing.

    By hostname, it appears that you are starting off on the undercloud virtual machine, but the instruction says it is the kvm host. Also , what networks are physical (if any) and what are virtual?

    • Hi,

      Yes there are two hosts involved. The KVM host, this is hosting the VM where the undercloud is running. The undercloud then provisions the overcloud that in turn creates VMs on the KVM host. The KVM host is basically the infrastructure. In my case it is virtualized using KVM. There are two virtual networks involved on the KVM host 192.168.125.0/24 and 192.168.126.0/24. Both of these are simply bridges configured in KVM. Hope this helps?

  2. Hi Keith,

    Thanks for the wonderful article. I am following the same thing in my production environment, where all of them are physical boxes with controller node in a 3-node cluster. All the introspection were successful and when i am doing comparison with ahc tool it throws error and I couldn’t start the overcloud deployment. Also, in the ironic node-list, the power state for all the nodes seems to be “none”. Can you shed some light on the troubleshooting part as well. Below is for your reference.

    [stack@rhxxxxxx01 ~]$ openstack baremetal introspection bulk start
    Setting available nodes to manageable…
    Starting introspection of node: 476229d8-b263-4d8e-b643-4435789ac8c5
    Starting introspection of node: a97a80a6-bbe2-4d2d-b5d7-53d0926d8064
    Starting introspection of node: 39a0798c-7718-4d4d-9c37-209b0dfab479
    Starting introspection of node: 39b53ffc-9ab6-471d-8d8f-4ca63b699712
    Starting introspection of node: 903bc620-3fea-4314-9a12-dd4d6f402876
    Waiting for discovery to finish…
    Discovery for UUID a97a80a6-bbe2-4d2d-b5d7-53d0926d8064 finished successfully.
    Discovery for UUID 39a0798c-7718-4d4d-9c37-209b0dfab479 finished successfully.
    Discovery for UUID 39b53ffc-9ab6-471d-8d8f-4ca63b699712 finished successfully.
    Discovery for UUID 476229d8-b263-4d8e-b643-4435789ac8c5 finished successfully.
    Discovery for UUID 903bc620-3fea-4314-9a12-dd4d6f402876 finished successfully.
    Setting manageable nodes to available…
    Node 476229d8-b263-4d8e-b643-4435789ac8c5 has been set to available.
    Node a97a80a6-bbe2-4d2d-b5d7-53d0926d8064 has been set to available.
    Node 39a0798c-7718-4d4d-9c37-209b0dfab479 has been set to available.
    Node 39b53ffc-9ab6-471d-8d8f-4ca63b699712 has been set to available.
    Node 903bc620-3fea-4314-9a12-dd4d6f402876 has been set to available.
    Discovery completed.
    [stack@rhxxxxxx01 ~]$ ironic node-list
    +————————————–+——+—————+————-+—————–+————-+
    | UUID | Name | Instance UUID | Power State | Provision State | Maintenance |
    +————————————–+——+—————+————-+—————–+————-+
    | 476229d8-b263-4d8e-b643-4435789ac8c5 | None | None | None | available | True |
    | a97a80a6-bbe2-4d2d-b5d7-53d0926d8064 | None | None | None | available | True |
    | 39a0798c-7718-4d4d-9c37-209b0dfab479 | None | None | None | available | True |
    | 39b53ffc-9ab6-471d-8d8f-4ca63b699712 | None | None | None | available | True |
    | 903bc620-3fea-4314-9a12-dd4d6f402876 | None | None | None | available | True |
    +————————————–+——+—————+————-+—————–+————-+
    [stack@rhxxxxxx01 ~]$ openstack flavor list
    +————————————–+———–+——–+——+———–+——-+———–+
    | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is Public |
    +————————————–+———–+——–+——+———–+——-+———–+
    | 2b1bbcba-c578-45d5-955a-1dcdaf74fb95 | compute | 131072 | 300 | 0 | 1 | True |
    | 476a8e94-f49b-488e-9f87-71875bdfd7f9 | baremetal | 4096 | 40 | 0 | 1 | True |
    | 5c915660-c73a-41b7-8703-ef61d1d638d7 | control | 131072 | 300 | 0 | 1 | True |
    +————————————–+———–+——–+——+———–+——-+———–+
    [stack@rhxxxxxx01 ~]$ sudo ahc-match
    ERROR:ahc_tools.match:Failed to match node uuid: 476229d8-b263-4d8e-b643-4435789ac8c5.
    ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
    ERROR:ahc_tools.match:Failed to match node uuid: a97a80a6-bbe2-4d2d-b5d7-53d0926d8064.
    ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
    ERROR:ahc_tools.match:Failed to match node uuid: 39a0798c-7718-4d4d-9c37-209b0dfab479.
    ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
    ERROR:ahc_tools.match:Failed to match node uuid: 39b53ffc-9ab6-471d-8d8f-4ca63b699712.
    ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
    ERROR:ahc_tools.match:Failed to match node uuid: 903bc620-3fea-4314-9a12-dd4d6f402876.
    ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
    ERROR:ahc_tools.match:The following nodes did not match any profiles and will not be updated: 476229d8-b263-4d8e-b643-4435789ac8c5,a97a80a6-bbe2-4d2d-b5d7-53d0926d8064,39a0798c-7718-4d4d-9c37-209b0dfab479,39b53ffc-9ab6-471d-8d8f-4ca63b699712,903bc620-3fea-4314-9a12-dd4d6f402876

    • The power state usually gets set once you provision. Since it is set to available that means the nodes are available for deployment. Once deployed the power state shows. As for AHC I know this is needed in order for director to choose appropriate flavor based on hardware spec of node. I also had issues and could not get this to work. When I get a chance I will revisit and write another article on my findings and addition more info on how to tweak director. For example how to provide your own heat templates for customizing installations.

  3. Hi Keith, how can I login to the Director UI? what ip addr, username/password should I use? Also, after creating the overcloud and launched instances, are they accessible from my existing network? Thank you.

    • Should be reachable via the undercloud_admin_vip or undercloud_public_vip, in this case 192.168.122.10 and 192.168.122.11. The username/password you will find under /root or wherever you ran installer, a file gets created called undercloud_passwords. Once you create overcloud it creates a file called overcloud_passwords. The IP of the overcloud controller you can see be using “nova list” on undercloud, under root there is also an authentification file that you can source for getting access to undercloud. The overcloud runs as VMS inside the undercloud. To ssh to systems you can use ssh heat-admin@ipm from undercloud. The systems should also be accessible to any other hosts on undercloud network.

      • Hi Keith, don’t know if it’s a typo, but shouldn’t be 192.168.126.10 admin_vip) and 192.168.126.11 (public_vip) as entered in ~/undercloud.conf? Thanks.

      • Hi Jasper,

        Not sure I understand issue? The undercloud needs admin and public VIP, these should be on provisioning network. Director by default will put everything on provisioning network. If you need to devide traffic then you need to edit the network heat templates and customize things.

  4. Hey Keith, thank you very much for an excellent post. I have tried following the exact steps and I get stuck in the introspection step. In my case it never finishes…

    After initial activity, the logs show every 10 seconds the following:
    Dec 17 05:37:21 osp7-undercloud.sdnlab.cisco.com ironic-discoverd[589]: INFO:werkzeug:192.168.130.74 – – [17/Dec/2015 05:37:21] “GET /v1/introspection/332f7223-f0f9-4843-a562-076df005b4ba HTTP/1.1” 200 –
    Dec 17 05:37:21 osp7-undercloud.sdnlab.cisco.com ironic-discoverd[589]: INFO:werkzeug:192.168.130.74 – – [17/Dec/2015 05:37:21] “GET /v1/introspection/2e1f6363-e890-4125-95c0-7a4f44c50eaf HTTP/1.1” 200 –

    I am lost what I could have done wrong. Any hints you can give me based on this brief problem description?

    Thanks a lot
    Kali

    • This sounds like a communications problem between ironic and the baremetal nodes. If you followed my instructions you are using VMs under KVM hypervisor. The problem I would guess is firewall related that the ironic host cant communicate with KVM VMs. Were you able to run virsh commands from baremetal host against remote KVM hypervisor? You may need to open libvirt port, it is in the guide. Let me know?

      • I also had hit same issue. Disabling firewall/adding libvirt ports did not help. But rebooting undercloud VM and rerunning introspect step helped me to overcome this said issue

  5. Hi,

    I want to test RDO-manager as a bare metal on KVM(QEMU) host,
    This problem is ssh connection to KVM host.
    When I want import the virtual machine as a bare metal.
    I see and i think that the all information are well on .json file
    Connection between undercloud and KVM host also is well
    [stack@undercloud ~]$ openstack baremetal import –json instackenv.json
    Request returned failure status.
    SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
    Traceback (most recent call last):

    File “/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py”, line 142, in inner
    return func(*args, **kwargs)

    File “/usr/lib/python2.7/site-packages/ironic/conductor/manager.py”, line 439, in change_node_power_state
    task.driver.power.validate(task)

    File “/usr/lib/python2.7/site-packages/ironic/drivers/modules/ssh.py”, line 540, in validate

    InvalidParameterValue: SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1

    Thanks

    • Applogies but I have not tested rdomanager but if you are trying to connect with KVM there are some hacks I believe since Ironic (at least in kilo) did not support talking directly to libvirt. I mention these in my guide you read but if those dont work then only thing I can think of is firewall on KVM host or libvirt security is preventing access. Did you try running libvirt commands remotely from rdomanager?

  6. Hi aleksandarstanisavevski,
    I just ran into the same issue as you and it seems to be “work as designed” – unfortunately -. You need to edit your instackenv.json and add your ssh private key as like in this article: https://access.redhat.com/solutions/1603243
    For a home lab it’s okay as you’re “playing” with that stuff. But for production? … 😉
    I hope that helps…

    Cheers,
    JustAnotherMichael

  7. Hi Keith

    I want to try this in my home, so in this case how should I run it? Is there any free repository I can configure to download these package.

    Many Thanks

    • Hi Cj,

      Yes I would recommend following this guide especially since you will want to use KVM not bare-metal for a lab environment and this guide explains how to do that. You can try RDOmanager that is the community platform for Director but I haven’t tried it. I of course would recommend OSP Director but you need a subscription for that, if you dont have one then RDOmanager would be next best thing. Let me know if there are issues? I can try and help.

  8. Hi Keith,

    OSP 8 was just released, don’t know if you took a look at it already. I wanted to know if there’s any network requirement changes from OSP 7 in terms of baremetal installation. Also, is this guide still valid for OSP8 when using KVM. Thanks!

  9. Can anyone post the results of the following (from control and compute) if they follow this guide to setup and if it is working? My instances can’t ping the externel ips and gateway 192.168.125.1.

    ifconfig
    ovs-vsctl show

    Thanks
    Paras,

    • Hi Paras,

      The 192.168.125.1 network, is this a virtual network? Did you configure this network on KVM side? Assuming the network is OK you can try configuring flat network for external. I have seen issues at least in previous OpenStack deployments using VXLAN.

      neutron net-create external --provider:network_type flat --provider:physical_network physnet-external --router:external=True
      neutron subnet-create external --name external_subnet --allocation-pool start=192.168.125.100,end=192.168.125.200 --disable-dhcp --gateway 192.168.125.1 192.168.125.0/24

      • Hi
        Yes its the vitrual network. I have replicated everything as per this article. Even though there is a vxlan flag when we do overcloud deploy, can we still create flat external network? If I use flat the instance is not getting IP saying “sending discover…” and no ip on the instance’s eth0.
        With vxlan I the instance boots normally.

        Thanks
        Paras

    • Network interfaces look right…what happens if you create a host and add them to these networks, can they ping gateway? You could simply try adding interfaces on undercloud system. I am not sure your issue is within OpenStack…vxlan is just a tunneling protocol and if you want to use floating ips that is what you want.

    • The KVM configuration for the VMs is documented in article, maybe it isnt easy to see. Any command line with ktenzer on it is the hypervisor. Besides installing KVM that is all I changed.

      I also have two iptables rules to allow libvirt, I think you just need 16509
      -A INPUT -m state --state NEW -m tcp -p tcp --dport 16509 -j ACCEPT
      -A INPUT -m state --state NEW -m tcp -p tcp --dport 1883 -j ACCEPT

      Hope this helps?

      Regards,

      Keith

  10. Hey Keith!

    Thank you for the valuable post!

    I had a question:

    When defining the Overcloud VM Hulls, you have defined the network as “provisioning” in the command:
    ===
    # for i in {1..2}; do virt-install --ram 4096 --vcpus 4 --os-variant rhel7 --disk path=/root/virtual_machines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:overcloud --name overcloud-node$i --cpu SandyBridge,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done
    ===

    However, while collecting the MAC Addresses of the hulls for the provisioning network, you are searching for “mgmt” network in the command, and for me, that turned up an empty /tmp/nodes.txt:

    ===
    for i in {1..2}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i | awk '$3 == "mgmt" {print $5};'; done > /tmp/nodes.txt
    ===

    Am I missing something here?

    I changed the “mgmt” to “provisioning” and that helped me get the MAC Addresses associated to the provisioning n/w in the hulls.

    I am a newbie at this, so kindly forgive me if I am pointing out something considered obvious 🙂

  11. Hi,

    I have followed the above steps. I have 3 Vms on a host where one is director and other two are overcloud nodes. I have facing issue while introspection. The overcloud VMs are getting started but unable to get Ip from DHCP. Eventually introspection fails. Can you tell me what might gone wrong?

  12. Hi Keith
    I am using 192.0.2.0/24 as provisioning network. New environment: The KVM hypervisor host is on the 172.16.73.0/24 network and has IP of 172.16.73.136 . The undercloud runs on a single VM on the 172.16.73.0/24 management network and 192.0.2.0/24 (provisioning) netowrk. The undercloud has an IP address of 172.16.73.146 (eth0). The overcloud is on the 192.0.2.0/24 (provisioning) and 192.0.3.0/24(external) network.
    I have facing issue while introspection (command: openstack baremetal introspection bulk start). The overcloud VMs are getting started but unable to configure network interface. Eventually introspection fails. Can you tell me what might have gone wrong?

    • Hi There,

      This sounds like a DHCP server issue that the VMs are not getting an IP and as such can’t be bootstrapped by the undercloud. The cause of this problem is usually another DHCP server. You need to make sure on the provisioning network no other DHCP server is operating. To test this you can just start a VM on that network and network boot it. Does it get a DHCP address? Can you ping it from undercloud VM?

      Hope this helps

      Keith

  13. Hi Keith , I am a new bee in this domain but what i know based on RedHat recommendations the Undercloud always run on physical host and Over cloud on the VM’s provisioned by Under Cloud in Step ? Can you help make this point clear and what about networks since over cloud lies in production so can i plan its network including provision totally separate than under cloud ? Thanks for your support

    • Yes but I am not focused on how to setup production environments or even best practices. Rather what I document is how to get things setup in a lab environment running on your laptop for learning, etc.

      As for your question the undercloud and overcloud need to share same provisioning network else things wont work. The other networks can be separated API, public, management, storage management, storage, etc.

      Keith

  14. Hello Mr Keith,

    thanks for sharing this information… it’s very helpful..

    From my side, I am stucked on the step of provisioning the baremetal. ‘openstack baremetal bluck start:
    Both VMs start, got the ip and the installation is on. At the end, both machines carche and their status (openstack baremetal node list ) manageable and poweroff.

    I could find any error in the ironirc-inspector-*.log (from journalctl) and when I peneterate in the vm after the boot (via ssh passing in the ipxe) no obivous error was found.

    any thoughts ?

    Cheers,
    JM

    • Sorry to hear that, I haven’t seen issues like this usually when you get ot point where the VMs boot and get their image things install at least OS. I wonder if maybe there are memory issues or constraints? That could cause VMs to mysteriously crash.

      • Thanks Ktenzer for the reply,

        The three VMs have 4GB RAM and (undercloud 20 GB storage and 60GB per VM on overcloud).

        during the boot, I got this message on VM on /var/log/messages:
        Apr 7 23:06:24 localhost kernel: device eth0 entered promiscuous mode
        Apr 7 23:06:24 localhost kdumpctl: Error: /boot/vmlinuz-3.10.0-514.10.2.el7.x86_64 not found.
        Apr 7 23:06:24 localhost kdumpctl: Starting kdump: [FAILED]
        Apr 7 23:06:24 localhost ironic-python-agent: 2017-04-07 23:06:24.046 755 DEBUG ironic_python_agent.netutils [-] Binding interface eth0 for protocol 35020 __enter__ /usr/lib/python2.7/site-packages/ironic_python_agent/netutils.py:72
        Apr 7 23:06:24 localhost systemd: kdump.service: main process exited, code=exited, status=1/FAILURE
        Apr 7 23:06:24 localhost systemd: Failed to start Crash recovery kernel arming.
        Apr 7 23:06:24 localhost systemd: Startup finished in 8.622s (kernel) + 15.548s (userspace) = 24.170s.
        Apr 7 23:06:24 localhost systemd: Unit kdump.service entered failed state.
        Apr 7 23:06:24 localhost systemd: kdump.service failed.

        I share also: the full log https://pastebin.com/fBgGff5n

  15. Hi keith,

    I am following your Tripleo OSPD steps to install Undercloud and overcloud in virtual env , I am using RHEL Hypervisior .I am seeing failure at his steps
    >> openstack baremetal import –json instackenv.json command its not executing but when i issue “openstack baremetal list” cli i can see 2 instance are created and powerstate is NONE.

    When i checked “ironic node-show f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab” i dont see any kernal or ram_disk info is present

    I checked the ssh access to hypervisior (192.168.122.1) its working without password from undercloud to hypervisior.

    My only susupect is ssh_password in instackenv.json file , verified couple of times i dont see any error it excatly matching with your file.

    openstack baremetal import –json instackenv.json

    openstack baremetal import –json instackenv.json
    WARNING: ironicclient.common.http Request returned failure status.
    ERROR: openstack SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
    Traceback (most recent call last):

    [stack@undercloud ~]$ ssh stack@192.168.122.1
    Last login: Thu Apr 6 18:07:24 2017 from 192.168.122.3
    [stack@ospd ~]$

    Error logs
    ========================================
    openstack baremetal import –json instackenv.json
    WARNING: ironicclient.common.http Request returned failure status.
    ERROR: openstack SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
    Traceback (most recent call last):

    File “/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py”, line 142, in inner
    return func(*args, **kwargs)

    File “/usr/lib/python2.7/site-packages/ironic/conductor/manager.py”, line 435, in change_node_power_state
    task.driver.power.validate(task)

    File “/usr/lib/python2.7/site-packages/ironic/drivers/modules/ssh.py”, line 514, in validate
    ” be established: %s”) % e)

    InvalidParameterValue: SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
    (HTTP 400)
    [stack@undercloud ~]$ openstack baremetal list
    +————————————–+——+—————+————-+—————–+————-+
    | UUID | Name | Instance UUID | Power State | Provision State | Maintenance |
    +————————————–+——+—————+————-+—————–+————-+
    | f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab | None | None | None | available | False |
    | 9e461d9f-9cf9-431e-9273-a2367e40965c | None | None | None | available | False |
    +————————————–+——+—————+————-+—————–+————-+
    [stack@undercloud ~]$
    [stack@undercloud ~]$
    [stack@undercloud ~]$ ironic node-show f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab
    +————————+————————————————————————-+
    | Property | Value |
    +————————+————————————————————————-+
    | target_power_state | None |
    | extra | {} |
    | last_error | None |
    | updated_at | 2017-04-06T22:07:52+00:00 |
    | maintenance_reason | None |
    | provision_state | available |
    | uuid | f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab |
    | console_enabled | False |
    | target_provision_state | None |
    | maintenance | False |
    | inspection_started_at | None |
    | inspection_finished_at | None |
    | power_state | None |
    | driver | pxe_ssh |
    | reservation | None |
    | properties | {u’memory_mb’: u’4096′, u’cpu_arch’: u’x86_64′, u’local_gb’: u’60’, |
    | | u’cpus’: u’4′} |
    | instance_uuid | None |
    | name | None |
    | driver_info | {u’ssh_username’: u’stack’, u’ssh_virt_type’: u’virsh’, u’ssh_address’: |
    | | u’192.168.122.1′, u’ssh_key_contents’: u’echo $(cat ~/.ssh/id_rsa)’} |
    | created_at | 2017-04-06T22:03:31+00:00 |
    | driver_internal_info | {} |
    | chassis_uuid | |
    | instance_info | {} |
    +————————+————————————————————————-+

    Please let me know your suggestion

      • Thanks for quick reply, am using RHEL Openstack 7 as per the guide and From Undercloud VM i am able to password ssh to 192.168.122.1 (hypervisior) . one more thing i created bridge interface for eth0 to eth2 with bridge interface br0 to br2 in hypervsisior
        Undercloud VM network is using br0 ,br2 and br3

        ip add sh
        1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
        valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
        valid_lft forever preferred_lft forever
        2: eth0: mtu 9000 qdisc mq master br0 state UP qlen 1000
        link/ether 00:25:b5:ff:00:1e brd ff:ff:ff:ff:ff:ff
        inet6 fe80::225:b5ff:feff:1e/64 scope link
        valid_lft forever preferred_lft forever
        3: eth1: mtu 9000 qdisc mq master br1 state UP qlen 1000
        link/ether 00:25:b5:ff:00:0e brd ff:ff:ff:ff:ff:ff
        inet6 fe80::225:b5ff:feff:e/64 scope link
        valid_lft forever preferred_lft forever
        4: eth2: mtu 9000 qdisc mq master br2 state UP qlen 1000
        link/ether 00:25:b5:ff:00:3e brd ff:ff:ff:ff:ff:ff
        inet6 fe80::225:b5ff:feff:3e/64 scope link
        valid_lft forever preferred_lft forever
        5: eth3: mtu 9000 qdisc mq master br3 state UP qlen 1000
        link/ether 00:25:b5:ff:00:2e brd ff:ff:ff:ff:ff:ff
        inet6 fe80::225:b5ff:feff:2e/64 scope link
        valid_lft forever preferred_lft forever
        6: eth4: mtu 9000 qdisc noop state DOWN qlen 1000
        link/ether 00:25:b5:ff:00:6e brd ff:ff:ff:ff:ff:ff
        7: br0: mtu 9000 qdisc noqueue state UP qlen 1000
        link/ether 00:25:b5:ff:00:1e brd ff:ff:ff:ff:ff:ff
        inet 10.88.192.17/24 brd 10.88.192.255 scope global br0
        valid_lft forever preferred_lft forever
        inet6 fe80::225:b5ff:feff:1e/64 scope link
        valid_lft forever preferred_lft forever
        8: br1: mtu 9000 qdisc noqueue state UP qlen 1000
        link/ether 00:25:b5:ff:00:0e brd ff:ff:ff:ff:ff:ff
        inet 192.168.125.2/24 brd 192.168.125.255 scope global br1
        valid_lft forever preferred_lft forever
        inet6 fe80::225:b5ff:feff:e/64 scope link
        valid_lft forever preferred_lft forever
        9: br2: mtu 9000 qdisc noqueue state UP qlen 1000
        link/ether 00:25:b5:ff:00:3e brd ff:ff:ff:ff:ff:ff
        inet 192.168.126.2/24 brd 192.168.126.255 scope global br2
        valid_lft forever preferred_lft forever
        inet6 fe80::225:b5ff:feff:3e/64 scope link
        valid_lft forever preferred_lft forever
        10: br3: mtu 9000 qdisc noqueue state UP qlen 1000
        link/ether 00:25:b5:ff:00:2e brd ff:ff:ff:ff:ff:ff
        inet 192.168.122.1/24 brd 192.168.122.255 scope global br3
        valid_lft forever preferred_lft forever
        inet6 fe80::225:b5ff:feff:2e/64 scope link
        valid_lft forever preferred_lft forever
        11: vnet0: mtu 9000 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
        link/ether fe:54:00:8b:f9:dc brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe8b:f9dc/64 scope link
        valid_lft forever preferred_lft forever
        12: vnet1: mtu 9000 qdisc pfifo_fast master br2 state UNKNOWN qlen 1000
        link/ether fe:54:00:9d:13:73 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe9d:1373/64 scope link
        valid_lft forever preferred_lft forever
        13: vnet2: mtu 9000 qdisc pfifo_fast master br3 state UNKNOWN qlen 1000
        link/ether fe:54:00:16:4b:33 brd ff:ff:ff:ff:ff:ff
        inet6 fe80::fc54:ff:fe16:4b33/64 scope link
        valid_lft forever preferred_lft forever

        [root@ospd ~]# virsh domiflist undercloud
        Interface Type Source Model MAC
        ——————————————————-
        vnet0 bridge br0 virtio 52:54:00:8b:f9:dc
        vnet1 bridge br2 virtio 52:54:00:9d:13:73
        vnet2 bridge br3 virtio 52:54:00:16:4b:33

        whereas overcloud is in virbr interface

        [root@ospd ~]# virsh domiflist overcloud-node1
        Interface Type Source Model MAC
        ——————————————————-
        – bridge br2 virtio 52:54:00:26:f8:e7
        – bridge br0 virtio 52:54:00:9f:11:6c

        [root@ospd ~]#
        [root@ospd ~]#
        [root@ospd ~]# virsh domiflist overcloud-node2
        Interface Type Source Model MAC
        ——————————————————-
        – bridge br2 virtio 52:54:00:44:fd:70
        – bridge br0 virtio 52:54:00:8c:a5:27

        I am not sure i need to use Virbr or bridge interface as per your instruction it was using external and provisioing created under

        virsh # net-list –all
        Name State Autostart Persistent
        ———————————————————-
        external inactive yes yes
        provisioning inactive yes yes

        virsh # net-info external
        Name: external
        UUID: 6e8990c8-10f1-4e63-baf2-d23c4d8dd205
        Active: no
        Persistent: yes
        Autostart: yes
        Bridge: virbr1

        net-info provisioning
        Name: provisioning
        UUID: 44dd7ad7-38f3-4b08-9224-65145b9a2180
        Active: no
        Persistent: yes
        Autostart: yes
        Bridge: virbr0

  16. Hi Keith,

    I fixed the ssh issue , I saw one of the comments in the blog i re-edited the instackenv.json it worked . https://access.redhat.com/solutions/1603243 . Thanks very much for the article .

    only issue now is see here Overcloud controller and overcloud compute is having ip address for Provisioning and eth0 is associated to BR-EX

    Please find the overcloud controller IFCONFIG output
    https://pastebin.com/7jb4F2w9

    Issue 1: how could i login to overcloud Nodes from outside ? only heat-admin only username is allowed or are we able to login as root user in overcloud-nodes?
