Overview
In this article we will look at how to deploy an OpenStack cloud using TripleO, the upstream project behind the Red Hat OpenStack Director. Regardless of which distribution you are using, OpenStack is essentially OpenStack: everyone works from the same code base. The main differences between distributions are which OpenStack projects they include, how they are supported and how they are deployed. Every distribution has its own deployment tool, and deployments naturally differ based on the support decisions each distribution makes. However, many distributions have created their own proprietary installers. Shouldn't the OpenStack community unite around a common installer? What would be better than using OpenStack to deploy OpenStack? Why should OpenStack administrators have to learn separate proprietary tooling? Why should we create unnecessary vendor lock-in around OpenStack's deployment tooling? And installing OpenStack is one thing, but what about upgrades and life-cycle management?
This is the promise of TripleO! The TripleO (OpenStack on OpenStack) project was started to solve these problems and unify OpenStack deployment as well as, eventually, life-cycle management. It has taken quite some time and been a journey, but finally the first distribution is using TripleO. Red Hat Enterprise Linux OpenStack Platform 7 has shifted away from Foreman/Puppet and is now based largely on TripleO. Red Hat is bringing its expertise from years of OpenStack deployments and contributing heavily to TripleO.
TripleO Concepts
Before getting into the weeds, we should understand some basic concepts. First, TripleO uses OpenStack to deploy OpenStack: it mainly uses Ironic for provisioning and Heat for orchestration, with Puppet handling configuration management under the hood. TripleO first deploys an OpenStack cloud that is used to deploy other OpenStack clouds; this is referred to as the undercloud. An OpenStack cloud deployed from the undercloud is known as an overcloud. The networking requirement is that all systems share a non-routed provisioning network, and TripleO uses PXE on that network to boot and install the initial OS image (bootstrap). There are different types of nodes, or roles, a node can have: in addition to controller and compute you can have nodes for Cinder, Ceph or Swift storage. Ceph integration is built in, and since most OpenStack deployments use Ceph this is an obvious advantage.
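To make this concrete: once the undercloud is installed (we do that below), the Heat templates that drive overcloud deployment are right there on disk. A minimal peek, assuming the default package location on OSP 7:
[stack@undercloud ~]$ ls /usr/share/openstack-tripleo-heat-templates/
# This is the template tree 'openstack overcloud deploy --templates' uses
# when no custom template directory is given; the Puppet configuration those
# templates apply ships inside the overcloud-full image.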
Environment
In this environment we have the KVM hypervisor host (laptop), the undercloud (a single VM) and the overcloud (1 x controller, 1 x compute). The undercloud and overcloud are all VMs running on the KVM hypervisor host. The KVM hypervisor host is on the 192.168.122.0/24 network and has the IP 192.168.122.1. The undercloud runs on a single VM connected to the 192.168.122.0/24 (management) network and the 192.168.126.0/24 (provisioning) network; its management IP is 192.168.122.90 (eth0). The overcloud is on the 192.168.126.0/24 (provisioning) and 192.168.125.0/24 (external) networks. This is a very simple network configuration; a real production environment will use many more networks in the overcloud.
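As a quick reference, here is the same layout as a sanity check you can run once the VMs exist; the interface names are from this lab and may differ in your environment:
# KVM hypervisor: 192.168.122.1 (management)
# Undercloud VM:  eth0 = 192.168.122.90 (management), eth1 on 192.168.126.0/24 (provisioning)
# Overcloud VMs:  192.168.126.0/24 (provisioning) and 192.168.125.0/24 (external)
[ktenzer@ktenzer ~]$ ping -c 1 192.168.122.90   # hypervisor can reach the undercloud management IP
[stack@undercloud ~]$ ip addr show eth1         # should carry the 192.168.126.x provisioning address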
Deploying Undercloud
In this section we will configure the undercloud. Normally you would deploy OpenStack nodes on bare metal, but since this setup is designed to run on a laptop or in a lab, we are using KVM virtualization. Before beginning, install RHEL or CentOS 7.1 on your KVM hypervisor.
Disable NetworkManager.
undercloud# systemctl stop NetworkManager
undercloud# systemctl disable NetworkManager
Enable IP forwarding.
undercloud# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
undercloud# sysctl -p /etc/sysctl.conf
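To confirm the setting took effect:
undercloud# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1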
Ensure hostname is static.
undercloud# hostnamectl set-hostname undercloud.lab.com
undercloud# systemctl restart network
Register to subscription manager and enable appropriate repositories for RHEL.
undercloud# subscription-manager register
undercloud# subscription-manager list --available
undercloud# subscription-manager attach --pool=8a85f9814f2c669b014f3b872de132b5
undercloud# subscription-manager repos --disable=*
undercloud# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-optional-rpms --enable=rhel-7-server-extras-rpms --enable=rhel-7-server-openstack-7.0-rpms --enable=rhel-7-server-openstack-7.0-director-rpms
Perform yum update and reboot system.
undercloud# yum update -y && reboot
Install facter and ensure hostname is set properly in /etc/hosts.
undercloud# yum install facter -y
undercloud# ipaddr=$(facter ipaddress_eth0)
undercloud# echo -e "$ipaddr\t\tundercloud.lab.com\tundercloud" >> /etc/hosts
Install TripleO packages.
undercloud# yum install python-rdomanager-oscplugin -y
Create a stack user.
undercloud# useradd stack
undercloud# echo "redhat" | passwd stack --stdin
undercloud# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
undercloud# chmod 0440 /etc/sudoers.d/stack
undercloud# su - stack
Determine the network settings for the undercloud. At minimum you need two networks: one for provisioning and one for external overcloud traffic. In this case we have the undercloud provisioning network 192.168.126.0/24 and the overcloud external network 192.168.125.0/24.
[stack@undercloud ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf
[stack@undercloud ~]$ vi ~/undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.10
undercloud_admin_vip = 192.168.126.11
local_interface = eth1
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.120
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
discovery_iprange = 192.168.126.130,192.168.126.150
[auth]
Install the undercloud.
[stack@undercloud ~]$ openstack undercloud install
#############################################################################
instack-install-undercloud complete.
The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.
There is also a stackrc file at /home/stack/stackrc.
These files are needed to interact with the OpenStack services, and should be
secured.
#############################################################################
Verify undercloud.
[stack@undercloud ~]$ source ~/stackrc
[stack@undercloud ~]$ openstack catalog show nova
+-----------+-----------------------------------------------------------------------------+
| Field     | Value                                                                       |
+-----------+-----------------------------------------------------------------------------+
| endpoints | regionOne                                                                   |
|           | publicURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a    |
|           | internalURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a  |
|           | adminURL: http://192.168.126.1:8774/v2/e6649719251f40569200fec7fae6988a     |
|           |                                                                             |
| name      | nova                                                                        |
| type      | compute                                                                     |
+-----------+-----------------------------------------------------------------------------+
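Since we will later register the overcloud VMs with the pxe_ssh power driver, it also does not hurt to confirm Ironic is up and lists its drivers; pxe_ssh should appear among them:
[stack@undercloud ~]$ ironic driver-list
# pxe_ssh should be listed alongside the other enabled drivers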
Deploying Overcloud
The overcloud is, as mentioned, a separate cloud from the undercloud; they share no resources other than the provisioning network. The terms "over" and "under" sometimes confuse people into thinking the overcloud sits on top of the undercloud from a networking perspective. That is of course not the case: in reality the clouds sit side by side, and "over" and "under" describe a logical relationship between the two clouds. We will do a minimal overcloud deployment: 1 x controller and 1 x compute.
Create a directory for storing the deployment images. These are the images Ironic uses to provision an OpenStack node.
[stack@undercloud]$ mkdir ~/images
Download the images from https://access.redhat.com/downloads/content/191/ver=7/rhel—7/7/x86_64/product-downloads and copy them to ~/images.
[stack@undercloud images]$ ls -l
total 2307076
-rw-r-----. 1 stack stack  61419520 Oct 12 16:11 deploy-ramdisk-ironic-7.1.0-39.tar
-rw-r-----. 1 stack stack 155238400 Oct 12 16:11 discovery-ramdisk-7.1.0-39.tar
-rw-r-----. 1 stack stack 964567040 Oct 12 16:12 overcloud-full-7.1.0-39.tar
Extract image tarballs.
[stack@undercloud ~]$ cd ~/images
[stack@undercloud images]$ for tarfile in *.tar; do tar -xf $tarfile; done
Upload images to Glance.
[stack@undercloud ~]$ openstack overcloud image upload --image-path /home/stack/images
[stack@undercloud ~]$ openstack image list
+--------------------------------------+------------------------+
| ID                                   | Name                   |
+--------------------------------------+------------------------+
| 31c01b42-d164-4898-b615-4787c12d3a53 | bm-deploy-ramdisk      |
| e38057f6-24f2-42d1-afae-bb54dead864d | bm-deploy-kernel       |
| f1708a15-5b9b-41ac-8363-ffc9932534f3 | overcloud-full         |
| 318768c2-5300-43cb-939d-44fb7abca7de | overcloud-full-initrd  |
| 28422b76-c37f-4413-b885-cccb24a4611c | overcloud-full-vmlinuz |
+--------------------------------------+------------------------+
Configure DNS for the undercloud's provisioning subnet. The undercloud system is connected to the 192.168.122.0/24 network, which provides DNS.
[stack@undercloud]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                       |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 532f3344-57ed-4a2f-b438-67a5d60c71fc |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.120"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+
[stack@undercloud ~]$ neutron subnet-update 532f3344-57ed-4a2f-b438-67a5d60c71fc --dns-nameserver 192.168.122.1
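To confirm the nameserver was applied, show the subnet again; 192.168.122.1 should now appear under dns_nameservers:
[stack@undercloud ~]$ neutron subnet-show 532f3344-57ed-4a2f-b438-67a5d60c71fc | grep dns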
Since we are in a nested virtual environment, it is necessary to tweak timeouts.
undercloud# sudo su -
undercloud# openstack-config --set /etc/nova/nova.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-config --set /etc/ironic/ironic.conf DEFAULT rpc_response_timeout 600
undercloud# openstack-service restart nova
undercloud# openstack-service restart ironic
undercloud# exit
Create the provisioning and external networks on the KVM hypervisor host. Ensure that NAT forwarding and DHCP are enabled on the external network. The provisioning network should be non-routable with DHCP disabled; the undercloud will handle DHCP services for the provisioning network.
[ktenzer@ktenzer ~]$ cat > /tmp/external.xml <<EOF
<network>
  <name>external</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.125.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.125.2' end='192.168.125.254'/>
    </dhcp>
  </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/external.xml
[ktenzer@ktenzer ~]$ virsh net-autostart external
[ktenzer@ktenzer ~]$ virsh net-start external
[ktenzer@ktenzer ~]$ cat > /tmp/provisioning.xml <<EOF
<network>
  <name>provisioning</name>
  <ip address='192.168.126.254' netmask='255.255.255.0'>
  </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ virsh net-define /tmp/provisioning.xml
[ktenzer@ktenzer ~]$ virsh net-autostart provisioning
[ktenzer@ktenzer ~]$ virsh net-start provisioning
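Both virtual networks should now be active and set to autostart:
[ktenzer@ktenzer ~]$ virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes
 external             active     yes           yes
 provisioning         active     yes           yes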
Create the VM hulls in KVM using virsh on the hypervisor host. You will need to change the disk path to suit your environment.
ktenzer# cd /home/ktenzer/VirtualMachines
ktenzer# for i in {1..2}; do qemu-img create -f qcow2 -o preallocation=metadata overcloud-node$i.qcow2 60G; done
ktenzer# for i in {1..2}; do virt-install --ram 4096 --vcpus 4 --os-variant rhel7 --disk path=/home/ktenzer/VirtualMachines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name overcloud-node$i --cpu SandyBridge,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done
Enable access on KVM hypervisor host so that Ironic can control VMs.
ktenzer# cat << EOF > /etc/polkit-1/localauthority/50-local.d/50-libvirt-user-stack.pkla
[libvirt Management Access]
Identity=unix-user:stack
Action=org.libvirt.unix.manage
ResultAny=yes
ResultInactive=yes
ResultActive=yes
EOF
Copy ssh key from undercloud system to KVM hypervisor host for stack user.
undercloud$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@192.168.122.1
Save the MAC addresses of the VMs' provisioning-network interfaces. Ironic needs to know which MAC addresses a node has on the provisioning network.
[stack@undercloud ~]$ for i in {1..2}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i | awk '$3 == "provisioning" {print $5};'; done > /tmp/nodes.txt
[stack@undercloud ~]$ cat /tmp/nodes.txt
52:54:00:44:60:2b
52:54:00:ea:e7:2e
Create a JSON file for the Ironic baremetal node configuration. In this case we are configuring two nodes, which are of course the virtual machines we already created. The pm_addr is set to the IP of the KVM hypervisor host.
[stack@undercloud ~]$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 1p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 2p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF
Validate JSON file.
[stack@undercloud ~]$ curl -O https://raw.githubusercontent.com/rthallisey/clapper/master/instackenv-validator.py
python instackenv-validator.py -f instackenv.json
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
INFO:__main__:Checking node 192.168.122.1
DEBUG:__main__:Identified virtual node
DEBUG:__main__:Baremetal IPs are all unique.
DEBUG:__main__:MAC addresses are all unique.
--------------------
SUCCESS: instackenv validator found 0 errors
Add the nodes to Ironic.
[stack@undercloud ~]$ openstack baremetal import --json instackenv.json
List newly added baremetal nodes.
[stack@undercloud ~]$ openstack baremetal list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None          | power off   | available       | False       |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None          | power off   | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
Enable the nodes for baremetal provisioning and verify that the deploy kernel and ramdisk images are assigned.
[stack@undercloud ~]$ openstack baremetal configure boot
[stack@undercloud ~]$ ironic node-show cd620ad0-4563-44a5-8078-531b7f906188 | grep -A1 deploy
| driver_info  | {u'ssh_username': u'stack', u'deploy_kernel': u'50125b15-9de3-4f03-bfbb- |
|              | 76e740741b68', u'deploy_ramdisk': u'25b55027-ca57-4f15-babe-             |
|              | 6e14ba7d0b0c', u'ssh_key_contents': u'-----BEGIN RSA PRIVATE KEY-----    |
[stack@undercloud ~]$ openstack image show 50125b15-9de3-4f03-bfbb-76e740741b68
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | 061e63c269d9c5b9a48a23f118c865de     |
| container_format | aki                                  |
| created_at       | 2015-10-12T10:22:38.000000           |
| deleted          | False                                |
| disk_format      | aki                                  |
| id               | 50125b15-9de3-4f03-bfbb-76e740741b68 |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bm-deploy-kernel                     |
| owner            | 2ad8c320cf7040ef9ec0440e94238f58     |
| properties       | {}                                   |
| protected        | False                                |
| size             | 5027584                              |
| status           | active                               |
| updated_at       | 2015-10-12T10:22:38.000000           |
+------------------+--------------------------------------+
[stack@undercloud ~]$ openstack image show 25b55027-ca57-4f15-babe-6e14ba7d0b0c
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| checksum         | eafcb9601b03261a7c608bebcfdff41c     |
| container_format | ari                                  |
| created_at       | 2015-10-12T10:22:38.000000           |
| deleted          | False                                |
| disk_format      | ari                                  |
| id               | 25b55027-ca57-4f15-babe-6e14ba7d0b0c |
| is_public        | True                                 |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | bm-deploy-ramdisk                    |
| owner            | 2ad8c320cf7040ef9ec0440e94238f58     |
| properties       | {}                                   |
| protected        | False                                |
| size             | 56355601                             |
| status           | active                               |
| updated_at       | 2015-10-12T10:22:40.000000           |
+------------------+--------------------------------------+
Ironic at this point only supports IPMI booting, and since we are using VMs we need to use pxe_ssh. The following is a workaround to allow that to work.
[stack@undercloud ~]$ sudo su -
undercloud# cat << EOF > /usr/bin/bootif-fix
#!/usr/bin/env bash
while true; do find /httpboot/ -type f ! -iname "kernel" ! -iname "ramdisk" ! -iname "*.kernel" ! -iname "*.ramdisk" -exec sed -i 's|{mac|{net0/mac|g' {} +; done
EOF
undercloud# chmod a+x /usr/bin/bootif-fix
undercloud# cat << EOF > /usr/lib/systemd/system/bootif-fix.service
[Unit]
Description=Automated fix for incorrect iPXE BOOTIF
[Service]
Type=simple
ExecStart=/usr/bin/bootif-fix
[Install]
WantedBy=multi-user.target
EOF
undercloud# systemctl daemon-reload
undercloud# systemctl enable bootif-fix
undercloud# systemctl start bootif-fix
undercloud# exit
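The workaround service should now be running in the background, rewriting the iPXE configs as Ironic generates them:
[stack@undercloud ~]$ sudo systemctl is-active bootif-fix
active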
Create a new flavor for the baremetal nodes and set the boot option to local.
undercloud$ openstack flavor create --id auto --ram 4096 --disk 58 --vcpus 4 baremetal
undercloud$ openstack flavor set --property "cpu_arch"="x86_64" --property "capabilities:boot_option"="local" baremetal
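The flavor disk (58 GB) is set slightly below the 60 GB the nodes report, presumably to leave the scheduler some headroom; you can double-check the flavor before introspection:
undercloud$ openstack flavor show baremetal
# Confirm ram=4096, disk=58, vcpus=4 and the cpu_arch/boot_option properties.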
Perform introspection on the baremetal nodes. This discovers the hardware and is used to configure node roles.
[stack@undercloud ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: 79f2a51c-a0f0-436f-9e8a-c082ee61f938
Starting introspection of node: 8ba244fd-5362-45fe-bb6c-5f15f2949912
Waiting for discovery to finish...
Discovery for UUID 79f2a51c-a0f0-436f-9e8a-c082ee61f938 finished successfully.
Discovery for UUID 8ba244fd-5362-45fe-bb6c-5f15f2949912 finished successfully.
Setting manageable nodes to available...
Node 79f2a51c-a0f0-436f-9e8a-c082ee61f938 has been set to available.
Node 8ba244fd-5362-45fe-bb6c-5f15f2949912 has been set to available.
To check progress of introspection.
[stack@undercloud ~]$ sudo journalctl -f -l -u openstack-ironic-discoverd -u openstack-ironic-discoverd-dnsmasq
List the Ironic baremetal nodes. Nodes should be available if introspection worked.
[stack@undercloud ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| cd620ad0-4563-44a5-8078-531b7f906188 | None | None          | power on    | available       | False       |
| 44df8163-7381-46a7-b016-a0dd18bfee53 | None | None          | power on    | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
Deploy overcloud.
[stack@undercloud ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Overcloud Endpoint: http://192.168.126.119:5000/v2.0/
Overcloud Deployed
Check the status of the Heat resources to monitor the overcloud deployment.
[stack@undercloud ~]$ heat resource-list -n 5 overcloud
Once the OS install is complete on the baremetal nodes, you can follow the progress of the OpenStack overcloud configuration.
[stack@undercloud ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 507d1172-fc73-476b-960f-1d9bf7c1c270 | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.103 |
| ff0e5e15-5bb8-4c77-81c3-651588802ebd | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.102 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
[stack@undercloud ~]$ ssh heat-admin@192.168.126.102
overcloud-controller-0$ sudo -i
overcloud-controller-0# journalctl -f -u os-collect-config
Deploying using the OpenStack Director UI
The overcloud deployment can also be done using the UI. You can even do the preliminary configuration using the CLI and run the deployment from the UI.
We can see exactly what OpenStack services will be configured in the overcloud.
Deployment status is shown, and using the UI it is also possible to see when the baremetal nodes have been completely provisioned.
Deployment details are available in the deployment log.
Once deployment is complete using the UI, the overcloud must be initialized.
Upon completion the overcloud is available and can be accessed.
Summary
In this article we discussed how OpenStack distributions have taken a proprietary mindset with regard to their deployment tools. We discussed the need for an OpenStack community-sponsored upstream project responsible for deployment and life-cycle management. That project is TripleO, and Red Hat is the first distribution to ship its deployment tool based on TripleO. Using OpenStack to deploy OpenStack benefits not only the entire community but also administrators and end-users. Finally, we saw how to deploy both the undercloud and the overcloud using TripleO and the Red Hat OpenStack Director. Hopefully you found this article informative and useful. I would be very interested in hearing your feedback on this topic, so please share.
Happy OpenStacking!
(c) 2015 Keith Tenzer
Hi Keith, Not sure if I am missing anything, but the article seems a little confusing.
Judging by the hostname, it appears you start off on the undercloud virtual machine, but the instructions say it is the KVM host. Also, which networks are physical (if any) and which are virtual?
Hi,
Yes, there are two hosts involved. The KVM host is hosting the VM where the undercloud runs. The undercloud then provisions the overcloud, which in turn creates VMs on the KVM host. The KVM host is basically the infrastructure; in my case it is virtualized using KVM. There are two virtual networks involved on the KVM host, 192.168.125.0/24 and 192.168.126.0/24. Both are simply bridges configured in KVM. Hope this helps?
Hi Keith,
I wanted to try this on physical boxes instead of nested KVM; what does the networking look like? Can you tell me how many network segments I need for each box and their corresponding usage (e.g. provisioning, external, etc.) with regard to the undercloud/overcloud setup? Also, do I have to have a separate network just for the IPMI connectivity? I will use 3 physical nodes for this PoC.
Thank you,
Cesar
Hi Cesar,
For physical boxes you would follow the same steps, just ignore the steps for getting KVM working through the ssh libvirt connector. You need at minimum two networks, and the IPMI network needs to be separate. By default OSP-d will put all other OpenStack networks on the other network. If this is desired you can just follow my steps except for the KVM parts. If you would like to split networks I recommend following this setup for now: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux_OpenStack_Platform/7/html/Director_Installation_and_Usage/chap-Installing_the_Overcloud.html#sect-Scenario_2_Using_the_CLI_to_Create_a_Basic_Overcloud
I will try to get a blog post on custom networking templates out soon. I haven't had a chance yet, but hope this helps.
Keith
Hi Keith,
Thanks for the wonderful article. I am following the same thing in my production environment, where all the nodes are physical boxes with the controller nodes in a 3-node cluster. All the introspections were successful, but when I do the comparison with the ahc tool it throws errors and I couldn't start the overcloud deployment. Also, in the ironic node-list the power state for all the nodes seems to be "none". Can you shed some light on the troubleshooting part as well? Below is for your reference.
[stack@rhxxxxxx01 ~]$ openstack baremetal introspection bulk start
Setting available nodes to manageable...
Starting introspection of node: 476229d8-b263-4d8e-b643-4435789ac8c5
Starting introspection of node: a97a80a6-bbe2-4d2d-b5d7-53d0926d8064
Starting introspection of node: 39a0798c-7718-4d4d-9c37-209b0dfab479
Starting introspection of node: 39b53ffc-9ab6-471d-8d8f-4ca63b699712
Starting introspection of node: 903bc620-3fea-4314-9a12-dd4d6f402876
Waiting for discovery to finish...
Discovery for UUID a97a80a6-bbe2-4d2d-b5d7-53d0926d8064 finished successfully.
Discovery for UUID 39a0798c-7718-4d4d-9c37-209b0dfab479 finished successfully.
Discovery for UUID 39b53ffc-9ab6-471d-8d8f-4ca63b699712 finished successfully.
Discovery for UUID 476229d8-b263-4d8e-b643-4435789ac8c5 finished successfully.
Discovery for UUID 903bc620-3fea-4314-9a12-dd4d6f402876 finished successfully.
Setting manageable nodes to available...
Node 476229d8-b263-4d8e-b643-4435789ac8c5 has been set to available.
Node a97a80a6-bbe2-4d2d-b5d7-53d0926d8064 has been set to available.
Node 39a0798c-7718-4d4d-9c37-209b0dfab479 has been set to available.
Node 39b53ffc-9ab6-471d-8d8f-4ca63b699712 has been set to available.
Node 903bc620-3fea-4314-9a12-dd4d6f402876 has been set to available.
Discovery completed.
[stack@rhxxxxxx01 ~]$ ironic node-list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| 476229d8-b263-4d8e-b643-4435789ac8c5 | None | None          | None        | available       | True        |
| a97a80a6-bbe2-4d2d-b5d7-53d0926d8064 | None | None          | None        | available       | True        |
| 39a0798c-7718-4d4d-9c37-209b0dfab479 | None | None          | None        | available       | True        |
| 39b53ffc-9ab6-471d-8d8f-4ca63b699712 | None | None          | None        | available       | True        |
| 903bc620-3fea-4314-9a12-dd4d6f402876 | None | None          | None        | available       | True        |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
[stack@rhxxxxxx01 ~]$ openstack flavor list
+--------------------------------------+-----------+--------+------+-----------+-------+-----------+
| ID                                   | Name      | RAM    | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+-----------+--------+------+-----------+-------+-----------+
| 2b1bbcba-c578-45d5-955a-1dcdaf74fb95 | compute   | 131072 | 300  | 0         | 1     | True      |
| 476a8e94-f49b-488e-9f87-71875bdfd7f9 | baremetal | 4096   | 40   | 0         | 1     | True      |
| 5c915660-c73a-41b7-8703-ef61d1d638d7 | control   | 131072 | 300  | 0         | 1     | True      |
+--------------------------------------+-----------+--------+------+-----------+-------+-----------+
[stack@rhxxxxxx01 ~]$ sudo ahc-match
ERROR:ahc_tools.match:Failed to match node uuid: 476229d8-b263-4d8e-b643-4435789ac8c5.
ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:Failed to match node uuid: a97a80a6-bbe2-4d2d-b5d7-53d0926d8064.
ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:Failed to match node uuid: 39a0798c-7718-4d4d-9c37-209b0dfab479.
ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:Failed to match node uuid: 39b53ffc-9ab6-471d-8d8f-4ca63b699712.
ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:Failed to match node uuid: 903bc620-3fea-4314-9a12-dd4d6f402876.
ERROR: Unable to match requirements on the following available roles in /etc/ahc-tools/edeploy: control, compute
ERROR:ahc_tools.match:The following nodes did not match any profiles and will not be updated: 476229d8-b263-4d8e-b643-4435789ac8c5,a97a80a6-bbe2-4d2d-b5d7-53d0926d8064,39a0798c-7718-4d4d-9c37-209b0dfab479,39b53ffc-9ab6-471d-8d8f-4ca63b699712,903bc620-3fea-4314-9a12-dd4d6f402876
The power state usually gets set once you provision. Since it is set to available, the nodes are available for deployment; once deployed, the power state shows. As for AHC, I know this is needed in order for Director to choose the appropriate flavor based on the hardware spec of a node. I also had issues and could not get this to work. When I get a chance I will revisit and write another article on my findings, with additional info on how to tweak Director, for example how to provide your own Heat templates for customizing installations.
Hi Keith, how can I log in to the Director UI? What IP address and username/password should I use? Also, after creating the overcloud and launching instances, are they accessible from my existing network? Thank you.
It should be reachable via the undercloud_admin_vip or undercloud_public_vip, in this case 192.168.122.10 and 192.168.122.11. The username/password you will find under /root or wherever you ran the installer; a file gets created called undercloud_passwords. Once you create the overcloud it creates a file called overcloud_passwords. The IP of the overcloud controller you can see by using "nova list" on the undercloud; under root there is also an authentication file you can source to get access to the undercloud. The overcloud runs as VMs inside the undercloud. To ssh to the systems you can use ssh heat-admin@<ip> from the undercloud. The systems should also be accessible to any other hosts on the undercloud network.
Hi Keith, don't know if it's a typo, but shouldn't it be 192.168.126.10 (admin_vip) and 192.168.126.11 (public_vip), as entered in ~/undercloud.conf? Thanks.
Hi Jasper,
Not sure I understand the issue? The undercloud needs admin and public VIPs; these should be on the provisioning network. Director by default will put everything on the provisioning network. If you need to divide traffic then you need to edit the network Heat templates and customize things.
Hey Keith, thank you very much for an excellent post. I have tried following the exact steps and I get stuck in the introspection step. In my case it never finishes...
After initial activity, the logs show every 10 seconds the following:
Dec 17 05:37:21 osp7-undercloud.sdnlab.cisco.com ironic-discoverd[589]: INFO:werkzeug:192.168.130.74 - - [17/Dec/2015 05:37:21] "GET /v1/introspection/332f7223-f0f9-4843-a562-076df005b4ba HTTP/1.1" 200 -
Dec 17 05:37:21 osp7-undercloud.sdnlab.cisco.com ironic-discoverd[589]: INFO:werkzeug:192.168.130.74 - - [17/Dec/2015 05:37:21] "GET /v1/introspection/2e1f6363-e890-4125-95c0-7a4f44c50eaf HTTP/1.1" 200 -
I am lost what I could have done wrong. Any hints you can give me based on this brief problem description?
Thanks a lot
Kali
This sounds like a communications problem between Ironic and the baremetal nodes. If you followed my instructions, you are using VMs under a KVM hypervisor. The problem, I would guess, is firewall related: the Ironic host can't communicate with the KVM VMs. Were you able to run virsh commands from the baremetal host against the remote KVM hypervisor? You may need to open the libvirt port; it is in the guide. Let me know?
I also hit the same issue. Disabling the firewall/adding the libvirt ports did not help, but rebooting the undercloud VM and rerunning the introspection step helped me overcome it.
Hi,
I want to test RDO-Manager on a KVM (QEMU) host with VMs as bare metal nodes.
The problem is the ssh connection to the KVM host when I import the virtual machines as bare metal.
I think all the information in the .json file is correct.
The connection between the undercloud and the KVM host also works.
[stack@undercloud ~]$ openstack baremetal import --json instackenv.json
Request returned failure status.
SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 439, in change_node_power_state
task.driver.power.validate(task)
File "/usr/lib/python2.7/site-packages/ironic/drivers/modules/ssh.py", line 540, in validate
InvalidParameterValue: SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1
Thanks
Apologies, but I have not tested RDO-Manager. If you are trying to connect to KVM, there are some hacks, I believe, since Ironic (at least in Kilo) did not support talking directly to libvirt. I mention these in the guide you read, but if those don't work then the only thing I can think of is that the firewall on the KVM host or libvirt security is preventing access. Did you try running libvirt commands remotely from the RDO-Manager host?
Hi aleksandarstanisavevski,
I just ran into the same issue as you, and it seems to be "works as designed", unfortunately. You need to edit your instackenv.json and add your ssh private key as in this article: https://access.redhat.com/solutions/1603243
For a home lab it’s okay as you’re “playing” with that stuff. But for production? … 😉
I hope that helps…
Cheers,
JustAnotherMichael
Finally got another chance to do this, worked like a charm 🙂
Great news! Always feels good when something finally works 😀
Hello,
Can I install 2 (or more) different overclouds using ONE undercloud?
At the moment, as of OSP 7, Director can only manage one overcloud; however, support for multiple overclouds is planned and on the roadmap.
https://access.redhat.com/solutions/2019433
Hi Keith
I want to try this at home; in this case, how should I run it? Is there any free repository I can configure to download these packages?
Many Thanks
Hi Cj,
Yes, I would recommend following this guide, especially since you will want to use KVM rather than bare metal for a lab environment, and this guide explains how to do that. You can try RDO-Manager, which is the community platform for Director, but I haven't tried it. I of course would recommend OSP Director, but you need a subscription for that; if you don't have one, RDO-Manager would be the next best thing. Let me know if there are issues? I can try and help.
CJ, since you're trying it out, you can get a 60 (or 30) day trial subscription from Red Hat.
thanks Keith and Fzied.
Hi Keith,
OSP 8 was just released; I don't know if you have taken a look at it already. I wanted to know if there are any network requirement changes from OSP 7 in terms of baremetal installation. Also, is this guide still valid for OSP 8 when using KVM? Thanks!
Hi Jasper,
Yes, my guide should work for OSP 8; you will just need to use different repositories. They changed from 7.0 to 8, so rhel-7-server-openstack-8-rpms, not rhel-7-server-openstack-8.0-rpms, for example. Here are the official docs: https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/director-installation-and-usage/director-installation-and-usage
I have not yet tested the OSP 8 director, but it is on my list. As for network requirements, nothing should have changed; it is still best practice to separate the PXE (provisioning) network on its own NIC. I will also shortly post a blog with some network templates for customizing things. Out of the box, Director will use two networks: provisioning, and then all OpenStack traffic on another. That is OK for a lab, but obviously not what you want for a PoC or production.
Regards,
Keith
Can anyone who followed this guide post the results of the following (from the control and compute nodes) if it is working? My instances can't ping the external IPs and the gateway 192.168.125.1.
ifconfig
ovs-vsctl show
Thanks
Paras,
Hi Paras,
The 192.168.125.1 network, is this a virtual network? Did you configure this network on the KVM side? Assuming the network is OK, you can try configuring a flat network for external. I have seen issues, at least in previous OpenStack deployments, using VXLAN.
neutron net-create external --provider:network_type flat --provider:physical_network physnet-external --router:external=True
neutron subnet-create external --name external_subnet --allocation-pool start=192.168.125.100,end=192.168.125.200 --disable-dhcp --gateway 192.168.125.1 192.168.125.0/24
Hi
Yes, it's the virtual network. I have replicated everything as per this article. Even though there is a vxlan flag when we do the overcloud deploy, can we still create a flat external network? If I use flat, the instance does not get an IP, saying "sending discover..." with no IP on the instance's eth0.
With vxlan the instance boots normally.
Thanks
Paras
Hi Keith, I have pasted the output of ifconfig. Does it look okay? http://pastebin.com/vxe7tGid
Thanks
The network interfaces look right... what happens if you create a host and add it to these networks, can it ping the gateway? You could simply try adding interfaces on the undercloud system. I am not sure your issue is within OpenStack. VXLAN is just a tunneling protocol, and if you want to use floating IPs that is what you want.
Keith, can you please post your undercloud KVM configuration? I am unable to get the nodes to power on by themselves.
The KVM configuration for the VMs is documented in the article; maybe it isn't easy to see. Any command line with ktenzer on it is on the hypervisor. Besides installing KVM, that is all I changed.
I also have two iptables rules to allow libvirt; I think you just need 16509:
-A INPUT -m state --state NEW -m tcp -p tcp --dport 16509 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 1883 -j ACCEPT
Hope this helps?
Regards,
Keith
I worked it out. Just want to say a big thanks to you.
Your guide for 8 allowed me to train for my EX210 exam.
Totally excellent, you deserve a medal.
Hi Grant,
Congrats on passing the EX210 exam, and glad the guide helped with your preparations.
Regards,
Keith
Hey Keith!
Thank you for the valuable post!
I had a question:
When defining the overcloud VM hulls, you defined the network as "provisioning" in the command:
===
# for i in {1..2}; do virt-install --ram 4096 --vcpus 4 --os-variant rhel7 --disk path=/root/virtual_machines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:overcloud --name overcloud-node$i --cpu SandyBridge,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done
===
However, while collecting the MAC addresses of the hulls for the provisioning network, you search for the "mgmt" network in the command, and for me that turned up an empty /tmp/nodes.txt:
===
for i in {1..2}; do virsh -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i | awk '$3 == "mgmt" {print $5};'; done > /tmp/nodes.txt
===
Am I missing something here?
I changed "mgmt" to "provisioning" and that helped me get the MAC addresses associated with the provisioning network on the hulls.
I am a newbie at this, so kindly forgive me if I am pointing out something considered obvious 🙂
"mgmt" is the name of the virsh network in my environment; substitute it with whatever you are calling yours, or use the names I am using. Good catch.
Hi,
I have followed the above steps. I have 3 VMs on a host, where one is the director and the other two are overcloud nodes. I am facing an issue with introspection: the overcloud VMs are getting started but are unable to get an IP from DHCP. Eventually introspection fails. Can you tell me what might have gone wrong?
Hi Keith
I am using 192.0.2.0/24 as the provisioning network. New environment: the KVM hypervisor host is on the 172.16.73.0/24 network and has the IP 172.16.73.136. The undercloud runs on a single VM on the 172.16.73.0/24 management network and the 192.0.2.0/24 provisioning network. The undercloud has an IP address of 172.16.73.146 (eth0). The overcloud is on the 192.0.2.0/24 (provisioning) and 192.0.3.0/24 (external) networks.
I am facing an issue with introspection (command: openstack baremetal introspection bulk start). The overcloud VMs are getting started but are unable to configure the network interface. Eventually introspection fails. Can you tell me what might have gone wrong?
Hi There,
This sounds like a DHCP server issue: the VMs are not getting an IP and as such can't be bootstrapped by the undercloud. The cause of this problem is usually another DHCP server; you need to make sure no other DHCP server is operating on the provisioning network. To test this you can just start a VM on that network and network-boot it. Does it get a DHCP address? Can you ping it from the undercloud VM?
Hope this helps
Keith
Also check the DHCP leases on undercloud VM.
#cat /var/lib/dhcpd/dhcpd.leases
Regards,
Keith
Hi Keith, I am a newbie in this domain, but from what I know, based on Red Hat recommendations, the undercloud always runs on a physical host and the overcloud on the VMs provisioned by the undercloud. Can you help make this point clear? And what about networks: since the overcloud lies in production, can I plan its network, including provisioning, totally separate from the undercloud? Thanks for your support.
Yes, but I am not focused on how to set up production environments or even best practices; rather, what I document is how to get things set up in a lab environment running on your laptop for learning, etc.
As for your question: the undercloud and overcloud need to share the same provisioning network or things won't work. The other networks can be separated: API, public, management, storage management, storage, etc.
Keith
Hello Mr Keith,
thanks for sharing this information... it's very helpful.
From my side, I am stuck on the step of provisioning the baremetal nodes, 'openstack baremetal introspection bulk start':
Both VMs start, get an IP and the installation runs. At the end, both machines crash and their status (openstack baremetal node list) is manageable and power off.
I couldn't find any error in the ironic-inspector-*.log (from journalctl), and when I log in to the VM after the boot (via ssh, getting in through iPXE) no obvious error was found.
Any thoughts?
Cheers,
JM
Sorry to hear that; I haven't seen issues like this. Usually, when you get to the point where the VMs boot and get their image, things install at least the OS. I wonder if maybe there are memory issues or constraints? That could cause VMs to mysteriously crash.
Thanks Ktenzer for the reply,
The three VMs have 4GB of RAM each (the undercloud has 20 GB of storage and each overcloud VM has 60 GB).
During the boot, I got this message on the VM in /var/log/messages:
Apr 7 23:06:24 localhost kernel: device eth0 entered promiscuous mode
Apr 7 23:06:24 localhost kdumpctl: Error: /boot/vmlinuz-3.10.0-514.10.2.el7.x86_64 not found.
Apr 7 23:06:24 localhost kdumpctl: Starting kdump: [FAILED]
Apr 7 23:06:24 localhost ironic-python-agent: 2017-04-07 23:06:24.046 755 DEBUG ironic_python_agent.netutils [-] Binding interface eth0 for protocol 35020 __enter__ /usr/lib/python2.7/site-packages/ironic_python_agent/netutils.py:72
Apr 7 23:06:24 localhost systemd: kdump.service: main process exited, code=exited, status=1/FAILURE
Apr 7 23:06:24 localhost systemd: Failed to start Crash recovery kernel arming.
Apr 7 23:06:24 localhost systemd: Startup finished in 8.622s (kernel) + 15.548s (userspace) = 24.170s.
Apr 7 23:06:24 localhost systemd: Unit kdump.service entered failed state.
Apr 7 23:06:24 localhost systemd: kdump.service failed.
I am also sharing the full log: https://pastebin.com/fBgGff5n
Try more memory; 4GB is rather small. Or try just one controller and one compute host.
Hi Keith,
I am following your TripleO OSP-d steps to install the undercloud and overcloud in a virtual environment, using a RHEL hypervisor. I am seeing a failure at this step:
The openstack baremetal import --json instackenv.json command is not executing, but when I issue the "openstack baremetal list" CLI I can see 2 instances are created and the power state is None.
When I checked "ironic node-show f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab" I don't see any kernel or ramdisk info present.
I checked the ssh access to the hypervisor (192.168.122.1); it works without a password from the undercloud to the hypervisor.
My only suspect is the ssh password in the instackenv.json file; I verified it a couple of times and don't see any error, it exactly matches your file.
openstack baremetal import --json instackenv.json
WARNING: ironicclient.common.http Request returned failure status.
ERROR: openstack SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
Traceback (most recent call last):
[stack@undercloud ~]$ ssh stack@192.168.122.1
Last login: Thu Apr 6 18:07:24 2017 from 192.168.122.3
[stack@ospd ~]$
Error logs
========================================
openstack baremetal import --json instackenv.json
WARNING: ironicclient.common.http Request returned failure status.
ERROR: openstack SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/oslo_messaging/rpc/server.py", line 142, in inner
return func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/ironic/conductor/manager.py", line 435, in change_node_power_state
task.driver.power.validate(task)
File "/usr/lib/python2.7/site-packages/ironic/drivers/modules/ssh.py", line 514, in validate
" be established: %s") % e)
InvalidParameterValue: SSH connection cannot be established: Failed to establish SSH connection to host 192.168.122.1.
(HTTP 400)
[stack@undercloud ~]$ openstack baremetal list
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provision State | Maintenance |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
| f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab | None | None          | None        | available       | False       |
| 9e461d9f-9cf9-431e-9273-a2367e40965c | None | None          | None        | available       | False       |
+--------------------------------------+------+---------------+-------------+-----------------+-------------+
[stack@undercloud ~]$ ironic node-show f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab
+------------------------+-------------------------------------------------------------------------+
| Property               | Value                                                                   |
+------------------------+-------------------------------------------------------------------------+
| target_power_state     | None                                                                    |
| extra                  | {}                                                                      |
| last_error             | None                                                                    |
| updated_at             | 2017-04-06T22:07:52+00:00                                               |
| maintenance_reason     | None                                                                    |
| provision_state        | available                                                               |
| uuid                   | f6d112d9-b90b-4b5b-9bce-b8f228b4b6ab                                    |
| console_enabled        | False                                                                   |
| target_provision_state | None                                                                    |
| maintenance            | False                                                                   |
| inspection_started_at  | None                                                                    |
| inspection_finished_at | None                                                                    |
| power_state            | None                                                                    |
| driver                 | pxe_ssh                                                                 |
| reservation            | None                                                                    |
| properties             | {u'memory_mb': u'4096', u'cpu_arch': u'x86_64', u'local_gb': u'60',     |
|                        | u'cpus': u'4'}                                                          |
| instance_uuid          | None                                                                    |
| name                   | None                                                                    |
| driver_info            | {u'ssh_username': u'stack', u'ssh_virt_type': u'virsh', u'ssh_address': |
|                        | u'192.168.122.1', u'ssh_key_contents': u'echo $(cat ~/.ssh/id_rsa)'}    |
| created_at             | 2017-04-06T22:03:31+00:00                                               |
| driver_internal_info   | {}                                                                      |
| chassis_uuid           |                                                                         |
| instance_info          | {}                                                                      |
+------------------------+-------------------------------------------------------------------------+
Please let me know your suggestions.
What version of OpenStack are you using? And you can ssh as the stack user to 192.168.122.1, which is your KVM hypervisor, without a password?
Thanks for the quick reply. I am using RHEL OpenStack 7 as per the guide, and from the undercloud VM I am able to ssh without a password to 192.168.122.1 (the hypervisor). One more thing: I created bridge interfaces br0 to br2 for eth0 to eth2 on the hypervisor.
The undercloud VM network is using br0, br2 and br3.
ip add sh
1: lo: mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 9000 qdisc mq master br0 state UP qlen 1000
link/ether 00:25:b5:ff:00:1e brd ff:ff:ff:ff:ff:ff
inet6 fe80::225:b5ff:feff:1e/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 9000 qdisc mq master br1 state UP qlen 1000
link/ether 00:25:b5:ff:00:0e brd ff:ff:ff:ff:ff:ff
inet6 fe80::225:b5ff:feff:e/64 scope link
valid_lft forever preferred_lft forever
4: eth2: mtu 9000 qdisc mq master br2 state UP qlen 1000
link/ether 00:25:b5:ff:00:3e brd ff:ff:ff:ff:ff:ff
inet6 fe80::225:b5ff:feff:3e/64 scope link
valid_lft forever preferred_lft forever
5: eth3: mtu 9000 qdisc mq master br3 state UP qlen 1000
link/ether 00:25:b5:ff:00:2e brd ff:ff:ff:ff:ff:ff
inet6 fe80::225:b5ff:feff:2e/64 scope link
valid_lft forever preferred_lft forever
6: eth4: mtu 9000 qdisc noop state DOWN qlen 1000
link/ether 00:25:b5:ff:00:6e brd ff:ff:ff:ff:ff:ff
7: br0: mtu 9000 qdisc noqueue state UP qlen 1000
link/ether 00:25:b5:ff:00:1e brd ff:ff:ff:ff:ff:ff
inet 10.88.192.17/24 brd 10.88.192.255 scope global br0
valid_lft forever preferred_lft forever
inet6 fe80::225:b5ff:feff:1e/64 scope link
valid_lft forever preferred_lft forever
8: br1: mtu 9000 qdisc noqueue state UP qlen 1000
link/ether 00:25:b5:ff:00:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.125.2/24 brd 192.168.125.255 scope global br1
valid_lft forever preferred_lft forever
inet6 fe80::225:b5ff:feff:e/64 scope link
valid_lft forever preferred_lft forever
9: br2: mtu 9000 qdisc noqueue state UP qlen 1000
link/ether 00:25:b5:ff:00:3e brd ff:ff:ff:ff:ff:ff
inet 192.168.126.2/24 brd 192.168.126.255 scope global br2
valid_lft forever preferred_lft forever
inet6 fe80::225:b5ff:feff:3e/64 scope link
valid_lft forever preferred_lft forever
10: br3: mtu 9000 qdisc noqueue state UP qlen 1000
link/ether 00:25:b5:ff:00:2e brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global br3
valid_lft forever preferred_lft forever
inet6 fe80::225:b5ff:feff:2e/64 scope link
valid_lft forever preferred_lft forever
11: vnet0: mtu 9000 qdisc pfifo_fast master br0 state UNKNOWN qlen 1000
link/ether fe:54:00:8b:f9:dc brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe8b:f9dc/64 scope link
valid_lft forever preferred_lft forever
12: vnet1: mtu 9000 qdisc pfifo_fast master br2 state UNKNOWN qlen 1000
link/ether fe:54:00:9d:13:73 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe9d:1373/64 scope link
valid_lft forever preferred_lft forever
13: vnet2: mtu 9000 qdisc pfifo_fast master br3 state UNKNOWN qlen 1000
link/ether fe:54:00:16:4b:33 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe16:4b33/64 scope link
valid_lft forever preferred_lft forever
[root@ospd ~]# virsh domiflist undercloud
Interface  Type    Source  Model   MAC
-------------------------------------------------------
vnet0      bridge  br0     virtio  52:54:00:8b:f9:dc
vnet1      bridge  br2     virtio  52:54:00:9d:13:73
vnet2      bridge  br3     virtio  52:54:00:16:4b:33
whereas the overcloud is on the virbr interfaces:
[root@ospd ~]# virsh domiflist overcloud-node1
Interface  Type    Source  Model   MAC
-------------------------------------------------------
-          bridge  br2     virtio  52:54:00:26:f8:e7
-          bridge  br0     virtio  52:54:00:9f:11:6c
[root@ospd ~]# virsh domiflist overcloud-node2
Interface  Type    Source  Model   MAC
-------------------------------------------------------
-          bridge  br2     virtio  52:54:00:44:fd:70
-          bridge  br0     virtio  52:54:00:8c:a5:27
I am not sure whether I need to use the virbr or bridge interfaces; as per your instructions it was using the external and provisioning networks created under virsh:
virsh # net-list --all
Name          State     Autostart  Persistent
----------------------------------------------------------
external      inactive  yes        yes
provisioning  inactive  yes        yes
virsh # net-info external
Name: external
UUID: 6e8990c8-10f1-4e63-baf2-d23c4d8dd205
Active: no
Persistent: yes
Autostart: yes
Bridge: virbr1
net-info provisioning
Name: provisioning
UUID: 44dd7ad7-38f3-4b08-9224-65145b9a2180
Active: no
Persistent: yes
Autostart: yes
Bridge: virbr0
Hi Keith,
I fixed the ssh issue. I saw one of the comments in the blog, re-edited the instackenv.json, and it worked: https://access.redhat.com/solutions/1603243. Thanks very much for the article.
The only issue now is that the overcloud controller and overcloud compute only have IP addresses on the provisioning network, and eth0 is associated with br-ex.
Please find the overcloud controller ifconfig output:
https://pastebin.com/7jb4F2w9
Issue 1: how can I log in to the overcloud nodes from outside? Is only the heat-admin username allowed, or can we log in as the root user on the overcloud nodes?
Awesome, great work.
As for the network configuration of the overcloud: correct, if you go with the defaults you end up with the overcloud running on the provisioning network, which is not what you want.
I created some templates for customizing the overcloud network configuration you can use as an example. In my example I configure the 192.168.122.x network for the overcloud using bonding without VLANs, meaning I have two NICs on 192.168.122.x.
https://github.com/ktenzer/openstack-heat-templates/tree/master/director/lab/osp8/templates
You should just need to clone those templates, update network-environment.yaml and re-run the install.
Thanks a lot Keith for your help.
Regards
Solomon
Hi Keith,
Thanks for sharing this wonderful article on OOO installation. I am doing it on baremetal servers and am stuck in the undercloud installation. I would appreciate your help, and will share my network settings after your response.
BR,
Asim
Hi Asim, sure, you can send the network settings.
Thanks for your response. I have installed the director successfully. Two baremetal nodes (1 control, 1 compute) were added successfully and introspection completed successfully. Below are the outputs:
$ openstack baremetal node list
+--------------------------------------+----------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name           | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+----------------+---------------+-------------+--------------------+-------------+
| 9d68a0d7-e06a-4522-8667-7620fceff659 | osp-controller | None          | power off   | available          | False       |
| fe90406f-a183-411d-a573-1f745d5a7294 | osp-compute-1  | None          | power off   | available          | False       |
+--------------------------------------+----------------+---------------+-------------+--------------------+-------------+
$ openstack overcloud profiles list
+--------------------------------------+----------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name      | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+----------------+-----------------+-----------------+-------------------+
| 9d68a0d7-e06a-4522-8667-7620fceff659 | osp-controller | available       | control         |                   |
| fe90406f-a183-411d-a573-1f745d5a7294 | osp-compute-1  | available       | compute         |                   |
+--------------------------------------+----------------+-----------------+-----------------+-------------------+
My external network is 10.3.3.0/24 on one NIC.
My provisioning network is 192.168.24.0/24 on a second NIC.
Now I am ready to deploy the overcloud, but I need your advice on which template files and environment I need to set up for the overcloud deploy command. Could you provide a sample command or any guide about it? I have set my network-environment.yaml file as below:
$ more network-environment.yaml
# This file is an example of an environment file for defining the isolated
# networks and related parameters.
resource_registry:
  # Network Interface templates to use (these files must exist)
  OS::TripleO::BlockStorage::Net::SoftwareConfig:
    ../network/config/single-nic-vlans/cinder-storage.yaml
  OS::TripleO::Compute::Net::SoftwareConfig:
    ../network/config/single-nic-vlans/compute.yaml
  OS::TripleO::Controller::Net::SoftwareConfig:
    ../network/config/single-nic-vlans/controller.yaml
  OS::TripleO::ObjectStorage::Net::SoftwareConfig:
    ../network/config/single-nic-vlans/swift-storage.yaml
  OS::TripleO::CephStorage::Net::SoftwareConfig:
    ../network/config/single-nic-vlans/ceph-storage.yaml

parameter_defaults:
  # This section is where deployment-specific configuration is done
  # CIDR subnet mask length for provisioning network
  ControlPlaneSubnetCidr: '24'
  # Gateway router for the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.24.254
  EC2MetadataIp: 192.168.24.1  # Generally the IP of the Undercloud
  # Customize the IP subnets to match the local environment
  InternalApiNetCidr: 172.17.0.0/24
  StorageNetCidr: 172.18.0.0/24
  StorageMgmtNetCidr: 172.19.0.0/24
  TenantNetCidr: 172.16.0.0/24
  ExternalNetCidr: 10.3.3.0/24
  # Customize the VLAN IDs to match the local environment
  InternalApiNetworkVlanID: 20
  StorageNetworkVlanID: 30
  StorageMgmtNetworkVlanID: 40
  TenantNetworkVlanID: 50
  ExternalNetworkVlanID: 10
  # Customize the IP ranges on each network to use for static IPs and VIPs
  InternalApiAllocationPools: [{'start': '172.17.0.10', 'end': '172.17.0.200'}]
  StorageAllocationPools: [{'start': '172.18.0.10', 'end': '172.18.0.200'}]
  StorageMgmtAllocationPools: [{'start': '172.19.0.10', 'end': '172.19.0.200'}]
  TenantAllocationPools: [{'start': '172.16.0.10', 'end': '172.16.0.200'}]
  # Leave room if the external network is also used for floating IPs
  ExternalAllocationPools: [{'start': '10.3.3.70', 'end': '10.3.3.100'}]
  # Gateway router for the external network
  ExternalInterfaceDefaultRoute: 10.3.3.1
  # Uncomment if using the Management Network (see network-management.yaml)
  # ManagementNetCidr: 10.0.1.0/24
  # ManagementAllocationPools: [{'start': '10.0.1.10', 'end': '10.0.1.50'}]
  # Use either this parameter or ControlPlaneDefaultRoute in the NIC templates
  # ManagementInterfaceDefaultRoute: 10.0.1.1
  # Define the DNS servers (maximum 2) for the overcloud nodes
  DnsServers: ["8.8.8.8","8.8.4.4"]
  # List of Neutron network types for tenant networks (will be used in order)
  NeutronNetworkType: 'vxlan,vlan'
  # The tunnel type for the tenant network (vxlan or gre). Set to '' to disable tunneling.
  NeutronTunnelTypes: 'vxlan'
  # Neutron VLAN ranges per network, for example 'datacentre:1:499,tenant:500:1000':
  NeutronNetworkVLANRanges: 'datacentre:1:1000'
  # Customize bonding options, e.g. "mode=4 lacp_rate=1 updelay=1000 miimon=100"
  # for Linux bonds w/LACP, or "bond_mode=active-backup" for OVS active/backup.
  BondInterfaceOvsOptions: "bond_mode=active-backup"
Please advise further.
BR,
Asim
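For a two-node layout like the one above, the deploy command might look like the following sketch (the heat template path is the default /usr/share/openstack-tripleo-heat-templates, and the control/compute flavors are assumed to match the profiles shown earlier):

$ openstack overcloud deploy --templates \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e ~/templates/network-environment.yaml \
    --control-scale 1 --compute-scale 1 \
    --control-flavor control --compute-flavor compute \
    --ntp-server pool.ntp.org

Here network-isolation.yaml enables the isolated networks and the network-environment.yaml shown above supplies their subnets, VLANs and allocation pools.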
Hi,
I am configuring/installing OOO following your article, but I am facing a PXE boot issue while performing introspection on the bare-metal nodes. We are using the pxe_ilo driver. Could you please help…
Undercloud VM Interface details:
10.80.133.16/28 dev eth0 proto kernel scope link src 10.80.133.18 (external access, eth0)
192.168.126.0/24 dev br-ctlplane proto kernel scope link src 192.168.126.20 (provisioning, eth1)
instackenv.json:
{
  "nodes": [
    {
      "mac": [
        "00:9C:02:99:xx:xx"
      ],
      "capabilities": "profile:control,boot_option:local",
      "pm_type": "pxe_ilo",
      "pm_user": "ilo_web_gui_user",
      "pm_password": "ilo_web_gui_pwd",
      "pm_addr": "10.80.133.3"
    }
  ]
}
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| 2656f73f-1e7f-45f8-8c4f-bc5684d13203 | None | None          | power on    | manageable         | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
[stack@director ~]$ openstack overcloud node introspect --all-manageable --provide
Started Mistral Workflow. Execution ID: 9e775928-103d-4010-a5ec-72d05b0ea648
Waiting for introspection to finish…
Introspection for UUID 0d44842b-d118-4564-9a41-043e924c9935 finished with error: Introspection timeout
Introspection completed with errors:
0d44842b-d118-4564-9a41-043e924c9935: Introspection timeout
DUCT_NAME': {'PRODUCT_NAME': {'VALUE': 'ProLiant SL230s Gen8 '}}, 'VERSION': '2.23', 'RESPONSE': {'STATUS': '0x0000', 'MESSAGE': 'No error'}} _execute_command /usr/lib/python2.7/site-packages/proliantutils/ilo/ribcl.py:340
Oct 18 10:45:51 director ironic-conductor[9692]: 2017-10-18 10:45:51.369 9692 DEBUG proliantutils.ilo.client [req-c23defcb-8763-41b7-9bd0-1d04ad176b86 - - - - -] [iLO 10.80.133.3] SNMP credentials not provided. SNMP inspection will not be performed. _validate_snmp /usr/lib/python2.7/site-packages/proliantutils/ilo/client.py:129
Oct 18 10:45:51 director ironic-conductor[9692]: 2017-10-18 10:45:51.369 9692 DEBUG proliantutils.ilo.client [req-c23defcb-8763-41b7-9bd0-1d04ad176b86 - - - - -] [iLO 10.80.133.3] IloClient object created. Model: ProLiant SL230s Gen8 __init__ /usr/lib/python2.7/site-packages/proliantutils/ilo/client.py:78
Oct 18 10:45:51 director ironic-conductor[9692]: 2017-10-18 10:45:51.369 9692 DEBUG proliantutils.ilo.client [req-c23defcb-8763-41b7-9bd0-1d04ad176b86 - - - - -] [iLO 10.80.133.3] Using RIBCLOperations for method get_host_power_status. _call_method /usr/lib/python2.7/site-packages/proliantutils/ilo/client.py:141
Oct 18 10:45:51 director ironic-conductor[9692]: 2017-10-18 10:45:51.370 9692 DEBUG proliantutils.ilo.ribcl [req-c23defcb-8763-41b7-9bd0-1d04ad176b86 - - - - -] [iLO 10.80.133.3] POST https://10.80.133.3:443/ribcl with request data: {'headers': {'Content-length': '185'}, 'data': '\r\n\r\n\r\n\r\n\r\n\r\n\r\n', 'verify': False} _request_ilo /usr/lib/python2.7/site-packages/proliantutils/ilo/ribcl.py:146
Oct 18 10:45:51 director ironic-conductor[9692]: 2017-10-18 10:45:51.707 9692 DEBUG proliantutils.ilo.ribcl [req-c23defcb-8763-41b7-9bd0-1d04ad176b86 - - - - -] [iLO 10.80.133.3] Received response data: {'GET_HOST_POWER': {'HOST_POWER': 'ON'}, 'VERSION': '2.23', 'RESPONSE': {'STATUS': '0x0000', 'MESSAGE': 'No error'}} _execute_command /usr/lib/python2.7/site-packages/proliantutils/ilo/ribcl.py:340
Oct 18 10:45:51 director ironic-conductor[9692]: 2017-10-18 10:45:51.707 9692 DEBUG ironic.conductor.task_manager [req-c23defcb-8763-41b7-9bd0-1d04ad176b86 - - - - -] Successfully released shared lock for power state sync on node 2656f73f-1e7f-45f8-8c4f-bc5684d13203 (lock was held 0.67 sec) release_resources /usr/lib/python2.7/site-packages/ironic/conductor/task_manager.py:331
Oct 18 10:45:57 director ironic-inspector[13907]: 2017-10-18 10:45:57.022 13907 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_clean_up' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:45:58 director ironic-inspector[13907]: 2017-10-18 10:45:58.776 13907 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:45:58 director ironic-inspector[13907]: 2017-10-18 10:45:58.779 13907 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
Oct 18 10:46:13 director ironic-inspector[13907]: 2017-10-18 10:46:13.777 13907 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:46:13 director ironic-inspector[13907]: 2017-10-18 10:46:13.779 13907 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
Oct 18 10:46:28 director ironic-inspector[13907]: 2017-10-18 10:46:28.778 13907 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:46:28 director ironic-inspector[13907]: 2017-10-18 10:46:28.780 13907 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
Oct 18 10:46:43 director ironic-inspector[13907]: 2017-10-18 10:46:43.780 13907 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:46:43 director ironic-inspector[13907]: 2017-10-18 10:46:43.783 13907 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
Oct 18 10:46:50 director ironic-conductor[9692]: 2017-10-18 10:46:50.655 9692 DEBUG futurist.periodics [-] Submitting periodic function 'ironic.conductor.manager.ConductorManager._sync_local_state' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:46:51 director ironic-conductor[9692]: 2017-10-18 10:46:51.011 9692 DEBUG futurist.periodics [-] Submitting periodic function 'ironic.conductor.manager.ConductorManager._check_cleanwait_timeouts' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
Oct 18 10:46:51 director ironic-conductor[9692]: 2017-10-18 10:46:51.014 9692 DEBUG futurist.periodics [-] Submitting periodic function 'ironic.conductor.manager.ConductorManager._check_deploy_timeouts' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
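Before retrying introspection, a useful sanity check is Ironic's driver validation, which confirms the iLO power and management settings for the node (the UUID is taken from the listing above):

$ openstack baremetal node validate 2656f73f-1e7f-45f8-8c4f-bc5684d13203

Any failure reported for the power or management interfaces points at credentials or connectivity rather than PXE itself.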
To me this looks like a firewalld issue; make sure the firewall is not blocking the PXE communications (DHCP/TFTP) on the provisioning network. Let me know if that was the issue or something else?
Keith
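A quick way to check that on the undercloud (a sketch; introspection needs DHCP and TFTP to reach the node on the provisioning interface, and the rules may be plain iptables rather than firewalld):

$ sudo iptables -L -n | grep -iE 'dpt:(67|69)'
$ sudo tcpdump -i br-ctlplane -n port 67 or port 68 or port 69

If no DHCP requests appear while the node PXE boots, the problem is upstream of the undercloud (NIC boot order, VLAN, or cabling).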
Hi Keith,
I tried with "pxe_ssh" and "pxe_ipmitool" but received the following error. Provisioning did succeed with fake_pxe, but with that option introspection failed. Can you please help me with this?
Successfully registered node UUID 9dbbdb03-3709-451c-b133-fa2ef71216cc
Successfully registered node UUID 383dd5a8-7913-4498-beb7-3999cea0fc86
Successfully registered node UUID a6c67817-f1a1-4960-a389-8f5238c67e94
Started Mistral Workflow. Execution ID: 5ce6813c-4c02-4735-b868-b0211720a9cc
Failed to set nodes to available state: IronicAction.node.set_provision_state failed: : The requested action "provide" can not be performed on node "9dbbdb03-3709-451c-b133-fa2ef71216cc" while it is in state "enroll".
IronicAction.node.set_provision_state failed: : The requested action "provide" can not be performed on node "383dd5a8-7913-4498-beb7-3999cea0fc86" while it is in state "enroll".
IronicAction.node.set_provision_state failed: : The requested action "provide" can not be performed on node "a6c67817-f1a1-4960-a389-8f5238c67e94" while it is in state "enroll".
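For reference, this error usually means the registration workflow left the nodes in the "enroll" state; they must reach "manageable" before "provide" can run. A possible manual recovery (UUIDs taken from the output above):

$ for uuid in 9dbbdb03-3709-451c-b133-fa2ef71216cc 383dd5a8-7913-4498-beb7-3999cea0fc86 a6c67817-f1a1-4960-a389-8f5238c67e94; do openstack baremetal node manage $uuid; done
$ openstack overcloud node introspect --all-manageable --provide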
OSP-D is not really supported or designed to run on VMs… The steps worked for me, and it is hard for me to help with your environment. I would ensure communications work and that nothing is blocking, or retry: wipe the Ironic nodes and re-add them.
Hello,
I am a beginner in OpenStack as well as in Linux administration. I have tried to set up the environment prior to installing OpenStack, but I'm a bit confused by it.
I have a desktop PC (64 GB RAM, 24 CPUs, 1 TB of disk space). Initially I installed virt-manager on the host machine and tried to create VMs, but I did not know how to create multiple NICs in a VM. Later I installed VirtualBox and created a VM with 2 NICs: NIC 1 is a bridged adapter and NIC 2 is NAT'ed. VM2 was also created with 2 NICs in the same manner. Am I following the right procedure here? VM1's ifconfig is as follows:
[root@localhost network-scripts]# ifconfig
enp0s3: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.140.21  netmask 255.255.255.0  broadcast 192.168.140.255
        inet6 fe80::2b09:713d:7901:5473  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:4d:a5:4f  txqueuelen 1000  (Ethernet)
        RX packets 224151  bytes 331916861 (316.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 130390  bytes 9052230 (8.6 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
enp0s8: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.0.3.15  netmask 255.255.255.0  broadcast 10.0.3.255
        inet6 fe80::ce45:a206:8a6c:13af  prefixlen 64  scopeid 0x20<link>
        ether 08:00:27:82:63:5c  txqueuelen 1000  (Ethernet)
        RX packets 2  bytes 1180 (1.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 39  bytes 5445 (5.3 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1  (Local Loopback)
        RX packets 24  bytes 2808 (2.7 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 24  bytes 2808 (2.7 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:e7:ee:cc  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@localhost network-scripts]#
Regards,
Krishna.
This won't work with a desktop virtualizer like VirtualBox; you need a real hypervisor such as KVM, because TripleO needs to talk to it to power machines on/off and manage them.
If you are new to Linux, I recommend strengthening your skills before jumping into something like OpenStack. Focus on bridging, networking, LVM, KVM, network namespaces and Open vSwitch; without a solid Linux foundation, IMHO you are wasting your time with OpenStack and won't be successful. Just some advice.
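On the earlier virt-manager question: with plain libvirt you can define an isolated network and attach extra NICs from the command line. A minimal sketch (the network name, bridge name and domain name vm1 are placeholders):

$ cat > /tmp/provisioning.xml <<EOF
<network>
  <name>provisioning</name>
  <bridge name='virbr-prov'/>
</network>
EOF
$ virsh net-define /tmp/provisioning.xml
$ virsh net-start provisioning
$ virsh net-autostart provisioning
$ virsh attach-interface --domain vm1 --type network --source provisioning --model virtio --config

Omitting the <ip> element means libvirt runs no DHCP on that network, which is what a TripleO provisioning network needs since the undercloud provides DHCP itself.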
In RHEL 7 the 'virt-install' syntax changed, and the network should be 'external' instead of 'overcloud'. The following syntax is working:
# for i in {1..9}; do virt-install --name overcloud-node$i -r 4 --vcpus 4 --os-variant rhel7 --disk path=/var/lib/libvirt/images/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --cpu SandyBridge-IBRS,+vmx --dry-run --print-xml > /tmp/overcloud-node$i.xml; virsh define --file /tmp/overcloud-node$i.xml; done
I am deploying in HA mode, so I am using 9 nodes, including 3 for Ceph.
Thanks for the feedback and tips!!!
Small typo in the earlier syntax: it should be -r 4096.
Thanks for pointing that out!