Overview
In this article we will set up an OpenStack environment based on Newton using the Red Hat OpenStack Platform. OpenStack is OpenStack, but every distribution differs in which capabilities or technologies are supported and in how OpenStack is installed, configured and upgraded.
The Red Hat OpenStack Platform uses OpenStack director, based on the TripleO (OpenStack on OpenStack) project, to install, configure and update OpenStack. Director is a lifecycle management tool for OpenStack. Red Hat’s approach is to make OpenStack easy to manage without compromising the “Open” part of OpenStack. If management of OpenStack can be made simpler and the learning curve brought down, it has a real chance to become the next-gen virtualization platform. What company wouldn’t want to consume its internal IT resources the way it uses AWS, GCE or Azure, without giving anything up to do so? We aren’t there yet, but Red Hat is making bold strides and, as you will see in this article, is on a journey to make OpenStack consumable for everyone!
Red Hat OpenStack Platform
The Red Hat OpenStack Platform uses director to build, manage and upgrade Red Hat OpenStack. Director is in fact a minimal OpenStack deployment itself, with everything needed to deploy OpenStack. The main piece outside of the OpenStack core (Nova, Neutron, Glance, Swift and Heat) is Ironic. The Ironic project is focused on bare-metal-as-a-service.
Director allows you to add physical nodes to Ironic and assign them OpenStack roles: compute, control, storage, network, etc. Once roles are assigned, an OpenStack environment can be deployed, upgraded and even scaled. As mentioned, director is a complete lifecycle management tool that uses OpenStack to manage OpenStack.
In this article we will deploy director (undercloud) on a single VM. We will add three baremetal nodes (VMs) and then deploy OpenStack (overcloud) in a minimal configuration (1 controller node and 1 compute node). I am able to run this on a laptop with just 12GB RAM.
Lab Environment
My idea for this configuration was to build the most minimal OpenStack environment possible, something that would run on my laptop with just 12GB RAM using Red Hat OpenStack director. In the end this experiment was successful and the configuration used is as follows:
- KVM Hypervisor Physical Laptop: RHEL 7.3, CentOS or Fedora, Dual core, 12 GB RAM and 250GB disk
- Undercloud VM: RHEL 7.3, 2x vCPUs, 4GB RAM, 1 x NIC (provisioning), 1 x NIC (external) and 40GB disk
- Overcloud Controller VM: RHEL 7.3, 2 x vCPUs, 6GB RAM, 1 x NIC (provisioning), 2 x NICs (external) and 30GB disk
- Overcloud Compute VM: RHEL 7.3, 2 x vCPU, 4GB RAM, 1 x NIC (provisioning), 2 x NICs (external) and 20GB disk
Networking Setup
In this configuration we are using virtual networks provided by the hypervisor host (my laptop). Create provisioning and external networks on the KVM hypervisor host. Ensure that NAT forwarding is enabled and DHCP is disabled on the external network. We run the OpenStack overcloud on the external network. The provisioning network should be non-routable with DHCP disabled. The undercloud will handle DHCP services for the provisioning network and other IPs will be statically assigned.
[Hypervisor]
Create external network for the overcloud.
[ktenzer@ktenzer ~]$ cat > /tmp/external.xml <<EOF
<network>
  <name>external</name>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
  </ip>
</network>
EOF
Note: hypervisor is 192.168.122.1 and reachable via this IP from undercloud.
[ktenzer@ktenzer ~]$ sudo virsh net-define /tmp/external.xml
[ktenzer@ktenzer ~]$ sudo virsh net-autostart external
[ktenzer@ktenzer ~]$ sudo virsh net-start external
Create provisioning network for undercloud.
Note: gateway is 192.168.126.254 as we will use 192.168.126.1 as IP for the VM running our undercloud.
[ktenzer@ktenzer ~]$ cat > /tmp/provisioning.xml <<EOF
<network>
  <name>provisioning</name>
  <ip address='192.168.126.254' netmask='255.255.255.0'>
  </ip>
</network>
EOF
[ktenzer@ktenzer ~]$ sudo virsh net-define /tmp/provisioning.xml
[ktenzer@ktenzer ~]$ sudo virsh net-autostart provisioning
[ktenzer@ktenzer ~]$ sudo virsh net-start provisioning
Deploy Undercloud
First install Red Hat Enterprise Linux (RHEL) 7.3 on the undercloud VM. Register it with subscription manager and configure the RPM repositories required for the Red Hat OpenStack Platform.
[Undercloud]
[root@director ~]# subscription-manager register
[root@director ~]# subscription-manager list --available
[root@director ~]# subscription-manager attach --pool=
[root@director ~]# subscription-manager repos --disable=*
[root@director ~]# subscription-manager repos --enable=rhel-7-server-rpms \
  --enable=rhel-7-server-extras-rpms \
  --enable=rhel-7-server-rh-common-rpms \
  --enable=rhel-ha-for-rhel-7-server-rpms \
  --enable=rhel-7-server-openstack-10-rpms
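Before moving on it is worth confirming the repositories actually took. A minimal sketch (not part of the original steps): in practice you would capture `subscription-manager repos --list-enabled` to a file and compare it against the five repos above; here the enabled list is simulated inline so the check is self-contained.

```shell
# List of repos required for Red Hat OpenStack Platform 10 (from the article).
# In practice: subscription-manager repos --list-enabled > /tmp/repos.txt
# and replace the inline list below with $(cat /tmp/repos.txt).
enabled='rhel-7-server-rpms
rhel-7-server-extras-rpms
rhel-7-server-rh-common-rpms
rhel-ha-for-rhel-7-server-rpms
rhel-7-server-openstack-10-rpms'

missing=0
for repo in rhel-7-server-rpms rhel-7-server-extras-rpms \
            rhel-7-server-rh-common-rpms rhel-ha-for-rhel-7-server-rpms \
            rhel-7-server-openstack-10-rpms; do
  # grep -qx matches the whole line exactly
  echo "$enabled" | grep -qx "$repo" || { echo "missing: $repo"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "all required repos enabled"
```

If anything prints as missing, fix the repos before running yum update; several commenters below hit undercloud install failures traced back to wrong repos.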
Update all packages and reboot.
[root@director ~]# yum update -y
[root@director ~]# systemctl reboot
Install Director Packages.
[root@director ~]# yum install -y python-tripleoclient
Ensure host is defined in /etc/hosts.
[root@director ~]# vi /etc/hosts
192.168.122.90 ospd.lab.com ospd
Create Stack User.
[root@director ~]# useradd stack
[root@director ~]# passwd stack  # specify a password
Configure user with sudo permissions.
[root@director ~]# echo "stack ALL=(root) NOPASSWD:ALL" | tee -a /etc/sudoers.d/stack
[root@director ~]# chmod 0440 /etc/sudoers.d/stack
Switch to new stack user.
[root@director ~]# su - stack
[stack@director ~]$
Create directories for images and templates. Images are used to boot initial systems and provide baseline OS. Templates are used to customize deployment.
[stack@director ~]$ mkdir ~/images
[stack@director ~]$ mkdir ~/templates
Configure Director using the sample.
[stack@director ~]$ cp /usr/share/instack-undercloud/undercloud.conf.sample \ ~/undercloud.conf
In my environment the 192.168.126.0/24 network is the undercloud network and used for provisioning as well as deploying the overcloud.
[stack@undercloud ~]$ vi ~/undercloud.conf
[DEFAULT]
local_ip = 192.168.126.1/24
undercloud_public_vip = 192.168.126.2
undercloud_admin_vip = 192.168.126.3
local_interface = eth1
masquerade_network = 192.168.126.0/24
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
network_cidr = 192.168.126.0/24
network_gateway = 192.168.126.1
inspection_iprange = 192.168.126.30,192.168.126.99
generate_service_certificate = true
certificate_generation_ca = local
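One gotcha in this file: the inspection_iprange must not overlap the dhcp_start/dhcp_end range or the undercloud install fails. A quick sketch of the overlap check (illustrative only, comparing just the last octets since everything lives in 192.168.126.0/24; the values mirror the config above):

```shell
# Last octets of the ranges from undercloud.conf above; adjust to your own.
dhcp_start=100   # dhcp_start = 192.168.126.100
dhcp_end=150     # dhcp_end   = 192.168.126.150
insp_start=30    # inspection_iprange start
insp_end=99      # inspection_iprange end

# Two ranges [a,b] and [c,d] are disjoint if b < c or a > d.
if [ "$insp_end" -lt "$dhcp_start" ] || [ "$insp_start" -gt "$dhcp_end" ]; then
  echo "OK: inspection and DHCP ranges do not overlap"
else
  echo "ERROR: inspection range overlaps DHCP range" >&2
  exit 1
fi
```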
Install the undercloud.
[stack@odpd ~]$ openstack undercloud install
#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/stack/undercloud-passwords.conf.

There is also a stackrc file at /home/stack/stackrc.

These files are needed to interact with the OpenStack services, and should be
secured.
#############################################################################
Import overcloud images.
[stack@odpd ~]$ source stackrc
[stack@odpd ~]$ sudo yum install -y \ rhosp-director-images rhosp-director-images-ipa
[stack@odpd ~]$ cd ~/images
[stack@odpd images]$ for i in \
  /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar \
  /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar; \
  do tar -xvf $i; done
[stack@odpd ~]$ openstack overcloud image upload --image-path \ /home/stack/images/
Configure DNS on undercloud network.
[stack@odpd ~]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                       |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 294ff536-dc8b-49a3-8327-62d9792d30a6 |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.200"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+
[stack@odpd ~]$ neutron subnet-update 294ff536-dc8b-49a3-8327-62d9792d30a6 \ --dns-nameserver 8.8.8.8
[Hypervisor]
Registering Overcloud Nodes. Create empty VM shells in KVM using virsh on the hypervisor host.
Note: You will need to change the disk path to suit your needs.
ktenzer$ cd /home/ktenzer/VirtualMachines
ktenzer$ for i in {1..3}; do sudo qemu-img create -f qcow2 \
  -o preallocation=metadata overcloud-node$i.qcow2 60G; done
ktenzer$ for i in {1..3}; do sudo virt-install --ram 4096 --vcpus 4 \
  --os-variant rhel7 \
  --disk path=/home/ktenzer/VirtualMachines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 \
  --noautoconsole --vnc --network network:provisioning \
  --network network:external --network network:external \
  --name overcloud-node$i --cpu SandyBridge,+vmx \
  --dry-run --print-xml > /tmp/overcloud-node$i.xml; \
  sudo virsh define --file /tmp/overcloud-node$i.xml; done
[Undercloud]
Copy ssh key from undercloud system to KVM hypervisor host for stack user.
[stack@odpd ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@192.168.122.1
Save the MAC addresses of the NICs used for provisioning.
Note: Ironic needs to know what MAC addresses a node has associated for provisioning network.
[stack@odpd images]$ for i in {1..3}; do virsh \ -c qemu+ssh://stack@192.168.122.1/system domiflist overcloud-node$i \ | awk '$3 == "provisioning" {print $5};'; done > /tmp/nodes.txt
[stack@odpd images]$ cat /tmp/nodes.txt
52:54:00:7e:d8:01
52:54:00:f6:a6:73
52:54:00:c9:b2:84
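Before building instackenv.json it is worth making sure every line in the file really is a MAC address (an empty or malformed line usually means the awk match against the provisioning network failed). A small sketch, using the sample values inline; in practice pipe /tmp/nodes.txt through the same grep:

```shell
# Sample MAC list standing in for /tmp/nodes.txt from the step above.
macs='52:54:00:7e:d8:01
52:54:00:f6:a6:73
52:54:00:c9:b2:84'

# grep -Ev prints any line that does NOT look like aa:bb:cc:dd:ee:ff.
# If it finds one, bail out; otherwise (grep exits non-zero) all is well.
echo "$macs" | grep -Ev '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$' \
  && { echo "ERROR: malformed MAC found" >&2; exit 1; } \
  || echo "all MACs valid"
```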
[stack@undercloud ~]$ jq . << EOF > ~/instackenv.json
{
  "ssh-user": "stack",
  "ssh-key": "$(cat ~/.ssh/id_rsa)",
  "power_manager": "nova.virt.baremetal.virtual_power_driver.VirtualPowerManager",
  "host-ip": "192.168.122.1",
  "arch": "x86_64",
  "nodes": [
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 1p /tmp/nodes.txt)"
      ],
      "cpu": "2",
      "memory": "4096",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 2p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "2048",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    },
    {
      "pm_addr": "192.168.122.1",
      "pm_password": "$(cat ~/.ssh/id_rsa)",
      "pm_type": "pxe_ssh",
      "mac": [
        "$(sed -n 3p /tmp/nodes.txt)"
      ],
      "cpu": "4",
      "memory": "2048",
      "disk": "60",
      "arch": "x86_64",
      "pm_user": "stack"
    }
  ]
}
EOF
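A quick structural self-check of the resulting file: every node entry should carry a pm_addr and a mac list, and the two counts should match. The sketch below runs against an inline two-node sample rather than the real ~/instackenv.json, so it is self-contained; point the checks at your own file in practice:

```shell
# Inline sample standing in for ~/instackenv.json (keys only, values dummied).
f=$(mktemp)
cat > "$f" <<'EOF'
{ "nodes": [
  { "pm_addr": "192.168.122.1", "pm_user": "stack", "pm_password": "key", "mac": ["52:54:00:7e:d8:01"] },
  { "pm_addr": "192.168.122.1", "pm_user": "stack", "pm_password": "key", "mac": ["52:54:00:f6:a6:73"] }
] }
EOF

# Each node entry carries exactly one pm_addr and one mac key, so the
# counts must agree if no entry is missing a MAC.
nodes=$(grep -c '"pm_addr"' "$f")
macs=$(grep -c '"mac"' "$f")
if [ "$nodes" -eq "$macs" ]; then
  echo "looks sane: $nodes nodes, each with a MAC list"
else
  echo "ERROR: node/MAC count mismatch" >&2
  exit 1
fi
rm -f "$f"
```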
Validate the introspection configuration. Download the instackenv validator script and run it against ~/instackenv.json before importing.
[stack@odpd ~]$ curl -O https://raw.githubusercontent.com/rthallisey/clapper/master/instackenv-validator.py
Import nodes into Ironic and set them to bootable.
[stack@odpd ~]$ openstack baremetal import --json ~/instackenv.json
[stack@odpd ~]$ openstack baremetal configure boot
[stack@odpd ~]$ openstack baremetal node list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| ea61b158-9cbd-46d2-93e9-eadaccb1589b | None | None          | power off   | available          | False       |
| 11aaf849-361e-4bda-81f5-74c245f554af | None | None          | power off   | available          | False       |
| 275448c1-aa8d-4854-bb3b-bc73e1e1a794 | None | None          | power off   | available          | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
Set nodes to managed.
[stack@odpd ~]$ for node in $(openstack baremetal node list -c UUID \ -f value) ; do openstack baremetal node manage $node ; done
List nodes.
[stack@odpd ~]$ openstack baremetal node list
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
| ea61b158-9cbd-46d2-93e9-eadaccb1589b | None | None          | power off   | manageable         | False       |
| 11aaf849-361e-4bda-81f5-74c245f554af | None | None          | power off   | manageable         | False       |
| 275448c1-aa8d-4854-bb3b-bc73e1e1a794 | None | None          | power off   | manageable         | False       |
+--------------------------------------+------+---------------+-------------+--------------------+-------------+
Run introspection against all managed nodes.
Note: Nodes are booted using a ramdisk and their hardware is inspected. Introspection prepares nodes for deployment into the overcloud.
[stack@odpd ~]$ openstack overcloud node introspect --all-manageable \ --provide
Tag Control Nodes.
Note: tagging nodes allows us to associate a node with a specific role in the overcloud.
[stack@odpd ~]$ openstack baremetal node set \ --property capabilities='profile:control,boot_option:local' \ 0e30226f-f208-41d3-9780-15fa5fdabbde
Tag Compute Nodes.
[stack@odpd ~]$ openstack baremetal node set \ --property capabilities='profile:compute,boot_option:local' \ cd3e3422-e7db-45c7-9645-858503a2cdc8
[stack@odpd ~]$ openstack baremetal node set \ --property capabilities='profile:compute,boot_option:local' \ 5078e1c1-fbe5-4d7f-a222-0c0fd32af423
Check Overcloud Profiles.
[stack@odpd ~]$ openstack overcloud profiles list
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
| 0e30226f-f208-41d3-9780-15fa5fdabbde |           | available       | control         |                   |
| cd3e3422-e7db-45c7-9645-858503a2cdc8 |           | available       | compute         |                   |
| 5078e1c1-fbe5-4d7f-a222-0c0fd32af423 |           | available       | compute         |                   |
+--------------------------------------+-----------+-----------------+-----------------+-------------------+
Deploy Overcloud
There are two ways to deploy the overcloud: 1) default and 2) customized. You will almost always want to customize your deployment, but when starting out the default method is a good way to simplify things and rule out potential problems. I recommend always doing a default install first just to get a baseline working environment, then throwing it away and redeploying with a customized install.
[Undercloud]
Option 1: Default Deployment
The default deployment will put the overcloud on the provisioning network. That means you end up with one network hosting both undercloud and overcloud. The external network is not used.
[stack@odpd ~]$ openstack overcloud deploy --templates --control-scale 1 \ --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Option 2: Customized Deployment
The really nice thing about director is the high degree of customization it offers. In this example we set the overcloud up on a single 192.168.122.0/24 network. Normally, however, you would have separate networks for OpenStack management, API, public, storage, etc.
Clone my github repository.
[stack@odpd ~]$ git clone https://github.com/ktenzer/openstack-heat-templates.git
Copy my templates to your local ~/templates directory.
[stack@odpd ~]$ cp ~/openstack-heat-templates/director/lab/osp10/templates/* ~/templates
Deploy overcloud using templates.
[stack@odpd ~]$ openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e ~/templates/network-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml \
  -e ~/templates/firstboot-environment.yaml --control-scale 1 \
  --compute-scale 1 --control-flavor control \
  --compute-flavor compute --ntp-server pool.ntp.org \
  --neutron-network-type vxlan --neutron-tunnel-types vxlan
2017-04-12 14:46:11Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
2017-04-12 14:46:12Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
2017-04-12 14:46:12Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully

Stack overcloud CREATE_COMPLETE
Started Mistral Workflow. Execution ID: 000ecec3-46aa-4e3f-96d9-8a240d34d6aa
/home/stack/.ssh/known_hosts updated.
Original contents retained as /home/stack/.ssh/known_hosts.old
Overcloud Endpoint: http://192.168.122.106:5000/v2.0
Overcloud Deployed
List overcloud nodes.
[stack@odpd ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 1e286764-9334-4ecd-9baf-e37a49a4fbd5 | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.106 |
| a21a14f5-94df-4a3a-8629-ba8d851525ff | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.103 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
Connect to overcloud controller from undercloud.
[stack@odpd ~]$ ssh heat-admin@192.168.126.103
[Overcloud Controller]
Get overcloud admin password.
Overcloud parameters generated during deployment, such as passwords, are stored in hiera.
[heat-admin@overcloud-controller-0 ~]$ sudo -i
[root@overcloud-controller-0 ~]# hiera keystone::admin_password
HngV6vc4ZP2bZ78ePfgWAvHAh
[Undercloud]
Create overcloud keystone source file.
[stack@odpd ~]$ vi overcloudrc
export OS_NO_CACHE=True
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.122.106:5000/v2.0
export NOVA_VERSION=1.1
export COMPUTE_API_VERSION=1.1
export OS_USERNAME=admin
export OS_PASSWORD=HngV6vc4ZP2bZ78ePfgWAvHAh
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
export OS_TENANT_NAME=admin
export PS1='[\u@\h \W]$ '
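It is easy to waste time on a half-sourced rc file when overcloud commands start failing with authentication errors. A small sketch (not from the original article) that verifies the variables the clients need are set; the values are exported inline here to stand in for sourcing overcloudrc:

```shell
# Stand-in for "source overcloudrc": export the credentials inline.
export OS_AUTH_URL=http://192.168.122.106:5000/v2.0
export OS_USERNAME=admin
export OS_PASSWORD=HngV6vc4ZP2bZ78ePfgWAvHAh
export OS_TENANT_NAME=admin

# Fail fast if any variable the OpenStack clients rely on is empty.
for var in OS_AUTH_URL OS_USERNAME OS_PASSWORD OS_TENANT_NAME; do
  eval "val=\${$var}"
  [ -n "$val" ] || { echo "ERROR: $var is not set" >&2; exit 1; }
done
echo "overcloud credentials look complete"
```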
Source overcloudrc.
[stack@odpd ~]$ source overcloudrc
List hypervisor hosts in overcloud.
[stack@odpd ~]$ nova hypervisor-list
+----+---------------------------------+-------+---------+
| ID | Hypervisor hostname             | State | Status  |
+----+---------------------------------+-------+---------+
| 1  | overcloud-compute-0.localdomain | up    | enabled |
+----+---------------------------------+-------+---------+
Troubleshooting Deployment
Let’s face it, in OpenStack there is a lot that can go wrong. I like this quote from Dirk Wallerstorfer:
“In short, OpenStack networking is a lot like Venice—there are masquerades and bridges all over the place!”
-Dirk Wallerstorfer
source: https://www.dynatrace.com/blog/openstack-network-mystery-2-bytes-cost-me-two-days-of-trouble/
[Undercloud]
Red Hat is making it much easier to troubleshoot deployment problems. While the deployment is running you can follow along in Heat by showing nested steps.
[stack@odpd ~]$ heat stack-list --show-nested
If for some reason the deployment fails, there is now a command to gather up all the information to make it really easy to find out what happened.
[stack@odpd ~]$ openstack stack failures list --long overcloud
Summary
OpenStack is the way of the future for virtualization platforms, and I think many traditional virtualization environments will move to OpenStack. The choice is simple: either stay on-premise and become OpenStack, or move to the public cloud. Of course some will stick with traditional virtualization (there are still lots and lots of mainframes around), but the clear trend is toward public cloud or OpenStack. The only things holding OpenStack back are complexity and manageability. Red Hat is focused on making OpenStack simple without losing the “Open” in OpenStack, in other words without compromising what makes OpenStack a great cloud computing platform. As you have seen in this article, the Red Hat OpenStack Platform is making great strides, and the fact that you can set up an OpenStack environment using enterprise, production-grade tooling on a laptop with 12GB RAM is a good sign.
Happy OpenStacking!
(c) 2017 Keith Tenzer
Hi,
Thanks for great openstack series guide.
I think you missed DHCP option for external network. Please correct me If I am wrong.
Quick question the password that is hiera keystone::admin_password generated. Is this the same password we use to login to horizon dashboard?
Yes
Awesome! thanks alot.
Hi,
Please write a post for LDAP integration with openstack if possible. It will be useful for beginners like me. Thanks
Hi Ravi,
You mean something like this?
https://keithtenzer.com/2016/03/08/openstack-keystone-integrating-ldap-with-ipa/
Keith
Thanks for your reference. It is a good topic and a nice explanation of LDAP integration with OpenStack. But I am looking for Active Directory integration with OpenStack, because many corporates use Active Directory for centralized user management. Please write such a step-by-step guide if you have time. It will be really useful. Thanks
Ravi.K
Hi Ravi,
I’ll see what I can do thanks for feedback.
Keith
While doing “openstack undercloud install” got below error “file /usr/lib64/python2.7/site-packages/M2Crypto-0.21.1-py2.7.egg-info from install of m2crypto-0.21.1.pulp-13.el7sat.x86_64 conflicts with file from package m2crypto-0.21.1-17.el7.x86_64” . Is there anything that I might have missed or messed up in the installation?
You using RHEL? Did you ensure proper repos are enabled? Did you do yum update?
Keith
Thanks, it was because I didn’t make sure of what repos were enabled. Once I fixed that it went through to some further point and threw some error again. I am using Ubuntu as the host OS with KVM running on it. I created a VM on KVM and installed RHEL on it, onto which I was trying to install director. Do you think the host OS should also be RHEL?
I am very new to openstack , so how to install openstack dashboard in this deployment ? is there something I need to install for getting horizon dashboard ? please help
No, Horizon is automatically set up if everything went well. You should be able to access the overcloud public IP via http or https.
Keith
Thanks 🙂
Hi Keith,
I know you are trying to create overcloud vm body without OS installed , but I am not able to understand the technical details of the command , BTW , once we created VM body, is there anything we need to do for PXE booting , I mean how to enable network booting in the overcloud nodes ?
Could you kindly explain what this section does , it is quiet difficult for a newbie for me , just an overview is enough sir , ” sudo for i in {1..3}; do qemu-img create -f qcow2 -o preallocation=metadata overcloud-node$i.qcow2 60G; done . and
sudo for i in {1..3}; do virt-install –ram 4096 –vcpus 4 –os-variant rhel7 –disk path=/home/ktenzer/VirtualMachines/overcloud-node$i.qcow2,device=disk,bus=virtio,format=qcow2 –noautoconsole –vnc –network network:provisioning –network network:external –network network:external –name overcloud-node$i –cpu SandyBridge,+vmx –dry-run –print-xml > /tmp/overcloud-node$i.xml; virsh define –file /tmp/overcloud-node$i.xml; done”
sincerely appreciate if you explain little bit about this , because I am not able to proceed further in my PoC. Please help
OpenStack director runs a DHCP and PXE server. You basically configure the VMs to boot from the network and then the PXE server (undercloud) installs the image. Once this is done they are available and you can deploy the OpenStack overcloud on those nodes and give each node a role like compute or mgmt. The commands above simply create the VMs, add a disk and set up the networking to boot via DHCP.
Keith ,
could you please clarify stack@odpd and stack@undercloud are the same host right ? ie undercloud vm right ? please clarify
Yes
Before we deploy overcloud do we need to create this VLAN in KVM environment?
InternalApiNetworkVlanID: 201
StorageNetworkVlanID: 202
StorageMgmtNetworkVlanID: 203
TenantNetworkVlanID: 204??
No
In your example undercloud.conf the dhcp range overlaps the inspection range. Automatically fails out the undercloud install unless you fix it.
Good catch, that is a typo; the inspection range should be .30-.99.
Keith
Hello Keith,
Thank you for your perfect posts. I am still going through this post but I noticed possible typo in undercloud /etc/hosts file:
[root@director ~]# vi /etc/hosts
192.168.122.50 ospd.lab.com ospd
I guess IP address should be 192.168.122.90?
Regards,
Ab
Yep good catch, fixed
Hi Keith,
Can we upgrade OSP 9 Director to OSP 11 Director directly ? i am not updating overcloud node.. i want upgrade undercloud node directly to OSP11
Nope, you need to go from 9 to 10 to 11. Direct upgrades exist only between long-life-cycle versions, which would be OSP 8, 11, 14, etc.
Keith
instead of using a laptop, can I deploy 4 VMS on vcenter?
it says to create an external network on the hypervisor. My question is, I will install 4 centos7/thel7 vms and one of them will be KVM/hypervisor. I will install KVM on one of the vm and then will deploy under cloud.
Well, if you already have a hypervisor for VMs then you don’t need an additional one. I had a single bare-metal system so I deployed KVM and the VMs on top, but if you have virtualization already you can skip that. Then you just need at minimum 3 VMs (1 for undercloud and 2 for overcloud compute + mgmt).
Keith
Hi Keith,
For single vlan deployment, how tagged vlans are created ? did you created in KVM ? please shower some inputs on vlan tagging ( 201 – 204 ) are mentioned in your network-environment.yaml file? , please give some hints on vlan tagging for the overcloud deployment.
My environment was a lab setup so a single physical network, there was no tagging at KVM level. OpenStack overcloud has various networks for example public, API, mgmt, storage, storage mgmt. For each of these vlans are created on the overcloud nodes. If you do “ip a” you will see this. OpenStack uses openvswitch by default to setup the SDN and vlans. This is done automatically by OpenStack director. You simply need to specify the vlans and the interfaces you want to use. Typically for each external network where you have public IPs you will specify interface. The OpenStack communications are all done via SDN and Neutron.
Keith
How did you connect your KVM host and undercloud VM? Is it through your external network (NAT), or have you created a bridge interface on the KVM host and connected it to the undercloud VM? Please let us know. If possible please post your undercloud VM creation arguments.
Thanks
Hi Keith,
Is it mandatory to have a non-root user on the hypervisor host, or can I SSH as root to the hypervisor host instead of stack? I am getting stuck during introspection of the overcloud VMs; the introspection runs for hours before it fails with a socket already closed error. I SSHed as root to the hypervisor and gave root as the user name in the instackenv JSON file. Any hint or suggestion on how to do a successful introspection would be very helpful.
Hi Keith,
I am trying to deploy RHEL OSP 10 using this guide on a KVM hypervisor. I am following the steps from your guide and have no problem until the introspection step. Whenever I do introspection it is not successful: when introspection starts, the overcloud node goes to the running state and stays there; it does not automatically go to the shut off state. I think I am missing something here. Please let me know some theory or tips behind the introspection step so I can do a successful introspection. I am trying this for the 7th time with no success; hopefully after your clarification I will be able to deploy successfully.
Thanks
Hi
there is a mis-configuration in the undercloud.conf file
you have mentioned
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
and
inspection_iprange = 192.168.126.130,192.168.126.99
So it’s not valid… it’s overlapping the range with DHCP.
It should be as following:
dhcp_start = 192.168.126.100
dhcp_end = 192.168.126.150
and
inspection_iprange = 192.168.126.160,192.168.126.199
By the way………..this tutorial is awesome. thanks and keep it up
Hi,
Any help please,
when I run the introspection command :
openstack overcloud node introspect --all-manageable --provide
Started Mistral Workflow. Execution ID: efc6e1b2-4b15-4e53-93db-cc62554c9ae6
Waiting for introspection to finish…
—-
I see only the error :
Introspection for UUID 1424ebf7-14ed-417e-b4c0-13e2d8de4446 finished with error: Introspection timeout
Introspection for UUID bfac8ae5-977b-4813-b93d-7bdc6a6b564a finished with error: Introspection timeout
Introspection completed with errors:
1424ebf7-14ed-417e-b4c0-13e2d8de4446: Introspection timeout
bfac8ae5-977b-4813-b93d-7bdc6a6b564a: Introspection timeout
—-
With “openstack baremetal node list” the nodes were manageable; after “openstack overcloud node introspect” the VMs on the compute node change to running.
From the Log : ironic/ironic-api.log
https://pastebin.com/KALpuws7
From : ironic-inspector/ironic-inspector.log
https://pastebin.com/jYLXie0b
From ironic-conductor log
https://pastebin.com/R9Ejr91t
Thanks in advance for your help
Hi Ashraf,
Looking at logs you are running into timeouts. Are you running things nested, meaning instrospection is happening on VMs (as blog is doing)? If so can you try disabling firewall on hypervisor. This looks to be communications issue.
Keith
Watch out for permissions on the overcloud vm disk files if you have placed them outside of /var/lib/libvirt/images. The best way to make sure VMs are bootable via virsh start overcloud-node1 on the baremetal node.
Thanks for feedback Ivan
Good article. I have deployed successfully.
One doubt: what are the next steps once the overcloud is deployed?
I mean, how do you use the network part for VM creation?
Thanks
Nice Article, worked for me 100%.
Now what is next once overcloud deployed .
I means network creation part .
please suggest
Glad things worked! For openstack networking and creating tenant networks you can refer to this article: https://keithtenzer.com/2016/07/18/openstack-networking-101-for-non-network-engineers/
hi, why are three NICs added to the overcloud VM? Can you please explain a bit more about the overcloud VM networking part
Because there are three networks: provisioning, mgmt and openstack
hi, i am facing this problem “The requested action “provide” can not be performed on node “24b3679b-73c0-419e-a017-cc5c97af94e0” while it is in state “enroll”. (HTTP 400)”. overcloud VM always stuck in “enroll” state.Any idea whats going wrong ??
Sorry never ran into this issue
Hi ,
Please check if you are able to access your overcloud nodes manually .
Also see if your instackenv.json has all details entered correctly .
Then delete the nodes from the ironic database (#ironic node-delete) and rerun #openstack baremetal import --json ~/instackenv.json
Hello Ktenzer, I’m having an issue while defining the networks for KVM (external and provisioning), because my host and guest systems are not pinging, and when I performed ‘tcpdump’ on the network interface it showed that ARP is not able to resolve the MAC address.
Please help me to solve this issue.
Try creating the networks manually and ensure you are using ‘NAT’. Then try and ping; if this doesn’t work, something is fundamentally wrong with your setup.
I’ve done many times but the output is same, even I’ve used both GUI and CLI mode to configure NAT but again it’s not working …
FOR NAT:
# cat external.xml
external
and FOR Provisioning:
# cat provisioning.xml
provisen
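The XML bodies in the comment above were stripped by the page renderer; only the network names survived. For anyone hitting the same ARP/ping problem, a minimal libvirt NAT network definition looks like the sketch below — the bridge name and address ranges are placeholders, only the network name comes from the comment:

```xml
<!-- external.xml: a minimal libvirt NAT network.
     Bridge name and IP range are placeholders; adjust to your lab. -->
<network>
  <name>external</name>
  <forward mode='nat'/>
  <bridge name='virbr-ext' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.100' end='192.168.122.200'/>
    </dhcp>
  </ip>
</network>
```

Define and activate it with virsh net-define external.xml, virsh net-start external, and virsh net-autostart external. The provisioning network is defined similarly, but should not run libvirt's DHCP, since director provides DHCP/PXE on that network.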
[Undercloud]
Copy the ssh key from the undercloud system to the KVM hypervisor host for the stack user:
[stack@odpd ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub stack@192.168.122
It is saying permission denied. Any clue?
Hi,
I am stuck at the ssh key copy process from the director node to the KVM host; it says permission denied. Please help me resolve this. Thanks.
I had the same issue; adding a stack user on the KVM hypervisor machine took care of it.
Can someone help me out? I'm trying to import nodes into Ironic and am stuck here:
[stack@director ~]$ openstack baremetal import --json ~/instackenv.json
Started Mistral Workflow. Execution ID: 1597540b-368d-46eb-ac15-f10df2d30c06
[stack@director ~]$ openstack baremetal import --json ~/instackenv.json
Started Mistral Workflow. Execution ID: 1597540b-368d-46eb-ac15-f10df2d30c06
Successfully registered node UUID 8a9a6051-f995-4a3b-8d6b-24885afb0123
Successfully registered node UUID 77614e0c-2850-418d-80e6-324889a8774f
Successfully registered node UUID 245dc9b3-d25b-4f32-a4f0-a168994fe692
Started Mistral Workflow. Execution ID: 705845d1-fca2-4396-898a-0fd02798fe14
Failed to set nodes to available state: IronicAction.node.set_provision_state failed: : The requested action "provide" can not be performed on node "8a9a6051-f995-4a3b-8d6b-24885afb0123" while it is in state "enroll".
IronicAction.node.set_provision_state failed: : The requested action "provide" can not be performed on node "77614e0c-2850-418d-80e6-324889a8774f" while it is in state "enroll".
IronicAction.node.set_provision_state failed: : The requested action "provide" can not be performed on node "245dc9b3-d25b-4f32-a4f0-a168994fe692" while it is in state "enroll".
[stack@director ~]$ neutron net-list
+————————————–+———-+——————————————————-+
| id | name | subnets |
+————————————–+———-+——————————————————-+
| d947c1fa-d225-4826-8397-7cb7c00a5058 | ctlplane | 53d6b32a-efcb-4f52-8730-4903d7d154cc 192.168.126.0/24 |
+————————————–+———-+——————————————————-+
[stack@director ~]$ neutron subnet-list
+————————————–+——+——————+——————————————————–+
| id | name | cidr | allocation_pools |
+————————————–+——+——————+——————————————————–+
| 53d6b32a-efcb-4f52-8730-4903d7d154cc | | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.150"} |
+————————————–+——+——————+——————————————————–+
[stack@director ~]$ mistral execution-list
+————————+————————+————————+————————+————————+———+————————–+———————+———————+
| ID | Workflow ID | Workflow name | Description | Task Execution ID | State | State info | Created at | Updated at |
+————————+————————+————————+————————+————————+———+————————–+———————+———————+
| 30c1cd42-2de4-4b1d- | 7fd2ae61-e1d1-48e4-9fd | tripleo.plan_managemen | | | SUCCESS | None | 2018-04-07 05:21:11 | 2018-04-07 05:21:44 |
| a1af-af77c66e3ed1 | 1-0d50379cd84a | t.v1.create_default_de | | | | | | |
| | | ployment_plan | | | | | | |
| e1bf51aa-2d98-418e-a26 | a1a8e7d0-6f19-4bf4-a8d | tripleo.validations.v1 | | | SUCCESS | None | 2018-04-07 05:21:47 | 2018-04-07 05:21:50 |
| 7-3f2ad9a1092a | 5-8d613513dfe0 | .copy_ssh_key | | | | | | |
| a7d82849-27ba- | 068a0321-4ebd- | tripleo.baremetal.v1.r | | | SUCCESS | None | 2018-04-07 06:27:51 | 2018-04-07 06:27:56 |
| 44f5-9615-b8ccb3447331 | 43cf-8175-dbdb83650165 | egister_or_update | | | | | | |
| 6ddd73ff-9d64-4cc4-95b | 068a0321-4ebd- | tripleo.baremetal.v1.r | | | SUCCESS | None | 2018-04-07 06:29:41 | 2018-04-07 06:54:14 |
| 2-84954da538a2 | 43cf-8175-dbdb83650165 | egister_or_update | | | | | | |
| 093be17a-3307-4859-912 | 2a567b73-7beb-4d24 | tripleo.baremetal.v1.s | sub-workflow execution | 47dd6b40-7f5c- | SUCCESS | None | 2018-04-07 06:29:44 | 2018-04-07 06:54:01 |
| 9-5abe93b44327 | -9f5a-915927b45d53 | et_node_state | | 4c25-b507-9706256d9bae | | | | |
| 0b37c51d-4e96-442a- | 2a567b73-7beb-4d24 | tripleo.baremetal.v1.s | sub-workflow execution | 47dd6b40-7f5c- | SUCCESS | None | 2018-04-07 06:29:44 | 2018-04-07 06:53:54 |
| 8fa1-37d1636b6ee7 | -9f5a-915927b45d53 | et_node_state | | 4c25-b507-9706256d9bae | | | | |
| 44a0dc07-1992-4162-b08 | 2a567b73-7beb-4d24 | tripleo.baremetal.v1.s | sub-workflow execution | 47dd6b40-7f5c- | SUCCESS | None | 2018-04-07 06:29:44 | 2018-04-07 06:54:10 |
| 7-9afdd8964644 | -9f5a-915927b45d53 | et_node_state | | 4c25-b507-9706256d9bae | | | | |
| 0333784a-7f69-4f46 | 8e3ea4a9-336b-48bd- | tripleo.baremetal.v1.p | | | SUCCESS | None | 2018-04-07 06:54:14 | 2018-04-07 06:54:22 |
| -904a-f7ae339eb95a | be77-c23f6c1c81a7 | rovide | | | | | | |
| 0850be58-8ea2-480b-920 | 2a567b73-7beb-4d24 | tripleo.baremetal.v1.s | sub-workflow execution | ddbe0c0b-3ab3-4d39-b4d | ERROR | Failure caused by error | 2018-04-07 06:54:14 | 2018-04-07 06:54:16 |
| 4-05d6c97f3cc6 | -9f5a-915927b45d53 | et_node_state | | 8-20c500c15294 | | i… | | |
| 481b274d-d901-4591 | 2a567b73-7beb-4d24 | tripleo.baremetal.v1.s | sub-workflow execution | ddbe0c0b-3ab3-4d39-b4d | ERROR | Failure caused by error | 2018-04-07 06:54:14 | 2018-04-07 06:54:16 |
| -bc5e-95d67256e73f | -9f5a-915927b45d53 | et_node_state | | 8-20c500c15294 | | i… | | |
| 7ff36a1c-76e5-483e- | 2a567b73-7beb-4d24 | tripleo.baremetal.v1.s | sub-workflow execution | ddbe0c0b-3ab3-4d39-b4d | ERROR | Failure caused by error | 2018-04-07 06:54:14 | 2018-04-07 06:54:16 |
| ba07-144dae76d177 | -9f5a-915927b45d53 | et_node_state | | 8-20c500c15294 | | i…
[root@director ~]# tail -f /var/log/ironic-inspector/ironic-inspector.log
2018-04-07 11:35:03.509 1308 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
2018-04-07 11:35:18.508 1308 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
2018-04-07 11:35:18.510 1308 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
2018-04-07 11:35:33.510 1308 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
2018-04-07 11:35:33.512 1308 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
2018-04-07 11:35:48.511 1308 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
2018-04-07 11:35:48.513 1308 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
2018-04-07 11:36:03.393 1308 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_clean_up' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
2018-04-07 11:36:03.512 1308 DEBUG futurist.periodics [-] Submitting periodic function 'ironic_inspector.main.periodic_update' _process_scheduled /usr/lib/python2.7/site-packages/futurist/periodics.py:614
2018-04-07 11:36:03.514 1308 DEBUG ironic_inspector.firewall [-] DHCP is already disabled, not updating _disable_dhcp /usr/lib/python2.7/site-packages/ironic_inspector/firewall.py:142
on Hypervisor node
[root@hypervisor ~]# virsh list --all
Id Name State
—————————————————-
1 director running
– overcloud-node1 shut off
– overcloud-node2 shut off
– overcloud-node3 shut off
I can start the overcloud nodes via virsh start, and checked the settings of each overcloud node in virt-manager; they look good to me.
I don't know what to look at next.
Any help would be greatly appreciated.
I am good now, thanks:
[stack@director ~]$ ironic node-list
+————————————–+——+—————+————-+——————–+————-+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+————————————–+——+—————+————-+——————–+————-+
| a6a91b9b-fcb5-49d4-9c25-bc214931e7ab | None | None | power off | available | False |
| 2fd68d27-ff03-46b8-93bd-f4956aec2cba | None | None | power off | available | False |
| a666fbe5-b42c-4a2d-8c14-5665f255b379 | None | None | power off | available | False |
+————————————–+——+—————+————-+——————–+————-+
Can someone help me out?
Failing overcloud deployment (missing /var/lib/os-collect-config/local-data on the controller node):
[stack@director ~]$ openstack stack resource show overcloud ObjectStorageAllNodesDeployment
+————————+————————————————————————————————————————————————+
| Field | Value |
+————————+————————————————————————————————————————————————+
| attributes | {u'deploy_stderrs': None, u'deploy_stdouts': None, u'deploy_status_codes': None} |
| creation_time | 2018-04-09T05:52:35Z |
| description | |
| links | [{u'href': u'https://192.168.126.2:13004/v1/5cc7edaee311446cb76ddf948865b242/stacks/overcloud/ebabb489-b1a7-4f1f-b50d- |
| | e648030d8d8d/resources/ObjectStorageAllNodesDeployment', u'rel': u'self'}, {u'href': |
| | u'https://192.168.126.2:13004/v1/5cc7edaee311446cb76ddf948865b242/stacks/overcloud/ebabb489-b1a7-4f1f-b50d-e648030d8d8d', u'rel': u'stack'}] |
| logical_resource_id | ObjectStorageAllNodesDeployment |
| physical_resource_id | |
| required_by | [u'UpdateWorkflow', u'AllNodesDeploySteps', u'ObjectStorageAllNodesValidationDeployment'] |
| resource_name | ObjectStorageAllNodesDeployment |
| resource_status | INIT_COMPLETE |
| resource_status_reason | |
| resource_type | OS::Heat::StructuredDeployments |
| updated_time | 2018-04-09T05:52:35Z |
+————————+————————————————————————————————————————————————+
[stack@director ~]$ openstack stack list
+————————————–+————+——————–+———————-+————–+
| ID | Stack Name | Stack Status | Creation Time | Updated Time |
+————————————–+————+——————–+———————-+————–+
| ebabb489-b1a7-4f1f-b50d-e648030d8d8d | overcloud | CREATE_IN_PROGRESS | 2018-04-09T05:52:34Z | None |
+————————————–+————+——————–+———————-+————–+
[stack@director ~]$ nova list
+————————————–+————————+——–+————+————-+————————–+
| ID | Name | Status | Task State | Power State | Networks |
+————————————–+————————+——–+————+————-+————————–+
| 7eae42cc-c423-4222-ae70-a84d4da6fbdd | overcloud-compute-0 | ACTIVE | – | Running | ctlplane=192.168.126.102 |
| d0514458-691e-414f-b23a-fb30c8e8a008 | overcloud-compute-1 | ACTIVE | – | Running | ctlplane=192.168.126.107 |
| a746236d-8fd5-4bb8-81e8-1657433627ef | overcloud-controller-0 | ACTIVE | – | Running | ctlplane=192.168.126.111 |
+————————————–+————————+——–+————+————-+————————–+
[root@overcloud-controller-0 os-collect-config]# tail -f /var/log/messages
Apr 9 02:09:55 localhost os-collect-config: /var/lib/os-collect-config/local-data not found. Skipping
Apr 9 02:09:55 localhost os-collect-config: No local metadata found (['/var/lib/os-collect-config/local-data'])
Apr 9 02:10:25 localhost os-collect-config: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /latest/meta-data/ (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Apr 9 02:10:25 localhost os-collect-config: Source [ec2] Unavailable.
Apr 9 02:10:25 localhost os-collect-config: /var/lib/os-collect-config/local-data not found. Skipping
Apr 9 02:10:25 localhost os-collect-config: No local metadata found (['/var/lib/os-collect-config/local-data'])
Apr 9 02:10:55 localhost os-collect-config: HTTPConnectionPool(host='169.254.169.254', port=80): Max retries exceeded with url: /latest/meta-data/ (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))
Apr 9 02:10:55 localhost os-collect-config: Source [ec2] Unavailable.
Apr 9 02:10:56 localhost os-collect-config: /var/lib/os-collect-config/local-data not found. Skipping
Apr 9 02:10:56 localhost os-collect-config: No local metadata found (['/var/lib/os-collect-config/local-data'])
Apr 9 02:11:26 localhost os-collect-config: HTTPConnectionPool(host=’169.254.169.254′, port=80): Max retries e
[root@overcloud-controller-0 os-collect-config]# ip route show
169.254.169.254 via 192.168.126.1 dev eth0
172.16.0.0/24 dev vlan201 proto kernel scope link src 172.16.0.15
172.17.0.0/24 dev vlan204 proto kernel scope link src 172.17.0.10
172.18.0.0/24 dev vlan202 proto kernel scope link src 172.18.0.19
172.19.0.0/24 dev vlan203 proto kernel scope link src 172.19.0.10
192.168.122.0/24 dev br-ex proto kernel scope link src 192.168.122.105
192.168.126.0/24 dev eth0 proto kernel scope link src 192.168.126.111
[root@overcloud-controller-0 os-collect-config]# ping 169.254.169.254
PING 169.254.169.254 (169.254.169.254) 56(84) bytes of data.
From 192.168.126.1 icmp_seq=1 Destination Port Unreachable
From 192.168.126.1 icmp_seq=2 Destination Port Unreachable