Red Hat Enterprise Virtualization (RHEV) - Management Options


Overview

RHEV has two distinct layers: the hypervisor itself and management. The hypervisor layer, RHEV-H, is built on Red Hat Enterprise Linux (RHEL) and uses KVM as the hypervisor technology. RHEV-H can be deployed using the pre-built RHEV-H image or on a standard RHEL installation. The management layer, Red Hat Enterprise Virtualization Management (RHEV-M), provides management for a multi-hypervisor environment and uses concepts such as datacenters, clusters, networks and storage domains to describe virtualization resources. In this article we will focus on options for deploying RHEV-M. The upstream open source project behind RHEV-M is oVirt. As of RHEV 3.5 there are two options for deploying RHEV-M: standalone or hosted engine.

Below are other articles you may find of interest relating to RHEV:

RHEV-M Standalone

RHEV-M standalone means configuring a physical or virtual RHEL system as a dedicated RHEV-M host.

PROS

  • Flexibility: RHEV-M can be installed on any RHEL 6 host.

CONS

  • HA is not built-in; you need to provide HA for RHEV-M yourself.
  • Cannot use RHEL 7.
  • Requires an additional host outside of the virtualization environment.
  • RHEV-M runs outside the virtualization environment and as such cannot directly utilize virtualization resources such as storage.

To configure RHEV using this method, follow the steps in the previously posted article.

RHEV-M Hosted Engine

RHEV-M hosted engine means configuring RHEV-M as a virtual machine running directly on a hypervisor host; RHEV-M lives inside the virtualization environment it is managing. Since RHEV-M must be configured before the hypervisor host can be added to it, the hosted engine is installed using yum directly on the chosen hypervisor host. Simply install RHEL 7 and follow the steps listed further below in this article.

PROS

  • Takes advantage of RHEV; HA is built-in.
  • Simplified and streamlined installation of RHEV.
  • Does not require a separate host system.

CONS

  • At least one hypervisor must be running in order to access RHEV-M.
  • If there are multiple separate virtualization environments, complexity can be greater.

Follow the guide below to configure a RHEV environment using the hosted engine.

  • Enable required repositories.
[root@rhevh01 ~]# subscription-manager register
[root@rhevh01 ~]# subscription-manager attach --pool=3848378728191899189
[root@rhevh01 ~]# subscription-manager repos --disable=*
[root@rhevh01 ~]# subscription-manager repos --enable=rhel-7-server-rpms
[root@rhevh01 ~]# subscription-manager repos --enable=rhel-7-server-supplementary-rpms
[root@rhevh01 ~]# subscription-manager repos --enable=rhel-7-server-optional-rpms
[root@rhevh01 ~]# subscription-manager repos --enable=rhel-7-server-rhev-mgmt-agent-rpms 
[root@rhevh01 ~]# yum update -y
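  • Verify enabled repositories (optional). The commands below only query subscription and repository state; nothing is changed.
[root@rhevh01 ~]# subscription-manager repos --list-enabled
[root@rhevh01 ~]# yum repolist enabled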

The hosted engine requires an NFS or iSCSI share. In this case we will use NFSv3 and create a local share on our hypervisor host. This storage is only used for the hosted engine and will not show up in RHEV-M.

  • Install iptables (optional). Note: as of the writing of this article, firewalld does not work with RHEV 3.5, at least in my experience.
[root@rhevh01 ~]# yum -y install iptables-services
  • Disable firewalld.
[root@rhevh01 ~]# systemctl stop firewalld
[root@rhevh01 ~]# systemctl disable firewalld
  • Enable iptables (optional).
[root@rhevh01 ~]# systemctl enable iptables
[root@rhevh01 ~]# systemctl start iptables
  • Configure iptables rules for RHEV and NFS (optional).
[root@rhevh01 ~]# vi /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [64:6816]
-A INPUT -p udp -m udp --dport 32769 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -p udp -m udp --dport 875 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 875 -j ACCEPT
-A INPUT -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m udp --dport 161 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38465 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38466 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 39543 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 55863 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38468 -j ACCEPT
-A INPUT -p udp -m udp --dport 963 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 965 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT
  • Restart iptables (optional).
[root@rhevh01 ~]# systemctl restart iptables
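  • Verify iptables rules (optional). A quick sanity check that the iptables service is active and the rules above were loaded; both commands only display state.
[root@rhevh01 ~]# systemctl status iptables
[root@rhevh01 ~]# iptables -S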
  • Configure NFS Services.
[root@rhevh01 ~]# yum install nfs-utils rpcbind
[root@rhevh01 ~]# systemctl enable rpcbind
[root@rhevh01 ~]# systemctl enable nfs-server
[root@rhevh01 ~]# systemctl start rpcbind
[root@rhevh01 ~]# systemctl start nfs-server
  • Configure NFS Share.
[root@rhevh01 ~]# mkdir /usr/share/rhev
[root@rhevh01 ~]# chown -R 36:36 /usr/share/rhev
[root@rhevh01 ~]# chmod -R 0755 /usr/share/rhev
[root@rhevh01 ~]# vi /etc/exports 
/usr/share/rhev 192.168.2.0/24(rw)
[root@rhevh01 ~]# exportfs -a
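  • Verify NFS export (optional). Both commands come with the nfs-utils package installed above and simply confirm the share is actually being exported before proceeding.
[root@rhevh01 ~]# exportfs -v
[root@rhevh01 ~]# showmount -e localhost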
  • Install Hosted Engine.
[root@rhevh01 ~]# yum install -y ovirt-hosted-engine-setup

Note: if you are connecting via ssh you will need to forward X11. If you are using Windows, you will need to install Xming and PuTTY as explained here.
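
For example, from a Linux workstation X11 forwarding can be enabled directly on the ssh connection; the workstation prompt and lab hostname below are just examples and will differ in your environment.
[user@workstation ~]$ ssh -X root@rhevh01.lab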

  • Install X11 libraries (optional).
[root@rhevh01 ~]# yum groupinstall -y "Server with GUI"
  • Configure Hosted Engine. Note: in this example screen is used, and it is strongly recommended when installing over an ssh connection. In fact, anytime you are working over ssh you should use screen; if something unexpected happens you can reconnect to the session with screen -r.
[root@rhevh01 ~]# yum install -y screen
[root@rhevh01 ~]# screen hosted-engine --deploy
--== VM CONFIGURATION ==--
Please specify the device to boot the VM from (cdrom, disk, pxe) [cdrom]: disk
 Please specify path to OVF archive you would like to use [None]: /usr/share/rhev/rhevm-appliance-20150421.0-1.x86_64.rhevm.ova
[ INFO ] Checking OVF archive content (could take a few minutes depending on archive size)
[ INFO ] Checking OVF XML content (could take a few minutes depending on archive size)
[WARNING] OVF does not contain a valid image description, using default.
 Please specify an alias for the Hosted Engine image [hosted_engine]:
 The following CPU types are supported by this host:
 - model_SandyBridge: Intel SandyBridge Family
 - model_Westmere: Intel Westmere Family
 - model_Nehalem: Intel Nehalem Family
 - model_Penryn: Intel Penryn Family
 - model_Conroe: Intel Conroe Family
 Please specify the CPU type to be used by the VM [model_SandyBridge]:
[WARNING] Minimum requirements for CPUs not met
 You may specify a unicast MAC address for the VM or accept a randomly generated default [00:16:3e:04:eb:78]:
 Please specify the console type you would like to use to connect to the VM (vnc, spice) [vnc]: spice
--== HOSTED ENGINE CONFIGURATION ==--
Enter the name which will be used to identify this host inside the Administrator Portal [hosted_engine_1]:
 Enter 'admin@internal' user password that will be used for accessing the Administrator Portal:
 Confirm 'admin@internal' user password:
 Please provide the FQDN for the engine you would like to use.
 This needs to match the FQDN that you will use for the engine installation within the VM.
 Note: This will be the FQDN of the VM you are now going to create,
 it should not point to the base host or to any other existing machine.
 Engine FQDN: he01.lab
[WARNING] Failed to resolve he01.lab using DNS, it can be resolved only locally
 Please provide the name of the SMTP server through which we will send notifications [localhost]:
 Please provide the TCP port number of the SMTP server [25]:
 Please provide the email address from which notifications will be sent [root@localhost]:
 Please provide a comma-separated list of email addresses which will get notifications [root@localhost]:
[ INFO ] Stage: Setup validation
[WARNING] Failed to resolve rhevh01.lab using DNS, it can be resolved only locally
--== CONFIGURATION PREVIEW ==--
Bridge interface : eno1
 Engine FQDN : he01.lab
 Bridge name : rhevm
 SSH daemon port : 22
 Firewall manager : iptables
 Gateway address : 192.168.0.1
 Host name for web application : hosted_engine_1
 Host ID : 1
 Image alias : hosted_engine
 Image size GB : 50
 Storage connection : rhevh01.lab:/usr/share/rhev
 Console type : qxl
 Memory size MB : 4096
 MAC address : 00:16:3e:04:eb:78
 Boot type : disk
 Number of CPUs : 1
 OVF archive (for disk boot) : /usr/share/rhev/rhevm-appliance-20150421.0-1.x86_64.rhevm.ova
 CPU Type : model_SandyBridge
Please confirm installation settings (Yes, No)[Yes]:

Once you confirm the settings, the hosted engine will be deployed and you will be prompted to connect to its console.

  • Using remote-viewer, connect to the console of the hosted engine.
[root@rhevh01 ~]# /bin/remote-viewer --spice-ca-file=/etc/pki/vdsm/libvirt-spice/ca-cert.pem spice://localhost?tls-port=5901 --spice-host-subject="C=EN, L=Test, O=Test, CN=Test"

(Screenshot: Hosted Engine Setup Tool)

  • Configure authentication settings and set root password in the tools menu.
  • Configure networking.
  • Register with subscription manager.
[root@rhevm ~]# subscription-manager register
[root@rhevm ~]# subscription-manager attach --pool=3948394898198989000922
  • Configure yum repositories.
[root@rhevm ~]# subscription-manager repos --disable=* 
[root@rhevm ~]# subscription-manager repos --enable=rhel-6-server-rpms 
[root@rhevm ~]# subscription-manager repos --enable=rhel-6-server-optional-rpms 
[root@rhevm ~]# subscription-manager repos --enable=rhel-6-server-supplementary-rpms 
[root@rhevm ~]# subscription-manager repos --enable=rhel-6-server-rhev-mgmt-agent-rpms 
[root@rhevm ~]# subscription-manager repos --enable=rhel-6-server-rhevm-3.5-rpms
  • Install RHEV-M.
# yum update -y
# yum install -y rhevm
# engine-setup
  • After completing engine-setup, go back to the hypervisor and press [1] to continue the hosted engine setup.
[ INFO ] Engine replied: DB Up!Welcome to Health Status!
 Enter the name of the cluster to which you want to add the host (Default) [Default]:
[ INFO ] Waiting for the host to become operational in the engine. This may take several minutes...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] Still waiting for VDSM host to become operational...
[ INFO ] The VDSM Host is now operational
 Please shutdown the VM allowing the system to launch it as a monitored service.
 The system will wait until the VM is down.
  • Shut down the hosted engine VM.
[root@rhevm ~]# shutdown -h now

Once the hosted engine VM is shut down, the installation completes. Congratulations, you have successfully deployed a RHEV environment using the hosted engine option.
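
If you want a quick health check at this point, the hosted engine HA services (ovirt-ha-agent and ovirt-ha-broker, installed with the hosted engine packages) should be running on the hypervisor and the engine VM should report as up:

[root@rhevh01 ~]# systemctl status ovirt-ha-agent ovirt-ha-broker
[root@rhevh01 ~]# hosted-engine --vm-status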

Add Additional Hypervisor Host to RHEV-M Hosted Engine

Adding an additional hypervisor host using the hosted engine is a simple process.

  • Enable required repositories.
[root@rhevh02 ~]# subscription-manager register
[root@rhevh02 ~]# subscription-manager attach --pool=3848378728191899189
[root@rhevh02 ~]# subscription-manager repos --disable=*
[root@rhevh02 ~]# subscription-manager repos --enable=rhel-7-server-rpms
[root@rhevh02 ~]# subscription-manager repos --enable=rhel-7-server-supplementary-rpms
[root@rhevh02 ~]# subscription-manager repos --enable=rhel-7-server-optional-rpms
[root@rhevh02 ~]# subscription-manager repos --enable=rhel-7-server-rhev-mgmt-agent-rpms 
[root@rhevh02 ~]# yum update -y
  • Install iptables (optional). Note: as of the writing of this article, firewalld does not work with RHEV 3.5.
[root@rhevh02 ~]# yum -y install iptables-services
  • Disable firewalld.
[root@rhevh02 ~]# systemctl stop firewalld
[root@rhevh02 ~]# systemctl disable firewalld
  • Enable iptables (optional).
[root@rhevh02 ~]# systemctl enable iptables
[root@rhevh02 ~]# systemctl start iptables
  • Configure iptables rules for RHEV and NFS (optional).
[root@rhevh02 ~]# vi /etc/sysconfig/iptables
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [64:6816]
-A INPUT -p udp -m udp --dport 32769 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 32803 -j ACCEPT
-A INPUT -p udp -m udp --dport 662 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 662 -j ACCEPT
-A INPUT -p udp -m udp --dport 875 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 875 -j ACCEPT
-A INPUT -p udp -m udp --dport 892 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 892 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp -m tcp --dport 54321 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 111 -j ACCEPT
-A INPUT -p udp -m udp --dport 111 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A INPUT -p udp -m udp --dport 161 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38465 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38466 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38467 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 2049 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 39543 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 55863 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 38468 -j ACCEPT
-A INPUT -p udp -m udp --dport 963 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 965 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 16514 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 5900:6923 -j ACCEPT
-A INPUT -p tcp -m multiport --dports 49152:49216 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -m physdev ! --physdev-is-bridged -j REJECT --reject-with icmp-host-prohibited
COMMIT
  • Restart iptables (optional).
[root@rhevh02 ~]# systemctl restart iptables
  • Install Hosted Engine.
[root@rhevh02 ~]# yum install -y ovirt-hosted-engine-setup

Note: if you are connecting via ssh you will need to forward X11. If you are using Windows, you will need to install Xming and PuTTY as explained here.

  • Install X11 libraries (optional).
[root@rhevh02 ~]# yum groupinstall -y "Server with GUI"
  • Configure Hosted Engine. Note: in this example screen is used, and it is strongly recommended when installing over an ssh connection.
[root@rhevh02 ~]# yum install -y screen
[root@rhevh02 ~]# screen hosted-engine --deploy

When prompted for the storage connection, provide the same NFS share used when originally configuring the hosted engine [rhevh01.lab:/usr/share/rhev]. The install script will detect the existing storage connection and, instead of deploying a new hosted engine, will add the hypervisor host to the already running hosted engine.

To interact with the hosted engine, the hosted-engine CLI must be used. You can start, stop, and check the status of the hosted engine with this tool.

[root@rhevh01 ~]# hosted-engine --vm-status

--== Host 1 status ==--

Status up-to-date : True
Hostname : rhevh01.lab
Host ID : 1
Engine status : {"reason": "bad vm status", "health": "bad", "vm": "up", "detail": "powering up"}
Score : 2400
Local maintenance : False
Host timestamp : 386
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=386 (Sat Jan 2 11:06:10 2016)
 host-id=1
 score=2400
 maintenance=False
 state=EngineStarting

--== Host 2 status ==--

Status up-to-date : False
Hostname : rhevh02.lab
Host ID : 2
Engine status : unknown stale-data
Score : 2400
Local maintenance : False
Host timestamp : 444
Extra metadata (valid at timestamp):
 metadata_parse_version=1
 metadata_feature_version=1
 timestamp=444 (Sat Jan 2 11:06:17 2016)
 host-id=2
 score=2400
 maintenance=False
 state=EngineDown
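
Beyond checking status, a few other commonly used hosted-engine subcommands are shown below: starting the engine VM, shutting it down gracefully, opening its console, and enabling global maintenance (which tells the HA agents to stop monitoring the engine VM, for example during engine upgrades). Run hosted-engine --help for the full list supported by your version.

[root@rhevh01 ~]# hosted-engine --vm-start
[root@rhevh01 ~]# hosted-engine --vm-shutdown
[root@rhevh01 ~]# hosted-engine --console
[root@rhevh01 ~]# hosted-engine --set-maintenance --mode=global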

Summary

In this article we examined the different options for deploying Red Hat Enterprise Virtualization Management (RHEV-M) and discussed the trade-offs between RHEV-M standalone and hosted engine. Unless special requirements exist, the best option is the hosted engine approach. Finally, we saw how to configure RHEV-M using both standalone (discussed in a previous article) and hosted engine.

Virtualization technology has become a commodity, but its management has not. The once-dominant proprietary virtualization solutions no longer provide the same competitive edge. Open source virtualization such as RHEV allows independence and avoids vendor lock-in to costly proprietary management solutions.

The greatest benefit of KVM over other virtualization technologies is that its APIs and management are wide open. OpenStack, for example, interfaces with KVM hypervisors directly; it does not need RHEV-M because KVM is open and follows open standards. When interfacing with other virtualization technologies, such as VMware and Hyper-V, OpenStack requires proprietary management, which is not only costly but quite limiting. The question we should all be asking is: do we want to drag proprietary management solutions with us on our journey to the cloud? Now is the time to make the switch and move to KVM by implementing RHEV. If you find yourself on the journey to open source virtualization, I strongly suggest reading the other articles posted above. Together we grow, so please share.

Happy RHEVing!

(c) 2016 Keith Tenzer