Explaining OpenStack Cinder Types and Scheduler
Overview
OpenStack Cinder is responsible for handling block storage in OpenStack. Cinder provides a standard API and interface that allow storage vendors to write their own drivers and integrate their storage capabilities into OpenStack in a consistent way. Each storage pool exposed to Cinder is a backend, and you can have many backends, including several of the same kind. In this article we will look at two advanced features Cinder provides: types and the scheduler.
Cinder types essentially allow us to label Cinder storage backends. This allows for building out storage services that have expected characteristics and capabilities. The Cinder driver exposes those storage capabilities to Cinder.
The Cinder scheduler is responsible for deciding where to create Cinder volumes when we have more than one backend of the same kind. It does this by evaluating filter rules to identify the most appropriate storage backend. More about filter rules can be found in the Cinder documentation.
Prerequisites
In order to follow along you will need an OpenStack environment. The easiest thing to do is set up an all-in-one environment with RDO. Those steps are documented here.
If you want to use Ceph and don't already have an environment set up properly, you can follow these guides:
Configure Storage Backends
[OpenStack Controller]
Add two disks to the OpenStack controller.
For an all-in-one setup this is straightforward. If you have multiple controllers, the disks need to be added to a controller running the openstack-cinder-volume service.
Create LVM Storage Backends.
Authenticate to OpenStack using keystone source file.
# source /root/keystonerc_admin
Determine the appropriate block device names for the newly added disks. The following commands can be useful. In this case the two disks are vdb and vdc.
# blkid
# lsblk
Create physical volumes and volume groups.
# pvcreate /dev/vdb
# vgcreate lvm-1 /dev/vdb
# pvcreate /dev/vdc
# vgcreate lvm-2 /dev/vdc
Update Cinder configuration and add LVM backends.
# vi /etc/cinder/cinder.conf

default_volume_type = rbd
enabled_backends = lvm1,lvm2,rbd

[lvm1]
iscsi_helper=lioadm
iscsi_ip_address=192.168.122.80
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
volume_backend_name=lvm
volume_group=lvm-1
filter_function = "volume.size < 5"

[lvm2]
iscsi_helper=lioadm
iscsi_ip_address=172.16.7.50
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volumes_dir=/var/lib/cinder/volumes
volume_backend_name=lvm
volume_group=lvm-2
filter_function = "volume.size >= 5"
Notice we also added filter rules. Since we have multiple LVM backends defined, filter rules help the scheduler decide which backend to use for provisioning Cinder volumes. In this case we have added a filter based on size: Cinder volumes smaller than 5GB will end up on backend lvm1, while Cinder volumes of 5GB or larger will end up on backend lvm2.
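The decision those two filter_function rules encode can be sketched in plain shell. This is a hypothetical helper for illustration only, not Cinder code:

```shell
# Hypothetical helper (not part of Cinder): mimic the placement decision the
# scheduler makes from the two filter_function rules above.
pick_backend() {
  size_gb=$1
  if [ "$size_gb" -lt 5 ]; then
    echo lvm1    # filter_function = "volume.size < 5"
  else
    echo lvm2    # filter_function = "volume.size >= 5"
  fi
}

pick_backend 3    # prints lvm1
pick_backend 6    # prints lvm2
```

Note the boundary: an exactly 5GB volume lands on lvm2, because the lvm1 rule uses a strict less-than.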
Restart Cinder Services.
# systemctl restart openstack-cinder-api
# systemctl restart openstack-cinder-volume
Configure Cinder Types
As mentioned, Cinder types are used to create storage services and provide certain capabilities that are exposed by the driver. In this case we are simply labeling the backends; however, capabilities such as replication, backup, and others can also be exposed via Cinder types, depending on the storage backend and the underlying storage capabilities.
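Conceptually, the scheduler compares a type's extra specs against the capabilities each backend reports and only keeps backends where they agree. A minimal sketch of that comparison in plain shell (a hypothetical illustration, not the scheduler's actual code):

```shell
# Hypothetical sketch of capability matching (not Cinder code): a backend
# passes only if the volume_backend_name it reports equals the extra spec
# set on the requested volume type.
matches_type() {
  wanted=$1      # volume_backend_name from the volume type's extra specs
  reported=$2    # volume_backend_name the backend driver reports
  if [ "$wanted" = "$reported" ]; then
    echo pass
  else
    echo fail
  fi
}

matches_type lvm1 lvm1    # prints pass
matches_type lvm1 lvm2    # prints fail
```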
Create Cinder type for backend lvm1.
# openstack volume type create --public LVM1
+---------------------------------+--------------------------------------+
| Field                           | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| id                              | 7ee9c132-3556-4294-80d5-a87dffaa8db4 |
| is_public                       | True                                 |
| name                            | LVM1                                 |
| os-volume-type-access:is_public | True                                 |
+---------------------------------+--------------------------------------+
# openstack volume type set --property volume_backend_name=lvm1 LVM1
Create Cinder type for backend lvm2.
# openstack volume type create --public LVM2
+---------------------------------+--------------------------------------+
| Field                           | Value                                |
+---------------------------------+--------------------------------------+
| description                     | None                                 |
| id                              | 7ee9c132-3556-4294-80d5-a87dffaa8db4 |
| is_public                       | True                                 |
| name                            | LVM2                                 |
| os-volume-type-access:is_public | True                                 |
+---------------------------------+--------------------------------------+
# openstack volume type set --property volume_backend_name=lvm2 LVM2
List Cinder backends.
# cinder get-pools
+----------+------------------------+
| Property | Value                  |
+----------+------------------------+
| name     | osp10.lab.com@lvm2#lvm |
+----------+------------------------+
+----------+------------------------+
| Property | Value                  |
+----------+------------------------+
| name     | osp10.lab.com@lvm1#lvm |
+----------+------------------------+
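Pool names returned by cinder get-pools follow the host@backend#pool convention. The names above can be taken apart with POSIX parameter expansion, for example:

```shell
# Split a Cinder pool name of the form host@backend#pool using POSIX
# parameter expansion (illustration only).
pool="osp10.lab.com@lvm1#lvm"

host=${pool%%@*}                              # osp10.lab.com
backend=${pool#*@}; backend=${backend%%#*}    # lvm1
vpool=${pool##*#}                             # lvm

echo "host=$host backend=$backend pool=$vpool"
```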
Create Cinder Volumes
Now that multiple LVM backends are configured, types exist, and the scheduler has filter rules, we can provision Cinder volumes.
Create 3GB Cinder Volume.
# openstack volume create --size 3 3gb-vol --type lvm
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2017-03-29T15:17:58.457813           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 4d764067-da2a-476c-8f64-a52203e2dfd7 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | 3gb-vol                              |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 3                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | LVM                                  |
| updated_at          | None                                 |
| user_id             | 9d592f8a49654e8592de4e69fd15e603     |
+---------------------+--------------------------------------+
Create 6GB Cinder Volume.
# openstack volume create --size 6 6gb-vol --type lvm
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2017-03-29T15:18:21.086318           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | 5a39b0c3-fb07-481e-a150-36c0eae1adc3 |
| migration_status    | None                                 |
| multiattach         | False                                |
| name                | 6gb-vol                              |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 6                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | LVM                                  |
| updated_at          | None                                 |
| user_id             | 9d592f8a49654e8592de4e69fd15e603     |
+---------------------+--------------------------------------+
List Cinder Volumes.
# openstack volume list
+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| 5a39b0c3-fb07-481e-a150-36c0eae1adc3 | 6gb-vol      | available |    6 |             |
| 4d764067-da2a-476c-8f64-a52203e2dfd7 | 3gb-vol      | available |    3 |             |
+--------------------------------------+--------------+-----------+------+-------------+
Show details on LVM volumes.
# lvs
  LV                                          VG    Attr       LSize  Pool Origin Data% Meta% Move Log Cpy%Sync Convert
  volume-4d764067-da2a-476c-8f64-a52203e2dfd7 lvm-1 -wi-a-----  3.00g
  volume-5a39b0c3-fb07-481e-a150-36c0eae1adc3 lvm-2 -wi-a-----  6.00g
  root                                        rhel  -wi-ao---- 91.57g
  swap                                        rhel  -wi-ao----  7.88g
Using the Cinder volume id we can see that the scheduler properly placed the Cinder volumes according to the filter. In this case the 3GB volume ended up on the lvm-1 backend and the 6GB volume on the lvm-2 backend.
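The mapping used above is simple: Cinder's LVM driver names each logical volume volume-<cinder-volume-id>, so an id from openstack volume list translates directly into the LV name shown by lvs. A one-line sketch:

```shell
# Cinder's LVM driver names logical volumes "volume-<id>", so a Cinder volume
# id maps directly to the LV name to look for in `lvs` output.
vol_id="4d764067-da2a-476c-8f64-a52203e2dfd7"
lv_name="volume-${vol_id}"
echo "$lv_name"    # prints volume-4d764067-da2a-476c-8f64-a52203e2dfd7
```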
Summary
In this article we explained how to build intelligent storage services using OpenStack Cinder. Using Cinder types we can define service levels and capabilities. Using the scheduler we can apply filter rules that make intelligent placement decisions to find the most appropriate storage backend. Finally, we provided a basic example using two LVM backends and a filter based on volume size.
Happy OpenStacking!
(c) 2017 Keith Tenzer