Building Storage Services in OpenStack on NetApp - Part III of III

10 minute read

Welcome to part three of the three-part series on creating storage services in OpenStack on NetApp. In part one of the series we looked at how to install and configure OpenStack, and in part two we configured the underlying NetApp storage to support OpenStack storage services. In this post we will look at how to configure storage services within OpenStack utilising that NetApp storage.

Overview

OpenStack consists of many decoupled services that run independently but integrate with one another. NetApp has integrations with the following OpenStack services: Cinder (block storage), Glance (image service), Swift (object storage) and Manila (shared file service).

[Diagram: OpenStack and NetApp integration overview]

Cinder Configuration

Cinder provides block storage services in OpenStack. One very important thing to keep in mind is that while Cinder presents block devices to compute resources, underlying storage such as NetApp can expose either iSCSI or NFS storage to Cinder. In the example below I am going to show you how to set up two NFS storage backends: one for primary storage and the other for disaster recovery (DR). A storage backend maps to a NetApp clustered Data ONTAP storage virtual machine (SVM), so for each SVM you will create a storage backend in Cinder.

Edit the cinder.conf (/etc/cinder/cinder.conf)

Note: if we chose to do the RDO installation using an answers file, then we have already configured the first storage backend (cdotNfs) and just need to add the backend for cdotNfsDr.

[code language="text"]

enabled_backends=cdotNfs,cdotNfsDr

[cdotNfs]
volume_backend_name=cdotNfs
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=<cluster mgmt vserver>
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=<username>
netapp_password=<password>
netapp_vserver=<storage virtual machine>
nfs_shares_config=/etc/cinder/cdotNfs_exports.conf
netapp_copyoffload_tool_path=/usr/bin/na_copyoffload_64

[cdotNfsDr]
volume_backend_name=cdotNfsDr
volume_driver=cinder.volume.drivers.netapp.common.NetAppDriver
netapp_server_hostname=<cluster mgmt vserver>
netapp_server_port=80
netapp_storage_protocol=nfs
netapp_storage_family=ontap_cluster
netapp_login=<username>
netapp_password=<password>
netapp_vserver=<storage virtual machine>
# NFS backends also need an exports file; the path below is an assumed example
# pointing at a separate exports file for the DR SVM
nfs_shares_config=/etc/cinder/cdotNfsDr_exports.conf
[/code]

Edit the cdotNfs_exports.conf (/etc/cinder/cdotNfs_exports.conf)

Make the three NetApp volumes on our storage virtual machine available to the Cinder backend [cdotNfs]:

[code language="text"]
192.168.160.100:/openstack_gold
192.168.160.100:/openstack_silver
192.168.160.100:/openstack_bronze
[/code]

Note: make sure the IP address used in the exports file is a data LIF on the storage virtual machine.
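
A quick way to confirm, assuming access to the clustered Data ONTAP shell (substitute your own SVM name):

[code language="text"]
::> network interface show -vserver <storage virtual machine> -role data
[/code]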

Verify Configuration

Once we have configured our storage backends we can verify them by following the steps below:

  • . /root/keystonerc_admin
  • service openstack-cinder-api restart
  • service openstack-cinder-volume restart
  • cinder service-list

The "cinder service-list" command will show us the two new storage backends:

  • <hostname>@cdotNfs
  • <hostname>@cdotNfsDr
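
The output should look roughly like the sketch below (the hostname "openstack" and the timestamps are placeholders; your values will differ):

[code language="text"]
+------------------+---------------------+------+---------+-------+----------------------------+
|      Binary      |         Host        | Zone |  Status | State |         Updated_at         |
+------------------+---------------------+------+---------+-------+----------------------------+
| cinder-scheduler |      openstack      | nova | enabled |   up  | 2014-12-02T18:00:00.000000 |
|  cinder-volume   |  openstack@cdotNfs  | nova | enabled |   up  | 2014-12-02T18:00:00.000000 |
|  cinder-volume   | openstack@cdotNfsDr | nova | enabled |   up  | 2014-12-02T18:00:00.000000 |
+------------------+---------------------+------+---------+-------+----------------------------+
[/code]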

Glance

Glance is the image service in OpenStack. It stores and serves images as well as templates to the Nova compute service. NetApp storage provides two key benefits for Glance:

  • Save network bandwidth by efficiently cloning images via the copy offload driver
  • Reduce storage requirements by enabling deduplication

Edit the glance-api.conf (/etc/glance/glance-api.conf)

Mount a NetApp volume (openstack_glance) on the OpenStack host, or on whichever host runs the Glance service. The volume must be mounted at boot, so make sure you add it to /etc/fstab.
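
As a sketch, assuming the same data LIF (192.168.160.100) exports the openstack_glance volume, the mount and /etc/fstab entry would look something like this:

[code language="text"]
mkdir -p /glance
mount -t nfs 192.168.160.100:/openstack_glance /glance

# /etc/fstab entry so the volume is mounted at boot
192.168.160.100:/openstack_glance  /glance  nfs  defaults  0 0
[/code]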

[code language="text"]

filesystem_store_datadir=/glance

filesystem_store_metadata_file=/etc/glance/metadata.conf

[/code]

Edit the metadata.conf (/etc/glance/metadata.conf)

The purpose of this file is to enable Glance to store images and templates on NetApp storage.

[code language="text"]

{
"share_location": "nfs://192.168.160.100:/openstack_glance",
"mount_point": "/glance",
"type": "nfs"
}

[/code]
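
After editing both files, restart the Glance API service so the changes take effect (service name as used by the RDO packages):

[code language="text"]
service openstack-glance-api restart
[/code]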

Enable copy offload driver

The copy offload driver allows Glance to use NetApp file cloning capabilities, providing near-instant image copies to the Nova compute service. Without copy offload, images must be copied over the network between Glance and Nova.

Copy offload requirements:

  • The storage system must have Data ONTAP v8.2 or greater installed
  • To configure the copy offload workflow, enable NFS v4.0 or greater and export it from the SVM
  • The vStorage feature must be enabled on each storage virtual machine (SVM, also known as a Vserver) that is permitted to interact with the copy offload client. To set this feature, you can use the vserver nfs modify -vstorage enabled -v4.0 enabled CLI command (see the example after this list)
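
For example, from the cluster shell (the -vserver argument and SVM name are placeholders; substitute your own):

[code language="text"]
::> vserver nfs modify -vserver <storage virtual machine> -vstorage enabled -v4.0 enabled
::> vserver nfs show -vserver <storage virtual machine> -fields vstorage
[/code]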

To install the copy offload tool, download the NetApp copy offload binary (na_copyoffload_64) from the NetApp Support site and place it at the path referenced by netapp_copyoffload_tool_path in cinder.conf (/usr/bin/na_copyoffload_64 in the example above).

Create Storage Services

One of the major features of Cinder is the ability to configure storage services. Below is an example of how we can create three services, gold, silver and bronze, from our NetApp storage volumes. The NetApp driver for OpenStack exposes capabilities which we can use to define these services.

Below is a table of all the NetApp storage capabilities we can use to define services in OpenStack:

Extra spec | Data type | Description
netapp:raid_type | String | Limit the candidate volume list based on one of the following RAID types: raid4, raid_dp.
netapp:disk_type | String | Limit the candidate volume list based on one of the following disk types: ATA, BSAS, EATA, FCAL, FSAS, LUN, MSATA, SAS, SATA, SCSI, XATA, XSAS, or SSD.
netapp:qos_policy_group | String | Specify the name of a QoS policy group, which defines measurable Service Level Objectives, to apply to the Cinder volume at the time of volume creation. Ensure that the QoS policy group object is defined within Data ONTAP before the Cinder volume is created, and that it is not associated with the destination FlexVol volume.
netapp_mirrored | Boolean | Limit the candidate volume list to only the ones that are mirrored on the storage controller.
netapp_unmirrored | Boolean | Limit the candidate volume list to only the ones that are not mirrored on the storage controller.
netapp_dedup | Boolean | Limit the candidate volume list to only the ones that have deduplication enabled on the storage controller.
netapp_nodedup | Boolean | Limit the candidate volume list to only the ones that have deduplication disabled on the storage controller.
netapp_compression | Boolean | Limit the candidate volume list to only the ones that have compression enabled on the storage controller.
netapp_nocompression | Boolean | Limit the candidate volume list to only the ones that have compression disabled on the storage controller.
netapp_thin_provisioned | Boolean | Limit the candidate volume list to only the ones that support thin provisioning on the storage controller.
netapp_thick_provisioned | Boolean | Limit the candidate volume list to only the ones that support thick provisioning on the storage controller.
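
Each storage service is exposed as a Cinder volume type. If the gold, silver and bronze types do not already exist, create them first; a minimal sketch:

[code language="text"]
cinder type-create gold
cinder type-create silver
cinder type-create bronze
[/code]

The sections below then attach NetApp capabilities to each type as extra specs.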

Gold Service

Gold - Highest performance storage (SSD) and Disaster Recovery

  • cinder type-key gold set netapp_mirrored=true

Silver Service

Silver - High performance storage with compression

  • cinder type-key silver set netapp_compression=true

Bronze Service

Bronze - Lower performance storage with deduplication

  • cinder type-key bronze set netapp_dedup=true

Note: you can add more than one capability to a storage service.
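
For example, since the gold service is backed by SSD, it could also be pinned to SSD disks; a sketch, assuming the gold volumes really do sit on SSD aggregates:

[code language="text"]
cinder type-key gold set netapp_mirrored=true netapp:disk_type=SSD
[/code]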

Create an Instance using Storage Services

Finally, we can use the storage services to create a Cinder volume and spawn an instance based on the new volume. Log into the OpenStack UI (Horizon). Under Project -> Volumes, select "Create Volume". You can now select a storage service (gold, silver or bronze) as the volume type, create a Cinder volume from an OS image and boot an instance from it. Below is an example:

[Screenshot: creating a Cinder volume with a gold, silver or bronze volume type in Horizon]
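
The same can be done from the CLI; a rough sketch, where the image UUID, volume UUID, size and names are placeholders:

[code language="text"]
cinder create --volume-type gold --image-id <image uuid> --display-name gold-vol 10
nova boot --flavor m1.small --boot-volume <volume uuid> gold-instance
[/code]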

(c) 2014 Keith Tenzer