CloudForms Installation and Configuration Guide for Red Hat Virtualization


Overview

In this article we will deploy CloudForms 4.2 on Red Hat Enterprise Virtualization (RHV). We will also show how to configure CloudForms to properly manage an RHV cluster, its hosts and its virtual machines.
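As a rough, hedged preview of the provider setup, the sketch below registers an RHV provider through the CloudForms REST API; the appliance hostname (cfme.example.com), RHV-M hostname (rhvm.example.com) and credentials are placeholder values for illustration, and the same can be done from the UI under Compute > Infrastructure > Providers.

    # Hypothetical values: cfme.example.com is the CloudForms appliance,
    # rhvm.example.com is the RHV-M engine; admin/smartvm is the appliance default login.
    curl -k -u admin:smartvm -X POST https://cfme.example.com/api/providers \
      -H "Content-Type: application/json" \
      -d '{
            "type": "ManageIQ::Providers::Redhat::InfraManager",
            "name": "rhv",
            "hostname": "rhvm.example.com",
            "credentials": [{"userid": "admin@internal", "password": "secret"}]
          }'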

Before you begin, an RHV cluster is needed. If you haven't set one up, here is a guide on how to build a basic two-node RHV cluster.

Continue reading

RHV 4.1 Lab Installation and Configuration Guide


Overview

In this article we will set up a Red Hat Enterprise Virtualization (RHV) environment. RHV is based on the upstream open source projects KVM and oVirt. RHV is composed of Red Hat Enterprise Linux (RHEL, which includes KVM) and Red Hat Enterprise Virtualization Manager (RHV-M), based on oVirt. As of this writing, the latest version is RHV 4.1.

RHV has of late become very interesting to customers looking for alternatives to VMware. Below are a few reasons why you should be interested in RHV:

  • 100% open source: no proprietary code and no proprietary licensing.
  • Best hypervisor for running Red Hat Enterprise Linux (RHEL).
  • Integrated with and built on RHEL; uses SELinux to secure the hypervisor.
  • RHV leads VMware in SPECvirt performance and price per performance (last published results, 2013).
  • RHV scales vertically and performs extremely well on 4- or even 8-socket servers.
  • All features are included in the RHV subscription; there is no licensing for extra capabilities.
  • KVM is future-proof and is the de facto standard for OpenStack and modern virtualization platforms.

Continue reading

Red Hat OpenStack Platform 10 (Newton) Installation and Configuration Guide


Overview

In this article we will set up an OpenStack environment based on Newton using the Red Hat OpenStack Platform. OpenStack is OpenStack, but every distribution differs in which capabilities or technologies are supported and in how OpenStack is installed, configured and upgraded.

The Red Hat OpenStack Platform uses OpenStack director, based on the TripleO (OpenStack on OpenStack) project, to install, configure and update OpenStack. Director is a lifecycle management tool for OpenStack. Red Hat's approach is to make OpenStack easy to manage without compromising on the "Open" part of OpenStack. If management of OpenStack can be made simpler and the learning curve brought down, then OpenStack has a real chance to be the next-gen virtualization platform. What company wouldn't want to consume its internal IT resources the way it consumes AWS, GCE or Azure, without giving anything up to do so? We aren't there yet, but Red Hat is making bold strides and, as you will see in this article, is on a journey to make OpenStack consumable for everyone!
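To give a feel for the director workflow, here is a minimal sketch of the core deployment command, run as the stack user on the undercloud; node counts, flavor names and the NTP server are placeholder values, and the article walks through the full procedure.

    # Deploy a small overcloud from the default Heat templates
    # (run after the bare-metal nodes have been imported and introspected).
    source ~/stackrc
    openstack overcloud deploy --templates \
      --control-scale 3 --compute-scale 2 \
      --control-flavor control --compute-flavor compute \
      --ntp-server pool.ntp.org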

Continue reading

Storage for Containers Using Ceph RBD – Part IV


Overview

In this article we will look at how to integrate Ceph RBD (RADOS Block Device) with Kubernetes and OpenShift. Ceph is of course a scale-out, software-defined storage system that provides block, file and object storage. It focuses primarily on cloud-storage use cases. Providing storage for Kubernetes and OpenShift is just one of many use cases that fit very well with Ceph.
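As a taste of the integration, below is a sketch of a StorageClass that lets Kubernetes or OpenShift dynamically provision volumes from Ceph RBD; the monitor address, pool name and secret names are assumptions for illustration.

    # Dynamic provisioning from a Ceph pool via the in-tree rbd provisioner.
    oc create -f - <<EOF
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ceph-rbd
    provisioner: kubernetes.io/rbd
    parameters:
      monitors: 192.168.0.10:6789
      adminId: admin
      adminSecretName: ceph-admin-secret
      adminSecretNamespace: kube-system
      pool: kube
      userId: kube
      userSecretName: ceph-user-secret
    EOF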

Continue reading

OpenStack Swift Integration with Ceph


Overview

In this article we will configure OpenStack Swift to use Ceph as a storage backend. Object, or cloud, storage is one of the main services provided by OpenStack. Swift is an object storage protocol and implementation. It has been around for quite a while but is fairly limited (it uses rsync to replicate data, scaling rings can be problematic and it only supports object storage, to mention just a few things). OpenStack needs to provide storage for many use cases: block (Cinder), image (Glance), file (Manila), ephemeral (Nova) and object (Swift). Ceph is a distributed, software-defined storage system that scales with OpenStack and covers all of these use cases. As such it is the de facto standard for OpenStack, which is why the OpenStack user survey shows Ceph behind roughly 60% of all OpenStack storage.

OpenStack uses Keystone to store service endpoints for all services. Swift has a Keystone endpoint that authenticates OpenStack tenants to Swift, providing object or cloud storage on a per-tenant basis. As mentioned, Ceph provides block, file and object access; in the case of object, Ceph provides S3, Swift and NFS interfaces. The RADOS Gateway (RGW) provides the object interfaces for Ceph, and S3 and Swift users are stored in the RGW. Usually you would want several RADOS Gateways in an active/active configuration behind a load balancer. OpenStack tenants can be given automatic access: their Keystone tenant IDs are automatically configured in the RADOS Gateway when Swift object storage is accessed from a given tenant.
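For orientation, the Keystone integration lives in the RGW section of ceph.conf; a minimal sketch (the Keystone URL, admin token and accepted roles below are illustrative values) looks roughly like this:

    # /etc/ceph/ceph.conf on the RADOS Gateway host (illustrative values).
    [client.rgw.gateway]
    rgw keystone url = http://keystone.example.com:35357
    rgw keystone admin token = ADMIN_TOKEN
    rgw keystone accepted roles = admin, _member_
    rgw keystone token cache size = 500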

Using Ceph with OpenStack for object storage gives tenants access to cloud storage that is integrated with OpenStack through Swift and automatically handles authentication of OpenStack tenants. It also provides the advantage that external users or tenants (outside of OpenStack), such as application developers, can access object storage directly with the protocol of their choice: S3, Swift or NFS.
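Once the endpoint is in place, a tenant talks to the RADOS Gateway with the standard Swift tooling; for example (the credentials file and container name are placeholders):

    # Source the tenant's Keystone credentials, then use the swift client;
    # the Swift endpoint now resolves to the RADOS Gateway.
    source ~/tenantrc
    swift post my-container               # create a container
    swift upload my-container data.txt    # upload an object
    swift list my-container               # verify the object landed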

Integrating Ceph with OpenStack Series:

In order to integrate OpenStack Swift with Ceph you first need to complete the prerequisites below:

  • Configure an OpenStack environment here
  • Configure a Ceph cluster here

Continue reading

Storage for Containers using Container Native Storage – Part III


Overview

In this article we will focus on a new area of storage for containers called Container Native Storage. This article is a collaboration between Daniel Messer (Technical Marketing Manager Storage @RedHat) and Keith Tenzer (Sr. Solutions Architect @RedHat).

So What is Container Native Storage?

Essentially, container native storage (CNS) is a hyper-converged approach to storage and compute in the context of containers. Each container host or node supplies both compute and storage, allowing storage to be completely integrated into or absorbed by the platform itself. A software-defined storage system running in containers on the platform consumes disks provided by the nodes and provides a cluster-wide storage abstraction layer. The software-defined storage system then provides capabilities such as high availability, dynamic provisioning and general storage management. This enables DevOps from a storage perspective, allows storage to grow with the container platform and delivers a high level of efficiency.
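In OpenShift terms, the developer experience boils down to requesting a PersistentVolumeClaim against a CNS-backed StorageClass; the sketch below assumes a StorageClass named glusterfs-storage already exists, and all names are illustrative.

    # Request a 5Gi volume that CNS provisions dynamically.
    oc create -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: cns-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: glusterfs-storage
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    EOF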

Continue reading

OpenStack Manila Integration with Ceph


Overview

In this article we will configure OpenStack Manila using CephFS as a storage backend. OpenStack Manila is an OpenStack project providing file services. Manila is storage-backend agnostic, so you can have many different kinds of storage backends, similar to Cinder. CephFS is a POSIX-compliant file system that uses the Ceph storage cluster to store its data. CephFS works by providing Metadata Servers (MDSs) that collectively manage the filesystem namespace and coordinate access to the Ceph Object Storage Daemons (OSDs). An MDS runs in one of two modes: active or passive. There are several documented active/passive MDS configurations, and multi-MDS (active/active) can be configured when a single MDS becomes a bottleneck. Clients can mount CephFS filesystems using the ceph-fuse client or the kernel driver.
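For reference, mounting CephFS by hand with either client looks roughly like this (the monitor address and secret file are placeholders; Manila automates this on a per-share basis):

    # Kernel client: mount the CephFS root from a monitor.
    mount -t ceph 192.168.0.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client alternative.
    ceph-fuse -m 192.168.0.10:6789 /mnt/cephfs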

Integrating Ceph with OpenStack Series:

Prerequisites

The following are required to configure OpenStack Manila with CephFS:

  • An already configured Ceph cluster (Jewel or higher). See here to set up a Ceph cluster.
  • An already configured OpenStack environment (Mitaka or higher). See here to set up OpenStack.

Continue reading