OpenStack: Integrating Ceph as Storage Backend


Overview

In this article we will discuss why Ceph is a perfect fit for OpenStack. We will see how to integrate Ceph with three prominent OpenStack services: Cinder (block storage), Glance (images) and Nova (VM virtual disks).
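To make this concrete, here is a minimal sketch of the configuration each service needs to talk to Ceph via RBD. The pool names (volumes, images, vms), the cephx users (cinder, glance) and the libvirt secret UUID placeholder are illustrative assumptions, not prescriptions.

# /etc/cinder/cinder.conf (excerpt)
[DEFAULT]
enabled_backends = ceph

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>

# /etc/glance/glance-api.conf (excerpt)
[glance_store]
default_store = rbd
stores = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf

# /etc/nova/nova.conf (excerpt)
[libvirt]
images_type = rbd
images_rbd_pool = vms
images_rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>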

Ceph provides unified, scale-out storage on commodity x86 hardware; it is self-healing, intelligently anticipates failures, and has become the de facto standard for software-defined storage. Because Ceph is an open-source project, many vendors are able to offer Ceph-based software-defined storage systems. It is not limited to companies like Red Hat, SUSE, Mirantis and Canonical (Ubuntu); integrated solutions from SanDisk, Fujitsu, HP, Dell, Samsung and many more exist today. There are even large-scale community-built environments (CERN comes to mind) that provide storage services for tens of thousands of VMs.

Continue reading

Ceph 1.3 Lab Installation and Configuration Guide


Overview

In this article we will set up a Ceph 1.3 cluster for learning or for a lab environment.

 

Ceph Lab Environment

For this environment you will need three VMs (ceph1, ceph2 and ceph3). Each should have a 20GB root disk and a 100GB data disk. Ceph has three main components: the admin console, monitors and OSDs.

Admin console – the UI and CLI used for managing the Ceph cluster. In this environment we will install it on ceph1.

Monitors – monitor the health of the Ceph cluster. One or more monitors form a Paxos part-time parliament, providing extreme reliability and durability of cluster membership. Monitors maintain the various cluster maps: monitor, OSD, placement group (PG) and CRUSH. Monitors will be installed on ceph1, ceph2 and ceph3.

OSDs – the object storage daemon handles storing data, recovery, backfilling, rebalancing and replication. OSDs sit on top of a disk and its filesystem. (BlueStore lets OSDs bypass the filesystem, but it is not an option in Ceph 1.3.) An OSD will be installed on ceph1, ceph2 and ceph3.
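To give a feel for how these pieces come together, below is a minimal ceph-deploy-style bootstrap run from the admin console on ceph1. It assumes the 100GB data disk appears as /dev/sdb on each VM; the exact tooling and commands for a Ceph 1.3 installation may differ, so treat this as a sketch rather than the guide's procedure.

# Define a new cluster with three initial monitors (run on ceph1)
ceph-deploy new ceph1 ceph2 ceph3

# Install the Ceph packages on all three nodes
ceph-deploy install ceph1 ceph2 ceph3

# Create the initial monitors and gather their keys
ceph-deploy mon create-initial

# Prepare and activate one OSD per node on the data disk (assumed /dev/sdb)
ceph-deploy osd prepare ceph1:/dev/sdb ceph2:/dev/sdb ceph3:/dev/sdb
ceph-deploy osd activate ceph1:/dev/sdb1 ceph2:/dev/sdb1 ceph3:/dev/sdb1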

Continue reading

Ceph: The Future of Storage


Overview

Since joining Red Hat in 2015, I have intentionally stayed away from the topic of storage. My background is storage, but I wanted to do something else, as storage had become completely mundane and, frankly, boring. Why?

Storage hasn’t changed much in 20 years. I started my career as a Linux and storage engineer in 2000, and everything that existed then exists today; things have only become bigger, faster and cheaper, thanks to incremental improvements from technologies such as flash. There comes a point, however, where minor incremental improvements are no longer good enough and a completely new way of addressing challenges is the only way forward.

I realized in late 2015 that the storage industry was entering a challenging period for all vendors, but I didn’t really have a feeling for when that could lead to real change. I did know that the monolithic storage array we all know and love, built on proprietary Linux/Unix with proprietary x86 hardware, was a thing of the past. If you think about it, storage today is a scam: you get open-source software running on x86 hardware, packaged as a proprietary solution that doesn’t interoperate with anything else. You get none of the value of open source, and you pay extra for it. I like to think that economics, like gravity, always wins in the end.

Continue reading

OpenStack Mitaka Lab Installation and Configuration Guide


Overview

In this article we will focus on installing and configuring OpenStack Mitaka using RDO and the packstack installer. RDO is the community distribution built around Red Hat’s OpenStack Platform. It allows you to test the latest OpenStack capabilities on a stable platform such as Red Hat Enterprise Linux (RHEL) or CentOS. This guide will take you through installing the OpenStack Mitaka release and configuring networking, security groups, flavors, images and other OpenStack-related services. The outcome is a working OpenStack environment based on the Mitaka release that you can use as a baseline for testing your applications with OpenStack capabilities.
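As a preview of the flow, a minimal RDO/packstack run on CentOS 7 looks roughly like the sketch below. The repository package name centos-release-openstack-mitaka is the assumption here; the guide itself walks through each step and the subsequent configuration in detail.

# Enable the RDO repository for Mitaka (CentOS 7)
sudo yum install -y centos-release-openstack-mitaka
sudo yum update -y

# Install the packstack installer
sudo yum install -y openstack-packstack

# Deploy an all-in-one OpenStack environment on this host
sudo packstack --allinone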
Continue reading

OpenShift v3: Basic Release Deployment Scenarios


Overview

One of the hardest things companies struggle with today is release management. Many methodologies, and even more tools and technologies, exist, but how do we bring everything together and work across the functional boundaries of an organization? A product release involves everyone in the company, not just a single team. Many companies struggle with this, and the result is a much slower innovation cycle. In the past that was at least not a deal breaker; unfortunately, that is no longer the case. Today companies live and die by their ability not only to innovate, but to release innovation. I would say innovating is the easy part; delivering those innovations in a controlled fashion through products and services is the real challenge.
Continue reading

OpenShift Enterprise 3.2: all-in-one Lab Environment


Overview

In this article we will set up an OpenShift Enterprise 3.2 all-in-one configuration. We will also set up the integration with CloudForms, which allows additional management of OpenShift environments.

OpenShift has several different roles: masters, nodes, etcd and load balancers. An all-in-one setup means running all services on a single system; since we are only using a single system, a load balancer (HAProxy) won’t be configured. A sketch of what such a setup looks like is shown below.
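With the openshift-ansible installer, an all-in-one deployment boils down to a single host carrying both the master and node roles in the Ansible inventory. The sketch below is illustrative only: the hostname ose3.example.com is an assumption, and the variables shown are a bare minimum rather than a complete OpenShift Enterprise 3.2 inventory.

# /etc/ansible/hosts (excerpt)
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
deployment_type=openshift-enterprise

[masters]
ose3.example.com

[nodes]
# A single schedulable node, so pods can run on the master host
ose3.example.com openshift_schedulable=true

If you would like to read more about OpenShift, I can recommend the following: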

Continue reading