This is a six-part series dedicated to container storage. The article series is a collaboration between Daniel Messer (Technical Marketing Manager Storage @RedHat), Keith Tenzer (Solutions Architect @RedHat) and Kapil Arora (Cloud Platform Architect @NetApp). This article provides an overview of storage for containers: we will lay out the fundamentals critical to any container storage discussion and then go into some detail on the various solutions that exist today.
Ceph has become the de facto standard for software-defined storage. Ceph is 100% open source, built on open standards, and as such is offered by many vendors, not just Red Hat. If you are new to Ceph or software-defined storage, I would recommend the following article to understand some high-level concepts before proceeding:
Ceph – the future of storage
In this article we will configure a Red Hat Ceph Storage 2.0 cluster and set it up for object storage. We will configure the RADOS Gateway (RGW) and Red Hat Storage Console (RHSC), and show how to configure the S3 and Swift interfaces of the RGW. Using Python, we will access both the S3 and Swift interfaces.
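As a preview, here is a minimal Python sketch of that access. The RGW endpoint, port and credentials below are placeholders; the S3 keys and Swift subuser key would come from a user created with radosgw-admin.

```python
import boto
import boto.s3.connection
import swiftclient

# Placeholders: RGW endpoint and keys from "radosgw-admin user create"
RGW_HOST = "rgw.example.com"
ACCESS_KEY = "S3_ACCESS_KEY"
SECRET_KEY = "S3_SECRET_KEY"

# S3 interface via boto (7480 is the default civetweb port for RGW)
s3 = boto.connect_s3(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    host=RGW_HOST, port=7480, is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
s3.create_bucket("my-s3-bucket")
print([b.name for b in s3.get_all_buckets()])

# Swift interface via python-swiftclient, using RGW's v1 auth
swift = swiftclient.Connection(
    user="testuser:swift", key="SWIFT_SECRET_KEY",
    authurl="http://%s:7480/auth/1.0" % RGW_HOST,
)
swift.put_container("my-swift-container")
```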
If you are interested in configuring Ceph for OpenStack, see the following article:
OpenStack – Integrating Ceph as Storage Backend
In this article we will look at how to use Ansible Tower to deploy and manage OpenShift environments. OpenShift already uses Ansible as its deployment and configuration tool. While that is great, using Tower provides several major advantages (a short example of driving Tower's API follows the list below):
- UI for OpenShift deployment and configuration management
- Secure store for credentials
- RBAC and the ability to delegate different responsibilities for OpenShift deployments
- Easy to visualize and manage multiple OpenShift environments and even versions of OpenShift
- History, audit trail and detailed logging in a central location for all OpenShift environments and deployments
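As a hedged sketch of that central-management angle, the snippet below launches an OpenShift deployment job template through Tower's REST API. The Tower URL, credentials and template name are placeholders, and the API prefix may be /api/v1/ or /api/v2/ depending on your Tower release.

```python
import requests

TOWER = "https://tower.example.com"   # placeholder Tower host
AUTH = ("admin", "password")          # placeholder credentials

# Find the job template that wraps the OpenShift deployment playbook
r = requests.get(TOWER + "/api/v1/job_templates/",
                 params={"name": "deploy-openshift"},
                 auth=AUTH, verify=False)
template_id = r.json()["results"][0]["id"]

# Launch it; Tower keeps the history, audit trail and logs centrally
launch = requests.post(TOWER + "/api/v1/job_templates/%d/launch/" % template_id,
                       auth=AUTH, verify=False)
print("Started job", launch.json()["job"])
```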
In this article we will set up an OpenShift Enterprise 3.3 all-in-one configuration. We will also configure the OpenShift router, registry, aggregated logging, metrics, CloudForms integration and finally an integrated Jenkins build pipeline.
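Once the install completes, a quick sanity check is to hit the master's /healthz endpoint. This is a minimal sketch; the hostname is a placeholder for your all-in-one system.

```python
import requests

# Placeholder hostname for the all-in-one system; the master API
# listens on 8443 by default and serves a /healthz endpoint
url = "https://ose3.example.com:8443/healthz"

# verify=False because a lab install uses a self-signed certificate
resp = requests.get(url, verify=False)
print(resp.status_code, resp.text)  # expect: 200 ok
```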
OpenShift has several different roles: masters, nodes, etcd and load balancers. An all-in-one setup means running all services on a single system. Since we are only using a single system, a load balancer or HAProxy won't be configured. If you would like to read more about OpenShift, I can recommend the following:
This article is a collaboration between Rolf Masuch (Microsoft) and Keith Tenzer (Red Hat). It is based on our work together in the field with enterprise customers.
In this article we will explore how to deploy a production-ready OpenShift Enterprise container platform on the Microsoft Azure cloud. The entire deployment is completely automated using Ansible and ARM (Azure Resource Manager). Everything is template-driven using APIs. The benefit of this approach is the ability to build up and tear down a complete OpenShift environment in the Azure cloud before your coffee gets cold.
Since OpenShift already uses Ansible as its installation and configuration management tool, it made sense to stick with Ansible rather than other tools such as PowerShell. A Red Hat colleague, Ivan McKinley, created an Ansible playbook that builds out all the required Azure infrastructure components and integrates the existing OpenShift installation playbook. The result is an optimally configured OpenShift environment on the Azure cloud. We have used this recipe to deploy real production environments for customers, and it leverages both Microsoft and Red Hat best practices.
In this article we will discuss why Ceph is a perfect fit for OpenStack. We will see how to integrate Ceph with three prominent OpenStack use cases: Cinder (block storage), Glance (images) and Nova (VM virtual disks).
Integrating Ceph with OpenStack Series:
Ceph provides unified scale-out storage on commodity x86 hardware that is self-healing and intelligently anticipates failures. It has become the de facto standard for software-defined storage. Because Ceph is an open source project, many vendors can offer Ceph-based software-defined storage systems: it is not limited to companies like Red Hat, SUSE, Mirantis and Canonical. Integrated solutions from SanDisk, Fujitsu, HP, Dell, Samsung and many more exist today. There are even large-scale, community-built environments (CERN comes to mind) that provide storage services for tens of thousands of VMs.
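To make the Cinder use case concrete, here is a hedged Python sketch that creates a volume on a Ceph RBD backend. The credentials, Keystone URL and the 'ceph' volume type are placeholders for this environment.

```python
from cinderclient import client

# Placeholder credentials and Keystone endpoint for this environment
cinder = client.Client('2', 'admin', 'password', 'admin',
                       'http://keystone.example.com:5000/v2.0')

# 'ceph' is an assumed volume type mapped to the RBD backend in cinder.conf
vol = cinder.volumes.create(size=10, name='ceph-backed-volume',
                            volume_type='ceph')
print(vol.id, vol.status)
```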
In this article we will set up a Ceph 1.3 cluster for learning or lab purposes.
Ceph Lab Environment
For this environment you will need three VMs (ceph1, ceph2 and ceph3). Each should have a 20 GB root disk and a 100 GB data disk. Ceph has three main components: the admin console, monitors and OSDs.
Admin console – the UI and CLI used for managing the Ceph cluster. In this environment we will install it on ceph1.
Monitors – monitor the health of the Ceph cluster. One or more monitors form a Paxos part-time parliament, providing extreme reliability and durability of cluster membership. Monitors maintain the various maps: monitor, OSD, placement group (PG) and CRUSH. Monitors will be installed on ceph1, ceph2 and ceph3.
OSDs – the object storage daemon handles storing data, recovery, backfilling, rebalancing and replication. OSDs sit on top of a disk and filesystem. BlueStore enables OSDs to bypass the filesystem, but it is not an option in Ceph 1.3. An OSD will be installed on ceph1, ceph2 and ceph3.
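Once the cluster is up, you can verify it from the admin console with python-rados. This is a minimal sketch, assuming /etc/ceph/ceph.conf and the client.admin keyring are in place on ceph1.

```python
import rados

# Connect using the local ceph.conf and default client.admin keyring
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

# Cluster-wide stats reported by the monitors (kb, kb_used, num_objects, ...)
print(cluster.get_cluster_stats())
print(cluster.list_pools())
cluster.shutdown()
```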