OpenShift 3.6 Fast Track: Everything You Need, Nothing You Don’t

Overview

OpenShift Container Platform 3.6 went GA on August 9, 2017. You can read more about the release and new features here. In this article we will set up a standard non-HA environment that is perfect for PoCs or labs.

Before we begin, let's explain OpenShift for those who may be starting their OpenShift journey today. OpenShift is a complete container application build and runtime platform built on Kubernetes (container orchestration) and Docker (container packaging format). Organizations looking to adopt containerization for their applications of course need a lot more than just the technology (Kubernetes and Docker); they need a real platform. OpenShift provides:

- a service catalog for containerized applications
- a huge selection of certified application runtimes and xPaaS services
- a method for building containerized applications (source-to-image)
- centralized application logging and metrics
- autoscaling
- application deployment strategies (blue-green, A/B, canary, rolling)
- integrated Jenkins CI/CD pipelines
- an integrated Docker registry
- load balancing and routes to containerized applications
- a multi-tenant SDN
- security features (SELinux, secrets, security contexts)
- management tooling supporting multiple OpenShift environments (CloudForms)
- persistent storage (built-in Container Native Storage)
- automated deployment tooling based on Ansible
- and much, much more

OpenShift runs on any infrastructure, from bare metal to virtualization to public cloud (Amazon, Google, Microsoft), providing portability across cloud infrastructure for containerized applications. All of these things together are what truly enables organizations to move to DevOps, shorten application release cycles, speed up innovation, scale efficiently, gain independence from infrastructure providers and deliver new capabilities faster and more reliably to their customers.
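To make the Kubernetes foundation a bit more concrete, here is a minimal Python sketch that lists the pods in a project through the Kubernetes-compatible REST API that OpenShift exposes. The master URL, project name and token below are placeholders for your own environment, not values from this article:

#!/usr/bin/env python
# Minimal sketch: list pods in an OpenShift project via the
# Kubernetes-compatible REST API. Master URL, project and token
# are hypothetical; get a real token with `oc whoami -t`.
import requests

MASTER = "https://openshift-master.example.com:8443"
PROJECT = "myproject"
TOKEN = "REPLACE_WITH_TOKEN"

resp = requests.get(
    "{}/api/v1/namespaces/{}/pods".format(MASTER, PROJECT),
    headers={"Authorization": "Bearer {}".format(TOKEN)},
    verify=False,  # lab/PoC only: accept self-signed certificates
)
resp.raise_for_status()

for pod in resp.json()["items"]:
    print(pod["metadata"]["name"], pod["status"]["phase"])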

Continue reading

Keep Your Servers and Run Your Applications Forever with Red Hat Virtualization powered by KVM

Overview

This article was written together with my colleague Götz Rieger. One of the most challenging problems we face today is both absorbing and leading change. Software-defined everything has taken over and is leveling the playing field, eroding long-held competitive advantages; nothing is safe anymore. Develop great applications and thrive, or become irrelevant: that is the mantra facing many organizations. In such environments it is important to innovate constantly, delivering new capabilities at an ever-increasing speed. To do so, new practices (DevOps), values (Agile) and of course technologies (containers) are being adopted.

Today it seems almost everyone is focused on "the new" software-defined whatever, when in reality change happens at different levels and different speeds. Gartner tried to summarize this with "mode 1 vs mode 2", but that trivializes things too far. It comes down to application lifecycles, which dictate the dependency on change. What if certain software doesn't need to change? What if it has a purpose and is already doing its job? What if the software cannot be ported to a new operating platform? What do you do then? The answer, surprisingly, may be nothing. Maybe we let those applications live well beyond their intended support lifecycles. Consider the old programs in The Matrix: some found a way to survive and were not killed. These were also some of the most important, powerful programs.

Virtualization has enabled us to let x86 platforms run essentially forever, or at least well beyond their support lifecycles (hardware and software). Consider outdated COBOL applications on UNIX, or Windows platforms like NT, XP and 2003; these haven't been supported for years. Applications running on these platforms may not be able to migrate for whatever reason, else they would have already done so. If we think about it, this is in fact a very valid use case for virtualization. There are of course other important considerations, like isolation (since these applications are no longer receiving patches), but assuming that is handled, why not? If it ain't broke and doesn't need to change, why fix it?

In this article we will look at how to run Windows NT Server (an operating system that hasn’t been supported since 2004) on KVM and Red Hat Virtualization powered by KVM.

Continue reading

OpenShift Showback Reporting using CloudForms

Overview

One of the most important capabilities of any platform in today's service-driven, pay-as-you-go economy is metering and showback. Without a solid understanding of costs, organizations are in fact unable to provide services. With containers, metering and showback become more challenging. If we think of containers as simply being processes, then we basically need to meter and perform showback at that level of granularity. In addition, since OpenShift uses Kubernetes for container orchestration, there are new concepts to account for. For example, one or more containers run together in what Kubernetes refers to as a Pod. Pods are also extremely dynamic, and their lifetimes can be very short. All of this makes metering and showback anything but straightforward. Thankfully, OpenShift and CloudForms have the solution.
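To see why pod lifetimes matter, here is a purely conceptual Python sketch (not the CloudForms implementation) of pod-level showback. Because pods come and go, usage must be sampled over each pod's actual lifetime; the pods and rates below are hypothetical:

# Conceptual showback sketch; rates and pod samples are made up.
CPU_RATE_PER_CORE_HOUR = 0.05  # hypothetical $ per core-hour
MEM_RATE_PER_GB_HOUR = 0.01    # hypothetical $ per GB-hour

# (pod name, cpu cores, memory GB, hours the pod actually ran)
samples = [
    ("frontend-1-abcde", 0.5, 1.0, 6.00),
    ("frontend-1-fghij", 0.5, 1.0, 0.25),  # short-lived replacement pod
    ("worker-2-klmno",   2.0, 4.0, 12.00),
]

total = 0.0
for name, cores, mem_gb, hours in samples:
    cost = (cores * CPU_RATE_PER_CORE_HOUR + mem_gb * MEM_RATE_PER_GB_HOUR) * hours
    total += cost
    print("{:20s} ${:.4f}".format(name, cost))

print("project showback total: ${:.4f}".format(total))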

Continue reading

Ansible Tower and Satellite: End to End Automation for the Enterprise

Overview

In this article we will look at how Ansible Tower and Red Hat Satellite 6 integrate with one another, providing end-to-end automation for the enterprise. Satellite is a systems management tool that combines several popular open source projects: Foreman (provisioning), Katello (content repository), Pulp (content management), Candlepin (subscription management) and Puppet (configuration management). While Puppet is directly integrated into Satellite, many organizations would rather use Ansible because of its power, simplicity and ease of use.

Ansible Tower integrates with Satellite, allowing organizations to run playbooks against the hierarchy and groups of servers defined in Satellite. Additionally, Ansible Tower can dynamically update its inventories with hosts and their updated facts from Satellite at any time. Hosts show up in Ansible Tower under the groups defined by Satellite. This allows organizations to use Satellite to define their infrastructure, provision hosts and provide patch management, while leveraging Ansible to deploy and manage software configuration. It also gives other teams the ability to run playbooks and automation against the infrastructure defined by Satellite. Personally I am a huge fan of this loose coupling and find this solution much more advantageous than a direct coupling of Ansible in Satellite.
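To make the inventory side more concrete, here is a minimal Python sketch of Ansible's dynamic inventory contract, which is the same idea that underlies Tower's Satellite inventory sync: an executable prints JSON mapping groups to hosts. The Satellite lookup here is faked with static, hypothetical data; a real script would query the Satellite API instead:

#!/usr/bin/env python
# Sketch of a dynamic inventory script; groups/hosts are hypothetical.
import json
import sys

def fetch_from_satellite():
    # Placeholder for a real Satellite API query.
    return {
        "web_servers": ["web01.example.com", "web02.example.com"],
        "db_servers": ["db01.example.com"],
    }

if __name__ == "__main__":
    inventory = {g: {"hosts": h} for g, h in fetch_from_satellite().items()}
    inventory["_meta"] = {"hostvars": {}}  # per-host facts would go here
    if "--list" in sys.argv:
        # Ansible calls the script with --list for the full inventory.
        print(json.dumps(inventory, indent=2))
    else:
        # With _meta present, --host is never called; answer empty anyway.
        print(json.dumps({}))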

Continue reading

Ansible Tower Installation and Configuration Guide

Overview

In this article we will set up and configure Ansible Tower on Red Hat Enterprise Linux (RHEL). By now, unless you are hiding under a rock, you have heard about Ansible. Ansible is quickly becoming the standard automation language used in enterprises for automating everything. Ansible is powerful, simple and easy to learn, and these of course are the main reasons it is becoming the standard everywhere. Ansible has two components: Ansible core and Ansible Tower. Core provides the Ansible runtime that executes playbooks (YAML files defining tasks and roles) against inventories (groups of hosts). Ansible Tower provides management, visibility, job scheduling, credentials, RBAC, auditing / compliance and more. Installing Ansible Tower also installs Ansible core, so you kill two birds with one stone.
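To illustrate the playbook-versus-inventory split, here is a hedged Python sketch that drives Ansible core through the ansible-playbook CLI. The hostnames are hypothetical, and --check keeps everything to a dry run:

#!/usr/bin/env python
# Sketch: write a tiny inventory and playbook, then run them with
# ansible-playbook. Hosts are hypothetical; assumes Ansible is installed.
import subprocess
import tempfile

INVENTORY = """\
[web]
web01.example.com
web02.example.com
"""

PLAYBOOK = """\
---
- hosts: web
  tasks:
    - name: verify connectivity
      ping:
"""

with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as inv:
    inv.write(INVENTORY)
with tempfile.NamedTemporaryFile("w", suffix=".yml", delete=False) as pb:
    pb.write(PLAYBOOK)

# --check performs a dry run so nothing changes on the target hosts.
subprocess.call(["ansible-playbook", "-i", inv.name, pb.name, "--check"])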

Continue reading

Explaining OpenStack Cinder Types and Scheduler

Overview

OpenStack Cinder is responsible for handling block storage in the context of OpenStack. Cinder provides a standard API and interface that allows storage vendors to create their own drivers, integrating their storage capabilities into OpenStack in a consistent way. Each storage pool exposed to Cinder is a backend, and you can have many storage backends; you can even have many backends of the same kind. In this article we will look at two advanced features Cinder provides: types and the scheduler.

Cinder types essentially allow us to label Cinder storage backends. This allows for building out storage services that have expected characteristics and capabilities. The Cinder driver exposes those storage capabilities to Cinder.

The Cinder scheduler is responsible for deciding where to create Cinder volumes when we have more than one of the same kind of storage backend. It does so by evaluating filter rules to identify the most appropriate storage backend. More about filter rules can be found here.
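Here is a purely conceptual Python sketch (not the actual OpenStack code) of what the filter scheduler does: filter out backends that cannot satisfy the request, then weigh the survivors. The backend names, capabilities and request below are hypothetical:

# Toy filter scheduler; backends and request are made up.
backends = [
    {"name": "ssd-1", "volume_backend_name": "ssd", "free_capacity_gb": 500},
    {"name": "ssd-2", "volume_backend_name": "ssd", "free_capacity_gb": 1200},
    {"name": "nfs-1", "volume_backend_name": "nfs", "free_capacity_gb": 4000},
]

# A volume type's extra specs select matching backends, e.g.
# `cinder type-key gold set volume_backend_name=ssd`.
request = {"volume_backend_name": "ssd", "size_gb": 100}

# Filter phase: drop backends that don't match the type or lack space.
candidates = [
    b for b in backends
    if b["volume_backend_name"] == request["volume_backend_name"]
    and b["free_capacity_gb"] >= request["size_gb"]
]

# Weigh phase: by default Cinder's capacity weigher prefers the backend
# with the most free space.
winner = max(candidates, key=lambda b: b["free_capacity_gb"])
print("volume scheduled to:", winner["name"])  # -> ssd-2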

Continue reading

CloudForms Installation and Configuration Guide for Red Hat Virtualization

Overview

In this article we will deploy CloudForms 4.2 on Red Hat Virtualization (RHV). We will also show how to configure CloudForms to properly manage an RHV cluster, its hosts and its virtual machines.

Before you begin, an RHV cluster is needed. If you haven't set one up, here is a guide on how to build a basic two-node RHV cluster.

Continue reading