OpenStack 13 (Queens) Lab Installation and Configuration Guide for Hetzner Root Servers


Overview

In this article we will focus on installing and configuring OpenStack Queens using RDO and the packstack installer. RDO is the community platform around Red Hat's enterprise OpenStack distribution. It allows you to test the latest OpenStack capabilities on a stable platform such as Red Hat Enterprise Linux (RHEL) or CentOS. This guide will take you through setting up a Hetzner root server, preparing the environment for OpenStack, installing the OpenStack Queens release, adding a floating IP subnet through OVS, and configuring networking, security groups, flavors, images and other OpenStack-related services. The outcome is a working OpenStack environment based on the Queens release that you can use as a baseline for testing your applications against OpenStack capabilities. The installation creates an all-in-one deployment; however, you can also use this guide to create a multi-node deployment.
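
For orientation, the core install flow follows the RDO quickstart pattern sketched below (a minimal sketch on CentOS 7; the full guide covers the Hetzner-specific preparation and configuration in detail):

    # Packstack works best without NetworkManager and firewalld,
    # so the RDO quickstart recommends disabling them first
    sudo systemctl stop firewalld NetworkManager
    sudo systemctl disable firewalld NetworkManager
    sudo systemctl enable network
    sudo systemctl start network

    # Enable the RDO Queens repository and install packstack
    sudo yum install -y centos-release-openstack-queens
    sudo yum update -y
    sudo yum install -y openstack-packstack

    # All-in-one deployment on the local host
    sudo packstack --allinone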
Continue reading

Satellite on OpenStack 1-2-3: Systems Management in the Cloud


Overview

In this article we will explore an important part of day 2 operations in OpenStack, or any IaaS: systems management. There are two ways to maintain applications: immutable or lifecycle. Satellite is a product from Red Hat that focuses on lifecycle management, specifically the deployment, updating, patching and configuration of Red Hat Enterprise Linux (RHEL), as well as the applications running on top, throughout the entire lifecycle. We will discuss the value Satellite brings to OpenStack and why systems management is a key part of day 2 cloud operations, investigate the Satellite architecture and how it applies to OpenStack, and finally go through a hands-on deployment of Satellite on OpenStack, even deploying an instance and automatically connecting it to Satellite, all using Ansible.

The Value of Satellite in OpenStack

Satellite is the second product Red Hat created after RHEL. It has been around for over 10 years and recently went through a major re-architecture from the ground up to address cloud. Red Hat customers have used Satellite to create a standard operating environment (SOE) for RHEL and the applications that run on RHEL for 10+ years. Satellite provides the ability to create various content views and bring them together in a composite content view (a group of content views). This allows us to group content (RPMs, configuration management, tar files, whatever else) and, most importantly, version it. Once we can group software and version it we can start thinking about release management across a lifecycle environment. A lifecycle environment is typically something similar to the holy trinity: development, test and production. The versions of software for our OS and applications of course vary across those stages; you don't want to update software in production without testing it in development or test first, right?
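
To make that concrete, creating a lifecycle environment path and a versioned content view with the hammer CLI might look roughly like the sketch below (organization, environment and content view names are illustrative, not the ones used in the article):

    # Chain lifecycle environments after the built-in Library
    hammer lifecycle-environment create --name "Development" --prior "Library" --organization "Default Organization"
    hammer lifecycle-environment create --name "Test" --prior "Development" --organization "Default Organization"
    hammer lifecycle-environment create --name "Production" --prior "Test" --organization "Default Organization"

    # Create a content view, publish version 1.0 and promote it to Development
    hammer content-view create --name "RHEL7-SOE" --organization "Default Organization"
    hammer content-view publish --name "RHEL7-SOE" --organization "Default Organization"
    hammer content-view version promote --content-view "RHEL7-SOE" --version 1.0 \
        --to-lifecycle-environment "Development" --organization "Default Organization"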

Continue reading

OpenShift: Getting Started with the Service Broker


Overview

In this article we will look at the OpenShift service broker, understand how to integrate external services into OpenShift and even create a custom broker. Before we begin, a big thanks to Marek Jelen and Paul Morie, Red Hatters who both helped me understand the service broker in greater detail.

Obviously if you are reading this article you already understand microservices, containers and why it is all so incredibly awesome on OpenShift. Of course everything should be in a container, but unfortunately it is going to take a while to get there. As we start dissecting and breaking down the monolithic architectures of the past, there will likely be a mix of lightweight services running in containers on OpenShift and other, heavier services (databases, ESBs, etc.) running outside. In addition, while the service catalog in OpenShift is vast, even allowing you to add your own custom services for anything that can run in OpenShift as a container using a template, there will be a need, especially with public cloud, to connect to external services. Both of these use cases, on-premise external services and off-premise cloud services, made it obvious that a service broker and a more robust service catalog were needed. Originally OpenShift did not have a service broker, so you couldn't easily consume external services. All that existed was the service catalog and templates, so every service had to be a container running on OpenShift. Thankfully other companies also saw a need for an open service abstraction, and the Open Service Broker API was born as an open source project.
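
To get a feel for the abstraction, the Open Service Broker API is a small REST contract every broker must implement. Querying a broker's catalog and provisioning a service instance could look roughly like the sketch below (the broker URL, credentials, instance ID and GUIDs are placeholders):

    # List the services and plans the broker advertises
    curl -u user:password \
         -H "X-Broker-API-Version: 2.13" \
         https://my-broker.example.com/v2/catalog

    # Provision a service instance from one of the advertised plans
    curl -u user:password \
         -H "X-Broker-API-Version: 2.13" \
         -H "Content-Type: application/json" \
         -X PUT https://my-broker.example.com/v2/service_instances/6f1a3d9e \
         -d '{"service_id": "<service-guid>", "plan_id": "<plan-guid>",
              "organization_guid": "org", "space_guid": "space"}'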

Continue reading

Disaster Recovery with Containers? You Bet!


Overview

In this article we will discuss the benefits containers bring to business continuance, reveal concepts for applying containers to disaster recovery and of course show disaster recovery of a live database between production and DR OpenShift environments. Business continuance is all about maintaining critical business functions during and after a disaster. It defines two main criteria: recovery point objective (RPO) and recovery time objective (RTO). RPO amounts to how much data loss is tolerable and RTO to how quickly services must be restored when a disaster occurs. Disaster recovery outlines the processes as well as the technology for how an organization responds to a disaster; it can be viewed as the implementation of RPO and RTO. Most organizations today have DR capabilities, but there are many challenges:

  • Cost – DR usually at least doubles the cost.
  • Efficiency – DR requires regular testing, and in the event of a disaster resources must be available. This leads to resources sitting idle 99.9% of the time.
  • Complexity – Updating applications is complex enough, but DR requires a complete redeployment, and the DR side almost never mirrors production due to cost.
  • Outdated – Business continuance only deals with one aspect, disaster recovery, but cloud-native applications are active/active, so to be effective today business continuance architectures must cover both DR and multi-site.
  • Slow – DR is often not 100% automated, and recovery frequently depends on manual procedures that may not be up to date or even tested with the latest application deployment.

I would take these challenges even further and suggest that for many organizations business continuance and DR are nothing more than a false safety net. It costs a fortune and, in the event of a true disaster, probably won't be able to deliver RPO and RTO for all critical applications. How could it, when DR is not part of the continuous deployment pipeline and is not tested with each application update? How could it, given the level of complexity and scale that exists today, without 100% automation?

Continue reading

OpenShift on OpenStack 1-2-3: Bringing IaaS and PaaS Together


Overview

In this article we will explore why you should consider tackling IaaS and PaaS together. Many organizations gave up on OpenStack during its hype phase, but in my view it is time to reconsider the IaaS strategy. Two main factors are really pushing a re-emergence of interest in OpenStack: containers and cloud.

Containers require very flexible, software-defined infrastructure and are changing the application landscape fast. Remember when we had the discussions about pets vs cattle? The issue with OpenStack during its hype phase was that the workloads simply didn't exist within most organizations, but now containers are changing that from a platform perspective. Containers need to be orchestrated, and the industry has settled on Kubernetes for that purpose. In order to run Kubernetes you need quite a lot of flexibility at scale on the infrastructure level. You must be able to provide solid software-defined networking, compute, storage, load balancing, DNS, authentication, orchestration, basically everything, and do so at the click of a button. Yeah, we can all do that, right?

If we think about IT, there are two types of personas. One feels IT is generic, that 80% is good enough, and that it is a light switch: on or off. This persona has no reason whatsoever to deal with IaaS and should just go to the public cloud, if not already there. In other words, OpenStack makes no sense. The other persona feels IT adds compelling value to their business, and that going beyond 80% provides distinct business advantages. Anyone can go to public cloud, but if you can turn IT into a competitive advantage then there may actually be a purpose for it. Unfortunately, with the way many organizations go about IT today, that is not really viable unless something dramatic happens. This brings me back to OpenStack. It is the only way an organization can provide the capabilities a public cloud offers while also matching price and performance and providing a competitive advantage. If we cannot achieve the flexibility of public cloud, the consumption model and the cost effectiveness, and provide compelling business advantage, then we ought to just give up, right?

I also find it interesting that some organizations, even those that started in the public cloud, are starting to see value in build-your-own. Dropbox, for example, originally started on AWS and S3. Over the last few years they built their own object storage solution, one that provided more value and saved 75 million dollars over two years. They also did so with a fairly small team. I certainly am not advocating doing everything yourself; I am just saying that we need to make a decision: does IT provide compelling business value? Can you do it for your business better than the generic, level playing field known as public cloud? If so, you really ought to be looking into OpenStack and using the momentum behind containers to bring about real change.

Continue reading

Security and Vulnerability Scanning of Container Images


Overview

In this article we will focus on security and vulnerability strategies for scanning container images. I know, in the past security was always viewed as an impediment to the speed of production, but hopefully those days are behind us. Having a security breach, as you probably know, is one of the most costly things an organization can endure. It takes years to build up a reputation and only seconds to tear it down completely.

I still see many organizations ignoring container images completely today, because they are often misunderstood. Exactly what is inside a container image? Who should be responsible for it? How does it map to what we have done on servers? Security teams often don't understand containers or even know what questions to ask. We need to help them, and it is our duty to do so. Unfortunately there are not very many tools that can help in a broad sense. Containers are new and evolving at breakneck speed. Couple that with the fact that security can negatively impact the speed of a DevOps team (if not done right), and it is no wonder we are, in many cases, at square one.

Before we dive into more detail, let us review important security aspects of containers.

  • Containers can have various packaging formats; Docker is the most popular today
  • Containers are immutable and as such are image based
  • Containers are never updated; any change always results in a new container
  • Container images consist of layers (base, runtime, application)
  • Container images require shared responsibility between dev and ops
  • Containers don’t contain; they are, in fact, just processes

For more information I recommend reading about the 10 layers of container security.
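
As a small taste of what image scanning can look like in practice, here is a minimal sketch using the OpenSCAP oscap-docker helper against a locally pulled image (tool availability, image name and content paths are illustrative; the article itself covers the broader scanning strategy):

    # Assumes the oscap-docker helper from the OpenSCAP project is installed
    # and the image has already been pulled locally.

    # Scan the image for known CVEs using the vendor's CVE feed
    sudo oscap-docker image-cve registry.access.redhat.com/rhel7:latest

    # Evaluate the image against a compliance profile (data stream path is illustrative)
    sudo oscap-docker image registry.access.redhat.com/rhel7:latest \
         xccdf eval --profile xccdf_org.ssgproject.content_profile_standard \
         /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml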

Continue reading

OpenStack 12 (Pike) Lab Installation and Configuration Guide with Hetzner Root Servers


Overview

In this article we will focus on installing and configuring OpenStack Pike using RDO and the packstack installer. RDO is the community platform around Red Hat's enterprise OpenStack distribution. It allows you to test the latest OpenStack capabilities on a stable platform such as Red Hat Enterprise Linux (RHEL) or CentOS. This guide will take you through setting up a Hetzner root server, preparing the environment for OpenStack, installing the OpenStack Pike release, adding a floating IP subnet through OVS, and configuring networking, security groups, flavors, images and other OpenStack-related services. The outcome is a working OpenStack environment based on the Pike release that you can use as a baseline for testing your applications against OpenStack capabilities. The installation creates an all-in-one deployment; however, you can also use this guide to create a multi-node deployment.
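
As a preview of the floating IP piece, creating an external provider network and subnet with the OpenStack CLI might look roughly like the sketch below (the physical network name extnet is assumed to be mapped to the OVS external bridge, and the addresses are placeholders for the subnet Hetzner assigns you):

    # Create an external (provider) network mapped to the OVS external bridge
    openstack network create external --external \
        --provider-network-type flat --provider-physical-network extnet

    # Add the public subnet that floating IPs will be allocated from
    openstack subnet create external_subnet --network external \
        --subnet-range 203.0.113.0/28 \
        --allocation-pool start=203.0.113.2,end=203.0.113.14 \
        --gateway 203.0.113.1 --no-dhcp

    # Create a router, set its gateway to the external network and allocate a floating IP
    openstack router create router1
    openstack router set router1 --external-gateway external
    openstack floating ip create external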
Continue reading