Disaster Recovery with Containers? You Bet!

Overview

In this article we will discuss the benefits containers bring to business continuance, present concepts for applying containers to disaster recovery and, of course, demonstrate disaster recovery of a live database between production and DR OpenShift environments. Business continuance is all about maintaining critical business functions during and after a disaster. It defines two main criteria: recovery point objective (RPO) and recovery time objective (RTO). RPO measures how much data loss is tolerable; RTO measures how quickly services must be restored when a disaster occurs. Disaster recovery outlines the processes as well as the technology an organization uses to respond to a disaster, and can be viewed as the implementation of RPO and RTO (a simple check against such targets is sketched after the list below). Most organizations today have DR capabilities, but there are many challenges.

  • Cost – DR usually at least doubles the cost.
  • Efficiency – DR requires regular testing, and in the event of a disaster, resources must be available. This leaves resources sitting idle 99.9% of the time.
  • Complexity – Updating applications is complex enough, but DR requires a complete redeployment, and the DR side almost never mirrors production due to cost.
  • Outdated – Business continuance traditionally deals with only one aspect, disaster recovery, but cloud-native applications are active/active. To be effective today, business continuance architectures must cover both DR and multi-site.
  • Slow – DR is often not 100% automated, and recovery frequently depends on manual procedures that may not be up to date or even tested against the latest application deployment.
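
To make RPO and RTO concrete, here is a small Python sketch that checks hypothetical measurements against hypothetical targets; all of the numbers are illustrative assumptions, not figures from the article.

```python
from datetime import timedelta

# Hypothetical business-continuance targets for one application tier.
rpo_target = timedelta(minutes=15)  # maximum tolerable data loss
rto_target = timedelta(hours=1)     # maximum tolerable time to restore service

# Hypothetical measurements: worst-case replication lag and the failover
# time observed during the last DR drill.
worst_case_replication_lag = timedelta(minutes=5)
last_drill_recovery_time = timedelta(minutes=40)

print("RPO met:", worst_case_replication_lag <= rpo_target)
print("RTO met:", last_drill_recovery_time <= rto_target)
```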

I would take these challenges even further and suggest that for many organizations, business continuance and DR amount to nothing more than a false safety net. It costs a fortune and, in the event of a true disaster, probably won’t deliver the required RPO and RTO for all critical applications. How could it, when DR is not part of the continuous deployment pipeline and tested with each application update? How could it, given today’s level of complexity and scale, without 100% automation?

Continue reading

Containers in Large IT Enterprises

Overview

As this will be the last article of 2017, I wanted to do something different and step away from my typical how-to guides (rest assured, I will continue writing them in 2018). Over the past year I have engaged in many conversations with large organizations looking to adopt or expand their container footprint. In this article I will share what I have learned from those discussions: the impact of containers on large IT organizations, the difference between container technology and a container platform, the integration points a container platform has with the existing IT landscape and, finally, some high-level architectural design ideas.

This article should serve as a good starting point for IT organizations trying to understand how to go about adopting container technology in their organization.

Continue reading

OpenShift 3.6 Fast Track: Everything You Need, Nothing You Don’t

Overview

OpenShift Container Platform 3.6 went GA on August 9, 2017. You can read more about the release and new features here. In this article we will set up a standard non-HA environment that is perfect for PoCs or labs. Before we begin, let’s explain OpenShift for those that may be starting their OpenShift journey today. OpenShift is a complete container application build and runtime platform built on Kubernetes (container orchestration) and Docker (container packaging format). Organizations looking to adopt containerization for their applications of course need a lot more than just technology (Kubernetes and Docker); they need a real platform. OpenShift provides:

  • A service catalog for containerized applications and a huge selection of already certified application runtimes plus xPaaS services.
  • A method for building containerized applications (source-to-image) and integrated Jenkins CI/CD pipelines.
  • Centralized application logging, metrics and autoscaling.
  • Application deployment strategies (Blue-Green, A/B, Canary, Rolling).
  • An integrated Docker registry, load balancing and routes to containerized applications, and a multi-tenant SDN.
  • Security features (SELinux, secrets, security contexts).
  • Management tooling supporting multiple OpenShift environments (CloudForms).
  • Persistent storage (built-in Container Native Storage).
  • Automated deployment tooling based on Ansible, and much, much more.

OpenShift runs on any infrastructure, from bare-metal to virtualization to public cloud (Amazon, Google, Microsoft), providing portability for containerized applications across cloud infrastructures. All of these capabilities together are what truly enable organizations to move to DevOps, increase application release cycles, speed up innovation, scale efficiently, gain independence from infrastructure providers and deliver new capabilities faster and more reliably to their customers.
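
As a small illustration of the deployment strategies mentioned above, here is a minimal sketch using the Kubernetes Python client to create a Deployment with a rolling update strategy. The demo namespace, names and sample image are assumptions, and OpenShift 3.6 itself predates the apps/v1 API used here, so treat this as illustrative rather than version-exact.

```python
from kubernetes import client, config

# Load credentials from your current kubeconfig context (assumed to point
# at a cluster you can reach).
config.load_kube_config()

# A Deployment that rolls out changes one pod at a time with zero
# unavailable replicas, i.e. a basic Rolling strategy.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-app"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "hello"}),
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_surge=1, max_unavailable=0),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello"}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="hello",
                                   image="openshift/hello-openshift"),
            ]),
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="demo", body=deployment)
```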

Continue reading

Storage for Containers using NetApp SolidFire – Part VI

Overview

In this article we will look at how you can configure and dynamically provision NetApp SolidFire storage for containerized applications in a Kubernetes/OpenShift environment. This article is the work of Kapil Arora (Cloud Platform Architect @NetApp).

NetApp recently released an open source project known as Trident, the first external storage provisioner for Kubernetes leveraging on-premises storage.

Trident enables the use of the new storage class concept in Kubernetes, acting as a provisioning controller that watches for PVCs (persistent volume claims) and creates volumes for them on demand.

This means that when a pod requests storage through a storage class that Trident is responsible for, Trident provisions a volume that meets the requirements and makes it available to the pod in real time.
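
As a rough sketch of what that request looks like in practice, the following uses the Kubernetes Python client to create a PVC against a Trident-managed storage class; the class name solidfire-gold and the namespace are illustrative assumptions, defined by however your Trident backend is configured.

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a reachable Kubernetes/OpenShift cluster

# Claim 10Gi from a (hypothetical) Trident-backed storage class; Trident
# watches for the claim and provisions a matching SolidFire volume.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="solidfire-gold",  # assumed name from your Trident config
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="demo", body=pvc)
```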

To learn more, check out these Trident videos.

Continue reading

Storage for Containers using NetApp ONTAP NAS – Part V

Overview

In this article we will look at how you can configure and dynamically provision ONTAP NAS Storage for containerized applications in a Kubernetes/OpenShift environment.

This article is the work of Kapil Arora (Cloud Platform Architect @NetApp).

NetApp recently released an open source project known as Trident, the first external storage provisioner for Kubernetes leveraging on-premises storage.

Trident enables the use of the new storage class concept in Kubernetes, acting as a provisioning controller that watches for PVCs (persistent volume claims) and creates volumes for them on demand.

This means that when a pod requests storage through a storage class that Trident is responsible for, Trident provisions a volume that meets the requirements and makes it available to the pod in real time.
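
To show the other half of the workflow, here is a minimal sketch of a pod consuming such a claim; the claim name db-data and the namespace are assumptions carried over for illustration, with Trident binding the claim to a dynamically created ONTAP volume before the pod starts.

```python
from kubernetes import client, config

config.load_kube_config()

# Mount the previously created claim into a pod at /data; the kubelet
# attaches the bound ONTAP NAS volume when the pod is scheduled.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-storage"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="busybox",
            command=["sh", "-c", "sleep 3600"],
            volume_mounts=[client.V1VolumeMount(name="data",
                                                mount_path="/data")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="db-data"),  # assumed claim name
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="demo", body=pod)
```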

To learn more, check out these Trident videos.

Continue reading

Storage for Containers using Container Native Storage – Part III

Overview

In this article we will focus on a new area of storage for containers called Container Native Storage. This article is a collaboration between Daniel Messer (Technical Marketing Manager Storage @RedHat) and Keith Tenzer (Sr. Solutions Architect @RedHat).

So What is Container Native Storage?

Essentially, Container Native Storage (CNS) is a hyper-converged approach to storage and compute in the context of containers. Each container host, or node, supplies both compute and storage, allowing storage to be completely integrated into, or absorbed by, the platform itself. A software-defined storage system running in containers on the platform consumes disks provided by the nodes and presents a cluster-wide storage abstraction layer. The software-defined storage system then provides capabilities such as high availability, dynamic provisioning and general storage management. This enables DevOps from a storage perspective, allows storage to grow with the container platform and delivers a high level of efficiency.
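
To make that concrete, here is a minimal sketch, using the Kubernetes Python client, of a storage class backed by the in-cluster GlusterFS dynamic provisioner; the class name and heketi REST endpoint are assumptions for illustration.

```python
from kubernetes import client, config

config.load_kube_config()

# A storage class whose volumes are carved out of the hyper-converged
# GlusterFS pool via its heketi management API.
sc = client.V1StorageClass(
    metadata=client.V1ObjectMeta(name="cns-gluster"),
    provisioner="kubernetes.io/glusterfs",
    parameters={"resturl": "http://heketi-storage.example.com:8080"},  # assumed endpoint
)
client.StorageV1Api().create_storage_class(body=sc)
```

Any PVC that names this class is then provisioned dynamically against the storage the nodes themselves supply.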

Continue reading

Storage for Containers using Gluster – Part II

Overview

This article is a collaboration between Daniel Messer (Technical Marketing Manager Storage @RedHat) and Keith Tenzer (Solutions Architect @RedHat).

Gluster as Container-Ready Storage (CRS)

In this article we will look at one of the first storage options for containers and how to deploy it. Support for GlusterFS has been in Kubernetes and OpenShift for some time. GlusterFS is a good fit because it is available across all deployment options: bare-metal, virtual, on-premises and public cloud. The more recent option of running GlusterFS itself in containers will be discussed later in this series.

GlusterFS is a distributed filesystem at heart, with a native protocol (GlusterFS) and support for various other protocols (NFS, SMB, …). For integration with OpenShift, nodes use the native protocol via FUSE to mount GlusterFS volumes on the node itself and then bind-mount them into the target containers. OpenShift/Kubernetes has a native provisioner that implements requesting, releasing and (un)mounting GlusterFS volumes.
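
In the Container-Ready Storage model the Gluster cluster lives outside OpenShift, so volumes are typically pre-created on the Gluster side and registered statically. The following is a minimal sketch with the Kubernetes Python client; the node IPs, volume name gv0 and capacity are placeholders.

```python
from kubernetes import client, config

config.load_kube_config()

# Endpoints telling the cluster where the external Gluster nodes live
# (a port value is required by the API but not used by the Gluster mount).
endpoints = {
    "apiVersion": "v1",
    "kind": "Endpoints",
    "metadata": {"name": "glusterfs-cluster"},
    "subsets": [{
        "addresses": [{"ip": "192.0.2.10"}, {"ip": "192.0.2.11"}],  # placeholder IPs
        "ports": [{"port": 1}],
    }],
}
client.CoreV1Api().create_namespaced_endpoints(namespace="demo", body=endpoints)

# A persistent volume backed by an existing Gluster volume named gv0.
pv = {
    "apiVersion": "v1",
    "kind": "PersistentVolume",
    "metadata": {"name": "gluster-pv0"},
    "spec": {
        "capacity": {"storage": "5Gi"},
        "accessModes": ["ReadWriteMany"],
        "glusterfs": {"endpoints": "glusterfs-cluster", "path": "gv0"},
    },
}
client.CoreV1Api().create_persistent_volume(body=pv)
```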

Continue reading