OpenShift: Accessing External Services using Egress Router


Overview

Egress traffic is traffic going from OpenShift pods to external systems outside of OpenShift. There are two main options for enabling egress traffic: allow access to external systems from the OpenShift node IPs, or use an egress router. In enterprise environments egress routers are often preferred, since they allow granular access from a specific pod, group of pods, or project to an external system or service. Access via node IP means that all pods running on a given node can access external systems.
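To make the egress router option more concrete, below is a rough sketch of what an egress router pod might look like, created here with the kubernetes Python client. The pod name, project, IP addresses, and destination are placeholders; the annotation, image, and EGRESS_* variables follow the OpenShift 3.x egress router documentation, and the exact pod layout varies between OpenShift versions.

# Minimal sketch: creating an egress router pod with the kubernetes Python client.
# The names, IPs, and destination below are placeholders; the annotation, image,
# and EGRESS_* variables are taken from the OpenShift 3.x egress router docs.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

egress_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "egress-router-example",           # placeholder name
        "labels": {"name": "egress-router-example"},
        # The macvlan annotation gives the pod its own presence on the node network.
        "annotations": {"pod.network.openshift.io/assign-macvlan": "true"},
    },
    "spec": {
        "containers": [{
            "name": "egress-router",
            "image": "registry.access.redhat.com/openshift3/ose-egress-router",
            "securityContext": {"privileged": True},
            "env": [
                {"name": "EGRESS_SOURCE", "value": "192.168.12.99"},      # unused IP on the node subnet
                {"name": "EGRESS_GATEWAY", "value": "192.168.12.1"},      # node subnet gateway
                {"name": "EGRESS_DESTINATION", "value": "203.0.113.25"},  # the external service
            ],
        }],
    },
}

client.CoreV1Api().create_namespaced_pod(namespace="my-project", body=egress_pod)

A regular Service pointing at this pod is what the other pods in the project would use, so their traffic leaves the cluster from the egress router's dedicated source IP rather than from whichever node they happen to be running on.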

Continue reading

OpenShift 3.6 Fast Track: Everything You Need, Nothing You Don’t


Overview

OpenShift Container Platform 3.6 went GA on August 9, 2017. You can read more about the release and new features here. In this article we will set up a standard non-HA environment that is perfect for PoCs or labs. Before we begin, let’s explain OpenShift for those that may be starting their OpenShift journey today. OpenShift is a complete container application build and runtime platform built on Kubernetes (container orchestration) and Docker (container packaging format). Organizations looking to adopt containerization for their applications need much more than just the technology (Kubernetes and Docker); they need a real platform. OpenShift provides a service catalog for containerized applications, a large selection of certified application runtimes and xPaaS services, a method for building containerized applications (source-to-image), centralized application logging, metrics, autoscaling, application deployment strategies (blue-green, A/B, canary, rolling), integrated Jenkins CI/CD pipelines, an integrated Docker registry, load balancing and routes to containerized apps, a multi-tenant SDN, security features (SELinux, secrets, security contexts), management tooling that supports multiple OpenShift environments (CloudForms), persistent storage (built-in Container Native Storage), automated deployment tooling based on Ansible, and much, much more. OpenShift runs on any infrastructure, from bare metal to virtualization to public cloud (Amazon, Google, Microsoft), providing portability for containerized applications across cloud infrastructure. All of these things together are what truly enables organizations to move to DevOps, accelerate application release cycles, speed up innovation, scale efficiently, gain independence from infrastructure providers, and deliver new capabilities to their customers faster and more reliably.

Continue reading

OpenShift Showback Reporting using CloudForms


Overview

One of the most important capabilities of any platform in today’s service-driven, pay-as-you-go economy is metering and showback. Without a solid understanding of costs, organizations simply cannot offer services effectively. With containers, metering and showback become more challenging. If we think of containers as simply being processes, then we need to meter and perform showback at that level of granularity. In addition, since OpenShift uses Kubernetes for container orchestration, there are new concepts to account for. For example, one or more containers run together in what Kubernetes refers to as a pod, and pods are extremely dynamic, often with very short lifetimes. All of this makes metering and showback anything but straightforward. Thankfully OpenShift and CloudForms have the solution.

Continue reading

Storage for Containers Using Ceph RBD – Part IV


Overview

In this article we will look at how to integrate Ceph RBD (RADOS Block Device) with Kubernetes and OpenShift. Ceph is a scale-out, software-defined storage system that provides block, file, and object storage, focusing primarily on cloud storage use cases. Providing storage for Kubernetes and OpenShift is just one of many use cases that fit very well with Ceph.

Continue reading

Storage for Containers using NetApp SolidFire – Part VI


Overview

In this article we will look at how you can configure and dynamically provision NetApp SolidFire storage for containerized applications in a Kubernetes/OpenShift environment. This article is a contribution from Kapil Arora (Cloud Platform Architect @NetApp).

NetApp recently released an open source project known as Trident, the first external storage provisioner for Kubernetes leveraging on-premises storage.

Trident enables the use of the new storage class concept in Kubernetes, acting as a provisioning controller that watches for PVCs (persistent volume claims) and creates volumes for them on demand.

This means that when a pod requests storage through a storage class that Trident is responsible for, Trident provisions a volume that meets those requirements and makes it available to the pod in real time.
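As a rough sketch of that workflow, the snippet below requests storage from a Trident-managed storage class using the kubernetes Python client; the class name solidfire-gold, the claim name, the project, and the size are placeholders, not values from this article.

# Minimal sketch: requesting storage from a Trident-backed storage class.
# The storage class name "solidfire-gold", claim name, and size are placeholders.
from kubernetes import client, config

config.load_kube_config()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-claim"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="solidfire-gold",   # hypothetical Trident-managed class
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

# Trident sees the new claim and provisions a matching volume on the backend.
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="my-project", body=pvc
)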

To learn more, check out these Trident videos.

Continue reading

Storage for Containers using NetApp ONTAP NAS – Part V


Overview

In this article we will look at how you can configure and dynamically provision ONTAP NAS Storage for containerized applications in a Kubernetes/OpenShift environment.

This article is a contribution from Kapil Arora (Cloud Platform Architect @NetApp).

NetApp recently released an open source project known as Trident, the first external storage provisioner for Kubernetes leveraging on-premises storage.

Trident enables the use of the new storage class concept in Kubernetes, acting as a provisioning controller that watches for PVCs (persistent volume claims) and creates volumes for them on demand.

This means that when a pod requests storage through a storage class that Trident is responsible for, Trident provisions a volume that meets those requirements and makes it available to the pod in real time.
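To illustrate the last step, making the volume available to the pod, here is a minimal sketch of a pod mounting such a claim, again using the kubernetes Python client; the pod name, image, project, and claim name are placeholders and assume a claim was already created against a Trident-managed storage class.

# Minimal sketch: a pod consuming a dynamically provisioned claim.
# Names and the image are placeholders; the claim is assumed to exist already.
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="registry.access.redhat.com/rhel7",   # placeholder image
            command=["sleep", "3600"],
            volume_mounts=[client.V1VolumeMount(
                name="data", mount_path="/var/lib/data")],
        )],
        volumes=[client.V1Volume(
            name="data",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="demo-claim"),
        )],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="my-project", body=pod)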

To learn more, check out these Trident videos.

Continue reading

OpenShift Enterprise 3.4: all-in-one Lab Environment


Overview

In this article we will set up an OpenShift Enterprise 3.4 all-in-one configuration.

OpenShift has several different roles: masters, nodes, etcd, and load balancers. An all-in-one setup means running all services on a single system. Since we are only using a single system, a load balancer or HAProxy won’t be configured. If you would like to read more about OpenShift, I can recommend the following:

Continue reading