Happy New Year, as this is the first post of 2021! 2020 was obviously a challenging year; my hope is to have more time to devote to blogging in 2021. Please reach out and let me know what topics would be most helpful.
In this article we will walk through an OpenShift deployment using the IPI (Installer Provisioned Infrastructure) method on AWS. OpenShift offers two possible deployment methods: IPI (as mentioned) and UPI (User Provisioned Infrastructure). The difference is the degree of automation and customization. IPI will deploy not only OpenShift but also all infrastructure components and configurations. IPI is supported in various environments including AWS, Azure, GCE, VMware vSphere and even bare metal. IPI is tightly coupled with the infrastructure layer, whereas UPI is not: UPI allows the most customization and will work anywhere.
Ultimately, the choice between IPI and UPI is usually dictated by requirements. My view is that unless you have special requirements (like a stretch cluster, or specific infrastructure integrations that require UPI), you should always default to IPI. It is far better to have the vendor, in this case Red Hat, own more of the share of responsibility and ensure proper, tested infrastructure configurations are deployed and maintained throughout the cluster lifecycle.
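To give a feel for how little the IPI method asks of you, the AWS flow boils down to a few `openshift-install` commands. This is a sketch only; the directory name is an example, and the installer will prompt for your own AWS credentials, base domain, cluster name and pull secret.

```shell
# Generate an install config interactively (prompts for AWS credentials,
# base domain, cluster name and pull secret). "mycluster" is an example.
openshift-install create install-config --dir=mycluster

# Kick off the IPI deployment; the installer provisions the VPC, subnets,
# load balancers, EC2 instances and DNS records, then installs OpenShift.
openshift-install create cluster --dir=mycluster --log-level=info

# When finished, credentials are written under mycluster/auth/
export KUBECONFIG=$(pwd)/mycluster/auth/kubeconfig
oc get nodes
```

Everything between the two commands, from networking to DNS, is owned by the installer, which is exactly the shared-responsibility argument made above.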
In this article we will focus on how to get started with automating Windows using Ansible. Specifically, we will look at installing third-party software and OS updates. Automation is the basis for cloud computing and cloud-native patterns, and it breeds a culture of innovation. Let’s face it: we cannot innovate if we are stuck doing mundane tasks and manual labor. Ansible has revolutionized automation because, until Ansible, automating was rather complicated and required lots of domain knowledge. Ansible provides an automation language the entire organization can use because it is so easy and so flexible. The best thing about Ansible, though, is that it brings teams together and fosters a DevOps culture. It brings the Linux and Windows worlds together!
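As a taste of what the article covers, the two tasks mentioned, third-party software and OS updates, can each be a single Ansible task. This is a minimal sketch assuming a `windows` inventory group already configured for WinRM; the package name is just an example.

```yaml
---
# Sketch of a playbook; assumes a "windows" inventory group reachable
# over WinRM and Chocolatey available on the targets.
- name: Install third-party software and apply OS updates
  hosts: windows
  tasks:
    - name: Install 7-Zip via Chocolatey
      chocolatey.chocolatey.win_chocolatey:
        name: 7zip
        state: present

    - name: Install security and critical updates
      ansible.windows.win_updates:
        category_names:
          - SecurityUpdates
          - CriticalUpdates
        reboot: yes
```

The same playbook reads naturally to a Windows admin and a Linux admin alike, which is the point about bringing the two worlds together.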
This article will look at the various options available for subscription reporting of Red Hat products. Large organizations often struggle to keep track of which subscriptions are being used, frequently maintaining their own spreadsheets. This can be very error-prone and time-consuming. Systems can even be attached to the wrong subscriptions, for example a virtual machine running RHEL consuming a physical subscription. Many Red Hat customers have several different products, not just RHEL, and being able to actively inventory the entire subscription landscape is critical.
In this article we will provide a hands-on guide to integrating your already-built Operator with the Operator Lifecycle Manager (OLM). Using the Operator SDK and the OPM tool, we will create the application manifests and images so your Operator can be managed through OLM.
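At a high level, the SDK and OPM steps look like the sketch below. The image names are hypothetical, and this assumes an Operator SDK-scaffolded project with its generated Makefile; substitute your own registry and versions.

```shell
# Generate the bundle manifests and metadata from your operator project
# (the Makefile targets come from Operator SDK scaffolding).
make bundle IMG=quay.io/example/my-operator:v0.1.0

# Build and push the bundle image.
make bundle-build BUNDLE_IMG=quay.io/example/my-operator-bundle:v0.1.0
podman push quay.io/example/my-operator-bundle:v0.1.0

# Add the bundle to a catalog (index) image with opm, then push it so
# OLM can consume it via a CatalogSource.
opm index add \
  --bundles quay.io/example/my-operator-bundle:v0.1.0 \
  --tag quay.io/example/my-operator-index:v0.1.0
podman push quay.io/example/my-operator-index:v0.1.0
```

The article walks through each of these stages, from bundle generation to the catalog image, in detail.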
This article is part of a series that will walk you through understanding and building operators in Go or Ansible end-to-end.
In this article we will introduce the concept of Operators, the Operator Framework and Operator Lifecycle Management. This article is part of a series that will walk you through understanding and building operators in Go or Ansible end-to-end.
Or, simply put, it is an application that can deploy itself, manage itself and update itself. Welcome to the brave new world, where we don’t spend time doing repetitive manual tasks, but rather put our knowledge into software so it can do them for us, better.
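Concretely, once an operator is installed, "deploying the application" becomes nothing more than declaring a custom resource and letting the operator's reconcile loop do the rest. The resource below is purely illustrative (it borrows the Memcached example commonly used in Operator SDK tutorials); a real operator defines its own schema via a CRD.

```yaml
# Hypothetical custom resource; the group, kind and fields are examples.
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: memcached-sample
spec:
  size: 3        # desired replica count; the operator reconciles toward it
  version: "1.6" # the operator handles upgrades between versions
```

You state the desired end state, and the operator continuously works to make reality match it, including day-2 concerns like scaling and upgrades.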
In this short article we will look at a solution for application certificates in OpenShift. Let’s Encrypt is a non-profit certificate authority that provides easy, on-demand TLS certificates. Each application you expose to users will of course have its own URL and require a TLS certificate. Managing and deploying these certificates manually can be quite tedious. Kubernetes platforms call for a built-in, native approach to keep this complexity in check.
Thankfully a fellow Red Hatter (Tomáš Nožička) has created a k8s admission controller that integrates with Let’s Encrypt. A k8s admission controller is a pattern for extending Kubernetes capabilities by reacting to API events in real time. In this case the admission controller watches the route API. If a new route is added and has the right annotation, the admission controller will automatically register the route with Let’s Encrypt, wait for the certificate and finally configure the certificate in the route, all without manual intervention.
Tomáš has provided the code and YAML for an easy deployment in the following GitHub repository: https://github.com/tnozicka/openshift-acme. While he does provide documentation, there are a few additional steps that need explanation when creating a route. As such, I decided to put it all together in a single, concise post.
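To illustrate the mechanism described above, a route only needs the ACME annotation for the controller to pick it up. This is a sketch; the host and service names are examples, and the annotation key is the one documented in the openshift-acme repository.

```yaml
# Route carrying the annotation the openshift-acme controller watches for.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
  annotations:
    kubernetes.io/tls-acme: "true"  # triggers automatic certificate issuance
spec:
  host: myapp.example.com
  to:
    kind: Service
    name: myapp
```

Once applied, the controller registers the host with Let’s Encrypt and patches the route's TLS configuration when the certificate is issued.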
Coronavirus has arrived at the global level; it is likely only a matter of time before it is declared a pandemic. Most are wondering how long it will stay and when things will go back to normal. Personally, I think we have reached a point of no return, and that is not necessarily a bad thing. In this article we will discuss why the coronavirus is just as much an opportunity for the human race as it is a threat. We will discuss how technology can help and how a global world can function while disconnected.
In this article we will focus on installing and configuring OpenStack Train using RDO and the Packstack installer. RDO is the community platform around Red Hat’s enterprise OpenStack distribution. It allows you to test the latest OpenStack capabilities on a stable platform such as Red Hat Enterprise Linux (RHEL) or CentOS. This guide will take you through setting up a Hetzner root server, preparing the environment for OpenStack, installing the OpenStack Train release, adding a floating IP subnet through OVS, and configuring networking, security groups, flavors, images and other OpenStack-related services. The outcome is a working OpenStack environment based on the Train release that you can use as a baseline for testing your applications against OpenStack capabilities. The installation creates an all-in-one deployment, but you can also use this guide to create a multi-node deployment.
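The core of the RDO installation is only a handful of commands. This sketch assumes a fresh CentOS host run as root; the article covers the Hetzner-specific preparation and the post-install networking in full.

```shell
# Enable the RDO Train repository and install Packstack.
yum install -y centos-release-openstack-train
yum update -y
yum install -y openstack-packstack

# All-in-one deployment: installs all OpenStack services on this host.
# For multi-node, generate and edit an answer file instead:
#   packstack --gen-answer-file=answers.txt
packstack --allinone
```

From there the guide layers on the floating IP subnet, security groups, flavors and images to get to a usable cloud.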