In this article we will explore an important part of day 2 operations in OpenStack, or any IaaS: systems management. There are two broad approaches to maintaining applications: immutable infrastructure or lifecycle management. Satellite is a Red Hat product that focuses on lifecycle management, specifically the deployment, updating, patching and configuration of Red Hat Enterprise Linux (RHEL), as well as the applications running on top of it, throughout their entire lifecycle. We will discuss the value Satellite brings to OpenStack and why systems management is a key part of day 2 cloud operations, investigate the Satellite architecture and how it applies to OpenStack, and finally walk through a hands-on deployment of Satellite on OpenStack, even launching an instance and automatically connecting it to Satellite, all using Ansible.
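As a preview of that last step, a minimal Ansible sketch might look like the following. The module (`openstack.cloud.server`, known as `os_server` in older Ansible releases) is real, but the image, flavor, network, Satellite hostname, organization and activation key are placeholder assumptions you would replace with your own values:

```yaml
---
# Sketch: boot a RHEL instance on OpenStack and register it to
# Satellite via cloud-init user data. All names below (image, flavor,
# network, satellite.example.com, MyOrg, rhel-7-key) are hypothetical.
- name: Deploy an instance and connect it to Satellite
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Boot the instance with registration user data
      openstack.cloud.server:
        name: rhel7-app01
        image: rhel-7.4
        flavor: m1.small
        network: private
        key_name: mykey
        state: present
        userdata: |
          #!/bin/bash
          # Trust the Satellite CA, then register with an activation key
          rpm -Uvh http://satellite.example.com/pub/katello-ca-consumer-latest.noarch.rpm
          subscription-manager register --org="MyOrg" --activationkey="rhel-7-key"
```

The activation key is what ties the new instance to a content view and lifecycle environment, so it boots already subscribed to the right repositories without any manual steps.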
The Value of Satellite in OpenStack
Satellite is the second product Red Hat created after RHEL. It has been around for over 10 years and has recently gone through a major re-architecture from the ground up to address the cloud. Red Hat customers have used Satellite to create a standard operating environment (SOE) for RHEL, and the applications that run on RHEL, for 10+ years. Satellite provides the ability to create various content views and bring them together in a composite content view (a group of content views). This allows us to group content (RPMs, configuration management, tar files, whatever else) and, most importantly, version it. Once we can group software and version it, we can start thinking about release management across a lifecycle environment. A lifecycle environment is typically something similar to the holy trinity: development, test and production. The versions of software for our OS and applications will of course vary across these stages; you don't want to update software in production without testing it in development or test first, right?
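To make this concrete, here is a minimal sketch of defining such a lifecycle with the `theforeman.foreman` Ansible collection (shipped for Satellite as `redhat.satellite`). The server URL, credentials, organization, environment names and repository names are assumptions for illustration, not a definitive configuration:

```yaml
---
# Sketch: build a Development -> Test -> Production lifecycle path
# and a content view grouping RHEL repositories. Hostname, credentials
# and organization are placeholders; adjust for your Satellite.
- name: Define lifecycle environments and a content view
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the lifecycle path Development -> Test -> Production
      theforeman.foreman.lifecycle_environment:
        server_url: https://satellite.example.com
        username: admin
        password: changeme
        organization: MyOrg
        name: "{{ item.name }}"
        prior: "{{ item.prior }}"
        state: present
      loop:
        - { name: Development, prior: Library }
        - { name: Test, prior: Development }
        - { name: Production, prior: Test }

    - name: Create a content view grouping the RHEL 7 repositories
      theforeman.foreman.content_view:
        server_url: https://satellite.example.com
        username: admin
        password: changeme
        organization: MyOrg
        name: rhel7-base
        repositories:
          - name: Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server
            product: Red Hat Enterprise Linux Server
        state: present
```

Publishing a new version of the content view and promoting it through Development, Test and Production is what gives you controlled, versioned releases of the SOE.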
In this article we will focus on security and vulnerability strategies for scanning container images. I know, in the past security was always viewed as an impediment to the speed of production, but hopefully those days are behind us. Having a security breach, as you probably know, is one of the most costly things an organization can endure. It takes years to build up a reputation and only seconds to tear it down completely.
Even today I see many organizations ignoring container images completely, because they are often misunderstood. Exactly what is inside a container image? Who should be responsible for it? How does it map to what we have done on servers? Security teams often don't understand containers or even know what questions to ask. We need to help them, and it is our duty to do so. Unfortunately there are not very many tools that can help in a broad sense. Containers are new and evolving at breakneck speed. Couple that with the fact that security, if not done right, can negatively impact the speed of a DevOps team, and it is no wonder we are at square one in many cases.
Before we dive into more detail, let us review important security aspects of containers.
- Containers can have various packaging formats; Docker is the most popular today
- Containers are immutable and as such are image based
- Containers are never updated; any change always results in a new container image
- Container images consist of layers (base, runtime, application)
- Container images require shared responsibility between dev and ops
- Containers don't contain; they are, in fact, just processes
For more information I recommend reading about the 10 layers of container security.
Containers, especially Docker container images, have been on fire of late, and it is simple to understand why. Docker container images give your development and operations organizations a major shot of adrenaline. The results are quite impressive: applications are developed at never-before-seen speeds, and as such organizations are able to deliver innovation to customers much faster. It's all so easy: just get on Docker Hub, download a container and run it. So why isn't everyone already doing this? Unfortunately it is not quite that simple. Enterprises have many other requirements, such as security. Once IT operations gets involved, they typically start asking a lot of questions. Who built this container? How is the container maintained? Who provides support for the software within the container? Does the software running within the container adhere to our security guidelines? How can we run security compliance checks within containers? How do we update software within containers?
In a previous article it was stated that cloud is not a technology but rather an architectural methodology of resource governance that utilizes underlying virtualization technologies. Since cloud is more about the methodology, the technology platforms such as VMware vCenter, Red Hat Enterprise Virtualization Manager (RHEV-M), Microsoft System Center, Amazon EC2 and OpenStack can change as application requirements or life cycles change. An application may start on a virtualization platform like VMware but over time move to a cloud platform such as OpenStack. It could even have components serviced by both. This is the exact concept behind what Gartner talks about when they refer to Bi-Modal IT. Gartner also says 50% of companies will screw this up. A bridge is definitely needed for managing applications across these various platforms. Red Hat CloudForms is exactly that bridge. It allows us to create an abstraction above the virtualization platforms so that governance and business process can remain consistent across the cloud.
Once upon a time not too long ago, I had a project to transport data in and out of S3 using Java. Things started off rather smoothly. Within about 30 minutes I had my Amazon Web Services (AWS) account active, had the Java SDK pulled into Eclipse through Maven, and was testing my first AWS S3 API calls.