Auto Scaling Applications with OpenStack Heat


Overview

In this article we will look at how to build an auto scaling application in OpenStack using Heat. This article builds on the following previous articles:

OpenStack Kilo Setup and Configuration
Auto Scaling Instances with OpenStack

As discussed in previous articles, Heat is the orchestration framework that not only automates provisioning but also provides policies for auto scaling. This article builds on the previous article, which showed how to automatically scale instances up or down. Here we will take things one step further and scale a simple PHP web application both up and down. In addition to using Ceilometer to determine when an application should be scaled based on CPU load, Neutron will be used to provide not only networking but also Load Balancing as a Service (LBaaS). While you can of course use an external load balancer, out of the box OpenStack uses HAProxy.

You can get the Heat templates below or directly from GitHub: https://github.com/ktenzer/openstack-heat-templates.

Update Ceilometer Collection Interval

By default Ceilometer collects CPU data from instances every 10 minutes. For this example we want to change that to 60 seconds. Change the interval of the cpu_source to 60 in the pipeline.yaml file and restart the OpenStack services.

#vi /etc/ceilometer/pipeline.yaml

- name: cpu_source
  interval: 60
  meters:
      - "cpu"
  sinks:
      - cpu_sink

#openstack-service restart
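
To verify the new interval is in effect, check that fresh samples are arriving about once a minute. A quick sanity check, assuming the standard Kilo-era ceilometer CLI (the -l flag limits the number of rows returned):

#ceilometer sample-list -m cpu_util -l 5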

Heat Stack Environment

The Heat stack environment describes the unit of scale, which usually contains one or more instances and their dependencies. In this case the unit is a single instance that is a member of a load balancer. Metadata is used to create an association between all instances that are part of the stack template; this is important for metering and for determining scaling events.

#vi /etc/heat/templates/lb-env.yaml
heat_template_version: 2014-10-16
description: A load-balancer server
parameters:
  image:
    type: string
    description: Image used for servers
  key_name:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  user_data:
    type: string
    description: Server user_data
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server

resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: key_name}
      metadata: {get_param: metadata}
      user_data: {get_param: user_data}
      networks:
        - port: { get_resource: port }

  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80

  port:
    type: OS::Neutron::Port
    properties:
      network: {get_param: network}
      security_groups:
        - base

outputs:
  server_ip:
    description: IP Address of the load-balanced server.
    value: { get_attr: [server, first_address] }
  lb_member:
    description: LB member details.
    value: { get_attr: [member, show] }
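
Note that the port resource references a security group named base. This template assumes such a group already exists in your tenant; if it does not, stack creation fails with a PhysicalResourceNotFound error on the port (see the comments at the end of this article). A minimal sketch for creating it with the neutron CLI, opening ICMP, SSH and HTTP (the exact rule set is an assumption, adjust to your policy):

#neutron security-group-create base
#neutron security-group-rule-create --protocol icmp base
#neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 base
#neutron security-group-rule-create --protocol tcp --port-range-min 80 --port-range-max 80 base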

Heat Stack Template

The Heat stack template below creates the environment dependencies and defines the auto scaling policies. In this example we create a load balancer and use already existing networks for the tenant network as well as the Floating IP. The scale-up and scale-down policies are triggered by Ceilometer alarms based on CPU utilization. You will need to replace the defaults under parameters with values from your own environment.

#vi /root/lb-webserver-fedora.yaml
heat_template_version: 2014-10-16
description: AutoScaling Fedora 22 Web Application
parameters:
  image:
    type: string
    description: Image used for servers
    default: Fedora 22
  key_name:
    type: string
    description: SSH key to connect to the servers
    default: admin
  flavor:
    type: string
    description: flavor used by the web servers
    default: m2.tiny
  network:
    type: string
    description: Network used by the server
    default: private
  subnet_id:
    type: string
    description: subnet on which the load balancer will be located
    default: 9daa6b7d-e647-482a-b387-dd5f855b88ef
  external_network_id:
    type: string
    description: UUID of a Neutron external network
    default: db17c885-77fa-45e8-8647-dbb132517960

resources:
  webserver:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      cooldown: 60
      desired_capacity: 1
      resource:
        type: file:///etc/heat/templates/lb-env.yaml
        properties:
          flavor: {get_param: flavor}
          image: {get_param: image}
          key_name: {get_param: key_name}
          network: {get_param: network}
          pool_id: {get_resource: pool}
          metadata: {"metering.stack": {get_param: "OS::stack_id"}}
          user_data:
            str_replace:
              template: |
                #!/bin/bash -v
                echo "hostip fqdn shortname" >> /etc/hosts
                yum -y install httpd php
                systemctl enable httpd
                systemctl start httpd
                cat << EOF > /var/www/html/hostname.php
                <?php
                echo gethostname();
                ?>
                EOF
              params:
                hostip: 192.168.122.70
                fqdn: sat6.lab.com
                shortname: sat6

  web_server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: webserver}
      cooldown: 60
      scaling_adjustment: 1

  web_server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: webserver}
      cooldown: 60
      scaling_adjustment: -1

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 95% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 95
      alarm_actions:
        - {get_attr: [web_server_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt

  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 15% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 15
      alarm_actions:
        - {get_attr: [web_server_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt

  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 5
      max_retries: 5
      timeout: 5

  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{get_resource: monitor}]
      subnet_id: {get_param: subnet_id}
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80

  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: {get_resource: pool}

  lb_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: {get_param: external_network_id}
      port_id: {get_attr: [pool, vip, port_id]}

outputs:
  scale_up_url:
    description: >
      This URL is the webhook to scale up the autoscaling group.  You
      can invoke the scale-up operation by doing an HTTP POST to this
      URL; no body nor extra headers are needed.
    value: {get_attr: [web_server_scaleup_policy, alarm_url]}
  scale_dn_url:
    description: >
      This URL is the webhook to scale down the autoscaling group.
      You can invoke the scale-down operation by doing an HTTP POST to
      this URL; no body nor extra headers are needed.
    value: {get_attr: [web_server_scaledown_policy, alarm_url]}
  pool_ip_address:
    value: {get_attr: [pool, vip, address]}
    description: The IP address of the load balancing pool
  website_url:
    value:
      str_replace:
        template: http://serviceip/hostname.php
        params:
          serviceip: { get_attr: [lb_floating, floating_ip_address] }
    description: >
      This URL is the "external" URL that can be used to access the
      website.
  ceilometer_query:
    value:
      str_replace:
        template: >
          ceilometer statistics -m cpu_util
          -q metadata.user_metadata.stack=stackval -p 600 -a avg
        params:
          stackval: { get_param: "OS::stack_id" }
    description: >
      This is a Ceilometer query for statistics on the cpu_util meter
      Samples about OS::Nova::Server instances in this stack.  The -q
      parameter selects Samples according to the subject's metadata.
      When a VM's metadata includes an item of the form metering.X=Y,
      the corresponding Ceilometer resource has a metadata item of the
      form user_metadata.X=Y and samples about resources so tagged can
      be queried with a Ceilometer query term of the form
      metadata.user_metadata.X=Y.  In this case the nested stacks give
      their VMs metadata that is passed as a nested stack parameter,
      and this stack passes a metadata of the form metering.stack=Y,
      where Y is this stack's ID.

Running Heat Stack

Once both the environment and stack YAML files exist, we can launch the Heat stack. Make sure the Fedora 22 cloud image has also been uploaded to Glance; a sample upload command is shown right after the stack-create below. Using the CLI we can run the following commands:

#. /root/keystonerc_admin
#heat stack-create webfarm -f /root/lb-webserver-fedora.yaml
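
If the Fedora 22 image is not yet in Glance, it can be uploaded first. Below is a minimal sketch using the Kilo-era glance CLI (v1 API); the qcow2 file name is an assumption, substitute the image you downloaded from the Fedora project:

#glance image-create --name "Fedora 22" --disk-format qcow2 --container-format bare --is-public True --file Fedora-Cloud-Base-22.x86_64.qcow2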

You can monitor the Heat stack creation in Horizon under “Orchestration->Stacks->Webfarm”. Horizon provides a very nice Heat stack topology view, where we can see how the environment dependencies fit together and whether the deployment of the Heat stack was successful.

[Figure: Heat stack topology view in Horizon]

Once the Heat stack has been created we can view the outputs, which provide information on how to interact with the stack. In this case they include the REST endpoints for triggering manual scale-up or scale-down events, the Floating IP used by the Load Balancer (the IP we use to access the website) and the Ceilometer command for getting the CPU utilization of the entire stack, which is useful for determining whether the stack is scaling properly.
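
The outputs can also be retrieved from the CLI; heat stack-show prints them as part of the stack details:

#heat stack-show webfarm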

[Figure: Heat stack outputs in Horizon]

Looking in Horizon under “Network->Load Balancers” should reveal the Load Balancer.

[Figure: Load Balancer in Horizon]

Once the webserver is running, the instance should be listed as an active member of the Load Balancer.

[Figure: active Load Balancer member]

We should now be able to access the website URL listed in the Heat Stack Output: http://192.168.122.179/hostname.php.
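
Since hostname.php echoes the serving instance's hostname, you can also exercise the Load Balancer from the CLI; once more than one member is active, successive requests should return different hostnames:

#for i in 1 2 3 4; do curl -s http://192.168.122.179/hostname.php; echo; done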

[Figure: website running at the Floating IP]

Finally we can view the CPU performance data for the entire Heat stack using the Ceilometer command displayed in the Heat stack output. Note the metadata associated with each instance that is part of the Heat stack template.

#ceilometer statistics -m cpu_util -q metadata.user_metadata.stack=8f86c3d5-15cf-4a64-b9e8-70215498c046 -p 600 -a avg

Application Auto Scaling

There are two ways to scale the application up: generate sustained CPU utilization above 95% or use the scale_up_url REST webhook. In Mozilla Firefox you can install the “REST Easy” plugin in order to do simple REST operations through the web browser. To use the Heat REST hooks we need to do an HTTP POST, for example using REST Easy.

[Figure: POST to the scale-up webhook using REST Easy]
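
If you prefer the command line to a browser plugin, the same webhook can be invoked with curl; as the stack outputs note, no body or extra headers are needed. This assumes you have exported the scale_up_url output into the SCALE_UP_URL shell variable:

#curl -X POST "$SCALE_UP_URL"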

Once the REST call returns a 200, the HTTP status code for success, we should be able to see the event in Horizon under “Orchestration->Stacks->Webfarm->Events”.

[Figure: scale-up event in Horizon]

The scale-up event creates an additional webserver and adds it to the Load Balancer. This takes a few minutes, as the instance needs to be started and the HTTP server must be installed and configured. Once complete, the instance becomes active. You will see that requests to the website URL get routed through the Load Balancer to all the webservers in the farm.

[Figure: Load Balancer after the scale-up event]
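
You can also watch the new member join from the CLI using the LBaaS v1 commands:

#neutron lb-member-list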

One thing to be aware of is that the application will scale down automatically if CPU utilization dips below 15% for 60 seconds. You may want to alter the thresholds in the Heat stack template to suit your purposes. In addition, you can generate load manually by allocating a Floating IP to one of the instances and following these steps:

#ssh -i admin.key fedora@192.168.122.152
$sudo -i
#dd if=/dev/zero of=/dev/null &
#dd if=/dev/zero of=/dev/null &
#dd if=/dev/zero of=/dev/null &
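
Each dd process pins a CPU core, driving cpu_util toward 100% so that the cpu_alarm_high alarm fires. To let the stack scale back down afterwards, stop the load generators:

#killall dd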

Summary

In this article we have seen how to set up an auto scaling webserver farm using Heat. We have also seen the important roles that Ceilometer and network services such as the Load Balancer play. Heat is the brains behind OpenStack; to know OpenStack is to understand it through Heat. The goal of this article is to demonstrate the strengths of OpenStack. Many organizations I speak with are trying to understand how OpenStack could add value and how it differentiates itself from traditional virtualization. Auto scaling applications is a major use case, but it is really about services: our auto scaling application leverages many services provided through OpenStack. Furthermore, OpenStack allows us to abstract processes behind services, and that is the real difference maker. Hopefully this article has shed some light on OpenStack use cases. If you have feedback or other interesting use cases, please share.

Happy OpenStacking!

(c) 2015 Keith Tenzer

4 thoughts on “Auto Scaling Applications with OpenStack Heat”

  1. I pulled down your lb-env.yaml and lb-webserver-fedora.yaml templates and tried to create the stack on our OpenStack Kilo lab. The stack-create always fails. Here’s the event where the failure happens:

    stack_lb_rod | 1d6d6e01-d2b3-4037-95a5-7d60f501447b | Resource CREATE failed: PhysicalResourceNotFound: resources.webserver.resources.7x4vanryhl5j.resources.port: The Resource (base) could not be found. | CREATE_FAILED | 2016-03-23T18:41:20Z |
    | webserver | e5e6cf91-8010-491b-a7b6-b11492ba48bf | PhysicalResourceNotFound: resources.webserver.resources.7x4vanryhl5j.resources.port: The Resource (base) could not be found. | CREATE_FAILED | 2016-03-23T18:41:19Z |
    | lb | d729b513-ab29-468c-ae8e-fa51165364da | state changed | CREATE_COMPLETE | 2016-03-23T18:41:17Z |
    | lb_floating | 84d3129c-2d92-4138-9208-28373de9c3ff | state changed | CREATE_COMPLETE | 2016-03-23T18:41:17Z |
    | lb_floating | 7639c76e-59ef-4a9b-a8b3-9708d9814fb6 | state changed | CREATE_IN_PROGRESS | 2016-03-23T18:41:15Z |
    | lb | 8b544e98-806f-490c-9d9e-39cd7cb7d33b | state changed | CREATE_IN_PROGRESS | 2016-03-23T18:41:15Z |
    | webserver | aa1c068a-b05b-43ab-82ce-1f51d64aae37 | state changed | CREATE_IN_PROGRESS | 2016-03-23T18:41:14Z |
    | pool | 6eabc5bc-fa37-4e97-8c37-20ee5f94a647 | state changed | CREATE_COMPLETE | 2016-03-23T18:41:14Z |
    | pool | c1c5207b-6eaa-4286-a4fe-bfaef670653e | state changed | CREATE_IN_PROGRESS | 2016-03-23T18:41:08Z |
    | monitor | 90dbeccb-c760-4cab-8a3d-8def795c86ab | state changed | CREATE_COMPLETE | 2016-03-23T18:41:08Z |
    | monitor | cdb5f729-a62b-4c89-a8eb-fa2364da9810 | state changed | CREATE_IN_PROGRESS | 2016-03-23T18:41:08Z |
    | stack_lb_rod | 50dc757a-cc6d-47f5-850d-1c929fe1f344 | Stack CREATE started | CREATE_IN_PROGRESS | 2016-03-23T18:41:08Z |

    I did move the location of the lb-env.yaml file to my local directory
    and changed the location in the webserver resource to:
    webserver:
      type: OS::Heat::AutoScalingGroup
      properties:
        min_size: 1
        max_size: 3
        cooldown: 60
        desired_capacity: 1
        resource:
          type: lb_env.yaml

    Can you explain why I get the PhysicalResourceNotFound error during stack-create?

    Thanks for the help
    Rod


    • Sorry for the delayed response. The issue seems to be with the network port. I would guess the problem is that you didn't change the parameters I provided; you need to add the IDs for your specific networks as well as the key, flavor information, etc. Let me know if that was the issue.


  2. Hi Keith Tenzer!
    First, thanks for your article, it is very helpful for me.
    But I am confused about the Alarm: does it notify when the first webserver in the farm changes cpu_util, or when any webserver in the farm changes cpu_util?

    Thank you!


    • If things are working correctly then cpu_util will be calculated for the entire Heat stack. If you have two instances, one at 100% and the other at 0%, then cpu_util would show as 50%. The alarms and policies don't apply to a specific instance but rather to the Heat stack itself. You can of course create alarms that apply only to an instance, but that is not what I have done here.

