Jamie Paton, January 24, 2012
We have a number of Stackato deployments here at ActiveState, some larger than others, but one stands out as enticingly quick to set up and scale with little effort.
Our Stackato vSphere cluster serves as one of our primary testing platforms. At any given time it's running at least 25 Stackato nodes: one 'cloud controller' and the rest mostly 'DEA' nodes.
In a real elastic cloud you wouldn't want to waste the resources or expense of running these DEAs perpetually; you would want them only when there is current or trending resource demand for them. We mainly idle these VMs for QA testing and benchmarking, so we are ready to instantly scale up or down.
Elasticity should be no stranger to those of us working on PaaS or IaaS products, and if you're not already familiar with our Stackato Sandbox, it runs on Amazon EC2 with EBS-backed storage. To meet user demand at any point in time, the cloud controller can automatically spin up new DEA instances via the EC2 API, taking into account the projected load on resources.
vSphere can also serve your own privately managed cloud in this exact manner, and I'll give a short summary of how we did this on our own vSphere setup.
Taming the vSphere API
It should come as no surprise to more experienced vSphere users that to achieve scaling we made use of the vSphere API. As Cloud Foundry itself was conceived in Ruby, we chose to use RbVmomi, an open source Ruby interface to the vSphere API. If you've had previous dealings with the vSphere API, fear not - this Ruby interface makes working with it a breeze.
For the connecting vCenter user, I recommend creating a new user with the permissions to create, start, stop and deploy new VMs, and access to any of the resource pools it might need.
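As a rough sketch, connecting with RbVmomi looks something like this. The credential values are placeholders (the restricted user described above goes in `user`/`password`), and `vim_options` is an illustrative helper name, not part of our actual patch:

```ruby
require 'yaml'

# Build the option hash RbVmomi::VIM.connect expects from our vCenter
# config (the key names mirror the YAML file shown later in this post).
def vim_options(cfg)
  {
    host:     cfg['server'],
    user:     cfg['user'],
    password: cfg['password'],
    port:     cfg['port'] || 443,
    path:     cfg['path'] || '/sdk',
    ssl:      cfg.fetch('https', true),
    insecure: cfg.fetch('insecure', false) # accept self-signed certs
  }
end

cfg = YAML.load(<<~EOS)
  server: vcenter.domain.com
  user: scaler
  password: secret
  https: true
  insecure: true
EOS

opts = vim_options(cfg)
# In the cloud controller you would then connect with:
#   require 'rbvmomi'
#   vim = RbVmomi::VIM.connect(opts)
#   dc  = vim.serviceInstance.find_datacenter(cfg['datacenter'])
```

Keeping the connection options in one place like this means the same config hash can drive both the connect call and any later deployment calls.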
Diving into the Cloud Controller
I reused existing modifications to the health monitor component from our EC2 (Sandbox) scaling code to incorporate the 'events' necessary to trigger scaling activity.
These previous modifications to the health monitor are designed to probe all the available DEAs at regular runtime intervals and check their current resource capabilities and limits, including each DEA's current RAM metrics. This, of course, can also cover the condition where there are simply no DEAs available for deployment due to failure or full capacity.
Based on a small analysis of past, current and projected app deployments, the health monitor triggers a 'scale up' event and passes it along the NATS message bus, the default pub/sub messaging mechanism used by Cloud Foundry.
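In outline, the capacity check and the resulting event look something like the following. The thresholds, the message fields, and the NATS subject name are illustrative, not the exact ones from our patch; in the real health monitor the DEA figures come from the DEAs' own advertisements:

```ruby
require 'json'

# Decide whether the cluster needs another DEA: if the free RAM across
# all known DEAs drops below what one more average app would need,
# ask for a scale-up. The 256 MB figure is an illustrative threshold.
def scale_up_needed?(deas, app_ram_mb: 256)
  free = deas.inject(0) { |total, d| total + (d[:max_ram_mb] - d[:used_ram_mb]) }
  deas.empty? || free < app_ram_mb
end

# A minimal scale-up event payload (field names are our own choice).
def scale_up_event
  { 'op' => 'scaleup', 'reason' => 'dea_capacity', 'at' => Time.now.to_i }.to_json
end

deas = [
  { max_ram_mb: 4096, used_ram_mb: 3968 },
  { max_ram_mb: 4096, used_ram_mb: 4000 }
]

if scale_up_needed?(deas)
  msg = scale_up_event
  # The health monitor would publish this over the NATS bus, e.g.:
  #   NATS.publish('healthmanager.scaleup', msg)
end
```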
At the point of receiving the event message, the process flow follows something like this:
- The Health Monitor component keeps track of the health of any deployed apps and the availability of DEAs
- If under load, the ‘scale up’ event is triggered from the Health Monitor to the cloud controller via NATS
- The cloud controller will load up the vCenter YAML configuration (more on this later) and connect to vCenter via the API
- It will round-robin all hypervisor hosts currently available for deployment, so no single one becomes overloaded.
- Upon finding an available host, the DEA is deployed from the template to that host and its corresponding resource pool and datastore.
- The MRU index of the last host deployed to is kept in the cloud controller's database.
- When the DEA starts up, it is already configured to register with your cloud controller and is instantly ready to start receiving new app deployments.
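The round-robin host selection described above can be sketched as a small helper. In our code the MRU index lives in the cloud controller's database; here it's just a value passed in and returned, and all the names are illustrative:

```ruby
# Pick the next host after the most-recently-used index, skipping any
# host that isn't currently deployable, so no single host becomes
# overloaded. Returns the chosen host and the new MRU index to persist.
def next_host(hosts, mru_index)
  return nil if hosts.empty?
  hosts.length.times do |offset|
    i = (mru_index + 1 + offset) % hosts.length
    return [hosts[i], i] if hosts[i][:deployable]
  end
  nil # no host available; better to surface an error than overload one
end

hosts = [
  { name: 'esxi-01', deployable: true },
  { name: 'esxi-02', deployable: false }, # e.g. in maintenance mode
  { name: 'esxi-03', deployable: true }
]

host, mru = next_host(hosts, 0) # last deploy went to esxi-01
```

With the last deployment on `esxi-01` (index 0), the helper skips the unavailable `esxi-02` and hands back `esxi-03` along with the index to store for next time.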
Scaling from a pre-built template
On the ‘subscribing’ end of these event messages is the cloud controller itself, which upon receiving a ‘scaleup’ operation, will proceed to connect to vCenter, and deploy a new DEA from a pre-defined template. To make the template, we take a new Stackato VM and run:
stackato-admin become dea -m <cloud_controller_ip> -e <api_url>
This is the same step you would take when setting up a cluster manually. Once you've run this, you can shut that VM down, then right-click 'Save as template'. Once converted into a template, it is available for instant deployment across all your ESXi hosts.
Balancing the VM load on your ESXi hosts
Checking the hosts status
A useful feature of the vSphere API is the ability to probe your hosts for their runtime metrics, so you can make an informed decision about where the next VM should be deployed to. We make use of the vSphere API’s HostSystem() object to get some information about which hosts are available, and which hosts are in good health.
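With RbVmomi, each `HostSystem` exposes its connection state and `quickStats`. A health filter along these lines decides which hosts are candidates; the property names are the standard vSphere ones, while the thresholds and the function name are our own illustrative choices:

```ruby
# Treat a host as a deployment candidate when it is connected, not in
# maintenance mode, and has CPU/RAM headroom. `host` can be an
# RbVmomi::VIM::HostSystem; we only read standard vSphere properties.
def healthy_host?(host, max_cpu_pct: 80, min_free_mb: 4096)
  rt = host.runtime
  return false unless rt.connectionState == 'connected'
  return false if rt.inMaintenanceMode

  hw = host.summary.hardware
  qs = host.summary.quickStats
  cpu_capacity = hw.cpuMhz * hw.numCpuCores              # MHz
  cpu_pct = 100.0 * qs.overallCpuUsage / cpu_capacity
  free_mb = hw.memorySize / (1024 * 1024) - qs.overallMemoryUsage
  cpu_pct < max_cpu_pct && free_mb > min_free_mb
end
```

The scaling code runs each candidate host through a check like this before the round-robin selection, so a disconnected or saturated ESXi host simply drops out of the rotation.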
Using the API makes it easy to locate a new host to deploy to. You don't need to supply any host configurations; the cloud controller takes care of keeping tabs on hosts in the CC database, and the scaling code automatically chooses the next available vSphere host at runtime, so you never manually specify hosts in a config file.
Wrapping it up in a configuration file
To ease maintenance and provide flexibility, the scale-up control settings are in a simple YAML configuration file that you upload to the stackato user's $HOME directory, which is then parsed by the cloud controller. This configuration contains parameters like the vCenter server, authentication details and template information. It is quite straightforward:
---
server: vcenter.domain.com
user: username
password: password
https: true
port: 443
insecure: true
path: /sdk
datacenter: DataCenter
template: "Your-Template-Name"
We use SSL to connect to vCenter (recommended), but I'm also connecting with 'insecure' set to true. That simply means we are using self-signed SSL certificates, and forces RbVmomi to accept them.
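On the cloud controller side, reading and sanity-checking that file is only a few lines of Ruby. The required keys follow the config shown above, while the filename `vcenter.yml` and the helper name are illustrative (the post only specifies the stackato user's $HOME directory):

```ruby
require 'yaml'

REQUIRED_KEYS = %w[server user password datacenter template].freeze

# Load the vCenter scaling config from the stackato user's home
# directory and fail early if a required setting is missing, rather
# than erroring mid-deployment.
def load_vcenter_config(path = File.expand_path('~/vcenter.yml'))
  cfg = YAML.load_file(path)
  missing = REQUIRED_KEYS.reject { |k| cfg.key?(k) }
  raise "vcenter config missing: #{missing.join(', ')}" unless missing.empty?
  cfg
end
```

Failing fast here is deliberate: a typo in the config should show up in the cloud controller log immediately, not as a mysterious half-failed deployment later.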
Expanding the featureset
With the abundance of documentation on the vSphere API, we could take the scaling process even further when working with different datastore architectures.
Some items for consideration could be:
- Limiting the maximum number of VMs on any single host
- Allowing deployments from multiple/different templates
- Allowing multiple vCenters in the configuration
If you have any thoughts or questions about how this could apply to your own vSphere setup please let us know, we’d love to hear them! If you’d like to try this Stackato auto-scaling on your own vSphere cluster, please contact us at firstname.lastname@example.org, and we'll arrange access to a special VM of Stackato with this feature.