
Application Autoscaling in Action
by Eric Promislow, May 13, 2014
Tree branching shows fibonacci sequences

An earlier post mentioned that Application Autoscaling is one of the 10 awesome new features in Stackato 3.2. This post will show autoscaling at work, walking you through a sample session.

Building the Next Killer App

Let's say we've decided to meet the worldwide demand for getting Fibonacci numbers by building a web app, and we just happen to have some code sitting around that we wrote for our first Monstrously Massive Online Course on Ruby. While this isn't a programming blog, we're really proud of this code, and can't wait to present it here:

require 'sinatra/base'

class Fibo < Sinatra::Base
  get '/fib/:num' do
    fiboWrapper
  end

  get '/fibo/:num' do
    fiboWrapper
  end

  # The landing-page form POSTs to /fibo/ with the number in a field,
  # so redirect that request to the GET route above.
  post '/fibo/' do
    redirect "/fibo/#{params[:num]}"
  end

  def fiboWrapper
    num = params[:num].to_i
    t1 = Time.now
    x = fibo(num)
    t2 = Time.now
    return "fib(#{num}) => #{x} | #{t2 - t1} secs\n"
  end

  get '/' do
    return <<-'EOT'
<html><head><title>Fibonacci Fun!</title></head><body>
<form action="/fibo/" method="POST">
  <input name="num" id="num" type="text">
  <input type="submit">
</form>
</body>
</html>
    EOT
  end

  def fibo(num)
    if num <= 0
      return 0
    elsif num == 1
      return 1
    end
    return fibo(num - 1) + fibo(num - 2)
  end
end

The web expert who helped us wrap the code in a Sinatra class was muttering something about recurrence relations and memoization, but we were in too much of a hurry to get the app running, and thought he was trying to pad his invoice. We included the time these calculations would take in the code just to show him there wouldn't be a problem. After all, computers are fast.
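For the curious, this is roughly what he was muttering about. A memoized variant (a sketch, not part of the sample app) caches each result the first time it is computed, so the number of recursive calls grows linearly instead of exponentially:

```ruby
# Memoized Fibonacci: cache every value on first computation.
# With the cache, fib(n) does O(n) work instead of the exponential
# blowup of the naive double recursion.
FIB_CACHE = { 0 => 0, 1 => 1 }

def fib_memo(num)
  FIB_CACHE[num] ||= fib_memo(num - 1) + fib_memo(num - 2)
end

puts fib_memo(40)  # => 102334155, effectively instantly
```

With this change, the 36-second fib(40) request below would return in microseconds. We leave the naive version in place precisely because its slowness makes a convenient load generator.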

You can get the code from the sample app site at Github and follow along.

First, we need to use the command-line to load the code into Stackato.

~ $ cd apps

apps $ git clone git@github.com:Stackato-Apps/sinatra-fibo
Cloning into 'sinatra-fibo'...
remote: Reusing existing pack: 21, done.
remote: Total 21 (delta 0), reused 0 (delta 0)
Receiving objects: 100% (21/21), done.
Resolving deltas: 100% (5/5), done.

apps $ cd sinatra-fibo

sinatra-fibo $ stackato push -n

Using manifest file "stackato.yml"
Application Url:   fibo.192.168.68.81.xip.io
Creating Application [fibo] as [https://api.192.168.68.81.xip.io -> activestate -> space1 -> fibo] ... OK
  Map fibo.192.168.68.81.xip.io ... OK
Uploading Application [fibo] ...
[rest of output omitted ... ]

Now we'll manage the app via the web console, returning to the command-line terminal only to drive load against it. We go to the Applications page of the web console, see "fibo", and click on it to bring up its page.

Fibo's application page

We can press on the "View App" button, and are presented with the usual not-going-to-win-any-design-awards form:

Fibo landing page

We enter '4', press Submit, and are told that the fourth Fibonacci number is 3, and can see that the calculation was almost instantaneous.

Fibo results

We can further test the code by selecting the "4" in the URL, replacing it with 10, and pressing return. The program instantly replies with

fib(10) => 55 | 3.294e-05 secs

We then find out the following:

fib(20) => 6765 | 0.005166677 secs
fib(30) => 832040 | 0.382243858 secs

And then we try "40" and after what feels like an hour, we get the answer

fib(40) => 102334155 | 35.979649869 secs

OK, the web guy was correct. When the multitudes hit this site and someone enters a large argument, the app will be tied up until it has calculated the value. Rather than rewriting the code to fix this, let's leave it as is to simulate the kind of load a large number of customers can inflict on your system.

First, go back to the "fibo" page in the Web Console and click on the "Instances" tab in the left column. Click on the "Autoscaling" button and set the CPU autoscaling thresholds to 20 - 40%, with 1 - 6 instances. This means that when the average CPU load goes above 40%, Stackato will add another instance, up to a maximum of 6. Similarly, Stackato reacts to periods of reduced load by undeploying instances when the average workload drops below the minimum threshold, in this case 20%. Your screen should look something like this:
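To make that rule concrete, here's a rough Ruby sketch of the scale-up/scale-down decision just described. The method name and structure are invented for illustration; the actual logic lives inside Stackato's autoscaler.

```ruby
# Hypothetical sketch of the autoscaling rule configured above:
# add an instance when average CPU exceeds the upper threshold,
# remove one when it drops below the lower threshold, and always
# stay within the configured instance range.
MIN_CPU, MAX_CPU = 20, 40          # CPU thresholds, in percent
MIN_INSTANCES, MAX_INSTANCES = 1, 6

def desired_instances(current, avg_cpu)
  if avg_cpu > MAX_CPU && current < MAX_INSTANCES
    current + 1                    # spin up one more instance
  elsif avg_cpu < MIN_CPU && current > MIN_INSTANCES
    current - 1                    # undeploy an idle instance
  else
    current                        # within the band: keep on trucking
  end
end

puts desired_instances(1, 88.6)    # => 2
```

Note that Stackato applies this decision only after a full minute of readings, as we'll see below, so a single CPU spike doesn't trigger a new instance.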

Autoscaling parameters

Now we're going to use the ApacheBench tool to simulate our 1,000 best friends and relatives all hitting the web site in a barrage. So go back to the terminal window and type the following, substituting the domain here with your actual domain:

ab -c 8 -n 20000 http://fibo.192.168.68.81.xip.io/fib/30

This is also the part of the post where we're going to pretend this is actually a screencast. But it would have made a boring screencast, since most of the time ApacheBench is doing its thing (silently) and Stackato is waiting one minute before it decides whether to scale up, down, or keep on trucking. Instead, the post will show a series of screenshots, each of which has two parts. The upper part is a slice of the web console showing the details on each instance; this data comes from the Fibonacci app's page with the Instances tab selected. The bottom part is taken from a Node.js program run from the command-line, which gets a pile of JSON via stackato stats fibo --json and then pulls out some interesting fields. Because we're using watch to run the command every 0.2 seconds, we also conveniently get the time of each event.
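A small Ruby script along the same lines could do the field extraction. The post doesn't show the JSON layout that stackato stats fibo --json emits, so the field names below are assumptions made for illustration:

```ruby
require 'json'
require 'time'

# Sketch of the stats extractor, with an assumed JSON shape: an array
# of per-instance records, each carrying an instance index and a CPU
# percentage. Adjust the keys to match the real stats output.
def summarize(json_text)
  JSON.parse(json_text).map do |inst|
    format("%s  instance %s  cpu %5.1f%%",
           Time.now.strftime("%H:%M:%S"),
           inst["instance_index"],
           inst["stats"]["usage"]["cpu"])
  end
end

sample = '[{"instance_index": 0, "stats": {"usage": {"cpu": 46.5}}}]'
puts summarize(sample)
```

Wrapped in watch, a script like this yields the timestamped per-instance CPU readings shown in the bottom half of each screenshot.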

Here's the state of the world right before we launch ab, at 11:01:33 AM.

Before launching

At 11:02:39 a second instance was launched, about one minute after the CPU usage passed 40%. Note in the image below that the CPU usage is at 88.6%. Stackato takes a minute's worth of readings before changing the number of instances, even though the CPU reading had passed 50% about 20 seconds after ab was launched.

Spinning a new instance

The next screenshot shows that it takes about 20 seconds for a new instance to register with the statistics viewer. The instance was actually in service earlier, but the statistics gatherers run at a lower priority, so the initial activity doesn't show up (otherwise a single data point would have too much influence over the autoscaler's decisions).

The new instance starts up

16 seconds later, at 11:03:16, we can see that the CPU workload is spread evenly across the two instances, with both at 46.5%. Since this adds up to 93%, we should see a third instance get spun up soon (but not too soon, to avoid thrashing the system, as instance creation is expensive).

Sharing the workload

And sure enough, about 41 seconds later, a minute after the second instance was spawned, Stackato spins up a third instance.

Sharing the workload over 3 instances

A screenshot taken a minute later, at 11:04:15, shows the workload spread across the three instances. And, you'll have to take our word for it since we didn't make you watch the screencast, but no other instances were created.

Sharing the workload across 3 instances

Finally, ApacheBench shuts down and prints its report, showing that most of the requests were served in between 264 and 818 msec. Every minute, Stackato removes an instance, and after a couple of minutes we're back to running on a single instance.

Back to 1 instance

The request time isn't terrific, but then all instances were running on the same machine. If we had a cluster of DEAs spread over different physical machines, we would expect a lower average request time.


To learn more about the other great new features in Stackato 3.2, see our earlier post.

About the Author

Eric Promislow is a senior developer who's worked on Komodo since the very beginning. He has a M.Sc. in Computing Science from Queen's University and a B.Sc. in Biophysics from the University of Ontario. Before joining ActiveState, he helped create the OmniMark text-processing language.