ActiveBlog

Rocking it with Stackato and Websockets
by Jamie Paton, September 7, 2012

With the release of Stackato v2.2 a few weeks ago, there is now experimental support for WebSockets for your applications.

What-a-socket?

You can skip this section if you know what WebSockets are all about. Otherwise, to understand why WebSockets might work for you, it's best to get an understanding of how the HTTP protocol works. I won't lavish you with the inexplicably exciting details, but if you're interested I do suggest a quick read of HTML5 Web Sockets: A Quantum Leap in Scalability for the Web.

What you will learn there is that the traditional methods of streaming data over HTTP simply don't stack up in terms of performance and reliability. Excessive polling inevitably causes both UI and network latency, so to push more data to the browser without that overhead we can take advantage of the browser's native connection streaming and get something close to a raw TCP stream.

Router2g

When investigating support for WebSockets, it became evident that the default Cloud Foundry components could not support it out of the box. To facilitate the WebSocket protocol in Stackato, we could have modified the existing stack, patching Nginx with the TCP module and adding support for connection streaming to the Cloud Foundry based router. We avoided that route because the Nginx TCP proxy module is unsupported, and the CF router would have needed a fair amount of wiggling around to make it expect connections that were not HTTP-only.

You could put HAProxy right at the front of your cluster and siphon off incoming WebSocket connections to the appropriate DEA node, but it is not easy to make such high-performance components aware of the routing table without the entire stack becoming unnecessarily convoluted, adding yet more components and more latency.

So we introduced router2g, a.k.a. "stackato-router". It's a drop-in replacement for both Nginx and the CF router, designed with "simplicity and performance" as its maxim.

A new component

Low latency, low memory consumption, and a high degree of reliability are always important factors where distributed systems are concerned. They matter even more in Stackato when you consider that the router is the central hub directing most of the traffic around the system.

Stackato-router is designed with new features in mind, WebSockets being just the start. Bundling Ruby, Lua, and Nginx into a single component doesn't give us much room for maintenance and growth without taking performance hits and risking a spaghetti situation.

Node.js

Node.js is a natural fit for a routing component. It supports many of the modern protocols with little fuss and a high level of modularity, and it does so on top of the V8 engine and libuv, which make for a high-performing, low-level codebase.

Using node.js also delivers the benefit of its bustling and responsive ecosystem and community. Support for cutting-edge protocols, and reliable, tested modules, can be found there, already in use in other production systems (including PaaSes).

One example of this is the ease with which we were able to add SPDY support to stackato-router (though you'll have to wait till v2.4 for that!).

Enabling WebSocket support in Stackato

Log in to the router node on your cluster and enter:

kato config cluster --append alternative_processes router2g

You can revert to the default router stack at any time:

kato config cluster --remove alternative_processes router2g

Wait, tell me what these commands do!

Don't panic: behind the scenes we are simply disabling Nginx and the Cloud Foundry-derived router component. Kato then starts a fresh new addition to the Stackato family, currently known as "router2g" but more formally as "stackato-router". Stackato-router takes over from Nginx and the default router, condensing that routing stack into one component that supports the WebSocket protocol out of the box.

These commands transform all the routers currently in the cluster, so be aware of that if you are running multiple routers.

Showtime

Let's hit a standard static console page:

stackato-router

± slam -c 100 -t 60 http://api.test.184.73.57.20.xip.io/console/login/
slam v1.0.3
Wed Sep 05 2012 14:58:28 GMT-0700 (PDT)

slamming http://api.test.184.73.57.20.xip.io/console/login/ x100 for 60s...

Transactions:                 26592 hits
Availability:                100.00 %
Elapsed time:                 60.17 secs
Data transferred:            140.67 MB
Response time:                 0.21 secs
Transaction rate:            441.88 trans/sec
Throughput:                    2.33 MB/sec
Concurrency:                  49.05 
Successful transactions:      26592 
Failed transactions:              0 
Longest transaction:           1.95 
Shortest transaction:          0.00 

Nginx & CF Router

± slam -c 100 -t 60 http://api.test.184.73.57.20.xip.io/console/login/
slam v1.0.3
Wed Sep 05 2012 15:03:28 GMT-0700 (PDT)

slamming http://api.test.184.73.57.20.xip.io/console/login/ x100 for 60s...

Transactions:                 24743 hits
Availability:                100.00 %
Elapsed time:                 60.27 secs
Data transferred:            130.15 MB
Response time:                 0.24 secs
Transaction rate:            410.53 trans/sec
Throughput:                    2.15 MB/sec
Concurrency:                  53.31 
Successful transactions:      24743 
Failed transactions:              0 
Longest transaction:           2.78 
Shortest transaction:          0.00 

Let's see what we can deduce from this benchmark, bearing in mind it wasn't conducted under strict lab conditions.

The first thing that stands out is the transaction rate. Stackato-router is pumping out more HTTP requests per second, by about 30/sec, which is reflected in the total number of requests made to the server: stackato-router handled 1849 more in these sixty seconds. Also of note is that neither router had any failed transactions, so stability is not an issue.
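As a sanity check on those figures, the headline transaction rates follow directly from the raw hit counts and elapsed times:

```javascript
// Transaction rate is simply hits / elapsed seconds; the ~30/sec gap
// quoted above falls straight out of the raw numbers.
const stackatoRate = 26592 / 60.17; // hits / secs, stackato-router run
const defaultRate  = 24743 / 60.27; // hits / secs, Nginx & CF router run
console.log(stackatoRate.toFixed(1)); // ≈ 441.9 trans/sec
console.log(defaultRate.toFixed(1));  // ≈ 410.5 trans/sec
console.log((stackatoRate - defaultRate).toFixed(1)); // ≈ 31.4 more per second
```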

The throughput on stackato-router is marginally higher (0.18 MB/s), suggesting that the default router setup might be bottlenecked somewhere when passing the request around.

The concurrency of requests is higher with Nginx, most likely due to its worker-process model: Nginx can hold more concurrent requests, but is slower than stackato-router at finishing them.

Now let's hit a deployed application, which will make more use of the dynamic routing features of each router:

stackato-router

± slam -c 100 -t 60 http://phpinfo.test.184.73.57.20.xip.io
slam v1.0.3
Wed Sep 05 2012 16:12:44 GMT-0700 (PDT)

slamming http://phpinfo.test.184.73.57.20.xip.io/ x100 for 60s...

Transactions:                  8130 hits
Availability:                100.00 %
Elapsed time:                 60.11 secs
Data transferred:            324.81 MB
Response time:                 0.73 secs
Transaction rate:            135.24 trans/sec
Throughput:                    5.40 MB/sec
Concurrency:                  73.93 
Successful transactions:       8130 
Failed transactions:              0 
Longest transaction:           4.73 
Shortest transaction:          0.00  

Nginx & CF Router

± slam -c 100 -t 60 http://phpinfo.test.184.73.57.20.xip.io
slam v1.0.3
Wed Sep 05 2012 15:56:37 GMT-0700 (PDT)

slamming http://phpinfo.test.184.73.57.20.xip.io/ x100 for 60s...

Transactions:                  3640 hits
Availability:                100.00 %
Elapsed time:                 60.07 secs
Data transferred:            258.00 MB
Response time:                 1.61 secs
Transaction rate:             60.59 trans/sec
Throughput:                    4.29 MB/sec
Concurrency:                  86.11 
Successful transactions:       3640 
Failed transactions:              0 
Longest transaction:           7.81 
Shortest transaction:          0.00  

Again, many more transactions per second; this time stackato-router pushed out more than twice as many successful requests, with less than half the response time.

In our development builds, Stackato-router has been updated to take advantage of node v0.8 and node.js's cluster model, so further improvements should be seen in v2.4 (the benchmarks above are for v2.2).

Demo apps

There are two WebSocket demos ready for you to try out on Stackato straight away. Make sure to check out the "ws" branch on each one before deploying:

Once you've deployed one of those, check your web inspector and you should see the WebSocket frames swimming by. Enjoy!



Category: stackato
About the Author

I've been a DevOps engineer at ActiveState for over three years, with a passion for the new league of cloud platforms, virtualization technologies, and software networking, and a drive to make Stackato the leader in its field. I've worked with big companies such as HP and Mozilla to bring PaaS to their repertoire of products. I can usually be found tinkering with CI systems, debugging deep inside all layers of the TCP/UDP networking stack, and making software work better for everyone.

Comments

3 comments for Rocking it with Stackato and Websockets

Very cool! I always wondered if node.js would be a suitable replacement for the nginx/ruby router combo, and it clearly is! I like how this is higher performing (well, fewer concurrent requests, but faster at offloading them) and also simplifies the routing stack.

Will the stackato-router be open sourced? I believe AppFog is doing something similar from their acquisition of Nodester and contributing it back to the community. I assume they're taking a similar approach and using node.js.


Hi, we were also quite impressed with the numbers coming from it; for dynamic routing at runtime, node.js seems like a perfect match.

We have no immediate plans to open source it, although that's not to say we won't.


I posted some more info here. Great post on a common problem with Nginx. I hate how much time it takes to set it up from the ground up, especially for someone with little Linux experience.