Jamie Paton, September 7, 2012
You can skip this section if you already know what WebSockets are all about. Otherwise, to understand why WebSockets might work for you, it helps to first understand how the HTTP protocol works. I won't belabor the details here, but if you're interested I suggest a quick read of HTML5 Web Sockets: A Quantum Leap in Scalability for the Web.

What you will learn there is that the traditional methods of streaming data over the HTTP protocol simply don't stack up well in terms of performance and reliability. Polling-based workarounds saturate the browser with request queues and inevitably cause both UI and network latency. To push more data around without that overhead, we can take advantage of the browser's native connection streaming and get something close to a raw TCP connection.
When investigating support for WebSockets, it became evident that the default Cloud Foundry components could not support it out of the box. To facilitate the WebSocket protocol in Stackato, we could have modified the existing stack, patching Nginx with the TCP module and adding connection streaming support to the Cloud Foundry-based router. We avoided this route because the Nginx TCP proxy module is unsupported, and the CF router would have needed a fair amount of rework to handle connections that were not plain HTTP.
You could put HAProxy right at the front of your cluster and siphon off incoming WebSocket connections to the appropriate DEA node, but it is not easy to make such high-performance components aware of the routing table without the stack becoming unnecessarily convoluted, adding yet more components and more latency.
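For context, the routing table in question maps application hostnames to the DEA instances currently hosting them. Here is a minimal sketch of such a table with round-robin selection; all names are hypothetical, and in a real CF-style router the register/unregister events would arrive over NATS rather than via direct calls:

```javascript
// Hypothetical in-memory routing table: hostname -> list of backends.
const routes = new Map();

function registerRoute(host, backend) {
  if (!routes.has(host)) routes.set(host, []);
  routes.get(host).push(backend);
}

function unregisterRoute(host, backend) {
  const backends = routes.get(host) || [];
  routes.set(host, backends.filter(
    (b) => b.host !== backend.host || b.port !== backend.port));
}

// Pick a backend round-robin so load spreads across app instances.
const counters = new Map();
function lookup(host) {
  const backends = routes.get(host);
  if (!backends || backends.length === 0) return null;
  const n = counters.get(host) || 0;
  counters.set(host, n + 1);
  return backends[n % backends.length];
}
```

Keeping this table inside the router process itself is what lets a single component make routing decisions without consulting an external load balancer on every connection.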
So we introduced router2g, a.k.a. "stackato-router". It's a drop-in replacement for both Nginx and the CF router, designed with "simplicity and performance" as its maxim.
A new component
Low latency, low memory consumption, and a high degree of reliability are always important where distributed systems are concerned. They matter even more in Stackato, since the router is the central hub directing most of the traffic in the system.
Stackato-router is designed with new features in mind, WebSockets being the first. Bundling Ruby, Lua, and Nginx into a single component would leave little room for maintenance and growth without taking performance hits and risking a spaghetti situation.
Node.js is a natural fit for a routing component. It supports many of the modern protocols with little fuss and a high level of modularity, and it does so on top of the V8 engine and libuv, which makes for a high-performance, low-level codebase.
Using node.js also delivers the benefit of its bustling and responsive ecosystem and community. Reliable, tested modules supporting cutting-edge protocols can be found there, and many are already in use in other production systems (including PaaSes).
One example of this is the ease with which we were able to add SPDY support to stackato-router (though you'll have to wait until v2.4 for that!).
Enabling WebSocket support in Stackato
Log in to the router node on your cluster and enter:

    kato config cluster --append alternative_processes router2g

You can revert to the default router stack at any time:

    kato config cluster --remove alternative_processes router2g
Wait, tell me what these commands do!
Don't panic. Behind the scenes, these commands simply disable Nginx and the Cloud Foundry-derived router component, then start a fresh new addition to the Stackato family, currently known as "router2g" but more formally as "stackato-router". In other words, stackato-router takes over from Nginx and the default router, condensing that routing stack into a single component that supports the WebSocket protocol out of the box.
These commands transform every router currently in the cluster, so be aware of that if you are running multiple routers.
Let's hit a standard static console page. First, stackato-router:
    ± slam -c 100 -t 60 http://api.test.18.104.22.168.xip.io/console/login/
    slam v1.0.3
    Wed Sep 05 2012 14:58:28 GMT-0700 (PDT)
    slamming http://api.test.22.214.171.124.xip.io/console/login/ x100 for 60s...

    Transactions:              26592 hits
    Availability:              100.00 %
    Elapsed time:              60.17 secs
    Data transferred:          140.67 MB
    Response time:             0.21 secs
    Transaction rate:          441.88 trans/sec
    Throughput:                2.33 MB/sec
    Concurrency:               49.05
    Successful transactions:   26592
    Failed transactions:       0
    Longest transaction:       1.95
    Shortest transaction:      0.00
Nginx & CF Router
    ± slam -c 100 -t 60 http://api.test.126.96.36.199.xip.io/console/login/
    slam v1.0.3
    Wed Sep 05 2012 15:03:28 GMT-0700 (PDT)
    slamming http://api.test.188.8.131.52.xip.io/console/login/ x100 for 60s...

    Transactions:              24743 hits
    Availability:              100.00 %
    Elapsed time:              60.27 secs
    Data transferred:          130.15 MB
    Response time:             0.24 secs
    Transaction rate:          410.53 trans/sec
    Throughput:                2.15 MB/sec
    Concurrency:               53.31
    Successful transactions:   24743
    Failed transactions:       0
    Longest transaction:       2.78
    Shortest transaction:      0.00
Let's see what we can deduce from this benchmark, bearing in mind it wasn't conducted under strict lab conditions.
The first thing that stands out is the transaction rate. Stackato-router is pumping out more HTTP requests per second, by about 30/sec, which is reflected in the total number of hits made to the server: stackato-router handled 1849 more requests over the sixty seconds. Also of note, neither router had any failed transactions, so stability is not an issue.

The throughput on stackato-router is marginally higher (0.18 MB/s), suggesting that the default router setup might be bottlenecked somewhere when passing requests around.

Request concurrency is higher with Nginx, most likely due to its worker-process model: Nginx can hold more requests in flight, but is slower than stackato-router at finishing them.
Now let's hit a deployed application, which exercises more of each router's dynamic routing features. First, stackato-router:
    ± slam -c 100 -t 60 http://phpinfo.test.184.108.40.206.xip.io
    slam v1.0.3
    Wed Sep 05 2012 16:12:44 GMT-0700 (PDT)
    slamming http://phpinfo.test.220.127.116.11.xip.io/ x100 for 60s...

    Transactions:              8130 hits
    Availability:              100.00 %
    Elapsed time:              60.11 secs
    Data transferred:          324.81 MB
    Response time:             0.73 secs
    Transaction rate:          135.24 trans/sec
    Throughput:                5.40 MB/sec
    Concurrency:               73.93
    Successful transactions:   8130
    Failed transactions:       0
    Longest transaction:       4.73
    Shortest transaction:      0.00
Nginx & CF Router
    ± slam -c 100 -t 60 http://phpinfo.test.18.104.22.168.xip.io
    slam v1.0.3
    Wed Sep 05 2012 15:56:37 GMT-0700 (PDT)
    slamming http://phpinfo.test.22.214.171.124.xip.io/ x100 for 60s...

    Transactions:              3640 hits
    Availability:              100.00 %
    Elapsed time:              60.07 secs
    Data transferred:          258.00 MB
    Response time:             1.61 secs
    Transaction rate:          60.59 trans/sec
    Throughput:                4.29 MB/sec
    Concurrency:               86.11
    Successful transactions:   3640
    Failed transactions:       0
    Longest transaction:       7.81
    Shortest transaction:      0.00
Again, many more transactions per second: this time stackato-router pushed out more than twice as many successful requests, with less than half the response time.
In our development builds, stackato-router has been updated to take advantage of node v0.8 and the node.js cluster module, so further improvements should be seen in v2.4 (the benchmarks above are from v2.2).
There are two WebSocket demos ready for you to try out on Stackato straight away. Make sure to check out the "ws" branch of each one before deploying:
Once you've deployed one of those, open your web inspector and you should see the WebSocket frames swimming by. Enjoy!