From Logs to Mighty Oaks: Stackato, Loggly, and JSON
by John Wetherill

John Wetherill, February 19, 2013

Back in the day, logs were fun: monolithic apps, a single log file, and all sorts of geeky tools like awk to sift through them.

But then things got complicated: client/server architectures, multiple log files, inconsistent log message formats, disparate servers, notification requirements, time-sync issues, correlation challenges, not to mention multiple user and system tasks running all over the place. Logging suddenly became hard and dreary instead of fun.

Over the years, many solutions to these complexities surfaced, including syslog, centralized logging services, correlation IDs, sophisticated logging APIs, and the like. Properly used, these solutions didn't necessarily make logging fun again, but they at least made it less painful.

But now, the new cloud era threatens to sap the remaining fun out of logging. Multiple interacting apps and processes are coming and going across servers, data centers, and continents. How can anything solve this without introducing further complexity?

Not So Fast, Cloud Era!

Stackato puts the fun back into logging by integrating instantly with a variety of popular log aggregation services and apps.

One outstanding example is Loggly, a Logging-as-a-Service (LaaS) provider worth more than a passing glance. Loggly offers cloud-based log aggregation with short-term archival and basic analytics. The Loggly cloud service is quite powerful, yet extremely simple to set up and use.

Stackato and Loggly

Stackato is well aware of Loggly and can be configured to redirect logs to the Loggly cloud with a single command:

stackato drain add weblog-drain tcp://<loggly-host>:<port>

This command kicks off a process that intercepts all application logs and sends them to Loggly on the specified host and port.
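Conceptually, a drain is just a long-running process that reads log lines and writes each one to a TCP endpoint. Here is a minimal sketch of that idea in Java; this is my own illustration (class and method names are mine), not Stackato's actual drain implementation:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.PrintWriter;
import java.net.Socket;

// Illustrative only: forwards each log line it reads to a TCP endpoint,
// the way a Stackato drain forwards aggregated app logs to Loggly.
public class LogForwarder {
    private final String host;
    private final int port;

    public LogForwarder(String host, int port) {
        this.host = host;
        this.port = port;
    }

    // Reads lines from the given reader and writes each one to the socket.
    // Returns the number of lines sent.
    public int forward(BufferedReader logs) throws IOException {
        int sent = 0;
        try (Socket socket = new Socket(host, port);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = logs.readLine()) != null) {
                out.println(line);  // one log message per line, as TCP syslog-style drains expect
                sent++;
            }
        }
        return sent;
    }
}
```

The real drain, of course, also handles reconnects and backpressure; the sketch only shows the data path.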

From this point on, the logs can be viewed in the Loggly web interface.

But First, Logging in JSON

I will hold off on that part, as I want to get to the really fun stuff. Loggly, adding to its allure, accepts and interprets JSON-formatted logs, which can then be searched by JSON field name. I am itching to get there, so here is what I am going to do:

  1. Create a Loggly input that accepts JSON
  2. Create a servlet-based app that logs JSON
  3. Deploy multiple instances of this app to a large Stackato cluster
  4. Check that the logs are being generated
  5. Set up a drain to forward logs to Loggly
  6. Exercise the app hard and watch the logs on the Loggly console

Here goes.

1. Create a Loggly input that accepts JSON

Click the Add Input button in Loggly and fill in the input options.

For JSON you must choose one of the "w/Strip" service types, which cause Loggly to strip the syslog header (if any) and interpret only the JSON content.
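To see why stripping matters: a drained line typically arrives wrapped in a syslog header, and only the trailing JSON payload is useful for field searches. A rough illustration of the idea (my own helper, not Loggly code) that pulls the JSON body out of such a line:

```java
// Illustrative only: extract the JSON payload from a syslog-framed log line,
// which is roughly what Loggly's "w/Strip" input types do before indexing.
public class SyslogStrip {
    // Naive approach for the sketch: the JSON body starts at the first brace.
    public static String jsonPayload(String line) {
        int brace = line.indexOf('{');
        return brace >= 0 ? line.substring(brace) : line;
    }
}
```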

2. Create a servlet-based app that logs JSON

Create one yourself, or feel free to clone the one I built for this very purpose:

git clone <repository URL>

This servlet extracts the "instance_index" from the VCAP environment and generates a simple JSON log message that looks like {"instance_index":12}.
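The core of that servlet boils down to a few lines. Here is a sketch, assuming (as in Cloud Foundry-style platforms) that the instance index arrives in a JSON-formatted environment variable; the class, method names, and naive string scan are mine, kept deliberately simple in place of a real JSON parser:

```java
// Illustrative sketch: build the {"instance_index":N} log message from a
// JSON-formatted environment value, as the demo servlet does.
public class InstanceLogger {
    // Pulls "instance_index" out of a JSON string such as the VCAP
    // application metadata. A real app would use a JSON parser.
    public static int instanceIndex(String vcapJson) {
        String key = "\"instance_index\":";
        int at = vcapJson.indexOf(key);
        if (at < 0) return -1;  // index not present
        int start = at + key.length();
        int end = start;
        while (end < vcapJson.length() && Character.isDigit(vcapJson.charAt(end))) end++;
        return Integer.parseInt(vcapJson.substring(start, end));
    }

    // Formats the one-field JSON log message the post describes.
    public static String logMessage(String vcapJson) {
        return "{\"instance_index\":" + instanceIndex(vcapJson) + "}";
    }
}
```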

3. Deploy multiple instances of this app to a large Stackato cluster

If you happen to have such a cluster available, this part is easy. The stackato.yml manifest is already included, so issue this command to deploy the app instances:

stackato push -n --instances 20 

Seconds later you’ll have 20 healthy app instances at your beck and call.

4. Check that the logs are being generated


stackato logs --follow

This is like running "tail -f" on the aggregated logs of the app described by ./stackato.yml.

Two things are of note. First, these logs are coming from systems all over the cloud, possibly spanning data centers.

Second, all the log aggregation occurs automatically, and no user intervention is required.

Stackato’s inherent log aggregation support is the perfect foundation for the Loggly integration. A single Stackato command is all that is needed to pipe these logs to Loggly.

5. Connect up Loggly

The endpoint for the Loggly input created earlier can be found in the Loggly web interface. Click the input: the endpoint is listed under "Destination" as a host and port.

Armed with this information, you can set up your drain as follows:

stackato drain add --json log-json-from-java tcp://<loggly-host>:<port>

6. Now visit the app in a browser

Hit the app a couple of times. Within a few moments you should start to see activity on your Loggly page.

To avoid RSI, I configured Apache JMeter to hit the application several thousand times. Each hit generates a JSON log message and ships it off to Loggly.

JSON Queries in Loggly

Now the real fun begins. Loggly allows searching by JSON field names and graphing the results. After the app above has been running for a while and generated some logs, visit the Loggly interface again and observe the messages that have arrived.

With the JSON logs coming in, Loggly provides some powerful search capabilities. For example, a "uniq" command (harking back to the command of the same name from the "good old days") takes a single JSON "facet" (that is, a JSON field name) and a "WHERE" clause.
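Loggly's "uniq" is essentially a group-and-count over one JSON field. The same idea in plain Java, run over a batch of log messages; this is my own sketch of the computation, not Loggly's implementation:

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative only: what a "uniq instance_index" query computes — a count
// of log messages per distinct value of one JSON facet.
public class FacetUniq {
    public static Map<String, Integer> uniq(List<String> jsonLogs, String facet) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        String key = "\"" + facet + "\":";
        for (String log : jsonLogs) {
            int at = log.indexOf(key);
            if (at < 0) continue;  // message lacks this facet
            int start = at + key.length();
            int end = start;
            // Value runs until the next comma or closing brace (naive scan
            // in place of a real JSON parser, to keep the sketch short).
            while (end < log.length() && ",}".indexOf(log.charAt(end)) < 0) end++;
            String value = log.substring(start, end).trim();
            counts.merge(value, 1, Integer::sum);
        }
        return counts;
    }
}
```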


Is that cool or what? With minimal coding and configuration, we now have visibility into the distribution of traffic across our 20 application instances deployed to the Stackato cluster. We can see that instance 0 saw over 1,200 hits, while instance 8 saw 351.

Why is this, you ask? Because I was spinning up new Stackato instances and adding them to the cluster while the app was being exercised; instance 8 was the last one to come online.


Loggly can also generate graphs and charts from JSON. Here is a chart I built from the same data: each column color represents a different app instance, with the x-axis representing time.

The possibilities seem endless, and bring logging in the cloud to a new level.


If you are going to spend time working with Stackato, Loggly, and JSON, a couple of tips might make the process a bit easier. First, if your logging client -- i.e., the app generating the logs -- changes IP address, it will be blocked from sending logs to Loggly unless "Discovery" is turned on. Discovery is enabled when an input is first created, but is turned off after a number of messages have been received.

Second, it can take quite some time for generated logs to appear in the Loggly interface or show up in search queries. Some lag is to be expected, given the sheer volume of logs Loggly must be receiving; I suspect this is an example of the "eventual consistency" familiar from the NoSQL databases that are subject to it.

Bottom line: be patient when first sending logs to Loggly. It can take five minutes or more before they first show up.
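If you script your checks, build that patience in: poll the search until results appear, backing off between attempts. A generic sketch (no real Loggly API calls here; the query is whatever Supplier you provide, and the timings are arbitrary):

```java
import java.util.List;
import java.util.function.Supplier;

// Illustrative only: retry a search until it returns results or a deadline
// passes — useful because freshly drained logs can take minutes to appear.
public class PatientSearch {
    public static <T> List<T> pollUntilNonEmpty(Supplier<List<T>> query,
                                                long timeoutMillis,
                                                long initialDelayMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        long delay = initialDelayMillis;
        while (true) {
            List<T> results = query.get();
            if (!results.isEmpty() || System.currentTimeMillis() >= deadline) {
                return results;  // found something, or gave up at the deadline
            }
            Thread.sleep(delay);
            delay = Math.min(delay * 2, 30_000);  // exponential backoff, capped at 30s
        }
    }
}
```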


Out of the box, Stackato integrates easily with Loggly, with the end result that cloud logging complexities simply vanish. Integrating with Splunk, Logentries, Graylog2, Papertrail, and others takes just as little effort.

With that, it’s time to log out.


Tags: json, loggly
Category: stackato
About the Author

John, ActiveState's Technology/PaaS Evangelist, spent much of his career designing and building software at a handful of startups, at Sun Microsystems, NeXT Inc., and in the smart grid and energy space. His biggest passion is for tools, languages, processes, or systems that improve developer productivity and quality of life. He now spends his time immersed in cloud technologies, focusing on PaaS, microservices, and containerization.