Sridhar Ratnakumar, November 29, 2012
We introduced Logyard in Stackato 2.4 as a way to stream system logs to external log aggregators, and we also began using it to manage application logs. In the recently released Stackato 2.6, both of these capabilities have matured considerably.
"Logyard 2.0" has no single point of failure (SPoF). Instead of transferring system logs from all nodes into a common node—from which they may be forwarded to external aggregators—we obviated the need to move logs by using what are called "drains". Consequently, the new Logyard involves no inter-node network traffic.
Drains are an abstraction denoting the various receivers of log data; there are tcp, udp, and redis drains. Many log aggregation tools and services, such as Loggly and Splunk, provide both TCP and UDP inputs for receiving logs. In Logyard, adding a new drain means that logs from all nodes in the cluster will be channeled to that drain.
An example using Splunk
Suppose you want to archive your cluster logs in Splunk. All you need to do is run a single command:
kato drain add myarchive udp://splunk.example.com:12345/
Powered by doozer, Logyard will respond to such a request (from all nodes in the cluster) and begin channeling the system log stream to the specified drain which, in this case, is a Splunk UDP input.
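If you want to see what such a drain receives before pointing it at a real Splunk input, a throwaway UDP listener can stand in for it. This is a local testing sketch, not part of Stackato; the port number simply matches the one in the example above:

```python
import socket

def udp_drain_listener(port, count=1):
    # Stand-in for a UDP log input (such as the Splunk input above):
    # receive `count` datagrams on `port` and return them as strings.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    messages = []
    for _ in range(count):
        data, _addr = sock.recvfrom(65535)  # one log message per datagram
        messages.append(data.decode("utf-8", errors="replace").rstrip())
    sock.close()
    return messages
```

Run it on the drain target host, add a udp:// drain pointing at it, and each log message shows up as a separate datagram.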
One could also write a custom drain. In this naive example, we archive logs across the cluster in a single local file:
# start a drain target server on a node, piping to a local file
nc -lk 0.0.0.0 7890 > log-output.txt

# add that drain using that node's ip
kato drain add logarchive tcp://172.16.145.87:7890
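The same idea works in any language that can accept a TCP connection. As a sketch, here is a small Python equivalent of the nc pipeline above; the port and file path are arbitrary choices, not anything Logyard requires:

```python
import socket

def tcp_drain_to_file(port, path):
    # Toy TCP drain target, equivalent to the `nc` pipeline above:
    # accept one connection and append everything it sends to a file.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen(1)
    conn, _addr = srv.accept()
    with open(path, "ab") as out:
        while True:
            chunk = conn.recv(4096)
            if not chunk:  # sender closed the connection
                break
            out.write(chunk)
    conn.close()
    srv.close()
```

A real drain target would loop over accept() and rotate its output, but the principle is the same: anything that reads from a socket can sit behind a drain.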
The key aspect of Logyard is that logs are treated as streams, not files. With this abstraction, we can treat application logs the same way as system logs. For example, one can set up a drain to forward logs from a specific application:
kato drain add --prefix apptail.3 myapparchive udp://splunk.example.com:12345/
Here, the prefix apptail.3 denotes the message prefix of the log stream from the application with id 3. By default, the prefix is systail, which denotes the stream of system logs from all nodes (systail.dea, for instance, denotes the stream of dea.log from across the cluster).
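The prefixes form a dotted hierarchy: systail covers per-component streams such as systail.dea, while apptail.3 narrows down to a single application. A hypothetical matcher sketching that semantics (the exact matching rules are an assumption for illustration, not documented Logyard behavior):

```python
def stream_matches(message_prefix, drain_prefix):
    # Hypothetical prefix matching: a drain configured with "systail"
    # should also receive dotted sub-streams such as "systail.dea",
    # but not unrelated streams like "apptail.3".
    return (message_prefix == drain_prefix
            or message_prefix.startswith(drain_prefix + "."))
```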
From here, it is only a small step to allow normal users to add their own drains for applications. The above kato command can equivalently be run by a normal user as:
stackato drain add myapparchive udp://splunk.example.com:12345/
The `stackato drain` command can only configure drains for that user's own applications.
Further, the stream abstraction allows us to process anything that resembles a stream of events. Cloud events are one such stream: a stream of events from all nodes in the cluster. So why not let Logyard manage them? Indeed it does, which means you can channel cloud events to a drain as well:
kato drain add -p event myarchive udp://splunk.example.com:12345/
Finally, it is important to keep in mind that Logyard 2.0, unlike the previous version, is not an aggregator itself, but rather a facilitator of the "logs as streams" philosophy. The actual aggregation happens elsewhere, behind the drains.