Phil Whelan, April 1, 2014
Stackato 3.0, released at the end of 2013, was a huge milestone for the product. We added a slew of new features and overhauled it from top to bottom.
We replaced our Cloud Foundry implementation with version 2 of the open-source project, which was a major update. Cloud Foundry v2 was a complete rewrite in itself and brought new APIs, both internally and externally. Stackato 3.0 was fully compatible with Cloud Foundry v2 out-of-the-box.
We replaced our own LXC implementation with Docker in Stackato 3.0. ActiveState's 3+ years of experience working with LXC meant we were able to draw a clear line around the Docker functionality we utilize as Docker approaches 1.0 maturity.
In Stackato 3.0, we replaced other major back-end components of Stackato, such as retiring Doozerd. We also went fully buildpack, embedding legacy buildpacks to retain backwards compatibility with Stackato 2.10.
Yes, 3.0 was a huge milestone for Stackato and for PaaS. So what does 3.2 bring? In this blog post I will cover 10 of my favorite features of the new Stackato 3.2 release.
1. Placement Zones
One of the complaints I have heard about PaaS in general is "PaaS means that I do not have to care where my application is running. The instances are distributed across the cluster. The problem is, I do care. I have mission critical systems, I have sensitive data, I have machines with specific hardware that I would prefer only certain applications to utilize."
Up until now, the solution to this need would be to have separate PaaS clusters, but that defeats the single-platform concept of PaaS.
We do not doubt the security of Linux containers, but there are still good reasons for clearly separating application instances onto specific machines or areas of the network.
In Stackato, application instances have always been distributed evenly across DEAs (Droplet Execution Agents). Each DEA runs on a separate virtual machine. With Stackato 3.2, Placement Zones provide a way to group DEAs and to specify that only certain applications are deployed to those DEAs.
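The idea can be sketched in a few lines. This is a toy model of zone-based filtering, not Stackato's actual scheduler code; the DEA names and zone labels are made up for illustration:

```python
# Illustrative sketch: each DEA carries a placement-zone label, and an
# application may only be deployed to DEAs whose label matches its zone.

def eligible_deas(deas, required_zone):
    """Return the DEAs an application may be placed on, given its placement zone."""
    return [dea for dea in deas if dea["placement_zone"] == required_zone]

deas = [
    {"name": "dea-1", "placement_zone": "default"},
    {"name": "dea-2", "placement_zone": "pci"},  # e.g. machines holding sensitive data
    {"name": "dea-3", "placement_zone": "pci"},
]

print([d["name"] for d in eligible_deas(deas, "pci")])  # ['dea-2', 'dea-3']
```

Applications not tied to a special zone simply land on the default group of DEAs, so one cluster can serve both cases.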
2. Availability Zones
Our developers have definitely been "in the zone" in the past few months. Credit also goes to IBM, who submitted code for this feature to Cloud Foundry at the same time we started work on it.
Your infrastructure layer has the ability to place machines on different physical networks, but in the same proximity for low latency. You can hand craft this, or use the functionality provided by AWS [http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html], OpenStack or CloudStack.
Until now this level of redundancy has not been surfaced at the PaaS layer. This means that while you may be utilizing 2 or 3 availability zones at the infrastructure layer, the instances of a specific application may be distributed across DEAs that are all located within the same availability zone. If that zone goes down, then so does your application.
Similar to Placement Zones, you can tell Stackato where a DEA is located with "Availability Zones". When Stackato distributes the application instances, it will ensure that instances are spread evenly across Availability Zones. Therefore, any application running with at least two instances will not go down if you lose one Availability Zone.
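The even-spread behaviour can be illustrated with a simple round-robin sketch. Again, this is an assumption-laden toy to show the effect, not the real placement code:

```python
from itertools import cycle

def spread_instances(instance_count, zones):
    """Assign instances round-robin so no zone holds more than its fair share."""
    assignment = {zone: 0 for zone in zones}
    for _, zone in zip(range(instance_count), cycle(zones)):
        assignment[zone] += 1
    return assignment

# Four instances across two availability zones: losing either zone
# still leaves two instances of the application running.
print(spread_instances(4, ["az-1", "az-2"]))  # {'az-1': 2, 'az-2': 2}
```

With any uneven split (say, five instances over three zones) the worst-off zone still differs from the best by at most one instance, which is what makes the "survive one zone outage" guarantee work.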
How do Placement Zones and Availability Zones relate to each other? Each Placement Zone should ideally be spread across Availability Zones.
3. Cluster Usage
Insight into what is happening in a Stackato cluster is always a highly requested feature from Operations Engineers, and it always will be. You can never have too much information. Stackato 3.2 brings some great insights into memory usage, and it looks amazing in the web console.
First, we have a summary. This shows memory usage across the whole cluster. We can see "Assigned via Quotas", "Total Physical", "Total Reported", "Unallocated", "Allocated" and "Currently in use". We also see percentages on usages and assignment.
We can then dive deeper with memory usage information across our DEAs. The visual representation is done really well, so I can see all my DEAs at a glance and visually identify any irregularities.
This drill-down is also provided with a view across Placement Zones or by Availability Zones.
4. SSO Enabled Applications
Click, Save! That's all it takes in Stackato 3.2 to enable authentication on an application.
Now when a user visits this deployed application, they will be prompted to log in using their Stackato credentials. When they do, they will gain access to the application, and the application will receive the following HTTP headers with each HTTP request sent to it.
x-authenticated-user-id: 5d30c4r3-9985-4aa7-b371-146a7b0832b0
x-authenticated-user-username: jouser
x-authenticated-user-email: firstname.lastname@example.org
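On the application side, consuming these headers is trivial. Here is a minimal WSGI sketch; the header names come from the example above, but the handler itself is my own illustration, not anything Stackato ships:

```python
# Minimal WSGI app that reads the SSO headers injected by the router.
# WSGI exposes an "x-authenticated-user-username" request header to the
# application as environ["HTTP_X_AUTHENTICATED_USER_USERNAME"].

def app(environ, start_response):
    username = environ.get("HTTP_X_AUTHENTICATED_USER_USERNAME", "anonymous")
    email = environ.get("HTTP_X_AUTHENTICATED_USER_EMAIL", "")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [f"Hello, {username} <{email}>".encode()]
```

The application never handles passwords or sessions itself; it simply trusts the identity headers that arrive with each authenticated request.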
I love this feature! It makes not only building an authenticated application super simple, but enables me to add social features into my application with little effort.
Stackato has long supported LDAP integration as a way to seamlessly authenticate users with Stackato using LDAP as the single source of truth. This means that if you are using LDAP with Stackato, your deployed applications are authenticated against your existing LDAP database.
How many developers know how to integrate their company's LDAP system into their application, or have the access to do so? Now they don't have to. This feature provides a nice clean separation of concerns.
5. LDAP Groups
Taking LDAP integration one step further, Stackato 3.2 integrates with LDAP groups. With Stackato 3.2 you can now define which LDAP groups are authorized to use Stackato and which groups have Stackato administration privileges.
This feature will further simplify the life of system administrators. They are able to manage who has access to their Stackato cluster from their existing LDAP tools and they control which of those users are administrators on the system.
6. Application Auto-Scaling
Stackato has always made it easy for application owners to scale up or down the number of instances of their application to meet demand. This has been either a single command from the command-line, a click in the web console or a call to the Stackato HTTP API.
Stackato's ability to integrate with tools like New Relic has provided a way for users to write simple scripts that retrieve metrics from New Relic, or other monitoring systems, and ping Stackato's API to scale up and down accordingly. These simple scripts can also be deployed on Stackato.
But who wants to write a script and integrate with 3rd party services when you can just use Stackato 3.2's application auto-scaling?
Stackato 3.2 provides the option to enable auto-scaling. This feature is configurable so that you can choose the minimum and maximum number of instances you want to run. You can also control the CPU thresholds at which it will scale up and scale down.
In my example above, I will always have at least 3 instances of my application (distributed across my Availability Zones - see above), but no more than 10 instances. If the average CPU usage rises above 65%, then Stackato will automatically add more instances for me. If the average falls below 25%, then Stackato will start to retire instances in order to save server resources.
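The decision logic behind those settings can be sketched as a small function. The thresholds and limits are the ones from my example; the function itself is an illustration of the policy, not Stackato's implementation:

```python
MIN_INSTANCES = 3    # values from the example above
MAX_INSTANCES = 10
SCALE_UP_CPU = 65    # average CPU % above which an instance is added
SCALE_DOWN_CPU = 25  # average CPU % below which an instance is retired

def desired_instances(current, avg_cpu):
    """Return the instance count after one auto-scaling decision."""
    if avg_cpu > SCALE_UP_CPU:
        return min(current + 1, MAX_INSTANCES)
    if avg_cpu < SCALE_DOWN_CPU:
        return max(current - 1, MIN_INSTANCES)
    return current

print(desired_instances(3, 80))  # 4 -- busy, scale up
print(desired_instances(3, 10))  # 3 -- idle, but never below the minimum
```

Note the dead band between 25% and 65%: within it the instance count stays put, which prevents the system from oscillating up and down on small load fluctuations.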
Now I can sleep at night knowing that the scaling up and down of my application is taken care of. I do not have to spend time writing and testing my own home-grown scripts to manage this.
7. Restart Required Tracking
There are many situations where you will re-configure your application, which then requires it be restarted. One reason for this might be changing environment variables. There is no way I know to inject environment variables into a running process without restarting that process. Another reason for requiring an application restart is enabling SSO (see #4 above) for your application.
Stackato 3.2 tracks any changes made to the configuration of an application and compares these against the current state of the running application. You will see a persistent warning in the web console that the application needs to be restarted.
What I really like about this is that if you revert your changes before restarting the application, the warning will cease.
While this is helpful to individual application owners, administrators can also make use of this feature.
When viewing all the running applications on a system, administrators can see which applications have had changes applied to them, without a subsequent restart.
How can the administrator see what has changed? Simple - just click on the Timeline of the application and see what changes have been applied since the application was last restarted.
If something is not clear, then it is easy for the administrator to leave a comment for the application owners.
8. Improved UI
With UI, it is the little things that bring the most joy. Nice UIs are subtle and consistent, with good layout. The web console has received many updates in Stackato 3.2 following its rebirth in 3.0.
One such feature is the sidebar tabs. They are consistent within each section and provide simple access to sub-sections. As I click around the console, these just feel right.
Other UI updates include consistent buttons and icons. For instance, there is a clear distinction between removal and disassociation.
The top menu is much lighter now and pages which show lists of entities (Users, Organizations, Spaces, Applications) have consistently nice search and filter options.
Did I mention how much snappier the UI is? 3.0 was a vast improvement over 2.10, but with 3.2 it feels like the training wheels have come off. Navigating around is super responsive.
9. Application Description
In Stackato 3.2, you can now add a description to each application and provide links, in that description, to external resources. The description can be set in the stackato.yml file, supplied via the web console's App Store, or edited on the Application page of the web console.
10. Better Patching
"kato patch" has been part of Stackato for a little while now. It is a way to apply small fixes and security updates to a Stackato cluster.
With 3.2, the Stackato Support Team gave us their wish-list of everything they wanted to see improved in this area, and it has been delivered.
Patches are now orchestrated and tracked on a per-machine basis, while still being able to apply patches across the cluster with a single kato command. Machines can be updated individually. Specific machines can be excluded from receiving specific updates. Also, patches can now be rolled back.
"kato patch status" shows the status of all patches on all machines.
That's my top 10 whirlwind tour of what you can expect to see in Stackato 3.2. But it does not stop there. There is better integration with CloudStack and Citrix CloudPlatform, more granular control over user permissions, Amazon RDS for Oracle support and more.
Tags: auto-scaling, availability zones, ldap, ldap groups, PaaS, placement zones, sso, stackato, UI, web console