ActiveBlog

Protoduction: An agile workflow enabled by PaaS
by Troy Topnik, May 30, 2013

Here is a word I recently saw on the #stackato IRC channel: protoduction

It’s not in any dictionary that I’m aware of, but it’s generally construed to be “a prototype that ends up in production” or some sort of hacky solution that ends up being deployed in a hurry. The word was used in reference to a proof-of-concept system that had been opened up to developers and was now hosting production applications a little… ahead of schedule.


The term has a negative connotation, an image problem akin to Perl’s “duct tape of the internet” designation. You wouldn’t want to put anything into capital-P Production without meticulous design, careful testing, and planned rollout, would you?

Well, people have been doing exactly that with software on the web since its inception, and an entire industry has evolved around legitimizing this way of working while making it more structured, safe, and effective. Variously called Agile IT or DevOps, the idea is that you’re deploying code all the time: instead of planning releases months in advance that make sweeping changes to the codebase, small non-breaking changes are deployed as soon as they pass their tests.

Stackato (and Platform-as-a-Service in general) doesn’t enforce a particular way of working, but it makes this type of modern deployment workflow especially easy. Let’s look at two ways of releasing code to production. The first is a more traditional flow that would work with the waterfall model of software development. The second is a bit more PaaS-centric.

The “old” way

Let’s look at a simplified, but fairly typical, deployment workflow. This diagram omits the requirements gathering and design steps and starts right in at implementation by the developers.

Typical Workflow

Typically, the developers will get code working on their own machines, check their changes into an SCM system of some kind, then pass the code off for testing by a dedicated QA group or another group of developers. The top stream here shows a hurdle in the way and bidirectional arrows indicating typical problems encountered during this hand-off. Unless the software and systems used by both groups are identical, there is usually some breakage as a result of version or platform mismatch. Code will often have to be refactored to account for this or to address other incorrect assumptions about the hosting environment.

When deploying software to the web, there’s often an additional staging step between testing and production deployment. This step puts the software onto systems which are, ideally, identical to the production systems. Again, there are likely to be some incompatibilities and breakage unless the staging systems are completely identical to the testing systems. Another hurdle, and more back-and-forth.

If the code has to move once again to a new set of systems in production, there is yet another hurdle. Usually at this end of the workflow the systems are pretty close to identical, and there shouldn’t be too many surprises. Often, the staging and production systems will actually be swapped with a DNS or network change: the staging systems are put into production and the old production systems are retired (or kept around for rollback if required).

This flow works, but there are a number of potential points of failure and room for a lot of improvement.

And now, with Stackato

If we bring the Stackato VM “Micro Cloud” into the mix, we can eliminate some of the incompatibilities and back-and-forth from this workflow. Developers work on their code, continuously deploying their applications to a Stackato VM running on their local system. The code is then passed off to QA, who deploy it to their own Stackato VMs (or small Stackato clusters) using the same config and commands as the developers. The systems are identical in all respects other than scale (e.g. QA may deploy with more application instances and more memory per instance for load testing). Inconsistencies in the supporting software and OS are eliminated, as are the resultant breakages. The same holds true for the next two steps in deployment: the hurdles caused by inconsistent system software disappear, and the flow into production becomes more unidirectional.
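Because dev and QA deploy with the same client and configuration, the hand-off can be reduced to pointing at a different API endpoint. A minimal sketch, where the hostnames are illustrative and a dry-run wrapper prints the commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch: print each stackato command rather than executing it,
# so the sequence can be read (and tested) without a live cluster.
run() { echo "+ $*"; }

# Dev defaults to the micro cloud VM on the local machine; QA overrides
# STACKATO_TARGET to point at its own VM or small cluster.
TARGET="${STACKATO_TARGET:-api.stackato.local}"

run stackato target "$TARGET"   # select the API endpoint
run stackato push               # the same deploy command for dev and QA
```

QA would run the identical script with `STACKATO_TARGET=api.qa.example.com`; nothing else about the deployment changes, which is what removes the hurdles between the two groups.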

Stackato Workflow

As an added benefit, code deployment (the “push” in PaaS terminology) is made simpler and faster by the PaaS’s API and toolchain. The ‘stackato push’ command that deploys the code can be automated with continuous integration utilities (more on that in an upcoming post).
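A CI job might script the push along these lines. The project name, date stamp, and revision are illustrative, and the dry-run wrapper stands in for a real invocation on the CI agent:

```shell
#!/bin/sh
# Hypothetical CI deploy step. The app name encodes project, date stamp,
# and SCM revision; 'run' prints the command instead of executing it.
set -e
run() { echo "+ $*"; }

PROJECT="projname"
STAMP="2013-05-07"        # on a real agent: $(date +%F)
REV="r64532"              # on a real agent: taken from the SCM checkout
NAME="${PROJECT}-${STAMP}-${REV}"

run stackato push "$NAME"
```

On a real agent the `run` wrapper would be dropped, leaving a single non-interactive `stackato push` that the CI utility fires after the test suite passes.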

So now we have a smoother, faster, waterfall-style development-to-production workflow. That’s great, and it fits into many organizations' existing workflows just fine. No serious, disruptive changes to the status quo. Nobody needs to change their job title to see a benefit from adding the micro cloud VM into the process.

But this isn’t really DevOps. It can be considered more agile than the original model, but there’s a more efficient way to use PaaS. Enter protoduction.

Using the PaaS to its fullest potential

Let’s say that instead of developers running their own Stackato VMs, we give them access to the production Stackato PaaS: the one that all the live applications are already running on.

Are we suggesting putting untested, pre-alpha quality code into production? Absolutely not. Stackato (and any PaaS worth its salt) has features that allow for various levels of access, and the network setup of the PaaS can limit how applications are exposed to the internet. What we are suggesting is using the multi-tenant nature of a PaaS to its fullest extent. Here’s a hypothetical workflow:

  1. The developer, instead of pushing the application to a VM on their local machine, pushes applications to the (internal) API endpoint of the production Stackato cluster. These app instances are not public, but are exposed internally on URLs such as ‘projname-2013-05-07-r64532.paas.example.com’. Implicit in this application naming scheme is that we track things like the project or application name, a date stamp, and a build or revision number - all done easily with a scripted ‘stackato push’. The network is configured to only resolve the ‘paas.example.com’ domain internally.

  2. The QA engineer, instead of spinning up a separate instance of the application, tests the app in place. If it has been deployed under a group that includes Dev and QA, the QA engineer will be able to scale the number of instances and the size of those instances in order to do load testing.

  3. Any deficiencies found are reported back to Dev, fixed in the source, and redeployed to the PaaS.

  4. Once the application gets the blessing of QA, a Release Engineer (or some admin with similar responsibilities) is given the task of putting the application into Staging or perhaps straight into Production. We’ll assume that this user is also a member of the group that “owns” this deployed app and that there’s an older version of the app running at ‘ourcoolapp.com’.

    a. If a database is involved, the admin un-binds the testing database from the app, and binds the production database to it instead.

    b. Application instances can be scaled up further to handle the projected load.

    c. The ‘ourcoolapp.com’ URL is mapped to the application with the ‘stackato map’ command or the web UI.

    d. The production URL is similarly unmapped from the old production version, or the old version is simply shut down.
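Step 4 above can be sketched as a short release script. Only `push` and `map` appear in this post; the other subcommands (`unbind-service`, `bind-service`, `instances`, `unmap`) are assumptions based on the vmc-style client Stackato ships, and every app, service, and domain name is illustrative:

```shell
#!/bin/sh
# Dry-run sketch of step 4: print the commands a release engineer
# would run rather than executing them against a live cluster.
run() { echo "+ $*"; }

NEW="ourcoolapp-2013-05-07-r64532"   # the QA-blessed deployment
OLD="ourcoolapp-2013-04-12-r61200"   # the version currently in production

run stackato unbind-service testing-db "$NEW"    # 4a: swap the databases
run stackato bind-service production-db "$NEW"
run stackato instances "$NEW" 8                  # 4b: scale for projected load
run stackato map "$NEW" ourcoolapp.com           # 4c: take production traffic
run stackato unmap "$OLD" ourcoolapp.com         # 4d: retire the old version
```

Skipping or delaying the final `unmap` is what produces the crossfade described below the diagram: both versions keep serving the production URL until the old one is unmapped.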

Crossfade vs. Cutover

At this point, the new version has taken over all requests to ‘ourcoolapp.com’ and is in production. The gist of this approach (literally) is from Derek Collison, one of the creators of Cloud Foundry. You can think of it as a crossfade between the two versions rather than a pinpoint cutover. At the end of step 4c above, we have a situation where there are two “competing” versions of the code serving requests to the production URL.

Assuming the version change allows it (e.g. there are no changes to the database schema), you can do interesting things like A/B testing. With equal pools of application instances for both versions, the Stackato router will round-robin between them as long as the URL is mapped to both. If you’re running application performance monitoring such as New Relic, you can compare the performance of both versions of the application under real conditions.
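In CLI terms, the crossfade amounts to keeping the production URL mapped to both versions with equal instance pools. A dry-run sketch, where `map` comes from this post but the `instances` subcommand and all names are assumptions:

```shell
#!/bin/sh
run() { echo "+ $*"; }   # dry-run: print the commands rather than execute

A="ourcoolapp-2013-04-12-r61200"   # current production version
B="ourcoolapp-2013-05-07-r64532"   # candidate version

run stackato instances "$A" 4          # equal pools, so the router's
run stackato instances "$B" 4          # round-robin splits traffic ~50/50
run stackato map "$B" ourcoolapp.com   # both versions now serve the URL
```

Unmapping (or scaling down) either version ends the experiment and completes the crossfade in whichever direction the numbers favour.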

Upshot: There’s more than one way to do it

Organizations have different comfort levels with adopting agile IT practices, so Stackato does not enforce any single “one true workflow”. Introducing a consistent deployment environment need not mean a radical redesign of your deployment workflow, but using Stackato leaves the door open to progressively more efficient paths from code to cloud.


About the Author

Troy Topnik is ActiveState's technical writer. After joining ActiveState in 2001 as a “Customer Relationship Representative” (AKA Tech Support), Troy went on to lead the PureMessage Enterprise Support team before moving on to a technical writing role in 2004. His talent for describing software for new users stems from his difficulty understanding things that developers find obvious. He has a Bachelor of Music from the University of Victoria.