Continuous Delivery: It’s All About the Pipeline

The Agile Manifesto has had a significant impact on the way software is built. It defines twelve fundamental principles, the first of which is “Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.”
Continuous Delivery is explicitly mentioned in this principle, and now, 15 years after the manifesto’s formulation, the practice is hitting the mainstream.
Last month I had the pleasure of joining Continuous Delivery experts Marco Abis, founder of HighOps, and David Farley, Founder and Director of Continuous Delivery Ltd. and author of The Book on Continuous Delivery, for a fascinating webinar on the topic of Continuous Delivery, and how it relates to successful production of cloud software. This blog captures parts of the conversation.

Continuous Integration vs. Delivery vs. Deployment

We kicked off the webinar with some baseline definitions.

Continuous Integration is the process of ensuring a build is in a working state, satisfying developers that it’s ready for production. From a source code management (SCM) point of view, this generally means that all development work is constantly merged with the master branch, allowing the full unit test suite to be run against the “finished” product at every check-in.
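
To make that concrete, here’s a minimal sketch of the step a CI server might run on every check-in. The git remote layout and the use of pytest are assumptions for illustration – any SCM and test runner fit the same shape:

```python
# Minimal per-check-in CI step (a sketch): integrate with master,
# then run the full unit test suite. Assumes a git repo and pytest.
import subprocess
import sys

def ci_build() -> bool:
    """Merge the latest master and run every unit test."""
    steps = [
        ["git", "fetch", "origin"],
        ["git", "merge", "origin/master"],  # integrate continuously
        ["pytest", "--quiet"],              # the full unit test suite
    ]
    for step in steps:
        if subprocess.run(step).returncode != 0:
            print(f"CI failed at: {' '.join(step)}")
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if ci_build() else 1)
```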

Continuous Delivery is the “last mile” of the software lifecycle and the child of Continuous Integration: it ensures that a build is always in a releasable state. This is much more than integrating with trunk and running unit tests: the software is actually provisioned to a full, production-like stack and delivered to some set of end users, whether that be QA, the product team, or a selected group of customers. Continuous Delivery goes hand in hand with a DevOps approach, where the team is responsible for all aspects of software delivery.

Continuous Deployment: If Continuous Delivery is the last mile, Continuous Deployment is the last inch. The entire process, from check-in to production, is fully automated, with no human intervention.

The Pipeline

The foundation of Continuous Delivery is the deployment pipeline: the path a code change takes from check-in to production. The term pipeline, as it relates to CD, was actually coined by David, who says that the job of the deployment pipeline is to prove that a release candidate is not fit for production. You can never prove that it is fit – that is theoretically and practically impossible – but a single failing test or validation proves that it is not.
At the head of the pipeline is the development team working on the software. Each commit gives birth to a release candidate, which flows through the pipeline; if it makes it through to the end, it’s ready for release. Ideally, as Marco says, the release process itself should be as boring as possible: effectively a yes/no decision based on a successful pass through the pipeline, and a button that someone can click to choose a release candidate and deploy it.
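
In code terms, the pipeline is just a chain of validations, any one of which can reject the candidate. Here’s a deliberately simplified sketch (the stage names are illustrative; real stages would run real test suites against a production-like environment):

```python
# A sketch of a deployment pipeline: each stage tries to prove the
# release candidate is NOT fit for production. A candidate that
# survives every stage is ready for the release button.
from typing import Callable, List

Stage = Callable[[str], bool]  # takes a candidate id, returns pass/fail

def run_pipeline(candidate: str, stages: List[Stage]) -> bool:
    for stage in stages:
        if not stage(candidate):
            print(f"{candidate} rejected by {stage.__name__}: not fit for production")
            return False
    print(f"{candidate} passed every stage – ready for release")
    return True

# Illustrative placeholder stages
def unit_tests(candidate: str) -> bool: return True
def integration_tests(candidate: str) -> bool: return True
def performance_tests(candidate: str) -> bool: return True

run_pipeline("build-1042", [unit_tests, integration_tests, performance_tests])
```
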
The goal here is to increase visibility of changes from check-in to production, while at the same time:

  • finding and removing bottlenecks
  • shortening the feedback loop
  • automating as much as possible
  • eliminating error-prone manual processes
  • optimizing and visualizing what’s going on to improve the flow

What’s in the Pipeline?

Almost anything. At the head end, as part of the Continuous Integration stage, unit tests are an expected component. But the pipeline consists of much more than unit tests, including:

  • regression tests
  • integration tests
  • UI tests
  • performance tests
  • latency tests
  • scanning for the OWASP Top Ten vulnerabilities
  • general security scanning, such as that offered by BrightPoint Security
  • accessibility tests, to ensure the app is usable by people with disabilities
  • time-travel tests that manipulate the clock for long-running scenarios (see the sketch after this list)
  • time-event tests, such as millennium rollover or daylight-saving transitions
  • general event simulation
  • traffic flow tests
  • tests for specific hardware and OS versions
  • tests for dependencies, other software versions, OS changes
  • failure tests: selectively destroying bits of the application, as Chaos Monkey does
  • memory leak detection, such as that provided by Plumbr
  • regulation and compliance tests (for example automated PCI compliance scans)
  • app-specific analytics
  • exploratory tests
  • usability tests

As you can see, continuous delivery isn’t just about unit tests.
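
To pick one example from the list, here’s what a time-travel test can look like in Python, using the freezegun library to pin and advance the clock (one option among several; format_banner is a hypothetical function under test):

```python
# A time-travel test (sketch): instead of waiting for a real rollover,
# pin the clock just before midnight and tick it across the boundary.
from datetime import datetime, timedelta
from freezegun import freeze_time

def format_banner() -> str:
    """Hypothetical date-dependent code under test."""
    return f"Copyright {datetime.now().year}"

def test_millennium_rollover():
    with freeze_time("1999-12-31 23:59:59") as clock:
        assert format_banner() == "Copyright 1999"
        clock.tick(delta=timedelta(seconds=2))  # cross into the year 2000
        assert format_banner() == "Copyright 2000"
```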

Quick and Easy and Cheap First

It’s best to optimize the ordering of the pipeline: it doesn’t make sense to put long-running tests, or tests that are unlikely to fail, at the front. Developers need quick feedback on whether a change is fit, so the tests most likely to fail should run first. Start with unit tests – potentially thousands or tens of thousands of them – then move on to deeper and more expensive tests. Any tests requiring manual intervention should sit at the end of the pipeline, as human intervention is costly.
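
One simple way to implement this split, assuming a pytest-based suite, is to tag the expensive tests and run them in a later stage (the “slow” marker name is just a convention; register it in pytest.ini to avoid warnings):

```python
# Cheap-first test ordering (sketch): the first pipeline stage runs
#   pytest -m "not slow"
# and a later, more expensive stage runs
#   pytest -m slow
import pytest

def test_price_rounding():
    # fast unit test: gives feedback in seconds
    assert round(19.999, 2) == 20.0

@pytest.mark.slow
def test_full_checkout_flow():
    # expensive end-to-end test: deferred to a later stage
    ...
```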

What Can’t Be Automated in the Pipeline?

Automation is key here. We humans are useless at repetitive tasks like for-loops. Computers, on the other hand, are really good at them, and they do them cheaply. So anything that requires invoking repetitive actions should be automated.
However, we humans (for now) do have a leg up when it comes to softer, squishier, pattern-matching kinds of activities, and we are really good at making decisions based on incompletely specified criteria. The point is that software delivery is a decision process, and CD isn’t necessarily about eliminating human decision making.
Examples of things that might need human intervention range from governance decisions (where regulations might require live human judgement) and legal reviews to sanity checks on the UI that catch things automated tests might miss – white text on a white background, for example.
Also, if your UI is undergoing rapid change, it might make sense to skip some of the complexity of building up an automated UI test suite.

Breakpoints

At the Velocity conference last month I bumped into my friend Dan Gordon, Product Manager at Electric Cloud, who told me that their Continuous Delivery products support the idea of “breakpoints”: the automated pipeline can be paused at any point to wait for human intervention. For example, if a human decision is required for a compliance check, the pipeline can be paused while a person is notified to clear it for the next stage. Being able to add manual interaction points to your automated processes is important because, although most teams strive for completely hands-off CD, the reality is that human interaction is often required along the way.
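
Electric Cloud’s products support this natively; purely to illustrate the shape of the idea, here’s a minimal sketch of a breakpoint in Python (the approval-file convention and notify_reviewer are hypothetical):

```python
# A pipeline "breakpoint" (sketch): pause and poll until a human
# grants approval, or time out and treat the stage as rejected.
import time
from pathlib import Path

APPROVAL_FILE = Path("approvals/compliance-check.ok")  # illustrative convention

def notify_reviewer() -> None:
    # Stand-in for a real notification (email, chat, ticketing system)
    print("Compliance stage needs human sign-off; pipeline paused.")

def wait_for_approval(timeout_s: int = 24 * 3600, poll_s: int = 60) -> bool:
    notify_reviewer()
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        if APPROVAL_FILE.exists():
            return True
        time.sleep(poll_s)
    return False  # a timeout is a rejection, never a silent pass
```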

Audit Trail

An important benefit of an automated continuous delivery system is that it provides an extensive audit trail: if you’ve automated everything, then every change can be identified and tracked in the SCM system – who committed the change, the nature of the change, the entire dependency chain, which release it is built on, which tests were run, which tests failed, which stages it progressed through, who approved the manual testing steps, and who pressed the button to release to production. With a continuous delivery system in place you have a record; the entire process is transparent. CD shines a light on the entire cycle.
This is one reason that continuous delivery is so appealing to the finance industry, which is heavily regulated and needs audit and policy compliance across its entire SDLC.
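
The mechanics can be as simple as an append-only, structured event log written at every step. A sketch (the field names and values are illustrative):

```python
# Audit trail (sketch): every pipeline event becomes one append-only
# JSON record, so the whole path from commit to release is replayable.
import json
import time

AUDIT_LOG = "pipeline-audit.log"

def audit(event: str, **details) -> None:
    record = {"ts": time.time(), "event": event, **details}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")

audit("commit", sha="3f9c2a1", author="alice")
audit("stage_passed", sha="3f9c2a1", stage="unit_tests", tests_run=4182)
audit("manual_approval", sha="3f9c2a1", stage="compliance", approver="bob")
audit("released", sha="3f9c2a1", released_by="carol")
```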

Where We Are Today

Continuous Delivery has been around for some time, but only recently has it become much simpler to implement a deployment pipeline. Nowadays, everything that’s needed to spin up an environment, VM, network, or database is an API call away, and the tools and practices have evolved significantly. Technologies such as PaaS provide a quick way to incorporate scale testing, monitoring, and logging into the pipeline from day one of the lifecycle, as well as to incorporate zero-downtime upgrades and updates.
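
“An API call away” looks something like this in practice. The endpoint, payload, and response field below are hypothetical – every cloud or PaaS provider has its own API – but the shape is the same:

```python
# Provisioning a production-like environment over HTTP (sketch).
# The URL and fields are hypothetical placeholders.
import requests

def provision_environment(release: str) -> str:
    resp = requests.post(
        "https://infra.example.com/v1/environments",  # hypothetical API
        json={"template": "production-like", "release": release},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["environment_id"]  # hypothetical response field
```
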
In the old days, at the beginning of a project, teams would spend 70% of the engineering effort building up the infrastructure and process for CD. Now a team can put together a flexible and complete continuous delivery system in no time.

Title image courtesy of Unsplash.
