Using Docker To Run Ruby Rspec CI In Jenkins

Phil Whelan, January 8, 2014

In this post, I am going to give a step-by-step introduction to how you can do continuous integration testing with Docker. I will be running the rspec test suite of the Cloud Foundry project's Cloud Controller component, although the same process can be applied to any Ruby project. I will show how to build Docker images to easily run repeatable tests and how to set up Jenkins to do it for you in an automated manner.

Continuous Integration Using Docker

The goal of this post is to show Jenkins running a project's test-suite using Docker. This will occur following every code check-in, every N minutes, or whenever it is needed.

Why use Docker to do this? Having a clean environment to run tests is one of the ten commandments of running tests. With Docker's Dockerfile, you can specify a series of steps to create the full stack of the test environment you need. Docker can follow the steps to pre-build the test environment, then stash that environment for disposable re-use. Since a running Docker image, or LXC "container", is ephemeral, you can blow it away and re-create it very quickly. Perfect for continuous integration!

My Docker usage will be two-step. First, I will create the Docker image. This will have all the basics required by any test run from this project. I am basing my assumptions on system requirements from the current state of the project.

It will not have everything installed, because I cannot predict what a developer will do during a day of hacking on code. They may change code dependencies (gem dependencies in this case) and so I cannot install those dependencies until the time I run that version of the code.

The second step will be to take my built Docker image and run it every time a new version of the project’s code is created. I do not have access to create a GitHub code commit hook, which would tell Jenkins to run the tests on each code check-in, so instead I will run it periodically.

Since I can re-use the Docker image for all my subsequent test runs, I will be creating my Docker-based test environment (step 1) far less frequently than running my tests (step 2).

I can use Jenkins to perform both these tasks. In one Jenkins job, run maybe once a day, it can recreate the base Docker image and push it to a local Docker repository. In a second Jenkins job, which is run each time a developer commits code, I can run the Docker image, which will pull it from the local Docker repository.

Guinea Pig

I am going to run the test suite of Cloud Foundry's Cloud Controller. This is a core component of the Cloud Foundry project and one of its most complex pieces. The test suite is very large, so it takes more time to run (about 2 hours for me) than a developer would have patience for. This makes it ideal for continuous testing in the background to confirm that nobody has checked in code that breaks the test suite.

CI Docker Image

My continuous-integration Docker image has 3 parts...

1) Specify a base image

2) Install dependencies

  • Dependencies will be installed via apt-get, wget, rbenv, rubygems and Ruby's bundler.

3) Specify the command that "docker run" executes when this Docker image is used

  • I want to ensure I have the latest code (via "git pull") and that we install any code-level dependencies (via "bundle install"). Finally, it should run the test suite.

  • The exit code of the test suite will be returned by "docker run", and Jenkins will use this to determine whether the tests passed or failed. If the test run fails, Jenkins will inform relevant people via email, if we configure it to do so.
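That pass/fail decision is ordinary exit-status handling. A minimal sketch, with a stand-in shell function in place of a real "docker run" (the failing result here is invented for illustration):

```shell
#!/usr/bin/env bash
# Sketch: how CI tooling turns an exit status into pass/fail.
# simulated_docker_run stands in for "docker run <image>"; a non-zero
# return mimics rspec reporting failed examples inside the container.
simulated_docker_run() { return 1; }

if simulated_docker_run; then
  echo "BUILD PASSED"
else
  echo "BUILD FAILED"
fi
```

Jenkins does exactly this implicitly: any non-zero exit status from a build step marks the build as failed.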


A Dockerfile is a cross between assembler and a bash script. Each non-whitespace, non-comment line starts with an action keyword. I like to uppercase these so they stand out, but this is not mandatory. The remainder of each line is the content used by that action keyword.

For instance, "FROM" is used to specify the base image, so "FROM ubuntu" specifies that I am using the "ubuntu" base image.

"RUN" is used to run a shell command and is commonly used to install dependencies.

"ENV" can set environment variables, which can be used in subsequent actions, but also persists to the "CMD" action.

"CMD" is called when "docker run" is run against your created image. "CMD" is ignored during the image building.

Here is my Dockerfile (gist here)...

# docker image for running CC test suite

FROM ubuntu

RUN apt-get -y install wget
RUN apt-get -y install git

# install Ruby 1.9.3-p484
RUN apt-get -y install build-essential zlib1g-dev libreadline-dev libssl-dev libcurl4-openssl-dev
RUN git clone https://github.com/sstephenson/rbenv.git ~/.rbenv
RUN git clone https://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
RUN echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile
RUN echo 'eval "$(rbenv init -)"' >> ~/.bash_profile
ENV PATH /.rbenv/bin:/.rbenv/shims:$PATH
RUN rbenv init -
RUN rbenv install 1.9.3-p484 && rbenv global 1.9.3-p484

# never install ruby gem docs
RUN echo "gem: --no-rdoc --no-ri" >> ~/.gemrc

# Install bundler and the "bundle" shim
RUN gem install bundler && rbenv rehash

# Checkout the cloud_controller_ng code
RUN git clone -b master git://github.com/cloudfoundry/cloud_controller_ng.git /cloud_controller_ng

# mysql gem requires these
RUN apt-get -y install libmysqld-dev libmysqlclient-dev mysql-client
# pg gem requires this
RUN apt-get -y install libpq-dev
# sqlite gem requires this
RUN apt-get -y install libsqlite3-dev

# Optimization: Pre-run bundle install.
# It may be that some gems are installed that never get cleaned up,
# but this will make the subsequent CMD runs faster
RUN cd /cloud_controller_ng && bundle install

# Command to run at "docker run ..."
CMD if [ -z $BRANCH ]; then BRANCH=master; fi; \
    cd /cloud_controller_ng \
    && git checkout $BRANCH \
    && git pull \
    && git submodule init && git submodule update \
    && bundle install \
    && bundle exec rspec spec    
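The first clause of that CMD is a plain shell defaulting idiom: $BRANCH can be supplied from outside the container (for example via "docker run -e BRANCH=some-branch") and falls back to "master" when it is unset. In isolation it behaves like this:

```shell
#!/usr/bin/env bash
# The BRANCH-defaulting idiom from the CMD above, run outside Docker.

unset BRANCH
if [ -z "$BRANCH" ]; then BRANCH=master; fi
echo "$BRANCH"   # prints "master": no branch was supplied

BRANCH=my-feature
if [ -z "$BRANCH" ]; then BRANCH=master; fi
echo "$BRANCH"   # prints "my-feature": the supplied branch wins
```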

The above installs Ruby 1.9.3 at a specific patch-level and any known system-level dependencies that may be needed by gems. If a developer added gems that required additional system dependencies, then those would need to be added to the Dockerfile and the Docker image would need to be rebuilt. This happens rarely, but for this reason it would be desirable to have developers own this Dockerfile and put it alongside the code and check it in with the code. This would then be updated in-step and could trigger a re-build, via Jenkins, of the Docker image.

Installed Gems Optimization

Earlier I said that I cannot install code dependencies (gem dependencies), since they may change from one version of the code to the next, but you may have noticed that I have pre-installed them anyway, via "bundle install".

As an optimization, I assume that most of the gems will rarely change. I will still install them just prior to running the tests, via another "bundle install", so some will become redundant over time. But since most, if not all, will already be there, the "bundle install" at test run time will be fast.

Luckily, I am using Jenkins to build the Docker image, probably once a night, so any installed gems that become redundant will not be around for long.

You may think this adds an extra variable to the test run; if so, this optimization can be skipped for purity, at the cost of a longer time for each test run.

Docker With Jenkins

Very little was needed to get Docker working with Jenkins. I just needed to ensure that the unix user "jenkins" belonged to the "docker" group.

Docker runs as the "root" user and the "docker" group. When the docker daemon starts up it creates a unix socket owned by the "root" user and the "docker" group. Therefore, the docker command-line client needs to be run via "root" user or someone in the "docker" group.

$ ls -l /var/run/docker.sock
srw-rw---- 1 root docker 0 Dec 27 09:45 /var/run/docker.sock

Simply add the jenkins user to the docker group to be able to create and run Docker images without sudo.

$ sudo usermod -a -G docker jenkins

Please consider any security concerns with doing this. I am doing this in a trusted environment.
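One way to sanity-check the group change is to parse the docker entry from /etc/group; here a canned line stands in for the output of "getent group docker", with member names taken from this post's setup:

```shell
#!/usr/bin/env bash
# Check that "jenkins" appears in the docker group's member field.
# group_line is a canned stand-in for: getent group docker
group_line="docker:x:999:phil,jenkins"

if echo "$group_line" | cut -d: -f4 | tr ',' '\n' | grep -qx jenkins; then
  echo "jenkins is in the docker group"
else
  echo "jenkins is NOT in the docker group"
fi
```

Note that group membership is read at login, so the jenkins process may need to be restarted before the change takes effect.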

Local Docker Registry

Docker images can get quite large, so it is useful to have a local version of the Docker registry on the same network, or same machine, as you are running Docker. I am going to be running it on the same machine that I am running Jenkins on.

I do not have to worry about the volatility of where I put the repository, as the built Docker images are disposable. As long as I put my Dockerfile somewhere safe (GitHub?), then I can recreate the Docker image anywhere at any time.

Luckily the Docker registry is very simple to setup. It is just a Docker image itself, found on the Docker registry. Yes, things start getting very "Inception" quickly.

$ docker run -p 5000:5000 samalba/docker-registry

Note, if you do not belong to the "docker" group, you will have to run this as sudo. I added myself to the "docker" group as follows...

$ sudo usermod -a -G docker phil

The "-p 5000:5000" specifies that the docker-registry process should listen on the port 5000 internally in the Docker container and Docker should map that to port 5000 on the host machine.

We can check it is running by using the "docker ps" command...

$ docker ps
CONTAINER ID   IMAGE                            COMMAND                CREATED             STATUS              PORTS                    NAMES
81bbfc81f7f9   samalba/docker-registry:latest   /bin/sh -c cd /docke   48 seconds ago      Up 47 seconds>5000/tcp   desperate_bell

Jenkins Job: Build The Docker Image

Creating a Docker image is quite simple. It requires 3 commands: "build", "tag" and "push".

"docker build", if successful, will output "Successfully built <build-id>", where "<build-id>" is a hex string. You can then use this build-id to "docker tag" the image with a human-readable name. You then use this image name to "docker push" it to a Docker registry.

docker build <directory containing Dockerfile>
docker tag <build-id> <registry-address>/<image-name>
docker push <registry-address>/<image-name>

Automating this involves extracting the "<build-id>" from the "docker build" output, so I created a small bash script called "build_and_push" to help with this and manage the whole process of building the Docker image and getting it into the local repository.

#!/usr/bin/env bash

# Builds the docker image and pushes it to a
# repository (local by default)

# Usage:
#   build_and_push <directory of Dockerfile> <resultant docker image name>

if [ "$DOCKER_REPO_SERVER" = "" ]; then
  DOCKER_REPO_SERVER=localhost:5000
fi
DOCKER_REPO_NAME=$DOCKER_REPO_SERVER/$2

# Build docker image
rm -f docker-built-id
docker build $1 \
  | perl -pe '/Successfully built (\S+)/ && `echo -n $1 > docker-built-id`'
if [ ! -f docker-built-id ]; then
  echo "No docker-built-id file found"
  exit 1
fi
DOCKER_BUILD_ID=`cat docker-built-id`
rm -f docker-built-id

# Tag the built image with the repo-qualified name
docker tag $DOCKER_BUILD_ID $DOCKER_REPO_NAME

# Publish built docker image to repo
docker push $DOCKER_REPO_NAME

Using this script and my Dockerfile, I now have everything I need to create my first of two Jenkins jobs.

Note that, for simplicity, I have put the Dockerfile and script in 2 public gists, which are downloaded at the time of running the Jenkins job.


cloud_controller_ng rspec docker build

Build / Execute shell:


# Fetch Dockerfile (gist URL omitted)
wget <gist-url-of-Dockerfile> --directory-prefix=$DOCKERFILE_DIRECTORY

# Fetch build_and_push script (gist URL omitted)
wget <gist-url-of-build_and_push>
chmod +x build_and_push

# Build the Docker image
DOCKER_REPO_SERVER=localhost:5000 ./build_and_push $DOCKERFILE_DIRECTORY cloud_controller_ng_rspec

Build Triggers / Build periodically / Schedule :

15 3 * * *

This will be run every day at 3:15am, so the next day's tests will be run with a fresh Docker image.
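For anyone unfamiliar with Jenkins' cron-style syntax, the five fields of "15 3 * * *" break down as follows:

```
# ┌───────── minute (15)
# │  ┌────── hour (3, i.e. 3am)
# │  │ ┌──── day of month (any)
# │  │ │ ┌── month (any)
# │  │ │ │ ┌ day of week (any)
  15 3 * * *
```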

Jenkins Job: Run The Docker Image

Now that we have a Docker image primed and ready, our second Jenkins job just needs to run it.


cloud_controller_ng rspec docker run

Build / Execute shell:

docker run localhost:5000/cloud_controller_ng_rspec

The command is quite simple. "docker run" will pull the latest "cloud_controller_ng_rspec" Docker image from our local Docker registry and run it. At this point the "CMD", found in the Dockerfile, will be run.

To recap, that line looks like this...

# Command to run at "docker run ..."
CMD if [ -z $BRANCH ]; then BRANCH=master; fi; \
    cd /cloud_controller_ng \
    && git checkout $BRANCH \
    && git pull \
    && git submodule init && git submodule update \
    && bundle install \
    && bundle exec rspec spec

The container checks out the appropriate $BRANCH of cloud_controller_ng.git, if specified (left to the reader to add in Jenkins). It then does a "git pull" to ensure it has the latest code, then initializes the git submodules, which our project does have.

Then we see the Ruby specific commands, "bundle install" and finally "bundle exec rspec spec" to run our test suite.

If you are interested, here is roughly what you will see in the console output of the Jenkins job.

And finally we see...

Finished in 121 minutes 1 second
7638 examples, 62 failures, 3 pending

"docker run" returns an exit code of 1 (failure), since several tests failed. This causes Jenkins to report to us that the tests are failing.

We can see that this took just over 2 hours to run. Not something that most developers would have much patience for.


Since I am using a Dockerfile to specify my test environment, I can be sure that if you follow these steps you will be running the same test suite in an identical environment. It also means that if I hit a problem, I (or anyone else) can replicate it, because I have specified the full stack of my environment. In minutes you can be running it too.

This is a big win for DevOps. Developers can create an initial environment in a Dockerfile, check it into git and the Operations team can then collaborate on it. The Operations team may even send a pull request to the Developers that says, "Hey, our production environment does not look like that. Try this instead...". The updated Dockerfile is then checked out by Jenkins, which builds the new test environment and subsequent test runs are run on a more production-like environment.


About the Author

Phil is the Director of Engineering for Stackato at ActiveState. Stackato is an enterprise PaaS solution based on Cloud Foundry and Docker. Phil works closely with the Stackato development team and is a compulsive code-reviewer. You will see Phil regularly on ActiveState's Blog writing about Cloud Foundry, Docker, OpenStack, CoreOS, etcd, Consul, DevOps and many other cloud related technologies. Prior to coming to ActiveState, Phil worked in London for the BBC, helping build the iPlayer, and at Cloudera in San Francisco, supporting Hadoop and HBase. He also spent time working in Japan. Phil has worked for several startups in Vancouver, building large-scale data processing applications such as real-time search engines, log indexing and a global IP reputation network.