
Deploying your own Private Docker Registry
Matthew Fisher, January 21, 2014

This blog post shows how you can deploy your own private Docker Registry behind your firewall, with SSL encryption and HTTP authentication. A Docker Registry is a service that you can push Docker images to for storage and sharing. We will be installing the registry on Ubuntu, but it should work on any operating system that supports upstart. SSL encryption and HTTP basic authentication will be handled by Nginx, acting as a proxy server in front of the Docker Registry. Upstart will manage the gunicorn processes that run the registry. We will also use Redis as an LRU cache to reduce round trips to the storage backend.

Why do you need a Docker Registry?

When you create new Docker images for use in your environment - whether that's a Redis server, a Hipache daemon, or an IRC logbot - you're going to want to store the images somewhere safe. Maybe you're working on a project where you want Jenkins or Buildbot to build a Docker image on each commit, bag and tag it (read: docker commit && docker tag), and then push it to a registry. But what if your code is proprietary, and you don't want to push that image to the public registry? Docker Inc. has already thought of that for you and created the docker-registry project, which lets you push your own images to your own in-house registry. Woo!

If you want to kick the proverbial tires, you can test the docker registry:

$ docker pull samalba/docker-registry
$ docker run -d -p 5000:5000 samalba/docker-registry
$ # let's pull a sample image (or make one ourselves)
$ docker pull busybox
$ docker tag busybox localhost:5000/busybox
$ docker push localhost:5000/busybox

This is great for getting started with the registry, but it uses plain HTTP: anyone who can reach the endpoint can push to (or pull from) your server, which is not good. Let's get started with setting up our own private registry for internal use.

Planning our Deployment

Before we spawn an Ubuntu server to start deploying the registry, let's consider some things...

What Storage Backend?

What storage backend do we want to use? Here's a short list of the supported backends for the registry (a brief config sketch follows the list):

  • local: use the local filesystem
  • s3: store inside an Amazon S3 bucket
  • swift: store inside an Openstack Swift container
  • glance: use Openstack's Glance project
  • elliptics: use the Elliptics key-value store
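
Choosing a backend ultimately comes down to a few lines in the registry's config.yml, which we'll create later in this post. As a rough idea, an S3-backed prod flavor might look something like this instead of the local one we'll use below (the key names follow the sample config that ships with the registry and the bucket name is a placeholder, so double-check config_sample.yml for your version):

# hypothetical S3-backed flavor; verify key names against config_sample.yml
prod:
    storage: s3
    s3_access_key: REPLACEME
    s3_secret_key: REPLACEME
    s3_bucket: my-registry-bucket
    storage_path: /registry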

Sidenote: I created the backend for Openstack Swift. If you find any bugs with it, please feel free to file a bug on the registry's github page.

Hosted or In-House Server?

Where do we want to host our docker registry? Do we want to use our own Openstack cluster, Amazon Web Services, Rackspace, or our own bare metal servers? Any option will work for us!

One thing to consider when using cloud-hosted infrastructure is the advantage of using an external volume for your data. This gives you control over managing your own backups, which is a huge win for us.
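
To make that concrete, here is a minimal backup sketch, assuming the volume ends up mounted at /data/registry (as it will later in this post) and that backup.example.com is a placeholder host you can rsync to:

$ # sync the registry data to a backup host; run from the registry server
$ sudo rsync -a --delete /data/registry/ backup.example.com:/backups/docker-registry/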

What Operating System?

Since the docker registry is a python project, it's ridiculously simple to port over to other operating systems. You can quite easily write up a systemd config file, or launch it as a Windows Service. Because we will be installing it on Ubuntu, we will be using upstart to manage our gunicorn processes.
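
For example, on a systemd-based distribution, the upstart job we'll write later could be translated into a unit file along these lines (a rough, untested sketch, assuming the same paths and flavor used later in this post; adjust the gunicorn path and Redis service name for your system):

# /etc/systemd/system/docker-registry.service -- sketch only
[Unit]
Description=Docker Registry
After=network.target redis-server.service

[Service]
Environment=SETTINGS_FLAVOR=prod
WorkingDirectory=/opt/docker-registry
ExecStart=/usr/local/bin/gunicorn -k gevent -b 0.0.0.0:5000 -w 8 wsgi:application
Restart=on-failure

[Install]
WantedBy=multi-user.target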

I will be demonstrating the deployment process using the local storage backend, where all of our assets will be held on our own hardware. We have an internal Openstack cluster over here in our Vancouver office (we love Openstack!), so we will use that for our hosting solution. docker-internal.example.com will be the fully qualified domain name, and we will be using Ubuntu's 12.04.3 cloud image as the server.

All right. Let's get down to deploying!

Boot the Server

First, let's boot up a server. Since I'll be using our internal Openstack cluster, I'll just use the nova client to boot up my server. If you're following this post line by line, here are the credentials you'll need to set up:

$ cat ~/.bashrc
[...]
export OS_AUTH_URL=http://******/v2.0
export OS_TENANT_ID=******
export OS_TENANT_NAME="******"
export OS_USERNAME=******
export OS_PASSWORD="******"
[...]

Once you set that up, test by running:

$ sudo pip install python-novaclient
$ nova list

Before we boot the server, let's upload your SSH key, as well as the Ubuntu cloud image...

$ nova keypair-add --pub-key ~/.ssh/id_rsa.pub bacongobbler
$ sudo pip install python-glanceclient
$ glance image-create --name ubuntu-12.04.3-server-cloudimg-amd64 --disk-format qcow2 --container-format bare --location http://cloud-images.ubuntu.com/releases/12.04.3/release/ubuntu-12.04-server-cloudimg-amd64-disk1.img

And create a security group that allows external access to ports 80 and 443...

$ nova secgroup-create web-server "security group for standard web servers"
$ nova secgroup-add-rule web-server tcp 80 80 0.0.0.0/0
$ nova secgroup-add-rule web-server tcp 443 443 0.0.0.0/0
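
Depending on how your default security group is set up, you may also need to open port 22 so you can SSH into the instance once it's booted:

$ nova secgroup-add-rule web-server tcp 22 22 0.0.0.0/0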

Now we will create the volume, which will be 512GB in size. We will use it to store our Docker images:

$ nova volume-create 512 --display-name docker-internal

Finally, we can boot the server!

$ nova boot docker-internal --image ubuntu-12.04.3-server-cloudimg-amd64 --flavor m1.medium --security-groups web-server --key-name bacongobbler
$ # do some grepping for the volume ID
$ VOLUME_ID=$(nova volume-list | grep docker-internal | awk '{print $2}')
$ nova volume-attach docker-internal $VOLUME_ID /dev/vdb
$ nova floating-ip-list
+----------------+--------------------------------------+---------------+------+
| Ip             | Instance Id                          | Fixed Ip      | Pool |
+----------------+--------------------------------------+---------------+------+
| 192.168.68.222 | 79caf450-7b23-46bd-839a-abec7408a2c0 | 192.168.32.26 | nova |
| 192.168.68.224 | a10cb949-09b6-4533-9733-860a5f8fdff4 | 192.168.32.19 | nova |
| 192.168.68.225 | None                                 | None          | nova |
| 192.168.68.236 | None                                 | None          | nova |
| 192.168.68.237 | dc835a69-2894-4278-aebe-4f9ca6363724 | 192.168.32.12 | nova |
| 192.168.68.238 | 4a8835b6-a318-44b5-897d-2320977cfe01 | 192.168.32.20 | nova |
| 192.168.68.239 | afde96f2-9bac-441a-a0c7-589ace2ac6b9 | 192.168.32.15 | nova |
| 192.168.68.246 | 00ceedf4-8d85-4ea5-8f42-78a1ab521a62 | 192.168.32.13 | nova |
| 192.168.68.250 | c1ef2314-6067-464d-85ec-de2a26a80f3e | 192.168.32.4  | nova |
| 10.3.4.1       | 192dadcc-e786-4366-8091-2e9a364a65cf | 192.168.32.17 | nova |
+----------------+--------------------------------------+---------------+------+
$ nova add-floating-ip docker-internal 192.168.68.236

Wait a couple of seconds, then configure your DNS (at your domain registrar or internal nameserver) to map the subdomain docker-internal to this floating IP address. After that, run:

$ ssh ubuntu@docker-internal.example.com

Hooray!
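
If your DNS change hasn't propagated yet (or you don't control the registrar), a quick workaround while testing is a hosts entry on your workstation; the IP below is the floating IP we attached above:

$ echo "192.168.68.236 docker-internal.example.com" | sudo tee -a /etc/hosts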

Deploy and configure the registry

Now that we have our server, let's install some packages to get started.

ubuntu@docker-internal:~$ # let's update, upgrade, and reboot before we start
ubuntu@docker-internal:~$ sudo apt-get update
ubuntu@docker-internal:~$ sudo apt-get upgrade
ubuntu@docker-internal:~$ sudo reboot now
$ ssh ubuntu@docker-internal.example.com
ubuntu@docker-internal:~$ # switch to root
ubuntu@docker-internal:~$ sudo su
root@docker-internal:~# # we need the chunkin module for nginx
root@docker-internal:~# apt-get install git nginx-extras
root@docker-internal:~# # gives us the htpasswd command
root@docker-internal:~# apt-get install apache2-utils
root@docker-internal:~# # install dependencies
root@docker-internal:~# apt-get install build-essential libevent-dev libssl-dev liblzma-dev python-dev python-pip
root@docker-internal:~# # install redis to use as our LRU cache
root@docker-internal:~# apt-get install redis-server
root@docker-internal:~# apt-get clean

Now that we have that out of the way, let's install the docker registry:

root@docker-internal:~# git clone https://github.com/dotcloud/docker-registry.git /opt/docker-registry
root@docker-internal:~# cd /opt/docker-registry
root@docker-internal:~# # checkout the latest stable version of the registry 
root@docker-internal:~# git checkout 0.6.3
root@docker-internal:~# # create log dirs
root@docker-internal:~# mkdir -p /var/log/docker-registry
root@docker-internal:~# # install pip packages
root@docker-internal:~# pip install -r requirements.txt
root@docker-internal:~# cp config/config_sample.yml config/config.yml

If you've done all this correctly, you should now be able to confirm that the registry runs:

root@docker-internal:~# ./wsgi.py
2014-01-13 23:38:38,470 INFO:  * Running on http://0.0.0.0:5000/
2014-01-13 23:38:38,470 INFO:  * Restarting with reloader

If you see this, you're doing great! Now, we just need to set up a couple more things. Remember that volume we mapped to this server earlier? Let's set that up now:

root@docker-internal:~# mkdir -p /data/registry
root@docker-internal:~# mkfs.ext4 /dev/vdb
root@docker-internal:~# mount /dev/vdb /data/registry
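
To make sure the volume comes back after a reboot, add an fstab entry as well; this sketch uses the device path directly (a UUID-based entry is more robust if you prefer):

root@docker-internal:~# echo "/dev/vdb /data/registry ext4 defaults 0 2" >> /etc/fstab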

And now, let's edit our configuration file for the docker registry. You can use http://uuidgenerator.net/ to generate a secret key:

root@docker-internal:~# cat << EOF > /opt/docker-registry/config/config.yml
# The 'common' part is automatically included (and possibly overridden by
# all other flavors)
common:
    # Set a random string here
    secret_key: REPLACEME
    standalone: true
# This is the default configuration when no flavor is specified
dev:
    storage: local
    storage_path: /tmp/registry
    loglevel: debug
# To specify another flavor, set the environment variable SETTINGS_FLAVOR
# $ export SETTINGS_FLAVOR=prod
prod:
    storage: local
    storage_path: /data/registry
    loglevel: info
    # Enabling LRU cache for small files. This speeds up read/write on
    # small files when using a remote storage backend (like S3).
    cache:
        host: localhost
        port: 6379
    cache_lru:
        host: localhost
        port: 6379
EOF
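
If you'd rather generate the secret key on the server than use a website, something like this works; the sed one-liner assumes you left the literal REPLACEME placeholder in the file:

root@docker-internal:~# # generate a random secret and substitute it into the config
root@docker-internal:~# sed -i "s/REPLACEME/$(openssl rand -hex 32)/" /opt/docker-registry/config/config.yml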

Once this is done, set up an upstart job for the registry:

root@docker-internal:~# cat << EOF > /etc/init/docker-registry.conf
description "Docker Registry"
version "0.6.3"
author "Docker, Inc."

start on runlevel [2345]
stop on runlevel [016]

respawn
respawn limit 10 5

# set environment variables
env REGISTRY_HOME=/opt/docker-registry
env SETTINGS_FLAVOR=prod

script
cd $REGISTRY_HOME
exec gunicorn -k gevent --max-requests 100 --graceful-timeout 3600 -t 3600 -b 0.0.0.0:5000 -w 8 --access-logfile /var/log/docker-registry/access.log --error-logfile /var/log/docker-registry/server.log wsgi:application
end script
EOF

And then start it with:

root@docker-internal:~# start docker-registry
docker-registry start/running, process 10872

Verify that it's running by checking:

root@docker-internal:~# cat /var/log/docker-registry/server.log
2014-01-14 00:33:44 [15051] [INFO] Starting gunicorn 18.0
2014-01-14 00:33:44 [15051] [INFO] Listening at: http://0.0.0.0:5000 (15051)
2014-01-14 00:33:44 [15051] [INFO] Using worker: gevent
2014-01-14 00:33:44 [15056] [INFO] Booting worker with pid: 15056
2014-01-14 00:33:44 [15057] [INFO] Booting worker with pid: 15057
2014-01-14 00:33:44 [15062] [INFO] Booting worker with pid: 15062
2014-01-14 00:33:45 [15067] [INFO] Booting worker with pid: 15067
2014-01-14 00:33:45 [15068] [INFO] Booting worker with pid: 15068
2014-01-14 00:33:45 [15069] [INFO] Booting worker with pid: 15069
2014-01-14 00:33:45 [15070] [INFO] Booting worker with pid: 15070
2014-01-14 00:33:45 [15071] [INFO] Booting worker with pid: 15071
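
As one more sanity check before we put nginx in front of it, you can hit the registry directly on port 5000; it should answer with the same server banner we'll see later over HTTPS (the exact string can vary by version):

root@docker-internal:~# curl -s http://localhost:5000/
"docker-registry server (prod)"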

Now for nginx:

root@docker-internal:~# rm /etc/nginx/sites-enabled/default
root@docker-internal:~# cat << EOF > /etc/nginx/sites-enabled/docker-registry
upstream docker-registry {
  server localhost:5000;
}

server {
  listen 443;
  server_name docker-internal.example.com;

  ssl on;
  ssl_certificate /etc/ssl/certs/docker-registry.crt;
  ssl_certificate_key /etc/ssl/private/docker-registry.key;

  proxy_set_header Host             $http_host;   # required for docker client's sake
  proxy_set_header X-Real-IP        $remote_addr; # pass on real client's IP
  proxy_set_header Authorization    ""; # see https://github.com/dotcloud/docker-registry/issues/170

  client_max_body_size 0; # disable any limits to avoid HTTP 413 for large image uploads
   
  # required to avoid HTTP 411: see Issue #1486 (https://github.com/dotcloud/docker/issues/1486)
  chunkin on;
  error_page 411 = @my_411_error;
  location @my_411_error {
    chunkin_resume;
  }

  location / {
    auth_basic              "Restricted";
    auth_basic_user_file    docker-registry.htpasswd;

    proxy_pass http://docker-registry;
    proxy_set_header Host $host;
    proxy_read_timeout 900;
  }

  location /_ping {
    auth_basic off;
    proxy_pass http://docker-registry;
  }

  location /v1/_ping {
    auth_basic off;
    proxy_pass http://docker-registry;
  }
}
EOF
root@docker-internal:~# service nginx restart

Now create the associated htpasswd file (making sure to replace USERNAME and PASSWORD with real credentials):

root@docker-internal:~# htpasswd -bc /etc/nginx/docker-registry.htpasswd USERNAME PASSWORD
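
The -c flag creates the file from scratch; to add more users later, drop the -c so you don't overwrite existing entries (USERNAME2 and PASSWORD2 are placeholders):

root@docker-internal:~# htpasswd -b /etc/nginx/docker-registry.htpasswd USERNAME2 PASSWORD2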

Now let's install the SSL certificate and key on the server. In this example, I'm assuming that someone has handed you a certificate signed by a certificate authority, issued for either 'docker-internal.example.com' or '*.example.com':

root@docker-internal:~# mv server.key /etc/ssl/private/docker-registry.key
root@docker-internal:~# mv server.crt /etc/ssl/certs/docker-registry.crt

If you don't have the cash to fork out for a CA-signed certificate, or you are just testing this process before deploying, you can generate a self-signed certificate by following the instructions from Akadia:

root@docker-internal:~# openssl genrsa -des3 -out server.key 1024
root@docker-internal:~# openssl req -new -key server.key -out server.csr
root@docker-internal:~# cp server.key server.key.org
root@docker-internal:~# openssl rsa -in server.key.org -out server.key
root@docker-internal:~# openssl x509 -req -days 3650 -in server.csr -signkey server.key -out server.crt

Please note that Docker support for self-signed certificates is currently waiting on pull request #2687. You will have to sit tight until it is merged into master, or try building Docker from source.
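
Once the key, certificate, and htpasswd file are all in place, it's worth double-checking the certificate and the nginx configuration, then restarting nginx so everything is picked up:

root@docker-internal:~# # confirm the certificate's subject and validity window
root@docker-internal:~# openssl x509 -in /etc/ssl/certs/docker-registry.crt -noout -subject -dates
root@docker-internal:~# # validate the nginx config and restart
root@docker-internal:~# nginx -t && service nginx restart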

Verification

Finally, let's test this:

root@docker-internal:~# exit
ubuntu@docker-internal:~$ exit
$ curl -u bacongobbler:******* https://docker-internal.example.com
"docker-registry server (prod)"
$ docker login https://docker-internal.example.com
Login against server at https://docker-internal.example.com/v1/
Username (): bacongobbler
Login Succeeded
$ docker pull busybox
Pulling repository busybox
e9aa60c60128: Download complete 
$ docker tag busybox docker-internal.example.com/busybox
$ docker push docker-internal.example.com/busybox
The push refers to a repository [docker-internal.example.com/busybox] (len: 1)
Sending image list
Pushing repository docker-internal.example.com/busybox (1 tags)
Pushing tags for rev [e9aa60c60128] on {https://docker-internal.example.com/v1/repositories/busybox/tags/latest}
e9aa60c60128: Image already pushed, skipping
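
And just to close the loop, any machine that has run docker login against the registry should now be able to pull the image back down:

$ docker pull docker-internal.example.com/busybox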

And we're done! One docker registry, deployed on Openstack and ready to go.

What's Next?

So, after deploying the registry, what are some things that we can do to improve or enhance this project? I can think of a few:

  • set up email notifications on registry exceptions (see the sketch after this list)
  • ship the logs off to logstash or some other log aggregation tool
  • deploy the registry on CentOS or RHEL
  • do some benchmarking to see how well the registry scales
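
For the first item, if your registry version's sample config includes an email_exceptions section (check config_sample.yml), it can be as simple as filling that in. A rough sketch of what it might look like in config.yml; the exact key names should be verified against your version, and the addresses and SMTP host here are placeholders:

# hypothetical email_exceptions block under the 'common' section of config.yml
email_exceptions:
    smtp_host: smtp.example.com
    from_addr: docker-registry@example.com
    to_addr: ops@example.com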

What other suggestions can you think of? Leave a comment below!

Here at ActiveState, we're proud to say that we are actively using the Docker project in Stackato v3. If you missed Phil's amazing post on everything that's in Stackato v3, please take a look at his post, as well as the section about where Docker fits in with Stackato.

Update: As noted by Solomon Hykes in the comments below, "the registry is now available as a top-level image. You can simply do 'docker run registry'. It is kept up-to-date automatically, so new versions are always available with 'docker pull'. Link on the index: https://index.docker.io/_/registry"

Stackato is a platform that lets you deploy and manage your applications more efficiently. You can try Stackato for free by downloading the micro cloud, using Stackato with your own Amazon EC2 or HP Cloud Services account or getting access to the Stackato sandbox.

About the Author:

Matthew Fisher is ActiveState’s Junior Product Manager. Born and raised on Vancouver Island, BC, Matthew is a software developer in his spare time, preferring Python as his weapon of choice. In December 2012, he graduated from the British Columbia Institute of Technology with a Diploma in Computer Systems Technology. He has previously built telephony systems for customers using Asterisk PBX and Django, and has completed co-op placements doing IT/Sys Admin work with Core Information Technology and AeroInfo Systems, where he received an AeroInfo Award of Excellence. He joined ActiveState in February 2013.