Deploy Rails Applications using Docker

SteveLTN · May 7, 2017

Originally posted on 2014-03-15 at http://steveltn.me.

Warning: Things have changed a lot since 2014. The approach described in this post is no longer best practice.

What is Docker?

Docker is “an open source project to pack, ship and run any application as a lightweight container”. It works much like a virtual machine, wrapping everything (filesystem, process management, environment variables, etc.) into a container. Unlike a VM, however, it uses LXC (Linux Containers) instead of a hypervisor. An LXC container doesn’t have its own kernel; it shares the Linux kernel with the host and with other containers. Because it is built on LXC, Docker is so lightweight that it introduces almost no performance overhead while running the application.

Docker also provides a smart way to manage your images. Through Dockerfile and its caching mechanism, one can easily redeploy an updated image without transferring large amounts of data.

Why Docker?

Because you never know whether your application will work on the server, even if you have run and tested it on your localhost. The environment on the server differs greatly from your computer: the configuration of RVM, Ruby, and gemsets all differ. If we can test the application on localhost before deploying, and know that if it works on localhost it works on the server, we can save a lot of time debugging deployment issues on the server.

For massive deployments, using a VM image is really handy. You create a VM instance, apply your image, and it works in minutes. Besides the convenience, however, there are problems:

  1. You have to upload the whole new image even if you just made a small update.
  2. There is a significant performance loss.
  3. You probably run your application on a VPS, which is already a virtualized environment. You can’t run a VM on top of another.

In the case of Docker, respectively:

  1. You don’t have to upload the whole image again. Docker is based on AuFS, which tracks the diff of the whole filesystem.
  2. The performance loss is negligible, since containers run on the host kernel.
  3. You can run Docker on a VM because Docker is not a VM.

Magic of layers

Of course, you can use a Docker container the same way you use a VM image: upload your container, run it, and replace the whole image when updating. But that’s not the preferred way. To understand how Dockerfile works, let’s run some experiments step by step. A few things to know first:

  • An image is an image of a filesystem, like a disk image.
  • A container is an instance of an image, like a VM instance.
  • Docker needs to be run with root privileges.
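The image/container relationship above can be sketched in a few lines of Python. This is a toy analogy, not Docker's real implementation: an image is a read-only template, and each container started from it gets its own writable copy-on-write view.

```python
class Image:
    def __init__(self, name, files):
        self.name = name
        self.files = dict(files)  # read-only filesystem snapshot

class Container:
    def __init__(self, image):
        # Each container layers its own writable view over the shared image.
        self.image = image
        self.writable_layer = {}

    def write(self, path, data):
        self.writable_layer[path] = data  # the image itself is never modified

    def read(self, path):
        # A container's own writes shadow the files from the image below.
        return self.writable_layer.get(path, self.image.files.get(path))

ubuntu = Image("ubuntu", {"/etc/hostname": "base"})
c1 = Container(ubuntu)
c1.write("/etc/hostname", "c1")
c2 = Container(ubuntu)
print(c1.read("/etc/hostname"))  # c1 - this container's own change
print(c2.read("/etc/hostname"))  # base - the shared image is untouched
```

Many containers can run from one image, each seeing its own modifications, just as many VM instances can boot from one disk image.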

Hello world

First, get a Linux machine and install Docker on it. To make things easier, I used a DigitalOcean $5/month instance with the “Docker 0.8 on Ubuntu 13.10” image, which saved me the time of installing Docker myself.

In order to print a hello-world message, we need a working Docker container. The easiest way to obtain one is to fetch it from the official Docker registry. A Docker registry is much like GitHub for Docker images. Run:

host# docker run ubuntu /bin/echo hello world

where:

  • ubuntu is the name of an image from the official registry. Normally you use USERNAME/TAG to identify an image on the registry, but if you omit the USERNAME, it refers to an official Docker image.
  • /bin/echo hello world is the command to run in the container.

The output looks like this:

host# docker run ubuntu /bin/echo hello world
Unable to find image 'ubuntu' (tag: latest) locally
Pulling repository ubuntu
9f676bd305a4: Download complete
9cd978db300e: Download complete
eb601b8965b8: Download complete
5ac751e8d623: Download complete
9cc9ea5ea540: Download complete
511136ea3c5a: Download complete
6170bb7b0ad1: Download complete
321f7f4200f4: Download complete
f323cf34fd77: Download complete
1c7f181e78b9: Download complete
7a4f87241845: Download complete
hello world

Each download line corresponds to a layer.

Docker DOESN’T write into the image. Instead, it creates a layer on top of the existing image, which contains the modifications you made to the filesystem. Migrating from a previous state of the filesystem to a more recent one means applying one or more layers on top of the old image, just like patching files.

When a container is stopped, you can commit it. Committing a container is creating an additional layer on top of the base image. As expected, the official ubuntu image is also made up of several layers.
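As a rough illustration of layer stacking and committing, here is a toy model (not real AuFS code): an image behaves like a stack of dicts where later layers shadow earlier ones, and committing freezes a container's writable layer into a new read-only one.

```python
from collections import ChainMap

# An image as a stack of read-only layers; each layer holds only the files
# changed relative to the layers below it.
base = {"/bin/echo": "echo-binary"}
layer1 = {"/etc/apt/sources.list": "ubuntu sources"}
image = ChainMap(layer1, base)  # lookups search layers top-down

# Running a container adds a writable layer on top of the image.
writable = {}
container = image.new_child(writable)
container["/usr/sbin/nginx"] = "nginx-binary"  # e.g. apt-get install nginx

# "docker commit" freezes the writable layer as a new read-only layer,
# yielding a new image without copying the layers below.
committed_image = ChainMap(dict(writable), *image.maps)
print("/usr/sbin/nginx" in committed_image)  # True
print("/usr/sbin/nginx" in image)            # False: base image unchanged
```

Transferring an updated image only requires shipping the new top layer, which is why redeploys move so little data.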

After downloading the image, docker boots a container from the image, runs the command, and prints the result. In our case, the printed result is the last line hello world.

Modify your image and commit it

Let’s try to modify the image and commit it.

First, start an interactive shell:

host# docker run -i -t ubuntu /bin/bash

Notice that the shell pops up instantly. The time it takes to start a shell consists of two parts: download time and boot time. Since we already downloaded the image for the hello world, Docker has cached it. The boot time is negligible because, unlike a VM, a Docker container doesn’t boot a kernel, start system services, etc.

Now let’s install a random software, for example, nginx. Run the following command in the container shell:

root@54e8da3b1db0:/# apt-get update
root@54e8da3b1db0:/# apt-get install -y nginx
root@54e8da3b1db0:/# exit

You are back in your host shell. Run

host# docker ps -l

to list your containers. You’ll see something like:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
54e8da3b1db0        ubuntu:12.04        /bin/bash           5 minutes ago       Exit 0                                  clever_einstein

So far your changes are not committed. Without committing, you cannot use the container as a base image. To commit it, run

root@docker-toy-machine:~# docker commit -m "Install Nginx" 54e8 steveltn:nginx
4e66300102f4218b312fb4352221682ff42614351b18506e51792491e432111d

where 54e8 is the first few characters of the container ID. Check the available images:

root@docker-toy-machine:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
steveltn            nginx               4e66300102f4        4 seconds ago       237.6 MB
ubuntu              13.10               9f676bd305a4        5 weeks ago         178 MB
ubuntu              saucy               9f676bd305a4        5 weeks ago         178 MB
ubuntu              13.04               eb601b8965b8        5 weeks ago         166.5 MB
ubuntu              raring              eb601b8965b8        5 weeks ago         166.5 MB
ubuntu              12.10               5ac751e8d623        5 weeks ago         161 MB
ubuntu              quantal             5ac751e8d623        5 weeks ago         161 MB
ubuntu              10.04               9cc9ea5ea540        5 weeks ago         180.8 MB
ubuntu              lucid               9cc9ea5ea540        5 weeks ago         180.8 MB
ubuntu              12.04               9cd978db300e        5 weeks ago         204.4 MB
ubuntu              latest              9cd978db300e        5 weeks ago         204.4 MB
ubuntu              precise             9cd978db300e        5 weeks ago         204.4 MB

You can see the nginx image has been created. You can now use your new image as a base and run another command on it. But this time, let’s run a service on it.

Run Nginx in a container

Usually, after you start nginx from the terminal, it spawns several worker processes and the master process daemonizes and exits from the foreground. In that case, Docker terminates its container, because the command it launched has exited. To prevent that, we modify the Nginx configuration file to keep the master process in the foreground after spawning the workers.

host# docker run 4e66300102f4 /bin/bash -lc 'echo "daemon off;" >> /etc/nginx/nginx.conf'
host# docker ps -l
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
9352f354c4ab        steveltn:nginx      /bin/bash -lc echo "   42 seconds ago      Exit 0                                  romantic_curie
host# docker commit -m "Turn off nginx daemon mode" 9352f354c4ab steveltn:nginx
bc94dfbdc83a894e8837769ea8a52d93a4fa8628bf3a1b8748e3a5ffbfd9a760
host# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
steveltn nginx bc94dfbdc83a 14 seconds ago 237.6 MB

You can see the new image ID matches the hash returned by the commit. As you can see, the process of modifying an image is: run a command on an image (which creates a container from the image), then commit the container to make a new image. Each commit adds a new layer to the image.

The next step is to run nginx as a service.

host# docker run -p 80:80 bc94dfbdc83a /usr/sbin/nginx

-p 80:80 forwards port 80 of the host to port 80 of the container. bc94dfbdc83a is the image ID.

Now visit port 80 of the host, and you’ll see the familiar “Welcome to nginx!” page. Yay!

You can use docker ps to see all running containers and docker kill to terminate them.

Using Dockerfile

Until now, we have been doing everything manually, step by step. This is not what we want for automated deployment. To achieve automation, we need to introduce the Dockerfile.

A Dockerfile is a configuration file for Docker, which specifies how to build your customized image from a base image. Instead of running the commands line by line, we just write all of them into the Dockerfile. For example, for the previous container running nginx, we create a Dockerfile like this:

# Dockerfile for installing and running Nginx

# Select ubuntu as the base image
FROM ubuntu

# Install nginx
RUN apt-get update
RUN apt-get install -y nginx
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

# Publish port 80
EXPOSE 80

# Start nginx when container starts
ENTRYPOINT /usr/sbin/nginx

As you might have guessed,

  • FROM means to choose a base image from the registry
  • RUN means to run a command in the container
  • EXPOSE means to expose a port to the host
  • ENTRYPOINT is the initialization command for the container when running it

You can put this Dockerfile on the server and build from the directory containing it. But Docker provides something better: if you point it at a GitHub repository, it will clone the repository, use it as the build context, and load the Dockerfile in the root of the repository. So I just run:

host# docker build -t steveltn/nginx github.com/steveltn/toy-rails-project-for-docker
Step 0 : FROM ubuntu
---> 9cd978db300e
Step 1 : RUN apt-get update
---> Running in 7395bc0c6b70
=== suppressed ===
---> 6ad2ab717026
Step 2 : RUN apt-get install -y nginx
---> Running in c7e1df6ef59a
=== suppressed ===
---> 6ce63dc3c19d
Step 3 : RUN echo "daemon off;" >> /etc/nginx/nginx.conf
---> Running in fdc4954637a4
---> 41144f01b920
Step 4 : EXPOSE 80
---> Running in e31a205de745
---> 15d5c1e287c2
Step 5 : ENTRYPOINT /usr/sbin/nginx
---> Running in 17a5e24835d9
---> 7d09c4e0c09c
Successfully built 7d09c4e0c09c

If you run docker images, you will find the newly built image. But since we tagged it with -t steveltn/nginx, we can use the tag as a reference. Next, let's boot the container.

host# docker run -p 80:80 steveltn/nginx

Visit port 80, and we have exactly the same nginx as before. You can use -d to run the container as a daemon.

Now try to build the container again. The build command finishes in less than a second! Why so fast? It turns out Docker caches the intermediate state (in the form of layers) between each RUN command in the Dockerfile. When you run the same command on the same image, Docker just applies the layer created by the last run of that command instead of executing it again on top of the current filesystem. That's why apt-get update and apt-get install -y nginx finished instantly. The layer magic is achieved by AuFS, which essentially works like a version control system such as Git (without forking and merging).
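A toy model of this caching behavior can make it concrete. This is a simplification, not Docker's real implementation: here each step's cache key is just the parent layer ID plus the command, so an unchanged prefix of the Dockerfile replays from cache.

```python
import hashlib

cache = {}  # (parent_layer_id, command) -> layer_id

def run_step(parent_id, command):
    """Execute one build step, reusing a cached layer when possible."""
    key = (parent_id, command)
    if key in cache:
        return cache[key], True  # cache hit: reuse the existing layer
    # Cache miss: pretend we ran the command and produced a new layer.
    layer_id = hashlib.sha256(f"{parent_id}:{command}".encode()).hexdigest()[:12]
    cache[key] = layer_id
    return layer_id, False

steps = ["apt-get update", "apt-get install -y nginx",
         'echo "daemon off;" >> /etc/nginx/nginx.conf']

parent = "ubuntu"
for cmd in steps:              # first build: every step actually executes
    parent, hit = run_step(parent, cmd)
    assert not hit

parent = "ubuntu"
for cmd in steps:              # second build: every step is a cache hit
    parent, hit = run_step(parent, cmd)
    assert hit
```

Note the chaining: because each key includes the parent layer, changing one step invalidates the cache for every step after it.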

Deploy a Rails project

To keep things simple, I just created an empty Rails project. To simulate a common setup, I used Unicorn as the Rack server.

We put the Dockerfile in the root directory of the Rails project, and add a directory config/container to store configuration files for services in the container, e.g. nginx.

The Dockerfile looks like this:

# Dockerfile for a Rails application using Nginx and Unicorn

# Select ubuntu as the base image
FROM ubuntu

# Install nginx, nodejs and curl
RUN apt-get update -q
RUN apt-get install -qy nginx
RUN apt-get install -qy curl
RUN apt-get install -qy nodejs
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

# Install rvm, ruby, bundler
RUN curl -sSL https://get.rvm.io | bash -s stable
RUN /bin/bash -l -c "rvm requirements"
RUN /bin/bash -l -c "rvm install 2.1.0"
RUN /bin/bash -l -c "gem install bundler --no-ri --no-rdoc"

# Add configuration files in repository to filesystem
ADD config/container/nginx-sites.conf /etc/nginx/sites-enabled/default
ADD config/container/start-server.sh /usr/bin/start-server
RUN chmod +x /usr/bin/start-server

# Add rails project to project directory
ADD ./ /rails

# set WORKDIR
WORKDIR /rails

# bundle install
RUN /bin/bash -l -c "bundle install"

# Publish port 80
EXPOSE 80

# Startup commands
ENTRYPOINT /usr/bin/start-server

The ADD directive is to copy files from the context (the directory or GitHub repository from which you build the image) to a path in the container filesystem. When running commands after adding files, Docker automatically decides whether to use a cached layer, depending on whether the added files have changed.
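This cache-busting behavior can be sketched as follows. It is a simplification: the add_cache_key helper is hypothetical and stands in for Docker's real content-hash check, but the idea is the same, so the cache key for an ADD step depends on the added files' contents, not just the instruction text.

```python
import hashlib

def add_cache_key(parent_id, files):
    """Hypothetical sketch: cache key for an ADD step, derived from the
    parent layer plus the contents of every added file."""
    h = hashlib.sha256(parent_id.encode())
    for path in sorted(files):       # deterministic order
        h.update(path.encode())
        h.update(files[path])        # file contents feed the key
    return h.hexdigest()[:12]

files_v1 = {"config/container/start-server.sh": b"#!/bin/bash\nnginx\n"}
files_v2 = {"config/container/start-server.sh": b"#!/bin/bash\nnginx -g 'daemon off;'\n"}

k1 = add_cache_key("layer-abc", files_v1)
k2 = add_cache_key("layer-abc", files_v1)
k3 = add_cache_key("layer-abc", files_v2)
print(k1 == k2)  # True: identical files reuse the cached layer
print(k1 == k3)  # False: changed content busts the cache and all later steps
```

This is why the Dockerfile above puts ADD ./ /rails near the bottom: any source change invalidates everything after it, so the slow apt-get and rvm steps stay cached.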

The config/container/nginx-sites.conf file looks like this:

# nginx-sites.conf

server {
  root /rails/public;
  server_name 95.85.4.231 _;

  location / {
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
    proxy_set_header Host $http_host;

    if (!-f $request_filename) {
      proxy_pass http://localhost:8080;
      break;
    }
  }
}

start-server.sh:

#!/bin/bash
# start-server.sh
cd /rails
source /etc/profile.d/rvm.sh
bundle exec unicorn -D -p 8080
nginx

Now we have everything ready. On any server with Docker installed, run

host# docker build -t steveltn/nginx github.com/steveltn/toy-rails-project-for-docker
host# docker run -p 80:80 steveltn/nginx

to build and run the image, respectively. Two commands, and that’s everything.

Tips and Thoughts

  1. I just ran the Rails app in the development environment for simplicity. For a proper Rails application, you should use the production environment.
  2. Since we replace the image when redeploying, the database shouldn’t live in the Docker container.
  3. In our example, we use the official ubuntu image as the base and do everything on top of it. You can always install everything you need in advance, push the image to the Docker Index, and use your own image as the base image. This reduces flexibility but increases consistency. You definitely don’t want bundle install baked into the base image, because you are likely to change your project’s Gemfile, but putting apt-get install there is a good idea.
  4. If you are not comfortable with sharing your image, you can create your own registry, or use a hosted private registry like Quay.
  5. You can use Capistrano to automate the deploy process.
  6. To avoid doing a bundle install from scratch after modifying any file, see this blog post.
  7. I don’t know how to do a zero-downtime deployment right now.
  8. Docker currently supports Linux only. If you use OS X as I do, you probably want to use Vagrant to test your build before deploying.

You can find the Rails repository here.

As mentioned by @nonsens3, “You can have a zero downtime deployment with a proxy like hipache. It would route traffic to the new container when its ready.”
