
Docker for Rails Developers: Containerizing Your App

Hana Mohan


In this guide, we will take your existing Ruby on Rails application and containerize it with Docker. I'll assume that you are using Postgres and Redis, the databases most commonly used in the Rails community. Once we've built the Docker images, we'll push them up to Amazon's Elastic Container Registry (ECR).

Docker vs Capistrano

If you have been building apps and deploying them with Capistrano, a bit of a mindset shift is needed to appreciate what Docker brings to the table.

In the Capistrano world, you push your code up to the application servers and then install the bundle, pre-compile assets, run migrations, and update your webserver to run off the latest release. A missing dependency (for example, a missing ImageMagick installation needed by the rmagick gem) may cause the bundle install step to error out. You can use Ansible or another tool to manage these dependencies. However, working this way, you are managing your development and staging/production environments differently. There is always going to be a bit of trial and error when you deploy.

The Docker approach, on the other hand, creates a container image with all the dependencies, including the pre-compiled assets and bundled gems, which is guaranteed to run the same way in any environment capable of running the image.

It also makes it trivial to scale up or move to a newer operating system. The only thing you need to do when deploying a Docker image is to run the database migrations, and you are ready to release. The Docker for Rails book is worth checking out to understand this in much more detail.

In the following sections, we will create three Docker images: a base image with all the code and bundled gems, and a web image and a worker image built on top of it.

Create a Dockerfile for the base image

To reap all these benefits, you first need a Dockerfile. Here is a sample one.

Place it in the Rails root folder as Dockerfile.base. Note that we copy over the Gemfile and install the gems before copying over the rest of the source code. Since Docker caches a layer at each step, this ensures the cached layer with the bundled gems can be reused even when the rest of the code changes (as long as the Gemfile doesn't).

FROM ruby:2.7

# nodejs for the asset pipeline; postgresql-client for psql (rails dbconsole, structure loads)
RUN apt-get update -yqq && apt-get install -y nodejs postgresql-client

# Install gems first so the bundle layer caches efficiently
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install

# Copy the app code last so code changes don't invalidate the gem layer
COPY . /usr/src/app

CMD ["bash"]

Dockerfile.base

It's also a good idea to add a .dockerignore file to keep the build context (all the files sent to the Docker daemon to build your image) small. For example, here is one to get you started:

.git
.gitignore
README.md

#
# OS X
#
.DS_Store
.AppleDouble
.LSOverride
# Icon must end with two \r
Icon
# Thumbnails
._*
# Files that might appear on external disk
.Spotlight-V100
.Trashes
# Directories potentially created on remote AFP share
.AppleDB
.AppleDesktop
Network Trash Folder
Temporary Items
.apdisk

# Node
node_modules

#
# Rails
#
.env
.env.sample
*.rbc
capybara-*.html
log
tmp
db/*.sqlite3
db/*.sqlite3-journal
public/system
public/packs
coverage/
spec/tmp
**.orig

.bundle

.ruby-version
.ruby-gemset

.rvmrc

# if using bower-rails ignore default bower_components path bower.json files
vendor/assets/bower_components
*.bowerrc
bower.json

A sample .dockerignore file to keep the build context small

Now let's go ahead and build the image, setting a proper name and tag for it.

export AWS_ECR_ACCOUNT_URL=799812345678.dkr.ecr.us-west-1.amazonaws.com
docker build -f ./Dockerfile.base -t $AWS_ECR_ACCOUNT_URL/my-app-base:latest .
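
Since the base image's CMD is bash, a quick way to sanity-check the build is to run it interactively and confirm the bundle is complete. A minimal sketch:

docker run --rm -it $AWS_ECR_ACCOUNT_URL/my-app-base:latest
# then, inside the container, verify all gems are present
bundle check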

Docker image for the webserver

Now that we have the base image, let's create an image for the webserver. We'll use the base image we just created as the starting point (instead of ruby:2.7). Any ENV variables set in the Dockerfile can be overridden when running the image.

FROM 799812345678.dkr.ecr.us-west-1.amazonaws.com/my-app-base

# Dummy value so rake tasks can boot the app during the build; override it at run time
ENV A_VARIABLE_NEEDED_TO_RUN_RAKE_WHEN_BUILDING value

# Compile assets
RUN RAILS_ENV=production SECRET_KEY_BASE=128-char-long-key bundle exec rails assets:precompile

# Start the rails server
CMD ["rails", "server", "-b", "0.0.0.0"]

To build this image (save it as Dockerfile.web in the Rails root):

docker build -f ./Dockerfile.web -t $AWS_ECR_ACCOUNT_URL/my-app-web:latest .  
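
The ENV values baked into the Dockerfile are only defaults; you can override them (and map the Rails port) at run time. A rough sketch with placeholder values, assuming the app can reach its database:

docker run --rm -it \
  -e RAILS_ENV=production \
  -e SECRET_KEY_BASE=some-long-random-key \
  -e A_VARIABLE_NEEDED_TO_RUN_RAKE_WHEN_BUILDING=real-value \
  -p 3000:3000 \
  $AWS_ECR_ACCOUNT_URL/my-app-web:latest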

Docker image for the background worker

Let's go ahead and create a Docker image for running Sidekiq (our background worker of choice).

FROM 799812345678.dkr.ecr.us-west-1.amazonaws.com/my-app-base

# Start sidekiq 
CMD ["sidekiq"]

It's worth noting that each image runs just one process: either the webserver or the Sidekiq process. We can use the base image to run one-off tasks like database migrations when we deploy these images to a Kubernetes cluster.
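
For example, a one-off migration container could look like the sketch below; the DATABASE_URL value is a placeholder for your app's real connection string:

docker run --rm \
  -e RAILS_ENV=production \
  -e DATABASE_URL=postgres://user:password@db-host:5432/my_app_production \
  $AWS_ECR_ACCOUNT_URL/my-app-base:latest \
  bundle exec rails db:migrate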

Docker Compose for all the other components

Your Rails app probably needs a few other services to do anything useful, such as Postgres and Redis.

At MagicBell, we use a docker-compose.yaml file for our development/test environments but don't use it to run production/staging. The following compose file starts a Postgres 12 instance and a Redis 5.0.9 instance. The interesting thing to note here is that we map the db_data directory in the Rails root to Postgres's data directory.

This way, each run of docker-compose doesn't start with a fresh database. Make sure to add this directory to your .gitignore and .dockerignore files.

version: "3"

services:
  # Postgres
  database: 
    image: postgres:12.3-alpine
    env_file:
      - .env
    volumes:
      - ./db_data:/var/lib/postgresql/data
    ports: 
      - "5432:5432"

  # Redis
  redis:
    image: redis:5.0.9-alpine

You can run this with:

docker-compose up
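
The database service reads its credentials from the .env file referenced under env_file. A minimal sketch with placeholder values (POSTGRES_USER, POSTGRES_PASSWORD, and POSTGRES_DB are variables the official postgres image understands):

POSTGRES_USER=postgres
POSTGRES_PASSWORD=change-me
POSTGRES_DB=my_app_development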

Pushing the images up to AWS ECR

To use these images with K8s, you need to push them to a container registry. Since we are using AWS EKS for K8s, it makes sense to use Amazon's ECR service. With the right IAM setup, it's trivial to pull the images during deployment without needing a secret.

To push images from your local machine (or from your CI server), you need to set up the AWS CLI and use it to generate a password to supply to the docker login command. These passwords expire every 12 hours, so you'd have to redo this process periodically (or set up a cron job to do it for you). On most CI systems, each build starts in a fresh container, so the login happens anew every time.

aws ecr get-login-password --region {{AWS_REGION}} --profile default | docker login --username AWS --password-stdin {{AWS_ACCOUNT_NUMBER}}.dkr.ecr.{{AWS_REGION}}.amazonaws.com
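
If you haven't created the repositories in ECR yet, you can do so with the AWS CLI; for example, for the base image:

aws ecr create-repository --repository-name my-app-base --region {{AWS_REGION}} --profile default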

Once the login succeeds, you can push the image up to ECR with the following command:

docker push $AWS_ECR_ACCOUNT_URL/my-app-base:latest

Push the other images up similarly.
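
For example, with the tags used earlier (and my-app-worker, if you followed the worker build step above):

docker push $AWS_ECR_ACCOUNT_URL/my-app-web:latest
docker push $AWS_ECR_ACCOUNT_URL/my-app-worker:latest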

So there you have it: a containerized Rails application. In the next few blog posts, I will show you how to deploy the app to a K8s cluster on AWS, in both a staging and a production environment. We'll use Amazon's Elastic Kubernetes Service (EKS), RDS, and ElastiCache, and we'll use Helm to manage the K8s templates.

Related articles:

  1. Building a notification system in Ruby on Rails: DB Design
  2. Building a Notification System in Angular with MagicBell