Dockerizing a Ruby on Rails Application

Semaphore · 15 min read · May 10, 2022

In this tutorial, you will learn how to dockerize a Ruby on Rails application from the ground up. We’re going to build it with PostgreSQL, Redis, and Sidekiq. We’ll also be using Unicorn and Nginx in both development and production.

If you would prefer to use Puma or something else, this shouldn’t be an issue.

After reading this article, you will:

  • Have a basic idea of what Docker is.
  • Understand how Docker can help you streamline development.
  • Know how to use Continuous Integration and Delivery (CI/CD) to build and test your Ruby projects.

You can find the complete code for this tutorial in the TomFern/dockerizing-ruby repository.

What is Docker?

Docker allows you to package up an application or service with all of its dependencies into a standardized unit. This unit is typically labeled as a Docker image.

Everything the application needs to run is included. The Docker image contains the code, runtime, system libraries and anything else you would install on a server to make it run if you weren’t using Docker.

What Makes Docker Different from a Virtual Machine

You may have used Vagrant, VirtualBox, or VMWare to run a virtual machine. They allow you to isolate services, but there are a few major differences that make virtual machines much less efficient.

For starters, you need an entire guest operating system for each application you want to isolate. It also takes many seconds to boot up a virtual machine, and each VM can potentially be gigabytes in size.

Docker containers share your host’s kernel, and isolation is done using cgroups and other Linux kernel features. Docker is very lightweight. It typically takes a few milliseconds for a container to start, and running a container doesn’t use much disk space at all.
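You can get a feel for this startup speed yourself by timing a throwaway container once its image is cached locally (alpine is just a convenient small example image):

$ docker pull alpine
$ time docker run --rm alpine echo "hello from a container"

The first pull downloads the image; every run after that starts in a fraction of a second.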

What’s the Bottom Line?

What if you could develop your Rails application in isolation on your workstation without using RVM or chruby, and changing Ruby versions was super easy?

What if as a consultant or freelancer with 10 Rails projects, you had everything you needed isolated for each project without needing to waste precious SSD disk space?

What if you could spin up your Rails, PostgreSQL, Redis, and Sidekiq stack in about 5 seconds?

What if you wanted to share your project on GitHub and other developers only had to run a single command to get everything running in minutes?

All of this and much more is possible thanks to Docker.

The Benefits of Using Docker

If you’re constantly looking for ways to improve your productivity and make the overall software development experience better, you’ll appreciate the following 5 key benefits Docker offers:

1. Cross Environment Consistency

Docker allows you to encapsulate your application in such a way that you can easily move it between environments. It will work properly in all environments and on all machines capable of running Docker.

2. Expand Your Development Team Painlessly

You should not have to hand over a 30-page document to a new developer to teach them how to set up your application so they can run it locally. This process can take all day or longer, and the new developer is bound to make mistakes.

With Docker all developers in your team can get your multi-service application running on their workstation in an automated, repeatable, and efficient way. You just run a few commands, and minutes later it all works.

3. Use Whatever Technology Fits Best

If you’re a startup or a shop that uses only one language, you could be putting yourself at a disadvantage. Since you can isolate an application in a Docker container, it becomes possible to broaden your horizons as a developer by experimenting with new languages and frameworks.

You no longer have to worry about other developers having to set up your technology of choice. You can hand them a Docker image and tell them to run it.

4. Build Your Image Once and Deploy It Many Times

Since your applications are inside of a pre-built Docker image, they can be started in milliseconds. This makes it very easy to scale up and down.

Time-consuming tasks such as installing dependencies only need to be run once at build time. Once the image has been built, you can move it around to many hosts.

This not only helps with scaling up and down quickly, but it also makes your deployments more predictable and resilient.

5. Developers and Operations Managers Can Work Together

Docker’s toolset allows developers and operations managers to work together towards the common goal of deploying an application.

Docker acts as an abstraction. You can distribute an application, and members of another team do not need to know how to configure or set up its environment.

It also becomes simple to distribute your Docker images publicly or privately. You can keep tabs on what changed when new versions were pushed, and more.

Prerequisites

You will need to install Docker. Docker runs natively on most major Linux distributions, and Docker Desktop lets you run it on macOS and Windows too.

This tutorial focuses on Linux users, but it will include comments when things need to be adjusted for macOS or Windows.

Installing Docker

Follow the official Docker installation guide for your operating system.

Before proceeding, you should have Docker installed and have completed at least the hello world example included in the installation guide.
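If you want a quick sanity check that your installation works, these two commands are enough:

$ docker --version
$ docker run hello-world

The second command pulls Docker’s tiny hello-world image and prints a greeting if everything is wired up correctly.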

The Rails Application

The application we’re going to build targets Rails 7, the latest version at the time of writing. You can find all the code in this tutorial in this repo.

Create a Repository

Create a new GitHub repository to host your code:

  • Follow the instructions to create a repo.
  • Set the .gitignore template to Rails.
  • Create the repository.
  • Clone it to your machine:
$ git clone YOUR_REPOSITORY_URL

Generating a New Rails Application

We’re going to generate a new Rails project without even needing Ruby installed on our workstation. We can do this by using the official Ruby Docker image.

Creating a Rails Image

We’ll install Rails in a Docker container. For that, we’ll need a Dockerfile: a file that lists, in an easy-to-read syntax, all the commands needed to install the programs and libraries that go into an image.

Create a file called Dockerfile.rails:

# Dockerfile.rails
FROM ruby:3.1.2 AS rails-toolbox

# Default directory
ENV INSTALL_PATH /opt/app
RUN mkdir -p $INSTALL_PATH

# Install rails
RUN gem install rails bundler
#RUN chown -R user:user /opt/app
WORKDIR /opt/app

# Run a shell
CMD ["/bin/sh"]

The basic Dockerfile commands are:

  • FROM: defines the image to start from. We’ll use the official Ruby image as a starting point.
  • ENV: defines environment variables.
  • RUN: executes commands inside the container. In the example, we use it to create the install directory and to install the Rails and Bundler gems.
  • WORKDIR: changes the current directory inside the container.
  • CMD: defines the program to run when the container starts.

Two other commands you will often see in Dockerfiles are ARG, which specifies build-time argument variables (for example, to make user and group ids match between a Linux host and the container), and USER, which changes the active user inside the container. This minimal Dockerfile doesn’t need either.

To build the image:

$ docker build -t rails-toolbox -f Dockerfile.rails .

Creating the Project

We’ll use the new Rails image to create our project:

$ docker run -it -v $PWD:/opt/app rails-toolbox rails new --skip-bundle drkiq

docker run starts a new container and runs a program inside it:

  • -it: attaches your terminal to the container.
  • -v $PWD:/opt/app: binds your host machine’s current directory to the container, so files created inside the container are visible on your machine.
  • rails new --skip-bundle drkiq: the command we pass to the Rails image. It creates a new project called “drkiq” without running bundle install.

After running the command, you should find a new drkiq directory containing a brand new Rails project.

rails new creates a Git repository inside the project, but since we already have one at the top level, we don’t need it. You can delete it:

$ rm -rf drkiq/.git

Setting Up a Strong Base

Before we start adding Docker-specific files to the project, let’s add a few gems to our Gemfile and make a few adjustments to our application to make it production-ready.

Modifying the Gemfile

Add the following lines to the bottom of your Gemfile:

gem 'unicorn', '~> 6.1.0'
gem 'pg', '~> 1.3.5'
gem 'sidekiq', '~> 6.4.2'
gem 'redis-rails', '~> 5.0.2'

DRYing Out the Database Configuration

Change your config/database.yml to look like this:

---

development:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_development?') %>

test:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_test?') %>

staging:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_staging?') %>

production:
  url: <%= ENV['DATABASE_URL'].gsub('?', '_production?') %>

We will be using environment variables to configure our application. The above file lets us configure everything through a single DATABASE_URL variable while still naming each database after the environment it runs in.
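To make the gsub trick concrete, here is what the development entry evaluates to, using the DATABASE_URL value we will define later in env-example (shown purely as an illustration):

url = 'postgresql://drkiq:test_db_password@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000'
url.gsub('?', '_development?')
# => "postgresql://drkiq:test_db_password@postgres:5432/drkiq_development?encoding=utf8&pool=5&timeout=5000"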

DRYing Out the Secrets File

Create a config/secrets.yml file; it should look like this:

development: &default
  secret_key_base: <%= ENV['SECRET_TOKEN'] %>

test:
  <<: *default

staging:
  <<: *default

production:
  <<: *default

If you’ve never seen this syntax before: &default defines a YAML anchor on the development block, and <<: *default merges it into each of the other environments, so every environment reads its secret_key_base from the same SECRET_TOKEN environment variable.

This is fine since the value will be different in each environment.

Editing the Application Configuration

Add the following lines to your config/application.rb:

# ...

module Drkiq
  class Application < Rails::Application
    config.load_defaults 7.0

    config.log_level = :debug
    config.log_tags  = [:subdomain, :uuid]
    config.logger    = ActiveSupport::TaggedLogging.new(Logger.new(STDOUT))

    config.cache_store = :redis_store, ENV['CACHE_URL'],
                         { namespace: 'drkiq::cache' }

    config.active_job.queue_adapter = :sidekiq
  end
end

Creating the Unicorn Config

Next, create the config/unicorn.rb file and add the following content to it:

# Heavily inspired by GitLab:
# https://github.com/gitlabhq/gitlabhq/blob/master/config/unicorn.rb.example

worker_processes ENV['WORKER_PROCESSES'].to_i
listen ENV['LISTEN_ON']
timeout 30
preload_app true
GC.respond_to?(:copy_on_write_friendly=) && GC.copy_on_write_friendly = true

check_client_connection false

before_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!

  old_pid = "#{server.config[:pid]}.oldbin"
  if old_pid != server.pid
    begin
      sig = (worker.nr + 1) >= server.worker_processes ? :QUIT : :TTOU
      Process.kill(sig, File.read(old_pid).to_i)
    rescue Errno::ENOENT, Errno::ESRCH
    end
  end
end

after_fork do |server, worker|
  defined?(ActiveRecord::Base) && ActiveRecord::Base.establish_connection
end

Creating the Sidekiq Initializer

Now you can also create the config/initializers/sidekiq.rb file and add the following code to it:

sidekiq_config = { url: ENV['JOB_WORKER_URL'] }

Sidekiq.configure_server do |config|
  config.redis = sidekiq_config
end

Sidekiq.configure_client do |config|
  config.redis = sidekiq_config
end

Whitelist Docker Host

Rails has a security feature that blocks requests from unknown hosts. We want our Docker containers to communicate with each other, so we need to whitelist the drkiq container.

Edit the config/environments/development.rb file and add the following line:

config.hosts << "drkiq"

Creating the Environment Variable File

Last but not least, you need to create an environment file. Go to the top directory of your project, and create a new file next to your Dockerfile.rails file:

$ cd ..
$ touch env-example

The contents of the example environment are:

# You would typically use rake secret to generate a secure token. It is
# critical that you keep this value private in production.
SECRET_TOKEN=Wa4Kdu6hMt3tYKm4jb9p4vZUuc7jBVFw

WORKER_PROCESSES=1
LISTEN_ON=0.0.0.0:8010
DATABASE_URL=postgresql://drkiq:test_db_password@postgres:5432/drkiq?encoding=utf8&pool=5&timeout=5000
CACHE_URL=redis://redis:6379/0
JOB_WORKER_URL=redis://redis:6379/0

Copy the example file and customize it to your liking. The SECRET_TOKEN should be a random string. The final .env file is secret and should never be checked into git:

$ cp env-example .env
$ echo ".env" >> .gitignore

The above file allows us to configure the application without having to dive into the application code.

This file would also hold information like mail login credentials or API keys.
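The SECRET_TOKEN value shown in env-example is only a placeholder. One way to generate your own random value without installing Ruby locally is to reuse the rails-toolbox image we built earlier (any method that produces a long random string works just as well):

$ docker run --rm rails-toolbox ruby -rsecurerandom -e 'puts SecureRandom.hex(64)'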

Dockerizing Your Rails Application

Create the Dockerfile file and add the following content to it:

# Dockerfile development version
FROM ruby:3.1.2 AS drkiq-development

# Install yarn
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg -o /root/yarn-pubkey.gpg && apt-key add /root/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y --no-install-recommends nodejs yarn

# Default directory
ENV INSTALL_PATH /opt/app
RUN mkdir -p $INSTALL_PATH

# Install gems
WORKDIR $INSTALL_PATH
COPY drkiq/ .
RUN rm -rf node_modules vendor
RUN gem install rails bundler
RUN bundle install
RUN yarn install

# Start server
CMD bundle exec unicorn -c config/unicorn.rb

The above file creates the Docker image with:

  • Node and Yarn
  • Rails
  • Gems in the Gemfile

The last line of the Dockerfile starts the Unicorn HTTP server.

Configuring Nginx

While unicorn is perfectly capable of serving our application, for better performance and security, it’s recommended to put a real HTTP server in front. An HTTP server configured as a reverse-proxy protects our application from slow clients and speeds up connections thanks to caching.

We’ll use Nginx, a general-purpose HTTP server, in our setup.

Create a configuration file for Nginx called reverse-proxy.conf at the root directory of your project, next to the other Dockerfiles:

# reverse-proxy.conf

server {
  listen 8020;
  server_name example.org;

  location / {
    proxy_pass http://drkiq:8010;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  }
}

Create a new file called Dockerfile.nginx to build our custom Nginx image:

# Dockerfile.nginx

FROM nginx:latest
COPY reverse-proxy.conf /etc/nginx/conf.d/reverse-proxy.conf
EXPOSE 8020
STOPSIGNAL SIGTERM
CMD ["nginx", "-g", "daemon off;"]

Creating a dockerignore File

Next, create the .dockerignore file and add the following content to it:

.git
.dockerignore
.env
drkiq/node_modules/
drkiq/vendor/bundle/
drkiq/tmp/

This file is similar to .gitignore. It will exclude matching files and folders from being built into your Docker image.

What is Docker Compose?

Docker Compose allows you to easily run one or more Docker containers. You can define everything in YAML and commit the file so that other developers can simply run docker compose up and have everything running quickly.

Creating the Docker Compose Configuration File

Next, we will create the docker-compose.yml file and copy the following content into it:

version: "3.9"

services:

  postgres:
    image: postgres:14.2
    environment:
      POSTGRES_USER: drkiq
      POSTGRES_PASSWORD: test_db_password
    ports:
      - '5432:5432'
    volumes:
      - drkiq-postgres:/var/lib/postgresql/data

  redis:
    image: redis:7.0
    ports:
      - '6379:6379'
    volumes:
      - drkiq-redis:/var/lib/redis/data

  drkiq:
    build:
      context: .
    volumes:
      - ./drkiq:/opt/app
    links:
      - postgres
      - redis
    ports:
      - '8010:8010'
    env_file:
      - .env

  sidekiq:
    build:
      context: .
    command: bundle exec sidekiq
    links:
      - postgres
      - redis
    env_file:
      - .env

  nginx:
    build:
      context: .
      dockerfile: ./Dockerfile.nginx
    links:
      - drkiq
    ports:
      - '8020:8020'

volumes:
  drkiq-postgres:
  drkiq-redis:

Everything in the above file is documented on Docker Compose’s website. The short version is:

  • Postgres and Redis use Docker volumes to manage persistence.
  • Postgres, Redis, and Drkiq all expose a port.
  • Drkiq and Sidekiq both have links to Postgres and Redis.
  • Drkiq and Sidekiq both read in environment variables from .env.
  • Sidekiq overrides the default CMD to run Sidekiq instead of Unicorn.

Creating the Volumes

In the docker-compose.yml file, we’re referencing volumes that do not exist. We can create them by running:

$ docker volume create --name drkiq-postgres
$ docker volume create --name drkiq-redis

When data is saved in PostgreSQL or Redis, it is written to these volumes on your workstation. This way, you won’t lose your data when you restart the services, since a container’s own filesystem is ephemeral.
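If you’re curious where Docker keeps that data on your machine, docker volume inspect prints each volume’s details, including its Mountpoint on the host:

$ docker volume inspect drkiq-postgres
$ docker volume inspect drkiq-redis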

Running Everything

Now it’s time to put everything together and start up our stack by running the following:

$ docker compose up --build

The first time this command runs it will take quite a while because it needs to pull down all of the Docker images that our application requires.

This operation is mostly bound by network speed, so your times may vary.

At some point, it’s going to begin building the Rails application. You will eventually see the terminal output, including lines similar to these:

postgres_1  | ...
redis_1 | ...
drkiq_1 | ...
sidekiq_1 | ...
nginx_1 | ...

You will notice that the drkiq_1 container threw an error saying the database doesn’t exist. This is a completely normal error to expect when running a Rails application because we haven’t initialized the database yet.

Initialize the Database

Hit CTRL+C in the terminal to stop everything. If you see any errors, you can safely ignore them.

Run the following commands to initialize the database:

$ docker compose run drkiq rake db:reset
$ docker compose run drkiq rake db:migrate

The first command should warn you that db/schema.rb doesn’t exist yet, which is normal. Run the second command to remedy that. It should run successfully.

If you head over to the db folder in your project, you should notice that there is a schema.rb file and that it’s owned by your user.

You may also have noticed that running either of the commands above also started Redis and PostgreSQL automatically. This is because we have them defined as links, and Docker Compose is smart enough to start dependencies.

Running Everything, Round 2

Now that our database is initialized, try running the following:

$ docker compose up
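If you’d rather get your terminal back, an optional variation is to start the stack in detached mode and follow the logs of a single service:

$ docker compose up -d
$ docker compose logs -f drkiq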

Testing It Out

Head over to http://localhost:8020

You should be greeted with the typical Rails introduction page.

Working with the Rails Application

Now that we’ve Dockerized our application, let’s start adding features to it to exercise the commands you’ll need to run to interact with your Rails application.

Right now the source code is on your workstation, and it is being mounted into the Docker container in real time through a volume.

This means that if you were to edit a file, the changes would take effect instantly, but right now we have no routes or any CSS defined to test this.

Generating a Controller

Run the following command to generate a Pages controller with a home action:

$ docker compose run drkiq rails g controller Pages home

In a second or two, it should provide everything you would expect when generating a new controller.

This type of command is how you’ll run future Rails commands. If you wanted to generate a model or run a migration, you would run them in the same way.
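For example, generating a hypothetical Post model and running its migration would follow exactly the same pattern (Post is only an illustration, not part of this tutorial’s app):

$ docker compose run drkiq rails g model Post title:string body:text
$ docker compose run drkiq rake db:migrate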

Modify the Routes File

Remove the get 'pages/home' line near the top of config/routes.rb and replace it with the following:

root 'pages#home'

If you go back to your browser, you should see the new home page we have set up.

Adding a New Job

Use the following to add a new job:

$ docker compose run drkiq rails g job counter

Modifying the Counter Job

Next, replace the perform method in app/jobs/counter_job.rb so it looks like this:

def perform(*args)
  21 + 21
end

Modifying the Pages Controller

Replace the home action in app/controllers/pages_controller.rb to look like this:

def home
  @meaning_of_life = CounterJob.perform_now
end

Modifying the Home View

The next step is to replace the app/views/pages/home.html.erb file to look as follows:

<h1>The meaning of life is <%= @meaning_of_life %></h1>

Restart the Rails Application

You need to restart the Rails server to pick up new jobs, so hit CTRL+C to stop everything, and then run docker compose up again.

If you reload the website you should see the changes we made.
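As an optional shortcut, Docker Compose can also restart individual services instead of the whole stack (the service names here come from our docker-compose.yml):

$ docker compose restart drkiq sidekiq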

Experimenting on Your Own

Here are three things you should do to familiarize yourself with your new application:

  • Changing the h1 color to something other than black
  • Generating a model and then running a migration
  • Adding a new action and route to the application

All of these things can be done without having to restart anything, so feel free to check out the changes after you have performed each one.

Adding Some Tests

We can add some testing code to our application. Having tests will help us detect failures and weed out bugs.

Rails will search for test files in the test directory.

Let’s start with a test for the CounterJob job. Create a file called test/jobs/counter_job_test.rb:

require 'test_helper'

class CounterJobTest < ActiveJob::TestCase
  test "returns 42" do
    assert_equal 42, CounterJob.perform_now
  end
end

Let’s add a second test for the Pages controller. Create a file called test/controllers/pages_controller_test.rb:

require 'test_helper'

class PagesControllerTest < ActionDispatch::IntegrationTest
  test "should get home" do
    get "/"
    assert_response :success
  end
end

Before running the tests, create a test database:

$ docker compose run drkiq rake db:test:prepare

To run the tests, execute:

$ docker compose run drkiq rails test

...
Finished in 4.850950s, 0.4123 runs/s, 0.4123 assertions/s.
2 runs, 2 assertions, 0 failures, 0 errors, 0 skips

Before continuing, ensure that all your code is checked into GitHub:

$ git add -A 
$ git commit -m "initial commit"
$ git push origin master

Continuous Integration for Dockerized Ruby

With the help of Docker, we created a portable environment that we can share with other developers. In this section, we’ll learn how we can build Docker images to deploy to production.

Continuous Integration (CI) is a software development practice that creates a strong feedback loop that encircles coding and testing. When we make a modification to the code, the CI system picks it up and runs it through a CI Pipeline. The pipeline builds and tests the code and we get an immediate result.

Prerequisites

We’ll need additional services to build and test the Docker images in a scalable way:

  • Docker Hub: create a free account using the Get Started button. Docker Hub provides unlimited public repositories for free.
  • Semaphore: head to Semaphore and sign up using the Sign up with GitHub button. Use your GitHub account to log in.

Next, we have to tell Semaphore how to connect with your Docker Hub account:

  1. Go to your Semaphore account menu and select Settings.
  2. Click on Secrets and then Create New Secret.
  3. Create a secret called “dockerhub” with the following details:
     • DOCKER_USERNAME: your Docker Hub username.
     • DOCKER_PASSWORD: your Docker Hub password.
  4. Click on Save Secret.

Production Images

Our Docker images work very well for development but are not suitable for production. For one thing, in development we rely on mounting the application code from the host instead of shipping a self-contained image.

We’ll create new images that are independent and can be deployed anywhere.

Create a file called Dockerfile.production with the following contents:

# Dockerfile CI version
FROM registry.semaphoreci.com/ruby:3.1

# Default directory
ENV INSTALL_PATH /opt/app
RUN mkdir -p $INSTALL_PATH

# Install Nodejs
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg -o /root/yarn-pubkey.gpg && apt-key add /root/yarn-pubkey.gpg
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list
RUN apt-get update && apt-get install -y --no-install-recommends nodejs yarn

# Install gems
WORKDIR $INSTALL_PATH
COPY drkiq/ .
RUN rm -rf node_modules vendor
RUN gem install rails bundler
RUN bundle install
RUN yarn install

CMD bundle exec unicorn -c config/unicorn.rb

If you compare the development and production Dockerfiles, you’ll find that the main difference is that we’re pulling the Ruby image from registry.semaphoreci.com, a container registry provided by Semaphore which is faster than Docker Hub and which doesn’t count against your Docker download limits.

Push the new Dockerfile to GitHub:

$ git add Dockerfile.production
$ git commit -m "add dockerfile"
$ git push origin master

Continuous Integration Pipeline

You can set up a CI pipeline with a few clicks:

  • Open your Semaphore account.
  • On the left navigation menu, click on the + (plus sign) next to Projects.
  • Find your repository and click on Choose.
  • Select the Docker starter workflow and click on Customize it first.

Click on Continue to workflow setup. If prompted, choose I want to configure the pipeline from scratch.

The Workflow Builder main components are:

  • Pipeline: A pipeline has a specific objective, e.g. building. Pipelines are made of blocks that are executed from left to right in an agent.
  • Agent: The agent is the virtual machine that powers the pipeline. We have three machine types to choose from. The machine runs an optimized Ubuntu 18.04 image with build tools for many languages.
  • Block: blocks group jobs that can be executed in parallel. Jobs in a block usually have similar commands and configurations. Once all jobs in a block complete, the next block begins.
  • Job: jobs define the commands that do the work. They inherit their configuration from their parent block.

Build Block

The build stage creates our Docker images using the dockerhub secret we configured earlier.
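As a rough sketch of what this looks like in Semaphore’s YAML, assuming a .semaphore/semaphore.yml file and an image name of your-dockerhub-username/drkiq (both placeholders you would adapt), the build block could be defined like this:

version: v1.0
name: Dockerizing Ruby
agent:
  machine:
    type: e1-standard-2
    os_image: ubuntu1804
blocks:
  - name: Build
    task:
      secrets:
        - name: dockerhub
      jobs:
        - name: Docker build
          commands:
            - checkout
            - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
            - docker build -f Dockerfile.production -t "$DOCKER_USERNAME/drkiq:latest" .
            - docker push "$DOCKER_USERNAME/drkiq:latest"

The dockerhub secret exposes DOCKER_USERNAME and DOCKER_PASSWORD as environment variables in the job, which is what makes the docker login step work.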

Originally published at https://semaphoreci.com on May 10, 2022.
