How Docker is changing the way we develop, test & ship apps at Songkick

by Paul Springett


We’re really excited to have shipped our first app that uses Docker throughout our entire release cycle; from development, through to running tests on our CI server, and finally to our production environment. This article explains a bit about why we came to choose Docker, how we’re using it, and the benefits it brings.

Since Songkick and Crowdsurge merged last year we’ve had a mix of infrastructures, and in a long-term quest to consolidate platforms we’ve been looking at how to create a great development experience that would work cross-platform. We started by asking what a great development environment looks like, and came up with the following requirements:

  • Isolate dependencies (trying to run two different versions of a language or database on the same machine isn’t fun!)
  • Match production accurately
  • Fast to set up, and fast to work with day-to-day
  • Simple to use (think make run)
  • Easy for developers to change

We’ve aspired to create a development environment that gets out of the way and allows developers to focus on building great products. We believe that if you want a happy, productive development team it’s essential to get this right, and with the right decisions and a bit of work, Docker is a great tool to achieve that.

Below we’ve broken down some advice and examples from how we’re using Docker for one of our new internal apps.

Install the Docker Toolbox

The Docker Toolbox provides you with all the right tools to work with Docker on Mac or Windows.

A few of us have also been playing with Docker for Mac, which provides a more native experience. It’s still in beta, but it’s a fantastic step forward compared to Docker Toolbox and docker-machine.

Use VMware Fusion instead of VirtualBox

Although Docker Toolbox comes with VirtualBox included, we chose to use VMware Fusion instead. File change notifications are significantly better with VMware Fusion, allowing features like Rails auto-reloading to work properly.

Creating a different Docker machine is simple:

$ docker-machine create --driver vmwarefusion default

Use existing services where possible

In development we connect directly to our staging database, removing a set of dependencies (running a local database, seeding structure and data) and giving us a useful, rich dataset to develop against.

Having a production-like set of data to develop and test against is really important, helping us catch bugs, edge-cases and data-related UX problems early.
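
As a rough illustration, pointing the development container at staging can be as simple as passing a connection string in as an environment variable. This sketch assumes the app reads its settings from DATABASE_URL; the hostname, credentials and image name are illustrative:

$ docker run --rm -it -p 8080:8080 \
    -e DATABASE_URL="mysql2://app_dev:secret@staging-db.internal:3306/app_staging" \
    internal-app:dev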

Test in isolation

For testing we use docker-compose to run the tests against an ephemeral local database, making our tests fast and reliable.
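
As a sketch of what a test run looks like, assuming a compose file that declares the app alongside a throwaway MySQL container (the file and service names here are illustrative):

$ docker-compose -f docker-compose.test.yml run --rm app bundle exec rspec
$ docker-compose -f docker-compose.test.yml down   # throw away the test database afterwards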

Because you may not want to run your entire test suite each time, we also have a test shell ideal for running specific sets of tests:

$ make shell ENV=test
$ rspec spec/controllers/

Proper development tooling

As well as running the Ruby web server through Docker, we also provide a development shell container, aliased for convenience. This is great for trying out commands in the Rails console or installing new gems without needing Ruby or other dependencies on your Mac.

$ make shell ENV=dev
$ bundle install
$ rails console
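
Under the hood, make shell is just a thin wrapper; a minimal sketch of what ENV=dev might expand to, assuming one compose file per environment (names are illustrative rather than our exact Makefile):

$ docker-compose -f docker-compose.dev.yml run --rm app bash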

Use separate Dockerfiles for development and production

We build our development and production images slightly differently. They both declare the same system dependencies but differ in how they install gems and handle assets. Let’s run through each one and see how they work:

Dockerfile.dev

FROM ruby:2.3.1-slim

RUN mkdir -p /app

RUN apt-get update && \
    apt-get install -y \
      build-essential \
      pkg-config \
      libxml2-dev \
      libxslt-dev \
      libmysqlclient-dev \
      mysql-client \
      libssl-dev \
      libreadline-dev \
      git \
      libfontconfig \
      wget && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

# Add our Gemfile to the app directory, this is here so if it changes
# then the bundle install is triggered again
WORKDIR /app
COPY Gemfile* /app/
COPY vendor/cache /app/vendor/cache

RUN bundle config build.nokogiri --use-system-libraries \
    && bundle install --local

COPY . /app

EXPOSE 8080

CMD ["rails", "server", "-b", "0.0.0.0", "-p", "8080"]
Here we deliberately copy the Gemfile, the corresponding lock file and the vendor/cache directory first, then run bundle install. When steps in the Dockerfile change, Docker only re-runs that step and the steps after it. This means we only run `bundle install` when there's a change to the Gemfile or the cached gems; when other files in the app change we can skip this step, significantly speeding up build time.

We deliberately chose to cache the gems rather than install them afresh from Rubygems.org each time, for three reasons. First, it removes a deployment dependency: when you're deploying several times a day it's not great having to rely on more external services than necessary. Second, it means we don't have to authenticate to install private or Git-based gems from inside containers. Finally, it's also much faster to install gems from the filesystem, using the --local flag to avoid hitting Rubygems altogether.
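
One step not shown above: vendor/cache has to be populated in the first place. The post doesn't cover this, but typically it's refreshed on the host with Bundler's package command whenever gems change, then committed alongside the Gemfile:

$ bundle package --all   # copies all gems, including git and path gems, into vendor/cache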
Dockerfile.prod

FROM ruby:2.3.1-slim

# Create our app directory
RUN mkdir -p /app

RUN apt-get update && \
    apt-get install -y \
      build-essential \
      ...
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

WORKDIR /app
COPY . /app

RUN bundle config build.nokogiri --use-system-libraries \
    && bundle install --local --without development test

RUN RAILS_ENV=production bundle exec rake assets:precompile

EXPOSE 8080

CMD ["rails", "server", "-b", "0.0.0.0", "-p", "8080", "--pid", "/tmp/rails.pid"]
For production we install our gems differently, skipping test and development groups and precompiling assets into the image.
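
Building the production image is then just a matter of pointing docker build at the production Dockerfile; for example (the image name is illustrative):

$ docker build -f Dockerfile.prod -t internal-app:latest .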
Deployment

To release this image we tag it as the latest version, as well as with the git SHA. This is then pushed to our private ECR.

We deliberately deploy that specific version of the image, meaning rolling back is as simple as re-deploying a previous version from Jenkins.

Running in production

For running containers in production, we're doing the simplest possible thing: using Docker to solve a dependency management problem only.

We're running one container per node, using host networking and managing the process using upstart. When deploying we simply tell the upstart service to restart, which pulls the relevant image from the registry, stops the existing container and starts the new one.

This isn't the most scalable or resource-efficient way of running containers, but for a low-traffic internal app it's a great balance of simplicity and effectiveness.

Next steps

One thing we're still missing in production is zero-downtime deploys. Amazon's ECS handles this automatically (by spinning up a new pool of containers before swapping the old ones out at the load balancer), so we're looking to move towards using that instead.

We're still learning a lot about using Docker, but so far it's been a powerful, reliable and enjoyable tool to use for both developers and ops.
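
To make the Deployment and Running in production steps above a little more concrete, here's a rough sketch of that release cycle. The registry URL and image name are illustrative, and in practice the restart is driven by the upstart job rather than typed by hand:

$ REPO=123456789012.dkr.ecr.eu-west-1.amazonaws.com/internal-app   # illustrative ECR repository
$ SHA=$(git rev-parse HEAD)
$ docker tag internal-app:latest $REPO:$SHA
$ docker tag internal-app:latest $REPO:latest
$ docker push $REPO:$SHA
$ docker push $REPO:latest

# On each node, restarting the upstart job then boils down to roughly:
$ docker pull $REPO:$SHA
$ docker rm -f internal-app
$ docker run --name internal-app --net=host $REPO:$SHA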
