SECURITY INFLUENCERS BLOG

Security influencers provide real-world insight and “in-the-trenches” experiences on topics ranging from application security to DevOps and risk management.


Portable Builds with Docker

At Contrast, we like to enable developers to solve their own problems without submitting tickets to the Operations team. We also like to define our infrastructure as code.

I'll show you a new continuous integration strategy that enables projects to bring their build dependencies with them as code, avoiding the configuration overhead typically associated with setting up a new automated build. To accomplish this, we'll use Docker to run builds inside project-specific Docker containers. All the configuration to do this will live right next to our code.

Get Started    

Most software projects require some initial setup before a developer can run a build. For example, to build a typical Java web project, we must first install and configure Java, Maven, Node.js, etc. Let's call the set of programs we need to build a project the "build dependencies". Developers usually do a good job of updating projects' README documents with instructions on how to set up these build dependencies, and we hope that we all install roughly the same versions. In addition to developers' machines, the continuous integration server needs to be able to build our projects, so it also needs access to machines that have the right build dependencies installed.
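To make that setup burden concrete, a README might ship a small helper that reports which build dependencies are missing. This is just a sketch; the tool list is illustrative, not ts-benchmark's real one, and a real project would also pin exact versions:

```shell
#!/bin/bash
# Hypothetical dependency-check helper a README might point developers to.
# The tool list is illustrative; real projects would also check versions.
required_tools="java mvn node"
missing=""
for tool in $required_tools; do
  # command -v exits non-zero when the tool isn't on the PATH
  command -v "$tool" >/dev/null 2>&1 || missing="$missing $tool"
done
if [ -z "$missing" ]; then
  echo "all build dependencies present"
else
  echo "missing build dependencies:$missing"
fi
```

Every developer (and the CI server) has to pass a check like this before a build can even start, which is exactly the overhead the Docker approach below removes.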

We recently started a new project, ts-benchmark, which introduces a set of build dependencies that we haven't used in other projects: Go and its associated build tools. To set up the continuous integration server with the right build dependencies for this new project, we would usually need to ask our Operations team to provision a new build node with the Go toolchain. If we ever change the build dependencies for this project, we need to work with Operations again to configure a new build machine. Furthermore, if we introduce new build dependencies in a feature branch, and those dependencies are incompatible with our old build machine (e.g. upgrading our version of Go), we may need to maintain a new build machine and a new build configuration just for the feature branch.

We can do better, so we introduced a generic build machine that simply has Docker installed. Here's how we use it to build our new project.

Dockerfile Defines the Build Image

We want projects to "bring their build dependencies with them as code". To do this, we include a Dockerfile in the source code repository, which instructs Docker how to build an image that contains all the project's build dependencies. Here's the Dockerfile for building the ts-benchmark project:

FROM golang

RUN mkdir -p /go/src/contrast && \
   apt-get update && \
   apt-get install -y zip && \
   go get -u github.com/jteeuwen/go-bindata/...

In case you're not familiar with Dockerfiles, let’s break down these two instructions.

  1. `FROM golang` starts with a base image: the official golang Docker image, which includes the `go` executable.
  2. `RUN ...` executes shell commands during the image build; here, they install the additional tools our Go project needs (the `zip` package and the `go-bindata` code generator).
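With the Dockerfile in place, a quick way to confirm the image actually contains the Go toolchain is to build it and ask a throwaway container for the Go version. This is a sketch, guarded so it degrades to a message on machines where Docker (or its daemon) isn't available:

```shell
#!/bin/bash
# Build the image from the Dockerfile in the current directory, then run
# `go version` in a throwaway container to confirm the toolchain is there.
# Guarded so the snippet only prints a message where Docker is unavailable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
  docker build -t tsbenchmark-build .
  docker run --rm tsbenchmark-build go version
  result="image checked"
else
  result="docker unavailable; skipped image check"
fi
echo "$result"
```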

Run the Build Inside a Container

Remember – the goal is to build our project using a Docker container that contains all the necessary build dependencies. For those of you who aren’t familiar with Docker, let’s quickly review some terminology:

  • Docker image: Immutable image of a Linux file system. Docker has lots of these readily available in its public registry (e.g. the official MySQL image); or, you can build your own, like we’re doing with the Dockerfile.

  • Docker container: Process running inside an isolated Linux environment created from a Docker image.

The Dockerfile defines how to create a Docker image with all the build dependencies installed. We’ll then use a shell script to build our Docker image, and run the build in a Docker container made from that image. Let's break down an example shell script for the ts-benchmark project: 

#!/bin/bash -eu
# build.sh - builds the project in a Docker container
this_dir=$(dirname "$0")
cd "$this_dir/.."

docker build -t tsbenchmark-build .

docker run --rm \
    -v "$(pwd)":/go/src/contrast/tsbenchmark:rw \
    -w /go/src/contrast/tsbenchmark \
    --entrypoint /usr/bin/make \
    tsbenchmark-build linux


  1. The first couple of lines just adjust the current working directory based on the location of this script file.
  2. The `docker build` command builds a new Docker image with the tag `tsbenchmark-build` using the Dockerfile in the current working directory. After this runs, you’ll see an image with the tag "tsbenchmark-build" if you list the Docker images with `docker image list`.
  3. The `docker run` command is the most complex, so let's consider each argument:
    1. `--rm` indicates that Docker should clean up the container after it exits. If this isn’t provided, you’ll end up with a lot of clutter.
    2. `-v $(pwd):/go/src/contrast/tsbenchmark:rw` attaches a volume to the container. In this case, we attach the present working directory (the project directory) to the container's `/go/src/contrast/tsbenchmark` directory, and allow the container to read and write this directory. This maps a directory on the host file system to the container's file system, allowing the container to read our source code even though it’s not in the Docker image. The `/go` directory is the container's `$GOPATH`. If you're unfamiliar with Go and GOPATH, just know that Go source code wants to be in a specific file system location within a GOPATH to be compiled.
    3. `-w /go/src/contrast/tsbenchmark` specifies the working directory in the container's file system to use when executing the container's binary. Note that the path provided here is the volume attached in the previous argument, so the working directory is the project directory.
    4. `--entrypoint /usr/bin/make` specifies the binary to execute when the container runs. The ts-benchmark project uses Make for its build system, so we'll set the entrypoint to the Make binary.
    5. `tsbenchmark-build` is the Docker image to run. (We built this image in the previous step.)
    6. `linux` (and any arguments provided after the image name) are arguments to the container's entrypoint. In this case, the container will run `make linux`.

So, `docker run` executes `make linux` inside a new Docker container that contains all our build dependencies. Since our project directory is mapped to the Docker container file system with the volume option, the resulting artifacts from executing `make linux` are available where we expect them to be – on disk, as if we ran `make linux` without Docker!
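The post doesn't show ts-benchmark's Makefile, but to make the `make linux` step concrete, a minimal hypothetical `linux` target might look like the following (the target name comes from the build script above; the output path and `GOOS`/`GOARCH` values are assumptions):

```makefile
# Hypothetical sketch only: ts-benchmark's real Makefile is not shown in
# this post. A "linux" target typically cross-compiles for Linux so the
# artifact matches the deployment platform regardless of the host OS.
.PHONY: linux
linux:
	GOOS=linux GOARCH=amd64 go build -o build/tsbenchmark .
```

Because the target runs inside the golang-based container, the same recipe behaves identically on every developer laptop and on the CI server.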

Review

The goal is to enable projects to bring their build dependencies with them as code. This enables developers to solve their own problems without help from Operations, tracks our infrastructure in version control and allows branches of our code to use different build dependencies without any burden on the continuous integration system. To do this, projects include a Dockerfile, which instructs Docker how to build an image that contains all the necessary build dependencies. The projects' build scripts run the build inside of a Docker container created from the Docker image. If you're unfamiliar with Docker, this basic use case is a great way to start learning about the platform and how we use it at Contrast.

Tell Me the Benefits Again

  • Developers solve their own problems by editing the Dockerfile and build script in the project's source.
  • Track build dependencies in version control.
  • Use different build dependencies on different branches without affecting continuous integration configuration.
  • Any developer who has Docker installed on their laptop can build projects locally (just as the continuous integration server would).
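To make that last point concrete, here's a sketch of the developer-laptop experience under this strategy; the `scripts/build.sh` path is an assumption about the repository layout, and the check degrades to a message when Docker isn't installed:

```shell
#!/bin/bash
# Sketch: a developer's local build uses the same entry point CI uses.
# The scripts/build.sh path is an assumption about the repository layout.
if command -v docker >/dev/null 2>&1; then
  msg="docker found: run ./scripts/build.sh, exactly as the CI server does"
else
  msg="docker not found: install Docker to build this project locally"
fi
echo "$msg"
```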
Johnathan Gilday

Johnathan supports Contrast as a full-stack software developer. From his prior experience with software research and development efforts for the US DoD, he brings expertise in modern software stacks including mobile platforms, non-relational cloud storage solutions, and cloud automation technologies.
