Using Expeditor to build Docker images

This guide covers two patterns: the old and the new. We're documenting the old pattern so that current users can understand what is happening, but we'd prefer that all new projects leverage the new pattern.

The New Way: Building images on Buildkite

This pattern is in the process of being transformed into a shim pipeline definition. In the future this pattern will look more like our Chef Habitat or Omnibus patterns. This process is still a work-in-progress, and as such it has the following limitations.

  1. No integration with version management or project promotion. In an ideal world, we would be able to automatically tag the Docker images we create with versions and channels. As of right now, this pattern does not support either.

Creating your Docker image pipelines

To use Buildkite to build your Docker images you'll need the following:

  1. A single docker/build pipeline defined in your .expeditor/config.yml file.
  2. The .expeditor/docker_build.pipeline.yml pipeline definition file.
  3. A docker-compose.yml file (located in the root of your repository) that defines all the Docker images you wish to build.

The docker/build pipeline entry in your .expeditor/config.yml will look something like this:

```yaml
pipelines:
  - docker/build:
      definition: .expeditor/docker_build.pipeline.yml
```

Triggering builds of your Docker images

There are three ways we recommend to trigger your docker/build pipeline.

  1. Via the trigger_pipeline:docker/build action. The most common pattern is to trigger this action in response to a pull_request_merged workload:

```yaml
subscriptions:
  - workload: pull_request_merged:{{agent_id}}:*
    actions:
      - ... # pre-commit actions like built_in:bump_version
      - trigger_pipeline:docker/build
```
  2. Via the Buildkite UI. Triggering a release build via the Buildkite UI is useful when you need to trigger a fresh build out of band of a code change to your project.
  3. Via the Buildkite CLI. If you have the Buildkite CLI configured, you can trigger a release pipeline manually using the bk build create command.
```shell
bk build create --pipeline=chef/chef-example-master-docker-build
```

Managing your Docker Compose and pipeline definition files

Until we've formalized docker/build into a shim pipeline pattern, you'll need to manually manage both your Docker Compose and pipeline definition files. In the future, this process will be handled automatically for you. For this guide we'll use a simplified version of the pipeline that Release Engineering uses to manage a few of the Docker images it uses.

We'll start by looking at the Docker Compose file.

```yaml
version: '3'
services:
  releng-base:
    image: chefes/releng-base:latest
    build:
      context: components/docker-images/chefes/releng-base
  buildkite:
    image: chefes/buildkite:latest
    build:
      context: components/docker-images/chefes/buildkite
  buildkite-windows:
    image: chefes/buildkite-windows:latest
    build:
      context: components/docker-images/chefes/buildkite-windows
```

If you're unfamiliar with Docker Compose, it is a sub-utility of Docker that allows you to coordinate multiple Docker services within a single file. As part of that file you can specify the name of the image (services.*.image) and where the Dockerfile for that image is kept (services.*.build.context), in case Docker needs to build that image prior to launching the service. In Expeditor, we'll short-circuit that process and instead leverage the built-in build functionality of Docker Compose to power our docker/build pipeline.
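To make this concrete: given the docker-compose.yml above, the pipeline effectively invokes Docker Compose's build subcommand against one service at a time. A local equivalent (the service name `releng-base` comes from the example Compose file) would be:

```shell
# Build the image for a single service from its build.context,
# rather than pulling it or starting a container.
docker-compose --file docker-compose.yml build releng-base

# Confirm the resulting image now exists locally.
docker images chefes/releng-base
```

This is a sketch of what the Buildkite docker-compose plugin does on your behalf; you do not need to run these commands yourself.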

In this Docker Compose file we have three Docker images:

  1. chefes/releng-base. This Docker image is where we install the bulk of the tools and languages we support. It is used as a foundation for a number of Docker images throughout the organization, including the one used by Expeditor itself.
  2. chefes/buildkite. This Docker image builds on top of chefes/releng-base but includes some Buildkite-specific configuration.
  3. chefes/buildkite-windows. This Docker image is based directly on the windows/servercore image and has a subset of our toolset. This is a newer image and is not as fully developed as its Linux counterpart.

In our docker-compose.yml file you'll see we have a version (version: '3'). This tells Docker which version of the Compose schema to use. We also have a services hash which has the “services” we're configuring (while in reality we're treating these more as “image definitions”). The services hash is a deeply nested hash, with the key for each sub-hash being the name of our service (e.g. releng-base or buildkite). This “service” name will be used later in our Buildkite pipeline definition file.

Nested under each service are two additional pieces of information:

  1. The name of the Docker image we wish to build (services.*.image), e.g. chefes/releng-base:latest. Right now this pipeline pattern doesn't support versioned Docker images, so it's required that you use a static tag such as latest.
  2. The path to the folder containing your image's Dockerfile (services.*.build.context), relative to the docker-compose.yml file. We recommend that your docker-compose.yml file be located in the root of your project, so these paths should be relative to the root of your GitHub repository. In our example we have nested our Docker images inside of several folders; this is not required. You can host your Dockerfiles anywhere in your GitHub repository.

The next component of our docker/build pipeline is our Buildkite pipeline definition. Here we're using the docker-compose Buildkite plugin to trigger the actual build actions, and leveraging Buildkite's DAG functionality to ensure that our chefes/releng-base and chefes/buildkite images are built in the correct order.

```yaml
steps:
  - label: ":docker: releng-base"
    key: "releng-base"
    plugins:
      - docker-compose#v3.1.0:
          build: releng-base
          image-name: latest
          config: docker-compose.yml
          no-cache: true

  - label: ":docker: buildkite"
    key: "buildkite"
    depends_on:
      - "releng-base"
    plugins:
      - docker-compose#v3.1.0:
          build: buildkite
          image-name: latest
          config: docker-compose.yml
          no-cache: true

  - label: ":docker: buildkite-windows"
    key: "buildkite-windows"
    expeditor:
      executor:
        windows:
          privileged: true
    plugins:
      - docker-compose#v3.1.0:
          build: buildkite-windows
          image-name: latest
          config: docker-compose.yml
          no-cache: true
```

Let's walk through each of the settings in the pipeline definition to better understand how they link together with our docker-compose.yml.

The step keys:

  1. label. The descriptive text that will be used in the Buildkite UI/API to describe the step.
  2. key. The identifier used by the step dependency functionality (e.g. depends_on) to uniquely identify the step.
  3. expeditor. Expeditor-specific executor settings. We use this to make sure that our Windows Docker image is built on a privileged Windows instance. See the Buildkite DSL for more details.
  4. plugins. An array of Buildkite plugin hash configurations. The key (docker-compose#v3.1.0) is comprised of the name (docker-compose) and the git tag version (v3.1.0), separated by an octothorpe (#).

Inside the docker-compose plugin hash we're setting the following configuration values.

The plugin settings:

  1. build. The name of the service from your docker-compose.yml file.
  2. image-repository. Where to upload the Docker image. If you wish to publish to the public Docker Hub, you'll want to set this to `<IMAGE_NAME>`.
  3. image-name. Despite its confusing name, this setting is actually just the tag you wish to use. As mentioned above, we don't currently support pulling the version from your VERSION file, so we recommend just using latest.
  4. config. The location of your docker-compose.yml file relative to the root of the GitHub repository. If you're following our default instructions, this should just be docker-compose.yml.
  5. no-cache. Set this to true. This setting tells Buildkite whether or not to rebuild the Docker image if it already exists on the system. In Docker Compose's normal operating mode, we likely would not want to rebuild the Docker image each time we started the service. Since we're using Docker Compose strictly for image building, we always want to rebuild the image.
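The no-cache behavior corresponds to Docker Compose's own --no-cache build flag. Assuming the example docker-compose.yml above, the plugin's build step behaves roughly like:

```shell
# Force a from-scratch rebuild of the image for the "buildkite"
# service, even if intermediate layers are already cached locally.
docker-compose --file docker-compose.yml build --no-cache buildkite
```

This is why every pipeline run produces a freshly built image rather than reusing stale layers.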

The Old Way: Building images on Expeditor

The “old way” of building Docker images comes from a time before Buildkite. The original use case was to provide a way for the chef/chef project to easily build Docker images that could be used with kitchen-dokken.

There are several weaknesses to the “old way” of building that are addressed in the new way:

  1. The Docker images are built on the Expeditor host itself. This means that you can only build Linux images and building the images puts additional strain on Expeditor's resources.
  2. All configured Docker images are built sequentially. The built_in:build_docker_image action will build each of the Docker images configured in docker_images in the order they are specified. This can quickly lead to long action sets, which breaks our rule about running long, complex tasks on the Expeditor host.
  3. Docker image builds cannot be manually triggered. If there is an outage that results in Expeditor missing or dropping the workload that triggers your built_in:build_docker_image action, there is no way to manually trigger it.

Rather than go through how to configure a project to leverage this pattern we'll simply walk through what each of the components does so that you can understand how they all work together. If you want a good example of the full implementation, check out the chef/chef project.

  1. The docker_images Expeditor configuration defines which Docker images to build. You can specify as many Docker images as you'd like, but you cannot control which of them you build.
  2. The built_in:build_docker_image action is used to build the initial Docker image. This is usually done in response to the artifact_published workload associated with the artifact you wish to bake into the Docker image.
  3. The built_in:tag_docker_image action is used to tag the Docker image with channel tags. This gives your users the ability to have Docker images like chef/chef:current. The built_in:tag_docker_image action will recognize when it's tagging something with the stable channel and tag the image as latest instead (which is the Docker nomenclature for “latest stable”).
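Conceptually, the channel tagging that built_in:tag_docker_image performs is similar to the following manual docker commands (the version tag here is illustrative, not a real release):

```shell
# Tag a freshly built, version-tagged image with its promotion channel.
docker tag chef/chef:16.0.0 chef/chef:current

# On promotion to the stable channel, Expeditor applies "latest"
# instead of "stable", following Docker naming convention.
docker tag chef/chef:16.0.0 chef/chef:latest
```

Expeditor runs this tagging for you on the Expeditor host; the sketch is only meant to show what the resulting tags look like.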