Using Buildkite for Pull Request verification

Pull request verification pipelines, or “verify pipelines,” are general purpose pipelines that are triggered automatically in response to a pull request being opened, synchronized, or merged against a GitHub repository. They are intended to replace services like TravisCI and AppVeyor for validating whether a pull request is safe to merge. Like other general purpose pipelines, but unlike most other pipelines managed by Expeditor, verify pipelines consist primarily of steps composed by the project maintainers. This means that individual project maintainers are responsible for ensuring that their pipelines continue to function as expected.

What differentiates verify pipelines from other general purpose pipelines is that they are named pipelines. Naming your pipeline verify, or prefixing its name with verify/ (e.g. verify/public or verify/private), triggers special handling in Expeditor that configures the pipeline in the following ways:

  • Builds on your verify pipelines are triggered directly from GitHub. Expeditor leverages special Buildkite configuration that allows verify pipelines to behave in a manner similar to services like TravisCI.
  • Build results are shared on the pull request as a required status check. This mirrors the behavior seen with other tools like TravisCI.

There are three common patterns for creating verify pipelines:

  • Public Verify Pipelines. For open source projects, we recommend a single public verify pipeline unless you need access to resources behind Chef’s VPN, such as Vault.
    ---
    pipelines:
      - verify:
          definition: .expeditor/verify.pipeline.yml
          public: true
  • Private Verify Pipelines. For closed source projects hosted on a private GitHub repository, we recommend using the default private visibility for your verify pipeline.
    ---
    pipelines:
      - verify:
          definition: .expeditor/verify.pipeline.yml
  • Public and Private Verify Pipelines. For some open source projects, like chef/automate, you may find it beneficial (or sometimes necessary) to have both a public and a private verify pipeline. Keep in mind, however, that for security purposes builds triggered by pull requests from non-Chef employees against private pipelines are blocked by default. If an open source contributor opens a pull request on a project with a private pipeline, Expeditor will block the execution of the verify pipeline until a Chef employee validates the contents of the change and unblocks the pipeline via the Buildkite UI.
    ---
    pipelines:
      - verify/public:
          definition: .expeditor/verify_public.pipeline.yml
          public: true
      - verify/private:
          definition: .expeditor/verify_private.pipeline.yml

Please refer to the general purpose pipelines document for guidance on how to iterate on your verify pipeline.

Running Buildkite Tests Locally

The long-term goal is to give developers an easy way to run their Buildkite tests locally. This can help avoid issues where tests “work on my machine” but break in Buildkite.

This process does not currently support the following settings:

If your test depends on any of these settings, the following process will not work for you.

Running Docker-based Tests

The following section walks you through running a Docker-based Buildkite test locally on your own machine. To do this, you will need Docker installed.

The first step is to take a look at the YAML definition for the Docker job you want to run locally. In our example we’re pulling a test from the chef/chef verify pipeline.

- label: "Chefstyle :ruby: 3.0"
  commands:
    - /workdir/.expeditor/scripts/bk_container_prep.sh
    - bundle install --jobs=3 --retry=3 --path=vendor/bundle --without omnibus_package ruby_prof
    - bundle exec rake style
  expeditor:
    executor:
      docker:
        image: rubydistros/ubuntu-18.04:3.0

We can tell this is a Docker-based test because the job specifies the docker executor.

There are two key pieces of information you need to extract from your Buildkite job definition:

  1. Which Docker image you are using.
  2. The command(s) you need to run.

You can identify which Docker image you need by looking at the expeditor.executor.docker.image key. In our example, we’re using the rubydistros/ubuntu-18.04:3.0 Docker image. If your job doesn’t explicitly specify an image, it will use the default chefes/buildkite image.
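For comparison, here is a hypothetical job definition that declares the docker executor but omits the image, and so would fall back to the default chefes/buildkite image. The label and command are invented for illustration, and the empty docker: {} entry is only meant to show that no image is specified:

- label: "Lint"
  commands:
    - make lint
  expeditor:
    executor:
      docker: {}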

Our example job does not specify a custom working directory, so we fall back on the default workdir, which is /workdir.

With the Docker image name and the workdir, we can form the basis of the docker command:

docker run --volume $(pwd):/workdir --workdir /workdir rubydistros/ubuntu-18.04:3.0
  • The --volume $(pwd):/workdir option mounts your current directory (the root of your git repository) into the container at /workdir.
  • The --workdir /workdir option ensures that whatever command we run executes from the /workdir directory.
  • rubydistros/ubuntu-18.04:3.0 is the Docker image we’ll use. It will be downloaded on first use.
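If your job relies on the default chefes/buildkite image instead, the same pattern applies. A sketch with only the image swapped in:

# same mount and workdir, default Expeditor image instead of an explicit one
docker run --volume $(pwd):/workdir --workdir /workdir chefes/buildkite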

The next step is to craft the command we’re going to pass into the docker run command. To do this, we’re going to join all the commands in our commands array together using &&, put them in quotes, and then feed them into sh -e -c.

docker run --volume $(pwd):/workdir --workdir /workdir rubydistros/ubuntu-18.04:3.0 sh -e -c "/workdir/.expeditor/scripts/bk_container_prep.sh && bundle install --jobs=3 --retry=3 --path=vendor/bundle --without omnibus_package ruby_prof && bundle exec rake style"

And there you have it! You can now run a Docker-based Buildkite job locally on your machine.
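If one of the commands fails and you want to investigate inside the container, you can start an interactive shell in the same image and run the commands one at a time. This is just a sketch; it assumes the image provides /bin/bash (the Ubuntu-based rubydistros images do):

# -it gives you an interactive terminal; the mount and workdir are the same as above
docker run -it --volume $(pwd):/workdir --workdir /workdir rubydistros/ubuntu-18.04:3.0 /bin/bash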