How to create a Buildkite pipeline from scratch
In our introduction to pipelines we briefly discussed that long-running processes should run inside Buildkite pipelines rather than action sets. We also outlined a number of Buildkite pipeline patterns that already exist for things like Omnibus builds and GitHub pull request verification pipelines. But what happens if you want to accomplish something outside of one of those patterns? In that situation, we recommend the pattern below for creating a general purpose pipeline.
A general purpose pipeline is one where you're using Buildkite to run one or more scripts of your own creation. More often than not, these scripts are bash scripts. Before we get started, let's quickly go over the differences between a bash action (with which you may be more familiar) and running a bash script in a Buildkite pipeline.
- The bash action helper functions are not available to bash scripts in Buildkite pipelines
- The logging functionality of bash actions is nowhere near as robust as what is available in Buildkite pipelines
- Buildkite pipelines have a whole host of useful functionality, such as automatic retries
- Buildkite pipelines allow you to run scripts in a huge variety of different ecosystems; bash actions run in unprivileged Linux Docker containers
- Buildkite pipelines can be triggered manually via the Buildkite API or UI; bash actions can only be triggered in response to workloads
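To illustrate the automatic-retry functionality mentioned above, Buildkite's YAML DSL lets a step opt into retries directly in its definition. A minimal sketch (the script path here is hypothetical):

```
steps:
  - label: run the flaky integration suite
    command: .expeditor/run-integration-tests.sh
    retry:
      automatic:
        limit: 2
```

With this configuration, Buildkite will automatically re-run the step up to two times if it exits non-zero, with no equivalent available to bash actions.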
This guide focuses on creating a pipeline that runs some test suites that we want to be able to run on a schedule. For details on how to get the pipeline to run on a schedule, check out our article here. If you want to see an example of this pattern in action, check out the nightly pipeline in chef/automate. The pattern exemplified in this guide can also be applied to verify pipelines, which run tests in response to GitHub pull requests.
Expeditor will not take any action associated with changes to the .expeditor/config.yml file until those modifications are reviewed and merged as part of a pull request; this includes the creation of Buildkite pipelines. As such, we recommend you first create an empty pipeline and then iterate on its contents once it has been created.
Since we want our pipeline to run every night, we'll name our pipeline nightly. There are a number of naming conventions, but in our case none of those apply.
```
---
pipelines:
  - nightly:
      description: A collection of test suites to run every night
```
General purpose pipelines typically require pretty minimal configuration. What we have above will create a private pipeline named nightly for our project using the default pipeline definition file (e.g. .expeditor/nightly.pipeline.yml) in the chef Buildkite organization. Our pipelines.nightly.description of "A collection of test suites to run every night" provides useful context not only in our .expeditor/config.yml file but in the Buildkite UI as well. Remember, we want our .expeditor/config.yml to also act as documentation for our release process, so make use of pipelines.#.description fields or comments as appropriate. By default all pipelines are private, but if we wanted to make this pipeline public, we would simply add public: true to our definition just under our description.
```
---
pipelines:
  - nightly:
      description: A collection of public test suites to run every night
      public: true
```
Once you've added the pipeline to the .expeditor/config.yml, the next step is to create an empty pipeline definition file. For our nightly pipeline we elected to use the default file location for general purpose pipelines of .expeditor/&lt;pipeline_name&gt;.pipeline.yml. As such, we'll want to create .expeditor/nightly.pipeline.yml with a short comment that indicates that the pipeline is under development.
```
# This pipeline is currently under development.
```
Once we have those two files, we can open a GitHub pull request. After the pull request has been merged, we're ready to move on to the next step: iterating on our pipeline definition in a branch.
One of the benefits of Buildkite is that you're able to run your pipeline against any branch in your GitHub repository. This is incredibly useful for iterating on your pipeline. There are two options for triggering builds on your pipeline: the Buildkite UI or the Buildkite CLI. This document will refer to the Buildkite CLI. If you have not already done so, please install and configure the Buildkite CLI on your workstation.
There are three phases to iterating on your pipeline definition.
- Create your working branch on GitHub. Just like you would for any other feature work, you'll want to create a git branch.
- Iterate on your working branch. As you work on your pipeline definition file, you'll commit changes to your working branch and push them up to GitHub. To test out your changes you can trigger builds in the pipeline via the Buildkite UI or using the Buildkite CLI command.
```
bk build create --pipeline=<pipeline_slug> --branch=<working_branch>
```
Your pipeline slug is based on your Buildkite organization, Expeditor project name, and the pipeline name. In our example, we created our nightly pipeline on the chef/example:master project in the chef Buildkite organization, so our pipeline slug will be chef/chef-example-master-nightly. You can double-check this slug by looking at the slug in the URL of the pipeline in the Buildkite UI, or by running the bk pipeline list CLI command and filtering for your pipeline.
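To make the slug construction concrete, here's a small shell sketch that assembles the slug from our example's organization, project, and pipeline name (the working branch name in the final comment is hypothetical):

```shell
# Pipeline slug = <buildkite_org>/<expeditor_project>-<pipeline_name>
# For our example, the chef/example:master project becomes "chef-example-master".
org="chef"
project="chef-example-master"
pipeline="nightly"
slug="${org}/${project}-${pipeline}"
echo "${slug}"

# You would then trigger a build against your working branch with:
# bk build create --pipeline="${slug}" --branch=my-working-branch
```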
- Open up a pull request for your working branch. Once you're comfortable that you have a working pipeline, you can open up a pull request for your branch and formally merge it into your release branch (e.g. master).
Buildkite pipelines are YAML files composed of four different types of steps: command, wait, block, and trigger steps.
Expeditor supports all of these types of steps as well as the full Buildkite YAML DSL. However, to provide additional Chef- and Expeditor-specific functionality, we provide an additional DSL on top of Buildkite's. This DSL is not supported natively by Buildkite and is instead processed by a wrapper script around Buildkite's buildkite-agent pipeline upload utility, which we use as part of the initial trigger step that builds out the pipelines.
For Buildkite pipelines managed with Expeditor, you select where to run your step by specifying an executor. The executor allows you to specify the constraints of not only where you want to run your step but how, combining the concept of Buildkite agent tags with plugins. The full matrix of supported execution runtimes is covered in our Expeditor Buildkite DSL reference documentation, so in this guide we'll be going over the most common configurations we recommend people use.
Running your step inside a Linux Docker container on a Linux host is likely the most common configuration you'll use. By default this job will run inside the chefes/buildkite Docker image, which has the latest stable versions of most tools and languages used by Chef already installed. You can also specify your own image. Please check out the reference documentation for the full details.
```
steps:
  - label: a job run in a linux docker container on linux
    expeditor:
      executor:
        docker:
```
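Putting this together with an actual command, a step in our nightly pipeline definition might look like the following sketch (the test script path is hypothetical):

```
steps:
  - label: run the nightly test suite
    commands:
      - .expeditor/run-nightly-tests.sh
    expeditor:
      executor:
        docker:
```

Because no image is specified, this step runs inside the default chefes/buildkite image described above.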
If you need to run a step on Windows, we recommend running it in a Windows Server 2019 Docker container. The default container image that will be used is chefes/buildkite-windows, which has the latest stable versions of most tools and languages used by Chef already installed. You can also specify your own image, but it will need to be based on the correct Windows Server Core base image (e.g. ltsc2019).
```
steps:
  - label: a job run in a windows docker container on windows
    expeditor:
      executor:
        docker:
          host_os: windows
```
If you're running on a private pipeline, you can run your command inside a virtual MacOS Veertu Anka container. These images do not have the same tool set availability as the Linux and Windows Docker containers. In general, we recommend only using the MacOS executor if you need to run a shell script that must be run on a MacOS machine because it needs something like Xcode.
```
steps:
  - label: a job run on macos
    expeditor:
      executor:
        macos:
```
If you're running on a private pipeline, you have the ability to inject secrets into your job. The full details of how to inject secrets is covered in the Expeditor Buildkite DSL reference documentation, but in this guide we'll cover some of the most common use cases.
```
steps:
  - label: a job that uses the git CLI to fetch from a private GitHub repository
    commands:
      - git clone https://github.com/chef/private-repository.git
    expeditor:
      executor:
        docker:
          accounts:
            - github/chef
```
```
steps:
  - label: a job that uses the GITHUB_TOKEN environment variable
    commands:
      - curl -H "Authorization: token \$GITHUB_TOKEN" https://api.github.com
    expeditor:
      executor:
        docker:
          secrets:
            GITHUB_TOKEN:
              account: github/chef
              field: token
```
```
steps:
  - label: a job that uses the aws CLI to interact with an AWS account
    commands:
      - aws --profile chef-example s3 ls
    expeditor:
      executor:
        docker:
          accounts:
            - aws/chef-example
```
```
steps:
  - label: a job that needs AWS identity information as environment variables
    expeditor:
      executor:
        docker:
          secrets:
            AWS_ACCESS_KEY_ID:
              account: aws/chef-example
              field: access_key_id
            AWS_SECRET_ACCESS_KEY:
              account: aws/chef-example
              field: secret_access_key
            AWS_SESSION_TOKEN:
              account: aws/chef-example
              field: session_token
```
Once you feel comfortable that your pipeline is working as expected, you can go through your team's normal code review process and merge your pull request. Congratulations! You now have a working nightly pipeline that you can trigger using the trigger_pipeline action. Head over to the scheduled action sets walk-through to learn how to configure the schedule to trigger your pipeline automatically at the desired frequency.