Using Buildkite to build Chef Habitat Packages

The recommended packaging method for all new projects is Chef Habitat. While Chef does provide a public build service for Chef Habitat packages, Expeditor does not leverage it to build packages for the following reasons:

  1. All the reasons we mention in our introduction to pipelines about why we use Buildkite. From a management point of view, there are a number of benefits to having a consistent place for teams to find their build pipelines. One example is that teams like the one that manages chef/chef (which has Omnibus, Habitat, Ruby gem, and Docker image builds) can see all their build pipelines in a single dashboard.
  2. We are able to experiment with new build patterns and processes before making them public. The Chef Release Engineering team uses the Chef Habitat Buildkite pipelines as a testing ground for functionality that might eventually make its way into the public build service. We are able to iterate on functionality like smart builds and intra-project reverse dependency detection in the Chef Habitat Buildkite pipelines before making it available to everyone in the public build service.
  3. The Release Engineering team is able to more easily respond to the needs of the project teams. Chef Habitat is more than just building packages, and sometimes project teams need certain build functionality very fast. By maintaining our own Chef Habitat Buildkite pipelines the Release Engineering team is able to more quickly respond to these needs without derailing the Chef Habitat maintainers from other equally important work. We then take it upon ourselves to make sure our learnings are communicated to the Chef Habitat team so the public build service can be improved (as mentioned above).

Creating your Chef Habitat build pipelines

Chef Habitat Buildkite pipelines are another example of a named pipeline. To create the pipeline, you’ll need the following:

  1. A single habitat/build pipeline defined in your .expeditor/config.yml file.
  2. The .expeditor/build.habitat.yml shim pipeline definition file.
  3. (optional) A .bldr.toml file that defines all the Chef Habitat packages you have in your project.

As with other named pipelines, prefixing your pipeline name with habitat/ informs Expeditor that this Buildkite pipeline is different from a general purpose pipeline and tells it to expect a shim pipeline definition rather than a traditional pipeline definition. Remember, shim pipeline definition files are not written in the native Buildkite DSL.

.expeditor/config.yml
---
pipelines:
  - habitat/build:
      definition: .expeditor/build.habitat.yml

Triggering a build of your Habitat packages

There are three recommended ways to trigger your habitat/build pipeline.

  1. Via the trigger_pipeline:habitat/build action. The most common pattern is to trigger this action in response to a staged_workload_released workload. Due to current limitations of the Chef Habitat API, we need to use a single HAB_BLDR_CHANNEL per project rather than a per-build HAB_BLDR_CHANNEL like the public build service uses. This means that it’s dangerous to run more than one Chef Habitat Buildkite build at a time, so we recommend you use staging areas to ensure that only one build occurs at a time.
    staging_areas:
      - post_merge:
          workload: pull_request_merged:{{github_repo}}:{{release_branch}}:*
    
    subscriptions:
      - workload: staged_workload_released:{{agent_id}}:post_merge:*
        actions:
          - ... # pre-commit actions like built_in:bump_version
          - trigger_pipeline:habitat/build
  2. Via the Buildkite UI. Triggering a release build via the Buildkite UI is useful when you need to trigger a fresh build out of band of a code change to your project.
  3. Via the Buildkite CLI. If you have the Buildkite CLI configured, you can trigger a release pipeline manually using the bk build create command.
    bk build create --pipeline=chef/chef-example-main-habitat-build

In the Chef Habitat public build service, reverse dependency detection ensures that when an upstream Chef Habitat package is modified or rebuilt, all the downstream packages are rebuilt as well. Unfortunately, at this time you cannot trigger your pipeline when one of your package dependencies is updated in the public Chef Habitat Depot.

Optimizing build time with smart builds

If your project has a large number of packages, like chef/automate does, you may not want to rebuild every package on every build. One of the benefits that Expeditor offers is the concept of a smart build. If you set smart_build to true in your .expeditor/build.habitat.yml shim pipeline definition file, then when your habitat/build pipeline is triggered in response to a pull_request_merged workload, Expeditor will only rebuild the packages that have been modified since they were last successfully built.

.expeditor/build.habitat.yml
---
smart_build: true

To do this, Expeditor maintains git tags in your repository that record the commit at which each package was last successfully built. The naming scheme for these tags is hab-pkg-<PKG_NAME>. For builds triggered by the merging of a pull request, Expeditor performs a git diff between the HEAD of your release branch and the commit associated with each package’s last successful build. It compares the results of this diff against the paths specified in the .bldr.toml for that package; if it finds matching files, it marks that package and all of its reverse dependencies (rdeps) within the repository as “needing to be built” and rebuilds them in the correct order.
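
As a rough illustration, the check Expeditor performs for a single package is conceptually similar to the following shell sketch. The tag name, package name, and path pattern here are illustrative placeholders, not the exact implementation.

# Conceptual sketch: has package-a changed since its last successful build?
# hab-pkg-package-a is the tag written at the last successful build of package-a.
if git diff --name-only hab-pkg-package-a HEAD | grep -q '^components/package-a/'; then
  echo "package-a (and its in-repo reverse dependencies) need to be built"
fi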

In order to provide a mechanism for you to rebuild all your packages if necessary, builds triggered manually through the Buildkite UI or CLI bypass the smart build option and build all the packages in your project (while still respecting the correct and necessary build order).

Auto-promoting to an alternate first channel

As part of the Chef Habitat Buildkite pipeline, when a package is successfully built it is published to the public depot. As part of this publishing process, it is automatically added to the unstable channel. If your project is required to use “head of channel” promotion, relying on the unstable channel as your first channel can be problematic. Builds sometimes fail, making the integrity of the head of the unstable channel difficult to trust. For example, if you happen to promote the head of the unstable channel while another Chef Habitat Buildkite pipeline is running, you may run into unexpected and unpredictable behavior unless you establish and perfectly enforce a merge freeze before you attempt any promotions.

To get around this issue, the built_in:promote_habitat_packages action has special logic that detects when you specify a channel other than unstable as the first channel in your artifact_channels. When used in conjunction with the buildkite_hab_build_group_published workload, Expeditor creates a pattern that allows you to safely use “head of channel” promotion with Chef Habitat packages. This pattern was based on an existing pattern used in the public build service for safely building and promoting core packages.

The exact list of channels doesn’t matter; the only thing that matters is that unstable is not one of them. “But unstable is a perfectly viable channel name!” you may be saying, and we hear you. This pattern has these limitations because of how “head of channel” promotion had to be implemented. If you do not use “head of channel” promotion, then this pattern does not apply to you.

.expeditor/config.yml
---
pipelines:
  - habitat/build:
      definition: .expeditor/build.habitat.yml

staging_areas:
  - post_merge:
      workload: pull_request_merged:{{github_repo}}:{{release_branch}}:*

artifact_channels:
  - dev
  - acceptance
  - current
  - stable

subscriptions:
  # Each package is uploaded to the unstable channel as it is finished
  - workload: staged_workload_released:{{agent_id}}:post_merge:*
    actions:
      - ... # your other merge actions
      - trigger_pipeline:habitat/build

  # When all packages are finished, each individual package is automatically promoted to your first channel.
  # In this example, the first channel we specified in artifact_channels is 'dev'
  - workload: buildkite_hab_build_group_published:{{agent_id}}:*
    actions:
      - built_in:promote_habitat_packages

Let’s quickly walk through how this pattern works, and how it relates to the configuration we have above.

  1. We trigger our Chef Habitat Buildkite pipeline when a pull request is merged. This pipeline builds your packages as described above. Upon the completion of any Chef Habitat Buildkite pipeline, a buildkite_hab_build_group_published workload is published.
  2. When all the builds complete successfully, we promote the packages to the first channel. We subscribe to the buildkite_hab_build_group_published workload, which contains the specific pkg_idents and pkg_targets for each of the builds we created in our pipeline, allowing us to safely promote them to our dev channel using the built_in:promote_habitat_packages action. If for some reason you cannot use the built_in:promote_habitat_packages action, you can replace its usage with a bash action that leverages the appropriate environment variables, as in the sketch after this list.
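
If you do write your own bash action, a minimal sketch might look like the following. The EXPEDITOR_PKG_IDENTS variable name is an assumption used for illustration; consult the workload documentation for the environment variables Expeditor actually exposes to your action. The hab pkg promote command is the standard Habitat CLI for channel promotion.

# Hypothetical bash action: promote each package from the build group to 'dev'.
# EXPEDITOR_PKG_IDENTS is assumed here to be a space-separated list of pkg_idents
# taken from the buildkite_hab_build_group_published workload.
for ident in ${EXPEDITOR_PKG_IDENTS}; do
  hab pkg promote "${ident}" dev
done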

Walk-through of what happens in the pipeline

Let’s break down the steps of a Chef Habitat Buildkite pipeline build and walk through all the processes.

  1. Parse the shim pipeline definition. Our trigger step reads in the .expeditor/build.habitat.yml shim pipeline definition file and performs the following sub-steps.
    1. Locates all of the Chef Habitat package plan files in the repository. Expeditor uses the same logic as the Chef Habitat build service to determine where the Chef Habitat packages are defined in your source code. If you have defined a .bldr.toml (preferred), Expeditor will use that. Otherwise it will look for a ./plan.(sh|ps1) or a ./habitat/plan.(sh|ps1).
    2. Parses all of the plan files to build a full intra-project directed acyclic graph. This allows Expeditor to create a predictive build group that mirrors the reverse dependency build functionality available in the public build service. Expeditor does this by detecting when a package needs to be rebuilt for one of the following reasons (see the example after this walk-through):
      1. One or more of the paths specified in that package’s entry in the .bldr.toml is modified.
      2. The package depends, via the pkg_deps, pkg_build_deps, or pkg_scaffolding variables in its plan file, on one of the packages that has already been marked as “needing to be built.”
    3. Creates a Buildkite job to build each of the packages identified as “needing to be built.” Expeditor uses a project-specific HAB_BLDR_CHANNEL (expeditor-<GITHUB_ORG>-<GITHUB_REPO>-<RELEASE_BRANCH>) to ensure that each package is built against the latest build of each of its dependencies, even if the latest build was earlier in the current pipeline.
  2. Complete the builds, respecting the reverse dependency ordering. To keep things flowing, Expeditor uses the DAG to order the jobs so that as many packages as possible are built in parallel.
  3. Post processing step to publish the buildkite_hab_build_group_published workload back to Expeditor. Once all of the packages are successfully built and published to the public Chef Habitat Depot, the final step in the pipeline collects the pkg_ident and pkg_target for each package it built and publishes that information back to Expeditor.
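
To make the reverse dependency detection concrete, consider a hypothetical project with two packages where package-b declares a dependency on package-a in its plan file. If a merged pull request modifies a file matching package-a’s paths, Expeditor marks package-a as “needing to be built,” marks package-b as well (because it depends on package-a), and builds package-a before package-b.

# components/package-b/habitat/plan.sh (hypothetical)
pkg_name=package-b
pkg_origin=chef
pkg_version="0.1.0"
# This in-repository dependency is the edge Expeditor uses to decide that
# package-b must be rebuilt whenever package-a is rebuilt.
pkg_deps=(
  chef/package-a
)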

.bldr.toml

The .expeditor/build.habitat.yml shim pipeline definition defers to Chef Habitat’s .bldr.toml file to specify which packages it needs to build.

build_targets

A list of one or more pkg_targets for which to build the package. Supported values are x86_64-linux, x86_64-linux-kernel2, and x86_64-windows.

.bldr.toml
[my-package]
build_targets = [
  "x86_64-linux",
]

If the value is unspecified, the default value is determined by the presence of the following files relative to plan_path.

  • x86_64-linux: plan.sh or habitat/plan.sh
  • x86_64-linux-kernel2: x86_64-linux-kernel2/plan.sh
  • x86_64-windows: plan.ps1 or habitat/plan.ps1

If you manually specify x86_64-linux-kernel2 as a pkg_target, and the x86_64-linux-kernel2/plan.sh file does not exist, Expeditor will use either plan.sh or habitat/plan.sh as the plan file for that target.

export_targets

Warning

This is a custom extension. This setting does not exist in the native .bldr.toml DSL.

A list of hab pkg export actions that you wish to take once the habitat packages have completed building. Defaults to []. Currently only supports docker.

.bldr.toml
[my-package]
export_targets = ["docker"]
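
For reference, the docker export is roughly equivalent to running Habitat’s Docker exporter against the freshly built artifact yourself, as in the following sketch (the exact invocation may vary by Habitat version):

# Illustrative: export the most recent studio-built artifact as a Docker image.
source results/last_build.env          # sets pkg_artifact after a studio build
hab pkg export docker "results/${pkg_artifact}"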

paths

A list of one or more glob patterns which can be compared to a list of modified files to determine if your package has changed. Please check out the Ruby docs on File.fnmatch to see what wildcard patterns are supported.

Do not use this setting to force package A to rebuild if package B has been modified. Rather, create the intra-project dependency using the pkg_deps, pkg_build_deps, or pkg_scaffolding setting (as appropriate) to indicate that package A should be rebuilt if package B has been modified.

.bldr.toml
[package-a]
plan_path = "components/package-a"
paths = [
  "components/package-a/*",
  "components/shared-library/*",
]

[package-b]
plan_path = "components/package-b"
paths = [
  "components/package-b",
]

If the value is unspecified, the default value is determined by the value of plan_path.

  • Default plan_path value (the project root): ["*"]
  • A relative path to the component directory (for example, some/path): ["some/path/*"]
  • A relative path to the habitat directory (for example, some/path/habitat): ["some/path/*"]

plan_path

The path, relative to the root of the project, where the Chef Habitat plan file exists. You need not include the habitat/ directory (if you have one) as part of the plan_path.

If unspecified, defaults to the root of the project (./).

.bldr.toml
# example: plan file is located at components/my-package/habitat/plan.sh
[my-package]
plan_path = "components/my-package"

private

Warning

This is a custom extension. This setting does not exist in the native .bldr.toml DSL.

Whether or not the package should be private (that is, not visible on the public Depot). Defaults to false.

.bldr.toml
[my-package]
private = true

.expeditor/build.habitat.yml

Where Chef Habitat’s Builder service stores build configuration in its UI, Expeditor stores the same information in the pipeline definition file (.expeditor/build.habitat.yml).

---
bldr_toml: .bldr.toml
origin: chef
smart_build: true

bldr_toml

Provide the path (relative to the project root) of your .bldr.toml, if you have one. If no value is specified, we assume .bldr.toml. If we cannot find the file, we proceed with the assumption that you have either a ./plan.sh or ./habitat/plan.sh file.

---
bldr_toml: .bldr.toml

origin

Provide the name of the origin where you would like the packages in your repository uploaded.

---
origin: chef

Supported Public Depot Origins

  • chef
  • chef-demo
  • chef-es
  • chefops
  • core
  • devchef

studio_secrets

You have access to the Secrets DSL via the studio_secrets setting. For simplicity, Expeditor automatically prefixes all your studio_secrets environment variables with the requisite HAB_STUDIO_SECRET_ prefix so that they are injected into the studio environment correctly.

---
studio_secrets:
  FEAT_IGNORE_LOCAL:
    value: "true"
  GITHUB_TOKEN:
    account: github
    field: token
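
Inside the studio, the HAB_STUDIO_SECRET_ prefix is stripped, so your plan can read each secret under its original name. For example, assuming the GITHUB_TOKEN secret configured above, a plan callback could use it as in the following illustrative sketch:

# Illustrative plan.sh snippet: GITHUB_TOKEN from studio_secrets is available
# inside the studio without the HAB_STUDIO_SECRET_ prefix.
do_download() {
  curl -fsSL -H "Authorization: token ${GITHUB_TOKEN}" \
    -o "${pkg_filename}" "${pkg_source}"
}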

smart_build

When smart_build is set to true in your .expeditor/build.habitat.yml, Expeditor will — when a pull request is merged — only rebuild modified packages.

---
smart_build: true

Check out our walk-through on how to use smart builds above.