Testing with CI
The primary goal of our CI system is to ensure that the `master` branch of `rust-lang/rust` is always in a valid state and passes our test suite.

From a high-level point of view, when you open a pull request at `rust-lang/rust`, the following will happen:
- A small subset of tests and checks are run after each push to the PR. This should help catch common errors.
- When the PR is approved, the bors bot enqueues the PR into a merge queue.
- Once the PR gets to the front of the queue, bors will create a merge commit and run the full test suite on it. The merge commit either contains only one specific PR or it can be a "rollup" which combines multiple PRs together, to save CI costs.
- Once the whole test suite finishes, two things can happen. Either CI fails with an error that needs to be addressed by the developer, or CI succeeds and the merge commit is then pushed to the `master` branch.
If you want to modify what gets executed on CI, see Modifying CI jobs.
CI workflow
Our CI is primarily executed on GitHub Actions, with a single workflow defined in `.github/workflows/ci.yml`, which contains a bunch of steps that are unified for all CI jobs that we execute. When a commit is pushed to a corresponding branch or a PR, the workflow executes the `calculate-job-matrix.py` script, which dynamically generates the specific CI jobs that should be executed. This script uses the `jobs.yml` file as an input, which contains a declarative configuration of all our CI jobs.
Almost all build steps shell out to separate scripts. This keeps the CI fairly platform independent (i.e., we are not overly reliant on GitHub Actions). GitHub Actions is only relied on for bootstrapping the CI process and for orchestrating the scripts that drive the process.
In essence, all CI jobs run `./x test`, `./x dist` or some other command with different configurations, across various operating systems, targets and platforms. There are two broad categories of jobs that are executed: `dist` and non-`dist` jobs.
- Dist jobs build a full release of the compiler for a specific platform, including all the tools we ship through rustup; those builds are then uploaded to the `rust-lang-ci2` S3 bucket and are available to be locally installed with the rustup-toolchain-install-master tool (see the example after this list). The same builds are also used for actual releases: our release process basically consists of copying those artifacts from `rust-lang-ci2` to the production endpoint and signing them.
- Non-dist jobs run our full test suite on the platform, and the test suite of all the tools we ship through rustup; the amount of stuff we test depends on the platform (for example, some tests are run only on Tier 1 platforms), and some quicker platforms are grouped together on the same builder to avoid wasting CI resources.
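As a rough sketch of how those dist artifacts can be consumed, the commands below install the toolchain produced by a given CI build; the exact flags accepted by rustup-toolchain-install-master may differ, and the SHA is a placeholder:

```sh
# Install rustup-toolchain-install-master from crates.io, then install the
# toolchain built by CI for a specific merge commit (the SHA is a placeholder).
cargo install rustup-toolchain-install-master
rustup-toolchain-install-master <merge-commit-sha>
```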
Based on an input event (usually a push to a branch), we execute one of three kinds of builds (sets of jobs).
- PR builds
- Auto builds
- Try builds
Pull Request builds
After each push to a pull request, a set of `pr` jobs are executed. Currently, these execute the `x86_64-gnu-llvm-X`, `x86_64-gnu-tools`, `mingw-check` and `mingw-check-tidy` jobs, all running on Linux. These execute a relatively short (~30 minutes) and lightweight test suite that should catch common issues. More specifically, they run a set of lints, they try to perform a cross-compile check build to Windows mingw (without producing any artifacts) and they test the compiler using a system version of LLVM. Unfortunately, it would take too many resources to run the full test suite for each commit on every PR.
Note on doc comments
Note that PR CI as of Oct 2024 by default does not try to run `./x doc xxx`. This means that if you have any broken intradoc links that would lead to `./x doc xxx` failing, it will happen very late into the full merge queue CI pipeline. Thus, it is a good idea to run `./x doc xxx` locally for any doc comment changes to help catch these early.
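For example, a minimal local check might look like the following; `library/std` is only an illustrative target, so substitute whatever crate or directory your doc comment changes touch:

```sh
# Build the documentation for a specific tree locally, so that broken
# intra-doc links surface before the merge-queue CI run catches them.
# `library/std` is just an example path.
./x doc library/std
```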
PR jobs are defined in the `pr` section of `jobs.yml`. They run under the `rust-lang/rust` repository, and their results can be observed directly on the PR, in the "CI checks" section at the bottom of the PR page.
Auto builds
Before a commit can be merged into the `master` branch, it needs to pass our complete test suite. We call this an `auto` build. This build runs tens of CI jobs that exercise various tests across operating systems and targets. The full test suite is quite slow; it can take two hours or more until all the `auto` CI jobs finish.
Most platforms only run the build steps, some run a restricted set of tests, and only a subset runs the full suite of tests (see Rust's platform tiers).
Auto jobs are defined in the `auto` section of `jobs.yml`. They are executed on the `auto` branch under the `rust-lang-ci/rust` repository[^1] and their results can be seen here, although usually you will be notified of the result by a comment made by bors on the corresponding PR.

At any given time, at most a single `auto` build is being executed. Find out more here.
[^1]: The `auto` and `try` jobs run under the `rust-lang-ci` fork for historical reasons. This may change in the future.
Try builds
Sometimes we want to run a subset of the test suite on CI for a given PR, or build a set of compiler artifacts from that PR, without attempting to merge it. We call this a "try build". A try build is started after a user with the proper permissions posts a PR comment with the `@bors try` command.
There are several use-cases for try builds:
- Run a set of performance benchmarks using our rustc-perf benchmark suite. For this, a working compiler build is needed, which can be generated with a try build that runs the `dist-x86_64-linux` CI job, which builds an optimized version of the compiler on Linux (this job is currently executed by default when you start a try build). To create a try build and schedule it for a performance benchmark, you can use the `@bors try @rust-timer queue` command combination.
- Check the impact of the PR across the Rust ecosystem, using a crater run. Again, a working compiler build is needed for this, which can be produced by the `dist-x86_64-linux` CI job.
- Run a specific CI job (e.g. Windows tests) on a PR, to quickly test if it passes the test suite executed by that job. You can select which CI jobs will be executed in the try build by adding up to 10 lines containing `try-job: <name of job>` to the PR description. All such specified jobs will be executed in the try build once the `@bors try` command is used on the PR. If no try jobs are specified in this way, the jobs defined in the `try` section of `jobs.yml` will be executed by default.
Using `try-job` PR description directives
1. Identify which set of try-jobs (max 10) you would like to exercise. You can find the names of the CI jobs in `jobs.yml`.
2. Amend the PR description to include (usually at the end of the PR description) e.g.

   ```text
   This PR fixes #123456.

   try-job: x86_64-msvc
   try-job: test-various
   ```

   Each `try-job` directive must be on its own line.
3. Run the prescribed try jobs with `@bors try`. As aforementioned, this requires the user to either (1) have `try` permissions or (2) be delegated `try` permissions with `@bors delegate` by someone who has `try` permissions.

Note that this is usually easier than manually editing `jobs.yml`. However, it can be less flexible, because you cannot adjust the set of tests that are exercised this way.
Try jobs are defined in the `try` section of `jobs.yml`. They are executed on the `try` branch under the `rust-lang-ci/rust` repository[^1] and their results can be seen here, although usually you will be notified of the result by a comment made by bors on the corresponding PR.
Multiple try builds can execute concurrently across different PRs.
Modifying CI jobs
If you want to modify what gets executed on our CI, you can simply modify the `pr`, `auto` or `try` sections of the `jobs.yml` file.
You can also modify what gets executed temporarily, for example to test a particular platform or configuration that is challenging to test locally (for example, if a Windows build fails, but you don't have access to a Windows machine). Don't hesitate to use CI resources in such situations to try out a fix!
You can perform an arbitrary CI job in two ways:
- Use the try build functionality, and specify the CI jobs that you want to be executed in try builds in your PR description.
- Modify the `pr` section of `jobs.yml` to specify which CI jobs should be executed after each push to your PR. This might be faster than repeatedly starting try builds.
To modify the jobs executed after each push to a PR, you can simply copy one of the job definitions from the `auto` section to the `pr` section. For example, the `x86_64-msvc` job is responsible for running the 64-bit MSVC tests. You can copy it to the `pr` section to cause it to be executed after a commit is pushed to your PR, like this:
```yaml
pr:
  ...
  - image: x86_64-gnu-tools
    <<: *job-linux-16c
  # this item was copied from the `auto` section
  # vvvvvvvvvvvvvvvvvv
  - image: x86_64-msvc
    env:
      RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-msvc --enable-profiler
      SCRIPT: make ci-msvc
    <<: *job-windows-8c
```
Then you can commit the file and push it to your PR branch on GitHub. GitHub Actions should then execute this CI job after each push to your PR.
After you have finished your experiments, don't forget to remove any changes you have made to `jobs.yml`, if they were supposed to be temporary!
A good practice is to prefix the PR title with `[WIP]` while you are still running try jobs, and to mark the commit that modifies the CI jobs for testing purposes with `[DO NOT MERGE]`.
Although you are welcome to use CI, just be conscious that this is a shared resource with limited concurrency. Try not to enable too many jobs at once (one or two should be sufficient in most cases).
Merging PRs serially with bors
CI services usually test the last commit of a branch merged with the last commit in `master`, and while that's great to check if the feature works in isolation, it doesn't provide any guarantee the code is going to work once it's merged. Breakages like these usually happen when another, incompatible PR is merged after the build happened.
To ensure a `master` branch that works all the time, we forbid manual merges. Instead, all PRs have to be approved through our bot, bors (the software behind it is called homu). All the approved PRs are put in a merge queue (sorted by priority and creation date) and are automatically tested one at a time. If all the builders are green, the PR is merged; otherwise the failure is recorded and the PR will have to be re-approved again.
Bors doesn't interact with CI services directly, but it works by pushing the merge commit it wants to test to specific branches (like `auto` or `try`), which are configured to execute CI checks. Bors then detects the outcome of the build by listening for either Commit Statuses or Check Runs. Since the merge commit is based on the latest `master` and only one can be tested at the same time, when the results are green, `master` is fast-forwarded to that merge commit.
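Conceptually, the last step is just a git fast-forward; the sketch below only illustrates the idea and is not the actual set of commands bors runs:

```sh
# Illustration only: bors pushes the merge commit it wants to test to the
# `auto` branch, CI runs there, and on success `master` is fast-forwarded.
git checkout auto
git merge --no-ff my-pr-branch   # hypothetical branch name for the approved PR
# ... CI runs against the `auto` branch ...
git checkout master
git merge --ff-only auto         # master now points at the fully tested commit
```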
Unfortunately, testing a single PR at a time, combined with our long CI (~2 hours for a full run), means we can't merge too many PRs in a single day, and a single failure greatly impacts our throughput for the day. The maximum number of PRs we can merge in a day is around 10.
The large CI run times and the requirement for a large builder pool are largely due to the fact that full release artifacts are built in the `dist-` builders. This is worth it because these release artifacts:
- Allow perf testing even at a later date.
- Allow bisection when bugs are discovered later.
- Ensure release quality since if we're always releasing, we can catch problems early.
Rollups
Some PRs don't need the full test suite to be executed: trivial changes like typo fixes or README improvements shouldn't break the build, and testing every single one of them for 2+ hours is a big waste of time. To solve this, we regularly create a "rollup", a PR where we merge several pending trivial PRs so they can be tested together. Rollups are created manually by a team member using the "create a rollup" button on the merge queue. The team member uses their judgment to decide if a PR is risky or not; rollups are the best tool we have at the moment to keep the queue in a manageable state.
Docker
All CI jobs, except those on macOS and Windows, are executed inside that platform’s custom Docker container. This has a lot of advantages for us:
- The build environment is consistent regardless of the changes of the underlying image (switching from the trusty image to xenial was painless for us).
- We can use ancient build environments to ensure maximum binary compatibility, for example using older CentOS releases on our Linux builders.
- We can avoid reinstalling tools (like QEMU or the Android emulator) every time thanks to Docker image caching.
- Users can run the same tests in the same environment locally by just running `src/ci/docker/run.sh image-name`, which is awesome to debug failures (see the example below). Note that there are only Linux docker images available locally due to licensing and other restrictions.
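For instance, reproducing a Linux CI job locally might look like this (`mingw-check` is just an example; pick any image name from `src/ci/docker/`):

```sh
# Build and run the given CI Docker image locally to reproduce a failure.
# `mingw-check` is only an example image name; substitute the one you need.
src/ci/docker/run.sh mingw-check
```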
The docker images prefixed with `dist-` are used for building artifacts, while those without that prefix run tests and checks.
We also run tests for less common architectures (mainly Tier 2 and Tier 3 platforms) in CI. Since those platforms are not x86 we either run everything inside QEMU or just cross-compile if we don’t want to run the tests for that platform.
These builders are running on a special pool of builders set up and maintained for us by GitHub.
Caching
Our CI workflow uses various caching mechanisms, mainly for two things:
Docker images caching
The Docker images we use to run most of the Linux-based builders take a long time to fully build. To speed up the build, we cache them using Docker registry caching, with the intermediate artifacts being stored on ghcr.io. We also push the built Docker images to ghcr.io, so that they can be reused by other tools (rustup) or by developers running the Docker build locally (to speed up their build).
Since we test multiple, diverged branches (`master`, `beta` and `stable`), we can't rely on a single cache for the images, otherwise builds on a branch would override the cache for the others. Instead, we store the images under different tags, identifying them with a custom hash made from the contents of all the Dockerfiles and related scripts.
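A minimal sketch of the idea, assuming a simplified hashing scheme and a hypothetical registry path (the real CI scripts are more involved):

```sh
# Compute a tag from the contents of a job's Docker build context, so that any
# change to its Dockerfile or scripts yields a new tag (simplified example).
TAG=$(find src/ci/docker/host-x86_64/mingw-check -type f | sort \
      | xargs sha256sum | sha256sum | cut -c1-12)

# Reuse the cached image from the registry when it exists, otherwise build it
# and push it back for later runs. The image path below is hypothetical.
docker pull "ghcr.io/rust-lang/ci-image:$TAG" || {
    docker build -t "ghcr.io/rust-lang/ci-image:$TAG" \
        src/ci/docker/host-x86_64/mingw-check
    docker push "ghcr.io/rust-lang/ci-image:$TAG"
}
```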
LLVM caching with sccache
We build some C/C++ stuff in various CI jobs, and we rely on sccache to cache the intermediate LLVM artifacts. Sccache is a distributed ccache developed by Mozilla, which can use an object storage bucket as the storage backend. In our case, the artifacts are uploaded to an S3 bucket that we control (`rust-lang-ci-sccache2`).
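As a rough illustration of how sccache's S3 backend is typically wired up through environment variables (the exact variables and values used by Rust's CI scripts may differ):

```sh
# Point sccache at the S3 bucket and use it as a compiler wrapper so that
# repeated LLVM/C++ compilations can reuse previously built objects.
export SCCACHE_BUCKET=rust-lang-ci-sccache2   # bucket name mentioned above
export SCCACHE_REGION=us-west-1               # hypothetical region
export CC="sccache cc"
export CXX="sccache c++"
```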
Custom tooling around CI
Over the years, we have developed some custom tooling to improve our CI experience.
Rust Log Analyzer to show the error message in PRs
The build logs for `rust-lang/rust` are huge, and it's not practical to find what caused the build to fail by looking at the logs. To improve the developers' experience, we developed a bot called Rust Log Analyzer (RLA) that receives the build logs on failure and extracts the error message automatically, posting it on the PR.
The bot is not hardcoded to look for error strings, but was trained with a bunch of build failures to recognize which lines are common between builds and which are not. While the generated snippets can be weird sometimes, the bot is pretty good at identifying the relevant lines even if it’s an error we've never seen before.
Toolstate to support allowed failures
The `rust-lang/rust` repo doesn't only test the compiler on its CI, but also a variety of tools and documentation. Some documentation is pulled in via git submodules. If we blocked merging rustc PRs on the documentation being fixed, we would be stuck in a chicken-and-egg problem, because the documentation's CI would not pass since updating it would need the not-yet-merged version of rustc to test against (and we usually require CI to be passing).
To avoid the problem, submodules are allowed to fail, and their status is recorded in rust-toolstate. When a submodule breaks, a bot automatically pings the maintainers so they know about the breakage, and it records the failure on the toolstate repository. The release process will then ignore broken tools on nightly, removing them from the shipped nightlies.
While tool failures are allowed most of the time, they’re automatically forbidden a week before a release: we don’t care if tools are broken on nightly but they must work on beta and stable, so they also need to work on nightly a few days before we promote nightly to beta.
More information is available in the toolstate documentation.