Getting Started

Thank you for your interest in contributing to Rust! There are many ways to contribute, and we appreciate all of them.

If this is your first time contributing, the walkthrough chapter can give you a good example of how a typical contribution would go.

This documentation is not intended to be comprehensive; it is meant to be a quick guide for the most useful things. For more information, see this chapter on how to build and run the compiler.

Asking Questions

If you have questions, please make a post on the Rust Zulip server or internals.rust-lang.org. If you are contributing to Rustup, be aware they are not on Zulip - you can ask questions in #wg-rustup on Discord. See the list of teams and working groups and the Community page on the official website for more resources.

As a reminder, all contributors are expected to follow our Code of Conduct.

The compiler team (or t-compiler) usually hangs out in Zulip in this "stream"; it will be easiest to get questions answered there.

Please ask questions! A lot of people report feeling that they are "wasting expert time", but nobody on t-compiler feels this way. Contributors are important to us.

Also, if you feel comfortable, prefer public topics, as this means others can see the questions and answers, and perhaps even integrate them back into this guide :)

Experts

Not all t-compiler members are experts on all parts of rustc; it's a pretty large project. To find out who could have some expertise on different parts of the compiler, consult the triagebot assign groups (the sections that start with [assign* in the triagebot.toml file). But also, feel free to ask questions even if you can't figure out who to ping.

Another way to find experts for a given part of the compiler is to see who has made recent commits. For example, to find people who have recently worked on name resolution since the 1.68.2 release, you could run git shortlog -n 1.68.2.. compiler/rustc_resolve/. Ignore any commits starting with "Rollup merge" or commits by @bors (see CI contribution procedures for more information about these commits).
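
For instance, a similar (hypothetical) invocation for the borrow checker would be:

git shortlog -n 1.68.2.. compiler/rustc_borrowck/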

Etiquette

We do ask that you be mindful to include as much useful information as you can in your question, but we recognize this can be hard if you are unfamiliar with contributing to Rust.

Just pinging someone without providing any context can be a bit annoying and just create noise, so we ask that you be mindful of the fact that the t-compiler folks get a lot of pings in a day.

What should I work on?

The Rust project is quite large and it can be difficult to know which parts of the project need help, or are a good starting place for beginners. Here are some suggested starting places.

Easy or mentored issues

If you're looking for somewhere to start, check out the following issue search. See the Triage chapter for an explanation of these labels. You can also try filtering the search to areas you're interested in. For example:

  • repo:rust-lang/rust-clippy will only show clippy issues
  • label:T-compiler will only show issues related to the compiler
  • label:A-diagnostics will only show diagnostic issues
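
For instance, a full query typed into the GitHub issue search might look like this (the label combination is just an illustration):

is:open is:issue label:T-compiler label:A-diagnostics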

Not all important or beginner work has issue labels. See below for how to find work that isn't labelled.

Recurring work

Some work is too large to be done by a single person. In this case, it's common to have "Tracking issues" to co-ordinate the work between contributors. Here are some example tracking issues where it's easy to pick up work without a large time commitment:

If you find more recurring work, please feel free to add it here!

Clippy issues

The Clippy project has spent a long time making its contribution process as friendly to newcomers as possible. Consider working on it first to get familiar with the process and the compiler internals.

See the Clippy contribution guide for instructions on getting started.

Diagnostic issues

Many diagnostic issues are self-contained and don't need detailed background knowledge of the compiler. You can see a list of diagnostic issues here.

Picking up abandoned pull requests

Sometimes, contributors send a pull request, but later find out that they don't have enough time to work on it, or they simply are not interested in it anymore. Such PRs are often eventually closed and they receive the S-inactive label. You could try to examine some of these PRs and pick up the work. You can find the list of such PRs here.

If the PR has been implemented in some other way in the meantime, the S-inactive label should be removed from it. If not, and it seems that there is still interest in the change, you can try to rebase the pull request on top of the latest master branch and send a new pull request, continuing the work on the feature.
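
For example, one way to pick up such a PR locally might look like this (the PR number and branch name are placeholders):

# fetch the abandoned PR's commits into a local branch
git fetch https://github.com/rust-lang/rust pull/12345/head:pick-up-12345
git switch pick-up-12345
# rebase onto the latest master before opening a new PR
git rebase master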

Contributing to std (standard library)

See std-dev-guide.

Contributing code to other Rust projects

There are a bunch of other projects that you can contribute to outside of the rust-lang/rust repo, including cargo, miri, rustup, and many others.

These repos might have their own contributing guidelines and procedures. Many of them are owned by working groups. For more info, see the documentation in those repos' READMEs.

Other ways to contribute

There are a bunch of other ways you can contribute, especially if you don't feel comfortable jumping straight into the large rust-lang/rust codebase.

The following tasks are doable without much background knowledge but are incredibly helpful:

  • Cleanup crew: find minimal reproductions of ICEs, bisect regressions, etc. This kind of help saves a ton of time for whoever ends up fixing the error later.
  • Writing documentation: if you are feeling a bit more intrepid, you could try to read a part of the code and write doc comments for it. This will help you to learn some part of the compiler while also producing a useful artifact!
  • Triaging issues: categorizing, replicating, and minimizing issues is very helpful to the Rust maintainers.
  • Working groups: there are a bunch of working groups on a wide variety of rust-related things.
  • Answer questions in the Get Help! channels on the Rust Discord server, on users.rust-lang.org, or on StackOverflow.
  • Participate in the RFC process.
  • Find a requested community library, build it, and publish it to Crates.io. Easier said than done, but very, very valuable!

Cloning and Building

See "How to build and run the compiler".

Contributor Procedures

This section has moved to the "Contribution Procedures" chapter.

Other Resources

This section has moved to the "About this guide" chapter.

About this guide

This guide is meant to help document how rustc – the Rust compiler – works, as well as to help new contributors get involved in rustc development.

There are seven parts to this guide:

  1. Building rustc: Contains information that should be useful no matter how you are contributing, about building, debugging, profiling, etc.
  2. Contributing to rustc: Contains information that should be useful no matter how you are contributing, about procedures for contribution, using git and GitHub, stabilizing features, etc.
  3. High-Level Compiler Architecture: Discusses the high-level architecture of the compiler and stages of the compile process.
  4. Source Code Representation: Describes the process of taking raw source code from the user and transforming it into various forms that the compiler can work with easily.
  5. Analysis: Discusses the analyses that the compiler uses to check various properties of the code and inform later stages of the compile process (e.g., type checking).
  6. From MIR to Binaries: How linked executable machine code is generated.
  7. Appendices at the end with useful reference information. There are a few of these with different information, including a glossary.

Constant change

Keep in mind that rustc is a real production-quality product, being worked upon continuously by a sizeable set of contributors. As such, it has its fair share of codebase churn and technical debt. In addition, many of the ideas discussed throughout this guide are idealized designs that are not fully realized yet. All this makes keeping this guide completely up to date on everything very hard!

The Guide itself is of course open-source as well, and the sources can be found at the GitHub repository. If you find any mistakes in the guide, please file an issue about it. Even better, open a PR with a correction!

If you do contribute to the guide, please see the corresponding subsection on writing documentation in this guide.

“‘All conditioned things are impermanent’ — when one sees this with wisdom, one turns away from suffering.” The Dhammapada, verse 277

Other places to find information

You might also find the following sites useful:

  • This guide contains information about how various parts of the compiler work and how to contribute to the compiler.
  • rustc API docs -- rustdoc documentation for the compiler, devtools, and internal tools
  • Forge -- contains documentation about Rust infrastructure, team procedures, and more
  • compiler-team -- the home-base for the Rust compiler team, with description of the team procedures, active working groups, and the team calendar.
  • std-dev-guide -- a similar guide for developing the standard library.
  • The t-compiler zulip
  • #contribute and #wg-rustup on Discord.
  • The Rust Internals forum, a place to ask questions and discuss Rust's internals
  • The Rust reference, even though it doesn't specifically talk about Rust's internals, is a great resource nonetheless
  • Although out of date, Tom Lee's great blog article is very helpful
  • rustaceans.org is helpful, but mostly dedicated to IRC
  • The Rust Compiler Testing Docs
  • For @bors, this cheat sheet is helpful
  • Google is always helpful when programming. You can search all Rust documentation (the standard library, the compiler, the books, the references, and the guides) to quickly find information about the language and compiler.
  • You can also use Rustdoc's built-in search feature to find documentation on types and functions within the crates you're looking at. You can also search by type signature! For example, searching for * -> vec should find all functions that return a Vec<T>. Hint: Find more tips and keyboard shortcuts by typing ? on any Rustdoc page!

How to build and run the compiler

The compiler is built using a tool called x.py. You will need to have Python installed to run it.

Quick Start

For a less in-depth quick-start of getting the compiler running, see quickstart.

Get the source code

The main repository is rust-lang/rust. This contains the compiler, the standard library (including core, alloc, test, proc_macro, etc), and a bunch of tools (e.g. rustdoc, the bootstrapping infrastructure, etc).

The very first step to work on rustc is to clone the repository:

git clone https://github.com/rust-lang/rust.git
cd rust

Partial clone the repository

Due to the size of the repository, cloning on a slower internet connection can take a long time, and requires disk space to store the full history of every file and directory. Instead, it is possible to tell git to perform a partial clone, which will only fully retrieve the current file contents, but will automatically retrieve further file contents when you, e.g., jump back in the history. All git commands will continue to work as usual, at the price of requiring an internet connection to visit not-yet-loaded points in history.

git clone --filter='blob:none' https://github.com/rust-lang/rust.git
cd rust

NOTE: This link describes this type of checkout in more detail, and also compares it to other modes, such as shallow cloning.

Shallow clone the repository

An older alternative to partial clones is to shallow-clone the repository instead. To do so, you can use the --depth N option with the git clone command. This instructs git to perform a "shallow clone", cloning the repository but truncating it to the last N commits.

Passing --depth 1 tells git to clone the repository but truncate the history to the latest commit that is on the master branch, which is usually fine for browsing the source code or building the compiler.

git clone --depth 1 https://github.com/rust-lang/rust.git
cd rust

NOTE: A shallow clone limits which git commands can be run. If you intend to work on and contribute to the compiler, it is generally recommended to fully clone the repository as shown above, or to perform a partial clone instead.

For example, git bisect and git blame require access to the commit history, so they don't work if the repository was cloned with --depth 1.

What is x.py?

x.py is the build tool for the rust repository. It can build docs, run tests, and compile the compiler and standard library.

This chapter focuses on the basics to be productive, but if you want to learn more about x.py, read this chapter.

Also, using x rather than x.py is recommended as:

./x is the most likely to work on every system (on Unix it runs the shell script that does python version detection, on Windows it will probably run the powershell script - certainly less likely to break than ./x.py which often just opens the file in an editor). [1]

(You can find the platform-related scripts alongside x.py, like x.ps1.)

Note that this is not absolute: for instance, when using Nushell in VSCode on Windows 10, typing x or ./x still opens x.py in an editor rather than invoking the program. :)

In the rest of this guide, we use x rather than x.py directly. The following command:

./x check

could be replaced by:

./x.py check

Running x.py

The x.py command can be run directly on most Unix systems in the following format:

./x <subcommand> [flags]

This is how the documentation and examples assume you are running x.py. Some alternative ways are:

# On a Unix shell if you don't have the necessary `python3` command
./x <subcommand> [flags]

# In Windows Powershell (if powershell is configured to run scripts)
./x <subcommand> [flags]
./x.ps1 <subcommand> [flags]

# On the Windows Command Prompt (if .py files are configured to run Python)
x.py <subcommand> [flags]

# You can also run Python yourself, e.g.:
python x.py <subcommand> [flags]

On Windows, the Powershell commands may give you an error that looks like this:

PS C:\Users\vboxuser\rust> ./x
./x : File C:\Users\vboxuser\rust\x.ps1 cannot be loaded because running scripts is disabled on this system. For more
information, see about_Execution_Policies at https:/go.microsoft.com/fwlink/?LinkID=135170.
At line:1 char:1
+ ./x
+ ~~~
    + CategoryInfo          : SecurityError: (:) [], PSSecurityException
    + FullyQualifiedErrorId : UnauthorizedAccess

You can avoid this error by allowing powershell to run local scripts:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

Running x.py slightly more conveniently

There is a binary that wraps x.py called x in src/tools/x. All it does is run x.py, but it can be installed system-wide and run from any subdirectory of a checkout. It also looks up the appropriate version of python to use.

You can install it with cargo install --path src/tools/x.

To clarify: this is a separate, globally installed binary, similar to the scripts described in the What is x.py? section, but it runs x.py as an independent process rather than calling the shell to run the platform-related scripts.
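
For example, once installed, you should be able to run it from any subdirectory of the checkout:

cd compiler/rustc_middle
x check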

Create a config.toml

To start, run ./x setup and select the compiler defaults. This will do some initialization and create a config.toml for you with reasonable defaults. If you use a different default (which you'll likely want to do if you want to contribute to an area of rust other than the compiler, such as rustdoc), make sure to read information about that default (located in src/bootstrap/defaults) as the build process may be different for other defaults.

Alternatively, you can write config.toml by hand. See config.example.toml for all the available settings and explanations of them. See src/bootstrap/defaults for common settings to change.
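
For instance, a minimal hand-written config.toml might contain nothing more than a profile selection (this sketch assumes you want the compiler defaults; see config.example.toml for everything else you can set):

profile = "compiler"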

If you have already built rustc and you change settings related to LLVM, then you may have to execute rm -rf build for subsequent configuration changes to take effect. Note that ./x clean will not cause a rebuild of LLVM.

Common x commands

Here are the basic invocations of the x commands most commonly used when working on rustc, std, rustdoc, and other tools.

Command      When to use it
./x check    Quick check to see if most things compile; rust-analyzer can run this automatically for you
./x build    Builds rustc, std, and rustdoc
./x test     Runs all tests
./x fmt      Formats all code

As written, these commands are reasonable starting points. However, there are additional options and arguments for each of them that are worth learning for serious development work. In particular, ./x build and ./x test provide many ways to compile or test a subset of the code, which can save a lot of time.

Also, note that x supports all kinds of path suffixes for compiler, library, and src/tools directories. So, you can simply run x test tidy instead of x test src/tools/tidy. Or, x build std instead of x build library/std.

See the chapters on testing and rustdoc for more details.

Building the compiler

Note that building will require a relatively large amount of storage space. You may want to have upwards of 10 or 15 gigabytes available to build the compiler.

Once you've created a config.toml, you are now ready to run x. There are a lot of options here, but let's start with what is probably the best "go to" command for building a local compiler:

./x build library

This may look like it only builds the standard library, but that is not the case. What this command does is the following:

  • Build std using the stage0 compiler
  • Build rustc using the stage0 compiler
    • This produces the stage1 compiler
  • Build std using the stage1 compiler

This final product (stage1 compiler + libs built using that compiler) is what you need to build other Rust programs (unless you use #![no_std] or #![no_core]).

You will probably find that building the stage1 std is a bottleneck for you, but fear not, there is a (hacky) workaround... see the section on avoiding rebuilds for std.

Sometimes you don't need a full build. When doing some kind of "type-based refactoring", like renaming a method, or changing the signature of some function, you can use ./x check instead for a much faster build.

Note that this whole command just gives you a subset of the full rustc build. The full rustc build (what you get with ./x build --stage 2 compiler/rustc) has quite a few more steps:

  • Build rustc with the stage1 compiler.
    • The resulting compiler here is called the "stage2" compiler.
  • Build std with stage2 compiler.
  • Build librustdoc and a bunch of other things with the stage2 compiler.

You almost never need to do this.

Build specific components

If you are working on the standard library, you probably don't need to build the compiler unless you are planning to use a recently added nightly feature. Instead, you can just build using the bootstrap compiler.

./x build --stage 0 library

If you choose the library profile when running x setup, you can omit --stage 0 (it's the default).

Creating a rustup toolchain

Once you have successfully built rustc, you will have created a bunch of files in your build directory. In order to actually run the resulting rustc, we recommend creating rustup toolchains. The first one will run the stage1 compiler (which we built above). The second will execute the stage2 compiler (which we did not build, but which you will likely need to build at some point; for example, if you want to run the entire test suite).

rustup toolchain link stage0 build/host/stage0-sysroot # beta compiler + stage0 std
rustup toolchain link stage1 build/host/stage1
rustup toolchain link stage2 build/host/stage2

Now you can run the rustc you built with. If you run with -vV, you should see a version number ending in -dev, indicating a build from your local environment:

$ rustc +stage1 -vV
rustc 1.48.0-dev
binary: rustc
commit-hash: unknown
commit-date: unknown
host: x86_64-unknown-linux-gnu
release: 1.48.0-dev
LLVM version: 11.0

The rustup toolchain points to the specified toolchain compiled in your build directory, so the rustup toolchain will be updated whenever x build or x test are run for that toolchain/stage.

Note: the toolchain we've built does not include cargo. In this case, rustup will fall back to using cargo from the installed nightly, beta, or stable toolchain (in that order). If you need to use unstable cargo flags, be sure to run rustup install nightly if you haven't already. See the rustup documentation on custom toolchains.

Note: rust-analyzer and IntelliJ Rust plugin use a component called rust-analyzer-proc-macro-srv to work with proc macros. If you intend to use a custom toolchain for a project (e.g. via rustup override set stage1) you may want to build this component:

./x build proc-macro-srv-cli

Building targets for cross-compilation

To produce a compiler that can cross-compile for other targets, pass any number of target flags to x build. For example, if your host platform is x86_64-unknown-linux-gnu and your cross-compilation target is wasm32-wasip1, you can build with:

./x build --target x86_64-unknown-linux-gnu,wasm32-wasip1

Note that if you want the resulting compiler to be able to build crates that involve proc macros or build scripts, you must be sure to explicitly build target support for the host platform (in this case, x86_64-unknown-linux-gnu).

If you want to always build for other targets without needing to pass flags to x build, you can configure this in the [build] section of your config.toml like so:

[build]
target = ["x86_64-unknown-linux-gnu", "wasm32-wasip1"]

Note that building for some targets requires having external dependencies installed (e.g. building musl targets requires a local copy of musl). Any target-specific configuration (e.g. the path to a local copy of musl) will need to be provided by your config.toml. Please see config.example.toml for information on target-specific configuration keys.

For examples of the complete configuration necessary to build a target, please visit the rustc book, select any target under the "Platform Support" heading on the left, and see the section related to building a compiler for that target. For targets without a corresponding page in the rustc book, it may be useful to inspect the Dockerfiles that the Rust infrastructure itself uses to set up and configure cross-compilation.

If you have followed the directions from the prior section on creating a rustup toolchain, then once you have built your compiler you will be able to use it to cross-compile like so:

cargo +stage1 build --target wasm32-wasip1

Other x commands

Here are a few other useful x commands. We'll cover some of them in detail in other sections:

  • Building things:
    • ./x build – builds everything using the stage 1 compiler, not just up to std
    • ./x build --stage 2 – builds everything with the stage 2 compiler including rustdoc
  • Running tests (see the section on running tests for more details):
    • ./x test library/std – runs the unit tests and integration tests from std
    • ./x test tests/ui – runs the ui test suite
    • ./x test tests/ui/const-generics - runs all the tests in the const-generics/ subdirectory of the ui test suite
    • ./x test tests/ui/const-generics/const-types.rs - runs the single test const-types.rs from the ui test suite

Cleaning out build directories

Sometimes you need to start fresh, but this is normally not the case. If you need to run this then bootstrap is most likely not acting right and you should file a bug as to what is going wrong. If you do need to clean everything up then you only need to run one command!

./x clean

rm -rf build works too, but then you have to rebuild LLVM, which can take a long time even on fast computers.

Remarks on disk space

Building the compiler (especially if beyond stage 1) can require significant amounts of free disk space, possibly around 100GB. This is compounded if you have a separate build directory for rust-analyzer (e.g. build-rust-analyzer). This is easy to hit with dev-desktops, which have a set disk quota for each user, but it also applies to local development. Occasionally, you may need to:

  • Remove build/ directory.
  • Remove build-rust-analyzer/ directory (if you have a separate rust-analyzer build directory).
  • Uninstall unnecessary toolchains if you use cargo-bisect-rustc. You can check which toolchains are installed with rustup toolchain list.
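
For example, a quick way to audit and prune toolchains (the toolchain name below is a placeholder):

rustup toolchain list
rustup toolchain uninstall <some-unneeded-toolchain>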

[1]: issue#1707

Quickstart

This is a quickstart guide about getting the compiler running. For more information on the individual steps, see the other pages in this chapter.

First, clone the repository:

git clone https://github.com/rust-lang/rust.git
cd rust

When building the compiler, we don't use cargo directly; instead, we use a wrapper called "x". It is invoked with ./x.

We need to create a configuration for the build. Use ./x setup to create a good default.

./x setup

Then, we can build the compiler. Use ./x build to build the compiler, standard library and a few tools. You can also ./x check to just check it. All these commands can take specific components/paths as arguments, for example ./x check compiler to just check the compiler.

./x build

When doing a change to the compiler that does not affect the way it compiles the standard library (so for example, a change to an error message), use --keep-stage-std 1 to avoid recompiling it.
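
For example:

./x build --keep-stage-std 1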

After building the compiler and standard library, you now have a working compiler toolchain. You can use it with rustup by linking it.

rustup toolchain link stage1 build/host/stage1

Now you have a toolchain called stage1 linked to your build. You can use it to test the compiler.

rustc +stage1 testfile.rs

After doing a change, you can run the compiler test suite with ./x test.

./x test runs the full test suite, which is slow and rarely what you want. Usually, ./x test tests/ui is what you want after a compiler change: it runs the UI tests, which invoke the compiler on specific test files and check the output.

./x test tests/ui

Use --bless if you've made a change and want to update the .stderr files with the new output.
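
For example:

./x test tests/ui --bless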

./x suggest can also be helpful for suggesting which tests to run after a change.

Congrats, you are now ready to make a change to the compiler! If you have more questions, the full chapter might contain the answers, and if it doesn't, feel free to ask for help on Zulip.

If you use VSCode, Vim, Emacs or Helix, ./x setup will ask you if you want to set up the editor config. For more information, check out suggested workflows.

Prerequisites

Dependencies

See the rust-lang/rust INSTALL.

Hardware

You will need an internet connection to build. The bootstrapping process involves updating git submodules and downloading a beta compiler. It doesn't need to be super fast, but that can help.

There are no strict hardware requirements, but building the compiler is computationally expensive, so a beefier machine will help, and I wouldn't recommend trying to build on a Raspberry Pi! We recommend the following.

  • 30GB+ of free disk space. Otherwise, you will have to keep clearing incremental caches. More space is better; the compiler is a bit of a hog, and it's a problem we are aware of.
  • 8GB+ RAM
  • 2+ cores. Having more cores really helps. 10 or 20 or more is not too many!

Beefier machines will lead to much faster builds. If your machine is not very powerful, a common strategy is to only use ./x check on your local machine and let the CI build test your changes when you push to a PR branch.

Building the compiler takes more than half an hour on my moderately powerful laptop. We suggest downloading LLVM from CI so you don't have to build it from source (see here).

Like cargo, the build system will use as many cores as possible. Sometimes this can cause you to run low on memory. You can use -j to adjust the number of concurrent jobs. If a full build takes more than ~45 minutes to an hour, you are probably spending most of the time swapping memory in and out; try using -j1.
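
For example, if you are running low on memory, something like this limits the build to a single job (the subcommand here is just an example):

./x build library -j1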

If you don't have too much free disk space, you may want to turn off incremental compilation (see here). This will make compilation take longer (especially after a rebase), but will save a ton of space from the incremental caches.
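
For example, a minimal config.toml sketch to turn incremental compilation off:

[rust]
incremental = false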

Suggested Workflows

The full bootstrapping process takes quite a while. Here are some suggestions to make your life easier.

Installing a pre-push hook

CI will automatically fail your build if it doesn't pass tidy, our internal tool for ensuring code quality. If you'd like, you can install a Git hook that will automatically run ./x test tidy on each push, to ensure your code is up to par. If the hook fails then run ./x test tidy --bless and commit the changes. If you decide later that the pre-push behavior is undesirable, you can delete the pre-push file in .git/hooks.

A prebuilt git hook lives at src/etc/pre-push.sh. It can be copied into your .git/hooks folder as pre-push (without the .sh extension!).
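
For example, one way to install it manually from the root of your checkout:

cp src/etc/pre-push.sh .git/hooks/pre-push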

You can also install the hook as a step of running ./x setup!

Configuring rust-analyzer for rustc

Project-local rust-analyzer setup

rust-analyzer can help you check and format your code whenever you save a file. By default, rust-analyzer runs the cargo check and rustfmt commands, but you can override these commands to use more adapted versions of these tools when hacking on rustc. With custom setup, rust-analyzer can use ./x check to check the sources, and the stage 0 rustfmt to format them.

The default rust-analyzer.check.overrideCommand command line will check all the crates and tools in the repository. If you are working on a specific part, you can override the command to only check the part you are working on to save checking time. For example, if you are working on the compiler, you can override the command to x check compiler --json-output to only check the compiler part. You can run x check --help --verbose to see the available parts.
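
As a sketch, the override might look like this in rust-analyzer's JSON settings (the surrounding file depends on your editor; the arguments simply mirror the command mentioned above):

// e.g. in .vscode/settings.json
{
    "rust-analyzer.check.overrideCommand": [
        "./x", "check", "compiler", "--json-output"
    ]
}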

Running ./x setup editor will prompt you to create a project-local LSP config file for one of the supported editors. You can also create the config file as a step of running ./x setup.

Using a separate build directory for rust-analyzer

By default, when rust-analyzer runs a check or format command, it will share the same build directory as manual command-line builds. This can be inconvenient for two reasons:

  • Each build will lock the build directory and force the other to wait, so it becomes impossible to run command-line builds while rust-analyzer is running commands in the background.
  • There is an increased risk of one of the builds deleting previously-built artifacts due to conflicting compiler flags or other settings, forcing additional rebuilds in some cases.

To avoid these problems:

  • Add --build-dir=build-rust-analyzer to all of the custom x commands in your editor's rust-analyzer configuration. (Feel free to choose a different directory name if desired.)
  • Modify the rust-analyzer.rustfmt.overrideCommand setting so that it points to the copy of rustfmt in that other build directory.
  • Modify the rust-analyzer.procMacro.server setting so that it points to the copy of rust-analyzer-proc-macro-srv in that other build directory.

Using separate build directories for command-line builds and rust-analyzer requires extra disk space, and also means that running ./x clean on the command-line will not clean out the separate build directory. To clean the separate build directory, run ./x clean --build-dir=build-rust-analyzer instead.

Visual Studio Code

Selecting vscode in ./x setup editor will prompt you to create a .vscode/settings.json file which will configure Visual Studio Code. The recommended rust-analyzer settings live at src/etc/rust_analyzer_settings.json.

If running ./x check on save is inconvenient, in VS Code you can use a Build Task instead:

// .vscode/tasks.json
{
    "version": "2.0.0",
    "tasks": [
        {
            "label": "./x check",
            "command": "./x check",
            "type": "shell",
            "problemMatcher": "$rustc",
            "presentation": { "clear": true },
            "group": { "kind": "build", "isDefault": true }
        }
    ]
}

Neovim

For Neovim users there are several options for configuring for rustc. The easiest way is by using neoconf.nvim, which allows for project-local configuration files with the native LSP. The steps for how to use it are below. Note that they require rust-analyzer to already be configured with Neovim. Steps for this can be found here.

  1. First install the plugin. This can be done by following the steps in the README.
  2. Run ./x setup editor, and select vscode to create a .vscode/settings.json file. neoconf is able to read and update rust-analyzer settings automatically when the project is opened when this file is detected.

If you're using coc.nvim, you can run ./x setup editor and select vim to create a .vim/coc-settings.json. The settings can be edited with :CocLocalConfig. The recommended settings live at src/etc/rust_analyzer_settings.json.

Another way is to go without a plugin and create your own logic in your configuration. To do this you must translate the JSON to Lua yourself. The translation is 1:1 and fairly straightforward. It must be put in the ["rust-analyzer"] key of the setup table, which is shown here.

If you would like to use the build task that is described above, you may either make your own command in your config, or you can install a plugin such as overseer.nvim that can read VSCode's task.json files, and follow the same instructions as above.

Emacs

Emacs provides support for rust-analyzer with project-local configuration through Eglot.
Steps for setting up Eglot with rust-analyzer can be found here.
Having set up Emacs & Eglot for Rust development in general, you can run ./x setup editor and select emacs, which will prompt you to create .dir-locals.el with the recommended configuration for Eglot. The recommended settings live at src/etc/rust_analyzer_eglot.el.
For more information on project-specific Eglot configuration, consult the manual.

Helix

Helix comes with built-in LSP and rust-analyzer support.
It can be configured through languages.toml, as described here.
You can run ./x setup editor and select helix, which will prompt you to create languages.toml with the recommended configuration for Helix. The recommended settings live at src/etc/rust_analyzer_helix.toml.

Check, check, and check again

When doing simple refactoring, it can be useful to run ./x check continuously. If you set up rust-analyzer as described above, this will be done for you every time you save a file. Here you are just checking that the compiler can build, but often that is all you need (e.g., when renaming a method). You can then run ./x build when you actually need to run tests.

In fact, it is sometimes useful to put off tests even when you are not 100% sure the code will work. You can then keep building up refactoring commits and only run the tests at some later time. You can then use git bisect to track down precisely which commit caused the problem. A nice side-effect of this style is that you are left with a fairly fine-grained set of commits at the end, all of which build and pass tests. This often helps reviewing.

x suggest

The x suggest subcommand suggests (and runs) a subset of the extensive rust-lang/rust tests based on files you have changed. This is especially useful for new contributors who have not mastered the arcane x flags yet and more experienced contributors as a shorthand for reducing mental effort. In all cases it is useful not to run the full tests (which can take on the order of tens of minutes) and just run a subset which are relevant to your changes. For example, running tidy and linkchecker is useful when editing Markdown files, whereas UI tests are much less likely to be helpful. While x suggest is a useful tool, it does not guarantee perfect coverage (just as PR CI isn't a substitute for bors). See the dedicated chapter for more information and contribution instructions.

Please note that x suggest is in a beta state currently and the tests that it will suggest are limited.

Configuring rustup to use nightly

Some parts of the bootstrap process use pinned, nightly versions of tools like rustfmt. To make things like cargo fmt work correctly in your repo, run

cd <path to rustc repo>
rustup override set nightly

after installing a nightly toolchain with rustup. Don't forget to do this for all directories you have set up a worktree for. You may need to use the pinned nightly version from src/stage0, but often the normal nightly channel will work.

Note: see the section on Visual Studio Code above for how to configure it with the real rustfmt that x uses, and the section on creating a rustup toolchain for how to set up a rustup toolchain for your bootstrapped compiler.

Note: This does not allow you to build rustc with cargo directly. You still have to use x to work on the compiler or standard library; this just lets you use cargo fmt.

Faster builds with --keep-stage

Sometimes just checking whether the compiler builds is not enough. A common example is that you need to add a debug! statement to inspect the value of some state or better understand the problem. In that case, you don't really need a full build. By bypassing bootstrap's cache invalidation, you can often get these builds to complete very fast (e.g., around 30 seconds). The only catch is this requires a bit of fudging and may produce compilers that don't work (but that is easily detected and fixed).

The sequence of commands you want is as follows:

  • Initial build: ./x build library
    • As documented previously, this will build a functional stage1 compiler as part of running all stage0 commands (which include building a std compatible with the stage1 compiler) as well as the first few steps of the "stage 1 actions" up to "stage1 (sysroot stage1) builds std".
  • Subsequent builds: ./x build library --keep-stage 1
    • Note that we added the --keep-stage 1 flag here

As mentioned, the effect of --keep-stage 1 is that we just assume that the old standard library can be re-used. If you are editing the compiler, this is almost always true: you haven't changed the standard library, after all. But sometimes, it's not true: for example, if you are editing the "metadata" part of the compiler, which controls how the compiler encodes types and other states into the rlib files, or if you are editing things that wind up in the metadata (such as the definition of the MIR).

The TL;DR is that you might get weird behavior from a compile when using --keep-stage 1 -- for example, strange ICEs or other panics. In that case, you should simply remove the --keep-stage 1 from the command and rebuild. That ought to fix the problem.

You can also use --keep-stage 1 when running tests. Something like this:

  • Initial test run: ./x test tests/ui
  • Subsequent test run: ./x test tests/ui --keep-stage 1

Using incremental compilation

You can further enable the --incremental flag to save additional time in subsequent rebuilds:

./x test tests/ui --incremental --test-args issue-1234

If you don't want to include the flag with every command, you can enable it in the config.toml:

[rust]
incremental = true

Note that incremental compilation will use more disk space than usual. If disk space is a concern for you, you might want to check the size of the build directory from time to time.

Fine-tuning optimizations

Setting optimize = false makes the compiler too slow for tests. However, to improve the test cycle, you can disable optimizations selectively only for the crates you'll have to rebuild (source). For example, when working on rustc_mir_build, the rustc_mir_build and rustc_driver crates take the most time to incrementally rebuild. You could therefore set the following in the root Cargo.toml:

[profile.release.package.rustc_mir_build]
opt-level = 0
[profile.release.package.rustc_driver]
opt-level = 0

Working on multiple branches at the same time

Working on multiple branches in parallel can be a little annoying, since building the compiler on one branch will cause the old build and the incremental compilation cache to be overwritten. One solution would be to have multiple clones of the repository, but that would mean storing the Git metadata multiple times, and having to update each clone individually.

Fortunately, Git has a better solution called worktrees. This lets you create multiple "working trees", which all share the same Git database. Moreover, because all of the worktrees share the same object database, if you update a branch (e.g. master) in any of them, you can use the new commits from any of the worktrees. One caveat, though, is that submodules do not get shared. They will still be cloned multiple times.

Given you are inside the root directory for your Rust repository, you can create a "linked working tree" in a new "rust2" directory by running the following command:

git worktree add ../rust2

Creating a new worktree for a new branch based on master looks like:

git worktree add -b my-feature ../rust2 master

You can then use that rust2 folder as a separate workspace for modifying and building rustc!

Using nix-shell

If you're using nix, you can use the following nix-shell to work on Rust:

{ pkgs ? import <nixpkgs> {} }:
pkgs.mkShell {
  name = "rustc";
  nativeBuildInputs = with pkgs; [
    binutils cmake ninja pkg-config python3 git curl cacert patchelf nix
  ];
  buildInputs = with pkgs; [
    openssl glibc.out glibc.static
  ];
  # Avoid creating text files for ICEs.
  RUSTC_ICE = "0";
  # Provide `libstdc++.so.6` for the self-contained lld.
  LD_LIBRARY_PATH = "${with pkgs; lib.makeLibraryPath [
    stdenv.cc.cc.lib
  ]}";
}

Note that when using nix on a non-NixOS distribution, it may be necessary to set patch-binaries-for-nix = true in config.toml. Bootstrap tries to detect whether it's running in nix and enable patching automatically, but this detection can have false negatives.
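
A minimal config.toml sketch for forcing it on (this assumes patch-binaries-for-nix lives under the [build] section, as in config.example.toml):

[build]
patch-binaries-for-nix = true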

You can also use your nix shell to manage config.toml:

let
  config = pkgs.writeText "rustc-config" ''
    # Your config.toml content goes here
  '';
in
pkgs.mkShell {
  /* ... */
  # This environment variable tells bootstrap where our config.toml is.
  RUST_BOOTSTRAP_CONFIG = config;
}

Shell Completions

If you use Bash, Fish or PowerShell, you can find automatically-generated shell completion scripts for x.py in src/etc/completions. Zsh support will also be included once issues with clap_complete have been resolved.

You can use source ./src/etc/completions/x.py.<extension> to load completions for your shell of choice, or & .\src\etc\completions\x.py.ps1 for PowerShell. Adding this to your shell's startup script (e.g. .bashrc) will automatically load this completion.

Build distribution artifacts

You might want to build and package up the compiler for distribution. You’ll want to run this command to do it:

./x dist

Install from source

You might want to prefer installing Rust (and tools configured in your configuration) by building from source. If so, you want to run this command:

./x install

Note: If you are testing out a modification to a compiler, you might want to build the compiler (with ./x build) then create a toolchain as discussed in here.

For example, if the toolchain you created is called "foo", you would then invoke it with rustc +foo ... (where ... represents the rest of the arguments).

Instead of installing Rust (and tools in your config file) globally, you can set the DESTDIR environment variable to change the installation path. If you want to set installation paths more dynamically, you should prefer the install options in your config file to achieve that.
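
For example (the path is a placeholder):

DESTDIR=/path/to/install ./x install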

Building documentation

This chapter describes how to build documentation of toolchain components, like the standard library (std) or the compiler (rustc).

  • Document everything

    This uses rustdoc from the beta toolchain, so it will produce (slightly) different output from stage 1 rustdoc, as rustdoc is under active development:

    ./x doc
    

    If you want to be sure the documentation looks the same as on CI:

    ./x doc --stage 1
    

    This ensures that (current) rustdoc gets built, then that is used to document the components.

  • Much like running individual tests or building specific components, you can build just the documentation you want:

    ./x doc src/doc/book
    ./x doc src/doc/nomicon
    ./x doc compiler library
    

    See the nightly docs index page for a full list of books.

  • Document internal rustc items

    Compiler documentation is not built by default. To create it by default with x doc, modify config.toml:

    [build]
    compiler-docs = true
    

    Note that when enabled, documentation for internal compiler items will also be built.

    NOTE: The documentation for the compiler is found at this link.

Rustdoc overview

rustdoc lives in-tree with the compiler and standard library. This chapter is about how it works. For information about Rustdoc's features and how to use them, see the Rustdoc book. For more details about how rustdoc works, see the "Rustdoc internals" chapter.

rustdoc uses rustc internals (and, of course, the standard library), so you will have to build the compiler and std once before you can build rustdoc.

Rustdoc is implemented entirely within the crate librustdoc. It runs the compiler up to the point where we have an internal representation of a crate (HIR) and the ability to run some queries about the types of items. HIR and queries are discussed in the linked chapters.

librustdoc performs two major steps after that to render a set of documentation:

  • "Clean" the AST into a form that's more suited to creating documentation (and slightly more resistant to churn in the compiler).
  • Use this cleaned AST to render a crate's documentation, one page at a time.

Naturally, there's more than just this, and those descriptions simplify out lots of details, but that's the high-level overview.

(Side note: librustdoc is a library crate! The rustdoc binary is created using the project in src/tools/rustdoc. Note that literally all that does is call the main() that's in this crate's lib.rs, though.)

Cheat sheet

  • Run ./x setup tools before getting started. This will configure x with nice settings for developing rustdoc and other tools, including downloading a copy of rustc rather than building it.
  • Use ./x check rustdoc to quickly check for compile errors.
  • Use ./x build library rustdoc to make a usable rustdoc you can run on other projects.
    • Add library/test to be able to use rustdoc --test.
    • Run rustup toolchain link stage2 build/host/stage2 to add a custom toolchain called stage2 to your rustup environment. After running that, cargo +stage2 doc in any directory will build with your locally-compiled rustdoc.
  • Use ./x doc library to use this rustdoc to generate the standard library docs.
    • The completed docs will be available in build/host/doc (under core, alloc, and std).
    • If you want to copy those docs to a webserver, copy all of build/host/doc, since that's where the CSS, JS, fonts, and landing page are.
  • Use ./x test tests/rustdoc* to run the tests using a stage1 rustdoc.

Code structure

  • All paths in this section are relative to src/librustdoc in the rust-lang/rust repository.
  • Most of the HTML printing code is in html/format.rs and html/render/mod.rs. It's in a bunch of fmt::Display implementations and supplementary functions.
  • The types that got Display impls above are defined in clean/mod.rs, right next to the custom Clean trait used to process them out of the rustc HIR.
  • The bits specific to using rustdoc as a test harness are in doctest.rs.
  • The Markdown renderer is loaded up in html/markdown.rs, including functions for extracting doctests from a given block of Markdown.
  • The tests on the structure of rustdoc HTML output are located in tests/rustdoc, where they're handled by the test runner of bootstrap and the supplementary script src/etc/htmldocck.py.

Tests

  • All paths in this section are relative to tests in the rust-lang/rust repository.
  • Tests on search engine and index are located in rustdoc-js and rustdoc-js-std. The format is specified in the search guide.
  • Tests on the "UI" of rustdoc (the terminal output it produces when run) are in rustdoc-ui
  • Tests on the "GUI" of rustdoc (the HTML, JS, and CSS as rendered in a browser) are in rustdoc-gui. These use a NodeJS tool called browser-UI-test that uses puppeteer to run tests in a headless browser and check rendering and interactivity.

Constraints

We try to make rustdoc work reasonably well with JavaScript disabled, and when browsing local files. We support these browsers.

Supporting local files (file:/// URLs) brings some surprising restrictions. Certain browser features that require secure origins, like localStorage and Service Workers, don't work reliably. We can still use such features but we should make sure pages are still usable without them.

Multiple runs, same output directory

Rustdoc can be run multiple times for varying inputs, with its output set to the same directory. That's how cargo produces documentation for dependencies of the current crate. It can also be done manually if a user wants a big documentation bundle with all of the docs they care about.

HTML is generated independently for each crate, but there is some cross-crate information that we update as we add crates to the output directory:

  • crates<SUFFIX>.js holds a list of all crates in the output directory.
  • search-index<SUFFIX>.js holds a list of all searchable items.
  • For each trait, there is a file under implementors/.../trait.TraitName.js containing a list of implementors of that trait. The implementors may be in different crates than the trait, and the JS file is updated as we discover new ones.

Use cases

There are a few major use cases for rustdoc that you should keep in mind when working on it:

Standard library docs

These are published at https://doc.rust-lang.org/std as part of the Rust release process. Stable releases are also uploaded to specific versioned URLs like https://doc.rust-lang.org/1.57.0/std/. Beta and nightly docs are published to https://doc.rust-lang.org/beta/std/ and https://doc.rust-lang.org/nightly/std/. The docs are uploaded with the promote-release tool and served from S3 with CloudFront.

The standard library docs contain five crates: alloc, core, proc_macro, std, and test.

docs.rs

When crates are published to crates.io, docs.rs automatically builds and publishes their documentation, for instance at https://docs.rs/serde/latest/serde/. It always builds with the current nightly rustdoc, so any changes you land in rustdoc are "insta-stable" in that they will have an immediate public effect on docs.rs. Old documentation is not rebuilt, so you will see some variation in UI when browsing old releases in docs.rs. Crate authors can request rebuilds, which will be run with the latest rustdoc.

Docs.rs performs some transformations on rustdoc's output in order to save storage and display a navigation bar at the top. In particular, certain static files, like main.js and rustdoc.css, may be shared across multiple invocations of the same version of rustdoc. Others, like crates.js and sidebar-items.js, are different for different invocations. Still others, like fonts, will never change. These categories are distinguished using the SharedResource enum in src/librustdoc/html/render/write_shared.rs.

Documentation on docs.rs is always generated for a single crate at a time, so the search and sidebar functionality don't include dependencies of the current crate.

Locally generated docs

Crate authors can run cargo doc --open in crates they have checked out locally to see the docs. This is useful to check that the docs they are writing are useful and display correctly. It can also be useful for people to view documentation on crates they aren't authors of, but want to use. In both cases, people may pass the --document-private-items flag to cargo doc to see private methods, fields, and so on, which are normally not displayed.

By default cargo doc will generate documentation for a crate and all of its dependencies. That can result in a very large documentation bundle, with a large (and slow) search corpus. The Cargo flag --no-deps inhibits that behavior and generates docs for just the crate.

Self-hosted project docs

Some projects like to host their own documentation. For example: https://docs.serde.rs/. This is easy to do by locally generating docs, and simply copying them to a web server. Rustdoc's HTML output can be extensively customized by flags. Users can add a theme, set the default theme, and inject arbitrary HTML. See rustdoc --help for details.

Adding a new target

These are a set of steps to add support for a new target. There are numerous end states and paths to get there, so not all sections may be relevant to your desired goal.

Specifying a new LLVM

For very new targets, you may need to use a different fork of LLVM than what is currently shipped with Rust. In that case, navigate to the src/llvm-project git submodule (you might need to run ./x check at least once so the submodule is updated), check out the appropriate commit for your fork, then commit that new submodule reference in the main Rust repository.

An example would be:

cd src/llvm-project
git remote add my-target-llvm some-llvm-repository
git checkout my-target-llvm/my-branch
cd ..
git add llvm-project
git commit -m 'Use my custom LLVM'

Using pre-built LLVM

If you have a local LLVM checkout that is already built, you may be able to configure Rust to treat your build as the system LLVM to avoid redundant builds.

You can tell Rust to use a pre-built version of LLVM using the target section of config.toml:

[target.x86_64-unknown-linux-gnu]
llvm-config = "/path/to/llvm/llvm-7.0.1/bin/llvm-config"

If you are attempting to use a system LLVM, we have observed the following paths before, though they may be different on your system:

  • /usr/bin/llvm-config-8
  • /usr/lib/llvm-8/bin/llvm-config

Note that you need to have the LLVM FileCheck tool installed, which is used for codegen tests. This tool is normally built with LLVM, but if you use your own preinstalled LLVM, you will need to provide FileCheck in some other way. On Debian-based systems, you can install the llvm-N-tools package (where N is the LLVM version number, e.g. llvm-8-tools). Alternatively, you can specify the path to FileCheck with the llvm-filecheck config item in config.toml, or you can disable the codegen tests with the codegen-tests item in config.toml.
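
For example, a config.toml sketch for an external FileCheck (the path is illustrative, and this assumes llvm-filecheck is a target-specific option, as in config.example.toml):

[target.x86_64-unknown-linux-gnu]
llvm-config = "/usr/lib/llvm-8/bin/llvm-config"
llvm-filecheck = "/usr/lib/llvm-8/bin/FileCheck"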

Creating a target specification

You should start with a target JSON file. You can see the specification for an existing target using --print target-spec-json:

rustc -Z unstable-options --target=wasm32-unknown-unknown --print target-spec-json

Save that JSON to a file and modify it as appropriate for your target.
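
For example (the output filename is just a placeholder):

rustc -Z unstable-options --target=wasm32-unknown-unknown --print target-spec-json > my-new-target.json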

Adding a target specification

Once you have filled out a JSON specification and been able to compile somewhat successfully, you can copy the specification into the compiler itself.

You will need to add a line to the big table inside of the supported_targets macro in the rustc_target::spec module. You will then add a corresponding file for your new target containing a target function.

Look for existing targets to use as examples.

After adding your target to the rustc_target crate you may want to add core, std, ... with support for your new target. In that case you will probably need access to some target_* cfg. Unfortunately, when building with stage0 (the beta compiler), you'll get an error that the target cfg is unexpected, because stage0 doesn't know about the new target specification and we pass --check-cfg, which tells it to check cfg values against the ones it knows about.

To fix the errors you will need to manually add the unexpected value to the different Cargo.toml in library/{std,alloc,core}/Cargo.toml. Here is an example for adding NEW_TARGET_ARCH as target_arch:

library/std/Cargo.toml:

  [lints.rust.unexpected_cfgs]
  level = "warn"
  check-cfg = [
      'cfg(bootstrap)',
-      'cfg(target_arch, values("xtensa"))',
+      # #[cfg(bootstrap)] NEW_TARGET_ARCH
+      'cfg(target_arch, values("xtensa", "NEW_TARGET_ARCH"))',

To use this target in bootstrap, we need to explicitly add the target triple to the STAGE0_MISSING_TARGETS list in src/bootstrap/src/core/sanity.rs. This is necessary because the default compiler that bootstrap uses does not recognize the new target we just added. Therefore, it should be added to STAGE0_MISSING_TARGETS so that bootstrap is aware that this target is not yet supported by the stage0 compiler.

const STAGE0_MISSING_TARGETS: &[&str] = &[
+   "NEW_TARGET_TRIPLE"
];

Patching crates

You may need to make changes to crates that the compiler depends on, such as libc or cc. If so, you can use Cargo's [patch] ability. For example, if you want to use an unreleased version of libc, you can add it to the top-level Cargo.toml file:

diff --git a/Cargo.toml b/Cargo.toml
index 1e83f05e0ca..4d0172071c1 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -113,6 +113,8 @@ cargo-util = { path = "src/tools/cargo/crates/cargo-util" }
 [patch.crates-io]
+libc = { git = "https://github.com/rust-lang/libc", rev = "0bf7ce340699dcbacabdf5f16a242d2219a49ee0" }

 # See comments in `src/tools/rustc-workspace-hack/README.md` for what's going on
 # here
 rustc-workspace-hack = { path = 'src/tools/rustc-workspace-hack' }

After this, run cargo update -p libc to update the lockfiles.

Beware that if you patch to a local path dependency, this will enable warnings for that dependency. Some dependencies are not warning-free, and due to the deny-warnings setting in config.toml, the build may suddenly start to fail. To work around warnings, you may want to:

  • Modify the dependency to remove the warnings
  • Or for local development purposes, suppress the warnings by setting deny-warnings = false in config.toml:

# config.toml
[rust]
deny-warnings = false

Cross-compiling

Once you have a target specification in JSON and in the code, you can cross-compile rustc:

DESTDIR=/path/to/install/in \
./x install -i --stage 1 --host aarch64-apple-darwin.json --target aarch64-apple-darwin \
compiler/rustc library/std

If your target specification is already available in the bootstrap compiler, you can use it instead of the JSON file for both arguments.
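For example, if the aarch64-apple-darwin triple is already known to the bootstrap compiler, the invocation above might instead look like this (paths are illustrative):

DESTDIR=/path/to/install/in \
./x install -i --stage 1 --host aarch64-apple-darwin --target aarch64-apple-darwin \
compiler/rustc library/std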

Promoting a target from tier 2 (target) to tier 2 (host)

There are two levels of tier 2 targets:

  a) Targets that are only cross-compiled (rustup target add)
  b) Targets that have a native toolchain (rustup toolchain install)

For an example of promoting a target from cross-compiled to native, see #75914.

Optimized build of the compiler

There are multiple additional build configuration options and techniques that can be used to compile a build of rustc that is as optimized as possible (for example when building rustc for a Linux distribution). The status of these configuration options for various Rust targets is tracked here. This page describes how you can use these approaches when building rustc yourself.

Link-time optimization is a powerful compiler technique that can increase program performance. To enable (Thin-)LTO when building rustc, set the rust.lto config option to "thin" in config.toml:

[rust]
lto = "thin"

Note that LTO for rustc is currently supported and tested only for the x86_64-unknown-linux-gnu target. Other targets may work, but no guarantees are provided. Notably, LTO-optimized rustc currently produces miscompilations on Windows.

Enabling LTO on Linux has produced speed-ups of up to 10%.

Memory allocator

Using a different memory allocator for rustc can provide significant performance benefits. If you want to enable the jemalloc allocator, you can set the rust.jemalloc option to true in config.toml:

[rust]
jemalloc = true

Note that this option is currently only supported for Linux and macOS targets.

Codegen units

Reducing the number of codegen units per rustc crate can produce a faster build of the compiler. You can modify the number of codegen units for rustc and libstd in config.toml with the following options:

[rust]
codegen-units = 1
codegen-units-std = 1

Instruction set

By default, rustc is compiled for a generic (and conservative) instruction set architecture (depending on the selected target), to make it support as many CPUs as possible. If you want to compile rustc for a specific instruction set architecture, you can set the target-cpu compiler option in RUSTFLAGS:

RUSTFLAGS="-C target_cpu=x86-64-v3" ./x build ...

If you also want to compile LLVM for a specific instruction set, you can set llvm flags in config.toml:

[llvm]
cxxflags = "-march=x86-64-v3"
cflags = "-march=x86-64-v3"

Profile-guided optimization

Applying profile-guided optimizations (or more generally, feedback-directed optimizations) can produce a large increase in rustc performance, up to 15% (1, 2). However, these techniques are not simply enabled by a configuration option; they require a complex build workflow that compiles rustc multiple times and profiles it on selected benchmarks.

There is a tool called opt-dist that is used to optimize rustc with PGO (profile-guided optimizations) and BOLT (a post-link binary optimizer) for builds distributed to end users. You can examine the tool, which is located in src/tools/opt-dist, and build a custom PGO build workflow based on it, or try to use it directly. Note that the tool is currently quite hardcoded to the way we use it in Rust's continuous integration workflows, and it might require some custom changes to make it work in a different environment.

To use the tool, you will need to provide some external dependencies:

  • A Python3 interpreter (for executing x.py).
  • Compiled LLVM toolchain, with the llvm-profdata binary. Optionally, if you want to use BOLT, the llvm-bolt and merge-fdata binaries have to be available in the toolchain.

These dependencies are provided to opt-dist by an implementation of the Environment struct. It specifies the directories where the PGO/BOLT pipeline will take place, as well as external dependencies like Python or LLVM.

Here is an example of how opt-dist can be used locally (outside of CI):

  1. Build the tool with the following command:
    ./x build tools/opt-dist
    
  2. Run the tool with the local mode and provide necessary parameters:
    ./build/host/stage0-tools-bin/opt-dist local \
      --target-triple <target> \ # select target, e.g. "x86_64-unknown-linux-gnu"
      --checkout-dir <path>    \ # path to rust checkout, e.g. "."
      --llvm-dir <path>        \ # path to built LLVM toolchain, e.g. "/foo/bar/llvm/install"
      -- python3 x.py dist       # pass the actual build command
    
    You can run --help to see further parameters that you can modify.

Note: if you want to run the actual CI pipeline, instead of running opt-dist locally, you can execute DEPLOY=1 src/ci/docker/run.sh dist-x86_64-linux.

Testing the compiler

The Rust project runs a wide variety of different tests, orchestrated by the build system (./x test). This section gives a brief overview of the different testing tools. Subsequent chapters dive into running tests and adding new tests.

Kinds of tests

There are several kinds of tests to exercise things in the Rust distribution. Almost all of them are driven by ./x test, with some exceptions noted below.

Compiletest

The main test harness for testing the compiler itself is a tool called compiletest.

compiletest supports running different styles of tests, organized into test suites. A test mode may provide common presets/behavior for a set of test suites. compiletest-supported tests are located in the tests directory.

The Compiletest chapter goes into detail on how to use this tool.

Example: ./x test tests/ui

Package tests

The standard library and many of the compiler packages include typical Rust #[test] unit tests, integration tests, and documentation tests. You can pass a path to ./x test for almost any package in the library/ or compiler/ directory, and x will essentially run cargo test on that package.

Examples:

| Command | Description |
|---------|-------------|
| ./x test library/std | Runs tests on std only |
| ./x test library/core | Runs tests on core only |
| ./x test compiler/rustc_data_structures | Runs tests on rustc_data_structures |

The standard library relies very heavily on documentation tests to cover its functionality. However, unit tests and integration tests can also be used as needed. Almost all of the compiler packages have doctests disabled.

All standard library and compiler unit tests are placed in a separate tests file (which is enforced by tidy). This ensures that when the test file is changed, the crate does not need to be recompiled. For example:

#[cfg(test)]
mod tests;

If it weren't done this way, working on something like core would require recompiling the entire standard library, and the entirety of rustc.

./x test includes some CLI options for controlling the behavior of these package tests (see the example after the list):

  • --doc — Only runs documentation tests in the package.
  • --no-doc — Run all tests except documentation tests.
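For example, to run only the documentation tests for the standard library, or everything except them:

./x test library/std --doc
./x test library/std --no-doc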

Tidy

Tidy is a custom tool used for validating source code style and formatting conventions, such as rejecting long lines. There is more information in the section on coding conventions.

Example: ./x test tidy

Formatting

Rustfmt is integrated with the build system to enforce uniform style across the compiler. The formatting check is automatically run by the Tidy tool mentioned above.

Examples:

| Command | Description |
|---------|-------------|
| ./x fmt --check | Checks formatting and exits with an error if formatting is needed. |
| ./x fmt | Runs rustfmt across the entire codebase. |
| ./x test tidy --bless | First runs rustfmt to format the codebase, then runs tidy checks. |

Book documentation tests

All of the books that are published have their own tests, primarily for validating that the Rust code examples pass. Under the hood, these are essentially using rustdoc --test on the markdown files. The tests can be run by passing a path to a book to ./x test.

Example: ./x test src/doc/book

Links across all documentation are validated with a link checker tool.

Example: ./x test src/tools/linkchecker

Example: ./x test linkchecker

This requires building all of the documentation, which might take a while.

Dist check

distcheck verifies that the source distribution tarball created by the build system will unpack, build, and run all tests.

Example: ./x test distcheck

Tool tests

Packages that are included with Rust have all of their tests run as well. This includes things such as cargo, clippy, rustfmt, miri, bootstrap (testing the Rust build system itself), etc.

Most of the tools are located in the src/tools directory. To run the tool's tests, just pass its path to ./x test.

Example: ./x test src/tools/cargo

Usually these tools involve running cargo test within the tool's directory.

If you want to run only a specified set of tests, append --test-args FILTER_NAME to the command.

Example: ./x test src/tools/miri --test-args padding

In CI, some tools are allowed to fail. Failures send notifications to the corresponding teams and are tracked on the toolstate website. More information can be found in the toolstate documentation.

Ecosystem testing

Rust tests integration with real-world code to catch regressions and make informed decisions about the evolution of the language. There are several kinds of ecosystem tests, including Crater. See the Ecosystem testing chapter for more details.

Performance testing

A separate infrastructure is used for testing and tracking performance of the compiler. See the Performance testing chapter for more details.

Miscellaneous information

There is some other useful testing-related information in Misc info.

Further reading

The following blog posts may also be of interest:

Running tests

You can run the entire test collection using x, but doing so is almost never what you want during local development because it takes a really long time. For local development, see the subsection below on how to run a subset of tests.

Running plain `./x test` will build the stage 1 compiler and then run the whole test suite. This includes not only `tests/`, but also `library/`, `compiler/`, `src/tools/` package tests, and more.

You usually only want to run a subset of the test suites (or even a smaller set of tests within them) that you expect will exercise your changes. PR CI exercises a subset of test collections, and merge queue CI will exercise the full test collection.

./x test

The test results are cached and previously successful tests are ignored during testing. The stdout/stderr contents as well as a timestamp file for every test can be found under build/<target-triple>/test/ for the given <target-triple>. To force-rerun a test (e.g. in case the test runner fails to notice a change) you can use the --force-rerun CLI option.
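For example, to force the ui suite to run again even though none of its inputs have changed:

./x test tests/ui --force-rerun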

Note on requirements of external dependencies

Some test suites may require external dependencies. This is especially true of debuginfo tests. Some debuginfo tests require a Python-enabled gdb. You can test whether your gdb install supports Python by using the python command from within gdb. Once invoked, you can type some Python code (e.g. print("hi")) followed by return and then CTRL+D to execute it. If you are building gdb from source, you will need to configure it with --with-python=<path-to-python-binary>.
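A quick interactive check might look something like this (the exact prompts depend on your gdb build):

$ gdb
(gdb) python
>print("hi")
># press CTRL+D to execute the block
hi
(gdb) quit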

Running a subset of the test suites

When working on a specific PR, you will usually want to run a smaller set of tests. For example, a good "smoke test" that can be used after modifying rustc to see if things are generally working correctly would be to exercise the ui test suite (tests/ui):

./x test tests/ui

This will run the ui test suite. Of course, the choice of test suites is somewhat arbitrary, and may not suit the task you are doing. For example, if you are hacking on debuginfo, you may be better off with the debuginfo test suite:

./x test tests/debuginfo

If you only need to test a specific subdirectory of tests for any given test suite, you can pass that directory as a filter to ./x test:

./x test tests/ui/const-generics

Note for MSYS2

On MSYS2 the paths seem to be strange and ./x test recognizes neither tests/ui/const-generics nor tests\ui\const-generics. In that case, you can work around it by using e.g. ./x test ui --test-args="tests/ui/const-generics".

Likewise, you can test a single file by passing its path:

./x test tests/ui/const-generics/const-test.rs

x doesn't support running a single tool test by passing its path yet. You'll have to use the --test-args argument as described below.

./x test src/tools/miri --test-args tests/fail/uninit/padding-enum.rs

Run only the tidy script

./x test tidy

Run tests on the standard library

./x test --stage 0 library/std

Note that this only runs tests on std; if you want to test core or other crates, you have to specify those explicitly.

Run the tidy script and tests on the standard library

./x test --stage 0 tidy library/std

Run tests on the standard library using a stage 1 compiler

./x test --stage 1 library/std

By listing which test suites you want to run you avoid having to run tests for components you did not change at all.

Note that bors only runs the tests with the full stage 2 build; therefore, while the tests **usually** work fine with stage 1, there are some limitations.

Run all tests using a stage 2 compiler

./x test --stage 2
You almost never need to do this; CI will run these tests for you.

Run unit tests on the compiler/library

You may want to run unit tests on a specific file with the following:

./x test compiler/rustc_data_structures/src/thin_vec/tests.rs

Unfortunately, that is not supported; you should invoke the following instead:

./x test compiler/rustc_data_structures/ --test-args thin_vec

Running an individual test

Another common thing that people want to do is run an individual test, often the test they are trying to fix. As mentioned earlier, you can pass the full file path to achieve this, or alternatively invoke x with the --test-args option:

./x test tests/ui --test-args issue-1234

Under the hood, the test runner invokes the standard Rust test runner (the same one you get with #[test]), so this command would wind up filtering for tests that include "issue-1234" in the name. Thus, --test-args is a good way to run a collection of related tests.

Passing arguments to rustc when running tests

It can sometimes be useful to run some tests with specific compiler arguments, without using RUSTFLAGS (during development of unstable features, with -Z flags, for example).

This can be done with ./x test's --compiletest-rustc-args option, to pass additional arguments to the compiler when building the tests.
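For example, to run the ui suite while passing an extra -Z flag to rustc (the specific flag here is only illustrative):

./x test tests/ui --compiletest-rustc-args "-Zmir-opt-level=4"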

Editing and updating the reference files

If you have changed the compiler's output intentionally, or you are making a new test, you can pass --bless to the test subcommand. E.g. if some tests in tests/ui are failing, you can run

./x test tests/ui --bless

to automatically adjust the .stderr, .stdout or .fixed files of all tests. Of course you can also target just specific tests with the --test-args your_test_name flag, just like when running the tests.
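For example, to re-bless only the tests whose names match a given filter (the filter is illustrative):

./x test tests/ui --bless --test-args your_test_name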

Configuring test running

There are a few options for running tests:

  • config.toml has the rust.verbose-tests option. If false (the default), each test just prints a single dot, which corresponds to the --quiet option of the Rust test harness. If true, the name of every test will be printed.
  • The environment variable RUST_TEST_THREADS can be set to the number of concurrent threads to use for testing; see the example below.
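For example, to cap the test harness at four concurrent threads (the number is arbitrary):

RUST_TEST_THREADS=4 ./x test library/std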

Passing --pass $mode

Pass UI tests now have three modes: check-pass, build-pass, and run-pass. When --pass $mode is passed, these tests will be forced to run under the given $mode unless the directive //@ ignore-pass exists in the test file. For example, you can run all the tests in tests/ui as check-pass:

./x test tests/ui --pass check

By passing --pass $mode, you can reduce the testing time. For each mode, please see Controlling pass/fail expectations.

Running tests with different "compare modes"

UI tests may have different output depending on certain "modes" that the compiler is in. For example, when using the Polonius mode, a test foo.rs will first look for expected output in foo.polonius.stderr, falling back to the usual foo.stderr if not found. The following will run the UI test suite in Polonius mode:

./x test tests/ui --compare-mode=polonius

See Compare modes for more details.

Running tests manually

Sometimes it's easier and faster to just run the test by hand. Most tests are just .rs files, so after creating a rustup toolchain, you can do something like:

rustc +stage1 tests/ui/issue-1234.rs

This is much faster, but doesn't always work. For example, some tests include directives that specify particular compiler flags or that rely on other crates, and they may not run the same way without those options.
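For example, a test that carries an //@ edition:2018 directive needs the corresponding flag passed by hand (the file path here is only illustrative):

rustc +stage1 --edition=2018 tests/ui/async-await/await-without-async.rs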

Running run-make tests

Windows

Running the run-make test suite on Windows is currently a bit more involved. There are numerous prerequisites and environmental requirements:

  • Install msys2: https://www.msys2.org/
  • Specify MSYS2_PATH_TYPE=inherit in msys2.ini in the msys2 installation directory, then run the following with MSYS2 MSYS:
    • pacman -Syuu
    • pacman -S make
    • pacman -S diffutils
    • pacman -S binutils
    • ./x test run-make (./x test tests/run-make doesn't work)

There is on-going work to not rely on Makefiles in the run-make test suite. Once this work is completed, you can run the entire run-make test suite on native Windows inside cmd or PowerShell without needing to install and use MSYS2. As of Oct 2024, it is already possible to run the vast majority of the run-make test suite outside of MSYS2, but there will be failures for the tests that still use Makefiles due to not finding make.

Running tests on a remote machine

Tests may be run on a remote machine (e.g. to test builds for a different architecture). This is done using remote-test-client on the build machine to send test programs to remote-test-server running on the remote machine. remote-test-server executes the test programs and sends the results back to the build machine. remote-test-server provides unauthenticated remote code execution so be careful where it is used.

To do this, first build remote-test-server for the remote machine, e.g. for RISC-V

./x build src/tools/remote-test-server --target riscv64gc-unknown-linux-gnu

The binary will be created at ./build/host/stage2-tools/$TARGET_ARCH/release/remote-test-server. Copy this over to the remote machine.
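For example, using scp (the user, address, and destination are hypothetical; the address matches the example further below):

scp ./build/host/stage2-tools/riscv64gc-unknown-linux-gnu/release/remote-test-server user@1.2.3.4:~/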

On the remote machine, run the remote-test-server with the --bind 0.0.0.0:12345 flag (and optionally -v for verbose output). Output should look like this:

$ ./remote-test-server -v --bind 0.0.0.0:12345
starting test server
listening on 0.0.0.0:12345!

Note that binding the server to 0.0.0.0 will allow all hosts able to reach your machine to execute arbitrary code on your machine. We strongly recommend either setting up a firewall to block external access to port 12345, or to use a more restrictive IP address when binding.

You can test if the remote-test-server is working by connecting to it and sending ping\n. It should reply pong:

$ nc $REMOTE_IP 12345
ping
pong

To run tests using the remote runner, set the TEST_DEVICE_ADDR environment variable, then use x as usual. For example, to run ui tests for a RISC-V machine with the IP address 1.2.3.4, use:

export TEST_DEVICE_ADDR="1.2.3.4:12345"
./x test tests/ui --target riscv64gc-unknown-linux-gnu

If remote-test-server was run with the verbose flag, output on the test machine may look something like

[...]
run "/tmp/work/test1007/a"
run "/tmp/work/test1008/a"
run "/tmp/work/test1009/a"
run "/tmp/work/test1010/a"
run "/tmp/work/test1011/a"
run "/tmp/work/test1012/a"
run "/tmp/work/test1013/a"
run "/tmp/work/test1014/a"
run "/tmp/work/test1015/a"
run "/tmp/work/test1016/a"
run "/tmp/work/test1017/a"
run "/tmp/work/test1018/a"
[...]

Tests are built on the machine running x, not on the remote machine. Tests which fail to build unexpectedly (or ui tests producing incorrect build output) may fail without ever running on the remote machine.

Testing on emulators

Some platforms are tested via an emulator for architectures that aren't readily available. For architectures where the standard library is well supported and the host operating system supports TCP/IP networking, see the above instructions for testing on a remote machine (in this case the remote machine is emulated).

There is also a set of tools for orchestrating running the tests within the emulator. Platforms such as arm-android and arm-unknown-linux-gnueabihf are set up to automatically run the tests under emulation on GitHub Actions. The following will take a look at how a target's tests are run under emulation.

The Docker image for armhf-gnu includes QEMU to emulate the ARM CPU architecture. Included in the Rust tree are the tools remote-test-client and remote-test-server, which are programs for sending test programs and libraries to the emulator, running the tests within the emulator, and reading the results. The Docker image is set up to launch remote-test-server, and the build tools use remote-test-client to communicate with the server to coordinate running tests (see src/bootstrap/src/core/build_steps/test.rs).

TODO

  • Is there any support for using an iOS emulator?
  • It's also unclear to me how the wasm or asm.js tests are run.

Running rustc_codegen_gcc tests

The first thing to know is that it only supports x86_64 Linux at the moment. We will extend its support later on.

You need to update the codegen-backends value in the [rust] section of your config.toml file and add "gcc" to the array:

codegen-backends = ["llvm", "gcc"]

Then you need to install libgccjit 12. For example with apt:

$ apt install libgccjit-12-dev

Now you can run the following command:

$ ./x test compiler/rustc_codegen_gcc/

If it cannot find the .so library (if you installed it with apt for example), you need to pass the library file path with LIBRARY_PATH:

$ LIBRARY_PATH=/usr/lib/gcc/x86_64-linux-gnu/12/ ./x test compiler/rustc_codegen_gcc/

If you encounter bugs or problems, don't hesitate to open issues on the rustc_codegen_gcc repository.

Testing with Docker

The Rust tree includes Docker image definitions for the platforms used on GitHub Actions in src/ci/docker. The script src/ci/docker/run.sh is used to build the Docker image, run it, build Rust within the image, and run the tests.

You can run these images on your local development machine, which can be helpful for testing environments different from your local system. First you will need to install Docker on a Linux, Windows, or macOS system (typically Linux will be much faster than Windows or macOS because the latter use virtual machines to emulate a Linux environment). To enter interactive mode, which will start a bash shell in the container, run src/ci/docker/run.sh --dev <IMAGE>, where <IMAGE> is one of the directory names in src/ci/docker (for example, x86_64-gnu is a fairly standard Ubuntu environment).
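For example, to open an interactive shell in the fairly standard Ubuntu image mentioned above:

src/ci/docker/run.sh --dev x86_64-gnu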

The docker script will mount your local Rust source tree in read-only mode, and an obj directory in read-write mode. All of the compiler artifacts will be stored in the obj directory. The shell will start out in the obj directory. From there, you can run ../src/ci/run.sh which will run the build as defined by the image.

Alternatively, you can run individual commands to do specific tasks. For example, you can run ../x test tests/ui to just run UI tests. Note that there is some configuration in the src/ci/run.sh script that you may need to recreate. Particularly, set submodules = false in your config.toml so that it doesn't attempt to modify the read-only directory.

Some additional notes about using the Docker images:

  • Some of the std tests require IPv6 support. Docker on Linux seems to have it disabled by default. Run the commands in enable-docker-ipv6.sh to enable IPv6 before creating the container. This only needs to be done once.
  • The container will be deleted automatically when you exit the shell, however the build artifacts persist in the obj directory. If you are switching between different Docker images, the artifacts from previous environments stored in the obj directory may confuse the build system. Sometimes you will need to delete parts or all of the obj directory before building inside the container.
  • The container is bare-bones, with only a minimal set of packages. You may want to install some things like apt install less vim.
  • You can open multiple shells in the container. First you need the container name (a short hash), which is displayed in the shell prompt, or you can run docker container ls outside of the container to list the available containers. With the container name, run docker exec -it <CONTAINER> /bin/bash where <CONTAINER> is the container name like 4ba195e95cef.

Testing with CI

The primary goal of our CI system is to ensure that the master branch of rust-lang/rust is always in a valid state and passes our test suite.

From a high-level point of view, when you open a pull request at rust-lang/rust, the following will happen:

  • A small subset of tests and checks are run after each push to the PR. This should help catch common errors.
  • When the PR is approved, the bors bot enqueues the PR into a merge queue.
  • Once the PR gets to the front of the queue, bors will create a merge commit and run the full test suite on it. The merge commit either contains only one specific PR or it can be a "rollup" which combines multiple PRs together, to save CI costs.
  • Once the whole test suite finishes, two things can happen. Either CI fails with an error that needs to be addressed by the developer, or CI succeeds and the merge commit is then pushed to the master branch.

If you want to modify what gets executed on CI, see Modifying CI jobs.

CI workflow

Our CI is primarily executed on GitHub Actions, with a single workflow defined in .github/workflows/ci.yml, which contains a bunch of steps that are unified for all CI jobs that we execute. When a commit is pushed to a corresponding branch or a PR, the workflow executes the calculate-job-matrix.py script, which dynamically generates the specific CI jobs that should be executed. This script uses the jobs.yml file as an input, which contains a declarative configuration of all our CI jobs.

Almost all build steps shell out to separate scripts. This keeps the CI fairly platform independent (i.e., we are not overly reliant on GitHub Actions). GitHub Actions is only relied on for bootstrapping the CI process and for orchestrating the scripts that drive the process.

In essence, all CI jobs run ./x test, ./x dist or some other command with different configurations, across various operating systems, targets and platforms. There are two broad categories of jobs that are executed, dist and non-dist jobs.

  • Dist jobs build a full release of the compiler for a specific platform, including all the tools we ship through rustup; Those builds are then uploaded to the rust-lang-ci2 S3 bucket and are available to be locally installed with the rustup-toolchain-install-master tool. The same builds are also used for actual releases: our release process basically consists of copying those artifacts from rust-lang-ci2 to the production endpoint and signing them.
  • Non-dist jobs run our full test suite on the platform, and the test suite of all the tools we ship through rustup; The amount of stuff we test depends on the platform (for example some tests are run only on Tier 1 platforms), and some quicker platforms are grouped together on the same builder to avoid wasting CI resources.

Based on an input event (usually a push to a branch), we execute one of three kinds of builds (sets of jobs).

  1. PR builds
  2. Auto builds
  3. Try builds

Pull Request builds

After each push to a pull request, a set of pr jobs are executed. Currently, these execute the x86_64-gnu-llvm-X, x86_64-gnu-tools, mingw-check and mingw-check-tidy jobs, all running on Linux. These execute a relatively short (~30 minutes) and lightweight test suite that should catch common issues. More specifically, they run a set of lints, they try to perform a cross-compile check build to Windows mingw (without producing any artifacts) and they test the compiler using a system version of LLVM. Unfortunately, it would take too many resources to run the full test suite for each commit on every PR.

Note on doc comments

Note that, as of Oct 2024, PR CI by default does not try to run ./x doc xxx. This means that if you have any broken intradoc links that would cause ./x doc xxx to fail, the failure will only show up very late, in the full merge queue CI pipeline.

Thus, it is a good idea to run ./x doc xxx locally for any doc comment changes to help catch these early.

PR jobs are defined in the pr section of jobs.yml. They run under the rust-lang/rust repository, and their results can be observed directly on the PR, in the "CI checks" section at the bottom of the PR page.

Auto builds

Before a commit can be merged into the master branch, it needs to pass our complete test suite. We call this an auto build. This build runs tens of CI jobs that exercise various tests across operating systems and targets. The full test suite is quite slow; it can take two hours or more until all the auto CI jobs finish.

Most platforms only run the build steps, some run a restricted set of tests, only a subset run the full suite of tests (see Rust's platform tiers).

Auto jobs are defined in the auto section of jobs.yml. They are executed on the auto branch under the rust-lang-ci/rust repository[^1] and their results can be seen here, although usually you will be notified of the result by a comment made by bors on the corresponding PR.

At any given time, at most a single auto build is being executed. Find out more here.

[^1]: The auto and try jobs run under the rust-lang-ci fork for historical reasons. This may change in the future.

Try builds

Sometimes we want to run a subset of the test suite on CI for a given PR, or build a set of compiler artifacts from that PR, without attempting to merge it. We call this a "try build". A try build is started after a user with the proper permissions posts a PR comment with the @bors try command.

There are several use-cases for try builds:

  • Run a set of performance benchmarks using our rustc-perf benchmark suite. For this, a working compiler build is needed, which can be generated with a try build that runs the dist-x86_64-linux CI job, which builds an optimized version of the compiler on Linux (this job is currently executed by default when you start a try build). To create a try build and schedule it for a performance benchmark, you can use the @bors try @rust-timer queue command combination.
  • Check the impact of the PR across the Rust ecosystem, using a crater run. Again, a working compiler build is needed for this, which can be produced by the dist-x86_64-linux CI job.
  • Run a specific CI job (e.g. Windows tests) on a PR, to quickly test if it passes the test suite executed by that job. You can select which CI jobs will be executed in the try build by adding up to 10 lines containing try-job: <name of job> to the PR description. All such specified jobs will be executed in the try build once the @bors try command is used on the PR. If no try jobs are specified in this way, the jobs defined in the try section of jobs.yml will be executed by default.

Using try-job PR description directives

  1. Identify which set of try-jobs (max 10) you would like to exercise. You can find the name of the CI jobs in jobs.yml.

  2. Amend PR description to include (usually at the end of the PR description) e.g.

    This PR fixes #123456.
    
    try-job: x86_64-msvc
    try-job: test-various
    

    Each try-job directive must be on its own line.

  3. Run the prescribed try jobs with @bors try. As mentioned above, this requires the user to either (1) have try permissions or (2) be delegated try permissions via @bors delegate by someone who has try permissions.

Note that this is usually easier to do than manually edit jobs.yml. However, it can be less flexible because you cannot adjust the set of tests that are exercised this way.

Try jobs are defined in the try section of jobs.yml. They are executed on the try branch under the rust-lang-ci/rust repository[^1] and their results can be seen here, although usually you will be notified of the result by a comment made by bors on the corresponding PR.

Multiple try builds can execute concurrently across different PRs.

bors identifies try jobs by commit hash. This means that if you have two PRs containing the same (latest) commits, running `@bors try` will result in the *same* try job, which really confuses `bors`. Please refrain from doing so.

Modifying CI jobs

If you want to modify what gets executed on our CI, you can simply modify the pr, auto or try sections of the jobs.yml file.

You can also modify what gets executed temporarily, for example to test a particular platform or configuration that is challenging to test locally (for example, if a Windows build fails, but you don't have access to a Windows machine). Don't hesitate to use CI resources in such situations to try out a fix!

You can run an arbitrary CI job in two ways:

  • Use the try build functionality, and specify the CI jobs that you want to be executed in try builds in your PR description.
  • Modify the pr section of jobs.yml to specify which CI jobs should be executed after each push to your PR. This might be faster than repeatedly starting try builds.

To modify the jobs executed after each push to a PR, you can simply copy one of the job definitions from the auto section to the pr section. For example, the x86_64-msvc job is responsible for running the 64-bit MSVC tests. You can copy it to the pr section to cause it to be executed after a commit is pushed to your PR, like this:

pr:
  ...
  - image: x86_64-gnu-tools
    <<: *job-linux-16c
  # this item was copied from the `auto` section
  # vvvvvvvvvvvvvvvvvv
  - image: x86_64-msvc
    env:
      RUST_CONFIGURE_ARGS: --build=x86_64-pc-windows-msvc --enable-profiler
      SCRIPT: make ci-msvc
    <<: *job-windows-8c

Then you can commit the file and push it to your PR branch on GitHub. GitHub Actions should then execute this CI job after each push to your PR.

After you have finished your experiments, don't forget to remove any changes you have made to jobs.yml, if they were supposed to be temporary!

A good practice is to prefix the PR title with [WIP] while you are still running try jobs, and to mark the commit that modifies the CI jobs for testing purposes with [DO NOT MERGE].

Although you are welcome to use CI, just be conscious that this is a shared resource with limited concurrency. Try not to enable too many jobs at once (one or two should be sufficient in most cases).

Merging PRs serially with bors

CI services usually test the last commit of a branch merged with the last commit in master, and while that’s great to check if the feature works in isolation, it doesn’t provide any guarantee the code is going to work once it’s merged. Breakages like these usually happen when another, incompatible PR is merged after the build happened.

To ensure a master branch that works all the time, we forbid manual merges. Instead, all PRs have to be approved through our bot, bors (the software behind it is called homu). All the approved PRs are put in a merge queue (sorted by priority and creation date) and are automatically tested one at a time. If all the builders are green, the PR is merged, otherwise the failure is recorded and the PR will have to be re-approved.

Bors doesn’t interact with CI services directly, but it works by pushing the merge commit it wants to test to specific branches (like auto or try), which are configured to execute CI checks. Bors then detects the outcome of the build by listening for either Commit Statuses or Check Runs. Since the merge commit is based on the latest master and only one can be tested at the same time, when the results are green, master is fast-forwarded to that merge commit.

Unfortunately, testing a single PR at a time, combined with our long CI (~2 hours for a full run), means we can’t merge too many PRs in a single day, and a single failure greatly impacts our throughput for the day. The maximum number of PRs we can merge in a day is around ~10.

The large CI run times and the requirement for a large builder pool are largely due to the fact that full release artifacts are built in the dist- builders. This is worth it because these release artifacts:

  • Allow perf testing even at a later date.
  • Allow bisection when bugs are discovered later.
  • Ensure release quality since if we're always releasing, we can catch problems early.

Rollups

Some PRs don’t need the full test suite to be executed: trivial changes like typo fixes or README improvements shouldn’t break the build, and testing every single one of them for 2+ hours is a big waste of time. To solve this, we regularly create a "rollup", a PR where we merge several pending trivial PRs so they can be tested together. Rollups are created manually by a team member using the "create a rollup" button on the merge queue. The team member uses their judgment to decide if a PR is risky or not, and are the best tool we have at the moment to keep the queue in a manageable state.

Docker

All CI jobs, except those on macOS and Windows, are executed inside that platform’s custom Docker container. This has a lot of advantages for us:

  • The build environment is consistent regardless of the changes of the underlying image (switching from the trusty image to xenial was painless for us).
  • We can use ancient build environments to ensure maximum binary compatibility, for example using older CentOS releases on our Linux builders.
  • We can avoid reinstalling tools (like QEMU or the Android emulator) every time thanks to Docker image caching.
  • Users can run the same tests in the same environment locally by just running src/ci/docker/run.sh image-name, which is awesome for debugging failures. Note that only Linux Docker images are available to run locally, due to licensing and other restrictions.

The docker images prefixed with dist- are used for building artifacts while those without that prefix run tests and checks.

We also run tests for less common architectures (mainly Tier 2 and Tier 3 platforms) in CI. Since those platforms are not x86 we either run everything inside QEMU or just cross-compile if we don’t want to run the tests for that platform.

These builders are running on a special pool of builders set up and maintained for us by GitHub.

Caching

Our CI workflow uses various caching mechanisms, mainly for two things:

Docker images caching

The Docker images we use to run most of the Linux-based builders take a long time to fully build. To speed up the build, we cache it using Docker registry caching, with the intermediate artifacts being stored on ghcr.io. We also push the built Docker images to ghcr, so that they can be reused by other tools (rustup) or by developers running the Docker build locally (to speed up their build).

Since we test multiple, diverged branches (master, beta and stable), we can’t rely on a single cache for the images, otherwise builds on a branch would override the cache for the others. Instead, we store the images under different tags, identifying them with a custom hash made from the contents of all the Dockerfiles and related scripts.

LLVM caching with sccache

We build some C/C++ stuff in various CI jobs, and we rely on sccache to cache the intermediate LLVM artifacts. Sccache is a distributed ccache developed by Mozilla, which can use an object storage bucket as the storage backend. In our case, the artifacts are uploaded to an S3 bucket that we control (rust-lang-ci-sccache2).

Custom tooling around CI

During the years we developed some custom tooling to improve our CI experience.

Rust Log Analyzer to show the error message in PRs

The build logs for rust-lang/rust are huge, and it’s not practical to find what caused the build to fail by looking at the logs. To improve the developers’ experience we developed a bot called Rust Log Analyzer (RLA) that receives the build logs on failure and extracts the error message automatically, posting it on the PR.

The bot is not hardcoded to look for error strings, but was trained with a bunch of build failures to recognize which lines are common between builds and which are not. While the generated snippets can be weird sometimes, the bot is pretty good at identifying the relevant lines even if it’s an error we've never seen before.

Toolstate to support allowed failures

The rust-lang/rust repo doesn’t only test the compiler on its CI, but also a variety of tools and documentation. Some documentation is pulled in via git submodules. If we blocked merging rustc PRs on the documentation being fixed, we would be stuck in a chicken-and-egg problem, because the documentation's CI would not pass since updating it would need the not-yet-merged version of rustc to test against (and we usually require CI to be passing).

To avoid the problem, submodules are allowed to fail, and their status is recorded in rust-toolstate. When a submodule breaks, a bot automatically pings the maintainers so they know about the breakage, and it records the failure on the toolstate repository. The release process will then ignore broken tools on nightly, removing them from the shipped nightlies.

While tool failures are allowed most of the time, they’re automatically forbidden a week before a release: we don’t care if tools are broken on nightly but they must work on beta and stable, so they also need to work on nightly a few days before we promote nightly to beta.

More information is available in the toolstate documentation.

Adding new tests

In general, we expect every PR that fixes a bug in rustc to come accompanied by a regression test of some kind. This test should fail in master but pass after the PR. These tests are really useful for preventing us from repeating the mistakes of the past.

The first thing to decide is which kind of test to add. This will depend on the nature of the change and what you want to exercise. Here are some rough guidelines:

  • The majority of compiler tests are done with compiletest.
    • The majority of compiletest tests are UI tests in the tests/ui directory.
  • Changes to the standard library are usually tested within the standard library itself.
    • The majority of standard library tests are written as doctests, which illustrate and exercise typical API behavior.
    • Additional unit tests should go in library/${crate}/tests (where ${crate} is usually core, alloc, or std).
  • If the code is part of an isolated system, and you are not testing compiler output, consider using a unit or integration test.
  • Need to run rustdoc? Prefer a rustdoc or rustdoc-ui test. Occasionally you'll need rustdoc-js as well.
  • Other compiletest test suites are generally used for special purposes:
    • Need to run gdb or lldb? Use the debuginfo test suite.
    • Need to inspect LLVM IR or MIR IR? Use the codegen or mir-opt test suites.
    • Need to inspect the resulting binary in some way? Or if all the other test suites are too limited for your purposes? Then use run-make.
    • Check out the compiletest chapter for more specialized test suites.

After deciding on which kind of test to add, see best practices for guidance on how to author tests that are easy to work with and that stand the test of time (i.e. if a test fails or needs to be modified several years later, how can we make life easier for the contributors dealing with it?).

UI test walkthrough

The following is a basic guide for creating a UI test, which is one of the most common compiler tests. For this tutorial, we'll be adding a test for an async error message.

Step 1: Add a test file

The first step is to create a Rust source file somewhere in the tests/ui tree. When creating a test, do your best to find a good location and name (see Test organization for more). Since naming is the hardest part of development, everything should be downhill from here!

Let's place our async test at tests/ui/async-await/await-without-async.rs:

// Provide diagnostics when the user writes `await` in a non-`async` function.
//@ edition:2018

async fn foo() {}

fn bar() {
    foo().await
}

fn main() {}

A few things to notice about our test:

  • The top should start with a short comment that explains what the test is for.
  • The //@ edition:2018 comment is called a directive which provides instructions to compiletest on how to build the test. Here we need to set the edition for async to work (the default is edition 2015).
  • Following that is the source of the test. Try to keep it succinct and to the point. This may require some effort if you are trying to minimize an example from a bug report.
  • We end this test with an empty fn main function. This is because the default for UI tests is a bin crate-type, and we don't want the "main not found" error in our test. Alternatively, you could add #![crate_type="lib"].

Step 2: Generate the expected output

The next step is to create the expected output snapshots from the compiler. This can be done with the --bless option:

./x test tests/ui/async-await/await-without-async.rs --bless

This will build the compiler (if it hasn't already been built), compile the test, and place the output of the compiler in a file called tests/ui/async-await/await-without-async.stderr.

However, this step will fail! You should see an error message, something like this:

error: /rust/tests/ui/async-await/await-without-async.rs:7: unexpected error: '7:10: 7:16: await is only allowed inside async functions and blocks E0728'

This is because the stderr contains errors which were not matched by error annotations in the source file.

Step 3: Add error annotations

Every error needs to be annotated with a comment in the source with the text of the error. In this case, we can add the following comment to our test file:

fn bar() {
    foo().await
    //~^ ERROR `await` is only allowed inside `async` functions and blocks
}

The //~^ squiggle caret comment tells compiletest that the error belongs to the previous line (more on this in the Error annotations section).

Save that, and run the test again:

./x test tests/ui/async-await/await-without-async.rs

It should now pass, yay!

Step 4: Review the output

Somewhat hand-in-hand with the previous step, you should inspect the .stderr file that was created to see if it looks like how you expect. If you are adding a new diagnostic message, now would be a good time to also consider how readable the message looks overall, particularly for people new to Rust.

Our example tests/ui/async-await/await-without-async.stderr file should look like this:

error[E0728]: `await` is only allowed inside `async` functions and blocks
  --> $DIR/await-without-async.rs:7:10
   |
LL | fn bar() {
   |    --- this is not `async`
LL |     foo().await
   |          ^^^^^^ only allowed inside `async` functions and blocks

error: aborting due to previous error

For more information about this error, try `rustc --explain E0728`.

You may notice that some things look a little different from the regular compiler output.

  • The $DIR removes the path information which will differ between systems.
  • The LL values replace the line numbers. That helps avoid small changes in the source from triggering large diffs. See the Normalization section for more.

Around this stage, you may need to iterate over the last few steps a few times to tweak your test, re-bless the test, and re-review the output.

Step 5: Check other tests

Sometimes when adding or changing a diagnostic message, this will affect other tests in the test suite. The final step before posting a PR is to check if you have affected anything else. Running the UI suite is usually a good start:

./x test tests/ui

If other tests start failing, you may need to investigate what has changed and if the new output makes sense.

You may also need to re-bless the output with the --bless flag.

Comment explaining what the test is about

The first comment of a test file should summarize the point of the test, and highlight what is important about it. If there is an issue number associated with the test, include the issue number.

This comment doesn't have to be super extensive. Just something like "Regression test for #18060: match arms were matching in the wrong order." might already be enough.

These comments are very useful to others later on when your test breaks, since they often can highlight what the problem is. They are also useful if for some reason the tests need to be refactored, since they let others know which parts of the test were important. Often a test must be rewritten because it no longer tests what it was meant to test, and then it's useful to know what exactly it was meant to test.

Best practices for writing tests

This chapter describes best practices related to authoring and modifying tests. We want to make sure the tests we author are easy to understand and modify, even several years later, without needing to consult the original author and perform a bunch of git archeology.

It's good practice to review the test that you authored by pretending that you are a different contributor who is looking at the test that failed several years later without much context (this also helps yourself even a few days or months later!). Then ask yourself: how can I make my life and their lives easier?

To help put this into perspective, let's start with an aside on how to write a test that makes the life of another contributor as hard as possible.

Aside: Simple Test Sabotage Field Manual

To make the life of another contributor as hard as possible, one might:

  • Name the test after an issue number alone without any other context, e.g. issue-123456.rs.
  • Have no comments at all on what the test is trying to exercise, no links to relevant context.
  • Include a test that is massive (that can otherwise be minimized) and contains non-essential pieces which distracts from the core thing the test is actually trying to test.
  • Include a bunch of unrelated syntax errors and other errors which are not critical to what the test is trying to check.
  • Weirdly format the snippets.
  • Include a bunch of unused and unrelated features.
  • Have e.g. ignore-windows compiletest directives but don't offer any explanation as to why they are needed.

Test naming

Make it easy for the reader to immediately understand what the test is exercising, instead of having to type in the issue number and dig through github search for what the test is trying to exercise. This has an additional benefit of making the test possible to be filtered via --test-args as a collection of related tests.

  • Name the test after what it's trying to exercise or prevent regressions of.
  • Keep it concise.
  • Avoid using issue numbers alone as test names.
  • Avoid starting the test name with issue-xxxxx prefix as it degrades auto-completion.

Avoid using only issue numbers as test names

Prefer including them as links or as #123456 in test comments instead. Or, if it makes sense to include the issue number, also include brief keywords, as in macro-external-span-ice-123456.rs.

tests/ui/typeck/issue-123456.rs                              // bad
tests/ui/typeck/issue-123456-asm-macro-external-span-ice.rs  // bad (for tab completion)
tests/ui/typeck/asm-macro-external-span-ice-123456.rs        // good
tests/ui/typeck/asm-macro-external-span-ice.rs               // good

issue-123456.rs does not immediately tell you anything about what the test is actually exercising, meaning you need to do additional searching. Including the issue number in the test name as a prefix also makes tab completion less useful (if you ls a test directory, you get a bunch of issue-xxxxx prefixes). We can instead link to the issue in a test comment.

//! Check that `asm!` macro including nested macros that come from external
//! crates do not lead to a codepoint boundary assertion ICE.
//!
//! Regression test for <https://github.com/rust-lang/rust/issues/123456>.

Test organization

  • For most test suites, try to find a semantically meaningful subdirectory to home the test.
    • E.g. for an implementation of RFC 2093 specifically, we can group a collection of tests under tests/ui/rfc-2093-infer-outlives/. For the directory name, include what the RFC is about.
  • For the run-make test suite, each rmake.rs must be contained within an immediate subdirectory under tests/run-make/. Further nesting is not presently supported. Avoid including issue number in the directory name too, include that info in a comment inside rmake.rs.

Test descriptions

To help other contributors understand what the test is about if their changes lead to the test failing, we should make sure a test has sufficient docs about its intent/purpose, links to relevant context (incl. issue numbers or other discussions) and possibly relevant resources (e.g. can be helpful to link to Win32 APIs for specific behavior).

Synopsis of a test with good comments

//! Brief summary of what the test is exercising.
//! Example: Regression test for #123456: make sure the coverage attribute doesn't ICE
//!     when applied to non-items.
//!
//! Optional: Remarks on related tests/issues, external APIs/tools, crash
//!     mechanism, how it's fixed, FIXMEs, limitations, etc.
//! Example: This test is like `tests/attrs/linkage.rs`, but this test is
//!     specifically checking `#[coverage]` which exercises a different code
//!     path. The ICE was triggered during attribute validation when we tried
//!     to construct a `def_path_str` but only emitted the diagnostic when the
//!     platform is windows, causing an ICE on unix.
//!
//! Links to relevant issues and discussions. Examples below:
//! Regression test for <https://github.com/rust-lang/rust/issues/123456>.
//! See also <https://github.com/rust-lang/rust/issues/101345>.
//! See discussion at <https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/123456-example-topic>.
//! See [`clone(2)`].
//!
//! [`clone(2)`]: https://man7.org/linux/man-pages/man2/clone.2.html

//@ ignore-windows
// Reason: (why is this test ignored for windows? why not specifically
// windows-gnu or windows-msvc?)

// Optional: Summary of test cases: What positive cases are checked?
// What negative cases are checked? Any specific quirks?

fn main() {
    #[coverage]
    //~^ ERROR coverage attribute can only be applied to function items.
    let _ = {
        // Comment highlighting something that deserves reader attention.
        fn foo() {}
    };
}

How much context/explanation is needed is up to the author's and reviewer's discretion. A good rule of thumb is that non-trivial things exercised in the test deserve some explanation to help other contributors understand them. This may include remarks on:

  • How an ICE can get triggered if it's quite elaborate.
  • Related issues and tests (e.g. this test is like another test but is kept separate because...).
  • Platform-specific behaviors.
  • Behavior of external dependencies and APIs: syscalls, linkers, tools, environments and the likes.

Test content

  • Try to make sure the test is as minimal as possible.
  • Minimize non-critical code and especially minimize unnecessary syntax and type errors which can clutter stderr snapshots.
  • Where possible, use semantically meaningful names (e.g. fn bare_coverage_attributes() {}).

Flaky tests

All tests need to strive to be reproducible and reliable. Flaky tests are the worst kind of tests, arguably even worse than not having the test in the first place.

  • Flaky tests can fail in completely unrelated PRs, which can confuse other contributors and waste their time trying to figure out whether the test failure is related.
  • Flaky tests provide no useful information from their results other than that they are flaky and not reliable: if a flaky test passed, did I just get lucky? If a flaky test failed, was the failure just spurious?
  • Flaky tests degrade confidence in the whole test suite. If a test suite can randomly and spuriously fail due to flaky tests, did the whole test suite pass or did I just get lucky/unlucky?
  • Flaky tests can randomly fail in full CI, wasting precious CI resources.

Compiletest directives

See compiletest directives for a listing of directives.

  • For ignore-*/needs-*/only-* directives, unless extremely obvious, provide a brief remark on why the directive is needed. E.g. "//@ ignore-wasi (wasi codegens the main symbol differently)".

FileCheck best practices

See LLVM FileCheck guide for details.

  • Avoid matching on specific register numbers or basic block numbers unless they're special or critical for the test. Consider using patterns to match them where suitable.
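For example, instead of hard-coding whichever register the allocator happens to pick, match it with a FileCheck pattern. A small sketch (the instruction and registers shown are purely illustrative):

// Brittle: hard-codes the exact registers chosen by the register allocator.
// CHECK: movq %rdi, %rax

// More robust: match any register with an inline regex pattern.
// CHECK: movq %{{[a-z0-9]+}}, %{{[a-z0-9]+}}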

TODO

Pending concrete advice.

Compiletest

Introduction

compiletest is the main test harness of the Rust test suite. It allows test authors to organize large numbers of tests (the Rust compiler has many thousands), supports efficient test execution (including parallel execution), and allows the test author to configure behavior and expected results of both individual tests and groups of tests.

Note for macOS users

For macOS users, SIP (System Integrity Protection) may consistently check the compiled binary by sending network requests to Apple, so you may get a huge performance degradation when running tests.

You can resolve it by tweaking the following settings: Privacy & Security -> Developer Tools -> Add Terminal (or VS Code, etc.).

compiletest may check test code for compile-time or run-time success/failure.

Tests are typically organized as a Rust source file with annotations in comments before and/or within the test code. These comments serve to direct compiletest on if or how to run the test, what behavior to expect, and more. See directives and the test suite documentation below for more details on these annotations.

See the Adding new tests and Best practices chapters for a tutorial on creating a new test and advice on writing a good test, and the Running tests chapter on how to run the test suite.

Compiletest itself tries to avoid running tests when the artifacts that are involved (mainly the compiler) haven't changed. You can use x test --test-args --force-rerun to rerun a test even when none of the inputs have changed.

Test suites

All of the tests are in the tests directory. The tests are organized into "suites", with each suite in a separate subdirectory. Each test suite behaves a little differently, with different compiler behavior and different checks for correctness. For example, the tests/incremental directory contains tests for incremental compilation. The various suites are defined in src/tools/compiletest/src/common.rs in the pub enum Mode declaration.

The following test suites are available, with links for more information:

Compiler-specific test suites

| Test suite | Purpose |
|---|---|
| ui | Check the stdout/stderr snapshots from the compilation and/or running the resulting executable |
| ui-fulldeps | ui tests which require a linkable build of rustc (such as using extern crate rustc_span; or used as a plugin) |
| pretty | Check pretty printing |
| incremental | Check incremental compilation behavior |
| debuginfo | Check debuginfo generation running debuggers |
| codegen | Check code generation |
| codegen-units | Check codegen unit partitioning |
| assembly | Check assembly output |
| mir-opt | Check MIR generation and optimizations |
| coverage | Check coverage instrumentation |
| coverage-run-rustdoc | coverage tests that also run instrumented doctests |

General purpose test suite

run-make tests are general purpose tests using Rust programs (or Makefiles, which are legacy).

Rustdoc test suites

See Rustdoc tests for more details.

| Test suite | Purpose |
|---|---|
| rustdoc | Check rustdoc generated files contain the expected documentation |
| rustdoc-gui | Check rustdoc's GUI using a web browser |
| rustdoc-js | Check rustdoc search is working as expected |
| rustdoc-js-std | Check rustdoc search is working as expected specifically on the std docs |
| rustdoc-json | Check JSON output of rustdoc |
| rustdoc-ui | Check terminal output of rustdoc |

Pretty-printer tests

The tests in tests/pretty exercise the "pretty-printing" functionality of rustc. The -Z unpretty CLI option for rustc causes it to translate the input source into various different formats, such as the Rust source after macro expansion.

The pretty-printer tests have several directives described below. These commands can significantly change the behavior of the test, but the default behavior without any commands is to:

  1. Run rustc -Zunpretty=normal on the source file.
  2. Run rustc -Zunpretty=normal on the output of the previous step.
  3. The output of the previous two steps should be the same.
  4. Run rustc -Zno-codegen on the output to make sure that it can type check (this is similar to running cargo check).

If any of the commands above fail, then the test fails.

The directives for pretty-printing tests are:

  • pretty-mode specifies the mode pretty-print tests should run in (that is, the argument to -Zunpretty). The default is normal if not specified.
  • pretty-compare-only causes a pretty test to only compare the pretty-printed output (stopping after step 3 from above). It will not try to compile the expanded output to type check it. This is needed for a pretty-mode that does not expand to valid Rust, or for other situations where the expanded output cannot be compiled.
  • pp-exact is used to ensure a pretty-print test results in specific output. If specified without a value, then it means the pretty-print output should match the original source. If specified with a value, as in //@ pp-exact:foo.pp, it will ensure that the pretty-printed output matches the contents of the given file. Otherwise, if pp-exact is not specified, then the pretty-printed output will be pretty-printed one more time, and the output of the two pretty-printing rounds will be compared to ensure that the pretty-printed output converges to a steady state.
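Putting these together, a pretty test that compares expanded output against a checked-in file and skips the type-check step might look like the following sketch (the mode, file name, and macro are illustrative, not taken from a real test):

//@ pretty-mode:expanded
//@ pp-exact:example-expansion.pp
//@ pretty-compare-only

macro_rules! square {
    ($x:expr) => { $x * $x };
}

fn main() {
    let _four = square!(2);
}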

Incremental tests

The tests in tests/incremental exercise incremental compilation. They use the revisions directive to tell compiletest to run the compiler in a series of steps.

Compiletest starts with an empty incremental directory (passed via the -C incremental flag) and then runs the compiler for each revision, reusing the incremental results from previous steps.

The revisions should start with:

  • rpass — the test should compile and run successfully
  • rfail — the test should compile successfully, but the executable should fail to run
  • cfail — the test should fail to compile

To make the revisions unique, you should add a suffix like rpass1 and rpass2.

To simulate changing the source, compiletest also passes a --cfg flag with the current revision name.

For example, this will run twice, simulating changing a function:

//@ revisions: rpass1 rpass2

#[cfg(rpass1)]
fn foo() {
    println!("one");
}

#[cfg(rpass2)]
fn foo() {
    println!("two");
}

fn main() { foo(); }

cfail tests support the forbid-output directive to specify that a certain substring must not appear anywhere in the compiler output. This can be useful to ensure certain errors do not appear, but this can be fragile as error messages change over time, and a test may no longer be checking the right thing but will still pass.

cfail tests support the should-ice directive to specify that a test should cause an Internal Compiler Error (ICE). This is a highly specialized directive to check that the incremental cache continues to work after an ICE.

Debuginfo tests

The tests in tests/debuginfo test debuginfo generation. They build a program, launch a debugger, and issue commands to the debugger. A single test can work with cdb, gdb, and lldb.

Most tests should have the //@ compile-flags: -g directive or something similar to generate the appropriate debuginfo.

To set a breakpoint on a line, add a // #break comment on the line.

The debuginfo tests consist of a series of debugger commands along with "check" lines which specify output that is expected from the debugger.

The commands are comments of the form // $DEBUGGER-command:$COMMAND where $DEBUGGER is the debugger being used and $COMMAND is the debugger command to execute.

The debugger values can be:

  • cdb
  • gdb
  • gdbg — GDB without Rust support (versions older than 7.11)
  • gdbr — GDB with Rust support
  • lldb
  • lldbg — LLDB without Rust support
  • lldbr — LLDB with Rust support (this no longer exists)

The commands to check the output are of the form // $DEBUGGER-check:$OUTPUT where $OUTPUT is the output to expect.

For example, the following will build the test, start the debugger, set a breakpoint, launch the program, inspect a value, and check what the debugger prints:

//@ compile-flags: -g

//@ lldb-command: run
//@ lldb-command: print foo
//@ lldb-check: $0 = 123

fn main() {
    let foo = 123;
    b(); // #break
}

fn b() {}

The following directives are available to disable a test based on the debugger currently being used:

  • min-cdb-version: 10.0.18317.1001 — ignores the test if the version of cdb is below the given version
  • min-gdb-version: 8.2 — ignores the test if the version of gdb is below the given version
  • ignore-gdb-version: 9.2 — ignores the test if the version of gdb is equal to the given version
  • ignore-gdb-version: 7.11.90 - 8.0.9 — ignores the test if the version of gdb is in a range (inclusive)
  • min-lldb-version: 310 — ignores the test if the version of lldb is below the given version
  • rust-lldb — ignores the test if lldb does not contain the Rust plugin. NOTE: The "Rust" version of LLDB doesn't exist anymore, so this will always be ignored. This should probably be removed.

Note on running lldb debuginfo tests locally

If you want to run lldb debuginfo tests locally, then currently on Windows it is required that:

  • You have Python 3.10 installed.
  • You have python310.dll available in your PATH env var. This is not provided by the standard Python installer you obtain from python.org; you need to add this to PATH manually.

Otherwise the lldb debuginfo tests can produce crashes in mysterious ways.

Note on acquiring cdb.exe on Windows 11

cdb.exe is acquired alongside a suitable "Windows 11 SDK" which is part of the "Desktop Development with C++" workload profile in a Visual Studio installer (e.g. Visual Studio 2022 installer).

However, this alone is not sufficient by default. If you need cdb.exe, you must go to Installed Apps, find the newest "Windows Software Development Kit" (and yes, this can still say Windows 10.0.22161.3233 even though the OS is called Windows 11). You must then click "Modify" -> "Change" and then select "Debugging Tools for Windows" in order to acquire cdb.exe.

Codegen tests

The tests in tests/codegen test LLVM code generation. They compile the test with the --emit=llvm-ir flag to emit LLVM IR. They then run the LLVM FileCheck tool. The test is annotated with various // CHECK comments to check the generated code. See the FileCheck documentation for a tutorial and more information.
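A minimal sketch of what such a test can look like (the function and the exact IR lines checked here are illustrative, not taken from an existing test):

//@ compile-flags: -Copt-level=3
#![crate_type = "lib"]

// CHECK-LABEL: @add_one
#[no_mangle]
pub fn add_one(x: i32) -> i32 {
    // CHECK: add i32
    // CHECK: ret i32
    x + 1
}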

See also the assembly tests for a similar set of tests.

If you need to work with #![no_std] cross-compiling tests, consult the minicore test auxiliary chapter.

Assembly tests

The tests in tests/assembly test LLVM assembly output. They compile the test with the --emit=asm flag to emit a .s file with the assembly output. They then run the LLVM FileCheck tool.

Each test should be annotated with the //@ assembly-output: directive with a value of either emit-asm or ptx-linker to indicate the type of assembly output.

Then, they should be annotated with various // CHECK comments to check the assembly output. See the FileCheck documentation for a tutorial and more information.
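A minimal sketch (the target restriction and checked instructions are illustrative; real tests often pin a target via only-* or --target together with needs-llvm-components):

//@ assembly-output: emit-asm
//@ only-x86_64
//@ compile-flags: -Copt-level=3
#![crate_type = "lib"]

// CHECK-LABEL: return_zero:
#[no_mangle]
pub fn return_zero() -> u32 {
    // CHECK: xor
    // CHECK: ret
    0
}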

See also the codegen tests for a similar set of tests.

If you need to work with #![no_std] cross-compiling tests, consult the minicore test auxiliary chapter.

Codegen-units tests

The tests in tests/codegen-units test the monomorphization collector and CGU partitioning.

These tests work by running rustc with a flag to print the result of the monomorphization collection pass, and then special annotations in the file are used to compare against that.

Each test should be annotated with the //@ compile-flags:-Zprint-mono-items=VAL directive with the appropriate VAL to instruct rustc to print the monomorphization information.

Then, the test should be annotated with comments of the form //~ MONO_ITEM name where name is the monomorphized string printed by rustc like fn <u32 as Trait>::foo.

To check for CGU partitioning, use a comment of the form //~ MONO_ITEM name @@ cgu where cgu is a space-separated list of the CGU names with the linkage information in brackets. For example: //~ MONO_ITEM static function::FOO @@ statics[Internal]
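Putting these together, a sketch of a small codegen-units test (the CGU name and linkage shown are illustrative and depend on the actual test and crate name):

//@ compile-flags:-Zprint-mono-items=eager

//~ MONO_ITEM fn helper
fn helper() {}

//~ MONO_ITEM fn main @@ my_test-cgu.0[External]
fn main() {
    helper();
}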

Mir-opt tests

The tests in tests/mir-opt check parts of the generated MIR to make sure it is generated correctly and is doing the expected optimizations. Check out the MIR Optimizations chapter for more.

Compiletest will build the test with several flags to dump the MIR output and set a baseline for optimizations:

  • -Copt-level=1
  • -Zdump-mir=all
  • -Zmir-opt-level=4
  • -Zvalidate-mir
  • -Zdump-mir-exclude-pass-number

The test should be annotated with // EMIT_MIR comments that specify files that will contain the expected MIR output. You can use x test --bless to create the initial expected files.

There are several forms the EMIT_MIR comment can take:

  • // EMIT_MIR $MIR_PATH.mir — This will check that the given filename matches the exact output from the MIR dump. For example, my_test.main.SimplifyCfg-elaborate-drops.after.mir will load that file from the test directory, and compare it against the dump from rustc.

    Checking the "after" file (which is after optimization) is useful if you are interested in the final state after an optimization. Some rare cases may want to use the "before" file for completeness.

  • // EMIT_MIR $MIR_PATH.diff — where $MIR_PATH is the filename of the MIR dump, such as my_test_name.my_function.EarlyOtherwiseBranch. Compiletest will diff the .before.mir and .after.mir files, and compare the diff output to the expected .diff file from the EMIT_MIR comment.

    This is useful if you want to see how an optimization changes the MIR.

  • // EMIT_MIR $MIR_PATH.dot — When using specific flags that dump additional MIR data (e.g. -Z dump-mir-graphviz to produce .dot files), this will check that the output matches the given file.

By default, 32-bit and 64-bit targets use the same dump files, which can be problematic in the presence of pointers in constants or other bit-width-dependent things. In that case you can add // EMIT_MIR_FOR_EACH_BIT_WIDTH to your test, causing separate files to be generated for 32-bit and 64-bit systems.
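As a sketch, a mir-opt test file named my_test.rs might look like the following (the pass name is illustrative; the referenced .mir file is what x test --bless generates next to the test):

// EMIT_MIR my_test.halve.PreCodegen.after.mir
fn halve(x: u32) -> u32 {
    x / 2
}

fn main() {
    halve(8);
}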

run-make tests

Note on phasing out Makefiles

We are planning to migrate all existing Makefile-based run-make tests to Rust programs. You should not be adding new Makefile-based run-make tests.

See https://github.com/rust-lang/rust/issues/121876.

The tests in tests/run-make are general-purpose tests using Rust recipes, which are small programs (rmake.rs) allowing arbitrary Rust code such as rustc invocations, and are supported by a run_make_support library. Using Rust recipes provides the ultimate in flexibility.

run-make tests should be used if no other test suites better suit your needs.

Using Rust recipes

Each test should be in a separate directory with a rmake.rs Rust program, called the recipe. A recipe will be compiled and executed by compiletest with the run_make_support library linked in.

If you need new utilities or functionality, consider extending and improving the run_make_support library.

Compiletest directives like //@ only-<target> or //@ ignore-<target> are supported in rmake.rs, like in UI tests. However, revisions or building auxiliary via directives are not currently supported.
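A minimal recipe sketch (the helpers shown come from the run_make_support library; check that library for the currently available API):

// rmake.rs
use run_make_support::rustc;

fn main() {
    // Invoke the compiler on a source file from the test's directory
    // and require the invocation to succeed.
    rustc().input("foo.rs").run();
}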

Some existing run-make tests have already been ported over to Rust recipes and can serve as further examples.

Quickly check if rmake.rs tests can be compiled

You can quickly check if rmake.rs tests can be compiled without having to build stage1 rustc by forcing rmake.rs to be compiled with the stage0 compiler:

$ COMPILETEST_FORCE_STAGE0=1 x test --stage 0 tests/run-make/<test-name>

Of course, some tests will not successfully run in this way.

Using Makefiles (legacy)

You should avoid writing new Makefile-based `run-make` tests.

Each test should be in a separate directory with a Makefile indicating the commands to run.

There is a tools.mk Makefile which you can include which provides a bunch of utilities to make it easier to run commands and compare outputs. Take a look at some of the other tests for some examples on how to get started.

Coverage tests

The tests in tests/coverage are shared by multiple test modes that test coverage instrumentation in different ways. Running the coverage test suite will automatically run each test in all of the different coverage modes.

Each mode also has an alias to run the coverage tests in just that mode:

./x test coverage # runs all of tests/coverage in all coverage modes
./x test tests/coverage # same as above

./x test tests/coverage/if.rs # runs the specified test in all coverage modes

./x test coverage-map # runs all of tests/coverage in "coverage-map" mode only
./x test coverage-run # runs all of tests/coverage in "coverage-run" mode only

./x test coverage-map -- tests/coverage/if.rs # runs the specified test in "coverage-map" mode only

If a particular test should not be run in one of the coverage test modes for some reason, use the //@ ignore-coverage-map or //@ ignore-coverage-run directives.

coverage-map suite

In coverage-map mode, these tests verify the mappings between source code regions and coverage counters that are emitted by LLVM. They compile the test with --emit=llvm-ir, then use a custom tool (src/tools/coverage-dump) to extract and pretty-print the coverage mappings embedded in the IR. These tests don't require the profiler runtime, so they run in PR CI jobs and are easy to run/bless locally.

These coverage map tests can be sensitive to changes in MIR lowering or MIR optimizations, producing mappings that are different but produce identical coverage reports.

As a rule of thumb, any PR that doesn't change coverage-specific code should feel free to re-bless the coverage-map tests as necessary, without worrying about the actual changes, as long as the coverage-run tests still pass.

coverage-run suite

In coverage-run mode, these tests perform an end-to-end test of coverage reporting. They compile a test program with coverage instrumentation, run that program to produce raw coverage data, and then use LLVM tools to process that data into a human-readable code coverage report.

Instrumented binaries need to be linked against the LLVM profiler runtime, so coverage-run tests are automatically skipped unless the profiler runtime is enabled in config.toml:

# config.toml
[build]
profiler = true

This also means that they typically don't run in PR CI jobs, though they do run as part of the full set of CI jobs used for merging.

coverage-run-rustdoc suite

The tests in tests/coverage-run-rustdoc also run instrumented doctests and include them in the coverage report. This avoids having to build rustdoc when only running the main coverage suite.

Crashes tests

tests/crashes serves as a collection of tests that are expected to cause the compiler to ICE, panic or crash in some other way, so that accidental fixes are tracked. This was formerly done at https://github.com/rust-lang/glacier but doing it inside the rust-lang/rust testsuite is more convenient.

It is imperative that a test in the suite causes rustc to ICE, panic, or crash in some other way. A test will "pass" if rustc exits with an exit status other than 1 or 0.

If you want to see verbose stdout/stderr, you need to set COMPILETEST_VERBOSE_CRASHES=1, e.g.

$ COMPILETEST_VERBOSE_CRASHES=1 ./x test tests/crashes/999999.rs --stage 1

When adding crashes from https://github.com/rust-lang/rust/issues, the issue number should be noted in the file name (12345.rs should suffice), and the file should also include a //@ known-bug: #4321 directive.
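For example, a crash test added for a hypothetical issue #123456 would live at tests/crashes/123456.rs and look roughly like:

//@ known-bug: #123456

fn main() {
    // minimized snippet that currently makes rustc ICE
}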

If you happen to fix one of the crashes, please move it to a fitting subdirectory in tests/ui and give it a meaningful name. Please also add a doc comment at the top of the file explaining why this test exists; even better if you can briefly explain how the example previously caused rustc to crash and what was done to prevent rustc from ICEing/panicking/crashing.

Adding

Fixes #NNNNN
Fixes #MMMMM

to the description of your pull request will ensure the corresponding tickets are closed automatically upon merge.

First, make sure that your fix actually fixes the root cause of the issue and not just a subset of it. The issue numbers can be found in the file name or in the //@ known-bug directive inside the test file.

Building auxiliary crates

It is common that some tests require additional auxiliary crates to be compiled. There are multiple directives to assist with that:

  • aux-build
  • aux-crate
  • aux-bin
  • aux-codegen-backend
  • proc-macro

aux-build will build a separate crate from the named source file. The source file should be in a directory called auxiliary beside the test file.

//@ aux-build: my-helper.rs

extern crate my_helper;
// ... You can use my_helper.

The aux crate will be built as a dylib if possible (unless on a platform that does not support them, or the no-prefer-dynamic header is specified in the aux file). The -L flag is used to find the extern crates.

aux-crate is very similar to aux-build. However, it uses the --extern flag to link to the extern crate, which makes the crate available in the extern prelude. That allows you to specify the additional syntax of the --extern flag, such as renaming a dependency. For example, // aux-crate:foo=bar.rs will compile auxiliary/bar.rs and make it available under the name foo within the test. This is similar to how Cargo does dependency renaming.

aux-bin is similar to aux-build but will build a binary instead of a library. The binary will be available in auxiliary/bin relative to the working directory of the test.

aux-codegen-backend is similar to aux-build, but will then pass the compiled dylib to -Zcodegen-backend when building the main file. This will only work for tests in tests/ui-fulldeps, since it requires the use of compiler crates.

Auxiliary proc-macro

If you want a proc-macro dependency, then you can use the proc-macro directive. This directive behaves just like aux-build, i.e. you should place the proc-macro test auxiliary file under an auxiliary folder under the same parent folder as the main test file. However, it also has four additional preset behaviors compared to aux-build for the proc-macro test auxiliary:

  1. The aux test file is built with --crate-type=proc-macro.
  2. The aux test file is built without -C prefer-dynamic, i.e. it will not try to produce a dylib for the aux crate.
  3. The aux crate is made available to the test file via extern prelude with --extern <aux_crate_name>. Note that since UI tests default to edition 2015, you still need to write extern crate <aux_crate_name> if you want to use the aux crate name in a use import, unless the main test file is using an edition that is 2018 or newer.
  4. The proc_macro crate is made available as an extern prelude module. The same edition 2015 vs newer edition distinction applies to extern crate proc_macro;.

For example, you might have a test tests/ui/cat/meow.rs and proc-macro auxiliary tests/ui/cat/auxiliary/whiskers.rs:

tests/ui/cat/
    meow.rs                 # main test file
    auxiliary/whiskers.rs   # auxiliary
// tests/ui/cat/meow.rs

//@ proc-macro: whiskers.rs

extern crate whiskers; // needed as ui test defaults to edition 2015

fn main() {
  whiskers::identity!();
}
// tests/ui/cat/auxiliary/whiskers.rs

extern crate proc_macro;
use proc_macro::*;

#[proc_macro]
pub fn identity(ts: TokenStream) -> TokenStream {
    ts
}

Note: The proc-macro header currently does not work with the build-aux-docs header for rustdoc tests. In that case, you will need to use the aux-build header, add #![crate_type = "proc-macro"], and use the //@ force-host and //@ no-prefer-dynamic headers in the proc-macro.

Revisions

Revisions allow a single test file to be used for multiple tests. This is done by adding a special directive at the top of the file:

//@ revisions: foo bar baz

This will result in the test being compiled (and tested) three times, once with --cfg foo, once with --cfg bar, and once with --cfg baz. You can therefore use #[cfg(foo)] etc within the test to tweak each of these results.

You can also customize directives and expected error messages to a particular revision. To do this, add [revision-name] after the //@ for directives, and after // for UI error annotations, like so:

// A flag to pass in only for cfg `foo`:
//@[foo]compile-flags: -Z verbose-internals

#[cfg(foo)]
fn test_foo() {
    let x: usize = 32_u32; //[foo]~ ERROR mismatched types
}

Multiple revisions can be specified in a comma-separated list, such as //[foo,bar,baz]~^.

In test suites that use the LLVM FileCheck tool, the current revision name is also registered as an additional prefix for FileCheck directives:

//@ revisions: NORMAL COVERAGE
//@[COVERAGE] compile-flags: -Cinstrument-coverage
//@[COVERAGE] needs-profiler-runtime

// COVERAGE:   @__llvm_coverage_mapping
// NORMAL-NOT: @__llvm_coverage_mapping

// CHECK: main
fn main() {}

Note that not all directives have meaning when customized to a revision. For example, the ignore-test directives (and all "ignore" directives) currently only apply to the test as a whole, not to particular revisions. The only directives that are intended to really work when customized to a revision are error patterns and compiler flags.

The following test suites support revisions:

  • ui
  • assembly
  • codegen
  • coverage
  • debuginfo
  • rustdoc UI tests
  • incremental (these are special in that they inherently cannot be run in parallel)

Ignoring unused revision names

Normally, revision names mentioned in other directives and error annotations must correspond to an actual revision declared in a revisions directive. This is enforced by a ./x test tidy check.

If a revision name needs to be temporarily removed from the revision list for some reason, the above check can be suppressed by adding the revision name to an //@ unused-revision-names: header instead.

Specifying an unused name of * (i.e. //@ unused-revision-names: *) will permit any unused revision name to be mentioned.

Compare modes

Compiletest can be run in different modes, called compare modes, which can be used to compare the behavior of all tests with different compiler flags enabled. This can help highlight what differences might appear with certain flags, and check for any problems that might arise.

To run the tests in a different mode, you need to pass the --compare-mode CLI flag:

./x test tests/ui --compare-mode=chalk

The possible compare modes are:

  • polonius — Runs with Polonius with -Zpolonius.
  • chalk — Runs with Chalk with -Zchalk.
  • split-dwarf — Runs with unpacked split-DWARF with -Csplit-debuginfo=unpacked.
  • split-dwarf-single — Runs with packed split-DWARF with -Csplit-debuginfo=packed.

See UI compare modes for more information about how UI tests support different output for different modes.

In CI, compare modes are only used in one Linux builder, and only with the following settings:

  • tests/debuginfo: Uses split-dwarf mode. This helps ensure that none of the debuginfo tests are affected when enabling split-DWARF.

Note that compare modes are separate from revisions. All revisions are tested when running ./x test tests/ui; compare modes, however, must be manually run individually via the --compare-mode flag.

UI tests

UI tests are a particular test suite of compiletest.

Introduction

The tests in tests/ui are a collection of general-purpose tests which primarily focus on validating the console output of the compiler, but can be used for many other purposes. For example, tests can also be configured to run the resulting program to verify its behavior.

If you need to work with #![no_std] cross-compiling tests, consult the minicore test auxiliary chapter.

General structure of a test

A test consists of a Rust source file located anywhere in the tests/ui directory, but they should be placed in a suitable sub-directory. For example, tests/ui/hello.rs is a basic hello-world test.

Compiletest will use rustc to compile the test, and compare the output against the expected output which is stored in a .stdout or .stderr file located next to the test. See Output comparison for more.

Additionally, errors and warnings should be annotated with comments within the source file. See Error annotations for more.

Compiletest directives in the form of special comments prefixed with //@ control how the test is compiled and what the expected behavior is.

Tests are expected to fail to compile, since most tests are testing compiler errors. You can change that behavior with a directive, see Controlling pass/fail expectations.

By default, a test is built as an executable binary. If you need a different crate type, you can use the #![crate_type] attribute to set it as needed.

Output comparison

UI tests store the expected output from the compiler in .stderr and .stdout snapshots next to the test. You normally generate these files with the --bless CLI option, and then inspect them manually to verify they contain what you expect.

The output is normalized to ignore unwanted differences, see the Normalization section. If the file is missing, then compiletest expects the corresponding output to be empty.

There can be multiple stdout/stderr files. The general form is:

*test-name*`.`*revision*`.`*compare_mode*`.`*extension*
  • test-name cannot contain dots. This is so that test output filenames have a predictable form that we can pattern match on in order to track stray test output files.
  • revision is the revision name. This is not included when not using revisions.
  • compare_mode is the compare mode. This will only be checked when the given compare mode is active. If the file does not exist, then compiletest will check for a file without the compare mode.
  • extension is the kind of output being checked:
    • stderr — compiler stderr
    • stdout — compiler stdout
    • run.stderr — stderr when running the test
    • run.stdout — stdout when running the test
    • 64bit.stderr — compiler stderr with stderr-per-bitwidth directive on a 64-bit target
    • 32bit.stderr — compiler stderr with stderr-per-bitwidth directive on a 32-bit target

A simple example would be foo.stderr next to a foo.rs test. A more complex example would be foo.my-revision.polonius.stderr.

There are several directives which will change how compiletest will check for output files:

  • stderr-per-bitwidth — checks separate output files based on the target pointer width. Consider using the normalize-stderr directive instead (see Normalization).
  • dont-check-compiler-stderr — Ignores stderr from the compiler.
  • dont-check-compiler-stdout — Ignores stdout from the compiler.

UI tests run with the -Zdeduplicate-diagnostics=no flag, which disables rustc's built-in diagnostic deduplication mechanism. This means you may see some duplicate messages in the output. This helps illuminate situations where duplicate diagnostics are being generated.

Normalization

The compiler output is normalized to eliminate output difference between platforms, mainly about filenames.

Compiletest makes the following replacements on the compiler output:

  • The directory where the test is defined is replaced with $DIR. Example: /path/to/rust/tests/ui/error-codes
  • The directory to the standard library source is replaced with $SRC_DIR. Example: /path/to/rust/library
  • Line and column numbers for paths in $SRC_DIR are replaced with LL:COL. This helps ensure that changes to the layout of the standard library do not cause widespread changes to the .stderr files. Example: $SRC_DIR/alloc/src/sync.rs:53:46
  • The base directory where the test's output goes is replaced with $TEST_BUILD_DIR. This only comes up in a few rare circumstances. Example: /path/to/rust/build/x86_64-unknown-linux-gnu/test/ui
  • Tabs are replaced with \t.
  • Backslashes (\) are converted to forward slashes (/) within paths (using a heuristic). This helps normalize differences with Windows-style paths.
  • CRLF newlines are converted to LF.
  • Error line annotations like //~ ERROR some message are removed.
  • Various v0 and legacy symbol hashes are replaced with placeholders like [HASH] or <SYMBOL_HASH>.

Additionally, the compiler is run with the -Z ui-testing flag which causes the compiler itself to apply some changes to the diagnostic output to make it more suitable for UI testing.

For example, it will anonymize line numbers in the output (line numbers prefixing each source line are replaced with LL). In extremely rare situations, this mode can be disabled with the directive //@ compile-flags: -Z ui-testing=no.

Note: The line and column numbers for --> lines pointing to the test are not normalized, and left as-is. This ensures that the compiler continues to point to the correct location, and keeps the stderr files readable. Ideally all line/column information would be retained, but small changes to the source causes large diffs, and more frequent merge conflicts and test errors.

Sometimes these built-in normalizations are not enough. In such cases, you may provide custom normalization rules using normalize-* directives, e.g.

//@ normalize-stdout-test: "foo" -> "bar"
//@ normalize-stderr-32bit: "fn\(\) \(32 bits\)" -> "fn\(\) \($$PTR bits\)"
//@ normalize-stderr-64bit: "fn\(\) \(64 bits\)" -> "fn\(\) \($$PTR bits\)"

This tells the test that, on 32-bit platforms, whenever the compiler writes fn() (32 bits) to stderr, it should be normalized to read fn() ($PTR bits) instead. Similarly for 64-bit. The replacement is performed by regexes using the default regex flavor provided by the regex crate.

The corresponding reference file will use the normalized output to test both 32-bit and 64-bit platforms:

...
   |
   = note: source type: fn() ($PTR bits)
   = note: target type: u16 (16 bits)
...

Please see ui/transmute/main.rs and main.stderr for a concrete usage example.

Besides normalize-stderr-32bit and -64bit, one may use any target information or stage supported by ignore-X here as well (e.g. normalize-stderr-windows or simply normalize-stderr-test for unconditional replacement).

Error annotations

Error annotations specify the errors that the compiler is expected to emit. They are "attached" to the line in source where the error is located.

fn main() {
    boom  //~ ERROR cannot find value `boom` in this scope [E0425]
}

Although UI tests have a .stderr file which contains the entire compiler output, UI tests require that errors are also annotated within the source. This redundancy helps avoid mistakes since the .stderr files are usually auto-generated. It also helps to directly see where the error spans are expected to point to by looking at one file instead of having to compare the .stderr file with the source. Finally, they ensure that no additional unexpected errors are generated.

They have several forms, but generally are a comment with the diagnostic level (such as ERROR) and a substring of the expected error output. You don't have to write out the entire message, just make sure to include the important part of the message to make it self-documenting.

The error annotation needs to match with the line of the diagnostic. There are several ways to match the message with the line (see the examples below):

  • ~: Associates the error level and message with the current line
  • ~^: Associates the error level and message with the previous error annotation line. Each caret (^) that you add adds a line to this, so ~^^^ is three lines above the error annotation line.
  • ~|: Associates the error level and message with the same line as the previous comment. This is more convenient than using multiple carets when there are multiple messages associated with the same line.

Example:

let _ = same_line; //~ ERROR undeclared variable
fn meow(_: [u8]) {}
//~^ ERROR unsized
//~| ERROR anonymous parameters

The space character between //~ (or other variants) and the subsequent text is negligible (i.e. there is no semantic difference between //~ ERROR and //~ERROR although the former is more common in the codebase).

Error annotation examples

Here are examples of error annotations on different lines of UI test source.

Positioned on error line

Use the //~ ERROR idiom:

fn main() {
    let x = (1, 2, 3);
    match x {
        (_a, _x @ ..) => {} //~ ERROR `_x @` is not allowed in a tuple
        _ => {}
    }
}

Positioned below error line

Use the //~^ idiom with a number of carets indicating the number of lines above. In the example below, the error line is four lines above the error annotation line, so four carets are included in the annotation.

fn main() {
    let x = (1, 2, 3);
    match x {
        (_a, _x @ ..) => {}  // <- the error is on this line
        _ => {}
    }
}
//~^^^^ ERROR `_x @` is not allowed in a tuple

Use same error line as defined on error annotation line above

Use the //~| idiom to define the same error line as the error annotation line above:

struct Binder(i32, i32, i32);

fn main() {
    let x = Binder(1, 2, 3);
    match x {
        Binder(_a, _x @ ..) => {}  // <- the error is on this line
        _ => {}
    }
}
//~^^^^ ERROR `_x @` is not allowed in a tuple struct
//~| ERROR this pattern has 1 field, but the corresponding tuple struct has 3 fields [E0023]

error-pattern

The error-pattern directive can be used for messages that don't have a specific span.

Let's think about this test:

fn main() {
    let a: *const [_] = &[1, 2, 3];
    unsafe {
        let _b = (*a)[3];
    }
}

We want to ensure this shows "index out of bounds" but we cannot use the ERROR annotation since the error doesn't have any span. Then it's time to use the error-pattern directive:

//@ error-pattern: index out of bounds
fn main() {
    let a: *const [_] = &[1, 2, 3];
    unsafe {
        let _b = (*a)[3];
    }
}

But for strict testing, try to use the ERROR annotation as much as possible.

Error levels

The error levels that you can have are:

  • ERROR
  • WARN or WARNING
  • NOTE
  • HELP and SUGGESTION

You are allowed to not include a level, but you should include it at least for the primary message.

The SUGGESTION level is used for specifying what the expected replacement text should be for a diagnostic suggestion.

UI tests use the -A unused flag by default to ignore all unused warnings, as unused warnings are usually not the focus of a test. However, simple code samples often have unused warnings. If the test is specifically testing an unused warning, just add the appropriate #![warn(unused)] attribute as needed.
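For example, a test that deliberately exercises an unused-variable warning might look like the following sketch (the exact warning text should come from the blessed .stderr file):

//@ check-pass
#![warn(unused)]

fn main() {
    let unused = 42; //~ WARN unused variable: `unused`
}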

cfg revisions

When using revisions, different messages can be conditionally checked based on the current revision. This is done by placing the revision cfg name in brackets like this:

//@ edition:2018
//@ revisions: mir thir
//@[thir] compile-flags: -Z thir-unsafeck

async unsafe fn f() {}

async fn g() {
    f(); //~ ERROR call to unsafe function is unsafe
}

fn main() {
    f(); //[mir]~ ERROR call to unsafe function is unsafe
}

In this example, the second error message is only emitted in the mir revision. The thir revision only emits the first error.

If the cfg causes the compiler to emit different output, then a test can have multiple .stderr files for the different outputs. In the example above, there would be a .mir.stderr and .thir.stderr file with the different outputs of the different revisions.

Note: cfg revisions also work inside the source code with #[cfg] attributes.

By convention, the FALSE cfg is used to have an always-false config.
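For example, an item that should be parsed but never actually compiled can be gated on it:

#[cfg(FALSE)]
fn never_compiled() {}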

Controlling pass/fail expectations

By default, a UI test is expected to generate a compile error because most of the tests are checking for invalid input and error diagnostics. However, you can also make UI tests where compilation is expected to succeed, and you can even run the resulting program. Just add one of the following directives:

  • Pass directives:
    • //@ check-pass — compilation should succeed but skip codegen (which is expensive and isn't supposed to fail in most cases).
    • //@ build-pass — compilation and linking should succeed but do not run the resulting binary.
    • //@ run-pass — compilation should succeed and running the resulting binary should also succeed.
  • Fail directives:
    • //@ check-fail — compilation should fail (the codegen phase is skipped). This is the default for UI tests.
    • //@ build-fail — compilation should fail during the codegen phase. This will run rustc twice, once to verify that it compiles successfully without the codegen phase, then a second time the full compile should fail.
    • //@ run-fail — compilation should succeed, but running the resulting binary should fail.

For run-pass and run-fail tests, by default the output of the program itself is not checked.

If you want to check the output of running the program, include the check-run-results directive. This will check for .run.stderr and .run.stdout files to compare against the actual output of the program.
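A small sketch: with the directives below, the program's output would be blessed into a .run.stdout file next to the test.

//@ run-pass
//@ check-run-results

fn main() {
    println!("hello from the test binary");
}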

Tests with the *-pass directives can be overridden with the --pass command-line option:

./x test tests/ui --pass check

The --pass option only affects UI tests. Using --pass check can run the UI test suite much faster (roughly twice as fast on my system), though obviously not exercising as much.

The ignore-pass directive can be used to ignore the --pass CLI flag if the test won't work properly with that override.

Known bugs

The known-bug directive may be used for tests that demonstrate a known bug that has not yet been fixed. Adding tests for known bugs is helpful for several reasons, including:

  1. Maintaining a functional test that can be conveniently reused when the bug is fixed.
  2. Providing a sentinel that will fail if the bug is incidentally fixed. This can alert the developer so they know that the associated issue has been fixed and can possibly be closed.

Do not include error annotations in a test with known-bug. The test should still include other normal directives and stdout/stderr files.

Test organization

When deciding where to place a test file, please try to find a subdirectory that best matches what you are trying to exercise. Do your best to keep things organized. Admittedly it can be difficult as some tests can overlap different categories, and the existing layout may not fit well.

Name the test by a concise description of what the test is checking. Avoid including the issue number in the test name. See best practices for a more in-depth discussion of this.

Ideally, the test should be added to a directory that helps identify what piece of code is being tested here (e.g., tests/ui/borrowck/reject-move-out-of-borrow-via-pat.rs)

When writing a new feature, you may want to create a subdirectory to store your tests. For example, if you are implementing RFC 1234 ("Widgets"), then it might make sense to put the tests in a directory like tests/ui/rfc1234-widgets/.

In other cases, there may already be a suitable directory.

Over time, the tests/ui directory has grown very large. There is a check in tidy that will ensure none of the subdirectories has more than 1000 entries. Having too many files causes problems because it isn't editor/IDE friendly and the GitHub UI won't show more than 1000 entries. However, since the tests/ui (UI test root directory) and tests/ui/issues directories already have more than 1000 entries, we set a different limit for those directories. So, please avoid putting a new test there and try to find a more relevant place.

For example, if your test is related to closures, you should put it in tests/ui/closures. When you reach the limit, you could increase it by tweaking the limit in the tidy check.

Rustfix tests

UI tests can validate that diagnostic suggestions apply correctly and that the resulting changes compile correctly. This can be done with the run-rustfix directive:

//@ run-rustfix
//@ check-pass
#![crate_type = "lib"]

pub struct not_camel_case {}
//~^ WARN `not_camel_case` should have an upper camel case name
//~| HELP convert the identifier to upper camel case
//~| SUGGESTION NotCamelCase

Rustfix tests should have a file with the .fixed extension which contains the source file after the suggestion has been applied.

  • When the test is run, compiletest first checks that the correct lint/warning is generated.
  • Then, it applies the suggestion and compares against .fixed (they must match).
  • Finally, the fixed source is compiled, and this compilation is required to succeed.

Usually when creating a rustfix test you will generate the .fixed file automatically with the x test --bless option.
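For the example above, the blessed .fixed file would look roughly like this (a sketch; the comments are retained and the struct has been renamed by the applied suggestion):

//@ run-rustfix
//@ check-pass
#![crate_type = "lib"]

pub struct NotCamelCase {}
//~^ WARN `not_camel_case` should have an upper camel case name
//~| HELP convert the identifier to upper camel case
//~| SUGGESTION NotCamelCase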

The run-rustfix directive will cause all suggestions to be applied, even if they are not MachineApplicable. If this is a problem, then you can add the rustfix-only-machine-applicable directive in addition to run-rustfix. This should be used if there is a mixture of different suggestion levels, and some of the non-machine-applicable ones do not apply cleanly.

Compare modes

Compare modes can be used to run all tests with different flags from what they are normally compiled with. In some cases, this might result in different output from the compiler. To support this, different output files can be saved which contain the output based on the compare mode.

For example, when using the Polonius mode, a test foo.rs will first look for expected output in foo.polonius.stderr, falling back to the usual foo.stderr if not found. This is useful as different modes can sometimes result in different diagnostics and behavior. This can help track which tests have differences between the modes, and to visually inspect those diagnostic differences.

If in the rare case you encounter a test that has different behavior, you can run something like the following to generate the alternate stderr file:

./x test tests/ui --compare-mode=polonius --bless

Currently none of the compare modes are checked in CI for UI tests.

rustc_* TEST attributes

The compiler defines several perma-unstable #[rustc_*] attributes gated behind the internal feature rustc_attrs that dump extra compiler-internal information. See the corresponding subsection in compiler debugging for more details.

They can be used in tests to more precisely, legibly and easily test internal compiler state in cases where it would otherwise be very hard to do the same with "user-facing" Rust alone. Indeed, one could say that this slightly abuses the term "UI" (user interface) and turns such UI tests from black-box tests into white-box ones. Use them carefully and sparingly.

Compiletest directives

FIXME(jieyouxu) completely revise this chapter.

Directives are special comments that tell compiletest how to build and interpret a test. They must appear before the Rust source in the test. They may also appear in rmake.rs or legacy Makefiles for run-make tests.

They are normally put after the short comment that explains the point of this test. Compiletest test suites use //@ to signal that a comment is a directive. For example, this test uses the //@ compile-flags command to specify a custom flag to give to rustc when the test is compiled:

// Test the behavior of `0 - 1` when overflow checks are disabled.

//@ compile-flags: -C overflow-checks=off

fn main() {
    let x = 0 - 1;
    ...
}

Directives can be standalone (like //@ run-pass) or take a value (like //@ compile-flags: -C overflow-checks=off).

Directives are written one directive per line: you cannot write multiple directives on the same line. For example, if you write //@ only-x86 only-windows then only-windows is interpreted as a comment, not a separate directive.
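The correct way to express that intent is to put each directive on its own line:

//@ only-x86
//@ only-windows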

Listing of compiletest directives

The following is a list of compiletest directives. Directives are linked to sections that describe the command in more detail if available. This list may not be exhaustive. Directives can generally be found by browsing the TestProps structure found in header.rs from the compiletest source.

Assembly

| Directive | Explanation | Supported test suites | Possible values |
|---|---|---|---|
| assembly-output | Assembly output kind to check | assembly | emit-asm, bpf-linker, ptx-linker |

Auxiliary builds

| Directive | Explanation | Supported test suites | Possible values |
|---|---|---|---|
| aux-bin | Build an aux binary, made available in auxiliary/bin relative to the test directory | All except run-make | Path to auxiliary .rs file |
| aux-build | Build a separate crate from the named source file | All except run-make | Path to auxiliary .rs file |
| aux-crate | Like aux-build but makes it available in the extern prelude | All except run-make | <extern_prelude_name>=<path/to/aux/file.rs> |
| aux-codegen-backend | Similar to aux-build but passes the compiled dylib to -Zcodegen-backend when building the main file | ui-fulldeps | Path to codegen backend file |
| proc-macro | Similar to aux-build, but for the aux it forces host and doesn't use -Cprefer-dynamic [1] | All except run-make | Path to auxiliary proc-macro .rs file |
| build-aux-docs | Build docs for auxiliaries as well | All except run-make | N/A |

[1]: Please see the Auxiliary proc-macro section in the compiletest chapter for specifics.

Controlling outcome expectations

See Controlling pass/fail expectations.

| Directive | Explanation | Supported test suites | Possible values |
|---|---|---|---|
| check-pass | Building (no codegen) should pass | ui, crashes, incremental | N/A |
| check-fail | Building (no codegen) should fail | ui, crashes | N/A |
| build-pass | Building should pass | ui, crashes, codegen, incremental | N/A |
| build-fail | Building should fail | ui, crashes | N/A |
| run-pass | Running the test binary should pass | ui, crashes, incremental | N/A |
| run-fail | Running the test binary should fail | ui, crashes | N/A |
| ignore-pass | Ignore --pass flag | ui, crashes, codegen, incremental | N/A |
| dont-check-failure-status | Don't check exact failure status (i.e. 1) | ui, incremental | N/A |
| failure-status | Check that the process exits with the given failure status | ui, crashes | Any u16 |
| should-ice | Check failure status is 101 | coverage, incremental | N/A |
| should-fail | Compiletest self-test | All | N/A |

Controlling output snapshots and normalizations

See Normalization, Output comparison and Rustfix tests for more details.

| Directive | Explanation | Supported test suites | Possible values |
|---|---|---|---|
| check-run-results | Check run-{pass,fail} output snapshot of the test binary | ui, crashes, incremental if run-pass | N/A |
| error-pattern | Check that output contains a regex pattern | ui, crashes, incremental if run-pass | Regex |
| check-stdout | Check stdout against error-patterns from running the test binary [2] | ui, crashes, incremental | N/A |
| normalize-stderr-32bit | Normalize actual stderr (for 32-bit platforms) with a rule "<raw>" -> "<normalized>" before comparing against snapshot | ui, incremental | "<RAW>" -> "<NORMALIZED>", <RAW>/<NORMALIZED> is regex capture and replace syntax |
| normalize-stderr-64bit | Normalize actual stderr (for 64-bit platforms) with a rule "<raw>" -> "<normalized>" before comparing against snapshot | ui, incremental | "<RAW>" -> "<NORMALIZED>", <RAW>/<NORMALIZED> is regex capture and replace syntax |
| normalize-stderr-test | Normalize actual stderr with a rule "<raw>" -> "<normalized>" before comparing against snapshot | ui, incremental | "<RAW>" -> "<NORMALIZED>", <RAW>/<NORMALIZED> is regex capture and replace syntax |
| normalize-stdout-test | Normalize actual stdout with a rule "<raw>" -> "<normalized>" before comparing against snapshot | ui, incremental | "<RAW>" -> "<NORMALIZED>", <RAW>/<NORMALIZED> is regex capture and replace syntax |
| dont-check-compiler-stderr | Don't check actual compiler stderr vs stderr snapshot | ui | N/A |
| dont-check-compiler-stdout | Don't check actual compiler stdout vs stdout snapshot | ui | N/A |
| run-rustfix | Apply all suggestions via rustfix, snapshot fixed output, and check fixed output builds | ui | N/A |
| rustfix-only-machine-applicable | run-rustfix but only machine-applicable suggestions | ui | N/A |
| exec-env | Env var to set when executing a test | ui, crashes | <KEY>=<VALUE> |
| unset-exec-env | Env var to unset when executing a test | ui, crashes | Any env var name |
| stderr-per-bitwidth | Generate a stderr snapshot for each bitwidth | ui | N/A |
| forbid-output | A pattern which must not appear in cfail output | incremental | Regex pattern |
| run-flags | Flags passed to the test executable | ui | Arbitrary flags |
| known-bug | No error annotation needed due to known bug | ui, crashes, incremental | Issue number #123456 |

[2]: Presently this has a weird quirk where the test binary's stdout and stderr get concatenated and error-patterns are matched on the combined output, which is, to say the least, slightly questionable.

Controlling when tests are run

These directives are used to ignore the test in some situations, which means the test won't be compiled or run.

  • ignore-X where X is a target detail or stage will ignore the test accordingly (see below)
  • only-X is like ignore-X, but will only run the test on that target or stage
  • ignore-test always ignores the test. This can be used to temporarily disable a test if it is currently not working, but you want to keep it in tree to re-enable it later.

Some examples of X in ignore-X or only-X:

  • A full target triple: aarch64-apple-ios
  • Architecture: aarch64, arm, mips, wasm32, x86_64, x86, ...
  • OS: android, emscripten, freebsd, ios, linux, macos, windows, ...
  • Environment (fourth word of the target triple): gnu, msvc, musl
  • WASM: wasm32-bare matches wasm32-unknown-unknown. emscripten also matches that target as well as the emscripten targets.
  • Pointer width: 32bit, 64bit
  • Endianness: endian-big
  • Stage: stage0, stage1, stage2
  • Channel: stable, beta
  • When cross compiling: cross-compile
  • When remote testing is used: remote
  • When particular debuggers are being tested: cdb, gdb, lldb
  • When particular debugger versions are matched: ignore-gdb-version
  • Specific compare modes: compare-mode-polonius, compare-mode-chalk, compare-mode-split-dwarf, compare-mode-split-dwarf-single
  • The two different test modes used by coverage tests: ignore-coverage-map, ignore-coverage-run

The following directives will check rustc build settings and target settings:

  • needs-asm-support — ignores if it is running on a target that doesn't have stable support for asm!
  • needs-profiler-runtime — ignores the test if the profiler runtime was not enabled for the target (build.profiler = true in rustc's config.toml)
  • needs-sanitizer-support — ignores if the sanitizer support was not enabled for the target (sanitizers = true in rustc's config.toml)
  • needs-sanitizer-{address,hwaddress,leak,memory,thread} — ignores if the corresponding sanitizer is not enabled for the target (AddressSanitizer, hardware-assisted AddressSanitizer, LeakSanitizer, MemorySanitizer or ThreadSanitizer respectively)
  • needs-run-enabled — ignores if it is a test that gets executed, and running has been disabled. Running tests can be disabled with the x test --run=never flag, or running on fuchsia.
  • needs-unwind — ignores if the target does not support unwinding
  • needs-rust-lld — ignores if the rust lld support is not enabled (rust.lld = true in config.toml)
  • needs-threads — ignores if the target does not have threading support
  • needs-symlink — ignores if the target does not support symlinks. This can be the case on Windows if the developer did not enable privileged symlink permissions.
  • ignore-std-debug-assertions — ignores if std was built with debug assertions.
  • needs-std-debug-assertions — ignores if std was not built with debug assertions.
  • ignore-rustc-debug-assertions — ignores if rustc was built with debug assertions.
  • needs-rustc-debug-assertions — ignores if rustc was not built with debug assertions.
  • needs-target-has-atomic — ignores if target does not have support for all specified atomic widths, e.g. the test with //@ needs-target-has-atomic: 8, 16, ptr will only run if it supports the comma-separated list of atomic widths.

The following directives will check LLVM support:

  • no-system-llvm — ignores if the system llvm is used
  • exact-llvm-major-version: 19 — ignores if the llvm major version does not match the specified llvm major version.
  • min-llvm-version: 13.0 — ignored if the LLVM version is less than the given value
  • min-system-llvm-version: 12.0 — ignored if using a system LLVM and its version is less than the given value
  • max-llvm-major-version: 19 — ignored if the LLVM major version is higher than the given major version
  • ignore-llvm-version: 9.0 — ignores a specific LLVM version
  • ignore-llvm-version: 7.0 - 9.9.9 — ignores LLVM versions in a range (inclusive)
  • needs-llvm-components: powerpc — ignores if the specific LLVM component was not built. Note: The test will fail on CI (when COMPILETEST_REQUIRE_ALL_LLVM_COMPONENTS is set) if the component does not exist.
  • needs-forced-clang-based-tests — test is ignored unless the environment variable RUSTBUILD_FORCE_CLANG_BASED_TESTS is set, which enables building clang alongside LLVM
    • This is only set in two CI jobs (x86_64-gnu-debug and aarch64-gnu-debug), which only runs a subset of run-make tests. Other tests with this directive will not run at all, which is usually not what you want.
    • Notably, the aarch64-gnu-debug CI job currently only runs run-make tests which additionally contain clang in their test name.

See also Debuginfo tests for directives for ignoring debuggers.

Affecting how tests are built

  • compile-flags — Flags passed to rustc when building the test or aux file. Supported suites: all except run-make. Possible values: any valid rustc flags, e.g. -Awarnings -Dfoo. Cannot be -Cincremental.
  • edition — Alias for compile-flags: --edition=xxx. Supported suites: all except run-make. Possible values: any valid --edition value.
  • rustc-env — Env var to set when running rustc. Supported suites: all except run-make. Possible values: <KEY>=<VALUE>.
  • unset-rustc-env — Env var to unset when running rustc. Supported suites: all except run-make. Possible values: any env var name.
  • incremental — Proper incremental support for tests outside of the incremental test suite. Supported suites: ui, crashes. Possible values: N/A.
  • no-prefer-dynamic — Don't use -C prefer-dynamic, and don't build as a dylib via a --crate-type=dylib preset flag. Supported suites: ui, crashes. Possible values: N/A.
Tests (outside of run-make) that want incremental compilation but are not in the incremental test suite must not pass -C incremental via compile-flags; they must instead use the //@ incremental directive.

Consider writing the test as a proper incremental test instead.
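
As a quick sketch (with hypothetical values, simply to show the syntax), several of these directives can appear together at the top of a test file:

//@ edition: 2021
//@ compile-flags: -Awarnings
//@ rustc-env:RUST_BACKTRACE=0

fn main() {}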

Rustdoc

  • doc-flags — Flags passed to rustdoc when building the test or aux file. Supported suites: rustdoc, js-doc-test, rustdoc-json. Possible values: any valid rustdoc flags.

FIXME(rustdoc): what does check-test-line-numbers-match do?

Asked in https://rust-lang.zulipchat.com/#narrow/stream/266220-t-rustdoc/topic/What.20is.20the.20.60check-test-line-numbers-match.60.20directive.3F.

Pretty printing

See Pretty-printer.

Misc directives

  • no-auto-check-cfg — disable auto check-cfg (only for --check-cfg tests)
  • revisions — compile the test multiple times, once per named revision (see the example after this list)
  • unused-revision-names — suppress tidy checks for mentioning unknown revision names
  • forbid-output — incremental cfail rejects output pattern
  • should-ice — incremental cfail should ICE
  • reference — an annotation linking to a rule in the reference
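
For example, here is a hypothetical test header that uses revisions together with per-revision directives (the [...] prefix restricts a directive to one revision):

//@ revisions: edition2015 edition2021
//@[edition2015] edition: 2015
//@[edition2021] edition: 2021

fn main() {}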

Tool-specific directives

The following directives affect how certain command-line tools are invoked, in test suites that use those tools:

Substitutions

Directive values support substituting a few variables which will be replaced with their corresponding value. For example, if you need to pass a compiler flag with a path to a specific file, something like the following could work:

//@ compile-flags: --remap-path-prefix={{src-base}}=/the/src

Where the sentinel {{src-base}} will be replaced with the appropriate path described below:

  • {{cwd}}: The directory where compiletest is run from. This may not be the root of the checkout, so you should avoid using it where possible.
    • Examples: /path/to/rust, /path/to/build/root
  • {{src-base}}: The directory where the test is defined. This is equivalent to $DIR for output normalization.
    • Example: /path/to/rust/tests/ui/error-codes
  • {{build-base}}: The base directory where the test's output goes. This is equivalent to $TEST_BUILD_DIR for output normalization.
    • Example: /path/to/rust/build/x86_64-unknown-linux-gnu/test/ui
  • {{rust-src-base}}: The sysroot directory where libstd/libcore/... are located
  • {{sysroot-base}}: Path of the sysroot directory used to build the test.
    • Mainly intended for ui-fulldeps tests that run the compiler via API.
  • {{target-linker}}: Linker that would be passed to -Clinker for this test, or blank if no linker override is active.
    • Mainly intended for ui-fulldeps tests that run the compiler via API.
  • {{target}}: The target the test is compiling for
    • Example: x86_64-unknown-linux-gnu

See tests/ui/commandline-argfile.rs for an example of a test that uses this substitution.

Adding a directive

One would add a new directive if there is a need to define some test property or behavior on an individual, test-by-test basis. A directive property serves as the directive's backing store (holds the command's current value) at runtime.

To add a new directive property:

  1. Look for the pub struct TestProps declaration in src/tools/compiletest/src/header.rs and add the new public property to the end of the declaration.
  2. Look for the impl TestProps implementation block immediately following the struct declaration and initialize the new property to its default value.

Adding a new directive parser

When compiletest encounters a test file, it parses the file a line at a time by calling every parser defined in the Config struct's implementation block, also in src/tools/compiletest/src/header.rs (note that the Config struct's declaration block is found in src/tools/compiletest/src/common.rs). TestProps's load_from() method will try passing the current line of text to each parser, which, in turn, typically checks whether the line begins with a particular commented (//@) directive such as //@ must-compile-successfully or //@ failure-status. Whitespace after the comment marker is optional.

Parsers will override a given directive property's default value merely by the directive being present in the test file, or by a parameter value being specified for it, depending on the directive.

Parsers defined in impl Config are typically named parse_<directive-name> (note kebab-case <directive-command> transformed to snake-case <directive_command>). impl Config also defines several 'low-level' parsers which make it simple to parse common patterns like simple presence or not (parse_name_directive()), directive:parameter(s) (parse_name_value_directive()), optional parsing only if a particular cfg attribute is defined (has_cfg_prefix()) and many more. The low-level parsers are found near the end of the impl Config block; be sure to look through them and their associated parsers immediately above to see how they are used to avoid writing additional parsing code unnecessarily.

As a concrete example, here is the implementation for the parse_failure_status() parser, in src/tools/compiletest/src/header.rs:

@@ -232,6 +232,7 @@ pub struct TestProps {
     // customized normalization rules
     pub normalize_stdout: Vec<(String, String)>,
     pub normalize_stderr: Vec<(String, String)>,
+    pub failure_status: i32,
 }

 impl TestProps {
@@ -260,6 +261,7 @@ impl TestProps {
             run_pass: false,
             normalize_stdout: vec![],
             normalize_stderr: vec![],
+            failure_status: 101,
         }
     }

@@ -383,6 +385,10 @@ impl TestProps {
             if let Some(rule) = config.parse_custom_normalization(ln, "normalize-stderr") {
                 self.normalize_stderr.push(rule);
             }
+
+            if let Some(code) = config.parse_failure_status(ln) {
+                self.failure_status = code;
+            }
         });

         for key in &["RUST_TEST_NOCAPTURE", "RUST_TEST_THREADS"] {
@@ -488,6 +494,13 @@ impl Config {
         self.parse_name_directive(line, "pretty-compare-only")
     }

+    fn parse_failure_status(&self, line: &str) -> Option<i32> {
+        match self.parse_name_value_directive(line, "failure-status") {
+            Some(code) => code.trim().parse::<i32>().ok(),
+            _ => None,
+        }
+    }

Implementing the behavior change

When a test invokes a particular directive, it is expected that some behavior will change as a result. What behavior, obviously, will depend on the purpose of the directive. In the case of failure-status, the behavior that changes is that compiletest expects the failure code defined by the directive invoked in the test, rather than the default value.

Although this is specific to failure-status (every directive will have a different implementation for invoking its behavior change), it is perhaps helpful to see the behavior-change implementation of one case, simply as an example. To implement failure-status, the check_correct_failure_status() function found in the TestCx implementation block, located in src/tools/compiletest/src/runtest.rs, was modified as shown below:

@@ -295,11 +295,14 @@ impl<'test> TestCx<'test> {
     }

     fn check_correct_failure_status(&self, proc_res: &ProcRes) {
-        // The value the Rust runtime returns on failure
-        const RUST_ERR: i32 = 101;
-        if proc_res.status.code() != Some(RUST_ERR) {
+        let expected_status = Some(self.props.failure_status);
+        let received_status = proc_res.status.code();
+
+        if expected_status != received_status {
             self.fatal_proc_rec(
-                &format!("failure produced the wrong error: {}", proc_res.status),
+                &format!("Error: expected failure status ({:?}) but received status {:?}.",
+                         expected_status,
+                         received_status),
                 proc_res,
             );
         }
@@ -320,7 +323,6 @@ impl<'test> TestCx<'test> {
         );

         let proc_res = self.exec_compiled_test();
-
         if !proc_res.status.success() {
             self.fatal_proc_rec("test run failed!", &proc_res);
         }
@@ -499,7 +501,6 @@ impl<'test> TestCx<'test> {
                 expected,
                 actual
             );
-            panic!();
         }
     }

Note the use of self.props.failure_status to access the directive property. In tests which do not specify the failure status directive, self.props.failure_status will evaluate to the default value of 101 at the time of this writing. But for a test which specifies a directive of, for example, //@ failure-status: 1, self.props.failure_status will evaluate to 1, as parse_failure_status() will have overridden the TestProps default value, for that test specifically.
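
For example, a hypothetical ui test that expects its compiled binary to exit with code 1 instead of the default 101 could look like this:

//@ run-fail
//@ failure-status: 1

fn main() {
    // Exit with a non-panic failure code; compiletest compares it against
    // the value given in the failure-status directive.
    std::process::exit(1);
}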

minicore test auxiliary: using core stubs

tests/auxiliary/minicore.rs is a test auxiliary for ui/codegen/assembly test suites. It provides core stubs for tests that need to build for cross-compiled targets but do not need/want to run.

A test can use minicore by specifying the //@ add-core-stubs directive. Then, mark the test with #![feature(no_core)] + #![no_std] + #![no_core]. Due to Edition 2015 extern prelude rules, you will probably need to declare minicore as an extern crate.

Due to the no_std + no_core nature of these tests, //@ add-core-stubs implies and requires that the test will be built with -C panic=abort. Unwinding panics are not supported.

If you find a core item to be missing from the minicore stub, consider adding it to the test auxiliary if it's likely to be used or is already needed by more than one test.

Please note that minicore is only intended for core items, and explicitly not std or alloc items, because core items are applicable to a wider range of tests.

Example codegen test that uses minicore

//@ add-core-stubs
//@ revisions: meow bark
//@[meow] compile-flags: --target=x86_64-unknown-linux-gnu
//@[meow] needs-llvm-components: x86
//@[bark] compile-flags: --target=wasm32-unknown-unknown
//@[bark] needs-llvm-components: webassembly

#![crate_type = "lib"]
#![feature(no_core)]
#![no_std]
#![no_core]

extern crate minicore;
use minicore::*;

struct Meow;
impl Copy for Meow {} // `Copy` here is provided by `minicore`

// CHECK-LABEL: meow
#[unsafe(no_mangle)]
fn meow() {}

Ecosystem testing

Rust tests integration with real-world code in the ecosystem to catch regressions and make informed decisions about the evolution of the language.

Testing methods

Crater

Crater is a tool which runs tests on many thousands of public projects. This tool has its own separate infrastructure for running, and is not run as part of CI. See the Crater chapter for more details.

cargotest

cargotest is a small tool which runs cargo test on a few sample projects (such as servo, ripgrep, tokei, etc.). This runs as part of CI and ensures there aren't any significant regressions.

Example: ./x test src/tools/cargotest

Large OSS Project builders

We have CI jobs that build large open-source Rust projects as regression tests in CI. Our integration jobs currently build Fuchsia and Rust for Linux; both are described in their own chapters below.

Crater

Crater is a tool for compiling and running tests for every crate on crates.io (and a few on GitHub). It is mainly used for checking the extent of breakage when implementing potentially breaking changes and ensuring lack of breakage by running beta vs stable compiler versions.

When to run Crater

You should request a crater run if your PR makes large changes to the compiler or could cause breakage. If you are unsure, feel free to ask your PR's reviewer.

Requesting Crater Runs

The Rust team maintains a few machines that can be used to run crater on the changes introduced by a PR. If your PR needs a crater run, leave a comment for the triage team in the PR thread. Please inform the team whether you require a "check-only" crater run, a "build-only" crater run, or a "build-and-test" crater run. The difference is primarily in time; the conservative option (if you're not sure) is to go for the build-and-test run. If your changes will only have an effect at compile-time (e.g., implementing a new trait), then you only need a check run.

Your PR will be enqueued by the triage team and the results will be posted when they are ready. Check runs take around 3-4 days, with the other two taking 5-6 days on average.

While crater is really useful, it is also important to be aware of a few caveats:

  • Not all code is on crates.io! There is a lot of code in repos on GitHub and elsewhere. Also, companies may not wish to publish their code. Thus, a successful crater run is not a magically green light that there will be no breakage; you still need to be careful.

  • Crater only runs Linux builds on x86_64. Thus, other architectures and platforms are not tested. Critically, this includes Windows.

  • Many crates are not tested. This could be for a lot of reasons, including that the crate doesn't compile any more (e.g. used old nightly features), has broken or flaky tests, requires network access, or other reasons.

  • Before crater can be run, @bors try needs to succeed in building artifacts. This means that if your code doesn't compile, you cannot run crater.

Fuchsia integration tests

Fuchsia is an open-source operating system with about 2 million lines of Rust code [1]. It has caught a large number of regressions in the past and was subsequently included in CI.

Building Fuchsia in CI

Fuchsia builds as part of the suite of bors tests that run before a pull request is merged.

If you are worried that a pull request might break the Fuchsia builder and want to test it out before submitting it to the bors queue, simply add this line to your PR description:

try-job: x86_64-fuchsia

Then when you @bors try it will pick the job that builds Fuchsia.

Building Fuchsia locally

Because Fuchsia uses languages other than Rust, it does not use Cargo as a build system. It also requires the toolchain build to be configured in a certain way.

The recommended way to build Fuchsia is to use the Docker scripts that check out and run a Fuchsia build for you. If you've run Docker tests before, you can simply run this command from your Rust checkout to download and build Fuchsia using your local Rust toolchain.

src/ci/docker/run.sh x86_64-fuchsia

See the Testing with Docker chapter for more details on how to run and debug jobs with Docker.

Note that a Fuchsia checkout is large – as of this writing, a checkout and build takes 46G of space – and as you might imagine, it takes a while to complete.

Modifying the Fuchsia checkout

The main reason you would want to build Fuchsia locally is because you need to investigate a regression. After running a Docker build, you'll find the Fuchsia checkout inside the obj/fuchsia directory of your Rust checkout. If you modify the KEEP_CHECKOUT line in the build-fuchsia.sh script to KEEP_CHECKOUT=1, you can change the checkout as needed and rerun the build command above. This will reuse all the build results from before.

You can find more options to customize the Fuchsia checkout in the build-fuchsia.sh script.

Customizing the Fuchsia build

You can find more info about the options used to build Fuchsia in Rust CI in the build_fuchsia_from_rust_ci.sh script invoked by build-fuchsia.sh.

The Fuchsia build system uses GN, a metabuild system that generates Ninja files and then hands off the work of running the build to Ninja.

Fuchsia developers use fx to run builds and perform other development tasks. This tool is located in .jiri_root/bin of the Fuchsia checkout; you may need to add this to your $PATH for some workflows.

There are a few fx subcommands that are relevant, including:

  • fx set accepts build arguments, writes them to out/default/args.gn, and runs GN.
  • fx build builds the Fuchsia project using Ninja. It will automatically pick up changes to build arguments and rerun GN. By default it builds everything, but it also accepts target paths to build specific targets (see below).
  • fx clippy runs Clippy on specific Rust targets (or all of them). We use this in the Rust CI build to avoid running codegen on most Rust targets. Underneath it invokes Ninja, just like fx build. The clippy results are saved in json files inside the build output directory before being printed.

Target paths

GN uses paths like the following to identify build targets:

//src/starnix/kernel:starnix_core

The initial // means the root of the checkout, and the remaining slashes are directory names. The string after : is the target name of a target defined in the BUILD.gn file of that directory.

The target name can be omitted if it is the same as the directory name. In other words, //src/starnix/kernel is the same as //src/starnix/kernel:kernel.

These target paths are used inside BUILD.gn files to reference dependencies, and can also be used in fx build.

Modifying compiler flags

You can put custom compiler flags inside a GN config that is added to a target. As a simple example:

config("everybody_loops") {
    rustflags = [ "-Zeverybody-loops" ]
}

rustc_binary("example") {
    crate_root = "src/bin.rs"
    # ...existing keys here...
    configs += [ ":everybody_loops" ]
}

This will add the flag -Zeverybody-loops to rustc when building the example target. Note that you can also use public_configs for a config to be added to every target that depends on that target.

If you want to add a flag to every Rust target in the build, you can add rustflags to the //build/config:compiler config or to the OS-specific configs referenced in that file. Note that cflags and ldflags are ignored on Rust targets.

Running ninja and rustc commands directly

Going down one layer, fx build invokes ninja, which in turn eventually invokes rustc. All build actions are run inside the out directory, which is usually out/default inside the Fuchsia checkout.

You can get ninja to print the actual command it invokes by forcing that command to fail, e.g. by adding a syntax error to one of the source files of the target. Once you have the command, you can run it from inside the output directory.

After changing the toolchain itself, the build setting rustc_version_string in out/default/args.gn needs to be changed so that fx build or ninja will rebuild all the Rust targets. This can be done in a text editor and the contents of the string do not matter, as long as it changes from one build to the next. build_fuchsia_from_rust_ci.sh does this for you by hashing the toolchain directory.

The Fuchsia website has more detailed documentation of the build system.

Other tips and tricks

When using build_fuchsia_from_rust_ci.sh you can comment out the fx set command after the initial run so it won't rerun GN each time. If you do this you can also comment out the version_string line to save a couple seconds.

You can export NINJA_PERSISTENT_MODE=1 to get faster ninja startup times after the initial build.

Fuchsia target support

To learn more about Fuchsia target support, see the Fuchsia chapter in the rustc book.

[1] As of June 2024, Fuchsia had about 2 million lines of first-party Rust code and a roughly equal amount of third-party code, as counted by tokei (excluding comments and blanks).

Rust for Linux integration tests

Rust for Linux (RfL) is an effort for adding support for the Rust programming language into the Linux kernel.

Building Rust for Linux in CI

Rust for Linux builds as part of the suite of bors tests that run before a pull request is merged.

The workflow builds a stage1 sysroot of the Rust compiler, downloads the Linux kernel, and tries to compile several Rust for Linux drivers and examples using this sysroot. RfL uses several unstable compiler/language features, so this workflow notifies us if a given compiler change would break it.

If you are worried that a pull request might break the Rust for Linux builder and want to test it out before submitting it to the bors queue, simply add this line to your PR description:

try-job: x86_64-rust-for-linux

Then when you @bors try it will pick the job that builds the Rust for Linux integration.

What to do in case of failure

If a PR breaks the Rust for Linux CI job, then:

  • If the breakage was unintentional and seems spurious, then let RfL know and retry.
    • If the PR is urgent and retrying doesn't fix it, then disable the CI job temporarily (comment out the image: x86_64-rust-for-linux job in src/ci/github-actions/jobs.yml).
  • If the breakage was unintentional, then change the PR to resolve the breakage.
  • If the breakage was intentional, then let RfL know and discuss what the kernel will need to change.
    • If the PR is urgent, then disable the CI job temporarily (comment out the image: x86_64-rust-for-linux job in src/ci/github-actions/jobs.yml).
    • If the PR can wait a few days, then wait for RfL maintainers to provide a new Linux kernel commit hash with the needed changes done, and apply it to the PR, which would confirm the changes work (update the LINUX_VERSION environment variable in src/ci/docker/scripts/rfl-build.sh).

If you need to contact the RfL developers, you can ping the Rust for Linux ping group to ask for help:

@rustbot ping rfl

Performance testing

rustc-perf

A lot of work is put into improving the performance of the compiler and preventing performance regressions.

The rustc-perf project provides several services for testing and tracking performance. It provides hosted infrastructure for running benchmarks as a service. At this time, only x86_64-unknown-linux-gnu builds are tracked.

A "perf run" is used to compare the performance of the compiler in different configurations for a large collection of popular crates. Different configurations include "fresh builds", builds with incremental compilation, etc.

The result of a perf run is a comparison between two versions of the compiler (by their commit hashes).

You can also use rustc-perf to manually benchmark and profile the compiler locally.

Automatic perf runs

After every PR is merged, a suite of benchmarks is run against the compiler. The results are tracked over time on the https://perf.rust-lang.org/ website. Any changes are noted in a comment on the PR.

Manual perf runs

Additionally, performance tests can be run before a PR is merged on an as-needed basis. You should request a perf run if your PR may affect performance, especially if it can affect performance adversely.

To evaluate the performance impact of a PR, write this comment on the PR:

@bors try @rust-timer queue

Note: Only users authorized to do perf runs are allowed to post this comment. Teams that are allowed to use it are tracked in the Teams repository with the perf = true value in the [permissions] section (and bors permissions are also required). If you are not on one of those teams, feel free to ask for someone to post it for you (either on Zulip or ask the assigned reviewer).

This will first tell bors to do a "try" build, which does a full release build for x86_64-unknown-linux-gnu. After the build finishes, it will place it in the queue to run the performance suite against it. After the performance tests finish, the bot will post a comment on the PR with a summary and a link to a full report.

If you want to do a perf run for an already built artifact (e.g. for a previous try build that wasn't benchmarked yet), you can run this instead:

@rust-timer build <commit-sha>

You cannot benchmark the same artifact twice though.

More information about the available perf bot commands can be found here.

More details about the benchmarking process itself are available in the perf collector documentation.

Suggest tests tool

This chapter is about the internals of, and contribution instructions for, the suggest-tests tool. For a high-level overview of the tool, see this section. This tool is currently in a beta state and is tracked by this issue on GitHub. Currently the set of tests it will suggest is very limited in scope; we are looking to expand this (contributions welcome!).

Internals

The tool is defined in a separate crate (src/tools/suggest-tests) which outputs suggestions that are parsed by a shim in bootstrap (src/bootstrap/src/core/build_steps/suggest.rs). The only notable thing the bootstrap shim does is (when invoked with the --run flag) use bootstrap's internal mechanisms to create a new Builder and use it to invoke the suggested commands. The suggest-tests crate is where the fun happens: two kinds of suggestions are defined, "static" and "dynamic" suggestions.

Static suggestions

Defined here. Static suggestions are simple: they are just globs which map to an x command. In suggest-tests, this is implemented with a simple macro_rules macro.

Dynamic suggestions

Defined here. These are more complicated than static suggestions and are implemented as functions with the following signature: fn(&Path) -> Vec<Suggestion>. In other words, each dynamic suggestion takes the path of a modified file and (after running arbitrary Rust code) can return any number of suggestions, or none. Dynamic suggestions are useful for situations where fine-grained control over suggestions is needed. For example, modifications to the compiler/xyz/ path should trigger the x test compiler/xyz suggestion. In the future, dynamic suggestions might even read file contents to determine whether (and which) tests should run.
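
To make the shape concrete, here is a minimal, self-contained sketch of such a function. It only illustrates the signature and control flow; the Suggestion type below is a placeholder, and the real type and its constructor in src/tools/suggest-tests differ.

use std::path::Path;

// Placeholder standing in for the real `Suggestion` type defined in
// src/tools/suggest-tests; its actual fields and constructor differ.
struct Suggestion {
    cmd: String,
}

// A dynamic suggestion: map a modified file to zero or more suggested
// `x` invocations.
fn suggest_for_path(path: &Path) -> Vec<Suggestion> {
    let mut suggestions = Vec::new();
    if path.starts_with("compiler/rustc_parse") {
        suggestions.push(Suggestion { cmd: "test compiler/rustc_parse".to_string() });
    }
    suggestions
}

fn main() {
    let suggestions = suggest_for_path(Path::new("compiler/rustc_parse/src/lib.rs"));
    for s in &suggestions {
        println!("x {}", s.cmd);
    }
}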

Adding a suggestion

The following steps should serve as a rough guide to add suggestions to suggest-tests (very welcome!):

  1. Determine the rules for your suggestion. Does it operate on a single path, or does it match globs? Does it need fine-grained control over the resulting command, or does "one size fit all"?
  2. Based on the previous step, decide if your suggestion should be implemented as either static or dynamic.
  3. Implement the suggestion. If it is dynamic then a test is highly recommended, to verify that your logic is correct and to give an example of the suggestion. See the tests.rs file.
  4. Open a PR implementing your suggestion. (TODO: add example PR)

Miscellaneous testing-related info

RUSTC_BOOTSTRAP and stability

This is a bootstrap/compiler implementation detail, but it can also be useful for testing:

  • RUSTC_BOOTSTRAP=1 will "cheat" and bypass usual stability checking, allowing you to use unstable features and CLI flags on a stable rustc.
  • RUSTC_BOOTSTRAP=-1 will force a given rustc to pretend that it is a stable compiler, even if it's actually a nightly rustc. This is useful because some behaviors of the compiler (e.g. diagnostics) can differ depending on whether the compiler is nightly or not.

In ui tests and other test suites that support //@ rustc-env, you can specify

// Force unstable features to be usable on stable rustc
//@ rustc-env:RUSTC_BOOTSTRAP=1

// Or force nightly rustc to pretend it is a stable rustc
//@ rustc-env:RUSTC_BOOTSTRAP=-1

For run-make tests, //@ rustc-env is not supported. You can do something like the following for individual rustc invocations.

use run_make_support::rustc;

fn main() {
    rustc()
        // Pretend that I am very stable
        .env("RUSTC_BOOTSTRAP", "-1")
        //...
        .run();
}

Debugging the compiler

This chapter contains a few tips to debug the compiler. These tips aim to be useful no matter what you are working on. Some of the other chapters have advice about specific parts of the compiler (e.g. the Queries Debugging and Testing chapter or the LLVM Debugging chapter).

Configuring the compiler

By default, rustc is built without most debug information. To enable debug info, set debug = true in your config.toml.

Setting debug = true turns on many different debug options (e.g., debug-assertions, debug-logging, etc.) which can be individually tweaked if you want to, but many people simply set debug = true.

If you want to use GDB to debug rustc, set the following options in your config.toml:

[rust]
debug = true
debuginfo-level = 2

NOTE: This will use a lot of disk space (upwards of 35GB), and will take a lot more compile time. With debuginfo-level = 1 (the default when debug = true), you will be able to track the execution path, but will lose the symbol information for debugging.

The default configuration enables symbol-mangling-version v0. This requires at least GDB v10.2; otherwise, you need to disable new-symbol-mangling in config.toml:

[rust]
new-symbol-mangling = false

See the comments in config.example.toml for more info.

You will need to rebuild the compiler after changing any configuration option.

Suppressing the ICE file

By default, if rustc encounters an Internal Compiler Error (ICE) it will dump the ICE contents to an ICE file within the current working directory named rustc-ice-<timestamp>-<pid>.txt. If this is not desirable, you can prevent the ICE file from being created with RUSTC_ICE=0.

Getting a backtrace

When you have an ICE (panic in the compiler), you can set RUST_BACKTRACE=1 to get the stack trace of the panic! like in normal Rust programs. IIRC backtraces don't work on MinGW, sorry. If you have trouble or the backtraces are full of unknown, you might want to find some way to use Linux, Mac, or MSVC on Windows.

In the default configuration (without debug set to true), you don't have line numbers enabled, so the backtrace looks like this:

stack backtrace:
   0: std::sys::imp::backtrace::tracing::imp::unwind_backtrace
   1: std::sys_common::backtrace::_print
   2: std::panicking::default_hook::{{closure}}
   3: std::panicking::default_hook
   4: std::panicking::rust_panic_with_hook
   5: std::panicking::begin_panic
   (~~~~ LINES REMOVED BY ME FOR BREVITY ~~~~)
  32: rustc_typeck::check_crate
  33: <std::thread::local::LocalKey<T>>::with
  34: <std::thread::local::LocalKey<T>>::with
  35: rustc::ty::context::TyCtxt::create_and_enter
  36: rustc_driver::driver::compile_input
  37: rustc_driver::run_compiler

If you set debug = true, you will get line numbers for the stack trace. Then the backtrace will look like this:

stack backtrace:
   (~~~~ LINES REMOVED BY ME FOR BREVITY ~~~~)
             at /home/user/rust/compiler/rustc_typeck/src/check/cast.rs:110
   7: rustc_typeck::check::cast::CastCheck::check
             at /home/user/rust/compiler/rustc_typeck/src/check/cast.rs:572
             at /home/user/rust/compiler/rustc_typeck/src/check/cast.rs:460
             at /home/user/rust/compiler/rustc_typeck/src/check/cast.rs:370
   (~~~~ LINES REMOVED BY ME FOR BREVITY ~~~~)
  33: rustc_driver::driver::compile_input
             at /home/user/rust/compiler/rustc_driver/src/driver.rs:1010
             at /home/user/rust/compiler/rustc_driver/src/driver.rs:212
  34: rustc_driver::run_compiler
             at /home/user/rust/compiler/rustc_driver/src/lib.rs:253

-Z flags

The compiler has a bunch of -Z * flags. These are unstable flags that are only enabled on nightly. Many of them are useful for debugging. To get a full listing of -Z flags, use -Z help.

One useful flag is -Z verbose-internals, which generally enables printing more info that could be useful for debugging.

Right below you can find elaborate explainers on a selected few.

Getting a backtrace for errors

If you want to get a backtrace to the point where the compiler emits an error message, you can pass the -Z treat-err-as-bug=n flag, which will make the compiler panic on the nth error. If you leave off =n, the compiler will assume 1 for n and thus panic on the first error it encounters.

For example:

$ cat error.rs
fn main() {
    1 + ();
}
$ rustc +stage1 error.rs
error[E0277]: cannot add `()` to `{integer}`
 --> error.rs:2:7
  |
2 |       1 + ();
  |         ^ no implementation for `{integer} + ()`
  |
  = help: the trait `Add<()>` is not implemented for `{integer}`

error: aborting due to previous error

Now, where does the error above come from?

$ RUST_BACKTRACE=1 rustc +stage1 error.rs -Z treat-err-as-bug
error[E0277]: the trait bound `{integer}: std::ops::Add<()>` is not satisfied
 --> error.rs:2:7
  |
2 |     1 + ();
  |       ^ no implementation for `{integer} + ()`
  |
  = help: the trait `std::ops::Add<()>` is not implemented for `{integer}`

error: internal compiler error: unexpected panic

note: the compiler unexpectedly panicked. this is a bug.

note: we would appreciate a bug report: https://github.com/rust-lang/rust/blob/master/CONTRIBUTING.md#bug-reports

note: rustc 1.24.0-dev running on x86_64-unknown-linux-gnu

note: run with `RUST_BACKTRACE=1` for a backtrace

thread 'rustc' panicked at 'encountered error with `-Z treat_err_as_bug',
/home/user/rust/compiler/rustc_errors/src/lib.rs:411:12
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose
backtrace.
stack backtrace:
  (~~~ IRRELEVANT PART OF BACKTRACE REMOVED BY ME ~~~)
   7: rustc::traits::error_reporting::<impl rustc::infer::InferCtxt<'a, 'tcx>>
             ::report_selection_error
             at /home/user/rust/compiler/rustc_middle/src/traits/error_reporting.rs:823
   8: rustc::traits::error_reporting::<impl rustc::infer::InferCtxt<'a, 'tcx>>
             ::report_fulfillment_errors
             at /home/user/rust/compiler/rustc_middle/src/traits/error_reporting.rs:160
             at /home/user/rust/compiler/rustc_middle/src/traits/error_reporting.rs:112
   9: rustc_typeck::check::FnCtxt::select_obligations_where_possible
             at /home/user/rust/compiler/rustc_typeck/src/check/mod.rs:2192
  (~~~ IRRELEVANT PART OF BACKTRACE REMOVED BY ME ~~~)
  36: rustc_driver::run_compiler
             at /home/user/rust/compiler/rustc_driver/src/lib.rs:253

Cool, now I have a backtrace for the error!

Debugging delayed bugs

The -Z eagerly-emit-delayed-bugs option makes it easy to debug delayed bugs. It turns them into normal errors, i.e. makes them visible. This can be used in combination with -Z treat-err-as-bug to stop at a particular delayed bug and get a backtrace.

Getting the error creation location

-Z track-diagnostics can help figure out where errors are emitted. It uses #[track_caller] for this and prints its location alongside the error:

$ RUST_BACKTRACE=1 rustc +stage1 error.rs -Z track-diagnostics
error[E0277]: cannot add `()` to `{integer}`
 --> src\error.rs:2:7
  |
2 |     1 + ();
  |       ^ no implementation for `{integer} + ()`
-Ztrack-diagnostics: created at compiler/rustc_trait_selection/src/traits/error_reporting/mod.rs:638:39
  |
  = help: the trait `Add<()>` is not implemented for `{integer}`
  = help: the following other types implement trait `Add<Rhs>`:
            <&'a f32 as Add<f32>>
            <&'a f64 as Add<f64>>
            <&'a i128 as Add<i128>>
            <&'a i16 as Add<i16>>
            <&'a i32 as Add<i32>>
            <&'a i64 as Add<i64>>
            <&'a i8 as Add<i8>>
            <&'a isize as Add<isize>>
          and 48 others

For more information about this error, try `rustc --explain E0277`.

This is similar to, but different from, -Z treat-err-as-bug:

  • it will print the locations for all errors emitted
  • it does not require a compiler built with debug symbols
  • you don't have to read through a big stack trace.

Getting logging output

The compiler uses the tracing crate for logging.

For details, see the guide section on tracing.

Narrowing (Bisecting) Regressions

The cargo-bisect-rustc tool can be used as a quick and easy way to find exactly which PR caused a change in rustc behavior. It automatically downloads rustc PR artifacts and tests them against a project you provide until it finds the regression. You can then look at the PR to get more context on why it was changed. See this tutorial on how to use it.

Downloading Artifacts from Rust's CI

The rustup-toolchain-install-master tool by kennytm can be used to download the artifacts produced by Rust's CI for a specific SHA1 -- this basically corresponds to the successful landing of some PR -- and then sets them up for your local use. This also works for artifacts produced by @bors try. This is helpful when you want to examine the resulting build of a PR without doing the build yourself.

#[rustc_*] TEST attributes

The compiler defines a whole lot of internal (perma-unstable) attributes some of which are useful for debugging by dumping extra compiler-internal information. These are prefixed with rustc_ and are gated behind the internal feature rustc_attrs (enabled via e.g. #![feature(rustc_attrs)]).

For a complete and up to date list, see builtin_attrs. More specifically, the ones marked TEST. Here are some notable ones:

  • rustc_def_path — Dumps the def_path_str of an item.
  • rustc_dump_def_parents — Dumps the chain of DefId parents of certain definitions.
  • rustc_dump_item_bounds — Dumps the item_bounds of an item.
  • rustc_dump_predicates — Dumps the predicates_of an item.
  • rustc_dump_vtable
  • rustc_hidden_type_of_opaques — Dumps the hidden type of each opaque type in the crate.
  • rustc_layout — See this section.
  • rustc_object_lifetime_default — Dumps the object lifetime defaults of an item.
  • rustc_outlives — Dumps the implied bounds of an item. More precisely, the inferred_outlives_of an item.
  • rustc_regions — Dumps NLL closure region requirements.
  • rustc_symbol_name — Dumps the mangled & demangled symbol_name of an item.
  • rustc_variances — Dumps the variances of an item.

Right below you can find elaborate explainers on a selected few.

Formatting Graphviz output (.dot files)

Some compiler options for debugging specific features yield graphviz graphs - e.g. the #[rustc_mir(borrowck_graphviz_postflow="suffix.dot")] attribute dumps various borrow-checker dataflow graphs.

These all produce .dot files. To view these files, install graphviz (e.g. apt-get install graphviz) and then run the following commands:

$ dot -T pdf maybe_init_suffix.dot > maybe_init_suffix.pdf
$ firefox maybe_init_suffix.pdf # Or your favorite pdf viewer

Debugging type layouts

The internal attribute #[rustc_layout] can be used to dump the Layout of the type it is attached to. For example:

#![feature(rustc_attrs)]

#[rustc_layout(debug)]
type T<'a> = &'a u32;

Will emit the following:

error: layout_of(&'a u32) = Layout {
    fields: Primitive,
    variants: Single {
        index: 0,
    },
    abi: Scalar(
        Scalar {
            value: Pointer,
            valid_range: 1..=18446744073709551615,
        },
    ),
    largest_niche: Some(
        Niche {
            offset: Size {
                raw: 0,
            },
            scalar: Scalar {
                value: Pointer,
                valid_range: 1..=18446744073709551615,
            },
        },
    ),
    align: AbiAndPrefAlign {
        abi: Align {
            pow2: 3,
        },
        pref: Align {
            pow2: 3,
        },
    },
    size: Size {
        raw: 8,
    },
}
 --> src/lib.rs:4:1
  |
4 | type T<'a> = &'a u32;
  | ^^^^^^^^^^^^^^^^^^^^^

error: aborting due to previous error

Configuring CodeLLDB for debugging rustc

If you are using VSCode, and have edited your config.toml to request debugging level 1 or 2 for the parts of the code you're interested in, then you should be able to use the CodeLLDB extension in VSCode to debug it.

Here is a sample launch.json file, used to run a stage 1 compiler directly from the directory where it is built (it does not have to be "installed"):

// .vscode/launch.json
{
    "version": "0.2.0",
    "configurations": [
      {
        "type": "lldb",
        "request": "launch",
        "name": "Launch",
        "args": [],  // array of string command-line arguments to pass to compiler
        "program": "${workspaceFolder}/build/host/stage1/bin/rustc",
        "windows": {  // applicable if using windows
            "program": "${workspaceFolder}/build/host/stage1/bin/rustc.exe"
        },
        "cwd": "${workspaceFolder}",  // current working directory at program start
        "stopOnEntry": false,
        "sourceLanguages": ["rust"]
      }
    ]
  }

Using tracing to debug the compiler

The compiler has a lot of debug! (or trace!) calls, which print out logging information at many points. These are very useful to at least narrow down the location of a bug if not to find it entirely, or just to orient yourself as to why the compiler is doing a particular thing.

To see the logs, you need to set the RUSTC_LOG environment variable to your log filter. The full syntax of the log filters can be found in the rustdoc of tracing-subscriber.

Function level filters

Lots of functions in rustc are annotated with

#[instrument(level = "debug", skip(self))]
fn foo(&self, bar: Type) {}

which allows you to use

RUSTC_LOG=[foo]

to do the following all at once

  • log all function calls to foo
  • log the arguments (except for those in the skip list)
  • log everything (from anywhere else in the compiler) until the function returns

I don't want everything

Depending on the scope of the function, you may not want to log everything in its body. As an example: the do_mir_borrowck function will dump hundreds of lines even for trivial code being borrowchecked.

Since you can combine all filters, you can add a crate/module path, e.g.

RUSTC_LOG=rustc_borrowck[do_mir_borrowck]

I don't want all calls

If you are compiling libcore, you likely don't want all borrowck dumps, but only one for a specific function. You can filter function calls by their arguments by regexing them.

RUSTC_LOG=[do_mir_borrowck{id=\.\*from_utf8_unchecked\.\*}]

will only give you the logs of borrowchecking from_utf8_unchecked. Note that you will still get a short message per ignored do_mir_borrowck, but none of the things inside those calls. This helps you in looking through the calls that are happening and helps you adjust your regex if you mistyped it.

Query level filters

Every query is automatically tagged with a logging span so that you can display all log messages during the execution of the query. For example, if you want to log everything during type checking:

RUSTC_LOG=[typeck]

The query arguments are included as a tracing field which means that you can filter on the debug display of the arguments. For example, the typeck query has an argument key: LocalDefId of what is being checked. You can use a regex to match on that LocalDefId to log type checking for a specific function:

RUSTC_LOG=[typeck{key=.*name_of_item.*}]

Different queries have different arguments. You can find a list of queries and their arguments in rustc_middle/src/query/mod.rs.

Broad module level filters

You can also use filters similar to the log crate's filters, which will enable everything within a specific module. This is often too verbose and too unstructured, so it is recommended to use function level filters.

Your log filter can be just debug to get all debug! output and higher (e.g., it will also include info!), or path::to::module to get all output (which will include trace!) from a particular module, or path::to::module=debug to get debug! output and higher from a particular module.

For example, to get the debug! output and higher for a specific module, you can run the compiler with RUSTC_LOG=path::to::module=debug rustc my-file.rs. All debug! output will then appear in standard error.

Note that you can use a partial path and the filter will still work. For example, if you want to see info! output from only rustdoc::passes::collect_intra_doc_links, you could use RUSTDOC_LOG=rustdoc::passes::collect_intra_doc_links=info or you could use RUSTDOC_LOG=rustdoc::passes::collect_intra=info.

If you are developing rustdoc, use RUSTDOC_LOG instead. If you are developing Miri, use MIRI_LOG instead. You get the idea :)

See the tracing crate's docs, and specifically the docs for debug! to see the full syntax you can use. (Note: unlike the compiler, the tracing crate and its examples use the RUST_LOG environment variable. rustc, rustdoc, and other tools set custom environment variables.)

Note that unless you use a very strict filter, the logger will emit a lot of output, so use the most specific module(s) you can (comma-separated if multiple). It's typically a good idea to pipe standard error to a file and look at the log output with a text editor.

So, to put it together:

# This puts the output of all debug calls in `rustc_middle/src/traits` into
# standard error, which might fill your console backscroll.
$ RUSTC_LOG=rustc_middle::traits=debug rustc +stage1 my-file.rs

# This puts the output of all debug calls in `rustc_middle/src/traits` in
# `traits-log`, so you can then see it with a text editor.
$ RUSTC_LOG=rustc_middle::traits=debug rustc +stage1 my-file.rs 2>traits-log

# Not recommended! This will show the output of all `debug!` calls
# in the Rust compiler, and there are a *lot* of them, so it will be
# hard to find anything.
$ RUSTC_LOG=debug rustc +stage1 my-file.rs 2>all-log

# This will show the output of all `info!` calls in `rustc_codegen_ssa`.
#
# There's an `info!` statement in `codegen_instance` that outputs
# every function that is codegen'd. This is useful to find out
# which function triggers an LLVM assertion, and this is an `info!`
# log rather than a `debug!` log so it will work on the official
# compilers.
$ RUSTC_LOG=rustc_codegen_ssa=info rustc +stage1 my-file.rs

# This will show all logs in `rustc_codegen_ssa` and `rustc_resolve`.
$ RUSTC_LOG=rustc_codegen_ssa,rustc_resolve rustc +stage1 my-file.rs

# This will show the output of all `info!` calls made by rustdoc
# or any rustc library it calls.
$ RUSTDOC_LOG=info rustdoc +stage1 my-file.rs

# This will only show `debug!` calls made by rustdoc directly,
# not any `rustc*` crate.
$ RUSTDOC_LOG=rustdoc=debug rustdoc +stage1 my-file.rs

Log colors

By default, rustc (and other tools, like rustdoc and Miri) will be smart about when to use ANSI colors in the log output. If they are outputting to a terminal, they will use colors, and if they are outputting to a file or being piped somewhere else, they will not. However, it's hard to read log output in your terminal unless you have a very strict filter, so you may want to pipe the output to a pager like less. But then there won't be any colors, which makes it hard to pick out what you're looking for!

You can override whether to have colors in log output with the RUSTC_LOG_COLOR environment variable (or RUSTDOC_LOG_COLOR for rustdoc, or MIRI_LOG_COLOR for Miri, etc.). There are three options: auto (the default), always, and never. So, if you want to enable colors when piping to less, use something similar to this command:

# The `-R` switch tells less to print ANSI colors without escaping them.
$ RUSTC_LOG=debug RUSTC_LOG_COLOR=always rustc +stage1 ... | less -R

Note that MIRI_LOG_COLOR will only color logs that come from Miri, not logs from rustc functions that Miri calls. Use RUSTC_LOG_COLOR to color logs from rustc.

How to keep or remove debug! and trace! calls from the resulting binary

While calls to error!, warn! and info! are included in every build of the compiler, calls to debug! and trace! are only included in the program if debug-logging=true is turned on in config.toml (it is turned off by default). So if you don't see DEBUG logs, especially if you run the compiler with RUSTC_LOG=rustc rustc some.rs and only see INFO logs, make sure that debug-logging=true is turned on in your config.toml.

Logging etiquette and conventions

Because calls to debug! are removed by default, in most cases, don't worry about the performance of adding "unnecessary" calls to debug! and leaving them in code you commit - they won't slow down the performance of what we ship.

That said, there can also be excessive tracing calls, especially when they are redundant with other calls nearby or in functions called from there. There is no perfect balance to hit here, and it is left to the reviewer's discretion to decide whether to let you leave debug! statements in or to ask you to remove them before merging.

It may be preferable to use trace! over debug! for very noisy logs.

A loosely followed convention is to use #[instrument(level = "debug")] (also see the attribute's documentation) in favour of debug!("foo(...)") at the start of a function foo. Within functions, prefer debug!(?variable.field) over debug!("xyz = {:?}", variable.field) and debug!(bar = ?var.method(arg)) over debug!("bar = {:?}", var.method(arg)). The documentation for this syntax can be found here.
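
As a small, self-contained sketch of these conventions (a standalone program using the tracing crate directly, not actual rustc code; in rustc, output is controlled via RUSTC_LOG rather than an explicit subscriber):

use tracing::{debug, instrument};

#[derive(Debug)]
struct Item {
    name: String,
}

struct Resolver {
    items: Vec<Item>,
}

impl Resolver {
    // Prefer `#[instrument]` over a hand-written `debug!("resolve(...)")`
    // at the start of the function.
    #[instrument(level = "debug", skip(self))]
    fn resolve(&self, index: usize) -> Option<&Item> {
        let found = self.items.get(index);
        // Prefer `debug!(?found)` over `debug!("found = {:?}", found)`.
        debug!(?found);
        found
    }
}

fn main() {
    // Install a subscriber so that `debug!` output is actually printed.
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();

    let resolver = Resolver { items: vec![Item { name: "foo".into() }] };
    resolver.resolve(0);
}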

One thing to be careful of is expensive operations in logs.

If in the module rustc::foo you have a statement

debug!(x = ?random_operation(tcx));

Then, if someone runs a debug rustc with RUSTC_LOG=rustc::foo, random_operation() will run whenever that statement is reached. RUSTC_LOG filters that do not enable this debug statement will not execute random_operation().

This means that you should not put anything too expensive or likely to crash there - that would annoy anyone who wants to use logging for that module. No one will notice the problem until someone tries to use logging to find another bug.

Profiling the compiler

This section talks about how to profile the compiler and find out where it spends its time.

Depending on what you're trying to measure, there are several different approaches:

  • If you want to see if a PR improves or regresses compiler performance, see the rustc-perf chapter for requesting a benchmarking run.

  • If you want a medium-to-high level overview of where rustc is spending its time:

    • The -Z self-profile flag and measureme tools offer a query-based approach to profiling. See their docs for more information.
  • If you want function level performance data or even just more details than the above approaches:

    • Consider using a native code profiler such as perf
    • or tracy for a nanosecond-precision, full-featured graphical interface.
  • If you want a nice visual representation of the compile times of your crate graph, you can use cargo's --timings flag, e.g. cargo build --timings. You can use this flag on the compiler itself with CARGOFLAGS="--timings" ./x build

  • If you want to profile memory usage, you can use various tools depending on what operating system you are using.

Optimizing rustc's bootstrap times with cargo-llvm-lines

Using cargo-llvm-lines you can count the number of lines of LLVM IR across all instantiations of a generic function. Since most of the time compiling rustc is spent in LLVM, the idea is that by reducing the amount of code passed to LLVM, compiling rustc gets faster.

To use cargo-llvm-lines together with somewhat custom rustc build process, you can use -C save-temps to obtain required LLVM IR. The option preserves temporary work products created during compilation. Among those is LLVM IR that represents an input to the optimization pipeline; ideal for our purposes. It is stored in files with *.no-opt.bc extension in LLVM bitcode format.

Example usage:

cargo install cargo-llvm-lines
# On a normal crate you could now run `cargo llvm-lines`, but `x` isn't normal :P

# Do a clean before every run, to not mix in the results from previous runs.
./x clean
env RUSTFLAGS=-Csave-temps ./x build --stage 0 compiler/rustc

# Single crate, e.g., rustc_middle. (Relies on the glob support of your shell.)
# Convert unoptimized LLVM bitcode into a human readable LLVM assembly accepted by cargo-llvm-lines.
for f in build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps/rustc_middle-*.no-opt.bc; do
  ./build/x86_64-unknown-linux-gnu/llvm/bin/llvm-dis "$f"
done
cargo llvm-lines --files ./build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps/rustc_middle-*.ll > llvm-lines-middle.txt

# Specify all crates of the compiler.
for f in build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps/*.no-opt.bc; do
  ./build/x86_64-unknown-linux-gnu/llvm/bin/llvm-dis "$f"
done
cargo llvm-lines --files ./build/x86_64-unknown-linux-gnu/stage0-rustc/x86_64-unknown-linux-gnu/release/deps/*.ll > llvm-lines.txt

Example output for the compiler:

  Lines            Copies          Function name
  -----            ------          -------------
  45207720 (100%)  1583774 (100%)  (TOTAL)
   2102350 (4.7%)   146650 (9.3%)  core::ptr::drop_in_place
    615080 (1.4%)     8392 (0.5%)  std::thread::local::LocalKey<T>::try_with
    594296 (1.3%)     1780 (0.1%)  hashbrown::raw::RawTable<T>::rehash_in_place
    592071 (1.3%)     9691 (0.6%)  core::option::Option<T>::map
    528172 (1.2%)     5741 (0.4%)  core::alloc::layout::Layout::array
    466854 (1.0%)     8863 (0.6%)  core::ptr::swap_nonoverlapping_one
    412736 (0.9%)     1780 (0.1%)  hashbrown::raw::RawTable<T>::resize
    367776 (0.8%)     2554 (0.2%)  alloc::raw_vec::RawVec<T,A>::grow_amortized
    367507 (0.8%)      643 (0.0%)  rustc_query_system::dep_graph::graph::DepGraph<K>::with_task_impl
    355882 (0.8%)     6332 (0.4%)  alloc::alloc::box_free
    354556 (0.8%)    14213 (0.9%)  core::ptr::write
    354361 (0.8%)     3590 (0.2%)  core::iter::traits::iterator::Iterator::fold
    347761 (0.8%)     3873 (0.2%)  rustc_middle::ty::context::tls::set_tlv
    337534 (0.7%)     2377 (0.2%)  alloc::raw_vec::RawVec<T,A>::allocate_in
    331690 (0.7%)     3192 (0.2%)  hashbrown::raw::RawTable<T>::find
    328756 (0.7%)     3978 (0.3%)  rustc_middle::ty::context::tls::with_context_opt
    326903 (0.7%)      642 (0.0%)  rustc_query_system::query::plumbing::try_execute_query

Since this doesn't seem to work with incremental compilation or ./x check, you will be compiling rustc a lot. I recommend changing a few settings in config.toml to make it bearable:

[rust]
# A debug build takes _a third_ as long on my machine,
# but compiling more than stage0 rustc becomes unbearably slow.
optimize = false

# We can't use incremental anyway, so we disable it for a little speed boost.
incremental = false
# We won't be running it, so no point in compiling debug checks.
debug = false

# Using a single codegen unit gives less output, but is slower to compile.
codegen-units = 0  # num_cpus

The llvm-lines output is affected by several options. optimize = false increases it from 2.1GB to 3.5GB and codegen-units = 0 to 4.1GB.

MIR optimizations have little impact. Compared to the default RUSTFLAGS="-Z mir-opt-level=1", level 0 adds 0.3GB and level 2 removes 0.2GB. As of July 2022, inlining happens in LLVM and GCC codegen backends, missing only in the Cranelift one.

Profiling with perf

This is a guide for how to profile rustc with perf.

Initial steps

  • Get a clean checkout of rust-lang/master, or whatever it is you want to profile.
  • Set the following settings in your config.toml:
    • debuginfo-level = 1 - enables line debuginfo
    • jemalloc = false - lets you do memory use profiling with valgrind
    • leave everything else the defaults
  • Run ./x build to get a full build
  • Make a rustup toolchain pointing to that result
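
For that last step, linking the freshly built sysroot as a custom toolchain usually looks like this (just a sketch; the target triple and the toolchain name stage1 are examples, adjust them to your build):

rustup toolchain link stage1 build/x86_64-unknown-linux-gnu/stage1
rustc +stage1 --version   # sanity-check that the linked toolchain works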

Gathering a perf profile

perf is an excellent tool on linux that can be used to gather and analyze all kinds of information. Mostly it is used to figure out where a program spends its time. It can also be used for other sorts of events, though, like cache misses and so forth.

The basics

The basic perf command is this:

perf record -F99 --call-graph dwarf XXX

The -F99 tells perf to sample at 99 Hz, which avoids generating too much data for longer runs (why 99 Hz you ask? It is often chosen because it is unlikely to be in lockstep with other periodic activity). The --call-graph dwarf tells perf to get call-graph information from debuginfo, which is accurate. The XXX is the command you want to profile. So, for example, you might do:

perf record -F99 --call-graph dwarf cargo +<toolchain> rustc

to run cargo -- here <toolchain> should be the name of the toolchain you made in the beginning. But there are some things to be aware of:

  • You probably don't want to profile the time spent building dependencies. So something like cargo build; cargo clean -p $C may be helpful (where $C is the crate name)
    • Though usually I just do touch src/lib.rs and rebuild instead. =)
  • You probably don't want incremental messing about with your profile. So something like CARGO_INCREMENTAL=0 can be helpful.
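
Putting those pieces together, one possible session looks like this (a sketch; the touched file and build command are placeholders for whatever crate you are profiling):

cargo +<toolchain> build        # build dependencies once, without profiling
touch src/lib.rs                # force only the leaf crate to be rebuilt
CARGO_INCREMENTAL=0 perf record -F99 --call-graph dwarf cargo +<toolchain> build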

Gathering a perf profile from a perf.rust-lang.org test

Often we want to analyze a specific test from perf.rust-lang.org. The easiest way to do that is to use the rustc-perf benchmarking suite; this approach is described here.

Instead of using the benchmark suite CLI, you can also profile the benchmarks manually. First, you need to clone the rustc-perf repository:

$ git clone https://github.com/rust-lang/rustc-perf

and then find the source code of the test that you want to profile. Sources for the tests are found in the collector/compile-benchmarks directory and the collector/runtime-benchmarks directory. So let's go into the directory of a specific test; we'll use clap-rs as an example:

cd collector/compile-benchmarks/clap-3.1.6

In this case, let's say we want to profile the cargo check performance. In that case, I would first run some basic commands to build the dependencies:

# Setup: first clean out any old results and build the dependencies:
cargo +<toolchain> clean
CARGO_INCREMENTAL=0 cargo +<toolchain> check

(Again, <toolchain> should be replaced with the name of the toolchain we made in the first step.)

Next, we want to record the execution time for just the clap-rs crate, running cargo check. I tend to use cargo rustc for this, since it also allows me to add explicit flags, which we'll do later on.

touch src/lib.rs
CARGO_INCREMENTAL=0 perf record -F99 --call-graph dwarf cargo rustc --profile check --lib

Note that final command: it's a doozy! It uses the cargo rustc command, which executes rustc with (potentially) additional options; the --profile check and --lib options specify that we are doing a cargo check execution, and that this is a library (not a binary).

At this point, we can use perf tooling to analyze the results. For example:

perf report

will open up an interactive TUI program. In simple cases, that can be helpful. For more detailed examination, the perf-focus tool can be helpful; it is covered below.

A note of caution. Each of the rustc-perf tests is its own special snowflake. In particular, some of them are not libraries, in which case you would want to do touch src/main.rs and avoid passing --lib. I'm not sure how best to tell which test is which to be honest.

Gathering NLL data

If you want to profile an NLL run, you can just pass extra options to the cargo rustc command, like so:

touch src/lib.rs
CARGO_INCREMENTAL=0 perf record -F99 --call-graph dwarf cargo rustc --profile check --lib -- -Z borrowck=mir

Analyzing a perf profile with perf focus

Once you've gathered a perf profile, we want to get some information about it. For this, I personally use perf focus. It's a kind of simple but useful tool that lets you answer queries like:

  • "how much time was spent in function F" (no matter where it was called from)
  • "how much time was spent in function F when it was called from G"
  • "how much time was spent in function F excluding time spent in G"
  • "what functions does F call and how much time does it spend in them"

To understand how it works, you have to know just a bit about perf. Basically, perf works by sampling your process on a regular basis (or whenever some event occurs). For each sample, perf gathers a backtrace. perf focus lets you write a regular expression that tests which functions appear in that backtrace, and then tells you which percentage of samples had a backtrace that met the regular expression. It's probably easiest to explain by walking through how I would analyze NLL performance.

Installing perf-focus

You can install perf-focus using cargo install:

cargo install perf-focus

Example: How much time is spent in MIR borrowck?

Let's say we've gathered the NLL data for a test. We'd like to know how much time it is spending in the MIR borrow-checker. The "main" function of the MIR borrowck is called do_mir_borrowck, so we can do this command:

$ perf focus '{do_mir_borrowck}'
Matcher    : {do_mir_borrowck}
Matches    : 228
Not Matches: 542
Percentage : 29%

The '{do_mir_borrowck}' argument is called the matcher. It specifies the test to be applied on the backtrace. In this case, the {X} indicates that there must be some function on the backtrace that meets the regular expression X. In this case, that regex is just the name of the function we want (in fact, it's a subset of the name; the full name includes a bunch of other stuff, like the module path). In this mode, perf-focus just prints out the percentage of samples where do_mir_borrowck was on the stack: in this case, 29%.

A note about c++filt. To get the data from perf, perf focus currently executes perf script (perhaps there is a better way...). I've sometimes found that perf script outputs C++ mangled names. This is annoying. You can tell by running perf script | head yourself — if you see names like 5rustc6middle instead of rustc::middle, then you have the same problem. You can solve this by doing:

perf script | c++filt | perf focus --from-stdin ...

This will pipe the output from perf script through c++filt and should mostly convert those names into a more friendly format. The --from-stdin flag to perf focus tells it to get its data from stdin, rather than executing perf focus. We should make this more convenient (at worst, maybe add a c++filt option to perf focus, or just always use it — it's pretty harmless).

Example: How much time does MIR borrowck spend solving traits?

Perhaps we'd like to know how much time MIR borrowck spends in the trait checker. We can ask this using a more complex regex:

$ perf focus '{do_mir_borrowck}..{^rustc::traits}'
Matcher    : {do_mir_borrowck},..{^rustc::traits}
Matches    : 12
Not Matches: 1311
Percentage : 0%

Here we used the .. operator to ask "how often do we have do_mir_borrowck on the stack and then, later, some function whose name begins with rustc::traits?" (basically, code in that module). It turns out the answer is "almost never" — only 12 samples fit that description (if you ever see no samples, that often indicates your query is messed up).

If you're curious, you can find out exactly which samples by using the --print-match option. This will print out the full backtrace for each sample. The | at the front of the line indicates the part that the regular expression matched.
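
For example, reusing the query from above on an already-recorded profile:

perf focus '{do_mir_borrowck}..{^rustc::traits}' --print-match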

Example: Where does MIR borrowck spend its time?

Often we want to do more "explorational" queries. Like, we know that MIR borrowck is 29% of the time, but where does that time get spent? For that, the --tree-callees option is often the best tool. You usually also want to give --tree-min-percent or --tree-max-depth. The result looks like this:

$ perf focus '{do_mir_borrowck}' --tree-callees --tree-min-percent 3
Matcher    : {do_mir_borrowck}
Matches    : 577
Not Matches: 746
Percentage : 43%

Tree
| matched `{do_mir_borrowck}` (43% total, 0% self)
: | rustc_borrowck::nll::compute_regions (20% total, 0% self)
: : | rustc_borrowck::nll::type_check::type_check_internal (13% total, 0% self)
: : : | core::ops::function::FnOnce::call_once (5% total, 0% self)
: : : : | rustc_borrowck::nll::type_check::liveness::generate (5% total, 3% self)
: : : | <rustc_borrowck::nll::type_check::TypeVerifier<'a, 'b, 'tcx> as rustc::mir::visit::Visitor<'tcx>>::visit_mir (3% total, 0% self)
: | rustc::mir::visit::Visitor::visit_mir (8% total, 6% self)
: | <rustc_borrowck::MirBorrowckCtxt<'cx, 'tcx> as rustc_mir_dataflow::DataflowResultsConsumer<'cx, 'tcx>>::visit_statement_entry (5% total, 0% self)
: | rustc_mir_dataflow::do_dataflow (3% total, 0% self)

What happens with --tree-callees is that

  • we find each sample matching the regular expression
  • we look at the code that occurs after the regex match and try to build up a call tree

The --tree-min-percent 3 option says "only show me things that take more than 3% of the time". Without this, the tree often gets really noisy and includes random stuff like the innards of malloc. --tree-max-depth can be useful too; it just limits how many levels we print.

For each line, we display the percent of time in that function altogether ("total") and the percent of time spent in just that function and not some callee of that function (self). Usually "total" is the more interesting number, but not always.

Relative percentages

By default, all percentages in perf-focus are relative to the total program execution. This is useful to help you keep perspective — often as we drill down to find hot spots, we can lose sight of the fact that, in terms of overall program execution, this "hot spot" is actually not important. It also ensures that percentages between different queries are easily compared against one another.

That said, sometimes it's useful to get relative percentages, so perf focus offers a --relative option. In this case, the percentages are computed relative to just the samples that match, rather than to all samples. So for example we could get our percentages relative to the borrowck itself like so:

$ perf focus '{do_mir_borrowck}' --tree-callees --relative --tree-max-depth 1 --tree-min-percent 5
Matcher    : {do_mir_borrowck}
Matches    : 577
Not Matches: 746
Percentage : 100%

Tree
| matched `{do_mir_borrowck}` (100% total, 0% self)
: | rustc_borrowck::nll::compute_regions (47% total, 0% self) [...]
: | rustc::mir::visit::Visitor::visit_mir (19% total, 15% self) [...]
: | <rustc_borrowck::MirBorrowckCtxt<'cx, 'tcx> as rustc_mir_dataflow::DataflowResultsConsumer<'cx, 'tcx>>::visit_statement_entry (13% total, 0% self) [...]
: | rustc_mir_dataflow::do_dataflow (8% total, 1% self) [...]

Here you see that compute_regions came up as "47% total" — that means that 47% of do_mir_borrowck is spent in that function. Before, we saw 20% — that's because do_mir_borrowck itself is only 43% of the total time (and .47 * .43 = .20).

Profiling on Windows

Introducing WPR and WPA

High-level performance analysis (including memory usage) can be performed with the Windows Performance Recorder (WPR) and Windows Performance Analyzer (WPA). As the names suggest, WPR is for recording system statistics (in the form of event trace log, or ETL, files), while WPA is for analyzing these ETL files.

WPR collects system wide statistics, so it won't just record things relevant to rustc but also everything else that's running on the machine. During analysis, we can filter to just the things we find interesting.

These tools are quite powerful but also require a bit of learning before we can successfully profile the Rust compiler.

Here we will explore how to use WPR and WPA for analyzing the Rust compiler as well as provide links to useful "profiles" (i.e., settings files that tweak the defaults for WPR and WPA) that are specifically designed to make analyzing rustc easier.

Installing WPR and WPA

You can install WPR and WPA as part of the Windows Performance Toolkit which itself is an option as part of downloading the Windows Assessment and Deployment Kit (ADK). You can download the ADK installer here. Make sure to select the Windows Performance Toolkit (you don't need to select anything else).

Recording

In order to perform system analysis, you'll first need to record your system with WPR. Open WPR and at the bottom of the window select the "profiles" of the things you want to record. For looking into memory usage of the rustc bootstrap process, we'll want to select the following items:

  • CPU usage
  • VirtualAlloc usage

You might be tempted to record "Heap usage" as well, but this records every single heap allocation and can be very, very expensive. For high-level analysis, it might be best to leave that turned off.

Now we need to get our setup ready to record. For memory usage analysis, it is best to record the stage 2 compiler build using a stage 1 compiler with debug symbols. Having symbols in the compiler we're using to build rustc will aid our analysis greatly by allowing WPA to resolve Rust symbols correctly. Unfortunately, the stage 0 compiler does not have symbols turned on, which is why we'll need to build a stage 1 compiler and then a stage 2 compiler ourselves.

To do this, make sure you have set debuginfo-level = 1 in your config.toml file. This tells rustc to generate debug information which includes stack frames when bootstrapping.
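
Concretely, the relevant part of config.toml might look like this (only the debuginfo-level line matters for this step):

[rust]
debuginfo-level = 1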

Now you can build the stage 1 compiler: x build --stage 1 -i library or however else you want to build the stage 1 compiler.

Now that the stage 1 compiler is built, we can record the stage 2 build. Go back to WPR, click the "start" button and build the stage 2 compiler (e.g., x build --stage=2 -i library). When this process finishes, stop the recording.

Click the Save button and once that process is complete, click the "Open in WPA" button which appears.

Note: The trace file is fairly large so it can take WPA some time to finish opening the file.

Analysis

Now that our ETL file is open in WPA, we can analyze the results. First, we'll want to apply the pre-made "profile" which will put WPA into a state conducive to analyzing rustc bootstrap. Download the profile here. Select the "Profiles" menu at the top, then "apply" and then choose the downloaded profile.

You should see something resembling the following:

WPA with profile applied

Next, we will need to tell WPA to load and process debug symbols so that it can properly demangle the Rust stack traces. To do this, click "Trace" and then choose "Load Symbols". This step can take a while.

Once WPA has loaded symbols for rustc, we can expand the rustc.exe node and begin drilling down into the stack with the largest allocations.

To do that, we'll expand the [Root] node in the "Commit Stack" column and continue expanding until we find interesting stack frames.

Tip: After selecting the node you want to expand, press the right arrow key. This will expand the node and put the selection on the next largest node in the expanded set. You can continue pressing the right arrow key until you reach an interesting frame.

WPA with expanded stack

In this sample, you can see calls through codegen are allocating ~30 GB of memory in total throughout this profile.

Other Analysis Tabs

The profile also includes a few other tabs which can be helpful:

  • System Configuration
    • General information about the system the capture was recorded on.
  • rustc Build Processes
    • A flat list of relevant processes such as rustc.exe, cargo.exe, link.exe etc.
    • Each process lists its command line arguments.
    • Useful for figuring out what a specific rustc process was working on.
  • rustc Build Process Tree
    • Timeline showing when processes started and exited.
  • rustc CPU Analysis
    • Contains charts preconfigured to show hotspots in rustc.
    • These charts are designed to support analyzing where rustc is spending its time.
  • rustc Memory Analysis
    • Contains charts preconfigured to show where rustc is allocating memory.

Profiling with rustc-perf

The Rust benchmark suite provides a comprehensive way of profiling and benchmarking the Rust compiler. You can find instructions on how to use the suite in its manual.

However, using the suite manually can be a bit cumbersome. To make this easier for rustc contributors, the compiler build system (bootstrap) also provides built-in integration with the benchmarking suite, which will download and build the suite for you, build a local compiler toolchain and let you profile it using a simplified command-line interface.

You can use the ./x perf -- <command> [options] command to use this integration.

Note that you need to specify arguments after -- in the x perf command! You will not be able to pass arguments without the double dashes.

You can use normal bootstrap flags for this command, such as --stage 1 or --stage 2, for example to modify the stage of the created sysroot. It might also be useful to configure config.toml to better support profiling, e.g. set rust.debuginfo-level = 1 to add source line information to the built compiler.

x perf currently supports the following commands:

  • benchmark <id>: Benchmark the compiler and store the results under the passed id.
  • compare <baseline> <modified>: Compare the benchmark results of two compilers with the two passed ids (see the example after this list).
  • eprintln: Just run the compiler and capture its stderr output. Note that the compiler normally does not print anything to stderr; you might want to add some eprintln! calls to get any output.
  • samply: Profile the compiler using the samply sampling profiler.
  • cachegrind: Use Cachegrind to generate a detailed simulated trace of the compiler's execution.
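
For example, a simple benchmark-and-compare session with the benchmark and compare commands could look like this (the ids before and after are arbitrary names you pick yourself):

./x perf -- benchmark before
# ...make your changes and let bootstrap rebuild the compiler...
./x perf -- benchmark after
./x perf -- compare before after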

You can find a more detailed description of the profilers in the rustc-perf manual.

You can use the following options for the x perf command, which mirror the corresponding options of the profile_local and bench_local commands that you can use in the suite:

  • --include: Select benchmarks which should be profiled/benchmarked.
  • --profiles: Select profiles (Check, Debug, Opt, Doc) which should be profiled/benchmarked.
  • --scenarios: Select scenarios (Full, IncrFull, IncrPatched, IncrUnchanged) which should be profiled/benchmarked.
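
As an example, the following invocation (the benchmark name syn is hypothetical, pick one from the suite) would profile only check builds of that benchmark with Cachegrind, using full (non-incremental) scenarios:

./x perf -- cachegrind --include syn --profiles Check --scenarios Full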

crates.io Dependencies

The Rust compiler supports building with some dependencies from crates.io. Examples are log and env_logger.

In general, you should avoid adding dependencies to the compiler for several reasons:

  • The dependency may not be of high quality or well-maintained.
  • The dependency may not be using a compatible license.
  • The dependency may have transitive dependencies that have one of the above problems.

Note that there is no official policy for vetting new dependencies to the compiler. Decisions are made on a case-by-case basis, during code review.

Permitted dependencies

The tidy tool has a list of crates that are allowed. To add a dependency that is not already in the compiler, you will need to add it to the list.

Contribution Procedures

Bug reports

While bugs are unfortunate, they're a reality in software. We can't fix what we don't know about, so please report liberally. If you're not sure if something is a bug or not, feel free to file a bug anyway.

If you believe reporting your bug publicly represents a security risk to Rust users, please follow our instructions for reporting security vulnerabilities.

If you're using the nightly channel, please check if the bug exists in the latest toolchain before filing your bug. It might be fixed already.

If you have the chance, before reporting a bug, please search existing issues, as it's possible that someone else has already reported your error. This doesn't always work, and sometimes it's hard to know what to search for, so consider this extra credit. We won't mind if you accidentally file a duplicate report.

Similarly, to help others who encountered the bug find your issue, consider filing an issue with a descriptive title, which contains information that might be unique to it. This can be the language or compiler feature used, the conditions that trigger the bug, or part of the error message if there is any. An example could be: "impossible case reached" on lifetime inference for impl Trait in return position.

Opening an issue is as easy as following this link and filling out the fields in the appropriate provided template.

Bug fixes or "normal" code changes

For most PRs, no special procedures are needed. You can just open a PR, and it will be reviewed, approved, and merged. This includes most bug fixes, refactorings, and other user-invisible changes. The next few sections talk about exceptions to this rule.

Also, note that it is perfectly acceptable to open WIP PRs or GitHub Draft PRs. Some people prefer to do this so they can get feedback along the way or share their code with a collaborator. Others do this so they can utilize the CI to build and test their PR (e.g. when developing on a slow machine).

New features

Rust has strong backwards-compatibility guarantees. Thus, new features can't just be implemented directly in stable Rust. Instead, we have 3 release channels: stable, beta, and nightly.

  • Stable: this is the latest stable release for general usage.
  • Beta: this is the next release (will be stable within 6 weeks).
  • Nightly: follows the master branch of the repo. This is the only channel where unstable, incomplete, or experimental features are usable with feature gates.

See this chapter on implementing new features for more information.

Breaking changes

Breaking changes have a dedicated section in the dev-guide.

Major changes

The compiler team has a special process for large changes, whether or not they cause breakage. This process is called a Major Change Proposal (MCP). MCP is a relatively lightweight mechanism for getting feedback on large changes to the compiler (as opposed to a full RFC or a design meeting with the team).

Examples of things that might require an MCP include major refactorings, changes to important types, important changes to how the compiler does something, or smaller user-facing changes.

When in doubt, ask on zulip. It would be a shame to put a lot of work into a PR that ends up not getting merged! See this document for more info on MCPs.

Performance

Compiler performance is important. We have put a lot of effort over the last few years into gradually improving it.

If you suspect that your change may cause a performance regression (or improvement), you can request a "perf run" (and your reviewer may also request one before approving). This is yet another bot that will compile a collection of benchmarks on a compiler with your changes. The numbers are reported here, and you can see a comparison of your changes against the latest master.

For an introduction to the performance of Rust code in general which would also be useful in rustc development, see The Rust Performance Book.

Pull requests

Pull requests (or PRs for short) are the primary mechanism we use to change Rust. GitHub itself has some great documentation on using the Pull Request feature. We use the "fork and pull" model described here, where contributors push changes to their personal fork and create pull requests to bring those changes into the source repository. We have more info about how to use git when contributing to Rust under the git section.

Advice for potentially large, complex, cross-cutting and/or very domain-specific changes

The compiler reviewers on rotation usually each have areas of the compiler that they know well, but also have areas that they are not very familiar with. If your PR contains changes that are large, complex, cross-cutting and/or highly domain-specific, it becomes very difficult to find a suitable reviewer who is comfortable in reviewing all of the changes in such a PR. This is also true if the changes are not only compiler-specific but also contain changes which fall under the purview of reviewers from other teams, like the standard library team. There's a bot which notifies the relevant teams and pings people who have set up specific alerts based on the files modified.

Before making such changes, you are strongly encouraged to discuss your proposed changes with the compiler team beforehand (and with other teams that the changes would require approval from), and work with the compiler team to see if we can help you break down a large potentially unreviewable PR into a series of smaller more individually reviewable PRs.

You can communicate with the compiler team by creating a #t-compiler thread on zulip to discuss your proposed changes.

Communicating with the compiler team beforehand helps in several ways:

  1. It increases the likelihood of your PRs being reviewed in a timely manner.
    • We can help you identify suitable reviewers before you open actual PRs, or help find advisors and liaisons to help you navigate the change procedures, or help with running try-jobs, perf runs and crater runs as suitable.
  2. It helps the compiler team track your changes.
  3. The compiler team can perform vibe checks on your changes early and often, to see if the direction of the changes aligns with what the compiler team prefers to see.
  4. It helps to avoid situations where you may have invested significant time and effort into large changes that the compiler team might not be willing to accept, or finding out very late that the changes are in a direction that the compiler team disagrees with.

r?

All pull requests are reviewed by another person. We have a bot, @rustbot, that will automatically assign a random person to review your request based on which files you changed.

If you want to request that a specific person reviews your pull request, you can add an r? to the pull request description or in a comment. For example, if you want to request a review from @awesome-reviewer, add

r? @awesome-reviewer

to the end of the pull request description, and @rustbot will assign them instead of a random person. This is entirely optional.

You can also assign a random reviewer from a specific team by writing r? rust-lang/groupname. As an example, if you were making a diagnostics change, then you could get a reviewer from the diagnostics team by adding:

r? rust-lang/diagnostics

For a full list of possible groupnames, check the adhoc_groups section at the triagebot.toml config file, or the list of teams in the rust-lang teams database.

Waiting for reviews

NOTE

Pull request reviewers are often working at capacity, and many of them are contributing on a volunteer basis. In order to minimize review delays, pull request authors and assigned reviewers should ensure that the review labels (S-waiting-on-review and S-waiting-on-author) stay updated, invoking these commands when appropriate:

  • @rustbot author: the review is finished, and the PR author should check the comments and take action accordingly.

  • @rustbot review: the author is ready for a review, and this PR will be queued again in the reviewer's queue.

Please note that the reviewers are humans, who for the most part work on rustc in their free time. This means that they can take some time to respond and review your PR. It also means that reviewers can miss some PRs that are assigned to them.

To try to move PRs forward, the Triage WG regularly goes through all PRs that are waiting for review and haven't been discussed for at least 2 weeks. If you don't get a review within 2 weeks, feel free to ask the Triage WG on Zulip (#t-release/triage). They have knowledge of when to ping, who might be on vacation, etc.

The reviewer may request some changes using the GitHub code review interface. They may also request special procedures for some PRs. See Crater and Breaking Changes chapters for some examples of such procedures.

CI

In addition to being reviewed by a human, pull requests are automatically tested, thanks to continuous integration (CI). Basically, every time you open and update a pull request, CI builds the compiler and tests it against the compiler test suite, and also performs other tests such as checking that your pull request is in compliance with Rust's style guidelines.

Running continuous integration tests allows PR authors to catch mistakes early without going through a first review cycle, and also helps reviewers stay aware of the status of a particular pull request.

Rust has plenty of CI capacity, and you should never have to worry about wasting computational resources each time you push a change. It is also perfectly fine (and even encouraged!) to use the CI to test your changes if it can help your productivity. In particular, we don't recommend running the full ./x test suite locally, since it takes a very long time to execute.

r+

After someone has reviewed your pull request, they will leave an annotation on the pull request with an r+. It will look something like this:

@bors r+

This tells @bors, our lovable integration bot, that your pull request has been approved. The PR then enters the merge queue, where @bors will run all the tests on every platform we support. If it all works out, @bors will merge your code into master and close the pull request.

Depending on the scale of the change, you may see a slightly different form of r+:

@bors r+ rollup

The additional rollup tells @bors that this change should always be "rolled up". Changes that are rolled up are tested and merged alongside other PRs, to speed the process up. Typically only small changes that are expected not to conflict with one another are marked as "always roll up".

Be patient; this can take a while and the queue can sometimes be long. PRs are never merged by hand.

Opening a PR

Are you now ready to file a pull request? Great! Here are a few points you should be aware of.

All pull requests should be filed against the master branch, unless you know for sure that you should target a different branch.

Make sure your pull request is in compliance with Rust's style guidelines by running

$ ./x test tidy --bless

We recommend making this check before every pull request (and every new commit in a pull request); you can add git hooks before every push to make sure you never forget to make this check. The CI will also run tidy and will fail if tidy fails.
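
If you want such a hook, a minimal hand-written pre-push hook (saved as .git/hooks/pre-push and made executable) might look like this sketch:

#!/bin/sh
# Run tidy before every push; a non-zero exit aborts the push.
exec ./x test tidy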

Rust follows a no merge-commit policy, meaning, when you encounter merge conflicts you are expected to always rebase instead of merging. E.g. always use rebase when bringing the latest changes from the master branch to your feature branch. If your PR contains merge commits, it will get marked as has-merge-commits. Once you have removed the merge commits, e.g., through an interactive rebase, you should remove the label again:

@rustbot label -has-merge-commits

See this chapter for more details.

If you encounter merge conflicts or when a reviewer asks you to perform some changes, your PR will get marked as S-waiting-on-author. When you resolve them, you should use @rustbot to mark it as S-waiting-on-review:

@rustbot ready

GitHub allows closing issues using keywords. This feature should be used to keep the issue tracker tidy. However, it is generally preferred to put the "closes #123" text in the PR description rather than in a commit message; particularly during rebasing, citing the issue number in the commit can "spam" the issue in question.

However, if your PR fixes a stable-to-beta or stable-to-stable regression and has been accepted for a beta and/or stable backport (i.e., it is marked beta-accepted and/or stable-accepted), please do not use any such keywords since we don't want the corresponding issue to get auto-closed once the fix lands on master. Please update the PR description while still mentioning the issue somewhere. For example, you could write Fixes (after beta backport) #NNN.

As for further actions, please keep a sharp look-out for a PR whose title begins with [beta] or [stable] and which backports the PR in question. When that one gets merged, the relevant issue can be closed. The closing comment should mention all PRs that were involved. If you don't have the permissions to close the issue, please leave a comment on the original PR asking the reviewer to close it for you.

Reverting a PR

When a PR leads to a miscompilation, a significant performance regression, or another critical issue, we may want to revert that PR with a regression test case. You can also check out the revert policy on Forge docs (which is mainly targeted at reviewers, but contains useful info for PR authors too).

If the PR contains huge changes, it can be challenging to revert, making it harder to review incremental fixes in subsequent updates. Or if certain code in that PR is heavily depended upon by subsequent PRs, reverting it can become difficult.

In such cases, we can identify the problematic code and disable it for some input, as shown in #128271.

For MIR optimizations, we can also use the -Zunsound-mir-opt option to gate the mir-opt, as shown in #132356.

External dependencies

This section has moved to "Using External Repositories".

Writing documentation

Documentation improvements are very welcome. The source of doc.rust-lang.org is located in src/doc in the tree, and standard API documentation is generated from the source code itself (e.g. library/std/src/lib.rs). Documentation pull requests function in the same way as other pull requests.

To find documentation-related issues, sort by the A-docs label.

You can find documentation style guidelines in RFC 1574.

To build the standard library documentation, use x doc --stage 0 library --open. To build the documentation for a book (e.g. the unstable book), use x doc src/doc/unstable-book. Results should appear in build/host/doc, as well as automatically open in your default browser. See Building Documentation for more information.

You can also use rustdoc directly to check small fixes. For example, rustdoc src/doc/reference.md will render reference to doc/reference.html. The CSS might be messed up, but you can verify that the HTML is right.

Contributing to rustc-dev-guide

Contributions to the rustc-dev-guide are always welcome, and can be made directly at the rust-lang/rustc-dev-guide repo. The issue tracker in that repo is also a great way to find things that need doing. There are issues for beginners and advanced compiler devs alike!

Just a few things to keep in mind:

  • Please try to avoid overly long lines and use semantic line breaks (where you break the line after each sentence). There is no strict limit on line lengths; let the sentence or part of the sentence flow to its proper end on the same line.

  • When contributing text to the guide, please contextualize the information with some time period and/or a reason so that the reader knows how much to trust or mistrust the information. Aim to provide a reasonable amount of context, possibly including but not limited to:

    • A reason for why the data may be out of date other than "change", as change is a constant across the project.

    • The date the comment was added, e.g. instead of writing "Currently, ..." or "As of now, ...", consider adding the date, in one of the following formats:

      • Jan 2021
      • January 2021
      • jan 2021
      • january 2021

      There is a CI action (in ~/.github/workflows/date-check.yml) that generates a monthly issue listing the dates that are over 6 months old (example).

      For the action to pick the date, add a special annotation before specifying the date:

      <!-- date-check --> Sep 2024
      

      Example:

      As of <!-- date-check --> Sep 2024, the foo did the bar.
      

      For cases where the date should not be part of the visible rendered output, use the following instead:

      <!-- date-check: Sep 2024 -->
      
    • A link to a relevant WG, tracking issue, rustc rustdoc page, or similar, that may provide further explanation for the change process or a way to verify that the information is not outdated.

  • If a text grows rather long (more than a few page scrolls) or complicated (more than four subsections), it might benefit from having a Table of Contents at the beginning, which you can auto-generate by including the <!-- toc --> marker at the top.

Issue triage

Sometimes, an issue will stay open, even though the bug has been fixed. And sometimes, the original bug may go stale because something has changed in the meantime.

It can be helpful to go through older bug reports and make sure that they are still valid. Load up an older issue, double check that it's still true, and leave a comment letting us know if it is or is not. The least recently updated sort is good for finding issues like this.

Thanks to @rustbot, anyone can help triage issues by adding appropriate labels to issues that haven't been triaged yet:

Labels          Color           Description
A-              Yellow          The area of the project an issue relates to.
B-              Magenta         Issues which are blockers.
beta-           Dark Blue       Tracks changes which need to be backported to beta.
C-              Light Purple    The category of an issue.
D-              Mossy Green     Issues for diagnostics.
E-              Green           The experience level necessary to fix an issue.
F-              Peach           Issues for nightly features.
I-              Red             The importance of the issue.
I-*-nominated   Red             The issue has been nominated for discussion at the next meeting of the corresponding team.
I-prioritize    Red             The issue has been nominated for prioritization by the team tagged with a T-prefixed label.
L-              Teal            The relevant lint.
metabug         Purple          Bugs that collect other bugs.
O-              Purple Grey     The operating system or platform that the issue is specific to.
P-              Orange          The issue priority. These labels can be assigned by anyone that understands the issue and is able to prioritize it, and remove the I-prioritize label.
regression-     Pink            Tracks regressions from a stable release.
relnotes        Light Orange    Changes that should be documented in the release notes of the next release.
S-              Gray            Tracks the status of pull requests.
S-tracking-     Steel Blue      Tracks the status of tracking issues.
stable-         Dark Blue       Tracks changes which need to be backported to stable in anticipation of a point release.
T-              Blue            Denotes which team the issue belongs to.
WG-             Green           Denotes which working group the issue belongs to.

Rfcbot labels

rfcbot uses its own labels for tracking the process of coordinating asynchronous decisions, such as approving or rejecting a change. This is used for RFCs, issues, and pull requests.

Labels                          Color           Description
proposed-final-comment-period   Gray            Currently awaiting signoff of all team members in order to enter the final comment period.
disposition-merge               Green           Indicates the intent is to merge the change.
disposition-close               Red             Indicates the intent is to not accept the change and close it.
disposition-postpone            Gray            Indicates the intent is to not accept the change at this time and postpone it to a later date.
final-comment-period            Blue            Currently soliciting final comments before merging or closing.
finished-final-comment-period   Light Yellow    The final comment period has concluded, and the issue will be merged or closed.
postponed                       Yellow          The issue has been postponed.
closed                          Red             The issue has been rejected.
to-announce                     Gray            Issues that have finished their final-comment-period and should be publicly announced. Note: the rust-lang/rust repository uses this label differently, to announce issues at the triage meetings.

Helpful Links and Information

This section has moved to the "About this guide" chapter.

About the compiler team

rustc is maintained by the Rust compiler team. The people who belong to this team collectively work to track regressions and implement new features. Members of the Rust compiler team are people who have made significant contributions to rustc and its design.

Discussion

Currently the compiler team chats in Zulip:

  • Team chat occurs in the t-compiler stream on the Zulip instance
  • There are also a number of other associated Zulip streams, such as t-compiler/help, where people can ask for help with rustc development, or t-compiler/meetings, where the team holds their weekly triage and steering meetings.

Reviewers

If you're interested in figuring out who can answer questions about a particular part of the compiler, or you'd just like to know who works on what, check out triagebot.toml's assign section. It contains a listing of the various parts of the compiler and a list of people who are reviewers of each part.

Rust compiler meeting

The compiler team has a weekly meeting where we do triage and try to generally stay on top of new bugs, regressions, and discuss important things in general. They are held on Zulip. It works roughly as follows:

  • Announcements, MCPs/FCPs, and WG-check-ins: We share some announcements with the rest of the team about important things we want everyone to be aware of. We also share the status of MCPs and FCPs and we use the opportunity to have a couple of WGs giving us an update about their work.
  • Check for beta and stable nominations: These are nominations of things to backport to beta and stable respectively. We then look for new cases where the compiler broke previously working code in the wild. Regressions are important issues to fix, so it's likely that they are tagged as P-critical or P-high; the major exception would be bug fixes (though even there we often aim to give warnings first).
  • Review P-critical and P-high bugs: P-critical and P-high bugs are those that are sufficiently important for us to actively track progress. P-critical and P-high bugs should ideally always have an assignee.
  • Check S-waiting-on-team and I-nominated issues: These are issues where feedback from the team is desired.
  • Look over the performance triage report: We check for PRs that made the performance worse and try to decide if it's worth reverting the performance regression or if the regression can be addressed in a future PR.

The meeting currently takes place on Thursdays at 10am Boston time (UTC-4 typically, but daylight savings time sometimes makes things complicated).

Team membership

Membership in the Rust team is typically offered when someone has been making significant contributions to the compiler for some time. Membership is both a recognition but also an obligation: compiler team members are generally expected to help with upkeep as well as doing reviews and other work.

If you are interested in becoming a compiler team member, the first thing to do is to start fixing some bugs, or get involved in a working group. One good way to find bugs is to look for open issues tagged with E-easy or E-mentor.

You can also dig through the graveyard of PRs that were closed due to inactivity, some of them may contain work that is still useful - refer to the associated issues, if any - and only needs some finishing touches for which the original author didn't have time.

r+ rights

Once you have made a number of individual PRs to rustc, we will often offer r+ privileges. This means that you have the right to instruct "bors" (the robot that manages which PRs get landed into rustc) to merge a PR (here are some instructions for how to talk to bors).

The guidelines for reviewers are as follows:

  • You are always welcome to review any PR, regardless of who it is assigned to. However, do not r+ PRs unless:
    • You are confident in that part of the code.
    • You are confident that nobody else wants to review it first.
      • For example, sometimes people will express a desire to review a PR before it lands, perhaps because it touches a particularly sensitive part of the code.
  • Always be polite when reviewing: you are a representative of the Rust project, so it is expected that you will go above and beyond when it comes to the Code of Conduct.

Reviewer rotation

Once you have r+ rights, you can also be added to the reviewer rotation. triagebot is the bot that automatically assigns incoming PRs to reviewers. If you are added, you will be randomly selected to review PRs. If you find you are assigned a PR that you don't feel comfortable reviewing, you can also leave a comment like r? @so-and-so to assign to someone else — if you don't know who to request, just write r? @nikomatsakis for reassignment and @nikomatsakis will pick someone for you.

Getting on the reviewer rotation is much appreciated as it lowers the review burden for all of us! However, if you don't have time to give people timely feedback on their PRs, it may be better that you don't get on the list.

Full team membership

Full team membership is typically extended once someone has made many contributions to the Rust compiler over time, ideally (but not necessarily) to multiple areas. Sometimes this might be implementing a new feature, but it is also important — perhaps more important! — to have time and willingness to help out with general upkeep such as bugfixes, tracking regressions, and other less glamorous work.

Using Git

The Rust project uses Git to manage its source code. In order to contribute, you'll need some familiarity with its features so that your changes can be incorporated into the compiler.

The goal of this page is to cover some of the more common questions and problems new contributors face. Although some Git basics will be covered here, if you find that this is still a little too fast for you, it might make sense to first read some introductions to Git, such as the Beginner and Getting started sections of this tutorial from Atlassian. GitHub also provides documentation and guides for beginners, or you can consult the more in depth book from Git.

This guide is incomplete. If you run into trouble with git that this page doesn't help with, please open an issue so we can document how to fix it.

Prerequisites

We'll assume that you've installed Git, forked rust-lang/rust, and cloned the forked repo to your PC. We'll use the command line interface to interact with Git; there are also a number of GUIs and IDE integrations that can generally do the same things.

If you've cloned your fork, then you will be able to reference it with origin in your local repo. It may be helpful to also set up a remote for the official rust-lang/rust repo via

git remote add upstream https://github.com/rust-lang/rust.git

if you're using HTTPS, or

git remote add upstream git@github.com:rust-lang/rust.git

if you're using SSH.

NOTE: This page is dedicated to workflows for rust-lang/rust, but will likely be useful when contributing to other repositories in the Rust project.

Standard Process

Below is the normal procedure that you're likely to use for most minor changes and PRs:

  1. Ensure that you're making your changes on top of master: git checkout master.
  2. Get the latest changes from the Rust repo: git pull upstream master --ff-only. (see No-Merge Policy for more info about this).
  3. Make a new branch for your change: git checkout -b issue-12345-fix.
  4. Make some changes to the repo and test them.
  5. Stage your changes via git add src/changed/file.rs src/another/change.rs and then commit them with git commit. Of course, making intermediate commits may be a good idea as well. Avoid git add ., as it makes it too easy to unintentionally commit changes that should not be committed, such as submodule updates. You can use git status to check if there are any files you forgot to stage.
  6. Push your changes to your fork: git push --set-upstream origin issue-12345-fix (After adding commits, you can use git push and after rebasing or pulling-and-rebasing, you can use git push --force-with-lease).
  7. Open a PR from your fork to rust-lang/rust's master branch.
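
Condensed into a single session, those steps look roughly like this (branch and file names are just examples):

git checkout master
git pull upstream master --ff-only
git checkout -b issue-12345-fix
# ...edit, build, and test your change...
git add src/changed/file.rs src/another/change.rs
git commit
git push --set-upstream origin issue-12345-fix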

If you end up needing to rebase and are hitting conflicts, see Rebasing. If you want to track upstream while working on long-running feature/issue, see Keeping things up to date.

If your reviewer requests changes, the procedure for those changes looks much the same, with some steps skipped:

  1. Ensure that you're making changes to the most recent version of your code: git checkout issue-12345-fix.
  2. Make, stage, and commit your additional changes just like before.
  3. Push those changes to your fork: git push.

Troubleshooting git issues

You don't need to clone rust-lang/rust from scratch if it's out of date! Even if you think you've messed it up beyond repair, there are ways to fix the git state that don't require downloading the whole repository again. Here are some common issues you might run into:

I made a merge commit by accident.

Git has two ways to update your branch with the newest changes: merging and rebasing. Rust uses rebasing. If you make a merge commit, it's not too hard to fix: git rebase -i upstream/master.

See Rebasing for more about rebasing.

I deleted my fork on GitHub!

This is not a problem from git's perspective. If you run git remote -v, it will say something like this:

$ git remote -v
origin  git@github.com:jyn514/rust.git (fetch)
origin  git@github.com:jyn514/rust.git (push)
upstream        https://github.com/rust-lang/rust (fetch)
upstream        https://github.com/rust-lang/rust (push)

If you renamed your fork, you can change the URL like this:

git remote set-url origin <URL>

where the <URL> is your new fork.

I changed a submodule by accident

Usually people notice this when rustbot posts a comment on GitHub that cargo has been modified:

rustbot submodule comment

You might also notice conflicts in the web UI:

conflict in src/tools/cargo

The most common cause is that you rebased after a change and ran git add . without first running x to update the submodules. Alternatively, you might have run cargo fmt instead of x fmt and modified files in a submodule, then committed the changes.

To fix it, do the following things:

  1. See which commit has the accidental changes: git log --stat -n1 src/tools/cargo
  2. Revert the changes to that commit: git checkout <my-commit>~ src/tools/cargo. Type ~ literally but replace <my-commit> with the output from step 1.
  3. Tell git to commit the changes: git commit --fixup <my-commit>
  4. Repeat steps 1-3 for all the submodules you modified.
    • If you modified the submodule in several different commits, you will need to repeat steps 1-3 for each commit you modified. You'll know when to stop when the git log command shows a commit that's not authored by you.
  5. Squash your changes into the existing commits: git rebase --autosquash -i upstream/master
  6. Push your changes.
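
As a concrete sketch of steps 1-3 and 5-6 for a single accidental change (the commit hash abc1234 is a placeholder for whatever step 1 prints):

git log --stat -n1 src/tools/cargo          # 1. find the commit with the accidental change
git checkout abc1234~ src/tools/cargo       # 2. restore the submodule from that commit's parent
git commit --fixup abc1234                  # 3. record the fix as a fixup commit
git rebase --autosquash -i upstream/master  # 5. squash the fixup into the original commit
git push --force-with-lease                 # 6. update your branch on GitHub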

I see "error: cannot rebase" when I try to rebase

These are two common errors to see when rebasing:

error: cannot rebase: Your index contains uncommitted changes.
error: Please commit or stash them.
error: cannot rebase: You have unstaged changes.
error: Please commit or stash them.

(See https://git-scm.com/book/en/v2/Getting-Started-What-is-Git%3F#_the_three_states for the difference between the two.)

This means you have made changes since the last time you made a commit. To be able to rebase, either commit your changes, or make a temporary commit called a "stash" to have them still not be committed when you finish rebasing. You may want to configure git to make this "stash" automatically, which will prevent the "cannot rebase" error in nearly all cases:

git config --global rebase.autostash true

See https://git-scm.com/book/en/v2/Git-Tools-Stashing-and-Cleaning for more info about stashing.

I see 'Untracked Files: src/stdarch'?

This is left over from the move to the library/ directory. Unfortunately, git rebase does not follow renames for submodules, so you have to delete the directory yourself:

rm -r src/stdarch

I see <<< HEAD?

You were probably in the middle of a rebase or merge conflict. See Conflicts for how to fix the conflict. If you don't care about the changes and just want to get a clean copy of the repository back, you can use git reset:

# WARNING: this throws out any local changes you've made! Consider resolving the conflicts instead.
git reset --hard master

failed to push some refs

git push will not work properly and say something like this:

 ! [rejected]        issue-xxxxx -> issue-xxxxx (non-fast-forward)
error: failed to push some refs to 'https://github.com/username/rust.git'
hint: Updates were rejected because the tip of your current branch is behind
hint: its remote counterpart. Integrate the remote changes (e.g.
hint: 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.

The advice this gives is incorrect! Because of Rust's "no-merge" policy the merge commit created by git pull will not be allowed in the final PR, in addition to defeating the point of the rebase! Use git push --force-with-lease instead.

Git is trying to rebase commits I didn't write?

If you see many commits in your rebase list, or merge commits, or commits by other people that you didn't write, it likely means you're trying to rebase over the wrong branch. For example, you may have a rust-lang/rust remote upstream, but ran git rebase origin/master instead of git rebase upstream/master. The fix is to abort the rebase and use the correct branch instead:

git rebase --abort
git rebase -i upstream/master

Interactive rebase over the wrong branch

Quick note about submodules

When updating your local repository with git pull, you may notice that sometimes Git says you have modified some files that you have never edited. For example, running git status gives you something like (note the new commits mention):

On branch master
Your branch is up to date with 'origin/master'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   src/llvm-project (new commits)
	modified:   src/tools/cargo (new commits)

no changes added to commit (use "git add" and/or "git commit -a")

These changes are not changes to files: they are changes to submodules (more on this later). To get rid of those, run ./x --help, which will automatically update the submodules.

Some submodules are not actually needed; for example, src/llvm-project doesn't need to be checked out if you're using download-ci-llvm. To avoid having to keep fetching its history, you can use git submodule deinit -f src/llvm-project, which will also avoid it showing as modified again.

Rebasing and Conflicts

When you edit your code locally, you are making changes to the version of rust-lang/rust that existed when you created your feature branch. As such, when you submit your PR it is possible that some of the changes that have been made to rust-lang/rust since then are in conflict with the changes you've made. When this happens, you need to resolve the conflicts before your changes can be merged. To do that, you need to rebase your work on top of rust-lang/rust.

Rebasing

To rebase your feature branch on top of the newest version of the master branch of rust-lang/rust, checkout your branch, and then run this command:

git pull --rebase https://github.com/rust-lang/rust.git master

If you are met with the following error:

error: cannot pull with rebase: Your index contains uncommitted changes.
error: please commit or stash them.

it means that you have some uncommitted work in your working tree. In that case, run git stash before rebasing, and then git stash pop after you have rebased and fixed all conflicts.

When you rebase a branch on master, all the changes on your branch are reapplied to the most recent version of master. In other words, Git tries to pretend that the changes you made to the old version of master were instead made to the new version of master. During this process, you should expect to encounter at least one "rebase conflict." This happens when Git's attempt to reapply the changes fails because your changes conflicted with other changes that have been made. You can tell that this happened because you'll see lines in the output that look like

CONFLICT (content): Merge conflict in file.rs

When you open these files, you'll see sections of the form

<<<<<<< HEAD
Original code
=======
Your code
>>>>>>> 8fbf656... Commit fixes 12345

This represents the lines in the file that Git could not figure out how to rebase. The section between <<<<<<< HEAD and ======= has the code from master, while the other side has your version of the code. You'll need to decide how to deal with the conflict. You may want to keep your changes, keep the changes on master, or combine the two.

Generally, resolving the conflict consists of two steps: First, fix the particular conflict. Edit the file to make the changes you want and remove the <<<<<<<, ======= and >>>>>>> lines in the process. Second, check the surrounding code. If there was a conflict, it's likely there are some logical errors lying around too! It's a good idea to run x check here to make sure there are no glaring errors.

Once you're all done fixing the conflicts, you need to stage the files that had conflicts in them via git add. Afterwards, run git rebase --continue to let Git know that you've resolved the conflicts and it should finish the rebase.

Once the rebase has succeeded, you'll want to update the associated branch on your fork with git push --force-with-lease.

Keeping things up to date

The above section on Rebasing is a specific guide on rebasing work and dealing with merge conflicts. Here is some general advice about how to keep your local repo up-to-date with upstream changes:

Running git pull upstream master regularly while on your local master branch will keep it up-to-date. You will also want to keep your feature branches up-to-date by rebasing them. After pulling, you can check out the feature branches and rebase them:

git checkout master
git pull upstream master --ff-only # to make certain there are no merge commits
git rebase master feature_branch
git push --force-with-lease # (set origin to be the same as local)

To avoid merges as per the No-Merge Policy, you may want to use git config pull.ff only (this will apply the config only to the local repo) to ensure that Git doesn't create merge commits when git pulling, without needing to pass --ff-only or --rebase every time.

You can also run git push --force-with-lease from master to double-check that your feature branches are in sync with their state on the GitHub side.

Advanced Rebasing

Squash your commits

"Squashing" commits into each other causes them to be merged into a single commit. Both the upside and downside of this is that it simplifies the history. On the one hand, you lose track of the steps in which changes were made, but the history becomes easier to work with.

If there are no conflicts and you are just squashing to clean up the history, use git rebase --interactive --keep-base master. This keeps the fork point of your PR the same, making it easier to review the diff of what happened across your rebases.

Squashing can also be useful as part of conflict resolution. If your branch contains multiple consecutive rewrites of the same code, or if the rebase conflicts are extremely severe, you can use git rebase --interactive master to gain more control over the process. This allows you to choose to skip commits, edit the commits that you do not skip, change the order in which they are applied, or "squash" them into each other.

Alternatively, you can sacrifice the commit history like this:

# squash all the changes into one commit so you only have to worry about conflicts once
git rebase -i --keep-base master  # and squash all changes along the way
git rebase master
# fix all merge conflicts
git rebase --continue

You also may want to squash just the last few commits together, possibly because they only represent "fixups" and not real changes. For example, git rebase --interactive HEAD~2 will allow you to edit the two commits only.

git range-diff

After completing a rebase, and before pushing up your changes, you may want to review the changes between your old branch and your new one. You can do that with git range-diff master @{upstream} HEAD.

The first argument to range-diff, master in this case, is the base revision that you're comparing your old and new branch against. The second argument is the old version of your branch; in this case, @{upstream} means the version that you've pushed to GitHub, which is the same as what people will see in your pull request. Finally, the third argument to range-diff is the new version of your branch; in this case, it is HEAD, which is the commit that is currently checked-out in your local repo.

Note that you can also use the equivalent, abbreviated form git range-diff master @{u} HEAD.

Unlike in regular Git diffs, you'll see a - or + next to another - or + in the range-diff output. The marker on the left indicates a change between the old branch and the new branch, and the marker on the right indicates a change you've committed. So, you can think of a range-diff as a "diff of diffs" since it shows you the differences between your old diff and your new diff.

Here's an example of git range-diff output (taken from Git's docs):

-:  ------- > 1:  0ddba11 Prepare for the inevitable!
1:  c0debee = 2:  cab005e Add a helpful message at the start
2:  f00dbal ! 3:  decafe1 Describe a bug
    @@ -1,3 +1,3 @@
     Author: A U Thor <author@example.com>

    -TODO: Describe a bug
    +Describe a bug
    @@ -324,5 +324,6
      This is expected.

    -+What is unexpected is that it will also crash.
    ++Unexpectedly, it also crashes. This is a bug, and the jury is
    ++still out there how to fix it best. See ticket #314 for details.

      Contact
3:  bedead < -:  ------- TO-UNDO

(Note that git range-diff output in your terminal will probably be easier to read than in this example because it will have colors.)

Another feature of git range-diff is that, unlike git diff, it will also diff commit messages. This feature can be useful when amending several commit messages so you can make sure you changed the right parts.

git range-diff is a very useful command, but note that it can take some time to get used to its output format. You may also find Git's documentation on the command useful, especially their "Examples" section.

No-Merge Policy

The rust-lang/rust repo uses what is known as a "rebase workflow." This means that merge commits in PRs are not accepted. As a result, if you are running git merge locally, chances are good that you should be rebasing instead. Of course, this is not always true; if your merge will just be a fast-forward, like the merges that git pull usually performs, then no merge commit is created and you have nothing to worry about. Running git config merge.ff only (this will apply the config to the local repo) once will ensure that all the merges you perform are of this type, so that you cannot make a mistake.

There are a number of reasons for this decision and like all others, it is a tradeoff. The main advantage is the generally linear commit history. This greatly simplifies bisecting and makes the history and commit log much easier to follow and understand.

Tips for reviewing

NOTE: This section is for reviewing PRs, not authoring them.

Hiding whitespace

GitHub has a button for hiding whitespace-only changes that may be useful. You can also use git diff -w origin/master to view changes locally.

[Screenshot: GitHub's "Hide whitespace changes" diff option]

Fetching PRs

To checkout PRs locally, you can use git fetch upstream pull/NNNNN/head && git checkout FETCH_HEAD.

You can also use GitHub's CLI tool, gh. GitHub shows a button on PRs from which you can copy-paste the command to check them out locally. See https://cli.github.com/ for more info.

[Screenshot: GitHub's suggested gh CLI checkout command]

Moving large sections of code

Git and GitHub's default diff view for large moves within a file is quite poor: it shows each moved line as a deletion plus an addition, forcing you to compare the lines yourself. Git has an option to show moved lines in a different color:

git log -p --color-moved=dimmed-zebra --color-moved-ws=allow-indentation-change

See the docs for --color-moved for more info.

range-diff

See the relevant section for PR authors. This can be useful for comparing code that was force-pushed to make sure there are no unexpected changes.

Ignoring changes to specific files

Many large files in the repo are autogenerated. To view a diff that ignores changes to those files, you can use the following syntax (e.g. Cargo.lock):

git log -p ':!Cargo.lock'

Arbitrary patterns are supported (e.g. :!compiler/*). Patterns use the same syntax as .gitignore, with : prepended to indicate a pattern.

Git submodules

NOTE: submodules are a nice thing to know about, but they aren't an absolute prerequisite to contribute to rustc. If you are using Git for the first time, you might want to get used to the main concepts of Git before reading this section.

The rust-lang/rust repository uses Git submodules as a way to use other Rust projects from within the rust repo. Examples include Rust's fork of llvm-project, cargo and libraries like stdarch and backtrace.

Those projects are developed and maintained in a separate Git (and GitHub) repository, and they have their own Git history/commits, issue tracker and PRs. Submodules allow us to create some sort of embedded sub-repository inside the rust repository and use them as if they were directories in the rust repository.

Take llvm-project for example. llvm-project is maintained in the rust-lang/llvm-project repository, but it is used in rust-lang/rust by the compiler for code generation and optimization. We bring it into rust as a submodule, in the src/llvm-project folder.

The contents of submodules are ignored by Git: submodules are in some sense isolated from the rest of the repository. However, if you cd src/llvm-project and then run git status, you will see something like:

HEAD detached at 9567f08afc943
nothing to commit, working tree clean

As far as git is concerned, you are no longer in the rust repo, but in the llvm-project repo. You will notice that we are in "detached HEAD" state, i.e. not on a branch but on a particular commit.

This is because, like any dependency, we want to be able to control which version to use. Submodules allow us to do just that: every submodule is "pinned" to a certain commit, which doesn't change unless modified manually. If you use git checkout <commit> in the llvm-project directory and go back to the rust directory, you can stage this change like any other, e.g. by running git add src/llvm-project. (Note that if you don't stage the change to commit, then you run the risk that running x will just undo your change by switching back to the previous commit when it automatically "updates" the submodules.)

This version selection is usually done by the maintainers of the project, and looks like this.

Git submodules take some time to get used to, so don't worry if it isn't perfectly clear yet. You will rarely have to use them directly and, again, you don't need to know everything about submodules to contribute to Rust. Just know that they exist and that they correspond to some sort of embedded subrepository dependency that Git can nicely and fairly conveniently handle for us.

Hard-resetting submodules

Sometimes you might run into (when you run git status)

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
  (commit or discard the untracked or modified content in submodules)
        modified:   src/llvm-project (new commits, modified content)

and when you try to run git submodule update it breaks horribly with errors like

error: RPC failed; curl 92 HTTP/2 stream 7 was not closed cleanly: CANCEL (err 8)
error: 2782 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
fatal: Fetched in submodule path 'src/llvm-project', but it did not contain 5a5152f653959d14d68613a3a8a033fb65eec021. Direct fetching of that commit failed.

If you see (new commits, modified content) you can run

$ git submodule foreach git reset --hard

and then try git submodule update again.

Deinit git submodules

If that doesn't work, you can try to deinit all git submodules...

git submodule deinit -f --all

Unfortunately, sometimes your local git submodule configuration can become completely messed up for some reason.

Overcoming fatal: not a git repository: <submodule>/../../.git/modules/<submodule>

Sometimes, for some forsaken reason, you might run into

fatal: not a git repository: src/gcc/../../.git/modules/src/gcc

In this situation, for the given submodule path, i.e. <submodule_path> = src/gcc in this example, you need to:

  1. rm -rf <submodule_path>/.git
  2. rm -rf .git/modules/<submodule_path>/config
  3. rm -rf .gitconfig.lock if somehow the .gitconfig lock is orphaned.

Then do something like ./x fmt to have bootstrap manage the submodule checkouts for you.

Ignoring commits during git blame

Some commits contain large reformatting changes that don't otherwise change functionality. They can be instructed to be ignored by git blame through .git-blame-ignore-revs:

  1. Configure git blame to use .git-blame-ignore-revs as the list of commits to ignore: git config blame.ignoreRevsFile .git-blame-ignore-revs
  2. Add suitable commits that you wish to be ignored by git blame.

Please include a comment for the commit that you add to .git-blame-ignore-revs so people can easily figure out why a commit is ignored.

Mastering @rustbot

@rustbot (also known as triagebot) is a utility robot that is mostly used to allow any contributor to achieve certain tasks that would normally require GitHub membership to the rust-lang organization. Its most interesting features for contributors to rustc are issue claiming and relabeling.

Issue claiming

@rustbot exposes a command that allows anyone to assign an issue to themselves. If you see an issue you want to work on, you can send the following message as a comment on the issue at hand:

@rustbot claim

This will tell @rustbot to assign the issue to you if it has no assignee yet. Note that because of some GitHub restrictions, you may be assigned indirectly, i.e. @rustbot will assign itself as a placeholder and edit the top comment to reflect the fact that the issue is now assigned to you.

If you want to unassign from an issue, @rustbot has a different command:

@rustbot release-assignment

Issue relabeling

Changing labels for an issue or PR is also normally reserved for members of the organization. However, @rustbot allows you to relabel an issue yourself, with only a few restrictions. This is mostly useful in two cases:

Helping with issue triage: Rust's issue tracker has more than 5,000 open issues at the time of this writing, so labels are the most powerful tool that we have to keep it as tidy as possible. You don't need to spend hours in the issue tracker to triage issues, but if you open an issue, you should feel free to label it if you are comfortable with doing it yourself.

Updating the status of a PR: We use "status labels" to reflect the status of PRs. For example, if your PR has merge conflicts, it will automatically be assigned the S-waiting-on-author label, and reviewers might not review it until you rebase your PR. Once you do rebase your branch, you should change the labels yourself to remove the S-waiting-on-author label and add back S-waiting-on-review. In this case, the @rustbot command will look like this:

@rustbot label -S-waiting-on-author +S-waiting-on-review

The syntax for this command is pretty loose, so there are other variants of this command invocation. There are also some shortcuts to update labels; for instance, @rustbot ready will do the same thing as the command above. For more details, see the docs page about labeling and shortcuts.

Other commands

If you are interested in seeing what @rustbot is capable of, check out its documentation, which is meant as a reference for the bot and should be kept up to date every time the bot gets an upgrade.

@rustbot is maintained by the Release team. If you have any feedback regarding existing commands or suggestions for new commands, feel free to reach out on Zulip or file an issue in the triagebot repository.

Walkthrough: a typical contribution

There are a lot of ways to contribute to the Rust compiler, including fixing bugs, improving performance, helping design features, providing feedback on existing features, etc. This chapter does not try to cover all of them; instead, it walks through the design and implementation of a new feature. Not all of the steps and processes described here are needed for every contribution, and I will try to point those out as they arise.

In general, if you are interested in making a contribution and aren't sure where to start, please feel free to ask!

Overview

The feature I will discuss in this chapter is the ? Kleene operator for macros. Basically, we want to be able to write something like this:

macro_rules! foo {
    ($arg:ident $(, $optional_arg:ident)?) => {
        println!("{}", $arg);

        $(
            println!("{}", $optional_arg);
        )?
    }
}

fn main() {
    let x = 0;
    foo!(x); // ok! prints "0"
    foo!(x, x); // ok! prints "0 0"
}

So basically, the $(pat)? matcher in the macro means "this pattern can occur 0 or 1 times", similar to other regex syntaxes.

There were a number of steps to go from an idea to a stable Rust feature. Here is a quick list. We will go through each of these in order below. As I mentioned before, not all of these are needed for every type of contribution.

  • Idea discussion/Pre-RFC A Pre-RFC is an early draft or design discussion of a feature. This stage is intended to flesh out the design space a bit and get a grasp on the different merits and problems with an idea. It's a great way to get early feedback on your idea before presenting it to the wider audience. You can find the original discussion here.
  • RFC This is when you formally present your idea to the community for consideration. You can find the RFC here.
  • Implementation Implement your idea unstably in the compiler. You can find the original implementation here.
  • Possibly iterate/refine As the community gets experience with your feature on the nightly compiler and in std, there may be additional feedback about design choices that might need to be adjusted. This particular feature went through a number of iterations.
  • Stabilization When your feature has baked enough, a Rust team member may propose to stabilize it. If there is consensus, this is done.
  • Relax Your feature is now a stable Rust feature!

Pre-RFC and RFC

NOTE: In general, if you are not proposing a new feature or substantial change to Rust or the ecosystem, you don't need to follow the RFC process. Instead, you can just jump to implementation.

You can find the official guidelines for when to open an RFC here.

An RFC is a document that describes the feature or change you are proposing in detail. Anyone can write an RFC; the process is the same for everyone, including Rust team members.

To open an RFC, open a PR on the rust-lang/rfcs repo on GitHub. You can find detailed instructions in the README.

Before opening an RFC, you should do the research to "flesh out" your idea. Hastily-proposed RFCs tend not to be accepted. You should generally have a good description of the motivation, impact, disadvantages, and potential interactions with other features.

If that sounds like a lot of work, it's because it is. But no fear! Even if you're not a compiler hacker, you can get great feedback by doing a pre-RFC. This is an informal discussion of the idea. The best place to do this is internals.rust-lang.org. Your post doesn't have to follow any particular structure. It doesn't even need to be a cohesive idea. Generally, you will get tons of feedback that you can integrate back to produce a good RFC.

(Another pro-tip: try searching the RFCs repo and internals for prior related ideas. A lot of times an idea has already been considered and was either rejected or postponed to be tried again later. This can save you and everybody else some time)

In the case of our example, a participant in the pre-RFC thread pointed out a syntax ambiguity and a potential resolution. Also, the overall feedback seemed positive. In this case, the discussion converged pretty quickly, but for some ideas, a lot more discussion can happen (e.g. see this RFC which received a whopping 684 comments!). If that happens, don't be discouraged; it means the community is interested in your idea, but it perhaps needs some adjustments.

The RFC for our ? macro feature did receive some discussion on the RFC thread too. As with most RFCs, there were a few questions that we couldn't answer by discussion: we needed experience using the feature to decide. Such questions are listed in the "Unresolved Questions" section of the RFC. Also, over the course of the RFC discussion, you will probably want to update the RFC document itself to reflect the course of the discussion (e.g. new alternatives or prior work may be added or you may decide to change parts of the proposal itself).

In the end, when the discussion seems to reach a consensus and die down a bit, a Rust team member may propose to move to "final comment period" (FCP) with one of three possible dispositions. This means that they want the other members of the appropriate teams to review and comment on the RFC. More discussion may ensue, which may result in more changes or unresolved questions being added. At some point, when everyone is satisfied, the RFC enters the FCP, which is the last chance for people to bring up objections. When the FCP is over, the disposition is adopted. Here are the three possible dispositions:

  • Merge: accept the feature. Here is the proposal to merge for our ? macro feature.
  • Close: this feature in its current form is not a good fit for rust. Don't be discouraged if this happens to your RFC, and don't take it personally. This is not a reflection on you, but rather a community decision that rust will go a different direction.
  • Postpone: there is interest in going this direction but not at the moment. This happens most often because the appropriate Rust team doesn't have the bandwidth to shepherd the feature through the process to stabilization. Often this is the case when the feature doesn't fit into the team's roadmap. Postponed ideas may be revisited later.

When an RFC is merged, the PR is merged into the RFCs repo. A new tracking issue is created in the rust-lang/rust repo to track progress on the feature and discuss unresolved questions, implementation progress and blockers, etc. Here is the tracking issue for our ? macro feature.

Implementation

To make a change to the compiler, open a PR against the rust-lang/rust repo.

Depending on the feature/change/bug fix/improvement, implementation may be relatively straightforward or it may be a major undertaking. You can always ask for help or mentorship from more experienced compiler devs. Also, you don't have to be the one to implement your feature; but keep in mind that if you don't, it might be a while before someone else does.

For the ? macro feature, I needed to go understand the relevant parts of macro expansion in the compiler. Personally, I find that improving the comments in the code is a helpful way of making sure I understand it, but you don't have to do that if you don't want to.

I then implemented the original feature, as described in the RFC. When a new feature is implemented, it goes behind a feature gate, which means that you have to use #![feature(my_feature_name)] to use the feature. The feature gate is removed when the feature is stabilized.

Most bug fixes and improvements don't require a feature gate. You can just make your changes/improvements.

When you open a PR on rust-lang/rust, a bot will assign your PR to a reviewer. If there is a particular Rust team member you are working with, you can request that reviewer by leaving a comment on the thread with r? @reviewer-github-id (e.g. r? @eddyb). If you don't know who to request, don't request anyone; the bot will assign someone automatically based on which files you changed.

The reviewer may request changes before they approve your PR. After leaving comments, they may mark the PR with the S-waiting-on-author label, which means that the PR is blocked on you to make the requested changes. When you have finished iterating on the changes, you can mark the PR as S-waiting-on-review again by leaving a comment with @rustbot ready; this will remove the S-waiting-on-author label and add the S-waiting-on-review label.

Feel free to ask questions or discuss things you don't understand or disagree with. However, recognize that the PR won't be merged unless someone on the Rust team approves it. If a reviewer leaves a comment like "r=me after fixing ...", that means they approve the PR, and after you fix those trivial issues you can merge it yourself by leaving a comment with @bors r=reviewer-github-id (e.g. @bors r=eddyb). Note that r=someone requires permission, and bors may reply with something like "🔑 Insufficient privileges..." when you comment r=someone. In that case, you have to ask the reviewer to revisit your PR.

When your reviewer approves the PR, it will go into a queue for yet another bot called @bors. @bors manages the CI build/merge queue. When your PR reaches the head of the @bors queue, @bors will test out the merge by running all tests against your PR on GitHub Actions. This takes a lot of time to finish. If all tests pass, the PR is merged and becomes part of the next nightly compiler!

There are a few things that may happen for some PRs during the review process:

  • If the change is substantial enough, the reviewer may request an FCP on the PR. This gives all members of the appropriate team a chance to review the changes.
  • If the change may cause breakage, the reviewer may request a crater run. This compiles the compiler with your changes and then attempts to compile all crates on crates.io with your modified compiler. This is a great smoke test to check if you introduced a change to compiler behavior that affects a large portion of the ecosystem.
  • If the diff of your PR is large or the reviewer is busy, your PR may have some merge conflicts with other PRs that happen to get merged first. You should fix these merge conflicts using the normal git procedures.

If you are not doing a new feature or something like that (e.g. if you are fixing a bug), then that's it! Thanks for your contribution :)

Refining your implementation

As people get experience with your new feature on nightly, slight changes may be proposed and unresolved questions may become resolved. Updates/changes go through the same process for implementing any other changes, as described above (i.e. submit a PR, go through review, wait for @bors, etc).

Some changes may be major enough to require an FCP and some review by Rust team members.

For the ? macro feature, we went through a few different iterations after the original implementation: 1, 2, 3.

Along the way, we decided that ? should not take a separator, which was previously an unresolved question listed in the RFC. We also changed the disambiguation strategy: we decided to remove the ability to use ? as a separator token for other repetition operators (e.g. + or *). However, since this was a breaking change, we decided to do it over an edition boundary. Thus, the new feature can be enabled only in edition 2018. These deviations from the original RFC required another FCP.

Stabilization

Finally, after the feature had baked for a while on nightly, a language team member moved to stabilize it.

A stabilization report needs to be written that includes

  • brief description of the behavior and any deviations from the RFC
  • which edition(s) are affected and how
  • links to a few tests to show the interesting aspects

The stabilization report for our feature is here.

After this, a PR is made to remove the feature gate, enabling the feature by default (on the 2018 edition). A note is added to the Release notes about the feature.

Steps to stabilize the feature can be found at Stabilizing Features.

Implementing new language features

When you want to implement a new significant feature in the compiler, you need to go through this process to make sure everything goes smoothly.

NOTE: this section is for language features, not library features, which use a different process.

The @rfcbot FCP process

When the change is small and uncontroversial, then it can be done with just writing a PR and getting an r+ from someone who knows that part of the code. However, if the change is potentially controversial, it would be a bad idea to push it without consensus from the rest of the team (both in the "distributed system" sense to make sure you don't break anything you don't know about, and in the social sense to avoid PR fights).

If such a change seems to be too small to require a full formal RFC process (e.g., a small standard library addition, a big refactoring of the code, a "technically-breaking" change, or a "big bugfix" that basically amounts to a small feature) but is still too controversial or big to get by with a single r+, you can propose a final comment period (FCP). Or, if you're not on the relevant team (and thus don't have @rfcbot permissions), ask someone who is to start one; unless they have a concern themselves, they should.

Again, the FCP process is only needed if you need consensus – if you don't think anyone would have a problem with your change, it's OK to get by with only an r+. For example, it is OK to add or modify unstable command-line flags or attributes without an FCP for compiler development or standard library use, as long as you don't expect them to be in wide use in the nightly ecosystem. Some teams have lighter weight processes that they use in scenarios like this; for example, the compiler team recommends filing a Major Change Proposal (MCP) as a lightweight way to garner support and feedback without requiring full consensus.

You don't need to have the implementation fully ready for r+ to propose an FCP, but it is generally a good idea to have at least a proof of concept so that people can see what you are talking about.

When an FCP is proposed, it requires all members of the team to sign off on the FCP. After they all do so, there's a 10-day-long "final comment period" (hence the name) where everybody can comment, and if no concerns are raised, the PR/issue gets FCP approval.

The logistics of writing features

There are a few "logistic" hoops you might need to go through in order to implement a feature in a working way.

Warning Cycles

In some cases, a feature or bugfix might break some existing programs in some edge cases. In that case, you might want to do a crater run to assess the impact and possibly add a future-compatibility lint, similar to those used for edition-gated lints.

Stability

We value the stability of Rust. Code that works and runs on stable should (mostly) not break. Because of that, we don't want to release a feature to the world with only team consensus and code review - we want to gain real-world experience on using that feature on nightly, and we might want to change the feature based on that experience.

To allow for that, we must make sure users don't accidentally depend on that new feature - otherwise, especially if experimentation takes time or is delayed and the feature takes the trains to stable, it would end up de facto stable and we won't be able to make changes to it without breaking people's code.

The way we do that is that we make sure all new features are feature gated - they can't be used without enabling a feature gate (#![feature(foo)]), which can't be done in a stable/beta compiler. See the stability in code section for the technical details.
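
As a quick illustration, a nightly-only crate opting in to a hypothetical gated feature looks something like this (the feature name is a placeholder):

#![feature(my_feature_name)] // rejected by stable and beta compilers

fn main() {
    // Code that uses the gated feature goes here; without the attribute
    // above, the compiler refuses to build it.
}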

Eventually, after we gain enough experience using the feature, make the necessary changes, and are satisfied, we expose it to the world using the stabilization process described here. Until then, the feature is not set in stone: every part of the feature can be changed, or the feature might be completely rewritten or removed. Features are not supposed to gain tenure by being unstable and unchanged for a year.

Tracking Issues

To keep track of the status of an unstable feature, the experience we get while using it on nightly, and of the concerns that block its stabilization, every feature-gate needs a tracking issue. General discussions about the feature should be done on the tracking issue.

For features that have an RFC, you should use the RFC's tracking issue for the feature.

For other features, you'll have to make a tracking issue for that feature. The issue title should be "Tracking issue for YOUR FEATURE". Use the "Tracking Issue" issue template.

Stability in code

The following steps need to be followed in order to implement a new unstable feature:

  1. Open a tracking issue - if you have an RFC, you can use the tracking issue for the RFC.

    The tracking issue should be labeled with at least C-tracking-issue. For a language feature, a label F-feature_name should be added as well.

  2. Pick a name for the feature gate (for RFCs, use the name in the RFC).

  3. Add the feature name to rustc_span/src/symbol.rs in the Symbols {...} block.

    Note that this block must be in alphabetical order.

  4. Add a feature gate declaration to rustc_feature/src/unstable.rs in the unstable declare_features block.

    /// description of feature
    (unstable, $feature_name, "CURRENT_RUSTC_VERSION", Some($tracking_issue_number))
    

    If you haven't yet opened a tracking issue (e.g. because you want initial feedback on whether the feature is likely to be accepted), you can temporarily use None - but make sure to update it before the PR is merged!

    For example:

    /// Allows defining identifiers beyond ASCII.
    (unstable, non_ascii_idents, "CURRENT_RUSTC_VERSION", Some(55467)),
    

    Features can be marked as incomplete, and trigger the warn-by-default incomplete_features lint by setting their type to incomplete:

    /// Allows unsized rvalues at arguments and parameters.
    (incomplete, unsized_locals, "CURRENT_RUSTC_VERSION", Some(48055)),
    

    To avoid semantic merge conflicts, please use CURRENT_RUSTC_VERSION instead of 1.70 or another explicit version number.

  5. Prevent usage of the new feature unless the feature gate is set. You can check it in most places in the compiler using the expression tcx.features().$feature_name (or sess.features_untracked().$feature_name if the tcx is unavailable).

    If the feature gate is not set, you should either maintain the pre-feature behavior or raise an error, depending on what makes sense. Errors should generally use rustc_session::parse::feature_err. For an example of adding an error, see #81015.

    For features introducing new syntax, pre-expansion gating should be used instead. During parsing, when the new syntax is parsed, the symbol must be inserted into the current crate's GatedSpans via self.sess.gated_spans.gate(sym::my_feature, span).

    After being inserted into the gated spans, the span must be checked in the rustc_ast_passes::feature_gate::check_crate function, which actually denies features. Exactly how it is gated depends on the exact type of feature, but most likely will use the gate_all!() macro.

  6. Add a test to ensure the feature cannot be used without a feature gate, by creating tests/ui/feature-gates/feature-gate-$feature_name.rs. You can generate the corresponding .stderr file by running ./x test tests/ui/feature-gates/ --bless.

  7. Add a section to the unstable book, in src/doc/unstable-book/src/language-features/$feature_name.md.

  8. Write a lot of tests for the new feature, preferably in tests/ui/$feature_name/. PRs without tests will not be accepted!

  9. Get your PR reviewed and land it. You have now successfully implemented a feature in Rust!

Stability attributes

This section is about the stability attributes and schemes that allow stable APIs to use unstable APIs internally in the Rust standard library.

NOTE: this section is for library features, not language features. For instructions on stabilizing a language feature see Stabilizing Features.

unstable

The #[unstable(feature = "foo", issue = "1234", reason = "lorem ipsum")] attribute explicitly marks an item as unstable. Items that are marked as "unstable" cannot be used without a corresponding #![feature] attribute on the crate, even on a nightly compiler. This restriction only applies across crate boundaries, unstable items may be used within the crate that defines them.

The issue field specifies the associated GitHub issue number. This field is required and all unstable features should have an associated tracking issue. In rare cases where there is no sensible value, issue = "none" is used.

The unstable attribute infects all sub-items; the attribute doesn't have to be reapplied to them. So if you apply this to a module, all items in the module will be unstable.

You can make specific sub-items stable by using the #[stable] attribute on them. The stability scheme works similarly to how pub works: you can have public functions in nonpublic modules, and you can have stable functions in unstable modules or vice versa.
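
As a small sketch (the feature names and issue number here are made up), an unstable module containing one explicitly stable function might look like this; note that this also requires #![feature(staged_api)] at the crate root, as described below:

#[unstable(feature = "foo_module", issue = "1234", reason = "lorem ipsum")]
pub mod foo {
    // Inherits `unstable` from the enclosing module; no attribute needed here.
    pub fn bar() {}

    // A sub-item can still be marked stable explicitly.
    #[stable(feature = "foo_baz", since = "1.0.0")]
    pub fn baz() {}
}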

Previously, due to a rustc bug, stable items inside unstable modules were available to stable code in that location. As of September 2024, items with accidentally stabilized paths are marked with the #[rustc_allowed_through_unstable_modules] attribute to prevent code dependent on those paths from breaking.

The unstable attribute may also have the soft value, which makes it a future-incompatible deny-by-default lint instead of a hard error. This is used by the bench attribute which was accidentally accepted in the past. This prevents breaking dependencies by leveraging Cargo's lint capping.

stable

The #[stable(feature = "foo", since = "1.420.69")] attribute explicitly marks an item as stabilized. Note that stable functions may use unstable things in their body.

rustc_const_unstable

The #[rustc_const_unstable(feature = "foo", issue = "1234", reason = "lorem ipsum")] attribute has the same interface as the unstable attribute. It is used to mark const fns as having their constness be unstable. This is only needed in rare cases:

  • If a const fn makes use of unstable language features or intrinsics. (The compiler will tell you to add the attribute if you run into this.)
  • If a const fn is #[stable] but not yet intended to be const-stable.
  • To change the feature gate that is required to call a const-unstable intrinsic.

Const-stability differs from regular stability in that it is recursive: a #[rustc_const_unstable(...)] function cannot even be indirectly called from stable code. This is to avoid accidentally leaking unstable compiler implementation artifacts to stable code or locking us into the accidental quirks of an incomplete implementation. See the rustc_const_stable_indirect and rustc_allow_const_fn_unstable attributes below for how to fine-tune this check.
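
As an illustration of the second bullet above (all names, versions, and issue numbers are made up), a function can be stable while its constness remains unstable:

#[stable(feature = "foo", since = "1.0.0")]
#[rustc_const_unstable(feature = "const_foo", issue = "1234")]
pub const fn foo() -> u32 {
    // Callable from stable code at runtime, but calling it in const contexts
    // requires #![feature(const_foo)].
    0
}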

rustc_const_stable

The #[rustc_const_stable(feature = "foo", since = "1.420.69")] attribute explicitly marks a const fn as having its constness be stable.

rustc_const_stable_indirect

The #[rustc_const_stable_indirect] attribute can be added to a #[rustc_const_unstable(...)] function to make it callable from #[rustc_const_stable(...)] functions. This indicates that the function is ready for stable in terms of its implementation (i.e., it doesn't use any unstable compiler features); the only reason it is not const-stable yet is API concerns.

This should also be added to lang items for which const-calls are synthesized in the compiler, to ensure those calls do not bypass recursive const stability rules.

rustc_intrinsic_const_stable_indirect

On an intrinsic, this attribute marks the intrinsic as "ready to be used by public stable functions". If the intrinsic has a rustc_const_unstable attribute, it should be removed. Adding this attribute to an intrinsic requires t-lang and wg-const-eval approval!

rustc_default_body_unstable

The #[rustc_default_body_unstable(feature = "foo", issue = "1234", reason = "lorem ipsum")] attribute has the same interface as the unstable attribute. It is used to mark the default implementation for an item within a trait as unstable. A trait with a default-body-unstable item can be implemented stably by providing an explicit body for any such item, or the default body can be used by enabling its corresponding #![feature].
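
A sketch of what this looks like on a trait (the names, version, and issue number are made up):

#[stable(feature = "my_trait", since = "1.0.0")]
pub trait MyTrait {
    // Implementors on stable can write their own body for this method;
    // relying on the default body requires #![feature(foo)].
    #[stable(feature = "my_trait", since = "1.0.0")]
    #[rustc_default_body_unstable(feature = "foo", issue = "1234", reason = "recently added")]
    fn method(&self) {}
}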

Stabilizing a library feature

To stabilize a feature, follow these steps:

  1. Ask a @T-libs-api member to start an FCP on the tracking issue and wait for the FCP to complete (with disposition-merge).
  2. Change #[unstable(...)] to #[stable(since = "CURRENT_RUSTC_VERSION")].
  3. Remove #![feature(...)] from any test or doc-test for this API. If the feature is used in the compiler or tools, remove it from there as well.
  4. If this is a const fn, add #[rustc_const_stable(since = "CURRENT_RUSTC_VERSION")]. Alternatively, if this is not supposed to be const-stabilized yet, add #[rustc_const_unstable(...)] for some new feature gate (with a new tracking issue).
  5. Open a PR against rust-lang/rust.
    • Add the appropriate labels: @rustbot modify labels: +T-libs-api.
    • Link to the tracking issue and say "Closes #XXXXX".

You can see an example of stabilizing a feature with tracking issue #81656 with FCP and the associated implementation PR #84642.
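
In code, steps 2 and 4 above boil down to something like the following for a hypothetical const fn (the feature names, issue numbers, and versions are placeholders):

// Before stabilization it looked like this:
//
//     #[unstable(feature = "foo", issue = "1234", reason = "lorem ipsum")]
//     pub const fn foo() -> u32 { 0 }

// After stabilization; CURRENT_RUSTC_VERSION is a placeholder that is later
// replaced with the actual release version (see the note about it above).
#[stable(feature = "foo", since = "CURRENT_RUSTC_VERSION")]
// Not const-stabilizing yet, so constness stays behind a new feature gate
// with its own tracking issue.
#[rustc_const_unstable(feature = "const_foo", issue = "5678")]
pub const fn foo() -> u32 { 0 }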

allow_internal_unstable

Macros and compiler desugarings expose their bodies to the call site. To work around not being able to use unstable things in the standard library's macros, there's the #[allow_internal_unstable(feature1, feature2)] attribute that allows the given features to be used in stable macros.
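
As a minimal sketch (the feature, macro, and function names are made up and are not real standard library internals):

// The macro can be used from stable code even though its expansion calls an
// item gated behind the `unstable_internals` feature.
#[macro_export]
#[allow_internal_unstable(unstable_internals)]
macro_rules! make_thing {
    () => {
        $crate::internals::make_thing_impl()
    };
}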

Note that if a macro is used in const context and generates a call to a #[rustc_const_unstable(...)] function, that will still be rejected even with allow_internal_unstable. Add #[rustc_const_stable_indirect] to the function to ensure the macro cannot accidentally bypass the recursive const stability checks.

rustc_allow_const_fn_unstable

As explained above, no unstable const features are allowed inside stable const fn, not even indirectly.

However, sometimes we do know that a feature will get stabilized, just not when, or there is a stable (but e.g. runtime-slow) workaround, so we could always fall back to some stable version if we scrapped the unstable feature. In those cases, the #[rustc_allow_const_fn_unstable(feature1, feature2)] attribute can be used to allow some unstable features in the body of a stable (or indirectly stable) const fn.

You also need to take care to uphold the const fn invariant that calling it at runtime and compile-time needs to behave the same (see also this blog post). This means that you may not create a const fn that e.g. transmutes a memory address to an integer, because the addresses of things are nondeterministic and often unknown at compile-time.

Always ping @rust-lang/wg-const-eval if you are adding more rustc_allow_const_fn_unstable attributes to any const fn.

staged_api

Any crate that uses the stable or unstable attributes must include the #![feature(staged_api)] attribute at the crate root.

deprecated

Deprecations in the standard library are nearly identical to deprecations in user code. When #[deprecated] is used on an item, it must also have a stable or unstable attribute.

deprecated has the following form:

#[deprecated(
    since = "1.38.0",
    note = "explanation for deprecation",
    suggestion = "other_function"
)]

The suggestion field is optional. If given, it should be a string that can be used as a machine-applicable suggestion to correct the warning. This is typically used when the identifier is renamed, but no other significant changes are necessary. When the suggestion field is used, you need to have #![feature(deprecated_suggestion)] at the crate root.

Another difference from user code is that the since field is actually checked against the current version of rustc. If since is in a future version, then the deprecated_in_future lint is triggered. This lint is allow by default, but most of the standard library raises it to a warning with #![warn(deprecated_in_future)].
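
Putting the pieces together, a deprecated (and stable) item might look like this (the names and versions are placeholders):

#[stable(feature = "foo", since = "1.0.0")]
#[deprecated(
    since = "1.38.0",
    note = "explanation for deprecation",
    suggestion = "other_function"
)]
pub fn old_function() {}

#[stable(feature = "foo", since = "1.0.0")]
pub fn other_function() {}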

Request for stabilization

NOTE: this page is about stabilizing language features. For stabilizing library features, see Stabilizing a library feature.

Once an unstable feature has been well-tested with no outstanding concerns, anyone may push for its stabilization. It involves the following steps:

Documentation PRs

If any documentation for this feature exists, it should be in the Unstable Book, located at src/doc/unstable-book. If it exists, the page for the feature gate should be removed.

If there was documentation there, integrating it into the existing documentation is needed.

If there wasn't documentation there, it needs to be added.

Places that may need updated documentation:

  • The Reference: This must be updated, in full detail.
  • The Book: This may or may not need updating, depending on the feature. If you're not sure, please open an issue on this repository and it can be discussed.
  • standard library documentation: As needed. Language features often don't need this, but if it's a feature that changes how good examples are written, such as when ? was added to the language, updating examples is important.
  • Rust by Example: As needed.

Prepare PRs to update the documentation involving this new feature for the repositories mentioned above. Maintainers of these repositories will keep these PRs open until the whole stabilization process has completed. Meanwhile, we can proceed to the next step.

Write a stabilization report

Find the tracking issue of the feature, and create a short stabilization report. Essentially this would be a brief summary of the feature plus some links to test cases showing it works as expected, along with a list of edge cases that came up and were considered. This is a minimal "due diligence" that we do before stabilizing.

The report should contain:

  • A summary, showing examples (e.g. code snippets) of what is enabled by this feature.
  • Links to test cases in our test suite regarding this feature, and a description of the feature's behavior when encountering edge cases.
  • Links to the documentation (the PRs we have made in the previous steps).
  • Any other relevant information.
  • The resolutions of any unresolved questions if the stabilization is for an RFC.

Examples of stabilization reports can be found in rust-lang/rust#44494 and rust-lang/rust#28237 (these links will bring you directly to the comment containing the stabilization report).

FCP

If any member of the team responsible for tracking this feature agrees with stabilizing it, they will start the FCP (final-comment-period) process by commenting:

@rfcbot fcp merge

The rest of the team members will review the proposal. If the final decision is to stabilize, we proceed to do the actual code modification.

Stabilization PR

This is for stabilizing language features. If you are stabilizing a library feature, see the stabilization chapter of the std dev guide instead.

Once we have decided to stabilize a feature, we need to have a PR that actually makes that stabilization happen. These kinds of PRs are a great way to get involved in Rust, as they take you on a little tour through the source code.

Here is a general guide to how to stabilize a feature -- every feature is different, of course, so some features may require steps beyond what this guide talks about.

Note: Before we stabilize any feature, it's the rule that it should appear in the documentation.

Updating the feature-gate listing

There is a central listing of unstable feature-gates in compiler/rustc_feature/src/unstable.rs. Search for the declare_features! macro. There should be an entry for the feature you are aiming to stabilize, something like this (this example is taken from rust-lang/rust#32409):

// pub(restricted) visibilities (RFC 1422)
(unstable, pub_restricted, "CURRENT_RUSTC_VERSION", Some(32409)),

The above line should be moved to compiler/rustc_feature/src/accepted.rs. Entries in the declare_features! call are sorted, so find the correct place. When it is done, it should look like:

// pub(restricted) visibilities (RFC 1422)
(accepted, pub_restricted, "CURRENT_RUSTC_VERSION", Some(32409)),
// note that we changed this

(Even though you will encounter explicit version numbers in the file from past changes, you should not put the rustc version in which you expect your stabilization to happen; use CURRENT_RUSTC_VERSION instead.)

Removing existing uses of the feature-gate

Next search for the feature string (in this case, pub_restricted) in the codebase to find where it appears. Change uses of #![feature(XXX)] in std and any rustc crates (this includes test folders under library/ and compiler/ but not the toplevel tests/ one) to be #![cfg_attr(bootstrap, feature(XXX))]. This includes the feature-gate only for stage0, which is built using the current beta (this is needed because the feature is still unstable in the current beta).

Also, remove those strings from any tests (e.g. under tests/). If there are tests specifically targeting the feature-gate (i.e., testing that the feature-gate is required to use the feature, but nothing else), simply remove the test.

Do not require the feature-gate to use the feature

Most importantly, remove the code which flags an error if the feature-gate is not present (since the feature is now considered stable). If the feature can be detected because it employs some new syntax, then a common place for that code to be is compiler/rustc_ast_passes/src/feature_gate.rs. For example, you might see code like this:

gate_feature_post!(&self, pub_restricted, span,
 "`pub(restricted)` syntax is experimental");

This gate_feature_post! macro prints an error if the pub_restricted feature is not enabled. It is not needed now that pub(restricted) is stable.

For more subtle features, you may find code like this:

if self.tcx.sess.features.borrow().pub_restricted { /* XXX */ }

This pub_restricted field (obviously named after the feature) would ordinarily be false if the feature flag is not present and true if it is. So transform the code to assume that the field is true. In this case, that would mean removing the if and leaving just the /* XXX */.

if self.tcx.sess.features.borrow().pub_restricted { /* XXX */ }
becomes
/* XXX */

if self.tcx.sess.features.borrow().pub_restricted && something { /* XXX */ }
becomes
if something { /* XXX */ }

Feature Gates

This chapter is intended to provide basic help for adding, removing, and modifying feature gates.

Note that this is specific to language feature gates; library feature gates use a different mechanism.

Adding a feature gate

See "Stability in code" in the "Implementing new features" section for instructions.

Removing a feature gate

To remove a feature gate, follow these steps:

  1. Remove the feature gate declaration in rustc_feature/src/unstable.rs. It will look like this:

    /// description of feature
    (unstable, $feature_name, "$version", Some($tracking_issue_number))
    
  2. Add a modified version of the feature gate declaration that you just removed to rustc_feature/src/removed.rs:

    /// description of feature
    (removed, $old_feature_name, "$version", Some($tracking_issue_number),
     Some("$why_it_was_removed"))
    

Renaming a feature gate

To rename a feature gate, follow these steps (the first two are the same steps to follow when removing a feature gate):

  1. Remove the old feature gate declaration in rustc_feature/src/unstable.rs. It will look like this:

    /// description of feature
    (unstable, $old_feature_name, "$version", Some($tracking_issue_number))
    
  2. Add a modified version of the old feature gate declaration that you just removed to rustc_feature/src/removed.rs:

    /// description of feature
    /// Renamed to `$new_feature_name`
    (removed, $old_feature_name, "$version", Some($tracking_issue_number),
     Some("renamed to `$new_feature_name`"))
    
  3. Add a feature gate declaration with the new name to rustc_feature/src/unstable.rs. It should look very similar to the old declaration:

    /// description of feature
    (unstable, $new_feature_name, "$version", Some($tracking_issue_number))
    

Stabilizing a feature

See "Updating the feature-gate listing" in the "Stabilizing Features" chapter for instructions. There are additional steps you will need to take beyond just updating the declaration!

Coding conventions

This chapter offers some tips on the coding conventions for rustc. It covers formatting, coding for correctness, using crates from crates.io, and some tips on structuring your PR for easy review.

Formatting and the tidy script

rustc is moving towards the Rust standard coding style.

However, for now we don't use stable rustfmt; we use a pinned version with a special config, so this may result in a different style from normal rustfmt. Therefore, formatting this repository using cargo fmt is not recommended.

Instead, formatting should be done using ./x fmt. It's a good habit to run ./x fmt before every commit, as this reduces conflicts later.

Formatting is checked by the tidy script. It runs automatically when you do ./x test and can be run in isolation with ./x fmt --check.

If you want to use format-on-save in your editor, the pinned version of rustfmt is built under build/<target>/stage0/bin/rustfmt. You'll have to pass the --edition=2021 argument yourself when calling rustfmt directly.

Formatting C++ code

The compiler contains some C++ code for interfacing with parts of LLVM that don't have a stable C API. When modifying that code, use this command to format it:

./x test tidy --extra-checks=cpp:fmt --bless

This uses a pinned version of clang-format, to avoid relying on the local environment.

In the past, files began with a copyright and license notice. Please omit this notice for new files licensed under the standard terms (dual MIT/Apache-2.0).

All of the copyright notices should be gone by now, but if you come across one in the rust-lang/rust repo, feel free to open a PR to remove it.

Line length

Lines should be at most 100 characters. It's even better if you can keep things to 80.

Ignoring the line length limit. Sometimes – in particular for tests – it can be necessary to exempt yourself from this limit. In that case, you can add a comment towards the top of the file like so:

// ignore-tidy-linelength

Tabs vs spaces

Prefer 4-space indent.

Coding for correctness

Beyond formatting, there are a few other tips that are worth following.

Prefer exhaustive matches

Using _ in a match is convenient, but it means that when new variants are added to the enum, they may not get handled correctly. Ask yourself: if a new variant were added to this enum, what's the chance that it would want to use the _ code, versus having some other treatment? Unless the answer is "low", then prefer an exhaustive match. (The same advice applies to if let and while let, which are effectively tests for a single variant.)
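
For example (a small made-up illustration, not code from the compiler):

enum Mode {
    Read,
    Write,
}

// An exhaustive match: if a `Mode::Append` variant is added later, this
// function stops compiling and forces a decision, rather than silently
// falling into a `_` arm that may be wrong for the new variant.
fn describe(mode: &Mode) -> &'static str {
    match mode {
        Mode::Read => "read",
        Mode::Write => "write",
    }
}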

Use "TODO" comments for things you don't want to forget

As a useful tool to yourself, you can insert a // TODO comment for something that you want to get back to before you land your PR:

fn do_something() {
    if something_else {
        unimplemented!(); // TODO write this
    }
}

The tidy script will report an error for a // TODO comment, so this code would not be able to land until the TODO is fixed (or removed).

This can also be useful in a PR as a way to signal from one commit that you are leaving a bug that a later commit will fix:

if foo {
    return true; // TODO wrong, but will be fixed in a later commit
}

Using crates from crates.io

See the crates.io dependencies section.

How to structure your PR

How you prepare the commits in your PR can make a big difference for the reviewer. Here are some tips.

Isolate "pure refactorings" into their own commit. For example, if you rename a method, then put that rename into its own commit, along with the renames of all the uses.

More commits is usually better. If you are doing a large change, it's almost always better to break it up into smaller steps that can be independently understood. The one thing to be aware of is that if you introduce some code following one strategy, then change it dramatically (versus adding to it) in a later commit, that 'back-and-forth' can be confusing.

Format liberally. While only the final commit of a PR must be correctly formatted, it is both easier to review and less noisy to format each commit individually using ./x fmt.

No merges. We do not allow merge commits into our history, other than those by bors. If you get a merge conflict, rebase instead via a command like git rebase -i rust-lang/master (presuming you use the name rust-lang for your remote).

Individual commits do not have to build (but it's nice). We do not require that every intermediate commit successfully builds – we only expect to be able to bisect at a PR level. However, if you can make individual commits build, that is always helpful.

Naming conventions

Apart from normal Rust style/naming conventions, there are also some specific to the compiler.

  • cx tends to be short for "context" and is often used as a suffix. For example, tcx is a common name for the Typing Context.

  • 'tcx is used as the lifetime name for the Typing Context.

  • Because crate is a keyword, if you need a variable to represent something crate-related, often the spelling is changed to krate.

Procedures for Breaking Changes

This page defines the best practices procedure for making bug fixes or soundness corrections in the compiler that can cause existing code to stop compiling. This text is based on RFC 1589.

Motivation

From time to time, we encounter the need to make a bug fix, soundness correction, or other change in the compiler which will cause existing code to stop compiling. When this happens, it is important that we handle the change in a way that gives users of Rust a smooth transition. What we want to avoid is that existing programs suddenly stop compiling with opaque error messages: we would prefer to have a gradual period of warnings, with clear guidance as to what the problem is, how to fix it, and why the change was made. This RFC describes the procedure that we have been developing for handling breaking changes that aims to achieve that kind of smooth transition.

One of the key points of this policy is that (a) warnings should be issued initially rather than hard errors if at all possible and (b) every change that causes existing code to stop compiling will have an associated tracking issue. This issue provides a point to collect feedback on the results of that change. Sometimes changes have unexpectedly large consequences or there may be a way to avoid the change that was not considered. In those cases, we may decide to change course and roll back the change, or find another solution (if warnings are being used, this is particularly easy to do).

What qualifies as a bug fix?

Note that this RFC does not try to define when a breaking change is permitted. That is already covered under RFC 1122. This document assumes that the change being made is in accordance with those policies. Here is a summary of the conditions from RFC 1122:

  • Soundness changes: Fixes to holes uncovered in the type system.
  • Compiler bugs: Places where the compiler is not implementing the specified semantics found in an RFC or lang-team decision.
  • Underspecified language semantics: Clarifications to grey areas where the compiler behaves inconsistently and no formal behavior had been previously decided.

Please see the RFC for full details!

Detailed design

The procedure for making a breaking change is as follows (each of these steps is described in more detail below):

  1. Do a crater run to assess the impact of the change.
  2. Make a special tracking issue dedicated to the change.
  3. Do not report an error right away. Instead, issue forwards-compatibility lint warnings.
    • Sometimes this is not straightforward. See the text below for suggestions on different techniques we have employed in the past.
    • For cases where warnings are infeasible:
      • Report errors, but make every effort to give a targeted error message that directs users to the tracking issue
      • Submit PRs to all known affected crates that fix the issue
        • or, at minimum, alert the owners of those crates to the problem and direct them to the tracking issue
  4. Once the change has been in the wild for at least one cycle, we can stabilize the change, converting those warnings into errors.

Finally, for changes to rustc_ast that will affect plugins, the general policy is to batch these changes. That is discussed below in more detail.

Tracking issue

Every breaking change should be accompanied by a dedicated tracking issue for that change. The main text of this issue should describe the change being made, with a focus on what users must do to fix their code. The issue should be approachable and practical; it may make sense to direct users to an RFC or some other issue for the full details. The issue also serves as a place where users can comment with questions or other concerns.

A template for these breaking-change tracking issues can be found below. An example of how such an issue should look can be found here.

The issue should be tagged with (at least) B-unstable and T-compiler.

Tracking issue template

This is a template to use for tracking issues:

This is the **summary issue** for the `YOUR_LINT_NAME_HERE`
future-compatibility warning and other related errors. The goal of
this page is to describe why this change was made and how you can fix
code that is affected by it. It also provides a place to ask questions
or register a complaint if you feel the change should not be made. For
more information on the policy around future-compatibility warnings,
see our [breaking change policy guidelines][guidelines].

[guidelines]: LINK_TO_THIS_RFC

#### What is the warning for?

*Describe the conditions that trigger the warning and how they can be
fixed. Also explain why the change was made.*

#### When will this warning become a hard error?

At the beginning of each 6-week release cycle, the Rust compiler team
will review the set of outstanding future compatibility warnings and
nominate some of them for **Final Comment Period**. Toward the end of
the cycle, we will review any comments and make a final determination
whether to convert the warning into a hard error or remove it
entirely.

Issuing future compatibility warnings

The best way to handle a breaking change is to begin by issuing future-compatibility warnings. These are a special category of lint warning. Adding a new future-compatibility warning can be done as follows.

// 1. Define the lint in `compiler/rustc_middle/src/lint/builtin.rs`:
declare_lint! {
    pub YOUR_ERROR_HERE,
    Warn,
    "illegal use of foo bar baz"
}

// 2. Add to the list of HardwiredLints in the same file:
impl LintPass for HardwiredLints {
    fn get_lints(&self) -> LintArray {
        lint_array!(
            ..,
            YOUR_ERROR_HERE
        )
    }
}

// 3. Register the lint in `compiler/rustc_lint/src/lib.rs`:
store.register_future_incompatible(sess, vec![
    ...,
    FutureIncompatibleInfo {
        id: LintId::of(YOUR_ERROR_HERE),
        reference: "issue #1234", // your tracking issue here!
    },
]);

// 4. Report the lint:
tcx.lint_node(
    lint::builtin::YOUR_ERROR_HERE,
    path_id,
    binding.span,
    format!("some helper message here"));

Helpful techniques

It can often be challenging to filter out new warnings from older, pre-existing errors. One technique that has been used in the past is to run the older code unchanged and collect the errors it would have reported. You can then issue warnings for any errors you would give which do not appear in that original set. Another option is to abort compilation after the original code completes if errors are reported: then you know that your new code will only execute when there were no errors before.
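
As a standalone illustration of the first technique (this is not compiler code; the two checks here are trivial stand-ins), the idea is to remember everything the old logic rejected and only warn for diagnostics that are new:

use std::collections::HashSet;

// Stand-in for the unchanged, older check.
fn old_check(input: &str) -> Vec<String> {
    input.lines().filter(|l| l.contains("old-bad")).map(str::to_owned).collect()
}

// Stand-in for the stricter, new check.
fn new_check(input: &str) -> Vec<String> {
    input.lines().filter(|l| l.contains("bad")).map(str::to_owned).collect()
}

fn main() {
    let input = "fine\nold-bad line\nnew bad line";
    // Run the old logic first and record everything it already rejected.
    let old_errors: HashSet<String> = old_check(input).into_iter().collect();
    for diag in new_check(input) {
        if old_errors.contains(&diag) {
            println!("error: {diag}"); // was already an error before the change
        } else {
            println!("warning (future-incompat): {diag}"); // newly rejected: warn for now
        }
    }
}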

Crater and crates.io

Crater is a bot that will compile all crates.io crates and many public GitHub repos using a compiler that includes your changes. A report is then generated showing which crates stopped (or started) compiling as a result of your changes. Crater runs can take a few days to complete.

We should always do a crater run to assess impact. It is polite and considerate to at least notify the authors of affected crates about the breaking change. If we can submit PRs to fix the problem, so much the better.

Is it ever acceptable to go directly to issuing errors?

Changes that are believed to have negligible impact can go directly to issuing an error. One rule of thumb would be to check against crates.io: if fewer than 10 total affected projects are found (not root errors), we can move straight to an error. In such cases, we should still make the "breaking change" page as before, and we should ensure that the error directs users to this page. In other words, everything should be the same except that users are getting an error, and not a warning. Moreover, we should submit PRs to the affected projects (ideally before the PR implementing the change lands in rustc).

If the impact is not believed to be negligible (e.g., more than 10 crates are affected), then warnings are required (unless the compiler team agrees to grant a special exemption in some particular case). If implementing warnings is not feasible, then we should adopt an aggressive strategy of migrating crates before we land the change so as to lower the number of affected crates. Here are some techniques for approaching this scenario:

  1. Issue warnings for subparts of the problem, and reserve the new errors for the smallest set of cases you can.
  2. Try to give a very precise error message that suggests how to fix the problem and directs users to the tracking issue.
  3. It may also make sense to layer the fix:
    • First, add warnings where possible and let those land before proceeding to issue errors.
    • Work with authors of affected crates to ensure that corrected versions are available before the fix lands, so that downstream users can use them.

Stabilization

After a change is made, we will stabilize the change using the same process that we use for unstable features:

  • After a new release is made, we will go through the outstanding tracking issues corresponding to breaking changes and nominate some of them for final comment period (FCP).

  • The FCP for such issues lasts for one cycle. In the final week or two of the cycle, we will review comments and make a final determination:

    • Convert to error: the change should be made into a hard error.
    • Revert: we should remove the warning and continue to allow the older code to compile.
    • Defer: can't decide yet, wait longer, or try other strategies.

Ideally, breaking changes should have landed on the stable branch of the compiler before they are finalized.

Removing a lint

Once we have decided to make a "future warning" into a hard error, we need a PR that removes the custom lint. As an example, here are the steps required to remove the overlapping_inherent_impls compatibility lint. First, convert the name of the lint to uppercase (OVERLAPPING_INHERENT_IMPLS) and ripgrep through the source for that string. We will basically be converting each place where this lint name is mentioned (in the compiler, we use the upper-case name, and a macro automatically generates the lower-case string; so searching for overlapping_inherent_impls would not find much).

NOTE: these exact files don't exist anymore, but the procedure is still the same.

Remove the lint.

The first reference you will likely find is the lint definition in rustc_session/src/lint/builtin.rs that resembles this:

declare_lint! {
    pub OVERLAPPING_INHERENT_IMPLS,
    Deny, // this may also say Warning
    "two overlapping inherent impls define an item with the same name were erroneously allowed"
}

This declare_lint! macro creates the relevant data structures. Remove it. You will also find that there is a mention of OVERLAPPING_INHERENT_IMPLS later in the file as part of a lint_array!; remove it too.

Next, you see a reference to OVERLAPPING_INHERENT_IMPLS in rustc_lint/src/lib.rs. This is defining the lint as a "future compatibility lint":

FutureIncompatibleInfo {
    id: LintId::of(OVERLAPPING_INHERENT_IMPLS),
    reference: "issue #36889 <https://github.com/rust-lang/rust/issues/36889>",
},

Remove this too.

Add the lint to the list of removed lints.

In compiler/rustc_lint/src/lib.rs there is a list of "renamed and removed lints". You can add this lint to the list:

store.register_removed("overlapping_inherent_impls", "converted into hard error, see #36889");

where #36889 is the tracking issue for your lint.

Update the places that issue the lint

Finally, the last class of references you will see are the places that actually trigger the lint itself (i.e., what causes the warnings to appear). These you do not want to delete. Instead, you want to convert them into errors. In this case, the add_lint call looks like this:

self.tcx.sess.add_lint(lint::builtin::OVERLAPPING_INHERENT_IMPLS,
                       node_id,
                       self.tcx.span_of_impl(item1).unwrap(),
                       msg);

We want to convert this into an error. In some cases, there may be an existing error for this scenario. In others, we will need to allocate a fresh diagnostic code. Instructions for allocating a fresh diagnostic code can be found here. You may want to mention in the extended description that the compiler behavior changed on this point, and include a reference to the tracking issue for the change.

Let's say that we've adopted E0592 as our code. Then we can change the add_lint() call above to something like:

struct_span_code_err!(self.dcx(), self.tcx.span_of_impl(item1).unwrap(), E0592, msg)
    .emit();

Update tests

Finally, run the test suite. There should be some tests that used to reference the overlapping_inherent_impls lint; those will need to be updated. In general, if a test used to have #[deny(overlapping_inherent_impls)], that can just be removed.

./x test

All done!

Open a PR. =)

Using External Repositories

The rust-lang/rust git repository depends on several other repos in the rust-lang organization. There are three main ways we use dependencies:

  1. As a Cargo dependency through crates.io (e.g. rustc-rayon)
  2. As a git subtree (e.g. clippy)
  3. As a git submodule (e.g. cargo)

As a general rule, use crates.io for libraries that could be useful for others in the ecosystem; use subtrees for tools that depend on compiler internals and need to be updated if there are breaking changes; and use submodules for tools that are independent of the compiler.

External Dependencies (subtree)

As a contributor to this repository, you don't have to treat the subtree dependencies any differently from other crates that live directly in this repo.

In contrast to submodule dependencies (see below for those), the subtree dependencies are just regular files and directories which can be updated in tree. However, if possible, enhancements, bug fixes, etc. specific to these tools should be filed against the tools directly in their respective upstream repositories. The exception is that when rustc changes are required to implement a new tool feature or test, that should happen in one collective rustc PR.

Synchronizing a subtree

Periodically the changes made to subtree based dependencies need to be synchronized between this repository and the upstream tool repositories.

Subtree synchronizations are typically handled by the respective tool maintainers. Other users are welcome to submit synchronization PRs, however, in order to do so you will need to modify your local git installation and follow a very precise set of instructions. These instructions are documented, along with several useful tips and tricks, in the syncing subtree changes section in Clippy's Contributing guide. The instructions are applicable for use with any subtree based tool, just be sure to use the correct corresponding subtree directory and remote repository.

The synchronization process goes in two directions: subtree push and subtree pull.

A subtree push takes all the changes that happened to the copy in this repo and creates commits on the remote repo that match the local changes. Every local commit that touched the subtree causes a commit on the remote repo, but is modified to move the files from the specified directory to the tool repo root.

A subtree pull takes all changes since the last subtree pull from the tool repo and adds these commits to the rustc repo along with a merge commit that moves the tool changes into the specified directory in the Rust repository.

It is recommended that you always do a push first and get that merged to the tool master branch. Then, when you do a pull, the merge works without conflicts. While it's definitely possible to resolve conflicts during a pull, you may have to redo the conflict resolution if your PR doesn't get merged fast enough and there are new conflicts. Do not try to rebase the result of a git subtree pull; rebasing merge commits is a bad idea in general.

You always need to specify the -P prefix to the subtree directory and the corresponding remote repository. If you specify the wrong directory or repository you'll get very fun merges that try to push the wrong directory to the wrong remote repository. Luckily you can just abort this without any consequences by throwing away either the pulled commits in rustc or the pushed branch on the remote and try again. It is usually fairly obvious that this is happening because you suddenly get thousands of commits that want to be synchronized.

Creating a new subtree dependency

If you want to create a new subtree dependency from an existing repository, call (from this repository's root directory!)

git subtree add -P src/tools/clippy https://github.com/rust-lang/rust-clippy.git master

This will create a new commit, which you may not rebase under any circumstances! Delete the commit and redo the operation if you need to rebase.

Now you're done, the src/tools/clippy directory behaves as if Clippy were part of the rustc monorepo, so no one but you (or others that synchronize subtrees) actually needs to use git subtree.

External Dependencies (submodules)

Building Rust will also use external git repositories tracked using git submodules. The complete list may be found in the .gitmodules file. Some of these projects are required (like stdarch for the standard library) and some of them are optional (like src/doc/book).

Usage of submodules is discussed more in the Using Git chapter.

Some of the submodules are allowed to be in a "broken" state where they either don't build or their tests don't pass, e.g. the documentation books like The Rust Reference. Maintainers of these projects will be notified when the project is in a broken state, and they should fix them as soon as possible. The current status is tracked on the toolstate website. More information may be found on the Forge Toolstate chapter. In practice, it is very rare for documentation to have broken toolstate.

Breakage is not allowed in the beta and stable channels, and must be addressed before the PR is merged. Submodules are also not allowed to be broken on master in the week leading up to the beta cut.

Fuzzing

For the purposes of this guide, fuzzing is any testing methodology that involves compiling a wide variety of programs in an attempt to uncover bugs in rustc. Fuzzing is often used to find internal compiler errors (ICEs). Fuzzing can be beneficial, because it can find bugs before users run into them and provide small, self-contained programs that make the bug easier to track down. However, some common mistakes can reduce the helpfulness of fuzzing and end up making contributors' lives harder. To maximize your positive impact on the Rust project, please read this guide before reporting fuzzer-generated bugs!

Guidelines

In a nutshell

Please do:

  • Ensure the bug is still present on the latest nightly rustc
  • Include a reasonably minimal, standalone example along with any bug report
  • Include all of the information requested in the bug report template
  • Search for existing reports with the same message and query stack
  • Format the test case with rustfmt, if it maintains the bug
  • Indicate that the bug was found by fuzzing

Please don't:

  • Don't report lots of bugs that use internal features, including but not limited to custom_mir, lang_items, no_core, and rustc_attrs.
  • Don't seed your fuzzer with inputs that are known to crash rustc (details below).

Discussion

If you're not sure whether or not an ICE is a duplicate of one that's already been reported, please go ahead and report it and link to issues you think might be related. In general, ICEs on the same line but with different query stacks are usually distinct bugs. For example, #109020 and #109129 had similar error messages:

error: internal compiler error: compiler/rustc_middle/src/ty/normalize_erasing_regions.rs:195:90: Failed to normalize <[closure@src/main.rs:36:25: 36:28] as std::ops::FnOnce<(Emplacable<()>,)>>::Output, maybe try to call `try_normalize_erasing_regions` instead
error: internal compiler error: compiler/rustc_middle/src/ty/normalize_erasing_regions.rs:195:90: Failed to normalize <() as Project>::Assoc, maybe try to call `try_normalize_erasing_regions` instead

but different query stacks:

query stack during panic:
#0 [fn_abi_of_instance] computing call ABI of `<[closure@src/main.rs:36:25: 36:28] as core::ops::function::FnOnce<(Emplacable<()>,)>>::call_once - shim(vtable)`
end of query stack
query stack during panic:
#0 [check_mod_attrs] checking attributes in top-level module
#1 [analysis] running analysis passes on this crate
end of query stack

Building a corpus

When building a corpus, be sure to avoid collecting tests that are already known to crash rustc. A fuzzer that is seeded with such tests is more likely to generate bugs with the same root cause, wasting everyone's time. The simplest way to avoid this is to loop over each file in the corpus, see if it causes an ICE, and remove it if so.
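
For example, a rough pruning pass might look like the following sketch (the corpus path, the edition, the flags, and the reliance on the "internal compiler error" marker in stderr are all assumptions; adjust them to your setup):

use std::{env, fs, path::Path, process::Command};

// Toy corpus pruning: compile each candidate file and drop it from the
// corpus if it already triggers an internal compiler error.
fn prune_corpus(dir: &Path) -> std::io::Result<()> {
    for entry in fs::read_dir(dir)? {
        let path = entry?.path();
        if path.extension().map_or(true, |ext| ext != "rs") {
            continue;
        }
        let output = Command::new("rustc")
            .arg("--edition=2021")
            .arg("--emit=metadata") // skip codegen; many ICEs still reproduce
            .arg("--out-dir")
            .arg(env::temp_dir())
            .arg(&path)
            .output()?;
        let stderr = String::from_utf8_lossy(&output.stderr);
        if stderr.contains("internal compiler error") {
            // Known crasher: seeding the fuzzer with it would only produce
            // duplicates of an already-reported bug.
            fs::remove_file(&path)?;
        }
    }
    Ok(())
}

fn main() {
    prune_corpus(Path::new("corpus")).expect("failed to prune corpus");
}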

To build a corpus, you may want to use:

  • The rustc/rust-analyzer/clippy test suites (or even source code) --- though avoid tests that are already known to cause failures, which often begin with comments like // failure-status: 101 or // known-bug: #NNN.
  • The already-fixed ICEs in Glacier --- though avoid the unfixed ones in ices/!

Extra credit

Here are a few things you can do to help the Rust project after filing an ICE.

  • Bisect the bug to figure out when it was introduced
  • Fix "distractions": problems with the test case that don't contribute to triggering the ICE, such as syntax errors or borrow-checking errors
  • Minimize the test case (see below)
  • Add the minimal test case to Glacier

Minimization

It is helpful to carefully minimize the fuzzer-generated input. When minimizing, be careful to preserve the original error, and avoid introducing distracting problems such as syntax, type-checking, or borrow-checking errors.

There are some tools that can help with minimization. If you're not sure how to avoid introducing syntax, type-, and borrow-checking errors while using these tools, post both the complete and minimized test cases. Generally, syntax-aware tools give the best results in the least amount of time. treereduce-rust and picireny are syntax-aware. halfempty is not, but is generally a high-quality tool.

Effective fuzzing

When fuzzing rustc, you may want to avoid generating machine code, since this is mostly done by LLVM. Try --emit=mir instead.

A variety of compiler flags can uncover different issues. -Zmir-opt-level=4 will turn on MIR optimization passes that are not run by default, potentially uncovering interesting bugs. -Zvalidate-mir can help uncover such bugs.

If you're fuzzing a compiler you built, you may want to build it with -C target-cpu=native or even PGO/BOLT to squeeze out a few more executions per second. Of course, it's best to try multiple build configurations and see what actually results in superior throughput.

You may want to build rustc from source with debug assertions to find additional bugs, though this is a trade-off: it can slow down fuzzing by requiring extra work for every execution. To enable debug assertions, add this to config.toml when compiling rustc:

[rust]
debug-assertions = true

ICEs that require debug assertions to reproduce should be tagged requires-debug-assertions.

Existing projects

  • fuzz-rustc demonstrates how to fuzz rustc with libfuzzer
  • icemaker runs rustc and other tools on a large number of source files with a variety of flags to catch ICEs
  • tree-splicer generates new source files by combining existing ones while maintaining correct syntax

Notification groups

The notification groups are an easy way to help out with rustc in a "piece-meal" fashion, without committing to a larger project. Notification groups are easy to join (just submit a PR!) and joining does not entail any particular commitment.

Once you join a notification group, you will be added to a list that receives pings on github whenever a new issue is found that fits the notification group's criteria. If you are interested, you can then claim the issue and start working on it.

Of course, you don't have to wait for new issues to be tagged! If you prefer, you can use the Github label for a notification group to search for existing issues that haven't been claimed yet.

List of notification groups

Here's the list of the notification groups:

  • Apple
  • ARM
  • Cleanup Crew
  • Emscripten
  • LLVM
  • RISC-V
  • WASI
  • WebAssembly (WASM)
  • Windows
  • Rust for Linux

What issues are a good fit for notification groups?

Notification groups tend to get pinged on isolated bugs, particularly those of middle priority:

  • By isolated, we mean that we do not expect large-scale refactoring to be required to fix the bug.
  • By middle priority, we mean that we'd like to see the bug fixed, but it's not such a burning problem that we are dropping everything else to fix it. The danger with such bugs, of course, is that they can accumulate over time, and the role of the notification group is to try and stop that from happening!

Joining a notification group

To join a notification group, you just have to open a PR adding your Github username to the appropriate file in the Rust team repository. See the "example PRs" below to get a precise idea and to identify the file to edit.

Also, if you are not already a member of a Rust team then -- in addition to adding your name to the file -- you have to check out the repository and run the following command:

cargo run add-person $your_user_name

Example PRs:

Tagging an issue for a notification group

To tag an issue as appropriate for a notification group, you give rustbot a ping command with the name of the notification group. For example:

@rustbot ping apple
@rustbot ping arm
@rustbot ping cleanup-crew
@rustbot ping emscripten
@rustbot ping llvm
@rustbot ping risc-v
@rustbot ping wasi
@rustbot ping wasm
@rustbot ping windows

To make some commands shorter and easier to remember, there are aliases, defined in the triagebot.toml file. For example, all of these commands are equivalent and will ping the Cleanup Crew:

@rustbot ping cleanup
@rustbot ping bisect
@rustbot ping reduce

Keep in mind that these aliases are meant to make life easier for humans. They might be subject to change. If you need to ensure that a command will always be valid, prefer the full invocations over the aliases.

Note though that this should only be done by compiler team members or contributors, and is typically done as part of compiler team triage.

Apple notification group

Github Labels: O-macos, O-ios, O-tvos, O-watchos and O-visionos
Ping command: @rustbot ping apple

This list will be used to ask for help both in diagnosing and testing Apple-related issues as well as suggestions on how to resolve interesting questions regarding our macOS/iOS/tvOS/watchOS/visionOS support.

To get a better idea for what the group will do, here are some examples of the kinds of questions where we would have reached out to the group for advice in determining the best course of action:

  • Raising the minimum supported versions (e.g. #104385)
  • Additional Apple targets (e.g. #121419)
  • Obscure Xcode linker details (e.g. #121430)

Deployment targets

Apple platforms have a concept of "deployment target", controlled with the *_DEPLOYMENT_TARGET environment variables, which specifies the minimum OS version that a binary can run on.

Using an API in the standard library that was introduced in a newer OS version than the default that rustc targets will result in either a static or a dynamic linker error. For this reason, try to suggest that people document on extern "C" APIs which OS version they were introduced with, and if that's newer than the current default used by rustc, suggest using weak linking.
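
As a small illustration (the function and the version numbers below are invented, not a real Apple API), such documentation might look like this next to the declaration:

extern "C" {
    // Introduced in macOS 14.0 / iOS 17.0 (illustrative values only). If this
    // is newer than rustc's default deployment target, the call needs to be
    // weakly linked rather than referenced directly.
    fn some_new_apple_api();
}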

The App Store and private APIs

Apple is very protective about the use of undocumented APIs, so whenever a change uses a new function, it is important to verify that the function is actually public API, as even just mentioning an undocumented API in the binary (without calling it) can lead to rejection from the App Store.

For example, Darwin / the XNU kernel actually has futex syscalls, but we can't use them in std because they are not public API.

In general, for an API to be considered public by Apple, it has to:

  • Appear in a public header (i.e. one distributed with Xcode, and found for the specific platform under xcrun --show-sdk-path --sdk $SDK).
  • Have an availability attribute on it (like __API_AVAILABLE, API_AVAILABLE or similar).

ARM notification group

Github Label: O-ARM
Ping command: @rustbot ping arm

This list will be used to ask for help both in diagnosing and testing ARM-related issues as well as suggestions on how to resolve interesting questions regarding our ARM support.

The group also has an associated Zulip stream (#t-compiler/arm) where people can go to pose questions and discuss ARM-specific topics.

So, if you are interested in participating, please sign up for the ARM group! To do so, open a PR against the rust-lang/team repository. Just follow this example, but change the username to your own!

Cleanup Crew

Github Label: ICEBreaker-Cleanup-Crew
Ping command: @rustbot ping cleanup-crew

The "Cleanup Crew" are focused on improving bug reports. Specifically, the goal is to try to ensure that every bug report has all the information that will be needed for someone to fix it:

  • a minimal, standalone example that shows the problem
  • links to duplicates or related bugs
  • if the bug is a regression (something that used to work, but no longer does), then a bisection to the PR or nightly that caused the regression

This kind of cleanup is invaluable in getting bugs fixed. Better still, it can be done by anybody who knows Rust, without any particularly deep knowledge of the compiler.

Let's look a bit at the workflow for doing "cleanup crew" actions.

Finding a minimal, standalone example

Here the ultimate goal is to produce an example that reproduces the same problem but without relying on any external crates. Such a test ought to contain as little code as possible, as well. This will make it much easier to isolate the problem.

However, even if the "ultimate minimal test" cannot be achieved, it's still useful to post incremental minimizations. For example, if you can eliminate some of the external dependencies, that is helpful, and so forth.

It's particularly useful to reduce to an example that works in the Rust playground, rather than requiring people to check out a cargo build.

There are many resources for how to produce minimized test cases. Here are a few:

  • The rust-reduce tool can try to reduce code automatically.
    • The C-reduce tool also works on Rust code, though it requires that you start from a single file. (A post explaining how to do it can be found here.)
  • pnkfelix's Rust Bug Minimization Patterns blog post
    • This post focuses on "heavy bore" techniques, where you are starting with a large, complex cargo project that you wish to narrow down to something standalone.

If you are on the "Cleanup Crew", you will sometimes see multiple bug reports that seem very similar. You can link one to the other just by mentioning the other bug number in a Github comment. Sometimes it is useful to close duplicate bugs. But if you do so, you should always copy any test case from the bug you are closing to the other bug that remains open, as sometimes duplicate-looking bugs will expose different facets of the same problem.

Bisecting regressions

For regressions (something that used to work, but no longer does), it is super useful if we can figure out precisely when the code stopped working. The gold standard is to be able to identify the precise PR that broke the code, so we can ping the author, but even narrowing it down to a nightly build is helpful, especially as that then gives us a range of PRs. (One other challenge is that we sometimes land "rollup" PRs, which combine multiple PRs into one.)

cargo-bisect-rustc

To help in figuring out the cause of a regression we have a tool called cargo-bisect-rustc. It will automatically download and test various builds of rustc. For recent regressions, it is even able to use the builds from our CI to track down the regression to a specific PR; for older regressions, it will simply identify a nightly.

To learn to use cargo-bisect-rustc, check out this blog post, which gives a quick introduction to how it works. Additionally, there is a Guide which goes into more detail on how to use it. You can also ask questions at the Zulip stream #t-compiler/cargo-bisect-rustc, or help in improving the tool.

Emscripten notification group

Github Label: O-emscripten
Ping command: @rustbot ping emscripten

This list will be used to ask for help both in diagnosing and testing Emscripten-related issues as well as suggestions on how to resolve interesting questions regarding our Emscripten support.

The group also has an associated Zulip stream (#t-compiler/wasm) where people can go to pose questions and discuss Emscripten-specific topics.

So, if you are interested in participating, please sign up for the Emscripten group! To do so, open a PR against the rust-lang/team repository. Just follow this example, but change the username to your own!

LLVM Notification group

Github Label: A-LLVM
Ping command: @rustbot ping llvm

The "LLVM Notification Group" are focused on bugs that center around LLVM. These bugs often arise because of LLVM optimizations gone awry, or as the result of an LLVM upgrade. The goal here is:

  • to determine whether the bug is a result of us generating invalid LLVM IR, or LLVM misoptimizing;
  • if the former, to fix our IR;
  • if the latter, to try and file a bug on LLVM (or identify an existing bug).

The group may also be asked to weigh in on other sorts of LLVM-focused questions.

Helpful tips and options

The "Debugging LLVM" section of the rustc-dev-guide gives a step-by-step process for how to help debug bugs caused by LLVM. In particular, it discusses how to emit LLVM IR, run the LLVM IR optimization pipelines, and so forth. You may also find it useful to look at the various codegen options listed under -C help and the internal options under -Z help -- there are a number that pertain to LLVM (just search for LLVM).

If you do narrow to an LLVM bug

The "Debugging LLVM" section also describes what to do once you've identified the bug.

RISC-V notification group

Github Label: O-riscv
Ping command: @rustbot ping risc-v

This list will be used to ask for help both in diagnosing and testing RISC-V-related issues as well as suggestions on how to resolve interesting questions regarding our RISC-V support.

The group also has an associated Zulip stream (#t-compiler/risc-v) where people can go to pose questions and discuss RISC-V-specific topics.

So, if you are interested in participating, please sign up for the RISC-V group! To do so, open a PR against the rust-lang/team repository. Just follow this example, but change the username to your own!

WASI notification group

Github Label: O-wasi
Ping command: @rustbot ping wasi

This list will be used to ask for help both in diagnosing and testing WASI-related issues as well as suggestions on how to resolve interesting questions regarding our WASI support.

The group also has an associated Zulip stream (#t-compiler/wasm) where people can go to pose questions and discuss WASI-specific topics.

So, if you are interested in participating, please sign up for the WASI group! To do so, open a PR against the rust-lang/team repository. Just follow this example, but change the username to your own!

WebAssembly (WASM) notification group

Github Label: O-wasm
Ping command: @rustbot ping wasm

This list will be used to ask for help both in diagnosing and testing WebAssembly-related issues as well as suggestions on how to resolve interesting questions regarding our WASM support.

The group also has an associated Zulip stream (#t-compiler/wasm) where people can go to pose questions and discuss WASM-specific topics.

So, if you are interested in participating, please sign up for the WASM group! To do so, open a PR against the rust-lang/team repository. Just follow this example, but change the username to your own!

Windows notification group

Github Label: O-Windows
Ping command: @rustbot ping windows

This list will be used to ask for help both in diagnosing and testing Windows-related issues as well as suggestions on how to resolve interesting questions regarding our Windows support.

The group also has an associated Zulip stream (#t-compiler/windows) where people can go to pose questions and discuss Windows-specific topics.

To get a better idea for what the group will do, here are some examples of the kinds of questions where we would have reached out to the group for advice in determining the best course of action:

  • Which versions of MinGW should we support?
  • Should we remove the legacy InnoSetup GUI installer? #72569
  • What names should we use for static libraries on Windows? #29520

So, if you are interested in participating, please sign up for the Windows group! To do so, open a PR against the rust-lang/team repository. Just follow this example, but change the username to your own!

Rust for Linux notification group

Github Label: O-rfl
Ping command: @rustbot ping rfl

This list will be used to notify Rust for Linux (RfL) maintainers when the compiler or the standard library changes in a way that would break Rust for Linux, since it depends on several unstable flags and features. The RfL maintainers should then ideally provide support for resolving the breakage or decide to temporarily accept the breakage and unblock CI by temporarily removing the RfL CI jobs.

The group also has an associated Zulip stream (#rust-for-linux) where people can go to ask questions and discuss topics related to Rust for Linux.

If you are interested in participating, please sign up for the Rust for Linux group on Zulip!

rust-lang/rust Licenses

The rustc compiler source and standard library are dual licensed under the Apache License v2.0 and the MIT License unless otherwise specified.

Detailed licensing information is available in the COPYRIGHT document of the rust-lang/rust repository.

Guidelines for reviewers

In general, reviewers need to be looking not only for the code quality of contributions but also that they are properly licensed. We have some tips below for things to look out for when reviewing, but if you ever feel uncertain as to whether some code might be properly licensed, err on the safe side — reach out to the Council or Compiler Team Leads for feedback!

Things to watch out for:

  • The PR author states that they copied, ported, or adapted the code from some other source.
  • There is a comment in the code pointing to a webpage or describing where the algorithm was taken from.
  • The algorithm or code pattern seems like it was likely copied from somewhere else.
  • When adding new dependencies, double check the dependency's license.

In all of these cases, we will want to check that source to make sure it is licensed in a way that is compatible with Rust’s license.

Examples

  • Porting C code from a GPL project, like GNU binutils, is not allowed. That would require Rust itself to be licensed under the GPL.
  • Copying code from an algorithms text book may be allowed, but some algorithms are patented.

Porting

Contributions to rustc, especially around platform and compiler intrinsics, often include porting over work from other projects, mainly LLVM and GCC.

Some general rules apply:

  • Copying work needs to adhere to the original license
    • This applies to direct copy & paste
    • This also applies to code you looked at and ported

In general, taking inspiration from other codebases is fine, but please exercise caution when porting code.

Ports of full libraries (e.g. C libraries shipped with LLVM) must keep the license of the original library.

Editions

This chapter gives an overview of how Edition support works in rustc. This assumes that you are familiar with what Editions are (see the Edition Guide).

Edition definition

The --edition CLI flag specifies the edition to use for a crate. This can be accessed from Session::edition. There are convenience functions like Session::at_least_rust_2021 for checking the crate's edition, though you should be careful about whether you check the global session or the span; see Edition hygiene below.

As an alternative to the at_least_rust_20xx convenience methods, the Edition type also supports comparisons for doing range checks, such as span.edition() >= Edition::Edition2021.

Adding a new edition

Adding a new edition mainly involves adding a variant to the Edition enum and then fixing everything that is broken. See #94461 for an example.

Features and Edition stability

The Edition enum defines whether or not an edition is stable. If it is not stable, then the -Zunstable-options CLI option must be passed to enable it.

When adding a new feature, there are two options you can choose for how to handle stability with a future edition:

  • Just check the edition of the span like span.at_least_rust_20xx() (see Edition hygiene) or the Session::edition. This will implicitly depend on the stability of the edition itself to indicate that your feature is available.
  • Place your new behavior behind a feature gate.

It may be sufficient to only check the current edition for relatively simple changes. However, for larger language changes, you should consider creating a feature gate. There are several benefits to using a feature gate:

  • A feature gate makes it easier to work on and experiment with a new feature.
  • It makes the intent clear when the #![feature(…)] attribute is used that your new feature is being enabled.
  • It makes testing of editions easier so that features that are not yet complete do not interfere with testing of edition-specific features that are complete and ready.
  • It decouples the feature from an edition, which makes it easier for the team to make a deliberate decision of whether or not a feature should be added to the next edition when the feature is ready.

When a feature is complete and ready, the feature gate can be removed (and the code should just check the span or Session edition to determine if it is enabled).

There are a few different options for doing feature checks:

  • For highly experimental features, that may or may not be involved in an edition, they can implement regular feature gates like tcx.features().my_feature, and ignore editions for the time being.

  • For experimental features that might be involved in an edition, they should implement gates with tcx.features().my_feature && span.at_least_rust_20xx(). This requires the user to still specify #![feature(my_feature)], to avoid disrupting testing of other edition features which are ready and have been accepted within the edition.

  • For experimental features that have graduated to definitely be part of an edition, they should implement gates with tcx.features().my_feature || span.at_least_rust_20xx(), or just remove the feature check altogether and just check span.at_least_rust_20xx().

If you need to do the feature gating in multiple places, consider placing the check in a single function so that there will only be a single place to update. For example:

// An example from Edition 2021 disjoint closure captures.

fn enable_precise_capture(tcx: TyCtxt<'_>, span: Span) -> bool {
    tcx.features().capture_disjoint_fields || span.rust_2021()
}

See Lints and stability below for more information about how lints handle stability.

Edition parsing

For the most part, the lexer is edition-agnostic. Within StringReader, tokens can be modified based on edition-specific behavior. For example, C-String literals like c"foo" are split into multiple tokens in editions before 2021. This is also where things like reserved prefixes are handled for the 2021 edition.

Edition-specific parsing is relatively rare. One example is async fn which checks the span of the token to determine if it is the 2015 edition, and emits an error in that case. This can only be done if the syntax was already invalid.

If you need to do edition checking in the parser, you will normally want to look at the edition of the token, see Edition hygiene. In some rare cases you may instead need to check the global edition from ParseSess::edition.

Most edition-specific parsing behavior is handled with migration lints instead of in the parser. This is appropriate when there is a change in syntax (as opposed to new syntax). This allows the old syntax to continue to work on previous editions. The lint then checks for the change in behavior. On older editions, the lint pass should emit the migration lint to help with migrating to new editions. On newer editions, your code should emit a hard error with emit_err instead. For example, the deprecated start...end pattern syntax emits the ellipsis_inclusive_range_patterns lint on editions before 2021, and in 2021 is a hard error via the emit_err method.

Keywords

New keywords can be introduced across an edition boundary. This is implemented by functions like Symbol::is_used_keyword_conditional, which rely on the ordering of how the keywords are defined.

When new keywords are introduced, the keyword_idents lint should be updated so that automatic migrations can transition code that might be using the keyword as an identifier (see KeywordIdents). An alternative to consider is to implement the keyword as a weak keyword if the position it is used is sufficient to distinguish it.

An additional option to consider is the k# prefix which was introduced in RFC 3101. This allows the use of a keyword in editions before the edition where the keyword is introduced. This is currently not implemented.

Edition hygiene

Spans are marked with the edition of the crate that the span came from. See Macro hygiene in the Edition Guide for a user-centric description of what this means.

You should normally use the edition from the token span instead of looking at the global Session edition. For example, use span.edition().at_least_rust_2021() instead of sess.at_least_rust_2021(). This helps ensure that macros behave correctly when used across crates.
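
As a small, hypothetical sketch (the surrounding function is invented; it assumes rustc_span::Span is in scope):

fn uses_rust_2021_behavior(span: Span) -> bool {
    // The edition recorded on the span respects macro hygiene, so a token
    // produced by a 2015-edition macro keeps 2015 behavior even when the
    // invoking crate is on edition 2021.
    span.edition().at_least_rust_2021()
}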

Lints

Lints support a few different options for interacting with editions. Lints can be future incompatible edition migration lints, which are used to support migrations to newer editions. Alternatively, lints can be edition-specific, where they change their default level starting in a specific edition.

Migration lints

Migration lints are used to migrate projects from one edition to the next. They are implemented with a MachineApplicable suggestion which will rewrite code so that it will successfully compile in both the previous and the next edition. For example, the keyword_idents lint will rewrite identifiers that conflict with a new keyword to use the raw identifier syntax, which avoids the conflict (for example, changing async to r#async).

Migration lints must be declared with the FutureIncompatibilityReason::EditionError or FutureIncompatibilityReason::EditionSemanticsChange future-incompatible option in the lint declaration:

declare_lint! {
    pub KEYWORD_IDENTS,
    Allow,
    "detects edition keywords being used as an identifier",
    @future_incompatible = FutureIncompatibleInfo {
        reason: FutureIncompatibilityReason::EditionError(Edition::Edition2018),
        reference: "issue #49716 <https://github.com/rust-lang/rust/issues/49716>",
    };
}

When declared like this, the lint is automatically added to the appropriate rust-20xx-compatibility lint group. When a user runs cargo fix --edition, cargo will pass the --force-warn rust-20xx-compatibility flag to force all of these lints to appear during the edition migration. Cargo also passes --cap-lints=allow so that no other lints interfere with the edition migration.

Migration lints can be either Allow or Warn by default. If it is Allow, users usually won't see this warning unless they are doing an edition migration manually or there is a problem during the migration. Most migration lints are Allow.

If it is Warn by default, users on all editions will see this warning. Only use Warn if you think it is important for everyone to be aware of the change, and to encourage people to update their code on all editions. Beware that a new warn-by-default lint that hits many projects can be very disruptive and frustrating for users. You may consider switching an Allow to Warn several years after the edition stabilizes. This will only show up for the relatively small number of stragglers who have not updated to the new edition.

Edition-specific lints

Lints can be marked so that they have a different level starting in a specific edition. In the lint declaration, use the @edition marker:

declare_lint! {
    pub SOME_LINT_NAME,
    Allow,
    "my lint description",
    @edition Edition2024 => Warn;
}

Here, SOME_LINT_NAME defaults to Allow on all editions before 2024, and then becomes Warn afterwards.

This should generally be used sparingly, as there are other options:

  • Small impact stylistic changes unrelated to an edition can just make the lint Warn on all editions. If you want people to adopt a different way to write things, then go ahead and commit to having it show up for all projects.

    Beware that if a new warn-by-default lint hits many projects, it can be very disruptive and frustrating for users.

  • Change the new style to be a hard error in the new edition, and use a migration lint to automatically convert projects to the new style. For example, ellipsis_inclusive_range_patterns is a hard error in 2021, and warns in all previous editions.

    Beware that these cannot be added after the edition stabilizes.

  • Migration lints can also change over time. For example, the migration lint can start out as Allow by default. For people performing the migration, they will automatically get updated to the new code. Then, after some years, the lint can be made to Warn in previous editions.

For example, anonymous_parameters was a 2018 Edition migration lint (and a hard-error in 2018) that was Allow by default in previous editions. Then, three years later, it was changed to Warn for all previous editions, so that all users got a warning that the style was being phased out. If this had been a warning from the start, it would have impacted many projects and been very disruptive. By making it part of the edition, most users eventually updated to the new edition and were handled by the migration. Switching to Warn only impacted a few stragglers who did not update.

Lints and stability

Lints can be marked as being unstable, which can be helpful when developing a new edition feature, and you want to test out a migration lint. The feature gate can be specified in the lint's declaration like this:

declare_lint! {
    pub SOME_LINT_NAME,
    Allow,
    "my cool lint",
    @feature_gate = sym::my_feature_name;
}

Then, the lint will only fire if the user has the appropriate #![feature(my_feature_name)]. Just beware that when it comes time to do crater runs testing the migration, the feature gate will need to be removed.

Alternatively, you can implement an allow-by-default migration lint for an upcoming unstable edition without a feature gate. Although users may technically be able to enable the lint before the edition is stabilized, most will not notice the new lint exists, and it should not disrupt anything or cause any breakage.

Idiom lints

In the 2018 edition, there was a concept of "idiom lints" under the rust-2018-idioms lint group. The concept was to have new idiomatic styles under a different lint group separate from the forced migrations under the rust-2018-compatibility lint group, giving some flexibility as to how people opt-in to certain edition changes.

Overall this approach did not seem to work very well, and it is unlikely that we will use the idiom groups in the future.

Standard library changes

Preludes

Each edition comes with a specific prelude of the standard library. These are implemented as regular modules in core::prelude and std::prelude. New items can be added to the prelude, just beware that this can conflict with users' pre-existing code. Usually a migration lint should be used to migrate existing code to avoid the conflict. For example, rust_2021_prelude_collisions is used to handle the collisions with the new traits in 2021.

Customized language behavior

Usually it is not possible to make breaking changes to the standard library. In some rare cases, the teams may decide that the behavior change is important enough to break this rule. The downside is that this requires special handling in the compiler to be able to distinguish when the old and new signatures or behaviors should be used.

One example is the change in method resolution for into_iter() of arrays. This was implemented with the #[rustc_skip_array_during_method_dispatch] attribute on the IntoIterator trait which then tells the compiler to consider an alternate trait resolution choice based on the edition.

Another example is the panic! macro changes. This required defining multiple panic macros, and having the built-in panic macro implementation determine the appropriate way to expand it. This also included the non_fmt_panics migration lint to adjust old code to the new form, which required the rustc_diagnostic_item attribute to detect the usage of the panic macro.

In general it is recommended to avoid these special cases except for very high value situations.

Bootstrapping the compiler

In this section, we give a high-level overview of what Bootstrap does, followed by a high-level introduction to how Bootstrap does it.

What Bootstrapping does

Bootstrapping is the process of using a compiler to compile itself. More accurately, it means using an older compiler to compile a newer version of the same compiler.

This raises a chicken-and-egg paradox: where did the first compiler come from? It must have been written in a different language. In Rust's case it was written in OCaml. However it was abandoned long ago and the only way to build a modern version of rustc is a slightly less modern version.

This is exactly how ./x.py works: it downloads the current beta release of rustc, then uses it to compile the new compiler.

Note that this documentation mostly covers user-facing information. See bootstrap/README.md to read about bootstrap internals.

Stages of bootstrapping

Overview

  • Stage 0: the pre-compiled compiler
  • Stage 1: from current code, by an earlier compiler
  • Stage 2: the truly current compiler
  • Stage 3: the same-result test

Compiling rustc is done in stages. Here's a diagram, adapted from Jynn Nelson's talk on bootstrapping at RustConf 2022, with detailed explanations below.

The A, B, C, and D show the ordering of the stages of bootstrapping. Blue nodes are downloaded, yellow nodes are built with the stage0 compiler, and green nodes are built with the stage1 compiler.

graph TD
    s0c["stage0 compiler (1.63)"]:::downloaded -->|A| s0l("stage0 std (1.64)"):::with-s0c;
    s0c & s0l --- stepb[ ]:::empty;
    stepb -->|B| s0ca["stage0 compiler artifacts (1.64)"]:::with-s0c;
    s0ca -->|copy| s1c["stage1 compiler (1.64)"]:::with-s0c;
    s1c -->|C| s1l("stage1 std (1.64)"):::with-s1c;
    s1c & s1l --- stepd[ ]:::empty;
    stepd -->|D| s1ca["stage1 compiler artifacts (1.64)"]:::with-s1c;
    s1ca -->|copy| s2c["stage2 compiler"]:::with-s1c;

    classDef empty width:0px,height:0px;
    classDef downloaded fill: lightblue;
    classDef with-s0c fill: yellow;
    classDef with-s1c fill: lightgreen;

Stage 0: the pre-compiled compiler

The stage0 compiler is usually the current beta rustc compiler and its associated dynamic libraries, which ./x.py will download for you. (You can also configure ./x.py to use something else.)

The stage0 compiler is then used only to compile src/bootstrap, library/std, and compiler/rustc. When assembling the libraries and binaries that will become the stage1 rustc compiler, the freshly compiled std and rustc are used. There are two concepts at play here: a compiler (with its set of dependencies) and its 'target' or 'object' libraries (std and rustc). Both are staged, but in a staggered manner.

Stage 1: from current code, by an earlier compiler

The rustc source code is then compiled with the stage0 compiler to produce the stage1 compiler.

Stage 2: the truly current compiler

We then rebuild our stage1 compiler with itself to produce the stage2 compiler.

In theory, the stage1 compiler is functionally identical to the stage2 compiler, but in practice there are subtle differences. In particular, the stage1 compiler itself was built by stage0 and hence not by the source in your working directory. This means that the ABI generated by the stage0 compiler may not match the ABI that would have been made by the stage1 compiler, which can cause problems for dynamic libraries, tests, and tools using rustc_private.

Note that the proc_macro crate avoids this issue with a C FFI layer called proc_macro::bridge, allowing it to be used with stage1.

The stage2 compiler is the one distributed with rustup and all other install methods. However, it takes a very long time to build because one must first build the new compiler with an older compiler and then use that to build the new compiler with itself. For development, you usually only want the stage1 compiler, which you can build with ./x build library. See Building the compiler.

Stage 3: the same-result test

Stage 3 is optional. To sanity check our new compiler we can build the libraries with the stage2 compiler. The result ought to be identical to before, unless something has broken.

Building the stages

The script ./x tries to be helpful and pick the stage you most likely meant for each subcommand. These defaults are as follows:

  • check: --stage 0
  • doc: --stage 0
  • build: --stage 1
  • test: --stage 1
  • dist: --stage 2
  • install: --stage 2
  • bench: --stage 2

You can always override the stage by passing --stage N explicitly.

For more information about stages, see below.

Complications of bootstrapping

Since the build system uses the current beta compiler to build a stage1 bootstrapping compiler, the compiler source code can't use some features until they reach beta (because otherwise the beta compiler doesn't support them). On the other hand, for compiler intrinsics and internal features, the features have to be used. Additionally, the compiler makes heavy use of nightly features (#![feature(...)]). How can we resolve this problem?

There are two methods used:

  1. The build system sets --cfg bootstrap when building with stage0, so we can use cfg(not(bootstrap)) to only use features when built with stage1. This is used for features that were just stabilized, which require #![feature(...)] when built with stage0, but not with stage1 (see the sketch after this list).
  2. The build system sets RUSTC_BOOTSTRAP=1. This special variable means to break the stability guarantees of Rust: allowing use of #![feature(...)] with a compiler that's not nightly. Setting RUSTC_BOOTSTRAP=1 should never be used except when bootstrapping the compiler.
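
To make the first method concrete, here is a hedged sketch of how cfg(bootstrap) gating looks in practice; the feature name and function names below are invented purely for illustration and are not taken from the compiler sources.

// Only enable the feature gate when building with the stage0 (beta)
// compiler, which does not yet consider the feature stable.
#![cfg_attr(bootstrap, feature(some_recent_feature))]

// Items can also be swapped out entirely between the two builds:
#[cfg(bootstrap)]
fn stage0_only_workaround() {}

#[cfg(not(bootstrap))]
fn current_implementation() {}

fn main() {
    #[cfg(bootstrap)]
    stage0_only_workaround();
    #[cfg(not(bootstrap))]
    current_implementation();
}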

Understanding stages of bootstrap

Overview

This is a detailed look into the separate bootstrap stages.

The convention ./x uses is that:

  • A --stage N flag means to run the stage N compiler (stageN/rustc).
  • A "stage N artifact" is a build artifact that is produced by the stage N compiler.
  • The stage N+1 compiler is assembled from stage N artifacts. This process is called uplifting.

Build artifacts

Anything you can build with ./x is a build artifact. Build artifacts include, but are not limited to:

  • binaries, like stage0-rustc/rustc-main
  • shared objects, like stage0-sysroot/rustlib/libstd-6fae108520cf72fe.so
  • rlib files, like stage0-sysroot/rustlib/libstd-6fae108520cf72fe.rlib
  • HTML files generated by rustdoc, like doc/std

Examples

  • ./x test tests/ui means to build the stage1 compiler and run compiletest on it. If you're working on the compiler, this is normally the test command you want.
  • ./x test --stage 0 library/std means to run tests on the standard library without building rustc from source ('build with stage0, then test the artifacts'). If you're working on the standard library, this is normally the test command you want.
  • ./x build --stage 0 means to build with the beta rustc.
  • ./x doc --stage 0 means to document using the beta rustdoc.

Examples of what not to do

  • ./x test --stage 0 tests/ui is not useful: it runs tests on the beta compiler and doesn't build rustc from source. Use test tests/ui instead, which builds stage1 from source.
  • ./x test --stage 0 compiler/rustc builds the compiler but runs no tests: it's running cargo test -p rustc, but cargo doesn't understand Rust's tests. You shouldn't need to use this, use test instead (without arguments).
  • ./x build --stage 0 compiler/rustc builds the compiler, but does not build libstd or even libcore. Most of the time, you'll want ./x build library instead, which allows compiling programs without needing to define lang items.

Building vs. running

Note that build --stage N compiler/rustc does not build the stage N compiler: instead it builds the stage N+1 compiler using the stage N compiler.

In short, stage 0 uses the stage0 compiler to create stage0 artifacts which will later be uplifted to be the stage1 compiler.

In each stage, two major steps are performed:

  1. std is compiled by the stage N compiler.
  2. That std is linked to programs built by the stage N compiler, including the stage N artifacts (stage N+1 compiler).

This is somewhat intuitive if one thinks of the stage N artifacts as "just" another program we are building with the stage N compiler: build --stage N compiler/rustc is linking the stage N artifacts to the std built by the stage N compiler.

Stages and std

Note that there are two std libraries in play here:

  1. The library linked to stageN/rustc, which was built by stage N-1 (stage N-1 std)
  2. The library used to compile programs with stageN/rustc, which was built by stage N (stage N std).

Stage N std is pretty much necessary for any useful work with the stage N compiler. Without it, you can only compile programs with #![no_core] -- not terribly useful!

The reason these need to be different is because they aren't necessarily ABI-compatible: there could be new layout optimizations, changes to MIR, or other changes to Rust metadata on nightly that aren't present in beta.

This is also where --keep-stage 1 library/std comes into play. Since most changes to the compiler don't actually change the ABI, once you've produced a std in stage1, you can probably just reuse it with a different compiler. If the ABI hasn't changed, you're good to go, no need to spend time recompiling that std. The flag --keep-stage simply instructs the build script to assume the previous compile is fine, copy those artifacts into the appropriate place, and skip the cargo invocation.

Cross-compiling rustc

Cross-compiling is the process of compiling code that will run on another architecture. For instance, you might want to build an ARM version of rustc using an x86 machine. Building stage2 std is different when you are cross-compiling.

This is because ./x uses the following logic: if HOST and TARGET are the same, it will reuse stage1 std for stage2! This is sound because stage1 std was compiled with the stage1 compiler, i.e. a compiler using the source code you currently have checked out. So it should be identical (and therefore ABI-compatible) to the std that stage2/rustc would compile.

However, when cross-compiling, stage1 std will only run on the host. So the stage2 compiler has to recompile std for the target.

(See in the table how stage2 only builds non-host std targets).

Why does only libstd use cfg(bootstrap)?

For docs on cfg(bootstrap) itself, see Complications of Bootstrapping.

The rustc generated by the stage0 compiler is linked to the freshly-built std, which means that for the most part only std needs to be cfg-gated. This lets rustc use features added to std immediately after their addition, without waiting for them to reach the downloaded beta compiler.

Note this is different from any other Rust program: stage1 rustc is built by the beta compiler, but using the master version of libstd!

The only time rustc uses cfg(bootstrap) is when it adds internal lints that use diagnostic items, or when it uses unstable library features that were recently changed.

What is a 'sysroot'?

When you build a project with cargo, the build artifacts for dependencies are normally stored in target/debug/deps. This only contains dependencies cargo knows about; in particular, it doesn't have the standard library. Where do std or proc_macro come from? They come from the sysroot, the root of a number of directories where the compiler loads build artifacts at runtime. The sysroot doesn't just store the standard library, though - it includes anything that needs to be loaded at runtime. That includes (but is not limited to):

  • Libraries libstd/libtest/libproc_macro.
  • Compiler crates themselves, when using rustc_private. In-tree these are always present; out of tree, you need to install rustc-dev with rustup.
  • Shared object file libLLVM.so for the LLVM project. In-tree this is either built from source or downloaded from CI; out-of-tree, you need to install llvm-tools-preview with rustup.

All the artifacts listed so far are compiler runtime dependencies. You can see them with rustc --print sysroot:

$ ls $(rustc --print sysroot)/lib
libchalk_derive-0685d79833dc9b2b.so  libstd-25c6acf8063a3802.so
libLLVM-11-rust-1.50.0-nightly.so    libtest-57470d2aa8f7aa83.so
librustc_driver-4f0cc9f50e53f0ba.so  libtracing_attributes-e4be92c35ab2a33b.so
librustc_macros-5f0ec4a119c6ac86.so  rustlib

There are also runtime dependencies for the standard library! These are in lib/rustlib/, not lib/ directly.

$ ls $(rustc --print sysroot)/lib/rustlib/x86_64-unknown-linux-gnu/lib | head -n 5
libaddr2line-6c8e02b8fedc1e5f.rlib
libadler-9ef2480568df55af.rlib
liballoc-9c4002b5f79ba0e1.rlib
libcfg_if-512eb53291f6de7e.rlib
libcompiler_builtins-ef2408da76957905.rlib

Directory lib/rustlib/ includes libraries like hashbrown and cfg_if, which are not part of the public API of the standard library, but are used to implement it. Also lib/rustlib/ is part of the search path for linkers, but lib will never be part of the search path.

-Z force-unstable-if-unmarked

Since lib/rustlib/ is part of the search path, we have to be careful about which crates are included in it. In particular, all crates except for the standard library are built with the flag -Z force-unstable-if-unmarked, which means that you have to use #![feature(rustc_private)] in order to load them (as opposed to the standard library, which is always available).

The -Z force-unstable-if-unmarked flag has a variety of purposes to help enforce that the correct crates are marked as unstable. It was introduced primarily to allow rustc and the standard library to link to arbitrary crates on crates.io which do not themselves use staged_api. rustc also relies on this flag to mark all of its crates as unstable with the rustc_private feature so that each crate does not need to be carefully marked with unstable.

This flag is automatically applied to all of rustc and the standard library by the bootstrap scripts. This is needed because the compiler and all of its dependencies are shipped in sysroot to all users.

This flag has the following effects:

  • Marks the crate as "unstable" with the rustc_private feature if it is not itself marked as stable or unstable.
  • Allows these crates to access other forced-unstable crates without any need for attributes. Normally a crate would need a #![feature(rustc_private)] attribute to use other unstable crates. However, that would make it impossible for a crate from crates.io to access its own dependencies since that crate won't have a feature(rustc_private) attribute, but everything is compiled with -Z force-unstable-if-unmarked.

Code which does not use -Z force-unstable-if-unmarked should include the #![feature(rustc_private)] crate attribute to access these forced-unstable crates. This is needed for things which link against rustc itself, such as MIRI or clippy.
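
For illustration, a minimal out-of-tree crate that links against the compiler might start like the sketch below. This assumes a nightly toolchain with the rustc-dev component installed; the crate names are real sysroot crates, but the program itself is only a skeleton.

// Opt in to the force-unstable compiler crates in the sysroot.
#![feature(rustc_private)]

// These crates come from the sysroot (rustup component `rustc-dev`),
// not from crates.io.
extern crate rustc_driver;
extern crate rustc_middle;

fn main() {
    // Without #![feature(rustc_private)] above, the extern crate
    // declarations would fail to compile, because the sysroot copies
    // of these crates were built with -Z force-unstable-if-unmarked.
}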

Passing flags to commands invoked by bootstrap

Conveniently ./x allows you to pass stage-specific flags to rustc and cargo when bootstrapping. The RUSTFLAGS_BOOTSTRAP environment variable is passed as RUSTFLAGS to the bootstrap stage (stage0), and RUSTFLAGS_NOT_BOOTSTRAP is passed when building artifacts for later stages. RUSTFLAGS will work, but also affects the build of bootstrap itself, so it will be rare to want to use it. Finally, MAGIC_EXTRA_RUSTFLAGS bypasses the cargo cache to pass flags to rustc without recompiling all dependencies.

  • RUSTDOCFLAGS, RUSTDOCFLAGS_BOOTSTRAP and RUSTDOCFLAGS_NOT_BOOTSTRAP are analogous to RUSTFLAGS, but for rustdoc.
  • CARGOFLAGS will pass arguments to cargo itself (e.g. --timings). CARGOFLAGS_BOOTSTRAP and CARGOFLAGS_NOT_BOOTSTRAP work analogously to RUSTFLAGS_BOOTSTRAP.
  • --test-args will pass arguments through to the test runner. For tests/ui, this is compiletest. For unit tests and doc tests this is the libtest runner.

Most test runners accept --help, which you can use to find out the options accepted by the runner.

Environment Variables

During bootstrapping, there are a bunch of compiler-internal environment variables that are used. If you are trying to run an intermediate version of rustc, sometimes you may need to set some of these environment variables manually. Otherwise, you get an error like the following:

thread 'main' panicked at 'RUSTC_STAGE was not set: NotPresent', library/core/src/result.rs:1165:5

If ./stageN/bin/rustc gives an error about environment variables, that usually means something is quite wrong -- or that you're trying to compile rustc, std, or something else that depends on these environment variables. In the unlikely case that you actually need to invoke rustc in such a situation, you can tell the bootstrap shim to print all env variables by adding -vvv to your x command.

Finally, bootstrap makes use of the cc-rs crate which has its own method of configuring C compilers and C flags via environment variables.

Clarification of build command's stdout

In this section, we will look at the build command's stdout in action (this is similar to, but more detailed and complete than, the topic above). When you execute the x build --dry-run command, the build output will be something like the following:

Building stage0 library artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Copying stage0 library from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Building stage0 compiler artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Copying stage0 rustc from stage0 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Assembling stage1 compiler (x86_64-unknown-linux-gnu)
Building stage1 library artifacts (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu)
Copying stage1 library from stage1 (x86_64-unknown-linux-gnu -> x86_64-unknown-linux-gnu / x86_64-unknown-linux-gnu)
Building stage1 tool rust-analyzer-proc-macro-srv (x86_64-unknown-linux-gnu)
Building rustdoc for stage1 (x86_64-unknown-linux-gnu)

Building stage0 {std,compiler} artifacts

These steps use the provided (downloaded, usually) compiler to compile the local Rust source into libraries we can use.

Copying stage0 {std,rustc}

This copies the library and compiler artifacts from cargo into stage0-sysroot/lib/rustlib/{target-triple}/lib

Assembling stage1 compiler

This copies the libraries we built in "building stage0 ... artifacts" into the stage1 compiler's lib/ directory. These are the host libraries that the compiler itself uses to run. These aren't actually used by artifacts the new compiler generates. This step also copies the rustc and rustdoc binaries we generated into build/$HOST/stage/bin.

The stage1/bin/rustc is a fully functional compiler, but it doesn't yet have any libraries to link built binaries or libraries to. The next 3 steps will provide those libraries for it; they are mostly equivalent to constructing the stage1/bin compiler so we don't go through them individually here.

How Bootstrap does it

The core concept in Bootstrap is a build Step; Steps are chained together by Builder::ensure. Builder::ensure takes a Step as input, and runs the Step if and only if it has not already been run. Let's take a closer look at Step.

Synopsis of Step

A Step represents a granular collection of actions involved in the process of producing some artifact. It can be thought of as a rule in Makefiles. The Step trait is defined as:

pub trait Step: 'static + Clone + Debug + PartialEq + Eq + Hash {
    type Output: Clone;

    const DEFAULT: bool = false;
    const ONLY_HOSTS: bool = false;

    // Required methods
    fn run(self, builder: &Builder<'_>) -> Self::Output;
    fn should_run(run: ShouldRun<'_>) -> ShouldRun<'_>;

    // Provided method
    fn make_run(_run: RunConfig<'_>) { ... }
}
  • run is the function that is responsible for doing the work. Builder::ensure invokes run.
  • should_run is the command-line interface, which determines if an invocation such as x build foo should run a given Step. In a "default" context where no paths are provided, make_run is called directly.
  • make_run is invoked only for things directly asked via the CLI and not for steps which are dependencies of other steps.
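
To make this concrete, here is a hedged sketch of what a typical Step implementation tends to look like. The step name ExampleDocs, its field, and the path are invented; real steps live in the bootstrap sources, and the helper calls shown here are meant to suggest the shape of the API rather than reproduce it exactly (imports are omitted because the code would live inside bootstrap itself).

// A hypothetical step, for illustration only.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct ExampleDocs {
    target: TargetSelection,
}

impl Step for ExampleDocs {
    type Output = ();
    const DEFAULT: bool = false;

    // Decide whether an invocation like `x build src/doc/example`
    // should trigger this step.
    fn should_run(run: ShouldRun<'_>) -> ShouldRun<'_> {
        run.path("src/doc/example")
    }

    // Called when the step is selected on the command line; registers
    // the step with the builder so it gets run.
    fn make_run(run: RunConfig<'_>) {
        run.builder.ensure(ExampleDocs { target: run.target });
    }

    // The actual work. Other steps can be requested via builder.ensure(..),
    // which runs each of them at most once.
    fn run(self, builder: &Builder<'_>) {
        let _compiler = builder.compiler(1, self.target);
        // ... build or copy artifacts here ...
    }
}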

The entry points

There are a couple of preliminary steps before core Bootstrap code is reached:

  1. Shell script or make: ./x or ./x.ps1 or make
  2. Convenience wrapper script: x.py
  3. src/bootstrap/bootstrap.py
  4. src/bootstrap/src/bin/main.rs

See src/bootstrap/README.md for a more specific description of the implementation details.

High-Level Compiler Architecture

The remaining parts of this guide discuss how the compiler works. They go through everything from high-level structure of the compiler to how each stage of compilation works. They should be friendly to both readers interested in the end-to-end process of compilation and readers interested in learning about a specific system they wish to contribute to. If anything is unclear, feel free to file an issue on the rustc-dev-guide repo or contact the compiler team, as detailed in this chapter from Part 1.

In this part, we will look at the high-level architecture of the compiler. In particular, we will look at three overarching design choices that impact the whole compiler: the query system, incremental compilation, and interning.

Overview of the compiler

This chapter is about the overall process of compiling a program -- how everything fits together.

The Rust compiler is special in two ways: it does things to your code that other compilers don't do (e.g. borrow-checking) and it has a lot of unconventional implementation choices (e.g. queries). We will talk about these in turn in this chapter, and in the rest of the guide, we will look at the individual pieces in more detail.

What the compiler does to your code

So first, let's look at what the compiler does to your code. For now, we will avoid mentioning how the compiler implements these steps except as needed.

Invocation

Compilation begins when a user writes a Rust source program in text and invokes the rustc compiler on it. The work that the compiler needs to perform is defined by command-line options. For example, it is possible to enable nightly features (-Z flags), perform check-only builds, or emit the LLVM Intermediate Representation (LLVM-IR) rather than executable machine code. The rustc executable call may be indirect through the use of cargo.

Command line argument parsing occurs in the rustc_driver. This crate defines the compile configuration that is requested by the user and passes it to the rest of the compilation process as a rustc_interface::Config.

Lexing and parsing

The raw Rust source text is analyzed by a low-level lexer located in rustc_lexer. At this stage, the source text is turned into a stream of atomic source code units known as tokens. The lexer supports the Unicode character encoding.

The token stream passes through a higher-level lexer located in rustc_parse to prepare for the next stage of the compile process. The StringReader struct is used at this stage to perform a set of validations and turn strings into interned symbols (interning is discussed later). String interning is a way of storing only one immutable copy of each distinct string value.

The lexer has a small interface and doesn't depend directly on the diagnostic infrastructure in rustc. Instead it provides diagnostics as plain data which are emitted in rustc_parse::lexer as real diagnostics. The lexer preserves full fidelity information for both IDEs and procedural macros (sometimes referred to as "proc-macros").

The parser translates the token stream from the lexer into an Abstract Syntax Tree (AST). It uses a recursive descent (top-down) approach to syntax analysis. The crate entry points for the parser are the Parser::parse_crate_mod() and Parser::parse_mod() methods found in rustc_parse::parser::Parser. The external module parsing entry point is rustc_expand::module::parse_external_mod. And the macro-parser entry point is Parser::parse_nonterminal().

Parsing is performed with a set of parser utility methods including bump, check, eat, expect, look_ahead.

Parsing is organized by semantic construct. Separate parse_* methods can be found in the rustc_parse directory. The source file name follows the construct name; for example, the parser contains files such as expr.rs, item.rs, pat.rs, stmt.rs, and ty.rs.

This naming scheme is used across many compiler stages. You will find either a file or directory with the same name across the parsing, lowering, type checking, Typed High-level Intermediate Representation (THIR) lowering, and Mid-level Intermediate Representation (MIR) building sources.

Macro-expansion, AST-validation, name-resolution, and early linting also take place during the lexing and parsing stage.

The rustc_ast::ast::{Crate, Expr, Pat, ...} AST nodes are returned from the parser while the standard Diag API is used for error handling. Generally Rust's compiler will try to recover from errors by parsing a superset of Rust's grammar, while also emitting an error type.

AST lowering

Next the AST is converted into High-Level Intermediate Representation (HIR), a more compiler-friendly representation of the AST. This process is called "lowering" and involves a lot of desugaring (the expansion and formalizing of shortened or abbreviated syntax constructs) of things like loops and async fn.
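
As a simplified illustration of what "desugaring" means (this is not the compiler's exact HIR output, which uses internal lang items and spans), a for loop is lowered to a plain loop that drives an iterator:

fn main() {
    // Surface syntax:
    for x in 0..3 {
        println!("{x}");
    }

    // Roughly what the loop above desugars to (simplified):
    let mut iter = std::iter::IntoIterator::into_iter(0..3);
    loop {
        match std::iter::Iterator::next(&mut iter) {
            Some(x) => println!("{x}"),
            None => break,
        }
    }
}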

We then use the HIR to do type inference (the process of automatic detection of the type of an expression), trait solving (the process of pairing up an impl with each reference to a trait), and type checking. Type checking is the process of converting the types found in the HIR (hir::Ty), which represent what the user wrote, into the internal representation used by the compiler (Ty<'tcx>). It's called type checking because the information is used to verify the type safety, correctness and coherence of the types used in the program.

MIR lowering

The HIR is further lowered to MIR (used for borrow checking). This is done by first constructing the THIR (an even more desugared HIR, used for pattern and exhaustiveness checking), which is then converted into MIR.

We do many optimizations on the MIR because it is generic and that improves later code generation and compilation speed. It is easier to do some optimizations at MIR level than at LLVM-IR level. For example LLVM doesn't seem to be able to optimize the pattern the simplify_try MIR-opt looks for.

Rust code is also monomorphized during code generation, which means making copies of all the generic code with the type parameters replaced by concrete types. To do this, we need to collect a list of what concrete types to generate code for. This is called monomorphization collection and it happens at the MIR level.
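
For example, a single generic function gives rise to one copy of machine code per concrete instantiation that monomorphization collection finds:

fn main() {
    // One generic definition in the source...
    fn largest<T: PartialOrd>(a: T, b: T) -> T {
        if a > b { a } else { b }
    }

    // ...but two uses with different concrete types. Monomorphization
    // collection records both instantiations, and codegen emits separate
    // machine code for largest::<i32> and largest::<f64>.
    let a = largest(1, 2);
    let b = largest(1.5, 0.5);
    println!("{a} {b}");
}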

Code generation

We then begin what is simply called code generation or codegen. The code generation stage is when higher-level representations of source are turned into an executable binary. Since rustc uses LLVM for code generation, the first step is to convert the MIR to LLVM-IR. This is where the MIR is actually monomorphized. The LLVM-IR is passed to LLVM, which does a lot more optimizations on it, emitting machine code which is basically assembly code with additional low-level types and annotations added (e.g. an ELF object or WASM). The different libraries/binaries are then linked together to produce the final binary.

How it does it

Now that we have a high-level view of what the compiler does to your code, let's take a high-level view of how it does all that stuff. There are a lot of constraints and conflicting goals that the compiler needs to satisfy/optimize for. For example,

  • Compilation speed: how fast is it to compile a program? More/better compile-time analyses often means compilation is slower.
    • Also, we want to support incremental compilation, so we need to take that into account. How can we keep track of what work needs to be redone and what can be reused if the user modifies their program?
      • Also we can't store too much stuff in the incremental cache because it would take a long time to load from disk and it could take a lot of space on the user's system...
  • Compiler memory usage: while compiling a program, we don't want to use more memory than we need.
  • Program speed: how fast is your compiled program? More/better compile-time analyses often means the compiler can do better optimizations.
  • Program size: how large is the compiled binary? Similar to the previous point.
  • Compiler compilation speed: how long does it take to compile the compiler? This impacts contributors and compiler maintenance.
  • Implementation complexity: building a compiler is one of the hardest things a person/group can do, and Rust is not a very simple language, so how do we make the compiler's code base manageable?
  • Compiler correctness: the binaries produced by the compiler should do what the input programs say they do, and should continue to do so despite the tremendous amount of change constantly going on.
  • Integration: a number of other tools need to use the compiler in various ways (e.g. cargo, clippy, MIRI) that must be supported.
  • Compiler stability: the compiler should not crash or fail ungracefully on the stable channel.
  • Rust stability: the compiler must respect Rust's stability guarantees by not breaking programs that previously compiled despite the many changes that are always going on to its implementation.
  • Limitations of other tools: rustc uses LLVM in its backend, and LLVM has some strengths we leverage and some aspects we need to work around.

So, as you continue your journey through the rest of the guide, keep these things in mind. They will often inform decisions that we make.

Intermediate representations

As with most compilers, rustc uses some intermediate representations (IRs) to facilitate computations. In general, working directly with the source code is extremely inconvenient and error-prone. Source code is designed to be human-friendly while at the same time being unambiguous, but it's less convenient for doing something like, say, type checking.

Instead most compilers, including rustc, build some sort of IR out of the source code which is easier to analyze. rustc has a few IRs, each optimized for different purposes:

  • Token stream: the lexer produces a stream of tokens directly from the source code. This stream of tokens is easier for the parser to deal with than raw text.
  • Abstract Syntax Tree (AST): the abstract syntax tree is built from the stream of tokens produced by the lexer. It represents pretty much exactly what the user wrote. It helps to do some syntactic sanity checking (e.g. checking that a type is expected where the user wrote one).
  • High-level IR (HIR): This is a sort of desugared AST. It's still close to what the user wrote syntactically, but it includes some implicit things such as some elided lifetimes, etc. This IR is amenable to type checking.
  • Typed HIR (THIR) formerly High-level Abstract IR (HAIR): This is an intermediate between HIR and MIR. It is like the HIR but it is fully typed and a bit more desugared (e.g. method calls and implicit dereferences are made fully explicit). As a result, it is easier to lower to MIR from THIR than from HIR.
  • Middle-level IR (MIR): This IR is basically a Control-Flow Graph (CFG). A CFG is a type of diagram that shows the basic blocks of a program and how control flow can go between them. Likewise, MIR also has a bunch of basic blocks with simple typed statements inside them (e.g. assignment, simple computations, etc) and control flow edges to other basic blocks (e.g., calls, dropping values). MIR is used for borrow checking and other important dataflow-based checks, such as checking for uninitialized values. It is also used for a series of optimizations and for constant evaluation (via MIRI). Because MIR is still generic, we can do a lot of analyses here more efficiently than after monomorphization.
  • LLVM-IR: This is the standard form of all input to the LLVM compiler. LLVM-IR is a sort of typed assembly language with lots of annotations. It's a standard format that is used by all compilers that use LLVM (e.g. the clang C compiler also outputs LLVM-IR). LLVM-IR is designed to be easy for other compilers to emit and also rich enough for LLVM to run a bunch of optimizations on it.

One other thing to note is that many values in the compiler are interned. This is a performance and memory optimization in which we allocate the values in a special allocator called an arena. Then, we pass around references to the values allocated in the arena. This allows us to make sure that identical values (e.g. types in your program) are only allocated once and can be compared cheaply by comparing pointers. Many of the intermediate representations are interned.
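
Here is a self-contained sketch of the idea (it does not use the compiler's actual arena or interner types, and it leaks memory to keep the example short): identical values are stored once, and equality checks become pointer comparisons.

use std::collections::HashMap;

// A toy string interner. The real compiler interns types and other values
// in arenas tied to the 'tcx lifetime; leaking here just simplifies things.
#[derive(Default)]
struct Interner {
    map: HashMap<String, &'static str>,
}

impl Interner {
    fn intern(&mut self, s: &str) -> &'static str {
        if let Some(&interned) = self.map.get(s) {
            return interned;
        }
        let leaked: &'static str = Box::leak(s.to_owned().into_boxed_str());
        self.map.insert(s.to_owned(), leaked);
        leaked
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("i32");
    let b = interner.intern("i32");
    // Identical values share one allocation, so comparing pointers suffices.
    assert!(std::ptr::eq(a, b));
}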

Queries

The first big implementation choice is the compiler's use of the query system. Rather than organizing the compiler as a series of passes over the code which execute sequentially, the Rust compiler uses queries. It does this to make incremental compilation possible -- that is, if the user makes a change to their program and recompiles, we want to do as little redundant work as possible to output the new binary.

In rustc, all the major steps above are organized as a bunch of queries that call each other. For example, there is a query to ask for the type of something and another to ask for the optimized MIR of a function. These queries can call each other and are all tracked through the query system. The results of the queries are cached on disk so that the compiler can tell which queries' results changed from the last compilation and only redo those. This is how incremental compilation works.

In principle, for the query-fied steps, we do each of the above for each item individually. For example, we will take the HIR for a function and use queries to ask for the LLVM-IR for that HIR. This drives the generation of optimized MIR, which drives the borrow checker, which drives the generation of MIR, and so on.

... except that this is very over-simplified. In fact, some queries are not cached on disk, and some parts of the compiler have to run for all code anyway for correctness even if the code is dead code (e.g. the borrow checker). For example, currently the mir_borrowck query is first executed on all functions of a crate. Then the codegen backend invokes the collect_and_partition_mono_items query, which first recursively requests the optimized_mir for all reachable functions, which in turn runs mir_borrowck for that function and then creates codegen units. This kind of split will need to remain to ensure that unreachable functions still have their errors emitted.

Moreover, the compiler wasn't originally built to use a query system; the query system has been retrofitted into the compiler, so parts of it are not query-fied yet. Also, LLVM isn't our code, so that isn't query-fied either. The plan is to eventually query-fy all of the steps listed in the previous section, but as of November 2022, only the steps between HIR and LLVM-IR are query-fied. That is, lexing, parsing, name resolution, and macro expansion are done all at once for the whole program.

One other thing to mention here is the all-important "typing context", TyCtxt, which is a giant struct that is at the center of all things. (Note that the name is mostly historic. This is not a "typing context" in the sense of Γ or Δ from type theory. The name is retained because that's what the name of the struct is in the source code.) All queries are defined as methods on the TyCtxt type, and the in-memory query cache is stored there too. In the code, there is usually a variable called tcx which is a handle on the typing context. You will also see lifetimes with the name 'tcx, which means that something is tied to the lifetime of the TyCtxt (usually it is stored or interned there).

ty::Ty

Types are really important in Rust, and they form the core of a lot of compiler analyses. The main type (in the compiler) that represents types (in the user's program) is rustc_middle::ty::Ty. This is so important that we have a whole chapter on ty::Ty, but for now, we just want to mention that it exists and is the way rustc represents types!

Also note that the rustc_middle::ty module defines the TyCtxt struct we mentioned before.

Parallelism

Compiler performance is a problem that we would like to improve on (and are always working on). One aspect of that is parallelizing rustc itself.

Currently, there is only one part of rustc that is parallel by default: code generation.

However, the rest of the compiler is still not yet parallel. There have been lots of efforts spent on this, but it is generally a hard problem. The current approach is to turn RefCells into Mutexes -- that is, we switch to thread-safe internal mutability. However, there are ongoing challenges with lock contention, maintaining query-system invariants under concurrency, and the complexity of the code base. One can try out the current work by enabling parallel compilation in config.toml. It's still early days, but there are already some promising performance improvements.

Bootstrapping

rustc itself is written in Rust. So how do we compile the compiler? We use an older compiler to compile the newer compiler. This is called bootstrapping.

Bootstrapping has a lot of interesting implications. For example, it means that one of the major users of Rust is the Rust compiler, so we are constantly testing our own software ("eating our own dogfood").

For more details on bootstrapping, see the bootstrapping section of the guide.

High-level overview of the compiler source

Now that we have seen what the compiler does, let's take a look at the structure of the rust-lang/rust repository, where the rustc source code lives.

You may find it helpful to read the "Overview of the compiler" chapter, which introduces how the compiler works, before this one.

Workspace structure

The rust-lang/rust repository consists of a single large cargo workspace containing the compiler, the standard libraries (core, alloc, std, proc_macro, etc), and rustdoc, along with the build system and a bunch of tools and submodules for building a full Rust distribution.

The repository consists of three main directories:

  • compiler/, which contains the source code of rustc;
  • library/, which contains the source code of the standard libraries (core, alloc, std, proc_macro, etc);
  • src/, which contains the source code of rustdoc, the build system, tools, and submodules for building a full Rust distribution.

Compiler

The compiler is implemented in the various compiler/ crates. The compiler/ crates all have names starting with rustc_*. These are a collection of around 50 interdependent crates ranging in size from tiny to huge. There is also the rustc crate which is the actual binary (i.e. the main function); it doesn't actually do anything besides calling the rustc_driver crate, which drives the various parts of compilation in other crates.

The dependency structure of these crates is complex; the "Big picture" section below describes its general shape.

You can see the exact dependencies by reading the Cargo.toml for the various crates, just like a normal Rust crate.

One final thing: src/llvm-project is a submodule for our fork of LLVM. During bootstrapping, LLVM is built and the compiler/rustc_llvm crate contains Rust wrappers around LLVM (which is written in C++), so that the compiler can interface with it.

Most of this book is about the compiler, so we won't have any further explanation of these crates here.

Big picture

The dependency structure of the compiler is influenced by two main factors:

  1. Organization. The compiler is a huge codebase; it would be an impossibly large crate. In part, the dependency structure reflects the code structure of the compiler.
  2. Compile-time. By breaking the compiler into multiple crates, we can take better advantage of incremental/parallel compilation using cargo. In particular, we try to have as few dependencies between crates as possible so that we don't have to rebuild as many crates if you change one.

At the very bottom of the dependency tree are a handful of crates that are used by the whole compiler (e.g. rustc_span). The very early parts of the compilation process (e.g. parsing and the Abstract Syntax Tree (AST)) depend on only these.

After the AST is constructed and other early analysis is done, the compiler's query system gets set up. The query system is set up in a clever way using function pointers. This allows us to break dependencies between crates, enabling more parallel compilation. The query system is defined in rustc_middle, so nearly all subsequent parts of the compiler depend on this crate. It is a really large crate, leading to long compile times. Some efforts have been made to move stuff out of it with varying success. Another side-effect is that sometimes related functionality gets scattered across different crates. For example, linting functionality is found across earlier parts of the compiler, rustc_lint, rustc_middle, and other places.

Ideally there would be fewer, more cohesive crates, with incremental and parallel compilation making sure compile times stay reasonable. However, incremental and parallel compilation haven't gotten good enough for that yet, so breaking things into separate crates has been our solution so far.

At the top of the dependency tree are rustc_driver and rustc_interface, an unstable wrapper around the query system that helps drive various stages of compilation. Other consumers of the compiler may use this interface in different ways (e.g. rustdoc, or maybe eventually rust-analyzer). The rustc_driver crate first parses command line arguments and then uses rustc_interface to drive the compilation to completion.

rustdoc

The bulk of rustdoc is in librustdoc. However, the rustdoc binary itself is src/tools/rustdoc, which does nothing except call rustdoc::main.

There is also JavaScript and CSS for the docs in src/tools/rustdoc-js and src/tools/rustdoc-themes.

You can read more about rustdoc in this chapter.

Tests

The test suite for all of the above is in tests/. You can read more about the test suite in this chapter.

The test harness is in src/tools/compiletest/.

Build System

There are a number of tools in the repository just for building the compiler, standard library, rustdoc, etc, along with testing, building a full Rust distribution, etc.

One of the primary tools is src/bootstrap/. You can read more about bootstrapping in this chapter. The process may also use other tools from src/tools/, such as tidy/ or compiletest/.

Standard library

This code is fairly similar to most other Rust crates except that it must be built in a special way because it can use unstable (nightly) features. The standard library is sometimes referred to as libstd or the "standard facade".

Other

There are a lot of other things in the rust-lang/rust repo that are related to building a full Rust distribution. Most of the time you don't need to worry about them.

These include:

  • src/ci: The CI configuration. This is actually quite extensive because we run a lot of tests on a lot of platforms.
  • src/doc: Various documentation, including submodules for a few books.
  • src/etc: Miscellaneous utilities.
  • And more...

Queries: demand-driven compilation

As described in the high-level overview of the compiler, the Rust compiler is still (as of July 2021) transitioning from a traditional "pass-based" setup to a "demand-driven" system. The compiler query system is the key to rustc's demand-driven organization. The idea is pretty simple. Instead of entirely independent passes (parsing, type-checking, etc.), a set of function-like queries compute information about the input source. For example, there is a query called type_of that, given the DefId of some item, will compute the type of that item and return it to you.

Query execution is memoized. The first time you invoke a query, it will go do the computation, but the next time, the result is returned from a hashtable. Moreover, query execution fits nicely into incremental computation; the idea is roughly that, when you invoke a query, the result may be returned to you by loading stored data from disk.1

Eventually, we want the entire compiler control-flow to be query driven. There will effectively be one top-level query (compile) that will run compilation on a crate; this will in turn demand information about that crate, starting from the end. For example:

  • The compile query might demand to get a list of codegen-units (i.e. modules that need to be compiled by LLVM).
  • But computing the list of codegen-units would invoke some subquery that returns the list of all modules defined in the Rust source.
  • That query in turn would invoke something asking for the HIR.
  • This keeps going further and further back until we wind up doing the actual parsing.

Although this vision is not fully realized, large sections of the compiler (for example, generating MIR) currently work exactly like this.

1

The "Incremental Compilation in Detail chapter gives a more in-depth description of what queries are and how they work. If you intend to write a query of your own, this is a good read.

Invoking queries

Invoking a query is simple. The TyCtxt ("type context") struct offers a method for each defined query. For example, to invoke the type_of query, you would just do this:

let ty = tcx.type_of(some_def_id);

How the compiler executes a query

So you may be wondering what happens when you invoke a query method. The answer is that, for each query, the compiler maintains a cache – if your query has already been executed, then, the answer is simple: we clone the return value out of the cache and return it (therefore, you should try to ensure that the return types of queries are cheaply cloneable; insert an Rc if necessary).

Providers

If, however, the query is not in the cache, then the compiler will try to find a suitable provider. A provider is a function that has been defined and linked into the compiler somewhere that contains the code to compute the result of the query.

Providers are defined per-crate. The compiler maintains, internally, a table of providers for every crate, at least conceptually. Right now, there are really two sets: the providers for queries about the local crate (that is, the one being compiled) and providers for queries about external crates (that is, dependencies of the local crate). Note that what determines the crate that a query is targeting is not the kind of query, but the key. For example, when you invoke tcx.type_of(def_id), that could be a local query or an external query, depending on what crate the def_id is referring to (see the self::keys::Key trait for more information on how that works).

Providers always have the same signature:

fn provider<'tcx>(
    tcx: TyCtxt<'tcx>,
    key: QUERY_KEY,
) -> QUERY_RESULT {
    ...
}

Providers take two arguments: the tcx and the query key. They return the result of the query.

How providers are set up

When the tcx is created, it is given the providers by its creator using the Providers struct. This struct is generated by the macros here, but it is basically a big list of function pointers:

struct Providers {
    type_of: for<'tcx> fn(TyCtxt<'tcx>, DefId) -> Ty<'tcx>,
    ...
}

At present, we have one copy of the struct for local crates, and one for external crates, though the plan is that we may eventually have one per crate.

These Providers structs are ultimately created and populated by rustc_driver, but it does this by distributing the work throughout the other rustc_* crates. This is done by invoking various provide functions. These functions tend to look something like this:

pub fn provide(providers: &mut Providers) {
    *providers = Providers {
        type_of,
        ..*providers
    };
}

That is, they take an &mut Providers and mutate it in place. Usually we use the formulation above just because it looks nice, but you could as well do providers.type_of = type_of, which would be equivalent. (Here, type_of would be a top-level function, defined as we saw before.) So, if we want to add a provider for some other query, let's call it fubar, into the crate above, we might modify the provide() function like so:

pub fn provide(providers: &mut Providers) {
    *providers = Providers {
        type_of,
        fubar,
        ..*providers
    };
}

fn fubar<'tcx>(tcx: TyCtxt<'tcx>, key: DefId) -> Fubar<'tcx> { ... }

N.B. Most of the rustc_* crates only provide local providers. Almost all extern providers wind up going through the rustc_metadata crate, which loads the information from crate metadata. But in some cases there are crates that provide queries for both local and external crates, in which case they define both a provide and a provide_extern function (wasm_import_module_map is one example) that rustc_driver can invoke.

Adding a new query

How do you add a new query? Defining a query takes place in two steps:

  1. Declare the query name, its arguments and description.
  2. Supply query providers where needed.

To declare the query name and arguments, you simply add an entry to the big macro invocation in compiler/rustc_middle/src/query/mod.rs. You then add a documentation comment to it with some internal description, and provide the desc attribute, which contains a user-facing description of the query. The desc attribute is shown to the user in query cycles.

This looks something like:

rustc_queries! {
    /// Records the type of every item.
    query type_of(key: DefId) -> Ty<'tcx> {
        cache_on_disk_if { key.is_local() }
        desc { |tcx| "computing the type of `{}`", tcx.def_path_str(key) }
    }
    ...
}

A query definition has the following form:

query type_of(key: DefId) -> Ty<'tcx> { ... }
^^^^^ ^^^^^^^      ^^^^^     ^^^^^^^^   ^^^
|     |            |         |          |
|     |            |         |          query modifiers
|     |            |         result type
|     |            query key type
|     name of query
query keyword

Let's go over these elements one by one:

  • Query keyword: indicates a start of a query definition.
  • Name of query: the name of the query method (tcx.type_of(..)). Also used as the name of a struct (ty::queries::type_of) that will be generated to represent this query.
  • Query key type: the type of the argument to this query. This type must implement the ty::query::keys::Key trait, which defines (for example) how to map it to a crate, and so forth.
  • Result type of query: the type produced by this query. This type should (a) not use RefCell or other interior mutability and (b) be cheaply cloneable. Interning or using Rc or Arc is recommended for non-trivial data types.2
  • Query modifiers: various flags and options that customize how the query is processed (mostly with respect to incremental compilation).

So, to add a query:

  • Add an entry to rustc_queries! using the format above.
  • Link the provider by modifying the appropriate provide method; or add a new one if needed and ensure that rustc_driver is invoking it.
2

The one exception to those rules is the ty::steal::Steal type, which is used to cheaply modify MIR in place. See the definition of Steal for more details. New uses of Steal should not be added without alerting @rust-lang/compiler.

The Query Evaluation Model in Detail

This chapter provides a deeper dive into the abstract model queries are built on. It does not go into implementation details but tries to explain the underlying logic. The examples here, therefore, have been stripped down and simplified and don't directly reflect the compiler's internal APIs.

What is a query?

Abstractly we view the compiler's knowledge about a given crate as a "database" and queries are the way of asking the compiler questions about it, i.e. we "query" the compiler's "database" for facts.

However, there's something special about this compiler database: it starts out empty and is filled on demand as queries are executed. Consequently, a query must know how to compute its result if the database does not contain it yet. To do so, it can access other queries and certain input values that the database is pre-filled with on creation.

A query thus consists of the following things:

  • A name that identifies the query
  • A "key" that specifies what we want to look up
  • A result type that specifies what kind of result it yields
  • A "provider" which is a function that specifies how the result is to be computed if it isn't already present in the database.

As an example, the name of the type_of query is type_of, its query key is a DefId identifying the item we want to know the type of, the result type is Ty<'tcx>, and the provider is a function that, given the query key and access to the rest of the database, can compute the type of the item identified by the key.

So in some sense a query is just a function that maps the query key to the corresponding result. However, we have to apply some restrictions in order for this to be sound:

  • The key and result must be immutable values.
  • The provider function must be a pure function in the sense that for the same key it must always yield the same result.
  • The only parameters a provider function takes are the key and a reference to the "query context" (which provides access to the rest of the "database").

The database is built up lazily by invoking queries. The query providers will invoke other queries, for which the result is either already cached or computed by calling another query provider. These query provider invocations conceptually form a directed acyclic graph (DAG) at the leaves of which are input values that are already known when the query context is created.

Caching/Memoization

Results of query invocations are "memoized" which means that the query context will cache the result in an internal table and, when the query is invoked with the same query key again, will return the result from the cache instead of running the provider again.

This caching is crucial for making the query engine efficient. Without memoization the system would still be sound (that is, it would yield the same results) but the same computations would be done over and over again.

Memoization is one of the main reasons why query providers have to be pure functions. If calling a provider function could yield different results for each invocation (because it accesses some global mutable state) then we could not memoize the result.
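
A minimal sketch of memoization outside the compiler, assuming a pure provider function: results are stored in a table keyed by the query key, and repeated invocations are answered from the cache. The names Context, factorial, and factorial_provider are invented for illustration; the real query system generates this machinery for every query and also tracks dependencies.

use std::collections::HashMap;

// A toy "query context" that memoizes a single query.
struct Context {
    cache: HashMap<u32, u64>,
}

impl Context {
    fn new() -> Self {
        Context { cache: HashMap::new() }
    }

    // The "query": returns a cached result if present, otherwise runs the
    // provider and stores the result.
    fn factorial(&mut self, key: u32) -> u64 {
        if let Some(&cached) = self.cache.get(&key) {
            return cached;
        }
        let result = factorial_provider(self, key);
        self.cache.insert(key, result);
        result
    }
}

// The "provider": a pure function of the key (and of other queries).
fn factorial_provider(cx: &mut Context, key: u32) -> u64 {
    if key == 0 { 1 } else { key as u64 * cx.factorial(key - 1) }
}

fn main() {
    let mut cx = Context::new();
    assert_eq!(cx.factorial(10), 3_628_800);
    // The second call is answered from the cache without recomputation.
    assert_eq!(cx.factorial(10), 3_628_800);
}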

Input data

When the query context is created, it is still empty: No queries have been executed, no results are cached. But the context already provides access to "input" data, i.e. pieces of immutable data that were computed before the context was created and that queries can access to do their computations.

As of January 2021, this input data consists mainly of the HIR map, upstream crate metadata, and the command-line options the compiler was invoked with; but in the future inputs will just consist of command-line options and a list of source files -- the HIR map will itself be provided by a query which processes these source files.

Without inputs, queries would live in a void without anything to compute their result from (remember, query providers only have access to other queries and the context but not any other outside state or information).

For a query provider, input data and results of other queries look exactly the same: It just tells the context "give me the value of X". Because input data is immutable, the provider can rely on it being the same across different query invocations, just as is the case for query results.

An example execution trace of some queries

How does this DAG of query invocations come into existence? At some point the compiler driver will create the, as yet empty, query context. It will then, from outside of the query system, invoke the queries it needs to perform its task. This looks something like the following:

fn compile_crate() {
    let cli_options = ...;
    let hir_map = ...;

    // Create the query context `tcx`
    let tcx = TyCtxt::new(cli_options, hir_map);

    // Do type checking by invoking the type check query
    tcx.type_check_crate();
}

The type_check_crate query provider would look something like the following:

fn type_check_crate_provider(tcx, _key: ()) {
    let list_of_hir_items = tcx.hir_map.list_of_items();

    for item_def_id in list_of_hir_items {
        tcx.type_check_item(item_def_id);
    }
}

We see that the type_check_crate query accesses input data (tcx.hir_map.list_of_items()) and invokes other queries (type_check_item). The type_check_item invocations will themselves access input data and/or invoke other queries, so that in the end the DAG of query invocations will be built up backwards from the node that was initially executed:

         (2)                                                 (1)
  list_of_all_hir_items <----------------------------- type_check_crate()
                                                               |
    (5)             (4)                  (3)                   |
  Hir(foo) <--- type_of(foo) <--- type_check_item(foo) <-------+
                                      |                        |
                    +-----------------+                        |
                    |                                          |
    (7)             v  (6)                  (8)                |
  Hir(bar) <--- type_of(bar) <--- type_check_item(bar) <-------+

// (x) denotes invocation order

We also see that often a query result can be read from the cache: type_of(bar) was computed for type_check_item(foo) so when type_check_item(bar) needs it, it is already in the cache.

Query results stay cached in the query context as long as the context lives. So if the compiler driver invoked another query later on, the above graph would still exist and already executed queries would not have to be re-done.

Cycles

Earlier we stated that query invocations form a DAG. However, it would be easy to form a cyclic graph by, for example, having a query provider like the following:

fn cyclic_query_provider(tcx, key) -> u32 {
  // Invoke the same query with the same key again
  tcx.cyclic_query(key)
}

Since query providers are regular functions, this would behave much as expected: Evaluation would get stuck in an infinite recursion. A query like this would not be very useful either. However, sometimes certain kinds of invalid user input can result in queries being called in a cyclic way. The query engine includes a check for cyclic invocations of queries with the same input arguments, and, because cycles are an irrecoverable error, it will abort execution with a "cycle error" message that tries to be human-readable.

At some point the compiler had a notion of "cycle recovery", that is, one could "try" to execute a query and if it ended up causing a cycle, proceed in some other fashion. However, this was later removed because it is not entirely clear what the theoretical consequences of this are, especially regarding incremental compilation.

"Steal" Queries

Some queries have their result wrapped in a Steal<T> struct. These queries behave exactly the same as regular queries, with one exception: Their result is expected to be "stolen" out of the cache at some point, meaning some other part of the program is taking ownership of it and the result cannot be accessed anymore.

This stealing mechanism exists purely as a performance optimization because some result values are too costly to clone (e.g. the MIR of a function). It seems like result stealing would violate the condition that query results must be immutable (after all we are moving the result value out of the cache) but it is OK as long as the mutation is not observable. This is achieved by two things:

  • Before a result is stolen, we make sure to eagerly run all queries that might ever need to read that result. This has to be done manually by calling those queries.
  • Whenever a query tries to access a stolen result, we trigger an ICE (Internal Compiler Error) so that such a condition cannot go unnoticed.

This is not an ideal setup because of the manual intervention needed, so it should be used sparingly and only when it is well known which queries might access a given result. In practice, however, stealing has not turned out to be much of a maintenance burden.

To summarize: "Steal queries" break some of the rules in a controlled way. There are checks in place that make sure that nothing can go silently wrong.
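
To make the mechanism more concrete, here is a minimal, self-contained sketch of the "steal" idea. It is not the actual type from rustc_data_structures, just an illustration of the two rules above: the value can be read until someone takes ownership of it, and any access after that point panics, playing the role of the ICE.

use std::cell::{Ref, RefCell};

struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    /// Shared read access; panics if the value has already been stolen.
    fn borrow(&self) -> Ref<'_, T> {
        Ref::map(self.value.borrow(), |opt| {
            opt.as_ref().expect("attempted to read a stolen query result")
        })
    }

    /// Move the value out of the cache, leaving the slot empty.
    fn steal(&self) -> T {
        self.value
            .borrow_mut()
            .take()
            .expect("query result was already stolen")
    }
}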

Incremental compilation

The incremental compilation scheme is, in essence, a surprisingly simple extension to the overall query system. We'll start by describing a slightly simplified variant of the real thing – the "basic algorithm" – and then describe some possible improvements.

The basic algorithm

The basic algorithm is called the red-green algorithm1. The high-level idea is that, after each run of the compiler, we will save the results of all the queries that we do, as well as the query DAG. The query DAG is a DAG that indexes which queries executed which other queries. So, for example, there would be an edge from a query Q1 to another query Q2 if computing Q1 required computing Q2 (note that because queries cannot depend on themselves, this results in a DAG and not a general graph).

NOTE: You might think of a query as simply the definition of a query: a thing that you can invoke, a bit like a function, and which either returns a cached result or actually executes the code.

If that's the way you think about queries, it's good to know that in the following text, queries will be said to have colors. Keep in mind, though, that here the word query also refers to a certain invocation of the query for a certain input. As you will read later, queries are fingerprinted based on their arguments. The result of a query might change when we give it one argument and be colored red, while it stays the same for another argument and is thus green.

In short, the word query is here not just used to mean the definition of a query, but also for a specific instance of that query with given arguments.

On the next run of the compiler, then, we can sometimes reuse these query results to avoid re-executing a query. We do this by assigning every query a color:

  • If a query is colored red, that means that its result during this compilation has changed from the previous compilation.
  • If a query is colored green, that means that its result is the same as the previous compilation.

There are two key insights here:

  • First, if all the inputs to query Q are colored green, then the query Q must result in the same value as last time and hence need not be re-executed (or else the compiler is not deterministic).
  • Second, even if some inputs to a query change, it may be that it still produces the same result as the previous compilation. In particular, the query may only use part of its input.
    • Therefore, after executing a query, we always check whether it produced the same result as the previous time. If it did, we can still mark the query as green, and hence avoid re-executing dependent queries.

The try-mark-green algorithm

At the core of incremental compilation is an algorithm called "try-mark-green". It has the job of determining the color of a given query Q (which must not have yet been executed). In cases where Q has red inputs, determining Q's color may involve re-executing Q so that we can compare its output, but if all of Q's inputs are green, then we can conclude that Q must be green without re-executing it or inspecting its value at all. In the compiler, this allows us to avoid deserializing the result from disk when we don't need it, and in fact enables us to sometimes skip serializing the result as well (see the refinements section below).

Try-mark-green works as follows:

  • First check if the query Q was executed during the previous compilation.
    • If not, we can just re-execute the query as normal, and assign it the color of red.
  • If yes, then load the 'dependent queries' of Q.
  • If there is a saved result, then we load the reads(Q) vector from the query DAG. reads(Q) is the set of queries that Q invoked during its execution.
    • For each query R in reads(Q), we recursively demand the color of R using try-mark-green.
      • Note: it is important that we visit each node in reads(Q) in the same order as they occurred in the original compilation. See the section on the query DAG below.
      • If any of the nodes in reads(Q) wind up colored red, then Q is dirty.
        • We re-execute Q and compare the hash of its result to the hash of the result from the previous compilation.
        • If the hash has not changed, we can mark Q as green and return.
      • Otherwise, all of the nodes in reads(Q) must be green. In that case, we can color Q as green and return.

The query DAG

The query DAG code is stored in compiler/rustc_middle/src/dep_graph. Construction of the DAG is done by instrumenting the query execution.

One key point is that the query DAG also tracks ordering; that is, for each query Q, we not only track the queries that Q reads, we track the order in which they were read. This allows try-mark-green to walk those queries back in the same order. This is important because once a subquery comes back as red, we can no longer be sure that Q will continue along the same path as before. That is, imagine a query like this:

fn main_query(tcx) {
    if tcx.subquery1() {
        tcx.subquery2()
    } else {
        tcx.subquery3()
    }
}

Now imagine that in the first compilation, main_query starts by executing subquery1, and this returns true. In that case, the next query main_query executes will be subquery2, and subquery3 will not be executed at all.

But now imagine that in the next compilation, the input has changed such that subquery1 returns false. In this case, subquery2 would never execute. If try-mark-green were to visit reads(main_query) out of order, however, it might visit subquery2 before subquery1, and hence execute it. This can lead to ICEs and other problems in the compiler.

Improvements to the basic algorithm

In the description of the basic algorithm, we said that, at the end of compilation, we would save the results of all the queries that were performed. In practice, this can be quite wasteful – many of those results are very cheap to recompute, and serializing and deserializing them is not a particular win. In practice, what we do instead is save the hashes of all the subqueries that we performed. Then, in select cases, we also save the results.

This is why the incremental algorithm separates computing the color of a node, which often does not require its value, from computing the result of a node. Computing the result is done via a simple algorithm like so:

  • Check if a saved result for Q is available. If so, compute the color of Q. If Q is green, deserialize and return the saved result.
  • Otherwise, execute Q.
    • We can then compare the hash of the result and color Q as green if it did not change.
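
Putting the two pieces together, obtaining a query result in incremental mode looks roughly like the following sketch (the helper names are illustrative, not the exact rustc API):

fn get_query_result(tcx, q: QueryKey) -> QueryResult {
    if tcx.on_disk_cache.has_saved_result(q) && tcx.dep_graph.try_mark_green(q) {
        // Green: the saved result is known to still be valid, so we can just
        // deserialize it instead of running the provider.
        return tcx.on_disk_cache.load_result(q);
    }

    // Red, previously unknown, or no saved result: execute the provider and
    // compare the new fingerprint against the previous one, so that queries
    // depending on this one may still be marked green.
    let result = tcx.execute_provider(q);
    tcx.dep_graph.record_fingerprint_and_color(q, &result);
    result
}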

Resources

The initial design document can be found here; it expands on the memoization details and provides a more high-level overview of, and motivation for, this system.

Footnotes

1. I have long wanted to rename it to the Salsa algorithm, but it never caught on. -@nikomatsakis

Incremental Compilation In Detail

The incremental compilation scheme is, in essence, a surprisingly simple extension to the overall query system. It relies on the fact that:

  1. queries are pure functions -- given the same inputs, a query will always yield the same result, and
  2. the query model structures compilation in an acyclic graph that makes dependencies between individual computations explicit.

This chapter will explain how we can use these properties for making things incremental and then goes on to discuss various implementation issues.

A Basic Algorithm For Incremental Query Evaluation

As explained in the query evaluation model primer, query invocations form a directed-acyclic graph. Here's the example from the previous chapter again:

  list_of_all_hir_items <----------------------------- type_check_crate()
                                                               |
                                                               |
  Hir(foo) <--- type_of(foo) <--- type_check_item(foo) <-------+
                                      |                        |
                    +-----------------+                        |
                    |                                          |
                    v                                          |
  Hir(bar) <--- type_of(bar) <--- type_check_item(bar) <-------+

Since every access from one query to another has to go through the query context, we can record these accesses and thus actually build this dependency graph in memory. With dependency tracking enabled, when compilation is done, we know which queries were invoked (the nodes of the graph) and, for each invocation, which other queries or inputs went into computing the query's result (the edges of the graph).

Now suppose we change the source code of our program so that HIR of bar looks different than before. Our goal is to only recompute those queries that are actually affected by the change while re-using the cached results of all the other queries. Given the dependency graph we can do exactly that. For a given query invocation, the graph tells us exactly what data has gone into computing its results, we just have to follow the edges until we reach something that has changed. If we don't encounter anything that has changed, we know that the query still would evaluate to the same result we already have in our cache.

Taking the type_of(foo) invocation from above as an example, we can check whether the cached result is still valid by following the edges to its inputs. The only edge leads to Hir(foo), an input that has not been affected by the change. So we know that the cached result for type_of(foo) is still valid.

The story is a bit different for type_check_item(foo): We again walk the edges and already know that type_of(foo) is fine. Then we get to type_of(bar), which we have not checked yet, so we walk the edges of type_of(bar) and encounter Hir(bar), which has changed. Consequently, type_of(bar) might yield a different result than what we have in the cache and, transitively, the result of type_check_item(foo) might have changed too. We thus re-run type_check_item(foo), which in turn will re-run type_of(bar), which will yield an up-to-date result because it reads the up-to-date version of Hir(bar). We also re-run type_check_item(bar) because the result of type_of(bar) might have changed.

The Problem With The Basic Algorithm: False Positives

If you read the previous paragraph carefully you'll notice that it says that type_of(bar) might have changed because one of its inputs has changed. There's also the possibility that it might still yield exactly the same result even though its input has changed. Consider an example with a simple query that just computes the sign of an integer:

  IntValue(x) <---- sign_of(x) <--- some_other_query(x)

Let's say that IntValue(x) starts out as 1000 and then is set to 2000. Even though IntValue(x) is different in the two cases, sign_of(x) yields the result + in both cases.
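
Written as a plain function, the query from the diagram might look like this; many different changes to its input leave the result unchanged:

fn sign_of(x: i64) -> i8 {
    x.signum() as i8
}

fn main() {
    // IntValue(x) going from 1000 to 2000 does not change the result.
    assert_eq!(sign_of(1000), sign_of(2000));
}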

If we follow the basic algorithm, however, some_other_query(x) would have to (unnecessarily) be re-evaluated because it transitively depends on a changed input. Change detection yields a "false positive" in this case because it has to conservatively assume that some_other_query(x) might be affected by that changed input.

Unfortunately it turns out that the actual queries in the compiler are full of examples like this, and small changes to the input can potentially affect very large parts of the output binaries. As a consequence, we had to make the change detection system smarter and more accurate.

Improving Accuracy: The red-green Algorithm

The "false positives" problem can be solved by interleaving change detection and query re-evaluation. Instead of walking the graph all the way to the inputs when trying to find out if some cached result is still valid, we can check if a result has actually changed after we were forced to re-evaluate it.

We call this algorithm the red-green algorithm because nodes in the dependency graph are assigned the color green if we were able to prove that their cached result is still valid, and the color red if the result has turned out to be different after re-evaluating it.

The meat of red-green change tracking is implemented in the try-mark-green algorithm, which, you've guessed it, tries to mark a given node as green:

fn try_mark_green(tcx, current_node) -> bool {

    // Fetch the inputs to `current_node`, i.e. get the nodes that the direct
    // edges from `node` lead to.
    let dependencies = tcx.dep_graph.get_dependencies_of(current_node);

    // Now check all the inputs for changes
    for dependency in dependencies {

        match tcx.dep_graph.get_node_color(dependency) {
            Green => {
                // This input has already been checked before and it has not
                // changed; so we can go on to check the next one
            }
            Red => {
                // We found an input that has changed. We cannot mark
                // `current_node` as green without re-running the
                // corresponding query.
                return false
            }
            Unknown => {
                // This is the first time we look at this node. Let's try
                // to mark it green by calling try_mark_green() recursively.
                if try_mark_green(tcx, dependency) {
                    // We successfully marked the input as green, on to the
                    // next.
                } else {
                    // We could *not* mark the input as green. This means we
                    // don't know if its value has changed. In order to find
                    // out, we re-run the corresponding query now!
                    tcx.run_query_for(dependency);

                    // Fetch and check the node color again. Running the query
                    // has forced it to either red (if it yielded a different
                    // result than we have in the cache) or green (if it
                    // yielded the same result).
                    match tcx.dep_graph.get_node_color(dependency) {
                        Red => {
                            // The input turned out to be red, so we cannot
                            // mark `current_node` as green.
                            return false
                        }
                        Green => {
                            // Re-running the query paid off! The result is the
                            // same as before, so this particular input does
                            // not invalidate `current_node`.
                        }
                        Unknown => {
                            // There is no way a node has no color after
                            // re-running the query.
                            panic!("unreachable")
                        }
                    }
                }
            }
        }
    }

    // If we have gotten through the entire loop, it means that all inputs
    // have turned out to be green. If all inputs are unchanged, it means
    // that the query result corresponding to `current_node` cannot have
    // changed either.
    tcx.dep_graph.mark_green(current_node);

    true
}

NOTE: The actual implementation can be found in compiler/rustc_query_system/src/dep_graph/graph.rs

By using red-green marking we can avoid the devastating cumulative effect of having false positives during change detection. Whenever a query is executed in incremental mode, we first check if it is already green. If not, we run try_mark_green() on it. If it still isn't green after that, then we actually invoke the query provider to re-compute the result. Re-computing the query might then itself involve recursively invoking more queries, which can mean we come back to the try_mark_green() algorithm for the dependencies recursively.

The Real World: How Persistence Makes Everything Complicated

The sections above described the underlying algorithm for incremental compilation, but because the compiler process exits when it is finished and takes the query context, and with it the result cache, into oblivion, we have to persist data to disk so that the next compilation session can make use of it. This comes with a whole new set of implementation challenges:

  • The query result cache is stored to disk, so the previous results are not readily available in memory for change comparison.
  • A subsequent compilation session will start off with a new version of the code that has arbitrary changes applied to it. All kinds of IDs and indices that are generated from a global, sequential counter (e.g. NodeId, DefId, etc.) might have shifted, making the persisted results on disk not immediately usable anymore because the same numeric IDs and indices might refer to completely new things in the new compilation session.
  • Persisting things to disk comes at a cost, so not every tiny piece of information should be actually cached in between compilation sessions. Fixed-sized, plain-old-data is preferred to complex things that need to run through an expensive (de-)serialization step.

The following sections describe how the compiler solves these issues.

A Question Of Stability: Bridging The Gap Between Compilation Sessions

As noted before, various IDs (like DefId) are generated by the compiler in a way that depends on the contents of the source code being compiled. ID assignment is usually deterministic, that is, if the exact same code is compiled twice, the same things will end up with the same IDs. However, if something changes, e.g. a function is added in the middle of a file, there is no guarantee that anything will have the same ID as it had before.

As a consequence we cannot represent the data in our on-disk cache the same way it is represented in memory. For example, if we just stored a piece of type information like TyKind::FnDef(DefId, &'tcx Substs<'tcx>) (as we do in memory) and then the contained DefId points to a different function in a new compilation session we'd be in trouble.

The solution to this problem is to find "stable" forms for IDs which remain valid in between compilation sessions. For the most important case, DefIds, these are the so-called DefPaths. Each DefId has a corresponding DefPath but in place of a numeric ID, a DefPath is based on the path to the identified item, e.g. std::collections::HashMap. The advantage of an ID like this is that it is not affected by unrelated changes. For example, one can add a new function to std::collections but std::collections::HashMap would still be std::collections::HashMap. A DefPath is "stable" across changes made to the source code while a DefId isn't.

There is also the DefPathHash, which is just a 128-bit hash value of the DefPath. The two contain the same information and we mostly use the DefPathHash because it is simpler to handle, being Copy and self-contained.

This principle of stable identifiers is used to make the data in the on-disk cache resilient to source code changes. Instead of storing a DefId, we store the DefPathHash and when we deserialize something from the cache, we map the DefPathHash to the corresponding DefId in the current compilation session (which is just a simple hash table lookup).
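
As a sketch (with illustrative helper names, not the exact rustc API), deserializing a DefId from the cache conceptually looks like this:

fn decode_def_id(tcx, cache_entry: &CacheEntry) -> DefId {
    // The cache stores the stable form.
    let hash: DefPathHash = cache_entry.read_def_path_hash();
    // Map it back to a DefId that is valid for the *current* session,
    // via a hash table that is rebuilt in every session.
    tcx.def_path_hash_to_def_id(hash)
}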

The HirId, used for identifying HIR components that don't have their own DefId, is another such stable ID. It is (conceptually) a pair of a DefPath and a LocalId, where the LocalId identifies something (e.g. a hir::Expr) locally within its "owner" (e.g. a hir::Item). If the owner is moved around, the LocalIds within it are still the same.
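
Conceptually, and simplified from the actual definition in rustc_hir, a HirId looks like this:

struct HirId {
    owner: OwnerId,         // backed by a DefId/DefPath, hence stable
    local_id: ItemLocalId,  // index of the node within that owner
}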

Checking Query Results For Changes: HashStable And Fingerprints

In order to do red-green-marking we often need to check if the result of a query has changed compared to the result it had during the previous compilation session. There are two performance problems with this though:

  • We'd like to avoid having to load the previous result from disk just for doing the comparison. We already computed the new result and will use that. Also loading a result from disk will "pollute" the interners with data that is unlikely to ever be used.
  • We don't want to store each and every result in the on-disk cache. For example, it would be wasted effort to persist things to disk that are already available in upstream crates.

The compiler avoids these problems by using so-called Fingerprints. Each time a new query result is computed, the query engine will compute a 128 bit hash value of the result. We call this hash value "the Fingerprint of the query result". The hashing is (and has to be) done "in a stable way". This means that whenever something is hashed that might change in between compilation sessions (e.g. a DefId), we instead hash its stable equivalent (e.g. the corresponding DefPath). That's what the whole HashStable infrastructure is for. This way Fingerprints computed in two different compilation sessions are still comparable.
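
Most compiler types get their stable hashing implementation through a derive. As a sketch (the type here is hypothetical), it looks like this; the important point is that session-dependent pieces such as DefId are hashed via their stable form:

#[derive(HashStable)]
struct MyQueryResult<'tcx> {
    def_id: DefId,   // hashed via its DefPathHash
    ty: Ty<'tcx>,
}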

The next step is to store these fingerprints along with the dependency graph. This is cheap since fingerprints are just bytes to be copied. It's also cheap to load the entire set of fingerprints together with the dependency graph.

Now, when red-green-marking reaches the point where it needs to check if a result has changed, it can just compare the (already loaded) previous fingerprint to the fingerprint of the new result.

This approach works rather well but it's not without flaws:

  • There is a small possibility of hash collisions. That is, two different results could have the same fingerprint and the system would erroneously assume that the result hasn't changed, leading to a missed update.

    We mitigate this risk by using a high-quality hash function and a 128 bit wide hash value. Due to these measures the practical risk of a hash collision is negligible.

  • Computing fingerprints is quite costly. It is the main reason why incremental compilation can be slower than non-incremental compilation. We are forced to use a good and thus expensive hash function, and we have to map things to their stable equivalents while doing the hashing.

A Tale Of Two DepGraphs: The Old And The New

The initial description of dependency tracking glosses over a few details that quickly become a head scratcher when actually trying to implement things. In particular it's easy to overlook that we are actually dealing with two dependency graphs: The one we built during the previous compilation session and the one that we are building for the current compilation session.

When a compilation session starts, the compiler loads the previous dependency graph into memory as an immutable piece of data. Then, when a query is invoked, it will first try to mark the corresponding node in the graph as green. This really means that we are trying to mark as green the node in the previous dep-graph that corresponds to the query key in the current session. How do we do this mapping between current query key and previous DepNode? The answer is again Fingerprints: nodes in the dependency graph are identified by a fingerprint of the query key. Since fingerprints are stable across compilation sessions, computing one in the current session allows us to find a node in the dependency graph from the previous session. If we don't find a node with the given fingerprint, it means that the query key refers to something that did not yet exist in the previous session.

So, having found the dep-node in the previous dependency graph, we can look up its dependencies (i.e. also dep-nodes in the previous graph) and continue with the rest of the try-mark-green algorithm. The next interesting thing happens when we successfully marked the node as green. At that point we copy the node and the edges to its dependencies from the old graph into the new graph. We have to do this because the new dep-graph cannot acquire the node and edges via the regular dependency tracking. The tracking system can only record edges while actually running a query -- but running the query, although we have the result already cached, is exactly what we want to avoid.
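
A rough sketch of that "copy into the new graph" step, with illustrative helper names rather than the exact rustc API:

fn promote_green_node(tcx, prev_index: PreviousNodeIndex) -> CurrentNodeIndex {
    // The node was proven green, so the query is never re-run and the
    // tracking system records no edges for it. Instead, we copy the node
    // and its edge list verbatim from the previous graph.
    let prev_edges = tcx.dep_graph.previous.edges_of(prev_index);
    tcx.dep_graph.current.push_node_with_edges(prev_index, prev_edges)
}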

Once the compilation session has finished, all the unchanged parts have been copied over from the old into the new dependency graph, while the changed parts have been added to the new graph by the tracking system. At this point, the new graph is serialized out to disk, alongside the query result cache, and can act as the previous dep-graph in a subsequent compilation session.

Didn't You Forget Something?: Cache Promotion

The system described so far has a somewhat subtle property: If all inputs of a dep-node are green then the dep-node itself can be marked as green without computing or loading the corresponding query result. Applying this property transitively often leads to the situation that some intermediate results are never actually loaded from disk, as in the following example:

   input(A) <-- intermediate_query(B) <-- leaf_query(C)

The compiler might need the value of leaf_query(C) in order to generate some output artifact. If it can mark leaf_query(C) as green, it will load the result from the on-disk cache. The result of intermediate_query(B) is never loaded though. As a consequence, when the compiler persists the new result cache by writing all in-memory query results to disk, intermediate_query(B) will not be in memory and thus will be missing from the new result cache.

If there subsequently is another compilation session that actually needs the result of intermediate_query(B) it will have to be re-computed even though we had a perfectly valid result for it in the cache just before.

In order to prevent this from happening, the compiler does something called "cache promotion": Before emitting the new result cache it will walk all green dep-nodes and make sure that their query result is loaded into memory. That way the result cache doesn't unnecessarily shrink again.

Incremental Compilation and the Compiler Backend

The compiler backend, the part involving LLVM, uses the query system but is not itself implemented in terms of queries. As a consequence it does not automatically partake in dependency tracking. However, the manual integration with the tracking system is pretty straightforward. The compiler simply tracks what queries get invoked when generating the initial LLVM version of each codegen unit, which results in a dep-node for each of them. In subsequent compilation sessions it then tries to mark the dep-node for a CGU as green. If it succeeds, it knows that the corresponding object and bitcode files on disk are still valid. If it doesn't succeed, the entire codegen unit has to be recompiled.

This is the same approach that is used for regular queries. The main differences are:

  • that we cannot easily compute a fingerprint for LLVM modules (because they are opaque C++ objects),

  • that the logic for dealing with cached values is rather different from regular queries because here we have bitcode and object files instead of serialized Rust values in the common result cache file, and

  • the operations around LLVM are so expensive in terms of computation time and memory consumption that we need to have tight control over what is executed when and what stays in memory for how long.

The query system could probably be extended with general purpose mechanisms to deal with all of the above but so far that seemed like more trouble than it would save.

Query Modifiers

The query system allows for applying modifiers to queries. These modifiers affect certain aspects of how the system treats the query with respect to incremental compilation:

  • eval_always - A query with the eval_always attribute is re-executed unconditionally during incremental compilation. I.e. the system will not even try to mark the query's dep-node as green. This attribute has two use cases:

    • eval_always queries can read inputs (from files, global state, etc). They can also produce side effects like writing to files and changing global state.

    • Some queries are very likely to be re-evaluated because their result depends on the entire source code. In this case eval_always can be used as an optimization because the system can skip recording dependencies in the first place.

  • no_hash - Applying no_hash to a query tells the system to not compute the fingerprint of the query's result. This has two consequences:

    • Not computing the fingerprint can save quite a bit of time because fingerprinting is expensive, especially for large, complex values.

    • Without the fingerprint, the system has to unconditionally assume that the result of the query has changed. As a consequence anything depending on a no_hash query will always be re-executed.

    Using no_hash for a query can make sense in two circumstances:

    • If the result of the query is very likely to change whenever one of its inputs changes, e.g. a function like |a, b, c| -> (a * b * c). In such a case recomputing the query will always yield a red node if one of the inputs is red, so we can spare ourselves the trouble and default to red immediately. A counter-example would be a function like |a| -> (a == 42) where the result does not change for most changes of a.

    • If the result of a query is a big, monolithic collection (e.g. index_hir) and there are "projection queries" reading from that collection (e.g. hir_owner). In such a case the big collection will likely fulfill the condition above (any changed input means recomputing the whole collection) and the results of the projection queries will be hashed anyway. If we also hashed the collection query it would mean that we effectively hash the same data twice: once when hashing the collection and another time when hashing all the projection query results. no_hash allows us to avoid that redundancy and the projection queries act as a "firewall", shielding their dependents from the unconditionally red no_hash node.

  • cache_on_disk_if - This attribute is what determines which query results are persisted in the incremental compilation query result cache. The attribute takes an expression that allows per query invocation decisions. For example, it makes no sense to store values from upstream crates in the cache because they are already available in the upstream crate's metadata.

  • anon - This attribute makes the system use "anonymous" dep-nodes for the given query. An anonymous dep-node is not identified by the corresponding query key, instead its ID is computed from the IDs of its dependencies. This allows the red-green system to do its change detection even if there is no query key available for a given dep-node -- something which is needed for handling trait selection because it is not based on queries.
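
For illustration, modifiers like these are written inside the query definitions in rustc_middle. The following sketch uses hypothetical queries and descriptions; it only shows where the modifiers go, not the actual definitions:

rustc_queries! {
    query my_whole_crate_index(_: ()) -> &'tcx MyCrateIndex<'tcx> {
        // Depends on essentially the whole input, so skip dependency tracking
        // and fingerprinting; projection queries shield its dependents.
        eval_always
        no_hash
        desc { "building the illustrative whole-crate index" }
    }

    query my_item_info(key: DefId) -> MyItemInfo<'tcx> {
        // Only worth caching for local items; upstream crates already
        // provide this in their metadata.
        cache_on_disk_if { key.is_local() }
        desc { |tcx| "computing info for `{}`", tcx.def_path_str(key) }
    }
}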

The Projection Query Pattern

It's interesting to note that eval_always and no_hash can be used together in the so-called "projection query" pattern. It is often the case that there is one query that depends on the entirety of the compiler's input (e.g. the indexed HIR) and another query that projects individual values out of this monolithic value (e.g. a HIR item with a certain DefId). These projection queries allow for building change propagation "firewalls" because even if the result of the monolithic query changes (which it is very likely to do) the small projections can still mostly be marked as green.

  +------------+
  |            |           +---------------+           +--------+
  |            | <---------| projection(x) | <---------| foo(a) |
  |            |           +---------------+           +--------+
  |            |
  | monolithic |           +---------------+           +--------+
  |   query    | <---------| projection(y) | <---------| bar(b) |
  |            |           +---------------+           +--------+
  |            |
  |            |           +---------------+           +--------+
  |            | <---------| projection(z) | <---------| baz(c) |
  |            |           +---------------+           +--------+
  +------------+

Let's assume that the result of monolithic_query changes in such a way that the result of projection(x) also changes, i.e. both of their dep-nodes are marked as red. As a consequence, foo(a) needs to be re-executed; but bar(b) and baz(c) can be marked as green. However, if foo, bar, and baz had depended directly on monolithic_query, then all of them would have had to be re-evaluated.

This pattern works even without eval_always and no_hash but the two modifiers can be used to avoid unnecessary overhead. If the monolithic query is likely to change at any minor modification of the compiler's input it makes sense to mark it as eval_always, thus getting rid of its dependency tracking cost. And it always makes sense to mark the monolithic query as no_hash because we have the projections to take care of keeping things green as much as possible.
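
In the pseudocode style of the earlier examples, the pattern looks roughly like this (provider names are hypothetical):

fn monolithic_query_provider(tcx, _key: ()) -> Index {
    // Reads essentially all of the input, so it is very likely to be red
    // after any change; marked `eval_always` and `no_hash`.
    build_index_of_whole_crate(tcx)
}

fn projection_query_provider(tcx, key: DefId) -> IndexEntry {
    // Reads a single entry of the index. Its result *is* fingerprinted,
    // so a query like foo(a) that only reads this entry can stay green
    // even when other entries of the index have changed.
    tcx.monolithic_query(()).entry_for(key)
}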

Shortcomings of the Current System

There are many things that still can be improved.

Incrementality of on-disk data structures

The current system is not able to update on-disk caches and the dependency graph in-place. Instead it has to rewrite each file entirely in each compilation session. The overhead of doing so is a few percent of total compilation time.

Unnecessary data dependencies

Data structures used as query results could be factored in a way that removes edges from the dependency graph. In particular, "span" information is very volatile, so including it in a query result increases the chance that the result won't be reusable. See https://github.com/rust-lang/rust/issues/47389 for more information.

Debugging and Testing Dependencies

Testing the dependency graph

There are various ways to write tests against the dependency graph. The simplest mechanisms are the #[rustc_if_this_changed] and #[rustc_then_this_would_need] annotations. These are used in ui tests to test whether the expected set of paths exist in the dependency graph.

As an example, see tests/ui/dep-graph/dep-graph-caller-callee.rs, or the tests below.

#[rustc_if_this_changed]
fn foo() { }

#[rustc_then_this_would_need(TypeckTables)] //~ ERROR OK
fn bar() { foo(); }

This should be read as

If this (foo) is changed, then this (i.e. bar)'s TypeckTables would need to be changed.

Technically, what occurs is that the test is expected to emit the string "OK" on stderr, associated with this line.

You could also add the lines

#[rustc_then_this_would_need(TypeckTables)] //~ ERROR no path
fn baz() { }

Whose meaning is

If foo is changed, then baz's TypeckTables does not need to be changed. The macro must emit an error, and the error message must contain "no path".

Recall that the //~ ERROR OK is a comment from the point of view of the Rust code we test, but is meaningful from the point of view of the test itself.

Debugging the dependency graph

Dumping the graph

The compiler is also capable of dumping the dependency graph for your debugging pleasure. To do so, pass the -Z dump-dep-graph flag. The graph will be dumped to dep_graph.{txt,dot} in the current directory. You can override the filename with the RUST_DEP_GRAPH environment variable.

Frequently, though, the full dep graph is quite overwhelming and not particularly helpful. Therefore, the compiler also allows you to filter the graph. You can filter in three ways:

  1. All edges originating in a particular set of nodes (usually a single node).
  2. All edges reaching a particular set of nodes.
  3. All edges that lie between given start and end nodes.

To filter, use the RUST_DEP_GRAPH_FILTER environment variable, which should look like one of the following:

source_filter     // nodes originating from source_filter
-> target_filter  // nodes that can reach target_filter
source_filter -> target_filter // nodes in between source_filter and target_filter

source_filter and target_filter are &-separated lists of strings. A node is considered to match a filter if all of those strings appear in its label. So, for example:

RUST_DEP_GRAPH_FILTER='-> TypeckTables'

would select the predecessors of all TypeckTables nodes. Usually though you want the TypeckTables node for some particular fn, so you might write:

RUST_DEP_GRAPH_FILTER='-> TypeckTables & bar'

This will select only the predecessors of TypeckTables nodes for functions with bar in their name.

Perhaps you are finding that when you change foo you need to re-type-check bar, but you don't think you should have to. In that case, you might do:

RUST_DEP_GRAPH_FILTER='Hir & foo -> TypeckTables & bar'

This will dump out all the nodes that lead from Hir(foo) to TypeckTables(bar), from which you can (hopefully) see the source of the erroneous edge.

Tracking down incorrect edges

Sometimes, after you dump the dependency graph, you will find some path that should not exist, but you will not be quite sure how it came to be. When the compiler is built with debug assertions, it can help you track that down. Simply set the RUST_FORBID_DEP_GRAPH_EDGE environment variable to a filter. Every edge created in the dep-graph will be tested against that filter – if it matches, a bug! is reported, so you can easily see the backtrace (RUST_BACKTRACE=1).

The syntax for these filters is the same as described in the previous section. However, note that this filter is applied to every edge and doesn't handle longer paths in the graph, unlike the previous section.

Example:

You find that there is a path from the Hir of foo to the type check of bar and you don't think there should be. You dump the dep-graph as described in the previous section and open dep-graph.txt to see something like:

Hir(foo) -> Collect(bar)
Collect(bar) -> TypeckTables(bar)

That first edge looks suspicious to you. So you set RUST_FORBID_DEP_GRAPH_EDGE to Hir&foo -> Collect&bar, re-run, and then observe the backtrace. Voila, bug fixed!

How Salsa works

This chapter is based on the explanation given by Niko Matsakis in this video about Salsa. To find out more you may want to watch Salsa In More Depth, also by Niko Matsakis.

As of November 2022, although Salsa is inspired by (among other things) rustc's query system, it is not used directly in rustc. It is used in chalk, an implementation of Rust's trait system, and extensively in rust-analyzer, the official implementation of the language server protocol for Rust, but there are no medium or long-term concrete plans to integrate it into the compiler.

What is Salsa?

Salsa is a library for incremental recomputation. This means it allows reusing computations that were already done in the past to increase the efficiency of future computations.

The objectives of Salsa are:

  • Provide that functionality in an automatic way, so reusing old computations is done automatically by the library.
  • Do so in a "sound", or "correct", way, so that the results are the same as if the computation had been done from scratch.

Salsa's actual model is much richer, allowing many kinds of inputs and many different outputs. For example, integrating Salsa with an IDE could mean that the inputs could be manifests (Cargo.toml, rust-toolchain.toml), entire source files (foo.rs), snippets and so on. The outputs of such an integration could range from a binary executable, to lints, types (for example, if a user selects a certain variable and wishes to see its type), completions, etc.

How does it work?

The first thing that Salsa has to do is identify the "base inputs" that are not something computed but given as input.

Salsa then has to identify the intermediate, "derived" values. These are values that the library produces; for each derived value there is a "pure" function that computes it.

For example, there might be a function ast(x: Path) -> AST. The produced Abstract Syntax Tree (AST) isn't a final value, it's an intermediate value that the library would use for the computation.

This means that when you try to compute with the library, Salsa is going to compute various derived values, and eventually read the input and produce the result for the asked computation.

In the course of computing, Salsa tracks which inputs were accessed and which values are derived. This information is used to determine what's going to happen when the inputs change: are the derived values still valid?

This doesn't necessarily mean that each computation downstream from the input is going to be checked, which could be costly. Salsa only needs to check each downstream computation until it finds one that isn't changed. At that point, it won't check other derived computations since they wouldn't need to change.

It's helpful to think about this as a graph with nodes. Each derived value has a dependency on other values, which could themselves be either base or derived. Base values don't have a dependency.

I <- A <- C ...
          |
J <- B <--+

When an input I changes, the derived value A could change. The derived value B, which does not depend on I, A, or any value derived from A or I, is not subject to change. Therefore, Salsa can reuse the computation done for B in the past, without having to compute it again.

The computation could also terminate early. Keeping the same graph as before, say that input I has changed in some way (and input J hasn't), but when computing A again, it's found that A hasn't changed from the previous computation. This leads to an "early termination", because there's no need to check if C needs to change, since neither of C's direct inputs, A and B, has changed.

Key Salsa concepts

Query

A query is some value that Salsa can access in the course of computation. Each query can have a number of keys (from 0 to many), and all queries have a result, akin to functions. 0-key queries are called "input" queries.

Database

The database is basically the context for the entire computation, it's meant to store Salsa's internal state, all intermediate values for each query, and anything else that the computation might need. The database must know all the queries the library is going to do before it can be built, but they don't need to be specified in the same place.

After the database is formed, it can be accessed with queries that are very similar to functions. Since each query's result is stored in the database, when a query is invoked N times, it will return N cloned results, without having to recompute the query (unless the input has changed in such a way that it warrants recomputation).

For each input query (0-key), a "set" method is generated, allowing the user to change the output of such a query and potentially invalidating previously memoized values.

Query Groups

A query group is a set of queries which have been defined together as a unit. The database is formed by combining query groups. Query groups are akin to "Salsa modules".

A set of queries in a query group are just a set of methods in a trait.

To create a query group a trait annotated with a specific attribute (#[salsa::query_group(...)]) has to be created.

An argument must also be provided to said attribute; Salsa uses it as the name of a struct that is referenced later, when the database is created.

Example input query group:

/// This attribute will process this tree, produce this tree as output, and produce
/// a bunch of intermediate stuff that Salsa also uses. One of these things is a
/// "StorageStruct", whose name we have specified in the attribute.
///
/// This query group is a bunch of **input** queries, that do not rely on any
/// derived input.
#[salsa::query_group(InputsStorage)]
pub trait Inputs {
    /// This attribute (`#[salsa::input]`) indicates that this query is a base
    /// input, therefore `set_manifest` is going to be auto-generated
    #[salsa::input]
    fn manifest(&self) -> Manifest;

    #[salsa::input]
    fn source_text(&self, name: String) -> String;
}

To create a derived query group, one must specify which other query groups this one depends on by specifying them as supertraits, as seen in the following example:

/// This query group is going to contain queries that depend on derived values.
/// A query group can access another query group's queries by specifying the
/// dependency as a supertrait. Query groups can be stacked as much as needed using
/// that pattern.
#[salsa::query_group(ParserStorage)]
pub trait Parser: Inputs {
    /// This query `ast` is not an input query, it's a derived query; this
    /// means that a definition is necessary.
    fn ast(&self, name: String) -> String;
}

When creating a derived query, the implementation of said query must be defined outside the trait. The definition must take a database parameter as an impl Trait (or dyn Trait), where Trait is the query group that the definition belongs to, in addition to the other keys.

/// This is going to be the definition of the `ast` query in the `Parser` trait.
/// So, when the query `ast` is invoked, and it needs to be recomputed, Salsa is
/// going to call this function and it's going to give it the database as `impl
/// Parser`. The function doesn't need to be aware of all the queries of all the
/// query groups
fn ast(db: &impl Parser, name: String) -> String {
    //! Note, `impl Parser` is used here but `dyn Parser` works just as well
    /* code */
    // By passing an `impl Parser`, the queries of the `Inputs` group are
    // also accessible here
    let source_text = db.source_text(name);
    /* do the actual parsing */
    return ast;
}

Eventually, after all the query groups have been defined, the database can be created by declaring a struct.

To specify which query groups are going to be part of the database an attribute (#[salsa::database(...)]) must be added. The argument of said attribute is a list of identifiers, specifying the query groups storages.

///This attribute specifies which query groups are going to be in the database
#[salsa::database(InputsStorage, ParserStorage)]
#[derive(Default)] //optional!
struct MyDatabase {
    ///You also need this one field
    runtime : salsa::Runtime<MyDatabase>,
}
///And this trait has to be implemented
impl salsa::Database for MyDatabase {
    fn salsa_runtime(&self) -> &salsa::Runtime<MyDatabase> {
        &self.runtime
    }
}

Example usage:

fn main() {
    let mut db = MyDatabase::default();
    db.set_manifest(...);
    db.set_source_text(...);
    loop {
        db.ast(...); //will reuse results
        db.set_source_text(...);
    }
}

Memory Management in Rustc

Generally rustc tries to be pretty careful about how it manages memory. The compiler allocates a lot of data structures throughout compilation, and if we are not careful, it will take a lot of time and space to do so.

One of the main ways the compiler manages this is by using arenas and interning.

Arenas and Interning

Since A LOT of data structures are created during compilation, for performance reasons we allocate them from a global memory pool. Each is allocated once from a long-lived arena. This is called arena allocation. This system reduces allocations/deallocations of memory. It also allows for easy comparison of types (more on types here) for equality: for each interned type X, we implement PartialEq for X, so we can just compare pointers. The CtxtInterners type contains a bunch of maps of interned types and the arena itself.
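
The general idea can be shown with a small, self-contained sketch (this is not the CtxtInterners code; it interns strings instead of types and uses Box::leak to stand in for a long-lived arena):

use std::collections::HashMap;

#[derive(Default)]
struct Interner {
    map: HashMap<String, &'static str>,
}

impl Interner {
    fn intern(&mut self, value: &str) -> &'static str {
        if let Some(&interned) = self.map.get(value) {
            return interned; // already allocated once; reuse the same pointer
        }
        // "Allocate in the arena": here we simply leak the allocation.
        let interned: &'static str = Box::leak(value.to_owned().into_boxed_str());
        self.map.insert(value.to_owned(), interned);
        interned
    }
}

fn main() {
    let mut interner = Interner::default();
    let a = interner.intern("Vec<u32>");
    let b = interner.intern("Vec<u32>");
    // Equality of interned values is just pointer equality.
    assert!(std::ptr::eq(a, b));
}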

Example: ty::TyKind

Taking the example of ty::TyKind, which represents a type in the compiler (you can read more here): each time we want to construct a type, the compiler doesn't naively allocate from the buffer. Instead, we check if that type was already constructed. If it was, we just get the same pointer we had before, otherwise we make a fresh pointer. With this scheme, if we want to know if two types are the same, all we need to do is compare the pointers, which is efficient. ty::TyKind should never be constructed on the stack, and it would be unusable if done so. You always allocate them from this arena and you always intern them so they are unique.

At the beginning of the compilation we make a buffer and each time we need to allocate a type we use some of this memory buffer. If we run out of space we get another one. The lifetime of that buffer is 'tcx. Our types are tied to that lifetime, so when compilation finishes all the memory related to that buffer is freed and our 'tcx references would be invalid.

In addition to types, there are a number of other arena-allocated data structures that you can allocate, and which are found in this module. Here are a few examples:

  • GenericArgs, allocated with mk_args – this will intern a slice of types, often used to specify the values to be substituted for generics args (e.g. HashMap<i32, u32> would be represented as a slice &'tcx [tcx.types.i32, tcx.types.u32]).
  • TraitRef, typically passed by value – a trait reference consists of a reference to a trait along with its various type parameters (including Self), like i32: Display (here, the def-id would reference the Display trait, and the args would contain i32). Note that def-id is defined and discussed in depth in the AdtDef and DefId section.
  • Predicate defines something the trait system has to prove (see traits module).

The tcx and how it uses lifetimes

The typing context (tcx) is the central data structure in the compiler. It is the context that you use to perform all manner of queries. The struct TyCtxt defines a reference to this shared context:

tcx: TyCtxt<'tcx>
//          ----
//          |
//          arena lifetime

As you can see, the TyCtxt type takes a lifetime parameter. When you see a reference with a lifetime like 'tcx, you know that it refers to arena-allocated data (or data that lives as long as the arenas, anyhow).

A Note On Lifetimes

The Rust compiler is a fairly large program containing lots of big data structures (e.g. the Abstract Syntax Tree (AST), High-Level Intermediate Representation (HIR), and the type system) and as such, arenas and references are heavily relied upon to minimize unnecessary memory use. This manifests itself in the way people can plug into the compiler (i.e. the driver), preferring a "push"-style API (callbacks) instead of the more Rust-ic "pull" style (think the Iterator trait).

Thread-local storage and interning are used a lot through the compiler to reduce duplication while also preventing a lot of the ergonomic issues due to many pervasive lifetimes. The rustc_middle::ty::tls module is used to access these thread-locals, although you should rarely need to touch it.

Serialization in Rustc

rustc has to serialize and deserialize various data during compilation. Specifically:

  • "Crate metadata", consisting mainly of query outputs, is serialized into a binary format in rlib and rmeta files that are output when compiling a library crate. These rlib and rmeta files are then deserialized by the crates which depend on that library.
  • Certain query outputs are serialized in a binary format to persist incremental compilation results.
  • CrateInfo is serialized to JSON when the -Z no-link flag is used, and deserialized from JSON when the -Z link-only flag is used.

The Encodable and Decodable traits

The rustc_serialize crate defines two traits for types which can be serialized:

pub trait Encodable<S: Encoder> {
    fn encode(&self, s: &mut S) -> Result<(), S::Error>;
}

pub trait Decodable<D: Decoder>: Sized {
    fn decode(d: &mut D) -> Result<Self, D::Error>;
}

It also defines implementations of these for various common standard library primitive types such as integer types, floating point types, bool, char, str, etc.

For types that are constructed from those types, Encodable and Decodable are usually implemented by derives. These generate implementations that forward encoding and decoding to the fields of the struct or enum. For a struct those impls look something like this:

#![feature(rustc_private)]
extern crate rustc_serialize;
use rustc_serialize::{Decodable, Decoder, Encodable, Encoder};

struct MyStruct {
    int: u32,
    float: f32,
}

impl<E: Encoder> Encodable<E> for MyStruct {
    fn encode(&self, s: &mut E) -> Result<(), E::Error> {
        s.emit_struct("MyStruct", 2, |s| {
            s.emit_struct_field("int", 0, |s| self.int.encode(s))?;
            s.emit_struct_field("float", 1, |s| self.float.encode(s))
        })
    }
}

impl<D: Decoder> Decodable<D> for MyStruct {
    fn decode(s: &mut D) -> Result<MyStruct, D::Error> {
        s.read_struct("MyStruct", 2, |d| {
            let int = d.read_struct_field("int", 0, Decodable::decode)?;
            let float = d.read_struct_field("float", 1, Decodable::decode)?;

            Ok(MyStruct { int, float })
        })
    }
}

Encoding and Decoding arena allocated types

rustc has a lot of arena allocated types. Deserializing these types isn't possible without access to the arena that they need to be allocated on. The TyDecoder and TyEncoder traits are supertraits of Decoder and Encoder that allow access to a TyCtxt.

Types which contain arena allocated types can then bound the type parameter of their Encodable and Decodable implementations with these traits. For example

impl<'tcx, D: TyDecoder<'tcx>> Decodable<D> for MyStruct<'tcx> {
    /* ... */
}

The TyEncodable and TyDecodable derive macros will expand to such an implementation.
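
For example, a hypothetical arena-allocated type could use the derives like this, so that it can be encoded and decoded by anything implementing TyEncoder/TyDecoder:

#[derive(TyEncodable, TyDecodable)]
struct MyTyWrapper<'tcx> {
    ty: Ty<'tcx>,
    verbose: bool,
}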

Decoding the actual arena allocated type is harder, because some of the implementations can't be written due to the orphan rules. To work around this, the RefDecodable trait is defined in rustc_middle. This can then be implemented for any type. The TyDecodable macro will call RefDecodable to decode references, but various generic code needs types to actually be Decodable with a specific decoder.

For interned types, instead of manually implementing RefDecodable, it may be simpler to use a newtype wrapper (like ty::Predicate) and manually implement Encodable and Decodable for it.

Derive macros

The rustc_macros crate defines various derives to help implement Decodable and Encodable.

Shorthands

Ty can be deeply recursive, if each Ty was encoded naively then crate metadata would be very large. To handle this, each TyEncoder has a cache of locations in its output where it has serialized types. If a type being encoded is in the cache, then instead of serializing the type as usual, the byte offset within the file being written is encoded instead. A similar scheme is used for ty::Predicate.

LazyValue<T>

Crate metadata is initially loaded before the TyCtxt<'tcx> is created, so some deserialization needs to be deferred from the initial loading of metadata. The LazyValue<T> type wraps the (relative) offset in the crate metadata where a T has been serialized. There are also some variants, LazyArray<T> and LazyTable<I, T>.

The LazyArray<T> and LazyTable<I, T> types provide some functionality over Lazy<Vec<T>> and Lazy<HashMap<I, T>>:

  • It's possible to encode a LazyArray<T> directly from an Iterator, without first collecting into a Vec<T>.
  • Indexing into a LazyTable<I, T> does not require decoding entries other than the one being read.

Note: LazyValue<T> does not cache its value after being deserialized the first time. Instead, the query system itself is the main way of caching these results.

Specialization

A few types, most notably DefId, need to have different implementations for different Encoders. This is currently handled by ad-hoc specializations, for example: DefId has a default implementation of Encodable<E> and a specialized one for Encodable<CacheEncoder>.

Parallel Compilation

As of November 2024, the parallel front-end is undergoing significant changes, so this page contains quite a bit of outdated information.

Tracking issue: https://github.com/rust-lang/rust/issues/113349

As of November 2024, most of the Rust compiler is now parallelized.

  • The codegen part is executed concurrently by default. You can use the -C codegen-units=n option to control the number of concurrent tasks.
  • The parts between HIR lowering and codegen, such as type checking, borrow checking, and MIR optimization, are parallelized in the nightly version. Currently, they are executed serially by default, and parallelization is enabled manually by the user with the -Z threads=n option.
  • Other parts, such as lexical parsing, HIR lowering, and macro expansion, are still executed in serial mode.

The following sections are kept for now but are quite outdated.

Code Generation

During monomorphization the compiler splits up all the code to be generated into smaller chunks called codegen units. These are then generated by independent instances of LLVM running in parallel. At the end, the linker is run to combine all the codegen units together into one binary. This process occurs in the rustc_codegen_ssa::base module.

Data Structures

The underlying thread-safe data-structures used in the parallel compiler can be found in the rustc_data_structures::sync module. These data structures are implemented differently depending on whether parallel-compiler is true.

| data structure | parallel | non-parallel |
|---|---|---|
| Lrc | std::sync::Arc | std::rc::Rc |
| Weak | std::sync::Weak | std::rc::Weak |
| Atomic{Bool}/{Usize}/{U32}/{U64} | std::sync::atomic::Atomic{Bool}/{Usize}/{U32}/{U64} | (std::cell::Cell<bool/usize/u32/u64>) |
| OnceCell | std::sync::OnceLock | std::cell::OnceCell |
| Lock<T> | (parking_lot::Mutex<T>) | (std::cell::RefCell) |
| RwLock<T> | (parking_lot::RwLock<T>) | (std::cell::RefCell) |
| MTRef<'a, T> | &'a T | &'a mut T |
| MTLock<T> | (Lock<T>) | (T) |
| ReadGuard | parking_lot::RwLockReadGuard | std::cell::Ref |
| MappedReadGuard | parking_lot::MappedRwLockReadGuard | std::cell::Ref |
| WriteGuard | parking_lot::RwLockWriteGuard | std::cell::RefMut |
| MappedWriteGuard | parking_lot::MappedRwLockWriteGuard | std::cell::RefMut |
| LockGuard | parking_lot::MutexGuard | std::cell::RefMut |
| MappedLockGuard | parking_lot::MappedMutexGuard | std::cell::RefMut |

  • These thread-safe data structures are interspersed throughout compilation, which can cause lock contention and degraded performance as the number of threads increases beyond 4. We therefore audit the use of these data structures, which leads either to refactorings that reduce the use of shared state, or to persistent documentation covering the specifics of the invariants, the atomicity, and the lock orderings.

  • On the other hand, we still need to figure out what other invariants during compilation might not hold in parallel compilation.

WorkerLocal

WorkerLocal is a special data structure implemented for parallel compilers. It holds worker-local values for each thread in a thread pool. You can only access the worker-local value through the Deref impl on the thread pool it was constructed on; it panics otherwise.

WorkerLocal is used to implement the Arena allocator in the parallel environment, which is critical in parallel queries. Its implementation is located in the rustc_data_structures::sync::worker_local module. However, in the non-parallel compiler, it is implemented as (OneThread<T>), whose T can be accessed directly through Deref::deref.

Parallel Iterator

The parallel iterators provided by the rayon crate are easy ways to implement parallelism. In the current implementation of the parallel compiler we use a custom fork of rayon to run tasks in parallel.

Some iterator functions are implemented to run loops in parallel when parallel-compiler is true.

| Function (Omit Send and Sync) | Introduction | Owning Module |
|---|---|---|
| par_iter<T: IntoParallelIterator>(t: T) -> T::Iter | generate a parallel iterator | rustc_data_structure::sync |
| par_for_each_in<T: IntoParallelIterator>(t: T, for_each: impl Fn(T::Item)) | generate a parallel iterator and run for_each on each element | rustc_data_structure::sync |
| Map::par_body_owners(self, f: impl Fn(LocalDefId)) | run f on all hir owners in the crate | rustc_middle::hir::map |
| Map::par_for_each_module(self, f: impl Fn(LocalDefId)) | run f on all modules and sub modules in the crate | rustc_middle::hir::map |
| ModuleItems::par_items(&self, f: impl Fn(ItemId)) | run f on all items in the module | rustc_middle::hir |
| ModuleItems::par_trait_items(&self, f: impl Fn(TraitItemId)) | run f on all trait items in the module | rustc_middle::hir |
| ModuleItems::par_impl_items(&self, f: impl Fn(ImplItemId)) | run f on all impl items in the module | rustc_middle::hir |
| ModuleItems::par_foreign_items(&self, f: impl Fn(ForeignItemId)) | run f on all foreign items in the module | rustc_middle::hir |
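
The underlying pattern is the same as upstream rayon's parallel iterators. A minimal standalone illustration using the public rayon crate (not the compiler's internal wrappers or its rayon fork) looks like this:

use rayon::prelude::*;

fn main() {
    let bodies: Vec<u32> = (0..1_000).collect();
    // Roughly what par_for_each_in does: run the closure on every element,
    // spreading the work across the thread pool.
    bodies.par_iter().for_each(|id| {
        let _result = *id % 7; // stand-in for per-item work such as checking one body
    });
}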

There are a lot of loops in the compiler which can possibly be parallelized using these functions. As of August 2022, scenarios where the parallel iterator function has been used are as follows:

| caller | scenario | callee |
|---|---|---|
| rustc_metadata::rmeta::encoder::prefetch_mir | Prefetch queries which will be needed later by metadata encoding | par_iter |
| rustc_monomorphize::collector::collect_crate_mono_items | Collect monomorphized items reachable from non-generic items | par_for_each_in |
| rustc_interface::passes::analysis | Check the validity of the match statements | Map::par_body_owners |
| rustc_interface::passes::analysis | MIR borrow check | Map::par_body_owners |
| rustc_typeck::check::typeck_item_bodies | Type check | Map::par_body_owners |
| rustc_interface::passes::hir_id_validator::check_crate | Check the validity of hir | Map::par_for_each_module |
| rustc_interface::passes::analysis | Check the validity of loops body, attributes, naked functions, unstable abi, const bodies | Map::par_for_each_module |
| rustc_interface::passes::analysis | Liveness and intrinsic checking of MIR | Map::par_for_each_module |
| rustc_interface::passes::analysis | Deathness checking | Map::par_for_each_module |
| rustc_interface::passes::analysis | Privacy checking | Map::par_for_each_module |
| rustc_lint::late::check_crate | Run per-module lints | Map::par_for_each_module |
| rustc_typeck::check_crate | Well-formedness checking | Map::par_for_each_module |

There are still many loops that have the potential to use parallel iterators.

Query System

The query model has some properties that make it actually feasible to evaluate multiple queries in parallel without too much effort:

  • All data a query provider can access is via the query context, so the query context can take care of synchronizing access.
  • Query results are required to be immutable so they can safely be used by different threads concurrently.

When a query foo is evaluated, the cache table for foo is locked.

  • If there already is a result, we can clone it, release the lock and we are done.
  • If there is no cache entry and no other active query invocation computing the same result, we mark the key as being "in progress", release the lock and start evaluating.
  • If there is another query invocation for the same key in progress, we release the lock and just block the thread until the other invocation has computed the result we are waiting for (a simplified sketch of this protocol is shown below).

Cycle error detection in the parallel compiler requires more complex logic than in single-threaded mode: when worker threads in parallel queries stop making progress because they are waiting on each other, the compiler uses an extra thread (named the deadlock handler) to detect, remove, and report the cycle error.
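
The locking protocol in the list above can be illustrated with a small standalone sketch; the key and value types, and the use of a plain Mutex plus Condvar, are stand-ins for the real query system machinery:

use std::collections::HashMap;
use std::sync::{Arc, Condvar, Mutex};

#[derive(Clone)]
enum Entry {
    InProgress,
    Done(u64),
}

struct QueryCache {
    table: Mutex<HashMap<u32, Entry>>,
    finished: Condvar,
}

impl QueryCache {
    fn get_or_compute(&self, key: u32, provider: impl Fn(u32) -> u64) -> u64 {
        let mut table = self.table.lock().unwrap();
        loop {
            match table.get(&key).cloned() {
                // Already computed: clone the result, release the lock, done.
                Some(Entry::Done(value)) => return value,
                // Someone else is computing it: block until they finish.
                Some(Entry::InProgress) => table = self.finished.wait(table).unwrap(),
                // Nobody is: mark the key in-progress, release the lock, and compute.
                None => {
                    table.insert(key, Entry::InProgress);
                    drop(table);
                    let value = provider(key);
                    self.table.lock().unwrap().insert(key, Entry::Done(value));
                    self.finished.notify_all();
                    return value;
                }
            }
        }
    }
}

fn main() {
    let cache = Arc::new(QueryCache { table: Mutex::new(HashMap::new()), finished: Condvar::new() });
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let cache = Arc::clone(&cache);
            std::thread::spawn(move || cache.get_or_compute(1, |k| u64::from(k) * 10))
        })
        .collect();
    for worker in workers {
        assert_eq!(worker.join().unwrap(), 10);
    }
}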

The parallel query feature still has implementation work to do, most of which relates to the Data Structures and Parallel Iterators discussed above. See this open feature tracking issue.

Rustdoc

As of November 2022, there are still a number of steps to complete before rustdoc rendering can be made parallel (see an open discussion of parallel rustdoc).

Resources

Here are some resources that can be used to learn more:

Rustdoc Internals

This page describes rustdoc's passes and modes. For an overview of rustdoc, see the "Rustdoc overview" chapter.

From Crate to Clean

In core.rs are two central items: the rustdoc::core::DocContext struct, and the rustdoc::core::run_global_ctxt function. The latter is where rustdoc calls out to rustc to compile a crate to the point where rustdoc can take over. The former is a state container used when crawling through a crate to gather its documentation.

The main process of crate crawling is done in clean/mod.rs through several functions with names that start with clean_. Each function accepts an hir or ty data structure, and outputs a clean structure used by rustdoc. For example, this function for converting lifetimes:

fn clean_lifetime<'tcx>(lifetime: &hir::Lifetime, cx: &mut DocContext<'tcx>) -> Lifetime {
    if let Some(
        rbv::ResolvedArg::EarlyBound(did)
        | rbv::ResolvedArg::LateBound(_, _, did)
        | rbv::ResolvedArg::Free(_, did),
    ) = cx.tcx.named_bound_var(lifetime.hir_id)
        && let Some(lt) = cx.args.get(&did).and_then(|arg| arg.as_lt())
    {
        return lt.clone();
    }
    Lifetime(lifetime.ident.name)
}

Also, clean/mod.rs defines the types for the "cleaned" Abstract Syntax Tree (AST) used later to render documentation pages. Each is usually accompanied by a clean_* function that takes some AST or High-Level Intermediate Representation (HIR) type from rustc and converts it into the appropriate "cleaned" type. "Big" items like modules or associated items may have some extra processing in their clean functions, but for the most part these impls are straightforward conversions. The "entry point" to this module is clean::utils::krate, which is called by run_global_ctxt.

The first step in clean::utils::krate is to invoke visit_ast::RustdocVisitor to process the module tree into an intermediate visit_ast::Module. This is the step that actually crawls the rustc_hir::Crate, normalizing various aspects of name resolution, such as:

  • handling #[doc(inline)] and #[doc(no_inline)]
  • handling import globs and cycles, so there are no duplicates or infinite directory trees
  • inlining public use exports of private items, or showing a "Reexport" line in the module page
  • inlining items with #[doc(hidden)] if the base item is hidden but the reexport is not
  • showing #[macro_export]-ed macros at the crate root, regardless of where they're defined

After this step, clean::krate invokes clean_doc_module, which actually converts the HIR items to the cleaned AST. This is also the step where cross-crate inlining is performed, which requires converting rustc_middle data structures into the cleaned AST.

The other major thing that happens in clean/mod.rs is the collection of doc comments and #[doc=""] attributes into a separate field of the Attributes struct, present on anything that gets hand-written documentation. This makes it easier to collect this documentation later in the process.

The primary output of this process is a clean::types::Crate with a tree of Items which describe the publicly-documentable items in the target crate.

Passes Anything But a Gas Station (or: Hot Potato)

Before moving on to the next major step, a few important "passes" occur over the cleaned AST. Several of these passes are lints and reports, but some of them mutate or generate new items.

These are all implemented in the librustdoc/passes directory, one file per pass. By default, all of these passes are run on a crate, but the ones regarding dropping private/hidden items can be bypassed by passing --document-private-items to rustdoc. Note that unlike the previous set of AST transformations, the passes are run on the cleaned crate.

Here is the list of passes as of March 2023:

  • calculate-doc-coverage calculates information used for the --show-coverage flag.

  • check-doc-test-visibility runs doctest visibility–related lints. This pass runs before strip-private, which is why it needs to be separate from run-lints.

  • collect-intra-doc-links resolves intra-doc links.

  • collect-trait-impls collects trait impls for each item in the crate. For example, if we define a struct that implements a trait, this pass will note that the struct implements that trait.

  • propagate-doc-cfg propagates #[doc(cfg(...))] to child items.

  • run-lints runs some of rustdoc's lints, defined in passes/lint. This is the last pass to run.

    • bare_urls detects links that are not linkified, e.g., Markdown such as Go to https://example.com/. It suggests wrapping the link in angle brackets to linkify it: Go to <https://example.com/>. This is the code behind the rustdoc::bare_urls lint.

    • check_code_block_syntax validates syntax inside Rust code blocks (```rust)

    • html_tags detects invalid HTML (like an unclosed <span>) in doc comments.

  • strip-hidden and strip-private strip all doc(hidden) and private items from the output. strip-private implies strip-priv-imports. Basically, the goal is to remove items that are not relevant for public documentation. This pass is skipped when --document-hidden-items is passed.

  • strip-priv-imports strips all private import statements (use, extern crate) from a crate. This is necessary because rustdoc will handle public imports by either inlining the item's documentation to the module or creating a "Reexports" section with the import in it. The pass ensures that all of these imports are actually relevant to documentation. It is technically only run when --document-private-items is passed, but strip-private accomplishes the same thing.

  • strip-private strips all private items from a crate which cannot be seen externally. This pass is skipped when --document-private-items is passed.

There is also a stripper module in librustdoc/passes, but it is a collection of utility functions for the strip-* passes and is not a pass itself.

From Clean To HTML

This is where the "second phase" in rustdoc begins. This phase primarily lives in the librustdoc/formats and librustdoc/html folders, and it all starts with formats::renderer::run_format. This code is responsible for setting up a type that impl FormatRenderer, which for HTML is Context.

This structure contains methods that get called by run_format to drive the doc rendering, which includes:

  • init generates static.files, as well as search index and src/
  • item generates the item HTML files themselves
  • after_krate generates other global resources like all.html

In item, the "page rendering" occurs, via a mixture of Askama templates and manual write!() calls, starting in html/layout.rs. The parts that have not been converted to templates occur within a series of std::fmt::Display implementations and functions that pass around a &mut std::fmt::Formatter.

The parts that actually generate HTML from the items and documentation start with print_item defined in html/render/print_item.rs, which switches out to one of several item_* functions based on the kind of Item being rendered.

Depending on what kind of rendering code you're looking for, you'll probably find it either in html/render/mod.rs for major items like "what sections should I print for a struct page" or html/format.rs for smaller component pieces like "how should I print a where clause as part of some other item".

Whenever rustdoc comes across an item that should print hand-written documentation alongside, it calls out to html/markdown.rs, which interfaces with the Markdown parser. This is exposed as a series of types that wrap a string of Markdown and implement fmt::Display to emit HTML text. It takes special care to enable certain features like footnotes and tables, and to add syntax highlighting to Rust code blocks (via html/highlight.rs) before running the Markdown parser. There's also a function find_codes, which is called by find_testable_code, that specifically scans for Rust code blocks so the test-runner code can find all the doctests in the crate.

From Soup to Nuts (or: "An Unbroken Thread Stretches From Those First Cells To Us")

It's important to note that rustdoc can ask the compiler for type information directly, even during HTML generation. This didn't used to be the case, and a lot of rustdoc's architecture was designed around not doing that, but a TyCtxt is now passed to formats::renderer::run_format, which is used to run generation for both HTML and the (unstable as of March 2023) JSON format.

This change has allowed other changes to remove data from the "clean" AST that can be easily derived from TyCtxt queries, and we'll usually accept PRs that remove fields from "clean" (it's been soft-deprecated), but this is complicated from two other constraints that rustdoc runs under:

  • Docs can be generated for crates that don't actually pass type checking. This is used for generating docs that cover mutually-exclusive platform configurations, such as libstd having a single package of docs that cover all supported operating systems. This means rustdoc has to be able to generate docs from HIR.
  • Docs can inline across crates. Since crate metadata doesn't contain HIR, it must be possible to generate inlined docs from the rustc_middle data.

The "clean" AST acts as a common output format for both input formats. There is also some data in clean that doesn't correspond directly to HIR, such as synthetic impls for auto traits and blanket impls generated by the collect-trait-impls pass.

Some additional data is stored in html::render::context::{Context, SharedContext}. These two types serve as ways to segregate rustdoc's data for an eventual future with multithreaded doc generation, as well as just keeping things organized:

  • Context stores data used for generating the current page, such as its path, a list of HTML IDs that have been used (to avoid duplicate id=""), and the pointer to SharedContext.
  • SharedContext stores data that does not vary by page, such as the tcx pointer, and a list of all types.

Other Tricks Up Its Sleeve

All this describes the process for generating HTML documentation from a Rust crate, but there are a couple of other major modes that rustdoc runs in. It can also be run on a standalone Markdown file, or it can run doctests on Rust code or standalone Markdown files. For the former, it shortcuts straight to html/markdown.rs, optionally including a mode which inserts a Table of Contents into the output HTML.

For the latter, rustdoc runs a similar partial-compilation to get relevant documentation in test.rs, but instead of going through the full clean and render process, it runs a much simpler crate walk to grab just the hand-written documentation. Combined with the aforementioned "find_testable_code" in html/markdown.rs, it builds up a collection of tests to run before handing them off to the test runner. One notable location in test.rs is the function make_test, which is where hand-written doctests get transformed into something that can be executed.

Some extra reading about make_test can be found here.

Dotting i's And Crossing t's

So that's rustdoc's code in a nutshell, but there's more in the compiler that deals with it. Since we have the full compiletest suite at hand, there's a set of tests in tests/rustdoc that make sure the final HTML is what we expect in various situations. These tests also use a supplementary script, src/etc/htmldocck.py, that allows them to look through the final HTML using XPath notation to get a precise look at the output. The full description of all the commands available to rustdoc tests (e.g. @has and @matches) is in htmldocck.py.

To use multiple crates in a rustdoc test, add // aux-build:filename.rs to the top of the test file. filename.rs should be placed in an auxiliary directory relative to the test file with the comment. If you need to build docs for the auxiliary file, use // build-aux-docs.

In addition, there are separate tests for the search index and rustdoc's ability to query it. The files in tests/rustdoc-js each contain a different search query and the expected results, broken out by search tab. These files are processed by a script in src/tools/rustdoc-js and the Node.js runtime. These tests don't have as thorough of a writeup, but a broad example that features results in all tabs can be found in basic.js. The basic idea is that you match a given QUERY with a set of EXPECTED results, complete with the full item path of each item.

Testing Locally

Some features of the generated HTML documentation might require local storage to be used across pages, which doesn't work well without an HTTP server. To test these features locally, you can run a local HTTP server, like this:

$ ./x doc library
# The documentation has been generated into `build/[YOUR ARCH]/doc`.
$ python3 -m http.server -d build/[YOUR ARCH]/doc

Now you can browse your documentation just as you would if it were hosted on the internet. For example, the URL for std will be rust/std/.

See Also

Rustdoc search

Rustdoc Search is two programs: search_index.rs and search.js. The first generates a nasty JSON file with a full list of items and function signatures in the crates in the doc bundle, and the second reads it, turns it into some in-memory structures, and scans them linearly to search.

Search index format

search.js calls this Raw, because it turns it into a more normal object tree after loading it. For space savings, it's also written without newlines or spaces.

[
    [ "crate_name", {
        // name
        "n": ["function_name", "Data"],
        // type
        "t": "HF",
        // parent module
        "q": [[0, "crate_name"]],
        // parent type
        "i": [2, 0],
        // type dictionary
        "p": [[1, "i32"], [1, "str"], [5, "Data", 0]],
        // function signature
        "f": "{{gb}{d}}`", // [[3, 1], [2]]
        // impl disambiguator
        "b": [],
        // deprecated flag
        "c": "OjAAAAAAAAA=", // empty bitmap
        // empty description flag
        "e": "OjAAAAAAAAA=", // empty bitmap
        // aliases
        "a": [["get_name", 0]],
        // description shards
        "D": "g", // 3
        // inlined re-exports
        "r": [],
    }]
]

src/librustdoc/html/static/js/externs.js defines an actual schema in a Closure @typedef.

| Key | Name | Description |
|---|---|---|
| n | Names | Item names |
| t | Item Type | One-char item type code |
| q | Parent module | Map<index, path> |
| i | Parent type | list of indexes |
| f | Function signature | encoded |
| b | Impl disambiguator | Map<index, string> |
| c | Deprecation flag | roaring bitmap |
| e | Description is empty | roaring bitmap |
| p | Type dictionary | [[item type, path]] |
| a | Alias | Map<string, index> |
| D | description shards | encoded |

The above index defines a crate called crate_name with a free function called function_name and a struct called Data, with the type signature Data, i32 -> str, and an alias, get_name, that equivalently refers to function_name.

The search index needs to fit the needs of the rustdoc compiler, the search.js frontend, and also be compact and fast to decode. It makes a lot of compromises:

  • The rustdoc compiler runs on one crate at a time, so each crate has an essentially separate search index. It merges them by having each crate on one line and looking at the first quoted string.
  • Names in the search index are given in their original case and with underscores. When the search index is loaded, search.js stores the original names for display, but also folds them to lowercase and strips underscores for search. You'll see them called normalized.
  • The f array stores types as offsets into the p array. These types might actually be from another crate, so search.js has to turn the numbers into names and then back into numbers to deduplicate them if multiple crates in the same index mention the same types.
  • It's a JSON file, but not designed to be human-readable. Browsers already include an optimized JSON decoder, so this saves on search.js code and performs better for small crates, but instead of using objects like normal JSON formats do, it tries to put data of the same type next to each other so that the sliding window used by DEFLATE can find redundancies. Where search.js does its own compression, it's designed to save memory when the file is finally loaded, not just size on disk or network transfer.

Parallel arrays and indexed maps

Abstractly, Rustdoc Search data is a table, stored in column-major form. Most data in the index represents a set of parallel arrays (the "columns") which refer to the same data if they're at the same position.
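
In Rust terms, this column-major layout is just a struct of parallel vectors rather than a vector of structs (the field names here are illustrative, not search.js's actual ones):

// A "row" is reassembled by taking the same position from each column.
struct Columns {
    names: Vec<String>,       // the `n` column
    item_types: Vec<char>,    // the `t` column
    parent_types: Vec<usize>, // the `i` column; 0 means "no parent"
}

impl Columns {
    fn row(&self, idx: usize) -> (&str, char, usize) {
        (self.names[idx].as_str(), self.item_types[idx], self.parent_types[idx])
    }
}

fn main() {
    let index = Columns {
        names: vec!["crate_name".into(), "function_name".into(), "Data".into()],
        item_types: vec!['D', 'H', 'F'],
        parent_types: vec![0, 2, 0],
    };
    assert_eq!(index.row(1), ("function_name", 'H', 2));
}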

For example, the above search index can be turned into this table:

| | n | t | d | q | i | f | b | c |
|---|---|---|---|---|---|---|---|---|
| 0 | crate_name | D | Documentation | NULL | 0 | NULL | NULL | 0 |
| 1 | function_name | H | This function gets the name of an integer with Data | crate_name | 2 | {{gb}{d}} | NULL | 0 |
| 2 | Data | F | The data struct | crate_name | 0 | ` | NULL | 0 |

The crate row is implied in most columns, since its type is known (it's a crate), it can't have a parent (crates form the root of the module tree), its name is specified as the map key, and function-specific data like the impl disambiguator can't apply either. However, it can still have a description and it can still be deprecated. The crate, therefore, has a primary key of 0.

The above code doesn't use c, which holds deprecated indices, or b, which maps indices to strings. If crate_name::function_name used both, it might look like this.

        "b": [[0, "impl-Foo-for-Bar"]],
        "c": "OjAAAAEAAAAAAAIAEAAAABUAbgZYCQ==",

This attaches a disambiguator to index 1 and marks it deprecated.

The advantage of this layout is that these APIs often have implicit structure that DEFLATE can take advantage of, but that rustdoc can't assume, like how names are usually CamelCase or snake_case while descriptions aren't. It also makes it easier to use sparse data for things like boolean flags.

q is a Map from the first applicable ID to a parent module path. This is a weird trick, but it makes more sense in pseudo-code:

let mut parent_module = "";
for (i, entry) in search_index.iter().enumerate() {
    if q.contains(i) {
        parent_module = q.get(i);
    }
    // ... do other stuff with `entry` ...
}

This is valid because everything has a parent module (even if it's just the crate itself), and is easy to assemble because the rustdoc generator sorts by path before serializing. Doing this allows rustdoc to not only make the search index smaller, but reuse the same string representing the parent path across multiple in-memory items.

Representing sparse columns

VLQ Hex

This format is, as far as I know, used nowhere other than rustdoc. It follows this grammar:

VLQHex = { VHItem | VHBackref }
VHItem = VHNumber | ( '{', {VHItem}, '}' )
VHNumber = { '@' | 'A' | 'B' | 'C' | 'D' | 'E' | 'F' | 'G' | 'H' | 'I' | 'J' | 'K' | 'L' | 'M' | 'N' | 'O' }, ( '`' | 'a' | 'b' | 'c' | 'd' | 'e' | 'f' | 'g' | 'h' | 'i' | 'j' | 'k' | 'l' | 'm' | 'n' | 'o' )
VHBackref = ( '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9' | ':' | ';' | '<' | '=' | '>' | '?' )

A VHNumber is a variable-length, self-terminating base16 number (terminated because the last hexit is lowercase while all others are uppercase). The sign bit is represented using zig-zag encoding.

This alphabet is chosen because the characters can be turned into hexits by masking off the last four bits of the ASCII encoding.

A major feature of this encoding, as with all of the "compression" done in rustdoc, is that it can remain in its compressed format even in memory at runtime. This is why VHBackref is only used at the top level, and why we don't just use Flate for everything: the decoder in search.js will reuse the entire decoded object whenever a backref is seen, saving decode work and memory.

Roaring Bitmaps

Flag-style data, such as deprecation and empty descriptions, are stored using the standard Roaring Bitmap serialization format with runs. The data is then base64 encoded when writing it.

As a brief overview: a roaring bitmap is a chunked array of bits, described in this paper. A chunk can either be a list of integers, a bitfield, or a list of runs. In any case, the search engine has to base64 decode it, and read the chunk index itself, but the payload data stays as-is.

All roaring bitmaps in rustdoc currently store a flag for each item index. The crate is item 0, all others start at 1.

How descriptions are stored

The largest amount of data, and the main thing Rustdoc Search deals with that isn't actually used for searching, is descriptions. In a SERP table, this is what appears on the rightmost column.

| item type | item path | description (this part) |
|---|---|---|
| function | my_crate::my_function | This function gets the name of an integer with Data |

When someone runs a search in rustdoc for the first time, their browser will work through a "sandwich workload" of three steps:

  1. Download the search-index.js and search.js files (a network bottleneck).
  2. Perform the actual search (a CPU and memory bandwidth bottleneck).
  3. Download the description data (another network bottleneck).

Reducing the amount of data downloaded here will almost always increase latency, by delaying the decision of what to download behind other work and/or adding data dependencies where something can't be downloaded without first downloading something else. In this case, we can't start downloading descriptions until after the search is done, because that's what allows it to decide which descriptions to download (it needs to sort the results then truncate to 200).

To do this, two columns are stored in the search index, building on both Roaring Bitmaps and on VLQ Hex.

  • e is an index of empty descriptions. It's a roaring bitmap of each item (the crate itself is item 0, the rest start at 1).
  • D is a shard list, stored in VLQ hex as a flat list of integers. Each integer gives the number of descriptions in its shard. As the decoder walks the index, it checks whether each description is empty. If it's not, then it's in the "current" shard. When that shard's count is exhausted, it moves on to the next shard.

Inside each shard is a newline-delimited list of descriptions, wrapped in a JSONP-style function call.

i, f, and p

i and f both index into p, the array of parent items.

i is just a one-indexed number (not zero-indexed, because 0 is used for items that have no parent item). It's different from q because q represents the parent module or crate, which everything has, while i/p are used for type- and trait-associated items like methods.

f, the function signatures, use a VLQ hex tree. A number is either a one-indexed reference into p, a negative number representing a generic, or zero for null.

(the internal object representation also uses negative numbers, even after decoding, to represent generics).

For example, {{gb}{d}} is equivalent to the json [[3, 1], [2]]. Because of zigzag encoding, ` is +0, a is -0 (which is not used), b is +1, and c is -1.
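
A standalone sketch of decoding a single VHNumber under these rules follows; the hexit-order assumption for multi-hexit values and the function name are mine, not taken from search.js:

// Decode one VHNumber from the start of `s`, returning the value and the
// number of bytes consumed. Hexits are the low four bits of each ASCII byte;
// a lowercase-range byte terminates the number; the low bit of the result is
// the sign and the remaining bits are the magnitude (` is +0, b is +1, c is -1).
fn decode_vh_number(s: &str) -> Option<(i64, usize)> {
    let mut raw: i64 = 0;
    for (i, byte) in s.bytes().enumerate() {
        raw = (raw << 4) | i64::from(byte & 0x0f); // assumes most-significant hexit first
        if byte >= b'`' {
            let magnitude = raw >> 1;
            let value = if raw & 1 == 1 { -magnitude } else { magnitude };
            return Some((value, i + 1));
        }
    }
    None
}

fn main() {
    assert_eq!(decode_vh_number("`").unwrap().0, 0);  // ` is +0
    assert_eq!(decode_vh_number("b").unwrap().0, 1);  // b is +1
    assert_eq!(decode_vh_number("c").unwrap().0, -1); // c is -1
}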

Searching by name

Searching by name works by looping through the search index and running these functions on each:

  • editDistance is always used to determine a match (unless quotes are specified, which would use simple equality instead). It computes the number of swaps, inserts, and removes needed to turn the query name into the entry name. For example, foo has zero distance from itself, but a distance of 1 from ofo (one swap) and foob (one insert). It is checked against a heuristic threshold, and then, if it is within that threshold, the distance is stored for ranking. (A standalone sketch of this kind of distance appears after this list.)
  • String.prototype.indexOf is always used to determine a match. If it returns anything other than -1, the result is added, even if editDistance exceeds its threshold, and the index is stored for ranking.
  • checkPath is used if, and only if, a parent path is specified in the query. For example, vec has no parent path, but vec::vec does. Within checkPath, editDistance and indexOf are used, and the path query has its own heuristic threshold, too. If it's not within the threshold, the entry is rejected, even if the first two pass. If it's within the threshold, the path distance is stored for ranking.
  • checkType is used only if there's a type filter, like the struct in struct:vec. If it fails, the entry is rejected.

If all four criteria pass (plus the crate filter, which isn't technically part of the query), the results are sorted by sortResults.
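
The kind of distance described above can be sketched as the classic "restricted" edit distance that counts inserts, removes, substitutions, and adjacent swaps; rustdoc's actual editDistance function and its thresholds live in search.js and differ in detail:

fn edit_distance(a: &str, b: &str) -> usize {
    let a: Vec<char> = a.chars().collect();
    let b: Vec<char> = b.chars().collect();
    let (m, n) = (a.len(), b.len());
    let mut d = vec![vec![0usize; n + 1]; m + 1];
    for i in 0..=m { d[i][0] = i; }
    for j in 0..=n { d[0][j] = j; }
    for i in 1..=m {
        for j in 1..=n {
            let cost = if a[i - 1] == b[j - 1] { 0 } else { 1 };
            d[i][j] = (d[i - 1][j] + 1)       // remove
                .min(d[i][j - 1] + 1)         // insert
                .min(d[i - 1][j - 1] + cost); // substitute
            if i > 1 && j > 1 && a[i - 1] == b[j - 2] && a[i - 2] == b[j - 1] {
                d[i][j] = d[i][j].min(d[i - 2][j - 2] + 1); // swap adjacent characters
            }
        }
    }
    d[m][n]
}

fn main() {
    assert_eq!(edit_distance("foo", "foo"), 0);
    assert_eq!(edit_distance("foo", "ofo"), 1);  // one swap
    assert_eq!(edit_distance("foo", "foob"), 1); // one insert
}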

Searching by type

Searching by type can be divided into two phases, and the second phase has two sub-phases.

  • Turn names in the query into numbers.
  • Loop over each entry in the search index:
    • Quick rejection using a bloom filter.
    • Slow rejection using a recursive type unification algorithm.

In the names->numbers phase, if the query has only one name in it, the editDistance function is used to find a near match if the exact match fails, but if there's multiple items in the query, non-matching items are treated as generics instead. This means hahsmap will match hashmap on its own, but hahsmap, u32 is going to match the same things T, u32 matches (though rustdoc will detect this particular problem and warn about it).

Then, when actually looping over each item, the bloom filter will probably reject entries that don't have every type mentioned in the query. For example, the bloom query allows a query of i32 -> u32 to match a function with the type i32, u32 -> bool, but unification will reject it later.

The unification filter ensures that:

  • Bag semantics are respected. If your query says i32, i32, then the function has to mention two i32s, not just one.
  • Nesting semantics are respected. If your query says vec<option>, then vec<option<i32>> is fine, but option<vec<i32>> is not a match.
  • The division between return type and parameter is respected. i32 -> u32 and u32 -> i32 are completely different.

The bloom filter checks none of these things, and, on top of that, can have false positives. But it's fast and uses very little memory, so the bloom filter helps.
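
A toy version of this kind of quick-rejection filter is shown below: a single 64-bit fingerprint per entry. Rustdoc's real filter is more elaborate, but the key property is the same, namely that false positives are possible while false negatives are not:

// Set one bit per type id; collisions cause false positives, never false negatives.
fn fingerprint(type_ids: &[u32]) -> u64 {
    type_ids.iter().fold(0u64, |acc, id| acc | (1u64 << (id % 64)))
}

// Every bit required by the query must be present in the entry's fingerprint.
fn may_match(query: &[u32], entry: &[u32]) -> bool {
    let q = fingerprint(query);
    q & fingerprint(entry) == q
}

fn main() {
    let entry = [3, 7, 42];               // types mentioned by some function
    assert!(may_match(&[3, 42], &entry)); // plausible match, still needs unification
    assert!(!may_match(&[5], &entry));    // quickly rejected
}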

Re-exports

Re-export inlining allows the same item to be found by multiple names. Search supports this by giving the same item multiple entries and tracking a canonical path for any items where that differs from the given path.

For example, this sample index has a single struct exported from two paths:

[
    [ "crate_name", {
        "doc": "Documentation",
        "n": ["Data", "Data"],
        "t": "FF",
        "d": ["The data struct", "The data struct"],
        "q": [[0, "crate_name"], [1, "crate_name::submodule"]],
        "i": [0, 0],
        "p": [],
        "f": "``",
        "b": [],
        "c": [],
        "a": [],
        "r": [[0, 1]],
    }]
]

The important part of this example is the r array, which indicates that path entry 1 in the q array is the canonical path for item 0. That is, crate_name::Data has a canonical path of crate_name::submodule::Data.

This might sound like a strange design, since it has the duplicate data. It's done that way because inlining can happen across crates, which are compiled separately and might not all be present in the docs.

[
  [ "crate_name", ... ],
  [ "crate_name_2", { "q": [[0, "crate_name::submodule"], [5, "core::option"]], ... }]
]

In the above example, a canonical path actually comes from a dependency, and another one comes from an inlined standard library item: the canonical path isn't even in the index! The canonical path might also be private. In either case, it's never shown to the user, and is only used for deduplication.

Associated items, like methods, store their canonical paths differently. These items are connected with an entry in p (their "parent"), and that p entry has an optional fourth tuple element for the canonical path:

"p": [[5, "Data", 0, 1]]

That's:

  • 5: It's a struct
  • "Data": Its name
  • 0: Its display path, "crate_name"
  • 1: Its canonical path, "crate_name::submodule"

In both cases, the canonical path might not be public at all, or it might be from another crate that isn't in the docs, so it's never shown to the user, but is used for deduplication.

Testing the search engine

While the generated UI is tested using rustdoc-gui tests, the primary way the search engine is tested is through the rustdoc-js and rustdoc-js-std tests. They run in Node.js.

A rustdoc-js test has a .rs and .js file, with the same name. The .rs file specifies the hypothetical library crate to run the searches on (make sure you mark anything you need to find as pub). The .js file specifies the actual searches. The rustdoc-js-std tests are the same, but don't require an .rs file, since they use the standard library.

The .js file is like a module (except the loader takes care of exports for you). It uses these variables:

| Name | Type | Description |
|---|---|---|
| FILTER_CRATE | string | Only include results from the given crate. In the GUI, this is the "Results in crate" drop-down menu. |
| EXPECTED | [ResultsTable]\|ResultsTable | List of tests to run, specifying what the hypothetical user types into the search box and sees in the tabs |
| PARSED | [ParsedQuery]\|ParsedQuery | List of parser tests to run, without running an actual search |

FILTER_CRATE can be left out (equivalent to searching "all crates"), but you have to specify EXPECTED or PARSED.

By default, the test fails if any of the results specified in the test case are not found after running the search, or if the results found after running the search don't appear in the same order that they do in the test. The actual search results may, however, include results that aren't in the test. To override this, specify any of the following magic comments. Put them on their own line, without indenting.

  • // exact-check: If search results appear that aren't part of the test case, then fail.
  • // ignore-order: Allow search results to appear in any order.
  • // should-fail: Used to write negative tests.

Standard library tests usually shouldn't specify // exact-check, since we want the libs team to be able to add new items without causing unrelated tests to fail, but standalone tests will use it more often.

The ResultsTable and ParsedQuery types are specified in externs.js.

For example, imagine we needed to fix a bug where a function named constructor couldn't be found. To do this, write two files:

// tests/rustdoc-js/constructor_search.rs
// The test case needs to find this result.
pub fn constructor(_input: &str) -> i32 { 1 }

// tests/rustdoc-js/constructor_search.js
// exact-check
// Since this test runs against its own crate,
// new items should not appear in the search results.
const EXPECTED = [
  // This first test targets name-based search.
  {
    query: "constructor",
    others: [
      { path: "constructor_search", name: "constructor" },
    ],
    in_args: [],
    returned: [],
  },
  // This test targets the second tab.
  {
    query: "str",
    others: [],
    in_args: [
      { path: "constructor_search", name: "constructor" },
    ],
    returned: [],
  },
  // This test targets the third tab.
  {
    query: "i32",
    others: [],
    in_args: [],
    returned: [
      { path: "constructor_search", name: "constructor" },
    ],
  },
  // This test targets advanced type-driven search.
  {
    query: "str -> i32",
    others: [
      { path: "constructor_search", name: "constructor" },
    ],
    in_args: [],
    returned: [],
  },
]

Source Code Representation

This part describes the process of taking raw source code from the user and transforming it into various forms that the compiler can work with easily. These are called intermediate representations (IRs).

This process starts with the compiler understanding what the user has asked for: parsing the command line arguments given and determining what it is to compile. After that, the compiler transforms the user input into a series of IRs that look progressively less like what the user wrote.

Syntax and the AST

Working directly with source code is very inconvenient and error-prone. Thus, before we do anything else, we convert raw source code into an Abstract Syntax Tree (AST). It turns out that doing this involves a lot of work, including lexing, parsing, macro expansion, name resolution, conditional compilation, feature-gate checking, and validation of the AST. In this chapter, we take a look at all of these steps.

Notably, there isn't always a clean ordering between these tasks. For example, macro expansion relies on name resolution to resolve the names of macros and imports. And parsing requires macro expansion, which in turn may require parsing the output of the macro.

Lexing and Parsing

The very first thing the compiler does is take the program (in UTF-8 Unicode text) and turn it into a data format the compiler can work with more conveniently than strings. This happens in two stages: Lexing and Parsing.

  1. Lexing takes strings and turns them into streams of tokens. For example, foo.bar + buz would be turned into the tokens foo, ., bar, +, and buz. This is implemented in rustc_lexer.
  2. Parsing takes streams of tokens and turns them into a structured form which is easier for the compiler to work with, usually called an Abstract Syntax Tree (AST).

The AST

The AST mirrors the structure of a Rust program in memory, using a Span to link a particular AST node back to its source text. The AST is defined in rustc_ast, along with some definitions for tokens and token streams, data structures/traits for mutating ASTs, and shared definitions for other AST-related parts of the compiler (like the lexer and macro-expansion).

Every node in the AST has its own NodeId, including top-level items such as structs, but also individual statements and expressions. A NodeId is an identifier number that uniquely identifies an AST node within a crate.

However, because they are absolute within a crate, adding or removing a single node in the AST causes all the subsequent NodeIds to change. This renders NodeIds pretty much useless for incremental compilation, where you want as few things as possible to change.

NodeIds are used in all the rustc bits that operate directly on the AST, like macro expansion and name resolution (more on these over the next couple chapters).

Parsing

The parser is defined in rustc_parse, along with a high-level interface to the lexer and some validation routines that run after macro expansion. In particular, the rustc_parse::parser contains the parser implementation.

The main entrypoint to the parser is via the various parse_* functions and others in rustc_parse. They let you do things like turn a SourceFile (e.g. the source in a single file) into a token stream, create a parser from the token stream, and then execute the parser to get a Crate (the root AST node).

To minimize the amount of copying that is done, both StringReader and Parser have lifetimes which bind them to the parent ParseSess. This contains all the information needed while parsing, as well as the SourceMap itself.

Note that while parsing, we may encounter macro definitions or invocations. We set these aside to be expanded (see Macro Expansion). Expansion itself may require parsing the output of a macro, which may reveal more macros to be expanded, and so on.

More on Lexical Analysis

Code for lexical analysis is split between two crates:

  • The rustc_lexer crate is responsible for breaking a &str into chunks constituting tokens. Although it is popular to implement lexers as generated finite state machines, the lexer in rustc_lexer is hand-written.

  • StringReader integrates rustc_lexer with data structures specific to rustc. Specifically, it adds Span information to tokens returned by rustc_lexer and interns identifiers.

Macro expansion

N.B. rustc_ast, rustc_expand, and rustc_builtin_macros are all undergoing refactoring, so some of the links in this chapter may be broken.

Rust has a very powerful macro system. In the previous chapter, we saw how the parser sets aside macros to be expanded (using temporary placeholders). This chapter is about the process of expanding those macros iteratively until we have a complete Abstract Syntax Tree (AST) for our crate with no unexpanded macros (or a compile error).

First, we discuss the algorithm that expands and integrates macro output into ASTs. Next, we take a look at how hygiene data is collected. Finally, we look at the specifics of expanding different types of macros.

Many of the algorithms and data structures described below are in rustc_expand, with fundamental data structures in rustc_expand::base.

Also of note, cfg and cfg_attr are treated specially from other macros, and are handled in rustc_expand::config.

Expansion and AST Integration

Firstly, expansion happens at the crate level. Given the raw source code for a crate, the compiler will produce a massive AST with all macros expanded, all modules inlined, etc. The primary entry point for this process is the MacroExpander::fully_expand_fragment method. With few exceptions, we use this method on the whole crate (see "Eager Expansion" below for a more detailed discussion of edge-case expansion issues).

At a high level, fully_expand_fragment works in iterations. We keep a queue of unresolved macro invocations (i.e. macros we haven't found the definition of yet). We repeatedly try to pick a macro from the queue, resolve it, expand it, and integrate it back. If we can't make progress in an iteration, this represents a compile error. Here is the algorithm (a toy sketch of this loop follows the numbered list):

  1. Initialize a queue of unresolved macros.
  2. Repeat until queue is empty (or we make no progress, which is an error):
    1. Resolve imports in our partially built crate as much as possible.
    2. Collect as many macro Invocations as possible from our partially built crate (fn-like, attributes, derives) and add them to the queue.
    3. Dequeue the first element and attempt to resolve it.
    4. If it's resolved:
      1. Run the macro's expander function that consumes a TokenStream or AST and produces a TokenStream or AstFragment (depending on the macro kind). (A TokenStream is a collection of TokenTrees, each of which are a token (punctuation, identifier, or literal) or a delimited group (anything inside ()/[]/{})).
        • At this point, we know everything about the macro itself and can call set_expn_data to fill in its properties in the global data; that is the hygiene data associated with ExpnId (see Hygiene below).
      2. Integrate that piece of AST into the currently-existing though partially-built AST. This is essentially where the "token-like mass" becomes a proper set-in-stone AST with side-tables. It happens as follows:
        • If the macro produces tokens (e.g. a proc macro), we parse into an AST, which may produce parse errors.
        • During expansion, we create SyntaxContexts (hierarchy 2) (see Hygiene below).
        • Three further passes then happen one after another on every AST fragment freshly expanded from a macro: NodeIds are assigned and new macro calls are collected, def paths are created and DefIds are assigned to them, and names are put into modules by the resolver.
      3. After expanding a single macro and integrating its output, continue to the next iteration of fully_expand_fragment.
    5. If it's not resolved:
      1. Put the macro back in the queue.
      2. Continue to next iteration...
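
The control flow of this fixed-point loop can be sketched with a toy worklist; strings stand in for real macro invocations, and nothing here is the actual fully_expand_fragment code:

fn expand_all(mut queue: Vec<String>) -> Result<Vec<String>, String> {
    let mut expanded = Vec::new();
    while !queue.is_empty() {
        let before = queue.len();
        queue.retain(|invocation| {
            if let Some(output) = try_resolve_and_expand(invocation) {
                expanded.push(output);
                false // resolved and expanded: drop it from the queue
            } else {
                true // could not be resolved yet: keep it for the next iteration
            }
        });
        if queue.len() == before {
            return Err(format!("cannot make progress; unresolved macros: {queue:?}"));
        }
    }
    Ok(expanded)
}

// Stand-in for "resolve the macro and run its expander function".
fn try_resolve_and_expand(invocation: &str) -> Option<String> {
    (invocation == "known!").then(|| format!("expansion of {invocation}"))
}

fn main() {
    assert!(expand_all(vec!["known!".to_string()]).is_ok());
    assert!(expand_all(vec!["mystery!".to_string()]).is_err());
}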

Error Recovery

If we make no progress in an iteration we have reached a compilation error (e.g. an undefined macro). We attempt to recover from failures (i.e. unresolved macros or imports) with the intent of generating diagnostics. Failure recovery happens by expanding unresolved macros into ExprKind::Err and allows compilation to continue past the first error so that rustc can report more errors than just the original failure.

Name Resolution

Notice that name resolution is involved here: we need to resolve imports and macro names in the above algorithm. This is done in rustc_resolve::macros, which resolves macro paths, validates those resolutions, and reports various errors (e.g. "not found", "found, but it's unstable", "expected x, found y"). However, we don't try to resolve other names yet. This happens later, as we will see in the chapter: Name Resolution.

Eager Expansion

Eager expansion means we expand the arguments of a macro invocation before the macro invocation itself. This is implemented only for a few special built-in macros that expect literals; expanding arguments first for some of these macros results in a smoother user experience. As an example, consider the following:

macro bar($i: ident) { $i }
macro foo($i: ident) { $i }

foo!(bar!(baz));

A lazy-expansion would expand foo! first. An eager-expansion would expand bar! first.

Eager-expansion is not a generally available feature of Rust. Implementing eager-expansion more generally would be challenging, so we implement it for a few special built-in macros for the sake of user-experience. The built-in macros are implemented in rustc_builtin_macros, along with some other early code generation facilities like injection of standard library imports or generation of test harness. There are some additional helpers for building AST fragments in rustc_expand::build. Eager-expansion generally performs a subset of the things that lazy (normal) expansion does. It is done by invoking fully_expand_fragment on only part of a crate (as opposed to the whole crate, like we normally do).

Other Data Structures

Here are some other notable data structures involved in expansion and integration:

  • ResolverExpand - a trait used to break crate dependencies. This allows the resolver services to be used in rustc_ast, despite rustc_resolve and pretty much everything else depending on rustc_ast.
  • ExtCtxt/ExpansionData - holds various intermediate expansion infrastructure data.
  • Annotatable - a piece of AST that can be an attribute target, almost the same thing as AstFragment except for types and patterns that can be produced by macros but cannot be annotated with attributes.
  • MacResult - a "polymorphic" AST fragment, something that can turn into a different AstFragment depending on its AstFragmentKind (i.e. an item, expression, pattern, etc).

Hygiene and Hierarchies

If you have ever used the C/C++ preprocessor macros, you know that there are some annoying and hard-to-debug gotchas! For example, consider the following C code:

#define DEFINE_FOO struct Bar {int x;}; struct Foo {Bar bar;};

// Then, somewhere else
struct Bar {
    ...
};

DEFINE_FOO

Most people avoid writing C like this – and for good reason: it doesn't compile. The struct Bar defined by the macro clashes names with the struct Bar defined in the code. Consider also the following example:

#define DO_FOO(x) {\
    int y = 0;\
    foo(x, y);\
    }

// Then elsewhere
int y = 22;
DO_FOO(y);

Do you see the problem? We wanted to generate a call foo(22, 0), but instead we got foo(0, 0) because the macro defined its own y!

These are both examples of macro hygiene issues. Hygiene relates to how to handle names defined within a macro. In particular, a hygienic macro system prevents errors due to names introduced within a macro. Rust macros are hygienic in that they do not allow one to write the sorts of bugs above.

At a high level, hygiene within the Rust compiler is accomplished by keeping track of the context where a name is introduced and used. We can then disambiguate names based on that context. Future iterations of the macro system will allow greater control to the macro author to use that context. For example, a macro author may want to introduce a new name to the context where the macro was called. Alternately, the macro author may be defining a variable for use only within the macro (i.e. it should not be visible outside the macro).

The context is attached to AST nodes. All AST nodes generated by macros have context attached. Additionally, there may be other nodes that have context attached, such as some desugared syntax (non-macro-expanded nodes are considered to just have the "root" context, as described below). Throughout the compiler, we use rustc_span::Spans to refer to code locations. This struct also has hygiene information attached to it, as we will see later.

Because macro invocations and definitions can be nested, the syntax context of a node must be a hierarchy. For example, if we expand a macro and there is another macro invocation or definition in the generated output, then the syntax context should reflect the nesting.

However, it turns out that there are actually a few types of context we may want to track for different purposes. Thus, there are not just one but three expansion hierarchies that together comprise the hygiene information for a crate.

All of these hierarchies need some sort of "macro ID" to identify individual elements in the chain of expansions. This ID is ExpnId. All macros receive an integer ID, assigned continuously starting from 0 as we discover new macro calls. All hierarchies start at ExpnId::root, which is its own parent.

The rustc_span::hygiene module contains all of the hygiene-related algorithms (with the exception of some hacks in Resolver::resolve_crate_root) and structures related to hygiene and expansion that are kept in global data.

The actual hierarchies are stored in HygieneData. This is a global piece of data containing hygiene and expansion info that can be accessed from any Ident without any context.

The Expansion Order Hierarchy

The first hierarchy tracks the order of expansions, i.e., when a macro invocation is in the output of another macro.

Here, the children in the hierarchy will be the "innermost" tokens. The ExpnData struct itself contains a subset of properties from both macro definition and macro call available through global data. ExpnData::parent tracks the child-to-parent link in this hierarchy.

For example:

macro_rules! foo { () => { println!(); } }

fn main() { foo!(); }

In this code, the AST nodes that are finally generated would have hierarchy root -> id(foo) -> id(println).

The Macro Definition Hierarchy

The second hierarchy tracks the order of macro definitions, i.e., when we are expanding one macro another macro definition is revealed in its output. This one is a bit tricky and more complex than the other two hierarchies.

SyntaxContext represents a whole chain in this hierarchy via an ID. SyntaxContextData contains data associated with the given SyntaxContext; mostly it is a cache for results of filtering that chain in different ways. SyntaxContextData::parent is the child-to-parent link here, and SyntaxContextData::outer_expns are individual elements in the chain. The "chaining-operator" is SyntaxContext::apply_mark in compiler code.

A Span, mentioned above, is actually just a compact representation of a code location and SyntaxContext. Likewise, an Ident is just an interned Symbol + Span (i.e. an interned string + hygiene data).

For built-in macros, we use the context: SyntaxContext::empty().apply_mark(expn_id), and such macros are considered to be defined at the hierarchy root. We do the same for proc macros because we haven't implemented cross-crate hygiene yet.

If the token had context X before being produced by a macro then after being produced by the macro it has context X -> macro_id. Here are some examples:

Example 0:

macro m() { ident }

m!();

Here, ident, which initially has context SyntaxContext::root, has context ROOT -> id(m) after it's produced by m.

Example 1:

macro m() { macro n() { ident } }

m!();
n!();

In this example the ident has context ROOT initially, then ROOT -> id(m) after the first expansion, and then ROOT -> id(m) -> id(n) after the second.

Example 2:

Note that these chains are not entirely determined by their last element, in other words ExpnId is not isomorphic to SyntaxContext.

macro m($i: ident) { macro n() { ($i, bar) } }

m!(foo);

After all expansions, foo has context ROOT -> id(n) and bar has context ROOT -> id(m) -> id(n).

Currently this hierarchy for tracking macro definitions is subject to the so-called "context transplantation hack". Modern (i.e. experimental) macros have stronger hygiene than the legacy "Macros By Example" (MBE) system which can result in weird interactions between the two. The hack is intended to make things "just work" for now.

The Call-site Hierarchy

The third and final hierarchy tracks the location of macro invocations.

In this hierarchy ExpnData::call_site is the child -> parent link.

Here is an example:

macro bar($i: ident) { $i }
macro foo($i: ident) { $i }

foo!(bar!(baz));

For the baz AST node in the final output, the expansion-order hierarchy is ROOT -> id(foo) -> id(bar) -> baz, while the call-site hierarchy is ROOT -> baz.

Macro Backtraces

Macro backtraces are implemented in rustc_span using the hygiene machinery in rustc_span::hygiene.

Producing Macro Output

Above, we saw how the output of a macro is integrated into the AST for a crate, and we also saw how the hygiene data for a crate is generated. But how do we actually produce the output of a macro? It depends on the type of macro.

There are two types of macros in Rust:

  1. macro_rules! macros (a.k.a. "Macros By Example" (MBE)), and,
  2. procedural macros (proc macros); including custom derives.

During the parsing phase, the normal Rust parser will set aside the contents of macros and their invocations. Later, macros are expanded using these portions of the code.

Some important data structures/interfaces here:

Macros By Example

MBEs have their own parser distinct from the Rust parser. When macros are expanded, we may invoke the MBE parser to parse and expand a macro. The MBE parser, in turn, may call the Rust parser when it needs to bind a metavariable (e.g. $my_expr) while parsing the contents of a macro invocation. The code for macro expansion is in compiler/rustc_expand/src/mbe/.

Example

macro_rules! printer {
    (print $mvar:ident) => {
        println!("{}", $mvar);
    };
    (print twice $mvar:ident) => {
        println!("{}", $mvar);
        println!("{}", $mvar);
    };
}

Here $mvar is called a metavariable. Unlike normal variables, rather than binding to a value at runtime, a metavariable binds at compile time to a tree of tokens. A token is a single "unit" of the grammar, such as an identifier (e.g. foo) or punctuation (e.g. =>). There are also other special tokens, such as EOF, which itself indicates that there are no more tokens. There are also token trees, resulting from the paired parentheses-like characters ((...), [...], and {...}) – they include the open and close delimiters and all the tokens in between (Rust requires that parentheses-like characters be balanced).

Having macro expansion operate on token streams rather than the raw bytes of a source file abstracts away a lot of complexity. The macro expander (and much of the rest of the compiler) doesn't consider the exact line and column of some syntactic construct in the code; it considers which constructs are used in the code. Using tokens allows us to care about what without worrying about where. For more information about tokens, see the Parsing chapter of this book.

printer!(print foo); // `foo` is a variable

The process of expanding the macro invocation into the syntax tree println!("{}", foo) and then expanding the syntax tree into a call to Display::fmt is one common example of macro expansion.

The MBE parser

There are two parts to MBE expansion done by the macro parser:

  1. parsing the definition, and,
  2. parsing the invocations.

We think of the MBE parser as a nondeterministic finite automaton (NFA) based regex parser since it uses an algorithm similar in spirit to the Earley parsing algorithm. The macro parser is defined in compiler/rustc_expand/src/mbe/macro_parser.rs.

The interface of the macro parser is as follows (this is slightly simplified):

fn parse_tt(
    &mut self,
    parser: &mut Cow<'_, Parser<'_>>,
    matcher: &[MatcherLoc]
) -> ParseResult

We use these items in macro parser:

  • a parser variable is a reference to the state of a normal Rust parser, including the token stream and parsing session. The token stream is what we are about to ask the MBE parser to parse. We will consume the raw stream of tokens and output a binding of metavariables to corresponding token trees. The parsing session can be used to report parser errors.
  • a matcher variable is a sequence of MatcherLocs that we want to match the token stream against. They're converted from token trees before matching.

In the analogy of a regex parser, the token stream is the input and we are matching it against the pattern defined by matcher. Using our examples, the token stream could be the stream of tokens containing the inside of the example invocation print foo, while matcher might be the sequence of token (trees) print $mvar:ident.

The output of the parser is a ParseResult, which indicates which of three cases has occurred:

  • Success: the token stream matches the given matcher and we have produced a binding from metavariables to the corresponding token trees.
  • Failure: the token stream does not match matcher and results in an error message such as "No rule expected token ...".
  • Error: some fatal error has occurred in the parser. For example, this happens if there is more than one pattern match, since that indicates the macro is ambiguous.

The full interface is defined here.
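
As a rough sketch (hypothetical, simplified names; the real ParseResult in rustc_expand carries more information), the three outcomes could be modeled like this:

// Hypothetical, simplified model of the three outcomes described above.
enum ParseResult<Bindings> {
    // The matcher matched; metavariables are bound to token trees.
    Success(Bindings),
    // The matcher did not match; carries a message such as
    // "no rules expected this token".
    Failure(String),
    // A fatal error occurred, e.g. the match was ambiguous.
    Error(String),
}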

The macro parser does pretty much exactly the same as a normal regex parser with one exception: in order to parse different types of metavariables, such as ident, block, expr, etc., the macro parser must call back to the normal Rust parser. Both the definition and invocation of macros are parsed using the parser in a process which is non-intuitively self-referential.

The code to parse macro definitions is in compiler/rustc_expand/src/mbe/macro_rules.rs. It defines the pattern for matching a macro definition as $( $lhs:tt => $rhs:tt );+. In other words, a macro_rules definition should have in its body at least one occurrence of a token tree followed by => followed by another token tree. When the compiler comes to a macro_rules definition, it uses this pattern to match the two token trees per the rules of the definition of the macro, thereby utilizing the macro parser itself. In our example definition, the metavariable $lhs would match the patterns of both arms: (print $mvar:ident) and (print twice $mvar:ident). And $rhs would match the bodies of both arms: { println!("{}", $mvar); } and { println!("{}", $mvar); println!("{}", $mvar); }. The parser keeps this knowledge around for when it needs to expand a macro invocation.

When the compiler comes to a macro invocation, it parses that invocation using the NFA-based macro parser described above. However, the matcher variable used is the first token tree ($lhs) extracted from the arms of the macro definition. Using our example, we would try to match the token stream print foo from the invocation against the matchers print $mvar:ident and print twice $mvar:ident that we previously extracted from the definition. The algorithm is exactly the same, but when the macro parser comes to a place in the current matcher where it needs to match a non-terminal (e.g. $mvar:ident), it calls back to the normal Rust parser to get the contents of that non-terminal. In this case, the Rust parser would look for an ident token, which it finds (foo) and returns to the macro parser. Then, the macro parser proceeds with parsing as normal. Also, note that exactly one of the matchers from the various arms should match the invocation; if there is more than one match, the parse is ambiguous, while if there are no matches at all, there is a syntax error.

For more information about the macro parser's implementation, see the comments in compiler/rustc_expand/src/mbe/macro_parser.rs.

Procedural Macros

Procedural macros are also expanded during parsing. However, rather than having a parser in the compiler, proc macros are implemented as custom, third-party crates. The compiler will compile the proc macro crate and specially annotated functions in them (i.e. the proc macro itself), passing them a stream of tokens. A proc macro can then transform the token stream and output a new token stream, which is synthesized into the AST.

The token stream type used by proc macros is stable, so rustc does not use it internally. The compiler's (unstable) token stream is defined in rustc_ast::tokenstream::TokenStream. This is converted into the stable proc_macro::TokenStream and back in rustc_expand::proc_macro and rustc_expand::proc_macro_server. Since the Rust ABI is currently unstable, we use the C ABI for this conversion.
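
For reference, the stable side of this interface is what a proc-macro crate sees. A minimal function-like proc macro (a sketch, assuming a crate with proc-macro = true in its Cargo.toml) receives and returns the stable proc_macro::TokenStream:

use proc_macro::TokenStream;

// A function-like proc macro: the compiler hands us the tokens of the
// invocation and splices whatever we return back into the caller's crate.
#[proc_macro]
pub fn noop(input: TokenStream) -> TokenStream {
    // Here we simply echo the input back unchanged.
    input
}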

Custom Derive

Custom derives are a special type of proc macro.
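
For illustration (MyTrait is a hypothetical name), the skeleton of a custom derive is a #[proc_macro_derive] function with the same tokens-in, tokens-out shape:

use proc_macro::TokenStream;

// In a proc-macro crate. A real derive would inspect `_item` and emit an
// `impl MyTrait for ...`; returning an empty stream simply adds nothing.
#[proc_macro_derive(MyTrait)]
pub fn derive_my_trait(_item: TokenStream) -> TokenStream {
    TokenStream::new()
}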

Macros By Example and Macros 2.0

There is a legacy and mostly undocumented effort to improve the MBE system by giving it more hygiene-related features, better scoping and visibility rules, etc. Internally these macros use the same machinery as today's MBEs, with some additional syntactic sugar, and they are allowed to be in namespaces.

Name resolution

In the previous chapters, we saw how the Abstract Syntax Tree (AST) is built with all macros expanded. We saw how doing that requires doing some name resolution to resolve imports and macro names. In this chapter, we show how this is actually done and more.

In fact, we don't do full name resolution during macro expansion -- we only resolve imports and macros at that time. This is required to know what to even expand. Later, after we have the whole AST, we do full name resolution to resolve all names in the crate. This happens in rustc_resolve::late. Unlike during macro expansion, in this late resolution we only need to try to resolve a name once, since no new names can be added. If we fail to resolve a name, then it is a compiler error.

Name resolution is complex. There are different namespaces (e.g. macros, values, types, lifetimes), and names may be valid at different (nested) scopes. Also, different types of names can fail resolution differently, and failures can happen differently at different scopes. For example, in a module scope, failure can only be declared once there are no unexpanded macros and no unresolved glob imports left in that module. On the other hand, in a function body scope, failure requires that a name be absent from the block we are in, all outer scopes, and the global scope.

Basics

In our programs we refer to variables, types, functions, etc, by giving them a name. These names are not always unique. For example, take this valid Rust program:

type x = u32;
let x: x = 1;
let y: x = 2;

How do we know on line 3 whether x is a type (u32) or a value (1)? These conflicts are resolved during name resolution. In this specific case, name resolution defines that type names and variable names live in separate namespaces and therefore can co-exist.

The name resolution in Rust is a two-phase process. In the first phase, which runs during macro expansion, we build a tree of modules and resolve imports. Macro expansion and name resolution communicate with each other via the ResolverAstLoweringExt trait.

The input to the second phase is the syntax tree, produced by parsing input files and expanding macros. This phase produces links from all the names in the source to relevant places where the name was introduced. It also generates helpful error messages, like typo suggestions, traits to import or lints about unused items.

A successful run of the second phase (Resolver::resolve_crate) creates a kind of index that the rest of the compilation may use to ask about names present in the source (through the hir::lowering::Resolver interface).

Name resolution lives in the rustc_resolve crate, with the bulk in lib.rs and some helpers or symbol-type-specific logic in the other modules.

Namespaces

Different kinds of symbols live in different namespaces ‒ e.g. types don't clash with variables. In practice clashes rarely arise anyway, because variables start with a lower-case letter while types start with an upper-case one, but that is only a convention. This is legal Rust code that will compile (with warnings):

type x = u32;
let x: x = 1;
let y: x = 2; // See? x is still a type here.

To cope with this, and with slightly different scoping rules for these namespaces, the resolver keeps them separated and builds separate structures for them.

In other words, when the code talks about namespaces, it doesn't mean the module hierarchy, it's types vs. values vs. macros.

Scopes and ribs

A name is visible only in a certain area of the source code. This forms a hierarchical structure, but not necessarily a simple one ‒ if one scope is part of another, it doesn't mean a name visible in the outer scope is also visible in the inner scope, or that it refers to the same thing.

To cope with that, the compiler introduces the concept of Ribs. This is an abstraction of a scope. Every time the set of visible names potentially changes, a new Rib is pushed onto a stack. The places where this can happen include for example:

  • The obvious places ‒ curly braces enclosing a block, function boundaries, modules.
  • Introducing a let binding ‒ this can shadow another binding with the same name.
  • Macro expansion border ‒ to cope with macro hygiene.

When searching for a name, the stack of ribs is traversed from the innermost outwards. This helps to find the closest meaning of the name (the one not shadowed by anything else). The transition to an outer Rib may also affect what names are usable ‒ if there are nested functions (not closures), the inner one can't access parameters and local bindings of the outer one, even though they should be visible by ordinary scoping rules. An example:

fn do_something<T: Default>(val: T) { // <- New rib in both types and values (1)
    // `val` is accessible, as is the helper function
    // `T` is accessible
    let helper = || { // New rib on `helper` (2) and another on the block (3)
        // `val` is accessible here
    }; // End of (3)
    // `val` is accessible, `helper` variable shadows `helper` function
    fn helper() { // <- New rib in both types and values (4)
        // `val` is not accessible here, (4) is not transparent for locals
        // `T` is not accessible here
    } // End of (4)
    let val = T::default(); // New rib (5)
    // `val` is the variable, not the parameter here
} // End of (5), (2) and (1)

Because the rules for different namespaces are a bit different, each namespace has its own independent Rib stack that is constructed in parallel to the others. In addition, there's also a Rib stack for local labels (e.g. names of loops or blocks), which isn't a full namespace in its own right.
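
As a loose illustration (hypothetical types and fields, not rustc_resolve's real data structures), "one rib stack per namespace" might be modeled like this:

use std::collections::HashMap;

// Purely illustrative; the real Rib and Resolver types carry much more data.
struct Rib {
    // name -> some id of whatever the name resolves to in this scope
    bindings: HashMap<String, u32>,
}

struct RibStacks {
    // One independent stack per namespace, pushed and popped in parallel.
    types: Vec<Rib>,
    values: Vec<Rib>,
    macros: Vec<Rib>,
    // Labels get a stack too, even though they are not a full namespace.
    labels: Vec<Rib>,
}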

Overall strategy

To perform the name resolution of the whole crate, the syntax tree is traversed top-down and every encountered name is resolved. This works for most kinds of names, because at the point of use of a name it is already introduced in the Rib hierarchy.

There are some exceptions to this. Items are a bit tricky, because they can be used even before they are encountered ‒ therefore every block needs to be first scanned for items, to fill in its Rib.

Even more problematic are imports, which need recursive fixed-point resolution, and macros, which need to be resolved and expanded before the rest of the code can be processed.

Therefore, the resolution is performed in multiple stages.

Speculative crate loading

To give useful errors, rustc suggests importing paths into scope if they're not found. How does it do this? It looks through every module of every crate and looks for possible matches. This even includes crates that haven't yet been loaded!

Eagerly loading crates that haven't yet been loaded, just to include them in import suggestions, is called speculative crate loading, because any errors it encounters shouldn't be reported: rustc_resolve decided to load them, not the user. The function that does this is lookup_import_candidates and lives in rustc_resolve::diagnostics.

To tell the difference between speculative loads and loads initiated by the user, rustc_resolve passes around a record_used parameter, which is false when the load is speculative.

TODO: #16

This is a result of the first pass of learning the code. It is definitely incomplete and not detailed enough. It also might be inaccurate in places. Still, it probably provides a useful first guidepost to what happens in there.

  • What exactly does it link to and how is that published and consumed by following stages of compilation?
  • Who calls it and how is it actually used?
  • Is it a pass and then the result is only used, or can it be computed incrementally?
  • The overall strategy description is a bit vague.
  • Where does the name Rib come from?
  • Does this thing have its own tests, or is it tested only as part of some e2e testing?

Attributes

Attributes come in two types: inert (or built-in) and active (non-builtin).

Builtin/inert attributes

These attributes are defined in the compiler itself, in compiler/rustc_feature/src/builtin_attrs.rs.

Examples include #[allow] and #[macro_use].

These attributes have several important characteristics:

  • They are always in scope, and do not participate in typical path-based resolution.
  • They cannot be renamed. For example, use allow as foo will compile, but writing #[foo] will produce an error (see the snippet after this list).
  • They are 'inert', meaning they are left as-is by the macro expansion code. As a result, any behavior comes as a result of the compiler explicitly checking for their presence. For example, lint-related code explicitly checks for #[allow], #[warn], #[deny], and #[forbid], rather than the behavior coming from the expansion of the attributes themselves.
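
To make the renaming point above concrete, a snippet in the spirit of that description (the exact error message may differ) looks like this:

use allow as foo; // the import itself resolves and compiles

#[foo] // error: the renamed built-in attribute cannot be used this way
fn f() {}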

'Non-builtin'/'active' attributes

These attributes are defined by a crate - either the standard library, or a proc-macro crate.

Important: Many non-builtin attributes, such as #[derive], are still considered part of the core Rust language. However, they are not called 'builtin attributes', since they have a corresponding definition in the standard library.

Definitions of non-builtin attributes take two forms:

  1. Proc-macro attributes, defined via a function annotated with #[proc_macro_attribute] in a proc-macro crate.
  2. AST-based attributes, defined in the standard library. These attributes have special 'stub' macros defined in places like library/core/src/macros/mod.rs.

These definitions exist to allow the macros to participate in typical path-based resolution - they can be imported, re-exported, and renamed just like any other item definition. However, the body of the definition is empty. Instead, the macro is annotated with the #[rustc_builtin_macro] attribute, which tells the compiler to run a corresponding function in rustc_builtin_macros.
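
For example, a simplified sketch of such a stub (stability attributes and doc comments omitted; this form is only buildable inside the standard library) looks like this:

// The body is intentionally empty; #[rustc_builtin_macro] tells the compiler
// to run the corresponding expansion function from rustc_builtin_macros.
#[rustc_builtin_macro]
pub macro test($item:item) {
    /* compiler built-in */
}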

All non-builtin attributes have the following characteristics:

  • Like all other definitions (e.g. structs), they must be brought into scope via an import. Many standard library attributes are included in the prelude - this is why writing #[derive] works without an import.
  • They participate in macro expansion. The implementation of the macro may leave the attribute target unchanged, modify the target, produce new AST nodes, or remove the target entirely.

The #[test] attribute

Many Rust programmers rely on a built-in attribute called #[test]. All you have to do is mark a function as a test and include some asserts like so:

#[test]
fn my_test() {
    assert!(2+2 == 4);
}

When this program is compiled using rustc --test or cargo test, it will produce an executable that can run this, and any other test function. This method of testing allows tests to live alongside code in an organic way. You can even put tests inside private modules:

mod my_priv_mod {
    fn my_priv_func() -> bool { true }

    #[test]
    fn test_priv_func() {
        assert!(my_priv_func());
    }
}

Private items can thus be easily tested without worrying about how to expose them to any sort of external testing apparatus. This is key to the ergonomics of testing in Rust. Semantically, however, it's rather odd. How does any sort of main function invoke these tests if they're not visible? What exactly is rustc --test doing?

#[test] is implemented as a syntactic transformation inside the compiler's rustc_ast. Essentially, it's a fancy macro that rewrites the crate in 3 steps:

Step 1: Re-Exporting

As mentioned earlier, tests can exist inside private modules, so we need a way of exposing them to the main function, without breaking any existing code. To that end, rustc_ast will create local modules called __test_reexports that recursively reexport tests. This expansion translates the above example into:

mod my_priv_mod {
    fn my_priv_func() -> bool { true }

    pub fn test_priv_func() {
        assert!(my_priv_func());
    }

    pub mod __test_reexports {
        pub use super::test_priv_func;
    }
}

Now, our test can be accessed as my_priv_mod::__test_reexports::test_priv_func. For deeper module structures, __test_reexports will reexport modules that contain tests, so a test at a::b::my_test becomes a::__test_reexports::b::__test_reexports::my_test. While this process seems pretty safe, what happens if there is an existing __test_reexports module? The answer: nothing.

To explain, we need to understand how Rust's Abstract Syntax Tree represents identifiers. The name of every function, variable, module, etc. is not stored as a string, but rather as an opaque Symbol which is essentially an ID number for each identifier. The compiler keeps a separate hashtable that allows us to recover the human-readable name of a Symbol when necessary (such as when printing a syntax error). When the compiler generates the __test_reexports module, it generates a new Symbol for the identifier, so while the compiler-generated __test_reexports may share a name with your hand-written one, it will not share a Symbol. This technique prevents name collision during code generation and is the foundation of Rust's macro hygiene.

Step 2: Harness Generation

Now that our tests are accessible from the root of our crate, we need to do something with them. rustc_ast generates a module like so:

#[main]
pub fn main() {
    extern crate test;
    test::test_main_static(&[&path::to::test1, /*...*/]);
}

Here path::to::test1 is a constant of type test::TestDescAndFn.

While this transformation is simple, it gives us a lot of insight into how tests are actually run. The tests are aggregated into an array and passed to a test runner called test_main_static. We'll come back to exactly what TestDescAndFn is, but for now, the key takeaway is that there is a crate called test that ships with Rust and implements all of the runtime for testing. test's interface is unstable, so the only stable way to interact with it is through the #[test] macro.

Step 3: Test Object Generation

If you've written tests in Rust before, you may be familiar with some of the optional attributes available on test functions. For example, a test can be annotated with #[should_panic] if we expect the test to cause a panic. It looks something like this:

#[test]
#[should_panic]
fn foo() {
    panic!("intentional");
}

This means our tests are more than just simple functions, they have configuration information as well. test encodes this configuration data into a struct called TestDesc. For each test function in a crate, rustc_ast will parse its attributes and generate a TestDesc instance. It then combines the TestDesc and test function into the predictably named TestDescAndFn struct, that test_main_static operates on. For a given test, the generated TestDescAndFn instance looks like so:

self::test::TestDescAndFn{
  desc: self::test::TestDesc{
    name: self::test::StaticTestName("foo"),
    ignore: false,
    should_panic: self::test::ShouldPanic::Yes,
    allow_fail: false,
  },
  testfn: self::test::StaticTestFn(||
    self::test::assert_test_result(::crate::__test_reexports::foo())),
}

Once we've constructed an array of these test objects, they're passed to the test runner via the harness generated in Step 2.

Inspecting the generated code

On nightly rustc, there's an unstable flag called unpretty that you can use to print out the module source after macro expansion:

$ rustc my_mod.rs -Z unpretty=hir

Panicking in Rust

Step 1: Invocation of the panic! macro.

There are actually two panic macros - one defined in core, and one defined in std. This is due to the fact that code in core can panic. core is built before std, but we want panics to use the same machinery at runtime, whether they originate in core or std.

core definition of panic!

The core panic! macro eventually makes the following call (in library/core/src/panicking.rs):

// NOTE This function never crosses the FFI boundary; it's a Rust-to-Rust call
extern "Rust" {
    #[lang = "panic_impl"]
    fn panic_impl(pi: &PanicInfo<'_>) -> !;
}

let pi = PanicInfo::internal_constructor(Some(&fmt), location);
unsafe { panic_impl(&pi) }

Actually resolving this goes through several layers of indirection:

  1. In compiler/rustc_middle/src/middle/weak_lang_items.rs, panic_impl is declared as a 'weak lang item' with the symbol rust_begin_unwind. This is used in rustc_hir_analysis/src/collect.rs to set the actual symbol name to rust_begin_unwind.

    Note that panic_impl is declared in an extern "Rust" block, which means that core will attempt to call a foreign symbol called rust_begin_unwind (to be resolved at link time)

  2. In library/std/src/panicking.rs, we have this definition:

/// Entry point of panic from the core crate.
#[cfg(not(test))]
#[panic_handler]
#[unwind(allowed)]
pub fn begin_panic_handler(info: &PanicInfo<'_>) -> ! {
    ...
}

The special panic_handler attribute is resolved via compiler/rustc_middle/src/middle/lang_items. The extract function converts the panic_handler attribute to a panic_impl lang item.

Now, we have a matching panic_handler lang item in std. This function goes through the same process as the extern { fn panic_impl } definition in core, ending up with a symbol name of rust_begin_unwind. At link time, the symbol reference in core will be resolved to the definition in std (the function called begin_panic_handler in the Rust source).

Thus, control flow will pass from core to std at runtime. This allows panics from core to go through the same infrastructure that other panics use (panic hooks, unwinding, etc.).

std implementation of panic!

This is where the actual panic-related logic begins. In library/std/src/panicking.rs, control passes to rust_panic_with_hook. This method is responsible for invoking the global panic hook, and checking for double panics. Finally, we call __rust_start_panic, which is provided by the panic runtime.

The call to __rust_start_panic is very weird - it is passed a *mut &mut dyn PanicPayload, converted to a usize. Let's break this type down:

  1. PanicPayload is an internal trait. It is implemented for a wrapper around the user-supplied payload type, and has a method fn take_box(&mut self) -> *mut (dyn Any + Send). This method takes the user-provided payload (T: Any + Send), boxes it, and converts the box to a raw pointer.

  2. When we call __rust_start_panic, we have an &mut dyn PanicPayload. However, this is a fat pointer (twice the size of a usize). To pass this to the panic runtime across an FFI boundary, we take a mutable reference to this mutable reference (&mut &mut dyn PanicPayload), and convert it to a raw pointer (*mut &mut dyn PanicPayload). The outer raw pointer is a thin pointer, since it points to a Sized type (a mutable reference). Therefore, we can convert this thin pointer into a usize, which is suitable for passing across an FFI boundary.

Finally, we call __rust_start_panic with this usize. We have now entered the panic runtime.
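
The same double-indirection trick can be demonstrated with any trait object. This is a standalone sketch (not the actual std internals) of why the fat pointer is wrapped in a thin one before being cast to a usize:

use std::any::Any;

fn main() {
    let mut payload: Box<dyn Any + Send> = Box::new("boom");
    // A reference to a trait object is a fat pointer (data pointer + vtable)...
    let mut fat: &mut (dyn Any + Send) = &mut *payload;
    // ...so we take a pointer *to that reference*, which is thin (one word)...
    let thin: *mut &mut (dyn Any + Send) = &mut fat;
    // ...and a thin pointer can be converted to a usize for an FFI boundary.
    let as_usize = thin as usize;

    // "On the other side": convert back, dereference, and use the payload.
    let back = as_usize as *mut &mut (dyn Any + Send);
    let restored: &&mut (dyn Any + Send) = unsafe { &*back };
    assert!(restored.is::<&str>());
}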

Step 2: The panic runtime

Rust provides two panic runtimes: panic_abort and panic_unwind. The user chooses between them at build time via their Cargo.toml.

panic_abort is extremely simple: its implementation of __rust_start_panic just aborts, as you would expect.

panic_unwind is the more interesting case.

In its implementation of __rust_start_panic, we take the usize, convert it back to a *mut &mut dyn PanicPayload, dereference it, and call take_box on the &mut dyn PanicPayload. At this point, we have a raw pointer to the payload itself (a *mut (dyn Send + Any)): that is, a raw pointer to the actual value provided by the user who called panic!.

At this point, the platform-independent code ends. We now call into platform-specific unwinding logic (e.g. unwind). This code is responsible for unwinding the stack, running any 'landing pads' associated with each frame (currently, running destructors), and transferring control to the catch_unwind frame.

Note that all panics either abort the process or get caught by some call to catch_unwind. In particular, in std's runtime service, the call to the user-provided main function is wrapped in catch_unwind.

AST Validation

AST validation is a separate AST pass that visits each item in the tree and performs simple checks. This pass doesn't perform any complex analysis, type checking or name resolution.

Before performing any validation, the compiler first expands the macros. Then this pass performs validations to check that each AST item is in the correct state. And when this pass is done, the compiler runs the crate resolution pass.

Validations

Validations are defined in AstValidator type, which itself is located in rustc_ast_passes crate. This type implements various simple checks which emit errors when certain language rules are broken.

In addition, AstValidator implements Visitor trait that defines how to visit AST items (which can be functions, traits, enums, etc).

For each item, the visitor performs specific checks. For example, when visiting a function declaration, AstValidator checks that:

  • the function has no more than u16::MAX parameters;
  • any C-variadic argument comes last in the declaration;
  • documentation comments aren't applied to function parameters;
  • and other validations hold.

Feature Gate Checking

TODO: this chapter #1158

Lang items

The compiler has certain pluggable operations; that is, functionality that isn't hard-coded into the language, but is implemented in libraries, with a special marker to tell the compiler it exists. The marker is the attribute #[lang = "..."], and there are various different values of ..., i.e. various different 'lang items'.

Many such lang items can be implemented only in one sensible way, such as add (trait core::ops::Add) or future_trait (trait core::future::Future). Others can be overridden to achieve some specific goals; for example, you can control your binary's entrypoint.

Features provided by lang items include:

  • overloadable operators via traits: the traits corresponding to the ==, <, dereference (*), +, etc. operators are all marked with lang items; those specific four are eq, ord, deref, and add respectively.
  • panicking and stack unwinding; the eh_personality, panic and panic_bounds_check lang items.
  • the traits in std::marker used to indicate properties of types used by the compiler; lang items send, sync and copy.
  • the special marker types used for variance indicators found in core::marker; lang item phantom_data.

Lang items are loaded lazily by the compiler; e.g. if one never uses Box then there is no need to define functions for exchange_malloc and box_free. rustc will emit an error when an item is needed but not found in the current crate or any that it depends on.

Most lang items are defined by the core library, but if you're trying to build an executable with #![no_std], you'll still need to define a few lang items that are usually provided by std.
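
The most common example is the panic handler. In a #![no_std] crate (sketch; this won't compile in a crate that links std, because std already defines the handler), it looks like this:

// The #[panic_handler] attribute turns this function into the `panic_impl`
// lang item that core's panic machinery calls into.
use core::panic::PanicInfo;

#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}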

Retrieving a language item

You can retrieve lang items by calling tcx.lang_items().

Here's a small example of retrieving the trait Sized {} language item:

// Note that in case of `#![no_core]`, the trait is not available.
if let Some(sized_trait_def_id) = tcx.lang_items().sized_trait() {
    // do something with `sized_trait_def_id`
}

Note that sized_trait() returns an Option, not the DefId itself. That's because language items are defined in the standard library, so if someone compiles with #![no_core] (or for some lang items, #![no_std]), the lang item may not be present. You can either:

  • Give a hard error if the lang item is necessary to continue (don't panic, since this can happen in user code).
  • Proceed with limited functionality, by just omitting whatever you were going to do with the DefId.

List of all language items

You can find language items in the following places:

  • An exhaustive reference in the compiler documentation: rustc_hir::LangItem
  • A list with source locations, which you can generate with ripgrep: rg '#\[.*lang =' library/

Note that language items are explicitly unstable and may change in any new release.

The HIR

The HIR – "High-Level Intermediate Representation" – is the primary IR used in most of rustc. It is a compiler-friendly representation of the abstract syntax tree (AST) that is generated after parsing, macro expansion, and name resolution (see Lowering for how the HIR is created). Many parts of HIR resemble Rust surface syntax quite closely, with the exception that some of Rust's expression forms have been desugared away. For example, for loops are converted into a loop and do not appear in the HIR. This makes HIR more amenable to analysis than a normal AST.

This chapter covers the main concepts of the HIR.

You can view the HIR representation of your code by passing the -Z unpretty=hir-tree flag to rustc:

cargo rustc -- -Z unpretty=hir-tree

You can also use the -Z unpretty=hir option to generate a HIR that is closer to the original source code expression:

cargo rustc -- -Z unpretty=hir

Out-of-band storage and the Crate type

The top-level data-structure in the HIR is the Crate, which stores the contents of the crate currently being compiled (we only ever construct HIR for the current crate). Whereas in the AST the crate data structure basically just contains the root module, the HIR Crate structure contains a number of maps and other things that serve to organize the content of the crate for easier access.

For example, the contents of individual items (e.g. modules, functions, traits, impls, etc) in the HIR are not immediately accessible in the parents. So, for example, if there is a module item foo containing a function bar():

mod foo {
    fn bar() { }
}

then in the HIR the representation of module foo (the Mod struct) would only have the ItemId I of bar(). To get the details of the function bar(), we would look up I in the items map.

One nice result from this representation is that one can iterate over all items in the crate by iterating over the key-value pairs in these maps (without the need to trawl through the whole HIR). There are similar maps for things like trait items and impl items, as well as "bodies" (explained below).

The other reason to set up the representation this way is for better integration with incremental compilation. This way, if you gain access to an &rustc_hir::Item (e.g. for the mod foo), you do not immediately gain access to the contents of the function bar(). Instead, you only gain access to the id for bar(), and you must invoke some function to lookup the contents of bar() given its id; this gives the compiler a chance to observe that you accessed the data for bar(), and then record the dependency.

Identifiers in the HIR

The HIR uses a bunch of different identifiers that coexist and serve different purposes.

  • A DefId, as the name suggests, identifies a particular definition, or top-level item, in a given crate. It is composed of two parts: a CrateNum which identifies the crate the definition comes from, and a DefIndex which identifies the definition within the crate. Unlike HirIds, there isn't a DefId for every expression, which makes them more stable across compilations.

  • A LocalDefId is basically a DefId that is known to come from the current crate. This allows us to drop the CrateNum part, and use the type system to ensure that only local definitions are passed to functions that expect a local definition.

  • A HirId uniquely identifies a node in the HIR of the current crate. It is composed of two parts: an owner and a local_id that is unique within the owner. This combination makes for more stable values which are helpful for incremental compilation. Unlike DefIds, a HirId can refer to [fine-grained entities][Node] like expressions, but stays local to the current crate.

  • A BodyId identifies a HIR Body in the current crate. It is currently only a wrapper around a HirId. For more info about HIR bodies, please refer to the HIR chapter.

These identifiers can be converted into one another through the HIR map.

The HIR Map

Most of the time when you are working with the HIR, you will do so via the HIR Map, accessible in the tcx via tcx.hir() (and defined in the hir::map module). The HIR map contains a number of methods to convert between IDs of various kinds and to look up data associated with a HIR node.

For example, if you have a LocalDefId, and you would like to convert it to a HirId, you can use tcx.hir().local_def_id_to_hir_id(def_id). You need a LocalDefId, rather than a DefId, since only local items have HIR nodes.

Similarly, you can use tcx.hir().find(n) to look up the node for a HirId. This returns an Option<Node<'hir>>, where Node is an enum defined in the map. By matching on this, you can find out what sort of node the HirId referred to and also get a pointer to the data itself. Often, you know what sort of node n is – e.g. if you know that n must be some HIR expression, you can do tcx.hir().expect_expr(n), which will extract and return the &hir::Expr, panicking if n is not in fact an expression.

Finally, you can use the HIR map to find the parents of nodes, via calls like tcx.hir().get_parent(n).
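
Putting a few of these together, here is a sketch of a lookup using the HIR-map methods named in this section (exact method names have shifted between compiler versions, so treat this as illustrative):

use rustc_hir::def_id::LocalDefId;
use rustc_middle::ty::TyCtxt;

// Go from a LocalDefId to the HIR node it identifies.
fn inspect(tcx: TyCtxt<'_>, def_id: LocalDefId) {
    // Only local items have HIR, so a LocalDefId is required here.
    let hir_id = tcx.hir().local_def_id_to_hir_id(def_id);
    // `find` returns an Option<Node<'_>>; match on it to see what it is.
    if let Some(node) = tcx.hir().find(hir_id) {
        let _ = node;
    }
}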

HIR Bodies

A rustc_hir::Body represents some kind of executable code, such as the body of a function/closure or the definition of a constant. Bodies are associated with an owner, which is typically some kind of item (e.g. an fn() or const), but could also be a closure expression (e.g. |x, y| x + y). You can use the HIR map to find the body associated with a given def-id (maybe_body_owned_by) or to find the owner of a body (body_owner_def_id).

AST lowering

The AST lowering step converts AST to HIR. This means many structures are removed if they are irrelevant for type analysis or similar syntax agnostic analyses. Examples of such structures include but are not limited to

  • Parentheses
    • Removed without replacement; the tree structure makes order explicit
  • for loops and while (let) loops
    • Converted to loop + match and some let bindings
  • if let
    • Converted to match
  • Universal impl Trait
    • Converted to generic arguments (but with some flags, to know that the user didn't write them)
  • Existential impl Trait
    • Converted to a virtual existential type declaration

Lowering needs to uphold several invariants in order to not trigger the sanity checks in compiler/rustc_passes/src/hir_id_validator.rs:

  1. A HirId must be used if created. So if you use lower_node_id, you must use the resulting NodeId or HirId (either is fine, since any NodeIds in the HIR are checked for existing HirIds)
  2. Lowering a HirId must be done in the scope of the owning item. This means you need to use with_hir_id_owner if you are creating parts of an item other than the one being currently lowered. This happens for example during the lowering of existential impl Trait
  3. A NodeId that will be placed into a HIR structure must be lowered, even if its HirId is unused. Calling let _ = self.lower_node_id(node_id); is perfectly legitimate.
  4. If you are creating new nodes that didn't exist in the AST, you must create new ids for them. This is done by calling the next_id method, which produces both a new NodeId as well as automatically lowering it for you so you also get the HirId.

If you are creating new DefIds, since each DefId needs to have a corresponding NodeId, it is advisable to add these NodeIds to the AST so you don't have to generate new ones during lowering. This has the advantage of creating a way to find the DefId of something via its NodeId. If lowering needs this DefId in multiple places, you can't generate a new NodeId in all those places because you'd also get a new DefId then. With a NodeId from the AST this is not an issue.

Having the NodeId also allows the DefCollector to generate the DefIds instead of lowering having to do it on the fly. Centralizing the DefId generation in one place makes it easier to refactor and reason about.

HIR Debugging

Use the -Z unpretty=hir flag to produce a human-readable representation of the HIR. For cargo projects this can be done with cargo rustc -- -Z unpretty=hir. This output is useful when you need to see at a glance how your code was desugared and transformed during AST lowering.

For a full Debug dump of the data in the HIR, use the -Z unpretty=hir-tree flag. This may be useful when you need to see the full structure of the HIR from the perspective of the compiler.

If you are trying to correlate NodeIds or DefIds with source code, the -Z unpretty=expanded,identified flag may be useful.

TODO: anything else? #1159

The THIR

The THIR ("Typed High-Level Intermediate Representation"), previously called HAIR for "High-Level Abstract IR", is another IR used by rustc that is generated after type checking. It is (as of January 2024) used for MIR construction, exhaustiveness checking, and unsafety checking.

As the name might suggest, the THIR is a lowered version of the HIR where all the types have been filled in, which is possible after type checking has completed. But it has some other interesting features that distinguish it from the HIR:

  • Like the MIR, the THIR only represents bodies, i.e. "executable code"; this includes function bodies, but also const initializers, for example. Specifically, all body owners have THIR created. Consequently, the THIR has no representation for items like structs or traits.

  • Each body of THIR is only stored temporarily and is dropped as soon as it's no longer needed, as opposed to being stored until the end of the compilation process (which is what is done with the HIR).

  • Besides making the types of all nodes available, the THIR also has additional desugaring compared to the HIR. For example, automatic references and dereferences are made explicit, and method calls and overloaded operators are converted into plain function calls. Destruction scopes are also made explicit.

  • Statements, expressions, and match arms are stored separately. For example, statements in the stmts array reference expressions by their index (represented as an ExprId) in the exprs array.

The THIR lives in rustc_mir_build::thir. To construct a thir::Expr, you can use the thir_body function, passing in the memory arena where the THIR will be allocated. Dropping this arena will result in the THIR being destroyed, which is useful to keep peak memory in check. Having a THIR representation of all bodies of a crate in memory at the same time would be very heavy.

You can get a debug representation of the THIR by passing the -Zunpretty=thir-tree flag to rustc.

To demonstrate, let's use the following example:

fn main() {
    let x = 1 + 2;
}

Here is how that gets represented in THIR (as of Aug 2022):

Thir {
    // no match arms
    arms: [],
    exprs: [
        // expression 0, a literal with a value of 1
        Expr {
            ty: i32,
            temp_lifetime: Some(
                Node(1),
            ),
            span: oneplustwo.rs:2:13: 2:14 (#0),
            kind: Literal {
                lit: Spanned {
                    node: Int(
                        1,
                        Unsuffixed,
                    ),
                    span: oneplustwo.rs:2:13: 2:14 (#0),
                },
                neg: false,
            },
        },
        // expression 1, scope surrounding literal 1
        Expr {
            ty: i32,
            temp_lifetime: Some(
                Node(1),
            ),
            span: oneplustwo.rs:2:13: 2:14 (#0),
            kind: Scope {
                // reference to expression 0 above
                region_scope: Node(3),
                lint_level: Explicit(
                    HirId {
                        owner: DefId(0:3 ~ oneplustwo[6932]::main),
                        local_id: 3,
                    },
                ),
                value: e0,
            },
        },
        // expression 2, literal 2
        Expr {
            ty: i32,
            temp_lifetime: Some(
                Node(1),
            ),
            span: oneplustwo.rs:2:17: 2:18 (#0),
            kind: Literal {
                lit: Spanned {
                    node: Int(
                        2,
                        Unsuffixed,
                    ),
                    span: oneplustwo.rs:2:17: 2:18 (#0),
                },
                neg: false,
            },
        },
        // expression 3, scope surrounding literal 2
        Expr {
            ty: i32,
            temp_lifetime: Some(
                Node(1),
            ),
            span: oneplustwo.rs:2:17: 2:18 (#0),
            kind: Scope {
                region_scope: Node(4),
                lint_level: Explicit(
                    HirId {
                        owner: DefId(0:3 ~ oneplustwo[6932]::main),
                        local_id: 4,
                    },
                ),
                // reference to expression 2 above
                value: e2,
            },
        },
        // expression 4, represents 1 + 2
        Expr {
            ty: i32,
            temp_lifetime: Some(
                Node(1),
            ),
            span: oneplustwo.rs:2:13: 2:18 (#0),
            kind: Binary {
                op: Add,
                // references to scopes surrounding literals above
                lhs: e1,
                rhs: e3,
            },
        },
        // expression 5, scope surrounding expression 4
        Expr {
            ty: i32,
            temp_lifetime: Some(
                Node(1),
            ),
            span: oneplustwo.rs:2:13: 2:18 (#0),
            kind: Scope {
                region_scope: Node(5),
                lint_level: Explicit(
                    HirId {
                        owner: DefId(0:3 ~ oneplustwo[6932]::main),
                        local_id: 5,
                    },
                ),
                value: e4,
            },
        },
        // expression 6, block around statement
        Expr {
            ty: (),
            temp_lifetime: Some(
                Node(9),
            ),
            span: oneplustwo.rs:1:11: 3:2 (#0),
            kind: Block {
                body: Block {
                    targeted_by_break: false,
                    region_scope: Node(8),
                    opt_destruction_scope: None,
                    span: oneplustwo.rs:1:11: 3:2 (#0),
                    // reference to statement 0 below
                    stmts: [
                        s0,
                    ],
                    expr: None,
                    safety_mode: Safe,
                },
            },
        },
        // expression 7, scope around block in expression 6
        Expr {
            ty: (),
            temp_lifetime: Some(
                Node(9),
            ),
            span: oneplustwo.rs:1:11: 3:2 (#0),
            kind: Scope {
                region_scope: Node(9),
                lint_level: Explicit(
                    HirId {
                        owner: DefId(0:3 ~ oneplustwo[6932]::main),
                        local_id: 9,
                    },
                ),
                value: e6,
            },
        },
        // destruction scope around expression 7
        Expr {
            ty: (),
            temp_lifetime: Some(
                Node(9),
            ),
            span: oneplustwo.rs:1:11: 3:2 (#0),
            kind: Scope {
                region_scope: Destruction(9),
                lint_level: Inherited,
                value: e7,
            },
        },
    ],
    stmts: [
        // let statement
        Stmt {
            kind: Let {
                remainder_scope: Remainder { block: 8, first_statement_index: 0},
                init_scope: Node(1),
                pattern: Pat {
                    ty: i32,
                    span: oneplustwo.rs:2:9: 2:10 (#0),
                    kind: Binding {
                        mutability: Not,
                        name: "x",
                        mode: ByValue,
                        var: LocalVarId(
                            HirId {
                                owner: DefId(0:3 ~ oneplustwo[6932]::main),
                                local_id: 7,
                            },
                        ),
                        ty: i32,
                        subpattern: None,
                        is_primary: true,
                    },
                },
                initializer: Some(
                    e5,
                ),
                else_block: None,
                lint_level: Explicit(
                    HirId {
                        owner: DefId(0:3 ~ oneplustwo[6932]::main),
                        local_id: 6,
                    },
                ),
            },
            opt_destruction_scope: Some(
                Destruction(1),
            ),
        },
    ],
}

The MIR (Mid-level IR)

MIR is Rust's Mid-level Intermediate Representation. It is constructed from HIR. MIR was introduced in RFC 1211. It is a radically simplified form of Rust that is used for certain flow-sensitive safety checks – notably the borrow checker! – and also for optimization and code generation.

If you'd like a very high-level introduction to MIR, as well as some of the compiler concepts that it relies on (such as control-flow graphs and desugaring), you may enjoy the rust-lang blog post that introduced MIR.

Introduction to MIR

MIR is defined in the compiler/rustc_middle/src/mir/ module, but much of the code that manipulates it is found in compiler/rustc_mir_build, compiler/rustc_mir_transform, and compiler/rustc_mir_dataflow.

Some of the key characteristics of MIR are:

  • It is based on a control-flow graph.
  • It does not have nested expressions.
  • All types in MIR are fully explicit.

Key MIR vocabulary

This section introduces the key concepts of MIR, summarized here:

  • Basic blocks: units of the control-flow graph, consisting of:
    • statements: actions with one successor
    • terminators: actions with potentially multiple successors; always at the end of a block
    • (if you're not familiar with the term basic block, see the background chapter)
  • Locals: Memory locations allocated on the stack (conceptually, at least), such as function arguments, local variables, and temporaries. These are identified by an index, written with a leading underscore, like _1. There is also a special "local" (_0) allocated to store the return value.
  • Places: expressions that identify a location in memory, like _1 or _1.f.
  • Rvalues: expressions that produce a value. The "R" stands for the fact that these are the "right-hand side" of an assignment.
    • Operands: the arguments to an rvalue, which can either be a constant (like 22) or a place (like _1).

You can get a feeling for how MIR is constructed by translating simple programs into MIR and reading the pretty printed output. In fact, the playground makes this easy, since it supplies a MIR button that will show you the MIR for your program. Try putting this program into play (or clicking on this link), and then clicking the "MIR" button on the top:

fn main() {
    let mut vec = Vec::new();
    vec.push(1);
    vec.push(2);
}

You should see something like:

// WARNING: This output format is intended for human consumers only
// and is subject to change without notice. Knock yourself out.
fn main() -> () {
    ...
}

This is the MIR format for the main function. The MIR shown by the above link is optimized; some statements, like StorageLive, are removed by optimization because the compiler notices the value is never accessed in the code. We can use rustc [filename].rs -Z mir-opt-level=0 --emit mir to view unoptimized MIR (this requires the nightly toolchain).

Variable declarations. If we drill in a bit, we'll see it begins with a bunch of variable declarations. They look like this:

let mut _0: ();                      // return place
let mut _1: std::vec::Vec<i32>;      // in scope 0 at src/main.rs:2:9: 2:16
let mut _2: ();
let mut _3: &mut std::vec::Vec<i32>;
let mut _4: ();
let mut _5: &mut std::vec::Vec<i32>;

You can see that variables in MIR don't have names, they have indices, like _0 or _1. We also intermingle the user's variables (e.g., _1) with temporary values (e.g., _2 or _3). You can tell apart user-defined variables because they have debuginfo associated with them (see below).

User variable debuginfo. Below the variable declarations, we find the only hint that _1 represents a user variable:

scope 1 {
    debug vec => _1;                 // in scope 1 at src/main.rs:2:9: 2:16
}

Each debug <Name> => <Place>; annotation describes a named user variable, and where (i.e. the place) a debugger can find the data of that variable. Here the mapping is trivial, but optimizations may complicate the place, or lead to multiple user variables sharing the same place. Additionally, closure captures are described using the same system, and so they're complicated even without optimizations, e.g.: debug x => (*((*_1).0: &T));.

The "scope" blocks (e.g., scope 1 { .. }) describe the lexical structure of the source program (which names were in scope when), so any part of the program annotated with // in scope 0 would be missing vec, if you were stepping through the code in a debugger, for example.

Basic blocks. Reading further, we see our first basic block (naturally it may look slightly different when you view it, and I am ignoring some of the comments):

bb0: {
    StorageLive(_1);
    _1 = const <std::vec::Vec<T>>::new() -> bb2;
}

A basic block is defined by a series of statements and a final terminator. In this case, there is one statement:

StorageLive(_1);

This statement indicates that the variable _1 is "live", meaning that it may be used later – this will persist until we encounter a StorageDead(_1) statement, which indicates that the variable _1 is done being used. These "storage statements" are used by LLVM to allocate stack space.

The terminator of the block bb0 is the call to Vec::new:

_1 = const <std::vec::Vec<T>>::new() -> bb2;

Terminators are different from statements because they can have more than one successor – that is, control may flow to different places. Function calls like the call to Vec::new are always terminators because of the possibility of unwinding, although in the case of Vec::new we are able to see that indeed unwinding is not possible, and hence we list only one successor block, bb2.

If we look ahead to bb2, we will see it looks like this:

bb2: {
    StorageLive(_3);
    _3 = &mut _1;
    _2 = const <std::vec::Vec<T>>::push(move _3, const 1i32) -> [return: bb3, unwind: bb4];
}

Here there are two statements: another StorageLive, introducing the _3 temporary, and then an assignment:

_3 = &mut _1;

Assignments in general have the form:

<Place> = <Rvalue>

A place is an expression like _3, _3.f or *_3 – it denotes a location in memory. An Rvalue is an expression that creates a value: in this case, the rvalue is a mutable borrow expression, which looks like &mut <Place>. So we can kind of define a grammar for rvalues like so:

<Rvalue>  = & (mut)? <Place>
          | <Operand> + <Operand>
          | <Operand> - <Operand>
          | ...

<Operand> = Constant
          | copy Place
          | move Place

As you can see from this grammar, rvalues cannot be nested – they can only reference places and constants. Moreover, when you use a place, we indicate whether we are copying it (which requires that the place have a type T where T: Copy) or moving it (which works for a place of any type). So, for example, if we had the expression x = a + b + c in Rust, that would get compiled to two statements and a temporary:

TMP1 = a + b
x = TMP1 + c

(Try it and see, though you may want to do release mode to skip over the overflow checks.)

MIR data types

The MIR data types are defined in the compiler/rustc_middle/src/mir/ module. Each of the key concepts mentioned in the previous section maps in a fairly straightforward way to a Rust type.

The main MIR data type is Body. It contains the data for a single function (along with sub-instances of Mir for "promoted constants", but you can read about those below).

  • Basic blocks: The basic blocks are stored in the field Body::basic_blocks; this is a vector of BasicBlockData structures. Nobody ever references a basic block directly: instead, we pass around BasicBlock values, which are newtype'd indices into this vector.
  • Statements are represented by the type Statement.
  • Terminators are represented by the Terminator.
  • Locals are represented by a newtype'd index type Local. The data for a local variable is found in the Body::local_decls vector. There is also a special constant RETURN_PLACE identifying the special "local" representing the return value.
  • Places are identified by the struct Place. There are a few fields:
    • Local variables like _1
    • Projections, which are fields or other things that "project out" from a base place. These are represented by the type ProjectionElem. So e.g. the place _1.f is a projection, with f being the "projection element" and _1 being the base path. *_1 is also a projection, with the * being represented by the ProjectionElem::Deref element.
  • Rvalues are represented by the enum Rvalue.
  • Operands are represented by the enum Operand.
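
As a small usage sketch of the types above (assuming a mir::Body is already in hand, e.g. from the optimized_mir query), counting the statements in a body might look like this:

use rustc_middle::mir::Body;

// Iterate over the basic blocks and sum up how many statements they hold;
// terminators are not counted since they are stored separately.
fn count_statements(body: &Body<'_>) -> usize {
    body.basic_blocks
        .iter()
        .map(|block| block.statements.len())
        .sum()
}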

Representing constants

When code has reached the MIR stage, constants can generally come in two forms: MIR constants (mir::Constant) and type system constants (ty::Const). MIR constants are used as operands: in x + CONST, CONST is a MIR constant; similarly, in x + 2, 2 is a MIR constant. Type system constants are used in the type system, in particular for array lengths but also for const generics.

Generally, both kinds of constants can be "unevaluated" or "already evaluated". An unevaluated constant simply stores the DefId of what needs to be evaluated to compute this result. An evaluated constant (a "value") has already been computed; their representation differs between type system constants and MIR constants: MIR constants evaluate to a mir::ConstValue; type system constants evaluate to a ty::ValTree.

Type system constants have some more variants to support const generics: they can refer to local const generic parameters, and they are subject to inference. Furthermore, the mir::Constant::Ty variant lets us use an arbitrary type system constant as a MIR constant; this happens whenever a const generic parameter is used as an operand.

MIR constant values

In general, a MIR constant value (mir::ConstValue) was computed by evaluating some constant the user wrote. This const evaluation produces a very low-level representation of the result in terms of individual bytes. We call this an "indirect" constant (mir::ConstValue::Indirect) since the value is stored in-memory.

However, storing everything in-memory would be awfully inefficient. Hence there are some other variants in mir::ConstValue that can represent certain simple and common values more efficiently. In particular, everything that can be directly written as a literal in Rust (integers, floats, chars, bools, but also "string literals" and b"byte string literals") has an optimized variant that avoids the full overhead of the in-memory representation.

ValTrees

An evaluated type system constant is a "valtree". The ty::ValTree data structure allows us to represent

  • arrays,
  • many structs,
  • tuples,
  • enums and,
  • most primitives.

The most important rule for this representation is that every value must be uniquely represented. In other words: a specific value must only be representable in one specific way. For example: there is only one way to represent an array of two integers as a ValTree: ValTree::Branch(&[ValTree::Leaf(first_int), ValTree::Leaf(second_int)]). Even though theoretically a [u32; 2] could be encoded in a u64 and thus just be a ValTree::Leaf(bits_of_two_u32), that is not a legal construction of ValTree (and is very complex to do, so it is unlikely anyone is tempted to do so).

These rules also mean that some values are not representable. There can be no unions in type level constants, as it is not clear how they should be represented, because their active variant is unknown. Similarly there is no way to represent raw pointers, as addresses are unknown at compile-time and thus we cannot make any assumptions about them. References on the other hand can be represented, as equality for references is defined as equality on their value, so we ignore their address and just look at the backing value. We must make sure that the pointer values of the references are not observable at compile time. We thus encode &42 exactly like 42. Any conversion from valtree back to a MIR constant value must reintroduce an actual indirection. At codegen time the addresses may be deduplicated between multiple uses or not, entirely depending on arbitrary optimization choices.

As a consequence, all decoding of ValTree must happen by matching on the type first and making decisions depending on that. The value itself gives no useful information without the type that belongs to it.

See the const-eval WG's docs on promotion.

MIR construction

The lowering of HIR to MIR occurs for the following (probably incomplete) list of items:

  • Function and closure bodies
  • Initializers of static and const items
  • Initializers of enum discriminants
  • Glue and shims of any kind
    • Tuple struct initializer functions
    • Drop code (the Drop::drop function is not called directly)
    • Drop implementations of types without an explicit Drop implementation

The lowering is triggered by calling the mir_built query. The MIR builder does not actually use the HIR but operates on the THIR instead, processing THIR expressions recursively.

The lowering creates local variables for every argument as specified in the signature. Next, it creates local variables for every binding in the argument patterns; e.g. (a, b): (i32, String) produces 3 locals: one for the argument itself and two for the bindings. It then generates field accesses that read the fields from the argument and write the values to the binding variables.

With this initialization out of the way, the lowering triggers a recursive call to a function that generates the MIR for the body (a Block expression) and writes the result into the RETURN_PLACE.
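
To make that concrete, here is a hand-written sketch (not actual compiler output) of roughly what the built MIR for a tiny function looks like; real output differs in details and, in debug builds, includes overflow checks:

fn add((a, b): (i32, i32)) -> i32 {
    a + b
}

// Approximate shape of the built MIR (sketch):
//
//     _0: i32           // RETURN_PLACE
//     _1: (i32, i32)    // the argument as written in the signature
//     _2: i32           // binding `a`
//     _3: i32           // binding `b`
//
//     bb0: {
//         _2 = (_1.0: i32);            // field accesses initializing the bindings
//         _3 = (_1.1: i32);
//         _0 = Add(move _2, move _3);
//         return;
//     }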

unpack! all the things

Functions that generate MIR tend to fall into one of two patterns. First, if the function generates only statements, then it will take a basic block as argument onto which those statements should be appended. It can then return a result as normal:

fn generate_some_mir(&mut self, block: BasicBlock) -> ResultType {
   ...
}

But there are other functions that may generate new basic blocks as well. For example, lowering an expression like if foo { 22 } else { 44 } requires generating a small "diamond-shaped graph". In this case, the functions take a basic block where their code starts and return a (potentially) new basic block where the code generation ends. The BlockAnd type is used to represent this:

fn generate_more_mir(&mut self, block: BasicBlock) -> BlockAnd<ResultType> {
    ...
}

When you invoke these functions, it is common to have a local variable block that is effectively a "cursor". It represents the point at which we are adding new MIR. When you invoke generate_more_mir, you want to update this cursor. You can do this manually, but it's tedious:

let mut block;
let v = match self.generate_more_mir(..) {
    BlockAnd { block: new_block, value: v } => {
        block = new_block;
        v
    }
};

For this reason, we offer a macro that lets you write let v = unpack!(block = self.generate_more_mir(...)). It simply extracts the new block and overwrites the variable block that you named in the unpack!.
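
In builder code this pattern ends up looking roughly like the following sketch. The method names lower_operand and lower_block are hypothetical stand-ins rather than the real rustc_mir_build API; only unpack!, BasicBlock and BlockAnd are taken from the discussion above:

// Hypothetical sketch of the "cursor" pattern.
fn lower_if(&mut self, mut block: BasicBlock, cond: ExprId, then: ExprId) -> BlockAnd<()> {
    // Every call may append statements and/or move the cursor to a new block.
    let _cond_operand = unpack!(block = self.lower_operand(block, cond));
    let _then_block = unpack!(block = self.lower_block(block, then));
    // ... build the branch terminator, join the arms, etc. ...
    // Package the final cursor position together with the result value.
    block.and(())
}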

Lowering expressions into the desired MIR

There are essentially four kinds of representations one might want of an expression:

  • Place refers to a (or part of a) preexisting memory location (local, static, promoted)
  • Rvalue is something that can be assigned to a Place
  • Operand is an argument to e.g. a + operation or a function call
  • a temporary variable containing a copy of the value

(The rendered guide includes a diagram giving a general overview of the interactions between these representations.) In summary:

We start out by lowering the function body to an Rvalue so we can create an assignment to RETURN_PLACE. This Rvalue lowering will in turn trigger lowering to Operand for its arguments (if any). Operand lowering either produces a const operand, or moves/copies out of a Place, thus triggering a Place lowering. An expression being lowered to a Place can in turn trigger a temporary to be created if the expression being lowered contains operations. This is where the snake bites its own tail, and we need to trigger an Rvalue lowering for the expression to be written into the local.

Operator lowering

Operators on builtin types are not lowered to function calls (which would end up in infinite recursion, because the trait impls just contain the operation itself again). Instead there are Rvalues for binary and unary operators and index operations. These Rvalues later get codegened to LLVM primitive operations or LLVM intrinsics.

Operators on all other types get lowered to a function call to their impl of the operator's corresponding trait.

Regardless of the lowering kind, the arguments to the operator are lowered to Operands. This means all arguments are either constants, or refer to an already existing value somewhere in a local or static.
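
As a rough, hand-written illustration of the difference (not actual compiler output; Matrix is a made-up user type):

// Built-in operator on primitives: stays an Rvalue in a statement and is later
// codegened to an LLVM primitive operation or intrinsic.
//     _3 = Add(move _1, move _2);
//
// Overloaded operator on a user-defined type: becomes an ordinary call
// terminator to the trait impl; the arguments are still Operands.
//     _3 = <Matrix as std::ops::Add>::add(move _1, move _2) -> [return: bb2, unwind: bb3];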

Method call lowering

Method calls are lowered to the same TerminatorKind that function calls are. In MIR there is no difference between method calls and function calls anymore.

Conditions

if conditions and match statements for enums with variants that have no fields are lowered to TerminatorKind::SwitchInt. Each possible value (so 0 and 1 for if conditions) has a corresponding BasicBlock to which the code continues. The argument being branched on is (again) an Operand representing the value of the if condition.
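
For instance, an if might lower to something like the following hand-written sketch (not actual compiler output):

// Source: if cond { 1 } else { 2 }
//
//     bb0: {
//         _2 = _1;                                        // Operand: the `if` condition
//         switchInt(move _2) -> [0: bb2, otherwise: bb1]; // 0 = false, otherwise = true
//     }
//     bb1: { _0 = const 1_i32; goto -> bb3; }             // then branch
//     bb2: { _0 = const 2_i32; goto -> bb3; }             // else branch
//     bb3: { ... }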

Pattern matching

match statements for enums with variants that have fields are lowered to TerminatorKind::SwitchInt, too, but the Operand refers to a Place where the discriminant of the value can be found. This often involves reading the discriminant to a new temporary variable.
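
So for an enum, the lowering first reads the discriminant into a temporary and then branches on it, roughly (hand-written sketch):

// Source: match opt { None => ..., Some(_) => ... }
//
//     _2 = discriminant(_1);          // read the discriminant into a new temporary
//     switchInt(move _2) -> [0: bb_none, 1: bb_some, otherwise: bb_unreachable];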

Aggregate construction

Aggregate values of any kind (e.g. structs or tuples) are built via Rvalue::Aggregate. All fields are lowered to Operands. This is essentially equivalent to one assignment statement per aggregate field plus an assignment to the discriminant in the case of enums.
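
In the pretty-printed MIR this shows up as a single aggregate assignment, roughly (hand-written sketch; Point is a made-up struct):

//     _1 = (move _2, move _3);                 // tuple via Rvalue::Aggregate
//     _1 = Point { x: move _2, y: move _3 };   // struct via Rvalue::Aggregate
//     _1 = Option::<i32>::Some(move _2);       // enum: the discriminant is set as well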

MIR visitor

The MIR visitor is a convenient tool for traversing the MIR and either looking for things or making changes to it. The visitor traits are defined in the rustc_middle::mir::visit module – there are two of them, generated via a single macro: Visitor (which operates on a &Body and gives back shared references) and MutVisitor (which operates on a &mut Body and gives back mutable references).

To implement a visitor, you have to create a type that represents your visitor. Typically, this type wants to "hang on" to whatever state you will need while processing MIR:

struct MyVisitor<'tcx> {
    tcx: TyCtxt<'tcx>,
    ...
}

and you then implement the Visitor or MutVisitor trait for that type:

impl<'tcx> MutVisitor<'tcx> for MyVisitor<'tcx> {
    fn visit_foo(&mut self, ...) {
        ...
        self.super_foo(...);
    }
}

As shown above, within the impl, you can override any of the visit_foo methods (e.g., visit_terminator) in order to write some code that will execute whenever a foo is found. If you want to recursively walk the contents of the foo, you then invoke the super_foo method. (NB. You never want to override super_foo.)

A very simple example of a visitor can be found in LocalFinder. By implementing the visit_local method, this visitor identifies local variables that can be candidates for reordering.
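
A complete (if simplified) visitor in this style might look like the following sketch; the exact visit_local signature and the bit-set type name have changed across compiler versions:

use rustc_index::bit_set::BitSet;
use rustc_middle::mir::visit::{PlaceContext, Visitor};
use rustc_middle::mir::{Body, Local, Location};

// Records every local that is mentioned anywhere in the body.
struct UsedLocals {
    used: BitSet<Local>,
}

impl<'tcx> Visitor<'tcx> for UsedLocals {
    fn visit_local(&mut self, local: Local, _context: PlaceContext, _location: Location) {
        self.used.insert(local);
    }
}

fn used_locals(body: &Body<'_>) -> BitSet<Local> {
    let mut visitor = UsedLocals { used: BitSet::new_empty(body.local_decls.len()) };
    visitor.visit_body(body);
    visitor.used
}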

Traversal

In addition to the visitor, the rustc_middle::mir::traversal module contains useful functions for walking the MIR CFG in different standard orders (e.g. pre-order, reverse post-order, and so forth).
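
For example, a reverse post-order walk (which visits a block before its successors, ignoring back edges) looks roughly like this sketch:

use rustc_middle::mir::{traversal, Body};

fn walk_in_rpo(body: &Body<'_>) {
    // reverse_postorder yields (BasicBlock, &BasicBlockData) pairs.
    for (bb, bb_data) in traversal::reverse_postorder(body) {
        let _ = (bb, bb_data.statements.len());
    }
}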

MIR queries and passes

If you would like to get the MIR:

  • for a function - you can use the optimized_mir query (typically used by codegen) or the mir_for_ctfe query (typically used by compile time function evaluation, i.e., CTFE);
  • for a promoted - you can use the promoted_mir query.

These will give you back the final, optimized MIR. For foreign def-ids, we simply read the MIR from the other crate's metadata. But for local def-ids, the query will construct the optimized MIR by requesting a pipeline of upstream queries [1]. Each query will contain a series of passes. This section describes how those queries and passes work and how you can extend them.
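
For example, compiler-internal code that already has a TyCtxt and a suitable DefId in hand can simply invoke these queries (a sketch; the def-id must refer to an item that actually has MIR):

use rustc_hir::def_id::DefId;
use rustc_middle::ty::TyCtxt;

fn inspect_final_mir(tcx: TyCtxt<'_>, def_id: DefId) {
    // Final, optimized MIR for the function (what codegen consumes).
    let body = tcx.optimized_mir(def_id);
    // MIR bodies for the function's promoted constants, if any.
    let promoted = tcx.promoted_mir(def_id);
    let _ = (body.basic_blocks.len(), promoted.len());
}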

To produce the optimized MIR for a given def-id D, optimized_mir(D) goes through several suites of passes, each grouped by a query. Each suite consists of passes which perform linting, analysis, transformation or optimization. Each query represents a useful intermediate point where we can access the MIR dialect for type checking or other purposes:

  • mir_built(D) – it gives the initial MIR just after it's built;
  • mir_const(D) – it applies some simple transformation passes to make MIR ready for const qualification;
  • mir_promoted(D) - it extracts promotable temps into separate MIR bodies, and also makes MIR ready for borrow checking;
  • mir_drops_elaborated_and_const_checked(D) - it performs borrow checking, runs major transformation passes (such as drop elaboration) and makes MIR ready for optimization;
  • optimized_mir(D) – it performs all enabled optimizations and reaches the final state.
[1] See the Queries chapter for the general concept of query.

Implementing and registering a pass

A MirPass is some bit of code that processes the MIR, typically transforming it along the way somehow. But it may also do other things like linting (e.g., CheckPackedRef, CheckConstItemMutation, FunctionItemReferences, which implement MirLint) or optimization (e.g., SimplifyCfg, RemoveUnneededDrops). While most MIR passes are defined in the rustc_mir_transform crate, the MirPass trait itself is found in the rustc_middle crate, and it basically consists of one primary method, run_pass, that simply gets an &mut Body (along with the tcx). The MIR is therefore modified in place (which helps to keep things efficient).

A basic example of a MIR pass is RemoveStorageMarkers, which walks the MIR and removes all storage markers if they won't be emitted during codegen. As you can see from its source, a MIR pass is defined by first defining a dummy type, a struct with no fields:

pub struct RemoveStorageMarkers;

for which we implement the MirPass trait. We can then insert this pass into the appropriate list of passes found in a query like mir_built, optimized_mir, etc. (If this is an optimization, it should go into the optimized_mir list.)
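
For flavor, the body of such a pass is usually a short in-place edit of the Body. The following is a simplified sketch of what a marker-removing pass does (the real RemoveStorageMarkers additionally consults the session to decide whether the markers are needed):

use rustc_middle::mir::{Body, MirPass, StatementKind};
use rustc_middle::ty::TyCtxt;

impl<'tcx> MirPass<'tcx> for RemoveStorageMarkers {
    fn run_pass(&self, _tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>) {
        // Edit the MIR in place: drop every StorageLive/StorageDead statement.
        for block in body.basic_blocks_mut().iter_mut() {
            block.statements.retain(|stmt| {
                !matches!(
                    stmt.kind,
                    StatementKind::StorageLive(..) | StatementKind::StorageDead(..)
                )
            });
        }
    }
}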

Another example of a simple MIR pass is CleanupPostBorrowck, which walks the MIR and removes all statements that are not relevant to code generation. As you can see from its source, it is defined by first defining a dummy type, a struct with no fields:

pub struct CleanupPostBorrowck;

for which we implement the MirPass trait:

impl<'tcx> MirPass<'tcx> for CleanupPostBorrowck {
    fn run_pass(&self, tcx: TyCtxt<'tcx>, body: &mut Body<'tcx>) {
        ...
    }
}

We register this pass inside the mir_drops_elaborated_and_const_checked query. (If this is an optimization, it should go into the optimized_mir list.)

If you are writing a pass, there's a good chance that you are going to want to use a MIR visitor. MIR visitors are a handy way to walk all the parts of the MIR, either to search for something or to make small edits.

Stealing

The intermediate queries mir_const() and mir_promoted() yield up a &'tcx Steal<Body<'tcx>>, allocated using tcx.alloc_steal_mir(). This indicates that the result may be stolen by a subsequent query – this is an optimization to avoid cloning the MIR. Attempting to use a stolen result will cause a panic in the compiler. Therefore, it is important that you do not accidentally read from these intermediate queries without taking their position in the MIR processing pipeline's dependency order into account.
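
In code, a reader borrows the Steal-wrapped result, while the next pipeline stage consumes it with steal(). A sketch of the reading side (the exact argument type of mir_promoted has varied between compiler versions):

use rustc_hir::def_id::LocalDefId;
use rustc_middle::ty::TyCtxt;

// Hypothetical reader of the promoted-MIR pipeline result.
fn peek_at_promoted_pipeline(tcx: TyCtxt<'_>, def_id: LocalDefId) -> usize {
    let (body_steal, _promoted_steal) = tcx.mir_promoted(def_id);
    // `borrow()` reads the result without consuming it; it panics if a later
    // pipeline stage has already called `steal()` on it.
    let body = body_steal.borrow();
    body.basic_blocks.len()
}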

Because of this stealing mechanism, some care must be taken to ensure that, before the MIR at a particular phase in the processing pipeline is stolen, anyone who may want to read from it has already done so.

Concretely, this means that if you have a query foo(D) that wants to access the result of mir_promoted(D), foo(D) must be called before that result is stolen by the next query in the pipeline. In practice the stealing query calls foo(D) first purely for this side effect, forcing it to execute even though nothing directly requires its result.

This mechanism is a bit dodgy. There is a discussion of more elegant alternatives in rust-lang/rust#41710.

Overview

Below is an overview of the stealing dependency in the MIR processing pipeline [2]:

flowchart BT
  mir_for_ctfe* --borrow--> id40
  id5 --steal--> id40

  mir_borrowck* --borrow--> id3
  id41 --steal part 1--> id3
  id40 --steal part 0--> id3

  mir_const_qualif* -- borrow --> id2
  id3 -- steal --> id2

  id2 -- steal --> id1

  id1([mir_built])
  id2([mir_const])
  id3([mir_promoted])
  id40([mir_drops_elaborated_and_const_checked])
  id41([promoted_mir])
  id5([optimized_mir])

  style id1 fill:#bbf
  style id2 fill:#bbf
  style id3 fill:#bbf
  style id40 fill:#bbf
  style id41 fill:#bbf
  style id5 fill:#bbf

The stadium-shaped queries (e.g., mir_built), drawn with the darker fill, are the primary queries in the pipeline, while the rectangle-shaped queries (e.g., mir_const_qualif* [3]), drawn with the lighter fill, are the subsequent queries that need to read the results from &'tcx Steal<Body<'tcx>>. With the stealing mechanism, the rectangle-shaped queries must be performed before any stadium-shaped query at an equal or greater height in the dependency tree runs.

[2] The mir_promoted query yields a tuple (&'tcx Steal<Body<'tcx>>, &'tcx Steal<IndexVec<Promoted, Body<'tcx>>>); promoted_mir steals part 1 (the promoted bodies) and mir_drops_elaborated_and_const_checked steals part 0 (the main body). The two steals are independent of each other and can be performed separately.

[3] The * suffix denotes a set of queries sharing the same prefix. For example, mir_borrowck* represents mir_borrowck, mir_borrowck_const_arg and mir_borrowck_opt_const_arg.

Example

As an example, consider MIR const qualification. It wants to read the result produced by the mir_const query. However, that result will be stolen by the mir_promoted query at some point in the pipeline. Before mir_promoted is ever queried, calling the mir_const_qualif query will succeed, since mir_const will compute the Steal result on the first call (and return the cached result on later calls) and that result has not been stolen yet. After mir_promoted is queried, the result will have been stolen, and calling the mir_const_qualif query to read it would cause a panic.

Therefore, with this stealing mechanism, mir_promoted should guarantee any mir_const_qualif* queries are called before it actually steals, thus ensuring that the reads have already happened (remember that queries are memoized, so executing a query twice simply loads from a cache the second time).

Inline assembly

Overview

Inline assembly in rustc mostly revolves around taking an asm! macro invocation and plumbing it through all of the compiler layers down to LLVM codegen. Throughout the various stages, an InlineAsm generally consists of 3 components:

  • The template string, which is stored as an array of InlineAsmTemplatePiece. Each piece represents either a literal or a placeholder for an operand (just like format strings).

    pub enum InlineAsmTemplatePiece {
        String(String),
        Placeholder { operand_idx: usize, modifier: Option<char>, span: Span },
    }
    
  • The list of operands to the asm! (in, [late]out, in[late]out, sym, const). These are represented differently at each stage of lowering, but follow a common pattern:

    • in, out and inout all have an associated register class (reg) or explicit register ("eax").
    • inout has 2 forms: one with a single expression that is both read from and written to, and one with two separate expressions for the input and output parts.
    • out and inout have a late flag (lateout / inlateout) to indicate that the register allocator is allowed to reuse an input register for this output.
    • out and the split variant of inout allow _ to be specified for an output, which means that the output is discarded. This is used to allocate scratch registers for assembly code.
    • const refers to an anonymous constant and generally works like an inline const.
    • sym is a bit special since it only accepts a path expression, which must point to a static or a fn.
  • The options set at the end of the asm! macro. The only ones that are of particular interest to rustc are NORETURN, which makes asm! return ! instead of (), and RAW, which disables format string parsing. The remaining options are mostly passed through to LLVM with little processing.

    bitflags::bitflags! {
        pub struct InlineAsmOptions: u16 {
            const PURE = 1 << 0;
            const NOMEM = 1 << 1;
            const READONLY = 1 << 2;
            const PRESERVES_FLAGS = 1 << 3;
            const NORETURN = 1 << 4;
            const NOSTACK = 1 << 5;
            const ATT_SYNTAX = 1 << 6;
            const RAW = 1 << 7;
            const MAY_UNWIND = 1 << 8;
        }
    }
    

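Tying the three components together, here is an ordinary user-facing asm! invocation (x86_64-specific), annotated with how its pieces map onto the structures above:

use std::arch::asm;

fn add_five(x: u64) -> u64 {
    let result: u64;
    unsafe {
        // Template pieces: string literals interleaved with `{0}`/`{1}` placeholders.
        // Operands: one `out(reg)` and one `in(reg)`, each with a register class.
        // Options: `nostack` corresponds to InlineAsmOptions::NOSTACK.
        asm!(
            "mov {0}, {1}",
            "add {0}, 5",
            out(reg) result,
            in(reg) x,
            options(nostack),
        );
    }
    result
}
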
AST

InlineAsm is represented as an expression in the AST with the ast::InlineAsm type.

The asm! macro is implemented in rustc_builtin_macros and outputs an InlineAsm AST node. The template string is parsed using fmt_macros, and positional and named operands are resolved to explicit operand indices. Since target information is not available to macro invocations, validation of the registers and register classes is deferred to AST lowering.

HIR

InlineAsm is represented as an expression in the HIR with the hir::InlineAsm type.

AST lowering is where InlineAsmRegOrRegClass is converted from Symbols to an actual register or register class. If any modifiers are specified for a template string placeholder, these are validated against the set allowed for that operand type. Finally, explicit registers for inputs and outputs are checked for conflicts (same register used for different operands).

Type checking

Each register class has a whitelist of types that it may be used with. After the types of all operands have been determined, the intrinsicck pass will check that these types are in the whitelist. It also checks that split inout operands have compatible types and that const operands are integers or floats. Suggestions are emitted where needed if a template modifier should be used for an operand based on the type that was passed into it.

THIR

InlineAsm is represented as an expression in the THIR with the InlineAsmExpr type.

The only significant change compared to HIR is that Sym has been lowered to either a SymFn whose expr is a Literal ZST of the fn, or a SymStatic which points to the DefId of a static.

MIR

InlineAsm is represented as a Terminator in the MIR with the TerminatorKind::InlineAsm variant.

As part of THIR lowering, InOut and SplitInOut operands are lowered to a split form with a separate in_value and out_place.

Semantically, the InlineAsm terminator is similar to the Call terminator except that it has multiple output places where a Call only has a single return place output.

Codegen

Operands are lowered one more time before being passed to LLVM codegen; this is represented by the InlineAsmOperandRef type from rustc_codegen_ssa.

The operands are lowered to LLVM operands and constraint codes as follows:

  • out and the output part of inout operands are added first, as required by LLVM. Late output operands have a = prefix added to their constraint code, non-late output operands have a =& prefix added to their constraint code.
  • in operands are added normally.
  • inout operands are tied to the matching output operand.
  • sym operands are passed as function pointers or pointers, using the "s" constraint.
  • const operands are formatted to a string and directly inserted in the template string.

The template string is converted to LLVM form:

  • $ characters are escaped as $$.
  • const operands are converted to strings and inserted directly.
  • Placeholders are formatted as ${X:M} where X is the operand index and M is the modifier character. Modifiers are converted from the Rust form to the LLVM form.
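
As a rough, hand-written illustration of the two lists above (not actual rustc or LLVM output), a simple invocation might end up in roughly this shape on x86_64:

// Rust side:
//     asm!("mov {0}, {1}", out(reg) z, in(reg) y);
//
// Approximate LLVM-side form after lowering (details vary):
//     template:    "mov $0, $1"   // `{N}` placeholders become `$N` / `${N:M}`;
//                                 // a literal `$` would be escaped as `$$`
//     constraints: "=&r,r"        // non-late output `=&r` first, then input `r`
//                                 // (a lateout would be `=r`; an inout input is
//                                 //  tied to its matching output by number)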

The various options are converted to clobber constraints or LLVM attributes, refer to the RFC for more details.

Note that LLVM is sometimes rather picky about what types it accepts for certain constraint codes so we sometimes need to insert conversions to/from a supported type. See the target-specific ISelLowering.cpp files in LLVM for details of what types are supported for each register class.

Adding support for new architectures

Adding inline assembly support to an architecture is mostly a matter of defining the registers and register classes for that architecture. All the definitions for register classes are located in compiler/rustc_target/asm/.

Additionally you will need to implement lowering of these register classes to LLVM constraint codes in compiler/rustc_codegen_llvm/asm.rs.

When adding a new architecture, make sure to cross-reference with the LLVM source code:

  • LLVM has restrictions on which types can be used with a particular constraint code. Refer to the getRegForInlineAsmConstraint function in lib/Target/${ARCH}/${ARCH}ISelLowering.cpp.