This subchapter is about the bootstrapping process.
Bootstrapping is the process of using a compiler to compile itself. More accurately, it means using an older compiler to compile a newer version of the same compiler.
This raises a chicken-and-egg paradox: where did the first compiler come from? It must have been written in a different language. In Rust's case it was written in OCaml. However, that original compiler was abandoned long ago, and the only way to build a modern version of `rustc` is with a slightly less modern version.
This is exactly how `x.py` works: it downloads the current beta release of `rustc`, then uses it to compile the new compiler.
Compiling `rustc` is done in stages:
- Stage 0: the stage0 compiler is usually (you can configure `x.py` to use something else) the current beta `rustc` compiler and its associated dynamic libraries (which `x.py` will download for you). This stage0 compiler is then used only to compile `rustbuild`, `std`, and `rustc`. When compiling `rustc`, this stage0 compiler uses the freshly compiled `std`. There are two concepts at play here: a compiler (with its set of dependencies) and its 'target' or 'object' libraries (`std` and `rustc`). Both are staged, but in a staggered manner.
- Stage 1: the code in your clone (for the new version) is then compiled with the stage0 compiler to produce the stage1 compiler. However, it was built with an older compiler (stage0), so to optimize the stage1 compiler we go to the next stage.
- In theory, the stage1 compiler is functionally identical to the stage2 compiler, but in practice there are subtle differences. In particular, the stage1 compiler itself was built by stage0 and hence not by the source in your working directory: this means that the symbol names used in the compiler source may not match the symbol names that would have been made by the stage1 compiler. This is important when using dynamic linking and the lack of ABI compatibility between versions. This primarily manifests when tests try to link with any of the `rustc_*` crates or use the (now deprecated) plugin infrastructure. These tests are marked with `ignore-stage1`.
- Stage 2: we rebuild our stage1 compiler with itself to produce the stage2 compiler (i.e. it builds itself) to have all the latest optimizations. (By default, we copy the stage1 libraries for use by the stage2 compiler, since they ought to be identical.)
- (Optional) Stage 3: to sanity check our new compiler, we can build the libraries with the stage2 compiler. The result ought to be identical to before, unless something has broken.
The stage2 compiler is the one distributed with `rustup` and all other install methods. However, it takes a very long time to build because one must first build the new compiler with an older compiler and then use that to build the new compiler with itself. For development, you usually only need `x.py build library/std`.
`x.py` tries to be helpful and pick the stage you most likely meant for each subcommand; you can always override the stage by passing `--stage N` explicitly. For more information about stages, see below.
Since the build system uses the current beta compiler to build the stage-1 bootstrapping compiler, the compiler source code can't use some features until they reach beta (because otherwise the beta compiler doesn't support them). On the other hand, for compiler intrinsics and internal features, the features *have* to be used. Additionally, the compiler makes heavy use of nightly features (`#![feature(...)]`). How can we resolve this problem?
There are two methods used:
- The build system sets `--cfg bootstrap` when building with `stage0`, so we can use `cfg(not(bootstrap))` to only use features when built with `stage1`. This is useful e.g. for features that were just stabilized, which require `#![feature(...)]` when built with `stage0`, but not with `stage1`.
- The build system sets `RUSTC_BOOTSTRAP=1`. This special variable means to *break the stability guarantees* of Rust: allow using `#![feature(...)]` with a compiler that's not nightly. This should never be used except when bootstrapping the compiler.
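The two mechanisms above can be sketched as a small helper. This is illustrative only: `rustc_invocation` is a hypothetical function, not actual `x.py` or bootstrap code.

```python
# Sketch of the two bootstrap mechanisms described above (hypothetical helper).

def rustc_invocation(stage: int, base_args: list[str]) -> tuple[list[str], dict[str, str]]:
    """Return (rustc arguments, extra environment) for a given build stage."""
    args = list(base_args)
    if stage == 0:
        # Mechanism 1: builds done by the beta compiler get --cfg bootstrap,
        # so sources can gate just-stabilized features behind cfg(not(bootstrap)).
        args += ["--cfg", "bootstrap"]
    # Mechanism 2: RUSTC_BOOTSTRAP=1 breaks the stability guarantee, letting
    # a non-nightly (e.g. beta) compiler accept #![feature(...)].
    env = {"RUSTC_BOOTSTRAP": "1"}
    return args, env

args, env = rustc_invocation(0, ["--edition", "2021"])
print(args)  # ['--edition', '2021', '--cfg', 'bootstrap']
print(env)   # {'RUSTC_BOOTSTRAP': '1'}
```

Note that `RUSTC_BOOTSTRAP=1` is set regardless of stage, while `--cfg bootstrap` only applies to stage0 builds.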
When you use the bootstrap system, you'll call it through `x.py`. However, most of the code lives in `src/bootstrap`. `bootstrap` has a difficult problem: it is written in Rust, but yet it is run before the Rust compiler is built! To work around this, there are two components of bootstrap: the main one written in Rust, and `bootstrap.py`. `bootstrap.py` is what gets run by `x.py`. It takes care of downloading the stage0 compiler, which will then build the bootstrap binary written in Rust.
Because there are two separate codebases behind `x.py`, they need to be kept in sync. In particular, both `bootstrap.py` and the bootstrap binary parse `config.toml` and read the same command line arguments. `bootstrap.py` keeps these in sync by setting various environment variables, and the programs sometimes have to add arguments that are explicitly ignored, to be read by the other.
This section is a work in progress. In the meantime, you can see an example contribution here.
This is a detailed look into the separate bootstrap stages.
The convention `x.py` uses is that:
- The `--stage N` flag means to run the stage N compiler (`stageN/rustc`).
- A "stage N artifact" is a build artifact that is produced by the stage N compiler.
- The "stage (N+1) compiler" is assembled from "stage N artifacts". This process is called *uplifting*.
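These conventions can be made concrete with a toy model of the staging loop. This is purely illustrative: the function and strings below are invented for the sketch, not how `x.py` is actually implemented.

```python
# Toy model of staging and uplifting (illustrative only, not real x.py code).

def bootstrap_log(num_stages: int = 3) -> list[str]:
    """Return a log of which compiler builds which artifacts."""
    log = []
    compiler = "stage0 compiler (downloaded beta)"
    for n in range(num_stages):
        # The stage N compiler produces "stage N artifacts"...
        log.append(f"{compiler} builds stage{n} artifacts")
        # ...which are uplifted (assembled) into the stage N+1 compiler.
        compiler = f"stage{n + 1} compiler (uplifted from stage{n} artifacts)"
    return log

for line in bootstrap_log():
    print(line)
```

The key point the model captures is the staggered numbering: the stage N *compiler* is assembled from stage N-1 *artifacts*.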
Anything you can build with `x.py` is a *build artifact*. Build artifacts include, but are not limited to:
- binaries
- shared objects
- `rlib` files
- HTML files generated by rustdoc
There is a separate step between building the compiler and making it possible to run. This step is called *assembling* or *uplifting* the compiler. It copies all the necessary build artifacts into `build/stage(N+1)`, which allows you to use `build/stage(N+1)` as a toolchain with `rustup toolchain link`. There is no way to trigger this step on its own, but `x.py` will perform it automatically any time you build with stage N+1.
- `x.py build --stage 0` means to build with the beta `rustc`.
- `x.py doc --stage 0` means to document using the beta `rustdoc`.
- `x.py test --stage 0 library/std` means to run tests on the standard library without building `rustc` from source ('build with stage 0, then test the artifacts'). If you're working on the standard library, this is normally the test command you want.
- `x.py test src/test/ui` means to build the stage 1 compiler and run `compiletest` on it. If you're working on the compiler, this is normally the test command you want.
- `x.py test --stage 0 src/test/ui` is not meaningful: it runs tests on the beta compiler and doesn't build `rustc` from source. Use `test src/test/ui` instead, which builds stage 1 from source.
- `x.py test --stage 0 compiler/rustc` builds the compiler but runs no tests: it's running `cargo test -p rustc`, but cargo doesn't understand Rust's tests. You shouldn't need to use this; use `test` instead (without arguments).
- `x.py build --stage 0 compiler/rustc` builds the compiler, but does not assemble it. Use `x.py build library/std` instead, which puts the compiler in `build/stage1`.
`build --stage N compiler/rustc` does *not* build the stage N compiler: instead it builds the stage N+1 compiler *using* the stage N compiler. In short, stage 0 uses the stage0 compiler to create stage0 artifacts, which will later be uplifted to be the stage1 compiler.
In each stage, two major steps are performed:
1. `std` is compiled by the stage N compiler.
2. That `std` is linked to programs built by the stage N compiler, including the stage N artifacts (the stage (N+1) compiler).

This is somewhat intuitive if one thinks of the stage N artifacts as "just" another program we are building with the stage N compiler: `build --stage N compiler/rustc` is linking the stage N artifacts to the `std` built by the stage N compiler.
Here is a chart of a full build using `x.py` (diagram omitted). Keep in mind this diagram is a simplification: e.g. `rustdoc` can be built at different stages, the process is a bit different when passing flags such as `--keep-stage`, or if there are non-host targets.
The stage 2 compiler is what is shipped to end-users.
Note that there are two `std` libraries in play here:
1. The library *linked to* `stageN/rustc`, which was built by stage N-1 (the stage N-1 `std`).
2. The library *used to compile programs* with `stageN/rustc`, which was built by stage N (the stage N `std`).

The stage N `std` is pretty much necessary for any useful work with the stage N compiler. Without it, you can only compile programs with `#![no_core]` -- not terribly useful!
The reason these need to be different is because they aren't necessarily ABI-compatible: there could be new layout optimizations, changes to MIR, or other changes to Rust metadata on nightly that aren't present in beta.
This is also where `--keep-stage 1 library/std` comes into play. Since most changes to the compiler don't actually change the ABI, once you've produced a `std` in stage 1, you can probably just reuse it with a different compiler. If the ABI hasn't changed, you're good to go, no need to spend the time recompiling that `std`. `--keep-stage` simply assumes the previous compile is fine and copies those artifacts into the appropriate place, skipping the cargo invocation.
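As a sketch, the `--keep-stage` shortcut amounts to a branch like the following. The helper and its strings are hypothetical, invented for illustration; they are not actual bootstrap code.

```python
# Illustrative model of the --keep-stage shortcut for std (hypothetical helper).

def std_build_plan(keep_stage: bool) -> list[str]:
    if keep_stage:
        # Assume the previously built std is still ABI-compatible:
        # copy its artifacts into place and skip the cargo invocation entirely.
        return ["copy cached stage1 std artifacts into place"]
    return ["invoke cargo to recompile std",
            "copy fresh artifacts into place"]

print(std_build_plan(keep_stage=True))
print(std_build_plan(keep_stage=False))
```

The trade-off is speed versus safety: if the ABI *did* change, the cached `std` is silently wrong, which is why `--keep-stage` is opt-in.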
Note that the build process for `std` is different depending on whether you are cross-compiling or not (see in the table how stage2 only builds non-host `std` targets). This is because `x.py` uses a trick: if `HOST` and `TARGET` are the same, it will reuse the stage1 `std` for stage2! This is sound because the stage1 `std` was compiled with the stage1 compiler, i.e. a compiler using the source code you currently have checked out. So it should be identical (and therefore ABI-compatible) to the `std` that `stage2/rustc` would compile. However, when cross-compiling, the stage1 `std` will only run on the host, so the stage2 compiler has to recompile `std` for the target.
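The decision described above can be summarized in a few lines. The helper name and return strings are invented for illustration; the real logic lives in the bootstrap code.

```python
# Sketch of x.py's stage2 std reuse trick (hypothetical helper).

def stage2_std_plan(host: str, target: str) -> str:
    if host == target:
        # The stage1 std was built by the stage1 compiler, i.e. from the
        # sources currently checked out, so it is ABI-compatible with
        # stage2 rustc and can simply be reused.
        return "reuse stage1 std"
    # Cross-compiling: the stage1 std only runs on the host, so the
    # stage2 compiler must recompile std for the target.
    return f"build std for {target} with the stage2 compiler"

print(stage2_std_plan("x86_64-unknown-linux-gnu", "x86_64-unknown-linux-gnu"))
# reuse stage1 std
print(stage2_std_plan("x86_64-unknown-linux-gnu", "aarch64-apple-darwin"))
# build std for aarch64-apple-darwin with the stage2 compiler
```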
`rustc` generated by the stage0 compiler is linked to the freshly-built `std`, which means that for the most part only `std` needs to be cfg-gated, so that `rustc` can use features added to `std` immediately after their addition, without needing to wait for them to reach the downloaded beta. Note this is different from any other Rust program: stage1 `rustc` is built by the *beta* compiler, but using the *master* version of `libstd`! The only time `rustc` uses `cfg(bootstrap)` is when it adds internal lints that use diagnostic items. This happens very rarely.
When you build a project with cargo, the build artifacts for dependencies are normally stored in `target/debug/deps`. This only contains dependencies cargo knows about; in particular, it doesn't have the standard library. Where do `std` and `proc_macro` come from? They come from the *sysroot*, the root of a number of directories where the compiler loads build artifacts at runtime. The sysroot doesn't just store the standard library, though - it includes anything that needs to be loaded at runtime. That includes (but is not limited to):
- The compiler crates themselves, when using `rustc_private`. In-tree these are always present; out of tree, you need to install `rustc-dev` with `rustup`.
- `libLLVM.so`, the shared object file for the LLVM project. In-tree this is either built from source or downloaded from CI; out-of-tree, you need to install `llvm-tools-preview` with `rustup`.
All the artifacts listed so far are compiler *runtime* dependencies. You can see them with `rustc --print sysroot`:

```
$ ls $(rustc --print sysroot)/lib
libchalk_derive-0685d79833dc9b2b.so  libstd-25c6acf8063a3802.so
libLLVM-11-rust-1.50.0-nightly.so    libtest-57470d2aa8f7aa83.so
librustc_driver-4f0cc9f50e53f0ba.so  libtracing_attributes-e4be92c35ab2a33b.so
librustc_macros-5f0ec4a119c6ac86.so  rustlib
```
There are also runtime dependencies for the standard library! These are in `lib/rustlib/`:

```
$ ls $(rustc --print sysroot)/lib/rustlib/x86_64-unknown-linux-gnu/lib | head -n 5
libaddr2line-6c8e02b8fedc1e5f.rlib
libadler-9ef2480568df55af.rlib
liballoc-9c4002b5f79ba0e1.rlib
libcfg_if-512eb53291f6de7e.rlib
libcompiler_builtins-ef2408da76957905.rlib
```
`rustlib` includes libraries like `hashbrown` and `cfg_if`, which are not part of the public API of the standard library, but are used to implement it. `rustlib` is part of the search path for linkers, but `lib` will never be part of the search path.
Since `rustlib` is part of the search path, we have to be careful about which crates are included in it. In particular, all crates except for the standard library are built with the flag `-Z force-unstable-if-unmarked`, which means that you have to use `#![feature(rustc_private)]` in order to load them (as opposed to the standard library, which is always available).
You can find more discussion about sysroots in:
- The rustdoc PR explaining why it uses `extern crate` for dependencies loaded from the sysroot
- Discussions about sysroot on Zulip
- Discussions about building rustdoc out of tree
The following tables indicate the outputs of various stage actions:

| Stage 0 Action | Output |
|---|---|

`--stage=0` stops here.

| Stage 1 Action | Output |
|---|---|
| copy (uplift) | |
| copy (uplift) | |
| copy (uplift) | |

`--stage=1` stops here.

| Stage 2 Action | Output |
|---|---|
| copy (uplift) | |
| copy (uplift) | |

`--stage=2` stops here.
`x.py` allows you to pass stage-specific flags to `rustc` when bootstrapping. The `RUSTFLAGS_BOOTSTRAP` environment variable is passed as `RUSTFLAGS` to the bootstrap stage (stage0), and `RUSTFLAGS_NOT_BOOTSTRAP` is passed when building artifacts for later stages.
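A minimal sketch of how these two variables are selected, assuming a hypothetical helper (not actual bootstrap code):

```python
# Sketch: pick stage-specific RUSTFLAGS during bootstrap (hypothetical helper).

def rustflags_for_stage(stage: int, env: dict[str, str]) -> str:
    if stage == 0:
        # The bootstrap (stage0) build gets RUSTFLAGS_BOOTSTRAP...
        return env.get("RUSTFLAGS_BOOTSTRAP", "")
    # ...while builds for later stages get RUSTFLAGS_NOT_BOOTSTRAP.
    return env.get("RUSTFLAGS_NOT_BOOTSTRAP", "")

example_env = {"RUSTFLAGS_BOOTSTRAP": "-Copt-level=1",
               "RUSTFLAGS_NOT_BOOTSTRAP": "-Cdebuginfo=2"}
print(rustflags_for_stage(0, example_env))  # -Copt-level=1
print(rustflags_for_stage(1, example_env))  # -Cdebuginfo=2
```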
During bootstrapping, there are a bunch of compiler-internal environment variables that are used. If you are trying to run an intermediate version of `rustc`, sometimes you may need to set some of these environment variables manually. Otherwise, you get an error like the following:

```
thread 'main' panicked at 'RUSTC_STAGE was not set: NotPresent', library/core/src/result.rs:1165:5
```

If `./stageN/bin/rustc` gives an error about environment variables, that usually means something is quite wrong -- or you're trying to compile e.g. `std` or something that depends on environment variables. In the unlikely case that you actually need to invoke `rustc` in such a situation, you can find the environment variable values by adding the following flag to your `x.py` invocation: `--on-fail=env`.