- Running a subset of the test suites
- Run unit tests on the compiler/library
- Running an individual test
- Editing and updating the reference files
- Configuring test running
- Passing --pass $mode
- Using incremental compilation
- Running tests with different "compare modes"
- Running tests manually
- Running tests on a remote machine
- Testing on emulators
You can run the tests using `x.py`. The most basic command – which you will almost never want to use! – is as follows:

```bash
./x.py test
```
This will build the stage 1 compiler and then run the whole test suite. You probably don't want to do this very often, because it takes a very long time, and anyway bors / GitHub Actions will do it for you. (Often, I will run this command in the background after opening a PR that I think is done, but rarely otherwise. -nmatsakis)
The test results are cached and previously successful tests are ignored during testing. The stdout/stderr contents as well as a timestamp file for every test can be found in the build directory. To force-rerun a test (e.g. in case the test runner fails to notice a change) you can simply remove the timestamp file, or use the `--force-rerun` flag.
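For example, a sketch of force-rerunning a single test (the path is illustrative):

```bash
# Re-run this test even if its cached result is still considered fresh.
./x.py test src/test/ui/const-generics/const-test.rs --force-rerun
```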
Note that some tests require a Python-enabled gdb. You can test if your gdb install supports Python by using the `python` command from within gdb. Once invoked you can type some Python code (e.g. `print("hi")`) followed by return and then `CTRL+D` to execute it.
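For instance, the one-line form of this check might look as follows (output shown assuming a Python-enabled gdb):

```console
$ gdb
(gdb) python print("hi")
hi
```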
If you are building gdb from source, you will need to configure with `--with-python=<path-to-python-binary>`.
## Running a subset of the test suites

When working on a specific PR, you will usually want to run a smaller set of tests. For example, a good "smoke test" that can be used after modifying rustc to see if things are generally working correctly would be the following:
```bash
./x.py test src/test/ui
```
This will run the `ui` test suite. Of course, the choice of test suites is somewhat arbitrary, and may not suit the task you are doing. For example, if you are hacking on debuginfo, you may be better off with the debuginfo test suite:
```bash
./x.py test src/test/debuginfo
```
If you only need to test a specific subdirectory of tests for any given test suite, you can pass that directory to `./x.py test`:
```bash
./x.py test src/test/ui/const-generics
```
Likewise, you can test a single file by passing its path:
```bash
./x.py test src/test/ui/const-generics/const-test.rs
```
### Run only the tidy script

```bash
./x.py test tidy
```
### Run tests on the standard library

```bash
./x.py test --stage 0 library/std
```
Note that this only runs tests on `std`; if you want to test `core` or other crates, you have to specify those explicitly.
### Run the tidy script and tests on the standard library

```bash
./x.py test --stage 0 tidy library/std
```
### Run tests on the standard library using a stage 1 compiler

```bash
./x.py test --stage 1 library/std
```
By listing which test suites you want to run you avoid having to run tests for components you did not change at all.
**Warning:** Note that bors only runs the tests with the full stage 2 build; therefore, while the tests **usually** work fine with stage 1, there are some limitations.
### Run all tests using a stage 2 compiler

```bash
./x.py test --stage 2
```
You almost never need to do this; CI will run these tests for you.
## Run unit tests on the compiler/library

You may want to run unit tests on a specific file with the following:

```bash
./x.py test compiler/rustc_data_structures/src/thin_vec/tests.rs
```
Unfortunately, this does not work; you should invoke the following instead:

```bash
./x.py test compiler/rustc_data_structures/ --test-args thin_vec
```
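Since `--test-args` is just a name filter, you can also narrow it with a longer substring; a sketch (the module path below is illustrative):

```bash
# Run only the unit tests whose full name contains "thin_vec::tests",
# e.g. thin_vec::tests::some_test (an invented name for illustration).
./x.py test compiler/rustc_data_structures/ --test-args thin_vec::tests
```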
## Running an individual test

Another common thing that people want to do is to run an individual test, often the test they are trying to fix. As mentioned earlier, you may pass the full file path to achieve this, or alternatively you may invoke `x.py` with the `--test-args` option:

```bash
./x.py test src/test/ui --test-args issue-1234
```
Under the hood, the test runner invokes the standard Rust test runner (the same one you get with `#[test]`), so this command would wind up filtering for tests that include "issue-1234" in the name. (Thus `--test-args` is a good way to run a collection of related tests.)
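For example, because the filter is a plain substring match, one argument can select a whole family of tests (the filter below is illustrative):

```bash
# Runs every UI test whose name contains "borrowck", not just one file.
./x.py test src/test/ui --test-args borrowck
```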
## Editing and updating the reference files

If you have changed the compiler's output intentionally, or you are making a new test, you can pass `--bless` to the test subcommand. E.g. if some tests in `src/test/ui` are failing, you can run

```bash
./x.py test src/test/ui --bless
```
to automatically adjust the `.stderr`, `.stdout`, or `.fixed` files of all tests. Of course you can also target just specific tests with the `--test-args your_test_name` flag, just like when running the tests.
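As a sketch, the two flags combine naturally (the name filter here is a placeholder):

```bash
# Update reference files only for tests whose name contains "issue-1234".
./x.py test src/test/ui --bless --test-args issue-1234
```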
## Configuring test running

There are a few options for running tests:

- `config.toml` has the `rust.verbose-tests` option. If `false`, each test will print a single dot (the default). If `true`, the name of every test will be printed. This is equivalent to the `--quiet` option in the Rust test harness.
- The environment variable `RUST_TEST_THREADS` can be set to the number of concurrent threads to use for testing.
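A sketch of both knobs together (the values are arbitrary):

```bash
# In config.toml, print every test name instead of dots:
#   [rust]
#   verbose-tests = true
#
# Cap the harness at four concurrent test threads for one run:
RUST_TEST_THREADS=4 ./x.py test src/test/ui
```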
## Passing --pass $mode

Pass UI tests now have three modes: `check-pass`, `build-pass`, and `run-pass`. When `--pass $mode` is passed, these tests will be forced to run under the given `$mode` unless the directive `// ignore-pass` exists in the test file. For example, you can run all the tests in `src/test/ui` with `check-pass`:

```bash
./x.py test src/test/ui --pass check
```
By passing `--pass $mode`, you can reduce the testing time. For each mode, please see Controlling pass/fail expectations.
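For example, a sketch that combines a pass mode with a narrower test path to iterate quickly (the subdirectory is illustrative):

```bash
# Force check-pass for just one subdirectory of the UI suite.
./x.py test src/test/ui/const-generics --pass check
```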
## Using incremental compilation

You can further enable the `--incremental` flag to save additional time in subsequent rebuilds:

```bash
./x.py test src/test/ui --incremental --test-args issue-1234
```
If you don't want to include the flag with every command, you can enable it in `config.toml`:

```toml
[rust]
incremental = true
```
Note that incremental compilation will use more disk space than usual. If disk space is a concern for you, you might want to check the size of the `build` directory from time to time.
## Running tests with different "compare modes"

UI tests may have different output depending on certain "modes" that the compiler is in. For example, when using the Polonius mode, a test `foo.rs` will first look for expected output in `foo.polonius.stderr`, falling back to the usual `foo.stderr` if not found.
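The lookup order can be pictured for a hypothetical test `foo.rs` (the paths are illustrative):

```bash
# Reference files consulted under --compare-mode=polonius:
ls src/test/ui/foo.polonius.stderr  # used first, if present
ls src/test/ui/foo.stderr           # fallback otherwise
```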
The following will run the UI test suite in Polonius mode:

```bash
./x.py test src/test/ui --compare-mode=polonius
```
See Compare modes for more details.
## Running tests manually

Sometimes it's easier and faster to just run the test by hand. Most tests are just `.rs` files, so after creating a rustup toolchain, you can do something like:

```bash
rustc +stage1 src/test/ui/issue-1234.rs
```
This is much faster, but doesn't always work. For example, some tests include directives that specify particular compiler flags, or that rely on other crates, and they may not run the same without those options.
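For instance, if a test header contains a `// compile-flags:` directive, those flags must be repeated by hand (the flag and path here are illustrative):

```bash
# Mirror a hypothetical `// compile-flags: --edition=2021` directive.
rustc +stage1 --edition=2021 src/test/ui/issue-1234.rs
```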
## Running tests on a remote machine

Tests may be run on a remote machine (e.g. to test builds for a different architecture). This is done using `remote-test-client` on the build machine to send test programs to `remote-test-server` running on the remote machine. `remote-test-server` executes the test programs and sends the results back to the build machine. `remote-test-server` provides *unauthenticated remote code execution*, so be careful where it is used.
To do this, first build `remote-test-server` for the remote machine, e.g. for RISC-V:

```bash
./x.py build src/tools/remote-test-server --target riscv64gc-unknown-linux-gnu
```
The binary will be created in the tool build directory for that target. Copy this over to the remote machine.
On the remote machine, run the `remote-test-server` with the `--bind 0.0.0.0:12345` flag (and optionally `-v` for verbose output). Output should look like this:
```console
$ ./remote-test-server -v --bind 0.0.0.0:12345
starting test server
listening on 0.0.0.0:12345!
```
Note that binding the server to `0.0.0.0` will allow all hosts able to reach your machine to execute arbitrary code on your machine. We strongly recommend either setting up a firewall to block external access to port 12345, or to use a more restrictive IP address when binding.
You can test if the `remote-test-server` is working by connecting to it and sending `ping\n`. It should reply `pong`:

```console
$ nc $REMOTE_IP 12345
ping
pong
```
To run tests using the remote runner, set the `TEST_DEVICE_ADDR` environment variable, then use `x.py` as usual. For example, to run `ui` tests for a RISC-V machine with the IP address `220.127.116.11`:

```bash
export TEST_DEVICE_ADDR="220.127.116.11:12345"
./x.py test src/test/ui --target riscv64gc-unknown-linux-gnu
```
If `remote-test-server` was run with the verbose flag, output on the test machine may look something like:
```text
[...]
run "/tmp/work/test1007/a"
run "/tmp/work/test1008/a"
run "/tmp/work/test1009/a"
run "/tmp/work/test1010/a"
run "/tmp/work/test1011/a"
run "/tmp/work/test1012/a"
run "/tmp/work/test1013/a"
run "/tmp/work/test1014/a"
run "/tmp/work/test1015/a"
run "/tmp/work/test1016/a"
run "/tmp/work/test1017/a"
run "/tmp/work/test1018/a"
[...]
```
Tests are built on the machine running `x.py`, not on the remote machine. Tests which fail to build unexpectedly (or `ui` tests producing incorrect build output) may fail without ever running on the remote machine.
## Testing on emulators

Some platforms are tested via an emulator for architectures that aren't readily available. For architectures where the standard library is well supported and the host operating system supports TCP/IP networking, see the above instructions for testing on a remote machine (in this case the remote machine is emulated).
There is also a set of tools for orchestrating running the tests within the emulator. Platforms such as `arm-unknown-linux-gnueabihf` are set up to automatically run the tests under emulation on GitHub Actions. The following takes a look at how a target's tests are run under emulation.
The Docker image for armhf-gnu includes QEMU to emulate the ARM CPU architecture. Included in the Rust tree are the tools `remote-test-client` and `remote-test-server`, which are programs for sending test programs and libraries to the emulator, running the tests within the emulator, and reading the results. The Docker image is set up to launch `remote-test-server`, and the build tools use `remote-test-client` to communicate with the server to coordinate running tests (see `src/bootstrap/test.rs`).
TODO: Is there any support for using an iOS emulator?
It's also unclear to me how the wasm or asm.js tests are run.