Bazel Testing Tips

One of Bazel’s key features is that tests are treated the same as other build actions. Bazel provides a uniform command-line interface for running tests no matter the underlying language or test framework. While there’s much to be said about writing test rules and frameworks that mesh well with Bazel, this post focuses on the developer experience of running tests with bazel test. Running tests is a core software engineering workflow, so it’s not surprising that Bazel has many useful features for iterating locally on a test.

Quickly viewing test logs in the terminal

Bazel test logs always start with a shell line:

exec ${PAGER:-/usr/bin/less} "$0" || exit 1

This means test logs are executable programs! While not universally beloved, this trick makes it easy to display a test log in a pager: simply paste the log’s file name into a terminal window and hit enter; there’s no need to prepend less to the file name.
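To see the trick in miniature, here’s a sketch that builds a fake test log in the same style and runs it with PAGER set to cat (the log contents are made up; real Bazel logs live under bazel-testlogs/):

```shell
# Create a fake "test log" whose first line execs a pager on the file itself.
log=$(mktemp)
cat > "$log" <<'EOF'
exec ${PAGER:-/usr/bin/less} "$0" || exit 1
-- Test output --
PASS
EOF
chmod +x "$log"

# Executing the log displays it; with PAGER=cat it goes to stdout instead of a pager.
PAGER=cat "$log"
```

Without PAGER set, executing the same file would open it in less, which is exactly what happens when you paste a Bazel test log path into your terminal.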

Controlling test output to the terminal

By default, Bazel only prints the file name for logs of failing tests to the terminal. Even considering the pager trick described in the previous section, it’s often more convenient to see the test output directly in the terminal.

The --test_output flag controls how Bazel displays the test log:


--test_output=summary displays PASS or FAIL for each test and no log output in the terminal. This is the default.

--test_output=errors displays the logs for all failed tests.

--test_output=all displays all test logs in the terminal.

--test_output=streamed displays the test log in the terminal as the test runs. Be aware that streaming prevents tests from running concurrently with any other action.
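For example, to see the full logs of any failing tests directly in the terminal (the target pattern here is illustrative):

```shell
# Show failing tests' logs inline instead of just their log file names.
bazel test --test_output=errors //app/...
```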

The --test_summary flag controls an additional dimension of test-related output on the terminal:


--test_summary=short prints the PASS or FAIL status for each test target requested by the invocation. This is the default.

--test_summary=terse is like --test_summary=short but excludes successful tests.

--test_summary=none removes all test status output completely.

--test_summary=detailed prints the status of individual test cases within test targets in addition to PASS or FAIL for the overall targets.

--test_summary=testcase displays the aggregate count of passed and failed test cases within a test target.

(Both --test_summary=detailed and --test_summary=testcase require the test to output a valid JUnit XML file to work.)
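Putting the two flags together, a hypothetical invocation that shows failing logs plus per-test-case detail might look like this (the target pattern is made up, and the tests must emit JUnit XML as noted above):

```shell
# Inline logs for failures, plus the status of each individual test case.
bazel test --test_output=errors --test_summary=detailed //lib/...
```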

Filtering tests

Like other Bazel targets, Bazel tests may have a tags attribute filled with free-form strings. The --test_tag_filters flag includes or excludes test targets based on their tags. For example, --test_tag_filters=dog,cat,-hairless would select test targets with the dog or cat tags but exclude the ones tagged hairless.

Every test target has a size attribute, which roughly indicates the resource consumption and length of the test. The --test_size_filters flag filters based on the test size. For example, --test_size_filters=small could be used to quickly smoke-test a change.

Frequently, one wants to run just a single test case within a test target during development. Bazel’s --test_filter flag accomplishes that goal. The value of --test_filter is interpreted by the test itself; there’s no common syntax across languages and test frameworks. However, one can generally expect the value of --test_filter to be matched against test case names, perhaps as a regular expression. For example, --test_filter=testFrob.* is likely to match test functions called testFrobicate and testFrobber.
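The filtering flags compose; a sketch with made-up target patterns, tags, and test names:

```shell
# Run only small tests tagged dog or cat, excluding those tagged hairless.
bazel test --test_size_filters=small --test_tag_filters=dog,cat,-hairless //pets/...

# Run only the test cases whose names start with testFrob in one target.
bazel test --test_filter='testFrob.*' //pets:grooming_test
```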

Passing arguments and environment to tests

Bazel’s --test_arg and --test_env flags allow passing extra command line arguments and environment variables, respectively, to tests. This is often useful to enable additional logging or other debugging features in the test.
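For instance, assuming a test binary that accepts a -v flag and reads a LOG_LEVEL variable (both hypothetical):

```shell
# Pass an extra argument and an environment variable through to the test.
bazel test --test_arg=-v --test_env=LOG_LEVEL=debug //app:app_test
```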

Controlling when test failures stop the build

An interesting difference between test actions and other build actions is the effect their failure has on the Bazel invocation. By default, Bazel will abort an invocation if a build action fails but not if a test action fails. Passing --test_keep_going=false makes Bazel stop when a test action fails.

Within a test, the --test_runner_fail_fast flag asks the test to stop running as soon as the first test case fails.
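A sketch of both flags together, on an illustrative target pattern:

```shell
# Abort the whole invocation on the first failing test target,
# and ask each test runner to stop at its first failing test case.
bazel test --test_keep_going=false --test_runner_fail_fast //app/...
```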

Controlling caching of test results

Like any other action, Bazel declines to rerun test actions for which it has an up-to-date successful result, either locally or from a remote cache. When Bazel uses a cached test result, it prints (cached) by the test target name. The --cache_test_results=false flag ensures tests will always run anew.
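For example, to force a fresh run of a previously passing test (the target name is made up):

```shell
# Ignore any cached result and rerun the test.
bazel test --cache_test_results=false //app:flaky_test
```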

Running tests multiple times

Passing --runs_per_test=N instructs Bazel to run all tests in the invocation N times. This is mainly useful for debugging flaky tests, sometimes in combination with --test_keep_going=false. If N is greater than 1, test caching is automatically disabled, as if --cache_test_results=false had also been passed.

The --runs_per_test flag also has an advanced regular expression syntax that allows configuring the number of times to run by test label. For example, --runs_per_test=//app/.*@3 --runs_per_test=//lib/.*@4 runs any tests in the app/ tree 3 times and any in the lib/ tree 4 times.
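Both forms of the flag, with made-up target names and patterns:

```shell
# Run a suspected-flaky test 100 times, stopping at the first failure.
bazel test --runs_per_test=100 --test_keep_going=false //app:flaky_test

# Per-label run counts via the regular expression syntax described above.
bazel test --runs_per_test=//app/.*@3 --runs_per_test=//lib/.*@4 //...
```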

Changing test timeouts

The --test_timeout flag is found more often in .bazelrc files than the command line because it is used to configure site-specific timeouts for Bazel’s four test timeout categories. However, --test_timeout can be useful locally in some circumstances. For instance, if a test is expected to finish in a few seconds but is hanging, --test_timeout=5 can be used while debugging it in order to shorten the iteration time.
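A sketch of both uses; the .bazelrc values and target name are hypothetical:

```shell
# In .bazelrc: site-wide timeouts, in seconds, for the four categories
# (short, moderate, long, eternal):
#   test --test_timeout=120,600,1800,7200

# One-off local use: fail a hanging test after 5 seconds while debugging it.
bazel test --test_timeout=5 //server:hang_test
```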

Building tests only

Bazel’s test command should be thought of as a superset of the build command. It’s in fact permissible to run bazel test on a non-test target; Bazel will simply build it. This often comes up when running bazel test with wildcard target patterns like ... and :*, which pick up all matching targets, test and non-test alike, for building. To instruct Bazel to build only test targets, pass the --build_tests_only flag.
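For example, with an illustrative package tree:

```shell
# Build and run only the test targets under //app, skipping other
# buildable targets matched by the wildcard pattern.
bazel test --build_tests_only //app/...
```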

Further resources

Here is some further reference material about Bazel tests and Bazel flags: