Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 3
After the massive, yet warranted digression regarding updating legacy WORKSPACE macros that use Label, we now resume our regularly scheduled programming.
We've covered techniques for ensuring that your project remains compatible with
different Bazel versions, both Bzlmod and legacy WORKSPACE
builds, and older
dependency versions. However, we shouldn't make any promises until we've
validated that these properties actually hold, preferably via automated testing
and continuous integration.
This third post in our four part trilogy covers writing Bazel tests that allow for flexibly switching between various Bazel configurations. We'll consider advice on how to run the tests locally while developing and how to run them in continuous integration.
This article is part of the series "Migrating to Bazel Modules (a.k.a. Bzlmod)":
- Migrating to Bazel Modules (a.k.a. Bzlmod) - The Easy Parts
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names and Runfiles
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names and rules_pkg
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names, Macros, and Variables
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Module Extensions
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Fixing and Patching Breakages
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names, Again…
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Toolchainization
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 1
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 2
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 3
Prerequisites
As always, please acquaint yourself with the following concepts pertaining to external repositories if you've yet to do so:
Shameless BazelCon 2025 Bzlmod Migration Bootcamp plug
If you're registered for the BazelCon 2025 Bzlmod Migration Bootcamp, I'd love to hear your biggest questions and concerns in advance! Please respond to the Bzlmod Migration Bootcamp thread in the #bazelcon channel of the Bazel Slack Workspace if so inspired.
Verifying compatibility with Bazel versions and legacy WORKSPACE builds
We discussed the value of maintaining compatibility with as broad a range of dependency versions as possible in Maintaining Compatibility, Part 1. To recap:
- Supporting older dependency versions makes it easier for users to upgrade your project by itself, without having to upgrade other dependencies.
- Supporting newer dependency versions enables users to upgrade other dependencies whenever they want, without being held back by your project's compatibility constraints.
- Supporting legacy `WORKSPACE` builds enables users to make progress migrating to Bzlmod without having to switch to using Bzlmod immediately.
That previous post mentioned how to structure `MODULE.bazel` files and legacy `WORKSPACE` files and macros to support a range of dependency versions.
Confidently supporting a range of dependency versions, Bazel versions, and both Bzlmod and legacy `WORKSPACE` builds requires verification via automated testing.
This post covers how to write and run tests that support multiple Bazel versions and both build modes, while using the latest dependency versions. The next post will cover writing and running a test to validate compatibility with older dependency versions (including, but not limited to, older Bazel versions).
unittest.bzl vs. Bash scripts
There are (at least) two common choices for implementing Bazel repository or module tests:
- bazel_skylib's unittest.bzl contains the `analysistest` and `unittest` libraries for fine grained testing of rules and utility functions, respectively. `rules_scala` has a number of these tests.
- However, the vast majority of `rules_scala` tests are Bash scripts that invoke Bazel with different arguments, kicked off by `test_all.sh`.
These Bash based tests cover an exhaustive set of behaviors, and can manipulate execution parameters that `unittest.bzl` tests, to the best of my knowledge, cannot. For example:
- Executing test cases on different Scala versions using `--action_env=SCALA_VERSION=...`
- Using different toolchains to build the same target and validate the result
- Validating module extension helper behaviors (because `module_ctx` seems impractical to stub out in Starlark, if it's even truly possible)
- Validating warning and error messages, and other expected log outputs (e.g., Buildifier recommendations), which is just as important as validating successful outcomes!
And, as we'll see in the next blog post:
- Configuring a standard smoke test suite with different versions of Bazel and other key dependencies like `rules_java` and `protobuf`
From the standpoint of Bzlmod and legacy `WORKSPACE` compatibility, these tests provide the benefit of usually breaking very obviously and very quickly. When a test case executes, if anything is amiss in the configuration, Bazel will usually fail at the beginning of the loading phase. As John Cater explained in a Bazel Slack thread on 2025-04-16:
> technically, all of module/workspace handling of external repositories is the earliest part of the loading phase (as it, like macros, creates new targets for the rest of the build to use)
> it's unlike the rest of the loading phase because it actually executes code and has side effects, of course
> this is because the three phase load/analyze/execute system was designed before external repos existed
bazel-contrib/rules_bazel_integration_test is another possibility.
I happened upon bazel-contrib/rules_bazel_integration_test while
writing this post. It has some nice features that may be worth investigating
to see if it suits your testing needs. However, it's unclear whether it
handles testing failure modes and expected log output, as many rules_scala
tests cover.
A confession from a long time automated testing advocate
I have an exceptionally deep history of promoting the concept of small, medium, and large tests (a.k.a. the Test Pyramid) and applying the concept thoughtfully. I have far more experience writing smaller tests than larger ones, though I consider both equally important due to their different, yet complementary properties. Part of me would prefer a proliferation of smaller `unittest.bzl` tests instead of larger, effectively end to end Bash tests.
However, in this case, even though the tests are larger and slower, I like the Bash suite from `rules_scala`. Tests using `unittest.bzl` utilities require a lot of boilerplate, and it's unclear whether they could validate all the same behaviors as the Bash tests. The Bash tests are actually pretty stable and easy to work with (given a handful of custom test helpers), and they run fast enough.
I wish they ran faster, but most individual test cases and test files run fairly
quickly, and I still run the entire suite often. Best of all, they've caught
countless problems that forced me to better understand what I was doing and to
seek better solutions. I've even added more Bash tests to this suite, and
improved existing tests and helpers in a few ways.
Alternatives to Bash exist, but the principles remain the same.
Personally, I'm very comfortable with Bash, and it's available on practically all development and continuous integration platforms. However, my colleague Jay Conrod noted some potential difficulties:
- Installing Bash on Windows can be painful.
- You can't always count on "standard" utilities being installed or supporting all the same operations, especially across operating systems.
- Bash is known to behave differently across versions, and across operating systems even when using the same version.
Granted, you can mitigate these pain points if you can control the development and continuous integration operating system images. You may even choose to invest in developing your own testing framework. `rules_scala` actually defines its own miniature Bash testing framework via its `test_runner.sh` and `test_helper.sh` library functions.
If you still find Bash too troublesome, or just prefer not to use it, Python or another language may work just as well. `rules_go` developed its own go_bazel_test rule and bazel_testing library, using Go as the primary language for writing Bazel tests.
Even so, the principles of Bazel testing illustrated using Bash in this post and the next apply no matter the language. The singer may change, but the song remains the same.
rules_scala tests
Feel free to review the `rules_scala` tests for yourself while following along with this post:
- The main entry point for the entire suite is test_all.sh.
- Most of the test scripts reside in the test/shell directory.
- .bazelci/presubmit.yml contains the continuous integration configuration, based on the bazelbuild/continuous-integration framework.
- The rules_scala continuous integration dashboard, linked from the bazel-contrib/rules_scala README, shows recent daily and pull request branch builds.
Now we'll review many of the elements that these tests or their test repos/modules have in common.
Check out what `rules_go` did instead.
You may want to consider the rules_go Bazel testing framework as a working example of an alternative Bazel test suite implementation.
.bazelrc flags
Setting flags in .bazelrc eliminates a lot of boilerplate from test scripts. This way, flags set specifically for a test stand out from the defaults shared by all tests. Setting flags as `common` options applies them across `bazel build`, `bazel test`, `bazel run`, and `bazel query`, ensuring consistency between these commands. This can also impact build performance during development, as different flags for, say, `build` and `query` can make switching between the two commands painfully slow.
As we'll see, using `.bazelrc` can make running tests inside nested modules more consistent as well. More on that below.
Here are the essential flags for switching between Bzlmod and legacy `WORKSPACE` build modes. For an example, see the top level rules_scala .bazelrc file.

| Mode | `.bazelrc` Flags |
|---|---|
| Bzlmod | `common --noenable_workspace --incompatible_use_plus_in_repo_names` |
| legacy `WORKSPACE` | `common --enable_workspace --noenable_bzlmod` |
`--incompatible_use_plus_in_repo_names` helps Bazel 7 Windows performance.
Bazel 7 uses `~` as a delimiter in canonical repository names, which produces an obscure and bizarre performance issue on Windows. For this reason, Bazel 8 uses `+` as its canonical repo name delimiter, and setting --incompatible_use_plus_in_repo_names vastly improves Bazel 7 performance on Windows.
Setting this flag isn't strictly necessary if you only use Bazel 8, or never build or test on Windows. But if you build with Bazel 7 at all, it effectively helps guarantee compatibility with a future Bazel 8 upgrade.
Switching between Bzlmod and legacy WORKSPACE modes
There are several options for switching between Bzlmod and legacy `WORKSPACE` flags without editing the `.bazelrc` file each time:
- You can label the flag configurations as `common:bzlmod` and `common:legacy` and use the --config flag to switch between them.
- You can have separate `.bazelrc` files for Bzlmod and legacy `WORKSPACE` build modes, and select one or the other using the --bazelrc flag.
- You can generate a `.bazelrc` for each test.
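As a sketch of the first of these options, a `.bazelrc` might group the flags from the earlier table under config labels like this (the label names are illustrative):

```
common:bzlmod --noenable_workspace
common:bzlmod --incompatible_use_plus_in_repo_names
common:legacy --enable_workspace
common:legacy --noenable_bzlmod
```

Running `bazel test --config=bzlmod //...` or `bazel test --config=legacy //...` would then select the corresponding mode.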
In practice, I've done none of these things (except for generating a `.bazelrc` for the dependency version test, as we'll see in the next post). I've merely kept both sets of flags in the `.bazelrc` file, and manually commented and uncommented them between test runs. I've found this to be more straightforward and less of a maintenance burden, since:
- When iterating on a problem, I'm usually running specific tests multiple times in the same mode.
- After I've solved the problem, I'll switch modes to ensure the specific tests still pass. If they don't, I'll go back to iterating until they do.
- After the specific tests pass under both Bzlmod and legacy `WORKSPACE` builds, I'll run the entire test suite in one mode, then the other. If anything breaks, I'm back to iterating using specific tests again.
Basically, automating switching between modes seems like more trouble than it's worth, because that's not where most of my time goes. Instead, investing in updating the utilities from `test/shell/test_runner.sh` and `test/shell/test_helper.sh` has saved me lots of time. This includes (but isn't limited to) adding support for the `RULES_SCALA_TEST_ONLY`, `RULES_SCALA_TEST_VERBOSE`, and `RULES_SCALA_TEST_REGEX` environment variables. Making it easier to run specific test functions multiple times, and to inspect their Bazel output, has yielded far more value.
Selecting Bazel versions with Bazelisk and .bazelversion
Bazelisk is a must-have Bazel wrapper that makes building with specific versions of Bazel very easy. Generally your repository will have a `.bazelversion` file in its root directory, with a specific Bazel version number or another Bazelisk version specifier. Alternatively, you can export the `USE_BAZEL_VERSION` environment variable, which also overrides the version from `.bazelversion`.
As with the aforementioned `.bazelrc` settings, I usually update `.bazelversion` manually between runs, or set `USE_BAZEL_VERSION` on the command line. For the same reasons as with `.bazelrc`, automating the switching of Bazel versions hasn't seemed worth it.
Don't set the Bazel version in test scripts (except in one special case)
The underlying principle is that test scripts should not control the Bazel version. Instead, the user or the continuous integration environment should control the Bazel version, and the tests should pass for all supported Bazel versions.
The one exception is writing dependency compatibility tests, which is a special case that we'll cover in detail in the next post. This is the one kind of test suite where the test script must control the Bazel version.
This is because different Bazel versions have their own minimum supported versions of `rules_java`, `protobuf`, and other dependencies. Compatibility test cases make assertions on combinations of older Bazel versions and their minimum required dependency versions.
Choosing a Bazel version to keep in .bazelversion
The rules_scala v7.1.1 .bazelversion file specifies Bazel 7.6.1. This was the latest release of the Bazel 7 series at the time of the release of v7.1.1. We chose 7.6.1 because `rules_scala` v7 officially supports Bazel 7, and our default build benefits from improvements in the latest Bazel 7 release.
As we'll see in the next post, our version compatibility smoke test will test Bazel 7.1.0 and a couple of Bazel 8 versions. The `.bazelci/presubmit.yml` file also contains an optional job running the `last_green` Bazel version (a special, self explanatory Bazelisk version specifier). In this way, we ensure that `rules_scala` remains compatible with a range of Bazel builds from 7.1.0 to the latest build passing continuous integration.
Why is the last_green continuous integration job optional?
We want to keep the `last_green` build passing, but we don't want to block pull requests because something changed in an unreleased Bazel build. If the `last_green` job fails, we need to ensure the pull request didn't break it. However, more often than not, `last_green` breakages are due to changes in Bazel itself. We should fix such breakages, but in a separate pull request.
Sometimes, something changes in the `last_green` Bazel build that we can't fix on our end, at least not without help. For an example of both kinds of breakages in a single pull request, see bazel-contrib/rules_scala#1754: Fix builds for Bazel >= 9.0.0-pre.20250714.1.
The first problem, which the pull request fixes, was due to bazelbuild/bazel#26493 removing the `visibility` attribute from repository rules. The second problem happened when bazelbuild/bazel#26477 removed dependencies from the builtin `@bazel_tools//src/main/protobuf` package, breaking targets depending on it.
After I commented on bazelbuild/bazel#26579 about `rules_scala`, Xúdōng Yàng suggested using bazel-worker-api or bazel_worker_java to fix it. I did so in bazel-contrib/rules_scala#1756, which indeed fixed the `last_green` build.
(And then I changed it to depend on bazel_worker_api directly, to resolve a rules_jvm_external problem introduced by bazel_worker_java.)
Consider leaving the door cracked for obsolete Bazel versions (for now)
At some point you have to draw the line and drop support for dependency versions so old that maintaining compatibility with them inhibits progress. For that reason, `rules_scala` v7.0.0 officially dropped support for Bazel 6, requiring a minimum of Bazel 7.1.0.
However, throughout most of the Bzlmod compatibility work I did, I also tested to ensure compatibility with Bazel 6.5.0 (until we decided to drop support). As a result, the Limited Bazel 6.5.0 compatibility section of the README leaves clues on getting `rules_scala` v7.0.0 to build with Bazel 6.5.0. In fact, bazel-contrib/rules_scala#1756 removed the `protobuf` v29 maximum version constraint for legacy `WORKSPACE` builds with Bazel 6.5.0. With those changes, the `rules_scala` test suite once again passes for such builds.
So while `rules_scala` no longer officially supports Bazel 6.5.0, there's still a lifeline enabling those users to upgrade to `rules_scala` v7.0.0 (or v7.1.1) first. Once they do, they'll be closer to upgrading to Bazel 7 (with which `rules_scala` v6.6.0 is not compatible), or even Bazel 8. After that, it's not much more work to enable Bzlmod.
Nested test modules
Nested repositories enable you to write tests that validate the main repository's behavior from a user's perspective. As of v7.1.1, `rules_scala` has fifteen nested repositories used for various tests. This includes (but is far from limited to) the nested repositories/modules in the examples directory.
Each nested repository in `rules_scala` contains:
- Its own `MODULE.bazel` and legacy `WORKSPACE` files that import the parent repository/module
- Imports of the latest dependency versions supported by the parent repository/module
- A `.bazelrc` file that uses import to use the same flags as the parent module (e.g., `import ../.bazelrc`)
- A `.bazelversion` file
scripts/sync-bazelversion.sh updates all `.bazelversion` files to match the root `.bazelversion`, since `.bazelversion` doesn't support an import directive.
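The core of such a sync script is a straightforward copy operation. Here's a hedged sketch, loosely modeled on what scripts/sync-bazelversion.sh does (the real script may differ; the function name is illustrative):

```shell
# Hypothetical sketch of a .bazelversion sync helper: overwrite every
# nested .bazelversion file with the root repository's copy.
sync_bazelversion() {
  local root="$1"
  local version_file="${root}/.bazelversion"

  # -mindepth 2 skips the root .bazelversion itself.
  find "${root}" -mindepth 2 -name .bazelversion -print0 |
    while IFS= read -r -d '' nested; do
      cp "${version_file}" "${nested}"
    done
}
```

After changing the root `.bazelversion`, running this helper keeps every nested module in sync.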
Add a .bazelignore entry for each nested module
Each nested module directory also appears in the parent module's .bazelignore file. This excludes it from `bazel {build,test} //...` invocations under Bzlmod (until the resolution of bazelbuild/bazel#22208).
Add a .bazelversion file or other Bazel version selection mechanism
Assuming you're using Bazelisk to select the Bazel version, alternatives to having individual `.bazelversion` files in each module include:
- Consistently exporting `USE_BAZEL_VERSION` in the test environment. However, explicit `.bazelversion` files help avoid surprising situations when working directly in nested modules without `USE_BAZEL_VERSION` set.
- Using symlinks instead of a sync script. This will work on most platforms, but may break Windows builds unless the Windows configuration allows symlinks. (Maybe this is an obsolete concern? Let me know if so!)
Create a nested latest_dependencies Bazel module
Nested test modules won't have access to the latest dependencies specified in the parent module's single_version_override directives. However, you can define a special nested `latest_dependencies` module that specifies these versions. Bazel will then use the `latest_dependencies` module to resolve the dependencies of all nested test modules to their latest supported versions.
For example, `rules_scala` v7.1.1 defines its `latest_dependencies` module under deps/latest. Ironically, it does set bazel_compatibility to the lowest officially supported version of Bazel.
Use local_path_override in nested MODULE.bazel files
In the nested `MODULE.bazel` files, use local_path_override to import both the top level module and the `latest_dependencies` module. For example, from the test_cross_build module from rules_scala:
Example from test_cross_build/MODULE.bazel from rules_scala v7.1.1
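To visualize the shape of such a file, here's a hedged sketch. The module names come from the surrounding discussion, but the version numbers and relative paths are illustrative assumptions, not the actual file contents:

```starlark
module(name = "test_cross_build")

# Import the parent module from the repository root (path is illustrative).
bazel_dep(name = "rules_scala", version = "0.0.0")
local_path_override(
    module_name = "rules_scala",
    path = "..",
)

# dev_dependency = True means no version attribute is required, and Bazel
# won't try to resolve latest_dependencies via the Bazel Central Registry.
bazel_dep(name = "latest_dependencies", dev_dependency = True)
local_path_override(
    module_name = "latest_dependencies",
    path = "../deps/latest",
)
```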
Notice that we're using `latest_dependencies` as a `dev_dependency`. This ensures that the compatibility test we'll examine in the next post can invoke test targets from the module if desired. Without `dev_dependency = True`, Bazel will break due to the lack of a `version` attribute. With only `version = "0.0.0"`, Bazel will break because `local_path_override` doesn't apply and `latest_dependencies` doesn't appear in the Bazel Central Registry. Though the compatibility test only invokes `@rules_scala` and `@multi_frameworks_toolchain` targets, it's good practice to use `dev_dependency = True` consistently.
Why not use latest_dependencies in the parent MODULE.bazel file?
You could use `local_path_override` to import `latest_dependencies` instead of using `single_version_override` in the top level `MODULE.bazel` file. That would eliminate the duplication of version information between the top level module and the nested module. However, it would produce a number of annoying warnings while building and testing the top level module.
Use local_repository and latest_deps.bzl in nested legacy WORKSPACE files
In the nested repositories' legacy `WORKSPACE` files, refer to the parent repository using local_repository and load its latest_deps.bzl file:
Importing the parent repository using local_repository
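Here's a hedged sketch of the overall shape, assuming the legacy repository name `io_bazel_rules_scala`; the relative path and the exact symbol loaded from `latest_deps.bzl` are illustrative assumptions, not the actual file contents:

```starlark
# Refer to the parent repository via a relative path (path is illustrative).
local_repository(
    name = "io_bazel_rules_scala",
    path = "..",
)

# The loaded symbol name is a hypothetical placeholder for illustration.
load("@io_bazel_rules_scala//:latest_deps.bzl", "latest_deps")

latest_deps()
```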
Use Bash regular expressions to accommodate log differences
One perhaps not so surprising aspect of Bazel's evolution is that its log output may change in subtle ways between releases or build modes. For example, this command runs the test_stamped_target_label_loading test from rules_scala 7.1.1. This test validates an expected log message containing a buildozer command:
Command to run test_stamped_target_label_loading in isolation
In legacy `WORKSPACE` runs, the output contains the following buildozer command (with newlines added for readability):
Buildozer message from a legacy WORKSPACE build
In Bzlmod runs, the `buildozer` command looks much different, since it now includes the canonical repo name for `@io_bazel_rules_scala_guava_2_12_20`:
Buildozer message from a Bzlmod build
The solution was to build up a Bash regular expression that would match both versions of the output (ultimately evaluated by _expect_failure_with_messages):
Bash regex matching both Bzlmod and legacy WORKSPACE messages
Consider using Bash regular expressions in place of grep.
It's common to use `grep` in Bash conditionals, but it's not strictly necessary. See the description of the `=~` operator in the Conditional Constructs section of the Bash manual, and the description of the BASH_REMATCH variable.
For checking the presence or absence of a pattern within a string, the replacement is trivial:
A Bash equivalent of grepping for the presence of a string
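As a hedged sketch of the pattern (the output string and regex here are illustrative, not actual rules_scala log messages):

```shell
# Instead of: if echo "$output" | grep -q 'Build completed'; then ...
# use the Bash =~ operator directly.
output='INFO: Build completed successfully'
pattern='Build completed'

if [[ "$output" =~ $pattern ]]; then
  result='matched'
else
  result='no match'
fi
```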
For collecting or iterating over a list of matching lines, it's slightly more involved (note that the `$pattern` regex is not quoted on purpose):
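Here's a hedged sketch of that collection pattern; the input lines and regex are illustrative, not from the rules_scala helpers:

```shell
# Collect the capture group from every matching line via BASH_REMATCH.
output=$'ERROR: no such target\nINFO: all good\nERROR: missing dependency'
pattern='^ERROR: (.*)$'

errors=()
while IFS= read -r line; do
  # $pattern is intentionally unquoted: quoting it would force a literal
  # string comparison instead of a regular expression match.
  if [[ "$line" =~ $pattern ]]; then
    errors+=("${BASH_REMATCH[1]}")
  fi
done <<< "$output"
```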
Always escape literal curly braces in Bash regular expressions.
One cross-platform gotcha: always escape literal curly brace characters (e.g., `\{`) in Bash regular expressions. Bash on macOS doesn't require this, but Bash on Linux and Windows does, likely due to different underlying regular expression implementations.
Generating test modules
As an alternative to including permanent nested test modules, you can instead create templates for generating test modules on the fly. This is especially useful for creating version compatibility smoke tests, as we'll see in the next blog post.
Without stealing thunder from that next post, here's a brief overview of test_bzlmod_macros.sh, which also generates test modules:
- `setup_suite` invokes setup_test_tmpdir_for_file to create the `$test_tmpdir` directory, which will contain the generated test module. It also sets several variables used by the test suite.
- `setup_test_module` copies test files from scala/private/macros/test into the test module directory. It substitutes `${rules_scala_dir}` in the `MODULE.bazel` file, then appends the function arguments to it (i.e., test-specific configuration lines).
- Each test case calls `setup_test_module` with lines to append to `MODULE.bazel`, then calls helper functions to execute Bazel and validate its output.
- `teardown_suite` invokes teardown_test_tmpdir to shut down Bazel, expunge its working tree (see below), and delete the temporary directory.
For another example of a test helper that generates (and cleans up) test modules, see run_in_test_repo from test_version.sh.
Generate test files in the test cases themselves, if feasible
You may consider generating test files from within the test case functions themselves, instead of copying other files. This may be useful if the files are relatively small and unique to a specific test case, preventing the proliferation of many extra test files.
The basic pattern is:
Generating a test file within a test case function in Bash
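As a hedged sketch of the pattern (the function name, file name, and file contents are illustrative):

```shell
# Generate a small BUILD file inside a test case function using a here
# document, instead of copying a checked-in test fixture.
generate_test_build_file() {
  local dir="$1"
  cat > "${dir}/BUILD" <<'EOF'
filegroup(
    name = "generated",
    srcs = ["generated.txt"],
)
EOF
}
```

A test case would call this helper to set up its workspace, then invoke Bazel and validate the output.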
For many more examples of this, see bazel_coverage_java_test.sh from bazelbuild/bazel. For more background on the mechanism, see the description of Here Documents in the Bash Manual.
Run bazel clean --expunge_async to reclaim resources from generated modules
Run bazel clean --expunge_async at the end of tests that generate their own test repositories or modules to limit resource usage. (Note that this command also implies `bazel shutdown`.) Otherwise, each generated module's Bazel server will continue running, and its `bazel info output_base` directory will continue to exist and consume space. This is especially important for any uniquely generated test modules, i.e., modules generated into a random directory.
I learned the hard way.
For example, after months of running `test_version.sh` as part of my `rules_scala` work, my Mac's drive filled up with "System Data." It turned out that this script called `bazel shutdown` after every test, but its `output_base` for every randomly generated test repository remained. This caused my outputUserRoot directory to eventually fill the disk with stale `output_base` directories from the modules generated by these test runs.
The solution was to `rm -rf ${outputUserRoot}/*` and then to update `test_version.sh` to run `bazel clean --expunge_async` after every test.
You may do this even for non-generated nested modules, though it's less critical in that case. You might consider writing a script like test_cleanup.sh to occasionally reclaim storage from nested modules instead of expunging them after every test.
This doesn't clear --disk_cache space.
Running `bazel clean --expunge_async` clears the disk space for the project, but has no effect on the local disk cache configured via --disk_cache.
Consider using a consistent directory for test repositories or modules
Alternatively, local development speed may be more of a concern than conserving resource usage or guaranteeing a brand new working directory every time. In that case, consider generating test repositories or modules into consistently named test directories instead.
A generated module in a well known test directory will consume a single Bazel server and output directory. Tests will run faster when run repeatedly during development, since the Bazel server will already be running and won't have to rebuild the entire module. Tests will consume an essentially bounded amount of resources, even without `bazel clean --expunge_async`, since there will be a bounded number of test modules.
The trade off is that the test script must take extra care to ensure the correct starting state for each test case. This usually isn't difficult, but requires more care than generating a new test directory for each test case or test suite run.
rules_scala has a helper for generating a consistent working dir.
The setup_test_tmpdir_for_file helper in `rules_scala` creates a new, consistent working directory and changes into it before returning. Its intended use is to create the new directory, named after the caller's source file, within the `tmp/` subdirectory of the project's root directory. This also requires adding `tmp/` to the project's .gitignore file.
Several `rules_scala` tests define `setup_suite` and `teardown_suite` functions that invoke `setup_test_tmpdir_for_file` and teardown_test_tmpdir, respectively. However, while `setup_suite` always runs, `teardown_suite` generally will not run if a test case fails. This provides the nice property of keeping the test's generated working directory and its resources intact while fixing a broken test. The script will then clean up the test directory's resources after the tests pass again. The test_dependency_versions.sh script we'll discuss in the next post is one such example.
It would be trivial to add an environment variable to control whether to enable or disable the `teardown_test_tmpdir` invocation. You may choose to implement such a mechanism in your own test suite.
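Here's a hedged sketch of such a consistent-working-directory helper, loosely modeled on setup_test_tmpdir_for_file; the real helper's signature and behavior may differ (in particular, the `project_root` parameter is an assumption for illustration):

```shell
# Create (or reuse) a consistent working directory named after the
# caller's source file, under the project's tmp/ subdirectory, and cd
# into it before returning.
setup_test_tmpdir_for_file() {
  local source_file="$1"  # e.g., the test script's own path
  local project_root="$2" # assumed parameter; the real helper may compute this

  local name
  name="$(basename "${source_file}" .sh)"
  local tmpdir="${project_root}/tmp/${name}"

  mkdir -p "${tmpdir}"
  cd "${tmpdir}" || return 1
}
```

Because the directory name is deterministic, repeated runs reuse the same Bazel server and output directory.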
Running tests under multiple Bazel versions and configurations
Now that we have a robust, flexible test suite, here's some advice on running it to ensure new changes are compatible across different Bazel versions.
Maintain a common entry point for running all tests
Entry points such as `bazel test //...` or `test_all.sh` make it easy to run all the same tests locally as in continuous integration. This makes it easier for developers to catch most potential problems before opening a pull request (or to fix them after a continuous integration failure).
Define parallelizable test suites
Test suites (and ideally, individual test cases) without dependencies between them can run in parallel, and are easier to work with and maintain over time. While the common entry point is important for running all tests, configuring parallel test jobs will enable you to minimize continuous integration cycle times.
For example, test_all.sh runs a series of other scripts, which .bazelci/presubmit.yml configures as separate, parallel continuous integration jobs.
Make continuous integration operating system images available
If possible, make continuous integration operating system images available for local testing and debugging. Most of the time, this shouldn't be necessary, but it can prove critical for debugging within complicated environments or other operating systems.
For example, `rules_scala` relies on images from the bazelbuild/continuous-integration framework. I patched buildkite/docker/ubuntu2004/Dockerfile to create an arm64 Linux image for Docker on my macOS machine. This enabled me to debug and repair the Bash regular expression curly brace failures from bazel-contrib/rules_scala#1722. While not as convenient as pulling an existing image, it's better than nothing.
Exercise multiple Bazel versions and both build modes locally
We may not have multiple operating systems or architectures available for local development. However, we can easily run our test suites locally with different Bazel versions, under both Bzlmod and legacy `WORKSPACE` build modes.
While developing `rules_scala` locally, I'll usually run `./test_all.sh` under each of the following configurations before creating (or updating) a pull request:
- Using the default `.bazelversion` (Bazel 7) and Bzlmod
- Switching `.bazelrc` to the legacy `WORKSPACE` build
- Updating `.bazelversion` to the latest Bazel 8 release
- Switching `.bazelrc` back to the Bzlmod build
- Updating `.bazelversion` to `rolling` (Bazel 9 prerelease)
- Updating `.bazelversion` to `last_green` (Bazel 9 pre-prerelease)
This isn't just an academic exercise; I've found and fixed many actual compatibility bugs with this process. To unpack a few details:
- Every time I update `.bazelversion`, I run scripts/sync-bazelversion.sh. When I'm done, `git restore **.bazelversion` returns all `.bazelversion` files to their original state. `export USE_BAZEL_VERSION=... ./test_all.sh` would probably work, but when debugging, updating `.bazelversion` avoids having to set an environment variable on every command.
  .bazelversion could be a symlink, if Windows isn't a concern.
  As mentioned earlier, each `.bazelversion` instance is a regular file synchronized by `scripts/sync-bazelversion.sh`. This is because, historically, Windows systems haven't enabled symlinks by default. However, the continuous integration workers running Windows do have symlinks enabled, and the `protobuf.patch` symlinks in each nested module work fine. So we may turn these into symlinks and remove `scripts/sync-bazelversion.sh` at some point.
I run the test suite using the latest versions of Bazel 7 and 8, plus
rolling
andlast_green
. These latter two are special Bazelisk version specifiers for prerelease versions of Bazel, which are useful for ensuring compatibility with the upcoming Bazel 9 release. -
For Bazel 7 and 8, I switch between
WORKSPACE
and Bzlmod by updating the flags in .bazelrc. Therolling
andlast_green
releases have already removedWORKSPACE
support (Praise Kier!), so I can only use Bzlmod for those runs anyway. (The compatibility test suite we'll discuss in the next post always builds under Bzlmod, and sets its own Bazel versions directly.)
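A script in the spirit of `scripts/sync-bazelversion.sh` only needs to propagate the root `.bazelversion` into each nested copy. This is a minimal sketch of that idea, not the real script; its behavior and the directory layout are assumptions.

```shell
#!/usr/bin/env bash
# Minimal sketch: copy the repository root's .bazelversion over every nested
# .bazelversion so all modules resolve the same Bazel release via Bazelisk.
set -euo pipefail

sync_bazelversion() {
  local root="$1"
  local version_file="${root}/.bazelversion"
  # Find every nested .bazelversion (skipping the root copy itself) and
  # overwrite it with the root version.
  find "${root}" -mindepth 2 -name .bazelversion -print0 |
    while IFS= read -r -d '' nested; do
      cp "${version_file}" "${nested}"
    done
}
```

After running a helper like this, `git restore '**.bazelversion'` undoes the synchronization, matching the workflow described above.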
Testing in continuous integration¶
Running the entire test suite using every Bazel version, under both Bzlmod and
legacy `WORKSPACE` modes, would require significant continuous integration
resources. However, continuous integration makes it easier to execute test
suites in parallel, and to test across multiple operating systems and
architectures.
Here are a few bits of advice for getting value from continuous integration beyond what's usually available during local development, without consuming too many resources.
Only use Bzlmod when it's available¶
Because the legacy WORKSPACE mode is going away in Bazel 9, ensuring
Bzlmod compatibility is the priority. Also, what works for Bzlmod usually also
works for legacy WORKSPACE
builds, since Bzlmod is far more strict to begin
with. Running the tests locally under legacy WORKSPACE
configurations is still
important, but can happen periodically during local development without
introducing too much risk.
The rules_scala
continuous integration system now only builds using Bzlmod,
ever since Bzlmod compatibility landed in bazel-contrib/rules_scala#1722.
That said, nothing's stopping you from running continuous integration jobs that
use both Bzlmod and legacy WORKSPACE
builds. It's just questionable whether
the additional legacy WORKSPACE
jobs provide the same return on investment
once Bzlmod jobs are in place. It's your money; spend it how you want.
Use the minimum and last_green Bazel versions¶
Configure the majority of continuous integration jobs to use the latest release
of the oldest supported major Bazel version. Configure one job to use the
last_green
Bazel version. This achieves a good balance between thoroughness
and resource consumption, while catching issues with the future major Bazel
release as early as possible. This is usually sufficient to catch problems that
could break the current Bazel version as well (Bazel 8).
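One way to implement this split is to select the Bazel version from a job identifier. `USE_BAZEL_VERSION` is a real Bazelisk variable; the `CI_JOB` variable, the `select_bazel_version` helper, and the version number are hypothetical illustrations of the policy described above.

```shell
#!/usr/bin/env bash
# Hypothetical CI setup: most jobs pin the latest release of the oldest
# supported major Bazel version; one dedicated job tracks last_green.
set -euo pipefail

MIN_BAZEL_VERSION="7.6.1"  # latest release of the oldest supported major version

select_bazel_version() {
  case "${1:-default}" in
    last_green) echo "last_green" ;;            # the one forward-looking job
    *)          echo "${MIN_BAZEL_VERSION}" ;;  # everything else
  esac
}

export USE_BAZEL_VERSION="$(select_bazel_version "${CI_JOB:-}")"
echo "Running with Bazel ${USE_BAZEL_VERSION}"
# A real job would then invoke the project's test scripts via Bazelisk.
```

Because Bazelisk resolves `last_green` at run time, the forward-looking job needs no maintenance as new prerelease builds appear.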
At the time of writing, the rules_scala Buildkite continuous integration
build predominantly uses the latest Bazel 7 version (currently 7.6.1). One
job uses the last_green
prerelease of Bazel 9. A regularly scheduled build
helps ensure that last_green
compatibility holds over time, independent of
pull request activity.
test_rules_scala.sh runs the most essential tests (not that the other tests
aren't important, but this script covers the bulk of the core functionality). Having
the one last_green
job run this script catches most compatibility issues with
Bazel 8 and future releases.
Run tests in parallel across different operating systems and architectures¶
Running test suites in parallel reduces the overall running time of the continuous integration build. Running them across different operating systems and architectures provides greater confidence in the changes under test. Doing both at the same time gets you the best of both worlds.
You may not need to run every test suite on every operating system; for example
rules_scala
only runs test_lint.sh
on Linux. However, the most essential
tests that are most likely to surface platform incompatibilities run on Linux,
macOS, and Windows.
Extra credit: Examine an alternative approach from rules_go¶
My colleague and reviewer Jay Conrod was once a maintainer of rules_go, and his summary of that project's approach is worth considering:
- `go/bazel/tools/bazel_testing` has a testing library and a `go_bazel_test` macro to simplify its use.
- `go_bazel_test` wraps `go_test` and is invoked through Bazel as a normal test. Inputs are all files that are part of the ruleset; changing other files like the `README` won't invalidate test results. However, `go_bazel_test` sets the "local" and "exclusive" tags, meaning these tests cannot be run remotely and can only run one at a time.
- The testing library has a `TestMain` function. You can call it with a test workspace (expressed as a txtar archive string) and some other options. It extracts the workspace into a temporary directory for the duration of the test.
- The testing library has some utilities for running Bazel. Test cases are written in Go and are mostly platform agnostic.
- When writing `MODULE.bazel` or legacy `WORKSPACE` files, the testing library uses `local_path_override` or `local_repository` for some important repos. This ensures that tests actually exercise the local `rules_go` and avoids wasting time downloading toolchains. Note: This aligns with the `local_path_override` and `local_repository` advice above.
- There is some hackery around `TEST_TMPDIR` and the output root because Bazel behaves differently when run inside a test. The test tries to use a consistent output directory to avoid rebuilding everything every time.
For examples, see the go_bazel_test BUILD targets within rules_go.
Conclusion¶
We've covered many elements of writing tests that are compatible with a range of
Bazel versions, and both Bzlmod and legacy WORKSPACE
builds. We've considered
how to run them during local development and in continuous integration to catch
Bazel compatibility bugs early and often. In the next post, we'll cover writing
a smoke test to ensure that the project preserves compatibility with its oldest
declared dependency versions, including Bazel.
Combined, the two testing approaches described in this post and the next validate compatibility with a broad range of dependency versions. Users will appreciate having the ability to upgrade your project without upgrading others, while having the ability to upgrade other dependencies easily whenever they choose. (Whereby the measurement of "appreciation" is the lack of complaints more so than direct expressions of gratitude, of course.)
As always, I'm open to questions, suggestions, corrections, and updates relating to this series of Bzlmodification posts. Check the Updates sections of previous posts for new information, including today's updates to Maintaining Compatibility, Part 1. It's easiest to find me lurking in the #bzlmod channel of the Bazel Slack workspace. I'd love to hear how your own Bzlmod migration is going—especially if these blog posts have helped!
Shameless BazelCon 2025 Bzlmod Migration Bootcamp plug (Slight Return)
If you're registered for the BazelCon 2025 Bzlmod Migration Bootcamp, I'd love to hear your biggest questions and concerns in advance! Please respond to the Bzlmod Migration Bootcamp thread in the #bazelcon channel of the Bazel Slack Workspace if so inspired.