Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 4
This "Maintaining Compatibility" trilogy began by describing how to, well,
maintain compatibility with Bzlmod builds, legacy WORKSPACE
builds, and a
range of dependency versions. However, this was only half the story. Automated
testing is essential for validating our compatibility assertions, not lying to
ourselves and our users, and preventing the undoing of our hard work.
The previous post described how to write and run tests
that enable switching between Bazel versions and the Bzlmod and legacy
WORKSPACE
build modes. Those tests use the latest versions of our non-Bazel
dependencies to ensure forward compatibility.
This fourth and final part of our trilogy describes how to write tests to validate backwards compatibility with combinations of older dependency versions. We'll build on the techniques from the previous post, while learning what makes these backwards compatibility tests substantially different from other tests in the suite.
All posts in the "Migrating to Bazel Modules" series
- Migrating to Bazel Modules (a.k.a. Bzlmod) - The Easy Parts
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names and Runfiles
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names and rules_pkg
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names, Macros, and Variables
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Module Extensions
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Fixing and Patching Breakages
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Repo Names, Again…
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Toolchainization
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 1
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 2
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 3
- Migrating to Bazel Modules (a.k.a. Bzlmod) - Maintaining Compatibility, Part 4
BazelCon 2025 Bzlmod Migration Bootcamp input
Before we begin, if you're registered for the BazelCon 2025 Bzlmod Migration Bootcamp, I'd love to hear your questions and concerns in advance! Please respond to the Bzlmod Migration Bootcamp thread in the #bazelcon channel of the Bazel Slack Workspace.
Prerequisites
As always, please acquaint yourself with the core concepts pertaining to external repositories from the Bazel documentation if you've yet to do so.
To review many key differences between Bzlmod and the legacy WORKSPACE
model,
see the comparison table from the "Module Extensions" post.
Test combinations of older versions with parameterized smoke tests
Assume that all existing tests run under multiple Bazel versions, and under
Bzlmod and legacy WORKSPACE
builds, and achieve sufficient
coverage. Our first priority is to make sure the entire
suite keeps passing with these different Bazels, build modes, and the latest
non-Bazel dependency versions. This provides confidence that legacy WORKSPACE
users can easily update our project before migrating their projects to Bzlmod.
It also provides confidence that our project continues to remain compatible with
the most recent dependency versions, instead of being locked into requiring
obsolete versions.
At the same time, we do still need to support the oldest dependency versions we possibly can, while remaining compatible with the newest versions. As we've discussed, this gives our users maximum flexibility in terms of upgrading our project and its related dependencies independently, in whichever order they choose. However, running the entire suite using every combination of older Bazels and older dependency versions would be expensive, time consuming, and provide marginal value.
A parameterized smoke test suite can validate compatibility with various combinations of Bazel versions and non-Bazel dependencies in a reasonable amount of time. By reusing a subset of test cases from other parts of the test suite, it probes for potential incompatibilities with specific dependencies or combinations thereof. It will quickly and clearly break if any changes to the project are fundamentally incompatible with our officially supported dependency versions.
What is a "smoke test?"
The "smoke test" metaphor comes from turning on a machine and checking whether any smoke comes out. In other words, a smoke test is a large, end-to-end test that fails fast if anything is fundamentally wrong with the system as a whole. It trades off depth of validation for a faster indication of whether or not catastrophic errors exist.
The rest of the test suite is responsible for validating finer grained expectations about the system's behavior. However, if the smoke test fails, we can halt running the rest of the test suite and begin investigating the failure right away. While the appropriate response is often to add a smaller test to reproduce the error, then get it to pass, that isn't always the case. The failure could be due to a configuration or dependency requirements issue, which the test cases described in this post will detect.
If all other tests pass, and the dependency compatibility smoke test case fails, we need to do one of the following:
- Fix the code such that the smoke test passes, and all other tests using the latest dependency versions continue to pass. (Consider also adding a new, smaller test to reproduce the problem first, if warranted.)
- Configure our parameterized smoke test case to use newer dependency versions that fix the test, and clearly document the updated dependency version requirements for users.
As an example, we'll examine test_dependency_versions.sh from rules_scala
v7.1.2 and its test module template files within the deps/test directory.
It validates the dependency versions described in the Compatible Bazel
versions and Using a precompiled protocol compiler sections of the
rules_scala v7.1.2 README file.
Design a test module
The compatibility smoke test should exercise a test module that validates one or
more permutations of older dependencies. There may be more permutations to test
beyond the minimum versions for all dependencies, since not all older
dependencies will be compatible with one another. For example, validating the
oldest versions of Bazel 7 and Bazel 8 requires validating different minimum
rules_java
and protobuf
versions for each.
You could write separate permanent modules for every permutation of
dependencies you wish to test. However, I find it more convenient and
maintainable to create a configurable template module for reuse across
multiple test cases. Each test case generates the MODULE.bazel
file from the
template, so adding new test cases is easy, and there are no accidental
differences between modules. Such test cases are also easier to maintain as
minimum dependency requirements change over time.
Test using Bzlmod only
Designing the compatibility smoke tests to run only under Bzlmod reduces the overall complexity of the task. This is because:
- Bzlmod presents a relatively uniform means of setting up dependencies, which then largely initialize themselves. This setup structure remains constant across versions, at least relative to what's required by this test suite.
- In contrast, some packages have changed their legacy WORKSPACE setup macros between even minor version releases. For example, rules_java 7.12.4 and rules_java 8.4.0 have the same legacy WORKSPACE setup instructions, but rules_java 8.5.0 updates them. Some of these macros have ordering dependencies on macros from other packages. Trying to accommodate these legacy WORKSPACE differences rapidly becomes a nightmare.
- The Bzlmod and legacy WORKSPACE configuration outcomes would have to be equivalent anyway, to reliably validate specific versions of each dependency. The rest of the test suite already ensures that equivalent configurations yield equivalent outcomes under Bzlmod and legacy WORKSPACE builds.
Create MODULE.bazel.template with placeholders, overrides, and toolchains
deps/test/MODULE.bazel.template
(seen below) contains several string
substitution patterns and overrides for setting dependency versions, amongst
other important features.
It uses local_path_override to import the parent module, as well as the
nested @multi_frameworks_toolchain
module. test_dependency_versions.sh
executes several test targets directly from the @rules_scala
and
@multi_frameworks_toolchain
modules, as we'll see later.
MODULE.bazel.template
also applies single_version_override to each
dependency module. As shown below, test_dependency_versions.sh
substitutes
placeholders such as ${skylib_version}
with the versions under test. The
overrides ensure we get exactly the dependency versions we expect, preventing
Bazel from resolving them to different versions based on declarations in other
modules.
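To make this concrete, here's a minimal, abridged sketch of the template's shape. The ${skylib_version} placeholder style comes from the real file, but the specific module names, dependencies, and paths below are illustrative stand-ins:

```starlark
# Abridged, illustrative sketch of deps/test/MODULE.bazel.template.
module(name = "dependency_versions_test")

# Import the parent module and its nested example module by path.
bazel_dep(name = "rules_scala")
local_path_override(module_name = "rules_scala", path = "../..")

bazel_dep(name = "multi_frameworks_toolchain")
local_path_override(
    module_name = "multi_frameworks_toolchain",
    # Hypothetical path to the nested example module.
    path = "../../examples/multi_frameworks_toolchain",
)

# Pin each dependency to the exact version under test, so Bazel can't
# resolve it to a different version declared elsewhere in the module graph.
bazel_dep(name = "bazel_skylib", version = "${skylib_version}")
single_version_override(
    module_name = "bazel_skylib",
    version = "${skylib_version}",
)

bazel_dep(name = "protobuf", version = "${protobuf_version}")
single_version_override(
    module_name = "protobuf",
    version = "${protobuf_version}",
)
```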
Most consumers won't need to use single_version_override.
We're using this test module template to force Bazel to use exact
combinations of dependency versions that we want to test with our published
module. Under normal circumstances, consumers would not need to use
single_version_override
and should allow Bzlmod to perform its normal
module version selection. (Unless those consumers are dependencies of
other modules, per the bazel_dep and single_version_override advice
from Maintaining Compatibility, Part 1.) If Bazel emits module
resolution warnings indicating the presence of newer dependency versions in
the module graph, users should update their module's bazel_dep
versions.
bazel mod tidy
can make these updates automatically.
The purpose of this test module, however, is to validate compatibility with
combinations of the lowest possible dependency versions that can work
together. We're asserting that our published module requires at least these
minimum versions of other dependencies. Which specific combination of
minimum dependency versions applies to a particular user depends on their
project's requirements. For example, the test cases confirm the minimum
dependency versions required to upgrade to Bazel 7 or 8, or to protobuf
versions supporting the precompiled protocol compiler toolchain. Our
published module's bazel_dep declarations then specify the absolute minimum
version of every dependency, as validated by the test case covering these
earliest versions.
The rest of the test suite guarantees compatibility with the latest available versions, as described in Maintaining Compatibility, Part 3. So if our published module works with the combinations of minimum dependency versions described here, it should also work with combinations of newer dependency versions. Testing every possible combination of dependency versions isn't feasible, but this strategy provides reasonable confidence that our module remains compatible with specific dependency version ranges.
In addition to defining dependency versions, MODULE.bazel.template
also
instantiates every toolchain offered by the scala_deps
module extension.
test_dependency_versions.sh
invokes test targets defined by rules that
rely upon all of these toolchains. Deleting or commenting out any of them will
cause the test to fail. (Well, except scala_deps.junit(), since scala_deps.specs2() also instantiates the JUnit toolchain and repos.)
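The toolchain portion of the template looks something like the following sketch. The extension label and the exact set of tag calls are assumptions based on the scala_deps description above, so check them against the actual file:

```starlark
# Instantiate every toolchain offered by the scala_deps module extension.
scala_deps = use_extension(
    "@rules_scala//scala/extensions:deps.bzl",  # assumed extension label
    "scala_deps",
)
scala_deps.scala()
scala_deps.scala_proto()
scala_deps.scalafmt()
scala_deps.scalatest()
scala_deps.specs2()  # also instantiates the JUnit toolchain and repos
```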
Add a .bazelignore entry to avoid breaking local_path_override directives
The .bazelignore file contains deps/
so that target patterns like
//...
won't inadvertently match any files in deps/
when building the top
level module. More importantly, it prevents the local_path_override
directives in deps/test/MODULE.bazel.template with relative parent
directory paths from breaking the build.
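In concrete terms, the entry is a single line in the repository root's .bazelignore:

```
deps/
```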
This shouldn't be necessary, and one day it may not be.
local_repository
calls in legacy WORKSPACE
files that reference relative
parent directory paths actually succeed without such .bazelignore
entries.
Equivalent support for local_path_override
is pending, tracked in
bazelbuild/bazel#22208.
Add BUILD and .bzl files to define special case targets and toolchains
Certain targets and toolchains may only work when invoked within the context of
the main module, requiring special treatment. This is why deps/test
also
contains the following files, which are necessary for defining a target to
exercise the Scalafmt toolchain. For more info on why the Scalafmt
toolchain is a special case, see the Motivation section of the description
from bazel-contrib/rules_scala#1758.
- defs.bzl defines the scalafmt_scala_test rule.
- BUILD.bazel.test contains a scalafmt_scala_test target, :ScalafmtTest.
test_dependency_versions.sh
copies the :ScalafmtTest
source file,
ScalaTestExampleTest.scala
, from the multi_frameworks_toolchain
example
module. We could've copied this into deps/test
directly, but this way the
test implementation won't drift from the original file.
Why did the original deps/test contain more files, targets, etc.?
The rules_scala v7.0.0 version of deps/test contained two Scala source
files, and more BUILD
targets and deps.bzl
symbols copied from other
tests. Making those copies seemed like the path of least resistance at the
time.
While writing this blog post, I grew disgusted with myself over my crimes of
duplication, and experimented with invoking @rules_scala
test targets
directly. Except for the one Scalafmt target, the experiment worked,
yielding bazel-contrib/rules_scala#1758 and the much improved testing
advice that you're reading now.
Select test targets from the module under test
The next task is to select a representative set of test targets exercising all the toolchains, rules, and macros that your project provides. Again, we're not looking to exercise all behaviors and expectations of these elements, which is what the rest of the test suite is for. We only need enough representative targets to catch catastrophic breakages due to incompatibilities with specifically configured dependency versions.
We'll see later how the script itself specifies the selected targets, but there's another issue to consider first.
Partition test packages that use dev dependencies from ones that don't
The packages containing the selected test targets cannot contain load
statements for dev dependency modules or repositories. This is because
these dev dependencies will not be available to such packages when the module is
not the main module. Trying to run test targets from such packages will break
the build.
For example, rules_scala
depends fully upon rules_java
, so the test can run
targets from @rules_scala
packages that depend only on @rules_java
.
However, rules_python
and rules_shell
are dev dependencies, so the test
would break when running targets from packages that depend on either of them.
To solve this, create new packages for the tests that require dev
dependencies. This mostly involves moving files from one directory to
another, plus moving targets to the new BUILD
files and updating some of their
dependency references. bazel-contrib/rules_scala#1758 made several tests
available to test_dependency_versions.sh
test modules in this way.
Write a test suite based on a parameterized test function
Open test_dependency_versions.sh from rules_scala v7.1.2 in a side tab and have it handy, as I won't copy every detail into this post. I will, however, describe each section of the test file.
Test runner and helper setup
The opening block is standard boilerplate for rules_scala
Bash test files. It
sets dir
to the root of the rules_scala
git repository, and test_source
as
the path to the test file relative to dir
. It then loads the standard test
runner and test helper functions used throughout the rules_scala
test suite.
Common rules_scala Bash test preamble:
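Here's a rough sketch of that preamble; the helper script paths are illustrative stand-ins for the real ones:

```sh
#!/usr/bin/env bash
# Resolve the repository root and this file's path relative to it.
dir="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
test_source="${BASH_SOURCE[0]#"${dir}"/}"

# Load the standard test runner and helper functions (illustrative paths).
. "${dir}/test/shell/test_runner.sh"
. "${dir}/test/shell/test_helper.sh"
```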
setup_suite and teardown_suite
setup_suite handles several tasks:

- It creates the temporary directory in which we'll create the test module and run the tests, via the setup_test_tmpdir_for_file helper. This helper will cd into the new test directory, updating the value of $PWD.
- It sets the original_dir and test_tmpdir variables used by teardown_suite.
- It defines a regex to ensure only tests using the precompiled protobuf compiler run on Windows.
teardown_suite calls the teardown_test_tmpdir helper, which runs bazel clean --expunge_async and removes the temporary directory. It also changes back to $original_dir.
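A condensed sketch of both functions, using the helper names described above; the Windows check, regex value, and helper arguments are assumptions:

```sh
setup_suite() {
  original_dir="$PWD"

  # Creates tmp/test_dependency_versions and cds into it, updating $PWD.
  setup_test_tmpdir_for_file "$test_source"
  test_tmpdir="$PWD"

  # Hypothetical: on Windows, only run precompiled protoc test cases.
  if [[ "$OSTYPE" == msys* || "$OSTYPE" == cygwin* ]]; then
    RULES_SCALA_TEST_REGEX="protoc"
  fi
}

teardown_suite() {
  # Runs `bazel clean --expunge_async`, removes the temporary directory,
  # and changes back to $original_dir.
  teardown_test_tmpdir "$original_dir" "$test_tmpdir"
}
```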
If the test fails, the temporary module directory remains.
If the test fails, the teardown_suite
function will not execute. This
means that the temporary module directory, in this case
tmp/test_dependency_versions
, will remain. This can prove convenient for
debugging, and when the test script runs again and passes, it will properly
clean up the directory.
If you want to keep the test module around between runs even after the test
passes, you may omit this teardown step. Then it's up to you to clean up the
Bazel OUTPUT_BASE for the module whenever you need to reclaim the space. You
may consider cleaning up only when running under continuous integration, if
space is at a premium in that environment (i.e., exit the implementation
early via [[ -z "$CI" ]] && return or some such).
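For example, a hypothetical teardown variant that only cleans up under continuous integration might look like:

```sh
teardown_suite() {
  # Keep the test module around for local debugging; only reclaim the
  # space when running under CI.
  [[ -z "$CI" ]] && return
  teardown_test_tmpdir "$original_dir" "$test_tmpdir"
}
```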
The next part explicitly disables the USE_BAZEL_VERSION
environment variable,
to ensure Bazelisk uses the Bazel version specified by each test case.
Ensuring tests use their own configured Bazel versions:
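The gist of it is a one-liner along these lines:

```sh
# Prevent an inherited USE_BAZEL_VERSION from overriding the .bazelversion
# file that each test case generates for Bazelisk.
unset USE_BAZEL_VERSION
```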
Finally, this setup section specifies the test targets that each compatibility test case will execute, exercising every published rule, macro, and toolchain. As mentioned earlier, these should be from packages with no dev dependencies.
This specific script partitions @rules_scala
and @multi_frameworks_toolchain
test targets into separate arrays to prevent repeating the repo names so much.
The ALL_TARGETS array combines these arrays, injecting the repo names for
each, and adds the //... pattern to match all BUILD.bazel.test targets.
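The arrays might be declared along these lines; every target label below is a placeholder, not one of the script's actual targets:

```sh
RULES_SCALA_TARGETS=(
  "//test/shell:sanity_test"         # placeholder labels
  "//scala_proto/example:proto_test"
)
MULTI_FRAMEWORKS_TOOLCHAIN_TARGETS=(
  "//example:ScalaTestExampleTest"
)

ALL_TARGETS=(
  # All targets from BUILD.bazel.test in the generated test module.
  "//..."
  # Inject the repo name at the front of each partitioned target label.
  "${RULES_SCALA_TARGETS[@]/#/@rules_scala}"
  "${MULTI_FRAMEWORKS_TOOLCHAIN_TARGETS[@]/#/@multi_frameworks_toolchain}"
)
```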
do_build_and_test parameterized test function
do_build_and_test
is the parameterized function that does all the heavy
lifting of generating the test module inside a temporary directory:
- It initializes all dependency versions to their minimum supported values by default, then updates each version as specified by flag arguments. It also sets a couple of other relevant configuration parameters based on flag arguments.
- It validates the specified Bazel version and creates the .bazelversion file used by Bazelisk.
- It sets a few common .bazelrc flags, then sets other flags based on the Bazel version and other flag arguments (e.g., precompiled protoc).
- It copies or creates protobuf.patch and generates MODULE.bazel from deps/test/MODULE.bazel.template using the configured dependency versions.
- It copies test files from deps/test and any other required files. In this case, the only additional file is ScalaTestExampleTest.scala from @multi_frameworks_toolchain//example.
After all that preparation, it builds all the targets and runs all the tests inside the test module. Everything's expected to pass across all combinations of dependency versions exercised by each test case:
After all that setup, the moment of truth:
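Which boils down to something like:

```sh
# Everything should pass for every combination of dependency versions
# exercised by each test case.
bazel build "${ALL_TARGETS[@]}"
bazel test "${ALL_TARGETS[@]}"
```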
Alternative Bazel configuration options
do_build_and_test
could've used these alternative approaches to configuring
its Bazel invocations:
- Assigning the USE_BAZEL_VERSION environment variable instead of generating .bazelversion to control the Bazel version via Bazelisk
- Building an array of Bazel flags (bazel build "${flags[@]}" //...) instead of generating .bazelrc
You're welcome to try these other approaches in your own tests. The advantage
of generating .bazelversion and .bazelrc is that, if the test fails,
teardown_suite doesn't execute. As a result, the test module directory and all
its configuration remain intact for debugging. Setting USE_BAZEL_VERSION
might not be too bad, but invoking Bazel with the flags required by a particular
combination of dependency versions might be.
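If you do try them, the shape would be something like the following, where the flag and variable values are purely illustrative:

```sh
# Alternative: configure the invocation directly instead of generating
# .bazelversion and .bazelrc.
flags=("--incompatible_enable_proto_toolchain_resolution")
USE_BAZEL_VERSION="$bazel_version" bazel test "${flags[@]}" "${ALL_TARGETS[@]}"
```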
Individual test cases calling do_build_and_test
Since do_build_and_test
adjusts configuration details based on dependency
versions and executes all the tests, each version compatibility test case is
purely declarative:
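The function names, flags, and versions below are illustrative, but they capture the declarative shape:

```sh
test_minimum_supported_versions() {
  do_build_and_test  # every version defaults to its minimum
}

test_minimum_bazel_8_versions() {
  # Hypothetical flags and versions; see the real script for the actual ones.
  do_build_and_test --bazel_version 8.0.0 --rules_java_version 8.5.0
}

test_precompiled_protoc() {
  do_build_and_test --precompiled_protoc
}
```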
The first test case tests the oldest supported dependency versions, as set by
do_build_and_test
. Subsequent test cases then increment one or more
dependencies to more recent versions.
In this way, it's easy to ensure the compatibility test cases line up with the
versions declared in the aforementioned README sections. There's no
automation to detect when the tests and the README
are out of sync, but the
tests are the ultimate source of truth.
main() block
The final block calls setup_suite
, runs every test_*
function in the file
using the run_tests
helper, and calls teardown_suite
afterwards. The
RULES_SCALA_TEST_REGEX
expression ensures that Windows only runs the test
cases that use the precompiled protobuf compiler (unless explicitly overridden).
The main() block at the end of test_dependency_versions.sh:
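A rough sketch of that closing block; exactly how run_tests consumes RULES_SCALA_TEST_REGEX is an assumption here:

```sh
setup_suite
run_tests "$RULES_SCALA_TEST_REGEX"
teardown_suite
```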
Conclusion
Thus concludes the fourth and final post in the "Maintaining Compatibility"
trilogy*. We've learned how to make our Bazel projects compatible with a
range of Bazel and other dependency versions, and even legacy WORKSPACE
builds. Just as importantly, we've learned how to write tests to guarantee that
our promises of broad compatibility aren't delusional. In the process, we've
learned how to move legacy WORKSPACE
users a bit closer to Bzlmod adoption
without them having to do very much. Win! Win! Four times, win!
This also might be the last post in the Bzlmod series. I am
considering revisiting MODULE.bazel.lock
, which I've yet to use or recommend
due to its historical noisiness. Recent developments that I've yet to dig into
suggest that it might have stabilized, which may make its benefits readily
available without the previous drawbacks. There's also the topic of using
compatibility_level to signal breaking changes that I've mentioned before.
Then there's the issue of migrating to Bazel 7 or Bazel 8, independent of
migrating to Bzlmod, possibly even before migrating to Bzlmod. Resolving
non-Bzlmod incompatibilities between these major Bazel versions and their
dependencies could stand as its own blog-worthy topic. We'll see.
Either way, I hope this "Maintaining Compatibility" trilogy* proves useful to those tasked with publishing Bazel modules, or to consumers curious about dependency compatibility maintenance. As always, I'm open to questions, suggestions, corrections, and updates relating to this series of Bzlmodification posts. Check the Updates sections of previous posts for new information. It's easiest to find me lurking in the #bzlmod channel of the Bazel Slack workspace. I'd love to hear how your own Bzlmod migration is going—especially if these blog posts have helped!