Makefile: Add a pcheck option to run tests in parallel

Running tests in parallel is much faster: for example, the sandbox-only
tests take 15 seconds instead of 100 on a 16-core machine. Add a
'make pcheck' target to access this feature.

Note that the tools/ tests still run each tool's tests one after the
other, although within that, they do run in parallel. So for example,
the buildman tests run in parallel, then the binman tests run in
parallel. There would be a significant advantage to running them all
in parallel together, but that would require a large amount of
refactoring, e.g. with more use of pytest fixtures.
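
The execution shape described above can be sketched as follows. This is an
illustration only, not the actual test/run logic, and the tool list shown is
a hypothetical subset:

```shell
#!/bin/sh
# Each tool's suite runs serially after the previous one, even though
# the tests within a given suite run in parallel.
for tool in buildman binman dtoc; do	# hypothetical subset of tools
	echo "${tool}: running its own tests in parallel"
done
```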

Update the documentation to reflect the current state.

Signed-off-by: Simon Glass <sjg@chromium.org>
Author: Simon Glass  2022-08-06 17:51:59 -06:00, committed by Tom Rini
parent e1c0811114
commit d1962ac797
4 changed files with 64 additions and 27 deletions

@@ -521,8 +521,8 @@ env_h := include/generated/environment.h
 no-dot-config-targets := clean clobber mrproper distclean \
 		  help %docs check% coccicheck \
-		  ubootversion backup tests check qcheck tcheck pylint \
-		  pylint_err
+		  ubootversion backup tests check pcheck qcheck tcheck \
+		  pylint pylint_err
 
 config-targets := 0
 mixed-targets := 0
@@ -2364,6 +2364,7 @@ help:
 	@echo  'Test targets:'
 	@echo  ''
 	@echo  '  check           - Run all automated tests that use sandbox'
+	@echo  '  pcheck          - Run quick automated tests in parallel'
 	@echo  '  qcheck          - Run quick automated tests that use sandbox'
 	@echo  '  tcheck          - Run quick automated tests on tools'
 	@echo  '  pylint          - Run pylint on all Python files'
@@ -2409,6 +2410,9 @@ help:
 tests check:
 	$(srctree)/test/run
 
+pcheck:
+	$(srctree)/test/run parallel
+
 qcheck:
 	$(srctree)/test/run quick


@@ -121,31 +121,36 @@ more options.
 Running tests in parallel
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
-Note: This does not fully work yet and is documented only so you can try to
-fix the problems.
+Note: Not all tests can run in parallel at present, so the usual approach is
+to just run those that can.
 
 First install support for parallel tests::
 
+   sudo apt install python3-pytest-xdist
+
+or::
+
    pip3 install pytest-xdist
 
-Then build sandbox in a suitable build directory. It is not possible to use
-the --build flag with xdist.
-
-Finally, run the tests in parallel using the -n flag::
-
-   # build sandbox first, in a suitable build directory. It is not possible
-   # to use the --build flag with -n
-   test/py/test.py -B sandbox --build-dir /tmp/b/sandbox -q -k 'not slow' -n32
-
-At least the following non-slow tests are known to fail:
-
-- test_fit_ecdsa
-- test_bind_unbind_with_uclass
-- ut_dm_spi_flash
-- test_gpt_rename_partition
-- test_gpt_swap_partitions
-- test_pinmux_status
-- test_sqfs_load
+Then run the tests in parallel using the -n flag::
+
+   test/py/test.py -B sandbox --build --build-dir /tmp/b/sandbox -q -k \
+       'not slow and not bootstd and not spi_flash' -n16
+
+You can also use `make pcheck` to run all tests in parallel. This uses a
+maximum of 16 threads, since the setup time is significant and there are
+under 1000 tests.
+
+Note that the `test-log.html` output does not work correctly at present with
+parallel testing. All the threads write to it at once, so it is garbled.
+
+Note that the `tools/` tests still run each tool's tests one after the
+other, although within that, they do run in parallel. So for example, the
+buildman tests run in parallel, then the binman tests run in parallel. There
+would be a significant advantage to running them all in parallel together,
+but that would require a large amount of refactoring, e.g. with more use of
+pytest fixtures.
+
+The code-coverage tests are omitted since they cannot run in parallel due to
+a Python limitation.
 
 Testing under a debugger
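
The 16-thread cap mentioned in the documentation comes from a ternary in the
test/run script, `$(($(nproc) > 16 ? 16 : $(nproc)))`. A minimal sketch of
that arithmetic (the `cap_jobs` helper name is invented for illustration):

```shell
#!/bin/sh
# Use every available CPU, but no more than 16: with under 1000 tests,
# per-thread setup time dominates beyond that point.
cap_jobs() {
	n="$1"	# stand-in for the output of $(nproc)
	echo "$(( n > 16 ? 16 : n ))"
}
cap_jobs 8	# prints 8
cap_jobs 32	# prints 16
```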


@@ -28,8 +28,12 @@ run. Type this::
 
    make tcheck
 
+You can also run a selection of tests in parallel with::
+
+   make pcheck
+
 All of the above use the test/run script with a parameter to select which tests
-are run.
+are run. See :doc:`py_testing` for more information.
 
 Sandbox


@@ -14,27 +14,46 @@ run_test() {
 }
 
 # Select test attributes
+ut_mark_expr=test_ut
 if [ "$1" = "quick" ]; then
 	mark_expr="not slow"
+	ut_mark_expr="test_ut and not slow"
 	skip=--skip-net-tests
 fi
 
 [ "$1" == "tools" ] && tools_only=y
 
+if [ "$1" = "parallel" ]; then
+	if ! echo 'import xdist' | python3 2>/dev/null; then
+		echo "Please install python3-pytest-xdist - see doc/develop/py_testing.rst"
+		exit 1
+	fi
+	jobs="$(($(nproc) > 16 ? 16 : $(nproc)))"
+	para="-n${jobs} -q"
+	prompt="Building and..."
+	skip=--skip-net-tests
+	mark_expr="not slow and not bootstd and not spi_flash"
+	ut_mark_expr="test_ut and not slow and not bootstd and not spi_flash"
+	echo "Note: test log is garbled with parallel tests"
+fi
+
 failures=0
 
 if [ -z "$tools_only" ]; then
 	# Run all tests that the standard sandbox build can support
-	run_test "sandbox" ./test/py/test.py --bd sandbox --build \
+	echo "${prompt}"
+	run_test "sandbox" ./test/py/test.py --bd sandbox --build ${para} \
 		-k "${mark_expr}"
 fi
 
 # Run tests which require sandbox_spl
-run_test "sandbox_spl" ./test/py/test.py --bd sandbox_spl --build \
+echo "${prompt}"
+run_test "sandbox_spl" ./test/py/test.py --bd sandbox_spl --build ${para} \
 	-k 'test_ofplatdata or test_handoff or test_spl'
 
 # Run the same tests with sandbox_noinst (i.e. without OF_PLATDATA_INST)
-run_test "sandbox_spl" ./test/py/test.py --bd sandbox_noinst --build \
+echo "${prompt}"
+run_test "sandbox_spl" ./test/py/test.py --bd sandbox_noinst --build ${para} \
 	-k 'test_ofplatdata or test_handoff or test_spl'
 
 if [ -z "$tools_only" ]; then
@@ -42,8 +61,9 @@ if [ -z "$tools_only" ]; then
 # build which does not enable CONFIG_OF_LIVE for the live device tree, so we can
 # check that functionality is the same. The standard sandbox build (above) uses
 # CONFIG_OF_LIVE.
+	echo "${prompt}"
 	run_test "sandbox_flattree" ./test/py/test.py --bd sandbox_flattree \
-		--build -k test_ut
+		${para} --build -k "${ut_mark_expr}"
 fi
 
 # Set up a path to dtc (device-tree compiler) and libfdt.py, a library it
@@ -64,10 +84,14 @@ run_test "dtoc" ./tools/dtoc/dtoc -t
 # This needs you to set up Python test coverage tools.
 # To enable Python test coverage on Debian-type distributions (e.g. Ubuntu):
 #   $ sudo apt-get install python-pytest python-coverage
-export PATH=$PATH:${TOOLS_DIR}
-run_test "binman code coverage" ./tools/binman/binman test -T
-run_test "dtoc code coverage" ./tools/dtoc/dtoc -T
-run_test "fdt code coverage" ./tools/dtoc/test_fdt -T
+
+# Code-coverage tests cannot run in parallel, so skip them in that case
+if [ -z "${para}" ]; then
+	export PATH=$PATH:${TOOLS_DIR}
+	run_test "binman code coverage" ./tools/binman/binman test -T
+	run_test "dtoc code coverage" ./tools/dtoc/dtoc -T
+	run_test "fdt code coverage" ./tools/dtoc/test_fdt -T
+fi
 
 if [ $failures == 0 ]; then
 	echo "Tests passed!"