Run benchmark tests "logic only" on Travis CI #451
The benchmark tests are run only once, without reports.

Besides using the command line option `--benchmark-disable` to run the tests only once, a custom marker is used to select only one parameter value or tuple (a pytest `param` object) for test functions that are parametrized. The marker is named `skip_bench` for now 😄. Here's an example of how this works:
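(The snippet below is only an illustrative sketch; the function and parameter names are placeholders, not the actual benchmark tests in this PR.) The idea is that exactly one `pytest.param` in each parametrization carries the `skip_bench` mark:

```python
import pytest


def _compute(t, n):
    # Placeholder for the real operation being benchmarked.
    return sum(range(t * n))


# Exactly one parameter tuple carries the custom ``skip_bench`` marker, so
# ``pytest -m skip_bench`` exercises the test logic for just that tuple.
@pytest.mark.parametrize(
    "t, n",
    [
        pytest.param(1, 4, marks=pytest.mark.skip_bench),
        (3, 10),
        (7, 22),
    ],
)
def test_compute_benchmark(benchmark, t, n):
    result = benchmark(_compute, t, n)
    assert result == sum(range(t * n))
```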
The test above can then be run in three different ways:

1. `pytest -m skip_bench ...`: only tests that are marked with `skip_bench` will be run;
2. `pytest -m "not skip_bench" ...`: tests that are marked with `skip_bench` will be skipped; and
3. `pytest ...` (no marker is used): all tests will be run.
When the goal is to benchmark everything, there's no need to know about this marker, and option 3 above will do. The main purpose is really for cases such as Travis CI, in which one wishes to quickly check whether the tests run for a "subsample" of the parameters. Option 2 is not really useful, and is just a consequence of how the marker mechanism works.

If someone has a better idea for the name of the marker, please say, as I don't particularly like the name!
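For reference, a custom marker has to be registered so pytest doesn't warn about an unknown mark; a sketch of doing that in `conftest.py` (the project may register it in its pytest configuration file instead):

```python
# conftest.py: register the custom ``skip_bench`` marker so pytest does not
# emit an unknown-mark warning when it is used.
def pytest_configure(config):
    config.addinivalue_line(
        "markers",
        "skip_bench: selects a single parameter set for a quick logic-only run",
    )
```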
See `benchmark/README.md` for more explanations.

Tiny issues were also fixed:
- `use_fft` was replaced with `use_omega_powers` in `test_benchmark_reed_solomon.py`;
- the `setup=` keyword argument of `benchmark.pedantic()` is used instead of `benchmark()` in `tests/fixtures.py` (a sketch follows this list).
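As a rough illustration of that last change (the test and target names here are placeholders, not the actual code in `tests/fixtures.py`), `benchmark.pedantic()` can take a `setup` callable that builds fresh arguments before each round:

```python
def _setup():
    # Build fresh inputs before each round; pedantic() unpacks the returned
    # (args, kwargs) pair and passes them to the benchmarked callable.
    data = list(range(1000))
    return (data,), {}


def test_sum_pedantic(benchmark):
    # With --benchmark-disable this still calls ``sum`` once, so the test
    # logic is exercised even when no timings are collected.
    result = benchmark.pedantic(sum, setup=_setup, rounds=1, iterations=1)
    assert result == sum(range(1000))
```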