
Run benchmark tests "logic only" on Travis CI #451

Open · wants to merge 1 commit into dev from benchmark/tests/run-on-ci
Conversation

@sbellem (Collaborator) commented May 20, 2020

The benchmark tests are run only once without reports.

In addition to the command-line option --benchmark-disable, which makes the tests run only once, a custom marker is used to select a single parameter value or tuple (a pytest param object) for test functions that are parameterized. The marker is named skip_bench for now 😄. Here's an example of how this works:

from pytest import mark, param

@mark.parametrize(
    "t, k",
    [
        # Only this parameter set is selected when running with -m skip_bench.
        param(1, 5, marks=mark.skip_bench),
        (3, 5),
        (5, 5),
        # ...
    ],
)
def test_benchmark_hbavss_lite_dealer(test_router, benchmark, t, k):
    # ...
The above test can then be run in three different ways (a sketch of the marker registration follows the list):

  1. pytest -m skip_bench ... - only tests marked with skip_bench are run;
  2. pytest -m "not skip_bench" ... - tests marked with skip_bench are skipped; and
  3. pytest ... - (no marker expression) all tests are run.
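
For the -m selection above to work without pytest warning about an unknown mark, the skip_bench marker has to be registered. A minimal sketch, assuming the registration lives in a conftest.py (the actual PR may register it in setup.cfg or pytest.ini instead):

# conftest.py (sketch)
def pytest_configure(config):
    # Register the custom marker so pytest recognizes -m skip_bench
    # and does not emit an unknown-mark warning.
    config.addinivalue_line(
        "markers",
        "skip_bench: selects a single parameter set for quick 'logic only' runs",
    )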

When the goal is to benchmark everything, there's no need to know about this marker; option 3 above will do. The main purpose is for cases such as Travis CI, where one wishes to quickly check whether the tests run for a "subsample" of the parameters. Option 2 is not really useful; it is just a consequence of how the marker mechanism works.
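
For illustration, the quick "logic only" run on CI then combines the marker selection with --benchmark-disable; roughly something like the following (the exact test path and any extra options used in the actual Travis configuration are assumptions, not taken from this PR):

pytest -v -m skip_bench --benchmark-disable <path-to-benchmark-tests>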

If someone has a better idea for the name of the marker, please say so, as I don't particularly like it!

See the benchmark/README.md for more explanations.

Tiny issues were also fixed:

@sbellem added the benchmarking and maintenance labels and removed the benchmarking label on May 20, 2020
@sbellem force-pushed the benchmark/tests/run-on-ci branch from 01d66d8 to 2821aa0 on May 20, 2020 07:05 (commit: "The benchmark tests are run only once without reports.")
codecov bot commented May 20, 2020

Codecov Report

Merging #451 into dev will increase coverage by 0.07129%.
The diff coverage is n/a.

@@                 Coverage Diff                 @@
##                 dev        #451         +/-   ##
===================================================
+ Coverage   77.02727%   77.09856%   +0.07129%     
===================================================
  Files             50          50                 
  Lines           5611        5611                 
  Branches         859         859                 
===================================================
+ Hits            4322        4326          +4     
+ Misses          1113        1111          -2     
+ Partials         176         174          -2     

@sbellem sbellem requested a review from amiller May 20, 2020 17:36