
Rethink composition of experiments and value precedence #169

Open
smarr opened this issue Nov 7, 2021 · 1 comment

smarr (Owner) commented Nov 7, 2021

At the moment, we compose experiments/runs bottom up, with the highest elements having priority. See https://rebench.readthedocs.io/en/latest/config/#priority-of-configuration-elements

However, in some cases this doesn't seem very useful.

For instance, if the number of iterations is defined as a per-benchmark setting, which can be useful for large, diverse benchmark suites, I can't override it from a higher-level element.

So, if I want to limit the number of iterations for a specific experiment, I need a separate suite definition, as I have been doing in many of my configurations.
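To illustrate the problem, here is a minimal ReBench-style config sketch (suite, benchmark, and experiment names are hypothetical, and only the keys relevant to precedence are shown) following the structure documented at https://rebench.readthedocs.io/en/latest/config/:

```yaml
benchmark_suites:
  ExampleSuite:
    gauge_adapter: RebenchLog
    command: "harness %(benchmark)s"
    benchmarks:
      - SmallBench:
          iterations: 10
      - LargeBench:
          iterations: 100   # per-benchmark setting, sensible for this suite

experiments:
  QuickCheck:
    suites:
      - ExampleSuite
    iterations: 1   # intended as an override for a quick run, but under
                    # bottom-up composition the per-benchmark values above
                    # take priority, so this setting has no effect
```

With the current precedence rules, the only way to get `QuickCheck` to run each benchmark once is to duplicate `ExampleSuite` without the per-benchmark `iterations` values.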

However, configurations might be more reusable if priority worked the other way around.

Or, I could mark a specific value as important, perhaps similar to CSS's `!important` exception: https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity#the_!important_exception
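A CSS-inspired marker might look like the following sketch. This is purely hypothetical syntax to convey the idea, not anything ReBench supports:

```yaml
experiments:
  QuickCheck:
    suites:
      - ExampleSuite
    iterations: 1 !important   # hypothetical marker: this value would win
                               # even over more specific per-benchmark settings
```

The appeal of an explicit marker is that the default precedence could stay as it is, while rare, deliberate overrides become possible without duplicating suite definitions.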

smarr (Owner, Author) commented Nov 8, 2021

A first version is implemented in #170, but it is only partial.
