At the moment, we compose experiments/runs bottom up, with the most specific elements taking priority. See https://rebench.readthedocs.io/en/latest/config/#priority-of-configuration-elements
In some cases, however, this doesn't seem very useful.
For instance, if the number of iterations is set per benchmark, which can be useful for large, diverse benchmark suites, I can't override it from an experiment.
So, to limit the number of iterations for a specific experiment, I need a separate suite definition, as I have been doing for many of my configurations.
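A minimal sketch of the situation, with hypothetical suite, benchmark, and experiment names (the keys follow the ReBench config schema as I understand it):

```yaml
benchmark_suites:
  DiverseSuite:               # hypothetical suite name
    gauge_adapter: RebenchLog
    command: "harness %(benchmark)s"
    benchmarks:
      - LargeBench:
          iterations: 500     # per-benchmark setting

experiments:
  QuickCheck:                 # hypothetical experiment name
    iterations: 3             # intended override, but under the current
                              # bottom-up priority the per-benchmark
                              # value of 500 wins
    suites:
      - DiverseSuite
```

Here the only way to make `QuickCheck` actually run 3 iterations of `LargeBench` is to duplicate the suite definition with a different `iterations` value.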
Things might be more reusable if priority worked the other way around.
Or, if I could mark a specific value as important, perhaps similar to https://developer.mozilla.org/en-US/docs/Web/CSS/Specificity#the_!important_exception
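Purely hypothetical syntax for such a marker, borrowing the CSS idea; nothing like this exists in ReBench today:

```yaml
experiments:
  QuickCheck:
    iterations: 3 !important  # hypothetical marker: would win over
                              # any per-benchmark iterations setting
    suites:
      - DiverseSuite
```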