Simple LHA benchmark workflow #227
Conversation
I think something like 1dc2fde can work
Thanks @felixhekhorn for taking care of the pytest markers.
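(A minimal sketch of what such a marker setup could look like; the marker name `lha` and the test name are hypothetical, not taken from this thread.)

```python
import pytest

# Hypothetical sketch: tag the slow LHA benchmark cases with a custom marker
# (the name "lha" is an assumption) so they can be selected with
# `pytest -m lha` or skipped with `pytest -m "not lha"`.
lha_benchmark = pytest.mark.lha

@lha_benchmark
def test_vfns_example():
    # Placeholder body; the real benchmark would compare against LHA tables.
    assert True
```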
Shouldn't be complicated, right? So why not ... this way we can keep track in the PR.
It is rather simple, but what was the counter-argument for using ...?
Should we drop the ...?
If we're using this by default, consider adding both the env var and the marker string to the ...
The thing would be more visible in the PR - else it would be hidden somewhere in the "crosses and checkmarks", no?
Is the label thingy showing up somewhere else than in the "cross and checkmarks"? I wouldn't see where...
nono, the workflow will of course run everything (so no markers there - this was just an example for people ;-) ) - instead, about the Numba hack I'm not sure... should we keep it outside to let the user decide, or hard-code it inside?
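(For context, a possible shape of such a hard-coded hack; that the "Numba hack" means disabling JIT compilation via `NUMBA_DISABLE_JIT` is an assumption on my part.)

```python
import os

# Assumption: the "Numba hack" disables JIT compilation in CI through
# Numba's NUMBA_DISABLE_JIT environment variable; it must be set before
# numba is imported to take effect.
os.environ["NUMBA_DISABLE_JIT"] = "1"
```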
Uhm... maybe we should set the env var by default to what we are using in the workflow.
it is shown in the PR discussion, e.g. here NNPDF/nnpdf#1604
Just tried, it's working.
as we're doing for the other commands :)
I'm fine with either the label or ...
Do we actually really need to run the full LHA benchmark in the CI? It actually takes forever... I believe the most useful thing is to run something, instead of nothing. But I'm not sure we need so much to run everything.
yes - I believe we want everything here! This is to check at the end of a PR that we are not breaking anything, so it is a one-time thing (as said, I expect >30 min). Developers might not want to do this, but they may want to run only a specific case about which they are worried. Let me remind you that I did run something before #215 was detected, but simply not the case of VFNS+SV, so we had better check everything ... Just as an intermediate result:
also I wonder whether we run into problems with the output length on GitHub - we may want to suppress the eko logger
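(A one-line sketch of what suppressing it could look like; that the logger is registered under the package name `"eko"` is an assumption.)

```python
import logging

# Sketch: raise the threshold of the package logger to cut CI log volume.
# The logger name "eko" is assumed to match the package's logger hierarchy.
logging.getLogger("eko").setLevel(logging.ERROR)
```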
I agree, let's run everything.
That might be helpful to facilitate reading the log as well.
FYI: in the end it took 1h 6m 46s (much more than I anticipated), using 2 cores; here, it seems, is the breakdown of timings:
[timing breakdown table]
what if we run everything but only at NNLO (and NLO for polarised)? In the end, if something does not show up there, it would be strange for it to be broken only at lower orders...
so ...
It isn't. Cut down useless things. You don't need all of them to find out whether it's working or not. And we'd like to know in less than 1h. The problem is not triggering a workflow many times, but having a workflow that lasts 1h :)
This PR introduces a simple LHA benchmark workflow to check that benchmarks are working before merging on master.