Manual/semi-automatic performance regression checking #356
Conversation
Codecov Report: all modified and coverable lines are covered by tests ✅

Coverage diff (develop vs. #356):

              develop     #356      +/-
  Coverage    95.29%    96.09%    +0.80%
  Files           28        35        +7
  Lines         1720      2766     +1046
  Hits          1639      2658     +1019
  Misses          81       108       +27

View full report in Codecov by Sentry.
The scripts are currently lacking any error checking etc., but before I do more, here are some questions @davidorme:
I think it would be better to do this all in Python (maybe using GitPython?), but that's mostly to avoid issues for currently hypothetical Windows-based developers. So, for the moment, I think this is fine.
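For reference, a minimal sketch of what the GitPython route might look like: checking the reference branch out into a throwaway worktree so it can be benchmarked alongside the current working copy. The paths, the `develop` ref, and the idea of a separate worktree are illustrative assumptions, not part of this PR.

```python
# Hedged sketch only: moving the shell steps into Python with GitPython.
# Paths and refs are illustrative assumptions.
from pathlib import Path

from git import Repo  # pip install GitPython


def checkout_reference(ref: str = "develop",
                       worktree: Path = Path("/tmp/pyrealm-ref")) -> Path:
    """Check out `ref` into a throwaway worktree, leaving local changes untouched."""
    repo = Repo(".", search_parent_directories=True)
    if not worktree.exists():
        # Forwarded to `git worktree add <path> <ref>` by GitPython's command wrapper.
        repo.git.worktree("add", str(worktree), ref)
    return worktree


def remove_reference(worktree: Path = Path("/tmp/pyrealm-ref")) -> None:
    """Clean up the temporary worktree once benchmarking is done."""
    repo = Repo(".", search_parent_directories=True)
    repo.git.worktree("remove", str(worktree), "--force")
```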
I don't think it should - IIUC it's just that the
Yeah - I think that increasing the load is probably the way to go, at least initially. The other thing is to dive deeper into the results to find which functions have changed, but that's probably a new PR.
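As a rough illustration of that "dive deeper" step, the sketch below compares per-function cumulative times between two cProfile dumps and reports the ones that moved. The file names and the 10% reporting threshold are assumptions for illustration, not anything implemented in this branch.

```python
# Hedged sketch: flag functions whose cumulative time changed between two
# cProfile dumps (e.g. old.prof vs. new.prof). Threshold is illustrative.
import pstats


def compare_profiles(old_path: str, new_path: str, threshold: float = 0.10) -> None:
    old = pstats.Stats(old_path)
    new = pstats.Stats(new_path)
    # .stats maps (file, line, func) -> (call count, n calls, total time,
    # cumulative time, callers); index 3 is cumulative time.
    old_cum = {func: row[3] for func, row in old.stats.items()}
    for func, row in new.stats.items():
        baseline = old_cum.get(func)
        if baseline and baseline > 0:
            change = (row[3] - baseline) / baseline
            if abs(change) > threshold:
                filename, lineno, name = func
                print(f"{name} ({filename}:{lineno}): {change:+.1%} cumulative time")


if __name__ == "__main__":
    compare_profiles("old.prof", "new.prof")
```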
Do we want this in the CI (falling over if the codes get 5% slower), or as a manual thing?
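If the CI route is chosen, the gate could be as simple as the sketch below: compare the new total runtime against a stored baseline and exit non-zero when the slowdown exceeds 5%. The JSON layout and file names are assumptions for illustration, not the format used by the scripts in this PR.

```python
# Hedged sketch of a CI regression gate: fail if the current run is more
# than 5% slower than a stored baseline. File names/format are assumptions.
import json
import sys


def check_regression(baseline_file: str, current_file: str,
                     tolerance: float = 0.05) -> int:
    with open(baseline_file) as fh:
        baseline = json.load(fh)["total_seconds"]
    with open(current_file) as fh:
        current = json.load(fh)["total_seconds"]
    slowdown = (current - baseline) / baseline
    print(f"Baseline {baseline:.2f}s, current {current:.2f}s ({slowdown:+.1%})")
    # Non-zero exit code makes the CI job fail on a regression.
    return 1 if slowdown > tolerance else 0


if __name__ == "__main__":
    sys.exit(check_regression("baseline_timing.json", "current_timing.json"))
```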
local profiling and benchmarking.

See the [profiling and benchmarking page](https://pyrealm.readthedocs.io/en/latest/development/profiling_and_benchmarking.md)
That link does not work. Looking at https://github.com/ImperialCollegeLondon/pyrealm/blob/develop/docs/source/development/profiling_and_benchmarking.md, a lot of that information is also no longer valid. We will need to adapt that when we have decided how to proceed (i.e., whether to keep the old benchmarking code, if and when to run the new code automatically, etc.).
profiling/simple_benchmarking.py
@davidorme would you prefer me to use this to replace the old run_benchmarking.py, or leave the old script in and keep this as simple_benchmarking.py? That decision will also influence how the above-mentioned documentation is extended or adapted/rewritten.
Co-authored-by: James Emberton <[email protected]>
Something is up with the profiling in the CI. The profiling YAML is re-running the whole of the … Also, the purpose of this job is to check that the profiling script works? We only need it to run, so we can drop all of the graphviz stuff from the job and make it faster and cleaner. I've pushed an update to set up what I think works? We'll see if it does!
I think that's cleaner? It's not really doing a different workflow; it's adding a new CI test to the standard test and build suite, so it seems reasonable to just add it there. With that change, I think we can delete:
LGTM - this works for me and I think it is a much saner approach. We need to work on the docs, and possibly on moving things into Python and adding functionality, but this gets us back on track.
Co-authored-by: David Orme <[email protected]>
I have created #358 as a follow-up regarding the old code and documentation. I am merging this in now as an incremental improvement.
Description
This branch includes code to run profiling on PyRealm, and check whether performance has degraded between
Fixes #256
Type of change
Key checklist
- pre-commit checks: $ pre-commit run -a
- Tests: $ poetry run pytest
Further checks