[Feature] benchmark / smoke-test scripts: planning/structure #9
@sfmig would the suggested structure allow you to easily benchmark the sub-steps (
Thanks for putting this together @alessandrofelder! Some comments below.
Yes, I think that sounds like a good structure! We can have a suite of benchmarks that time each of these steps and also the full workflow. Just to clarify,
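A minimal sketch of what "time each of these steps and also the full workflow" could look like, using plain `time.perf_counter` and placeholder step functions (in practice a tool like asv or pytest-benchmark would manage the runs; all function names here are hypothetical stand-ins, not the real workflow steps):

```python
import time

# Placeholder sub-steps standing in for the real workflow stages;
# a real suite would call the actual API functions instead.
def step_filter(data):
    return [x * 0.5 for x in data]

def step_threshold(data):
    return [x for x in data if x > 1.0]

def full_workflow(data):
    return step_threshold(step_filter(data))

def benchmark(fn, *args, repeats=3):
    """Return the best wall-clock time (seconds) over `repeats` runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

if __name__ == "__main__":
    data = list(range(10))
    for fn in (step_filter, step_threshold, full_workflow):
        print(f"{fn.__name__}: {benchmark(fn, data):.6f}s")
```

The same pattern covers both granularities: one benchmark entry per sub-step, plus one for the end-to-end workflow.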
I'd suggest maybe just calling this folder
I think a
Hopefully, yes - but I don't know. They will at least be conceptual groupings of several API functions and maybe also some generic functions like some image filters from
Yea this naming suggestion was tied to the
I think so too. Let's do that!
Actually, maybe, if they are not yet direct API functions, they should become API functions?
yes, ideally the process would be:
Discussion of the general workflow structure with @alessandrofelder
Why an environment variable to point to the config file?
A small thought experiment: predicting roughly what the GH Actions YAML would look like depending on whether we make the config file an optional CLI argument or an environment variable. Having an optional CLI argument makes the job slightly shorter and more explicit, I think. This is one of our main use cases, so I am classing this as a pro for the optional CLI argument.
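For comparison, the two options can also coexist: an optional CLI argument that falls back to an environment variable, so both the GH Actions job and local runs stay simple. A hedged sketch (the `--config` flag and `BENCHMARK_CONFIG` variable names are made up for illustration, not what was agreed in this thread):

```python
import argparse
import os

def resolve_config_path(argv=None):
    """Resolve the workflow config path.

    Precedence: an optional --config CLI argument wins, then a
    (hypothetical) BENCHMARK_CONFIG environment variable, then None,
    meaning 'use the built-in default config'.
    """
    parser = argparse.ArgumentParser(description="run benchmark workflow")
    parser.add_argument(
        "--config",
        default=None,
        help="path to a workflow config file (optional)",
    )
    args = parser.parse_args(argv)
    return args.config or os.environ.get("BENCHMARK_CONFIG")
```

With this, the GH Actions step can pass `--config` explicitly, while a scheduled job could instead export the variable once for several steps.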
Pros of an optional CLI argument vs. an env var:
Cons of an optional CLI argument vs. an env var:
other:
Are these that relevant here? This is code that 99% of users won't ever see. Also worth bearing in mind that the large data workflows won't run on
Is your feature request related to a problem? Please describe.
We'd like to have Python scripts that execute some key workflows discussed in the developer meeting on 14/09/2023.
These are:
- bg-atlasapi
- Create example script for access atlas data #13

We can farm out the writing of each of these scripts to a separate issue, once we've agreed on what requirements we have for them.
Describe the solution you'd like
I propose the requirements are (based on the dev meeting discussion):
The main purpose of these scripts is benchmarking and smoke-testing (on large data).
It would be nice if they would be re-usable in teaching materials/docs/tutorials.
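The re-use requirement could be met by keeping the workflow logic in plain importable functions, so the same file runs as a smoke-test script and can also be `%load`-ed into a tutorial notebook. A hypothetical sketch (filename and function names are placeholders):

```python
# example_workflow.py (hypothetical): the workflow body lives in an
# importable function, so a notebook can `%load` or import it, while
# the __main__ guard keeps it runnable as a smoke-test script.

def run_workflow(n=5):
    """Placeholder workflow: square the first n integers."""
    data = list(range(n))
    return [x * x for x in data]

def main():
    result = run_workflow()
    print(f"workflow produced {len(result)} values")

if __name__ == "__main__":
    main()
```

Teaching materials would then call `run_workflow()` directly, while benchmarks and CI invoke the script end to end.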
Naively suggested structure:
- a benchmark-workflows folder in this repo
- %load file.py can be used to have example_workflow in a Jupyter notebook

Describe alternatives you've considered
A brainglobe-benchmarks or brainglobe-scripts repo instead? (As much as I'd like to reduce the number of repos!) - we can still do this at a later point, I guess (with a bit more effort...)