ci: add test for Huggingface Accelerate #1257
base: main
Conversation
Force-pushed from b728584 to 6666291
I'd prefer to clean up the Transformers tests as a first step and then expand to the ecosystem libraries.
```yaml
- name: Prepare Stock XPU Pytorch
  run: |
    source activate $CONDA_ENV_NAME
    if [ -z "${{ inputs.nightly_whl }}" ]; then
```
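A hedged sketch of how this conditional step might expand. Names like `NIGHTLY_WHL` stand in for the workflow input `inputs.nightly_whl`, and the function prints the `pip` command as a dry run instead of executing it; this is an illustration of the branching logic, not the exact workflow step:

```shell
#!/usr/bin/env bash
# Sketch (hypothetical): choose between latest-nightly and pinned-nightly
# installs. Prints the pip command rather than running it, so the branch
# logic can be inspected without downloading wheels.
install_torch() {
  local version="$1"  # empty means "latest nightly"
  if [ -z "$version" ]; then
    # No version requested: pull the latest nightly wheels.
    echo pip install --pre torch torchvision torchaudio \
      --index-url https://download.pytorch.org/whl/nightly/xpu
  else
    # A specific nightly build was requested; note this pins only torch,
    # which is exactly the limitation discussed below.
    echo pip install --pre "torch==$version" \
      --index-url https://download.pytorch.org/whl/nightly/xpu
  fi
}

# Dry run: show the command the "latest nightly" branch would execute.
install_torch ""
```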
Do we plan for this test to target only the torch cpu nightly wheel? The latest nightly or a specific nightly version?
I suggest targeting the latest nightly, triggered daily on a schedule. @chuanqi129: are you OK if I drop the input parameter that sets the nightly version?
I dropped `inputs.nightly_whl` for now and just install the latest nightly. Note that this parameter would need a change in any case, since it controls only the `torch` version and does not allow setting versions for `torchvision` and `torchaudio`. If needed, we can add these options later on.
```yaml
  ZE_AFFINITY_MASK: 0
  PARSE_JUNIT: ${{ github.workspace }}/torch-xpu-ops/.github/scripts/parse-junitxml.py
steps:
  - name: Checkout torch-xpu-ops
```
It seems we don't need to check out torch-xpu-ops.
We need it. The workflow uses:
- `.github/scripts/parse-junitxml.py`
- `.github/actions/print-environment/action.yml`

from the torch-xpu-ops repo. However, I forgot to add the `.py` script to this workflow's dependencies, and I also forgot to drop `spec.py`, which is Transformers-specific. I will fix that.
> However, I forgot to add the .py script to the dependencies of this workflow and I also forgot to drop spec.py which is Transformers specific - I will fix that.
Done.
For Accelerate it's not actually an expansion to the ecosystem; it's closing a test gap for Transformers. Note: Accelerate is a prerequisite library for Transformers. If something breaks in Accelerate, there is a high chance that Transformers tests will also start to fall apart.
Force-pushed from f3a9804 to c959881
The script parses JUnit result XML files, which can be generated with the pytest `--junitxml=report.xml` option. Intended usage of the script:

* Print report tables in Markdown format. Use script arguments to control output:
  * `parse-junitxml.py --stats` to print the stats table
  * `parse-junitxml.py --failed` to print a table with failed cases
  * `parse-junitxml.py --skipped` to print a table with skipped cases
  * etc.
* Print report tables in JSON or JSON-lines format. Add `--json` to the script command line.

Signed-off-by: Dmitry Rogozhkin <[email protected]>
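To illustrate the idea, here is a minimal, hypothetical sketch of what such a parser's `--stats` path might look like. The real `parse-junitxml.py` in torch-xpu-ops supports more options and a different structure; the function name `stats_table` and the exact table columns here are assumptions for illustration only:

```python
# Hypothetical, simplified sketch of a JUnit XML -> Markdown stats table,
# in the spirit of parse-junitxml.py (not the actual implementation).
import sys
import xml.etree.ElementTree as ET


def stats_table(xml_text: str) -> str:
    """Return a Markdown table with per-suite test counts from JUnit XML."""
    root = ET.fromstring(xml_text)
    # pytest wraps suites in <testsuites>; also handle a bare <testsuite>.
    suites = [root] if root.tag == "testsuite" else root.findall("testsuite")
    lines = [
        "| suite | tests | failures | skipped |",
        "| --- | --- | --- | --- |",
    ]
    for suite in suites:
        lines.append("| {} | {} | {} | {} |".format(
            suite.get("name", "?"),
            suite.get("tests", "0"),
            suite.get("failures", "0"),
            suite.get("skipped", "0"),
        ))
    return "\n".join(lines)


if __name__ == "__main__" and len(sys.argv) > 1:
    # Usage sketch: python parse_stats.py report.xml
    with open(sys.argv[1]) as f:
        print(stats_table(f.read()))
```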
Changes: `--junitxml=` option