E2E performance Rolling, PyTorch 2.5 #378
Workflow: e2e-performance.yml
Trigger: workflow_dispatch

Jobs:
- Setup (1s)
- Print inputs (1s)
- Run test matrix (matrix job)
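
The run was started manually via `workflow_dispatch` and fans out into a test matrix over benchmark suites, data types, and modes. A minimal sketch of that shape is below; the matrix values are inferred from the artifact names further down, and the runner labels and steps are placeholders, not the contents of the real e2e-performance.yml:

```yaml
# Hypothetical skeleton only; the real e2e-performance.yml defines its own
# inputs, runner labels, and benchmark steps.
name: E2E performance
on: workflow_dispatch

jobs:
  setup:
    runs-on: self-hosted
    steps:
      - run: echo "Print inputs, resolve the PyTorch 2.5 pin, etc."

  run-test-matrix:
    needs: setup
    strategy:
      fail-fast: false
      matrix:
        suite: [huggingface, timm_models, torchbench]
        dtype: [float32, float16, bfloat16, amp_fp16, amp_bf16]
        mode: [inference, training]
    runs-on: self-hosted
    steps:
      - run: echo "Run ${{ matrix.suite }} ${{ matrix.dtype }} ${{ matrix.mode }} benchmarks"
```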
Annotations (2 errors)

- Run test matrix (huggingface, training, bfloat16) / Test huggingface bfloat16 training performance:
  The job running on runner triton-1550-5 has exceeded the maximum execution time of 720 minutes.
- Run test matrix (huggingface, training, bfloat16) / Test huggingface bfloat16 training performance:
  The operation was canceled.
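
Both errors come from the same matrix entry: the huggingface bfloat16 training job hit the 720-minute limit and was then canceled. The default job timeout on GitHub Actions is 360 minutes, so a 720-minute limit is normally set explicitly with `timeout-minutes`. A hedged sketch of how that could look, assuming the limit is set at the job level (the actual workflow may configure it differently):

```yaml
# Assumed configuration; only the 720-minute figure and the runner name
# triton-1550-5 come from the error message above.
jobs:
  run-test-matrix:
    runs-on: [self-hosted, triton-1550-5]   # runner label taken from the annotation
    timeout-minutes: 720                    # job is canceled once this limit is exceeded
    steps:
      - run: ./run_e2e_benchmarks.sh        # hypothetical step
```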
Artifacts
Produced during runtime
| Name | Size |
|---|---|
| logs-huggingface-amp_bf16-inference-no-freezing-performance | 4.03 KB |
| logs-huggingface-amp_bf16-inference-performance | 3.87 KB |
| logs-huggingface-amp_bf16-training-performance | 4.1 KB |
| logs-huggingface-amp_fp16-inference-no-freezing-performance | 4.03 KB |
| logs-huggingface-amp_fp16-inference-performance | 3.86 KB |
| logs-huggingface-amp_fp16-training-performance | 4.09 KB |
| logs-huggingface-bfloat16-inference-no-freezing-performance | 4.03 KB |
| logs-huggingface-bfloat16-inference-performance | 3.87 KB |
| logs-huggingface-float16-inference-no-freezing-performance | 4 KB |
| logs-huggingface-float16-inference-performance | 3.86 KB |
| logs-huggingface-float16-training-performance | 4.06 KB |
| logs-huggingface-float32-inference-no-freezing-performance | 4.02 KB |
| logs-huggingface-float32-inference-performance | 3.88 KB |
| logs-huggingface-float32-training-performance | 4.09 KB |
| logs-timm_models-amp_bf16-inference-no-freezing-performance | 3.29 KB |
| logs-timm_models-amp_bf16-inference-performance | 3.2 KB |
| logs-timm_models-amp_bf16-training-performance | 3.3 KB |
| logs-timm_models-amp_fp16-inference-no-freezing-performance | 3.29 KB |
| logs-timm_models-amp_fp16-inference-performance | 3.19 KB |
| logs-timm_models-amp_fp16-training-performance | 3.3 KB |
| logs-timm_models-bfloat16-inference-no-freezing-performance | 3.29 KB |
| logs-timm_models-bfloat16-inference-performance | 3.2 KB |
| logs-timm_models-bfloat16-training-performance | 3.3 KB |
| logs-timm_models-float16-inference-no-freezing-performance | 3.28 KB |
| logs-timm_models-float16-inference-performance | 3.18 KB |
| logs-timm_models-float16-training-performance | 3.29 KB |
| logs-timm_models-float32-inference-no-freezing-performance | 3.29 KB |
| logs-timm_models-float32-inference-performance | 3.19 KB |
| logs-timm_models-float32-training-performance | 3.31 KB |
| logs-torchbench-amp_bf16-inference-no-freezing-performance | 2.91 KB |
| logs-torchbench-amp_bf16-inference-performance | 2.83 KB |
| logs-torchbench-amp_bf16-training-performance | 2.99 KB |
| logs-torchbench-amp_fp16-inference-no-freezing-performance | 2.91 KB |
| logs-torchbench-amp_fp16-inference-performance | 2.82 KB |
| logs-torchbench-amp_fp16-training-performance | 2.99 KB |
| logs-torchbench-bfloat16-inference-no-freezing-performance | 2.9 KB |
| logs-torchbench-bfloat16-inference-performance | 2.82 KB |
| logs-torchbench-bfloat16-training-performance | 3 KB |
| logs-torchbench-float16-inference-no-freezing-performance | 2.89 KB |
| logs-torchbench-float16-inference-performance | 2.8 KB |
| logs-torchbench-float16-training-performance | 2.99 KB |
| logs-torchbench-float32-inference-no-freezing-performance | 2.88 KB |
| logs-torchbench-float32-inference-performance | 2.79 KB |
| logs-torchbench-float32-training-performance | 2.99 KB |
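
Each artifact name follows the pattern `logs-<suite>-<dtype>-<mode>-performance`, one artifact per matrix combination (note that `logs-huggingface-bfloat16-training-performance` is absent, matching the timed-out job). A sketch of how such per-combination uploads are typically produced with `actions/upload-artifact`; the step name, matrix keys, and log path are assumptions rather than the real workflow contents:

```yaml
      # Illustrative upload step; the real log path and mode naming (e.g. the
      # "no-freezing" variants) in e2e-performance.yml may differ.
      - name: Upload performance logs
        if: always()   # keep logs even if the benchmarks fail or time out
        uses: actions/upload-artifact@v4
        with:
          name: logs-${{ matrix.suite }}-${{ matrix.dtype }}-${{ matrix.mode }}-performance
          path: logs/
```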