E2E performance Rolling, PyTorch 2.5 #378
Manually triggered: November 25, 2024 05:16
Status: Failure
Total duration: 12h 28m 48s
Artifacts: 44

Workflow file: e2e-performance.yml
Trigger: workflow_dispatch
Print inputs (1s)
Matrix: Run test matrix
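This run was started manually (a workflow_dispatch trigger) and fans out into a test matrix. A minimal sketch of what such a trigger and matrix could look like in e2e-performance.yml, where the suite, mode, and dtype values are assumptions inferred from the artifact names below (the actual file may structure this differently):

```yaml
# Hypothetical sketch, not the real e2e-performance.yml.
on:
  workflow_dispatch:

jobs:
  print-inputs:
    runs-on: ubuntu-latest
    steps:
      # Echo the manually supplied inputs for traceability.
      - run: echo '${{ toJSON(github.event.inputs) }}'

  run-test-matrix:
    strategy:
      fail-fast: false   # let other matrix entries finish if one fails
      matrix:
        suite: [huggingface, timm_models, torchbench]
        mode: [inference, training]
        dtype: [amp_bf16, amp_fp16, bfloat16, float16, float32]
    runs-on: self-hosted
    steps:
      # run_e2e.sh is a placeholder for whatever benchmark driver the repo uses.
      - run: ./run_e2e.sh ${{ matrix.suite }} ${{ matrix.mode }} ${{ matrix.dtype }}
```

With fail-fast disabled, the remaining matrix entries keep producing log artifacts even when one entry (here, huggingface bfloat16 training) fails.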

Annotations

2 errors
Run test matrix (huggingface, training, bfloat16) / Test huggingface bfloat16 training performance:
The job running on runner triton-1550-5 has exceeded the maximum execution time of 720 minutes.
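The failure above is GitHub Actions cancelling a job that hit its per-job execution limit. That limit is set per job with timeout-minutes (the default is 360 minutes, so a 720-minute cap like the one reported here is an explicit override); a sketch of how it would be configured, with illustrative job and step names:

```yaml
jobs:
  test-performance:
    runs-on: self-hosted
    timeout-minutes: 720   # job is cancelled after 12 hours, as seen in this run
    steps:
      # run_benchmarks.sh is a placeholder for the actual benchmark step.
      - run: ./run_benchmarks.sh
```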

Artifacts

Produced during runtime
Name (Size)
logs-huggingface-amp_bf16-inference-no-freezing-performance (4.03 KB)
logs-huggingface-amp_bf16-inference-performance (3.87 KB)
logs-huggingface-amp_bf16-training-performance (4.1 KB)
logs-huggingface-amp_fp16-inference-no-freezing-performance (4.03 KB)
logs-huggingface-amp_fp16-inference-performance (3.86 KB)
logs-huggingface-amp_fp16-training-performance (4.09 KB)
logs-huggingface-bfloat16-inference-no-freezing-performance (4.03 KB)
logs-huggingface-bfloat16-inference-performance (3.87 KB)
logs-huggingface-float16-inference-no-freezing-performance (4 KB)
logs-huggingface-float16-inference-performance (3.86 KB)
logs-huggingface-float16-training-performance (4.06 KB)
logs-huggingface-float32-inference-no-freezing-performance (4.02 KB)
logs-huggingface-float32-inference-performance (3.88 KB)
logs-huggingface-float32-training-performance (4.09 KB)
logs-timm_models-amp_bf16-inference-no-freezing-performance (3.29 KB)
logs-timm_models-amp_bf16-inference-performance (3.2 KB)
logs-timm_models-amp_bf16-training-performance (3.3 KB)
logs-timm_models-amp_fp16-inference-no-freezing-performance (3.29 KB)
logs-timm_models-amp_fp16-inference-performance (3.19 KB)
logs-timm_models-amp_fp16-training-performance (3.3 KB)
logs-timm_models-bfloat16-inference-no-freezing-performance (3.29 KB)
logs-timm_models-bfloat16-inference-performance (3.2 KB)
logs-timm_models-bfloat16-training-performance (3.3 KB)
logs-timm_models-float16-inference-no-freezing-performance (3.28 KB)
logs-timm_models-float16-inference-performance (3.18 KB)
logs-timm_models-float16-training-performance (3.29 KB)
logs-timm_models-float32-inference-no-freezing-performance (3.29 KB)
logs-timm_models-float32-inference-performance (3.19 KB)
logs-timm_models-float32-training-performance (3.31 KB)
logs-torchbench-amp_bf16-inference-no-freezing-performance (2.91 KB)
logs-torchbench-amp_bf16-inference-performance (2.83 KB)
logs-torchbench-amp_bf16-training-performance (2.99 KB)
logs-torchbench-amp_fp16-inference-no-freezing-performance (2.91 KB)
logs-torchbench-amp_fp16-inference-performance (2.82 KB)
logs-torchbench-amp_fp16-training-performance (2.99 KB)
logs-torchbench-bfloat16-inference-no-freezing-performance (2.9 KB)
logs-torchbench-bfloat16-inference-performance (2.82 KB)
logs-torchbench-bfloat16-training-performance (3 KB)
logs-torchbench-float16-inference-no-freezing-performance (2.89 KB)
logs-torchbench-float16-inference-performance (2.8 KB)
logs-torchbench-float16-training-performance (2.99 KB)
logs-torchbench-float32-inference-no-freezing-performance (2.88 KB)
logs-torchbench-float32-inference-performance (2.79 KB)
logs-torchbench-float32-training-performance (2.99 KB)