E2E performance Rolling, PyTorch 2.5 #371

Manually triggered: November 18, 2024 05:16
Status: Failure
Total duration: 13h 13m 15s
Artifacts: 44

e2e-performance.yml

on: workflow_dispatch
Print inputs (1s)
Matrix: Run test matrix
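The matrix dimensions can be read off the artifact names below: three suites, five dtypes, and three execution modes. A minimal sketch of the cross product, assuming the naming scheme `logs-<suite>-<dtype>-<mode>-performance` (inferred from the artifact listing, not from e2e-performance.yml itself):

```python
from itertools import product

# Matrix axes inferred from the artifact names; the real workflow
# file may define the matrix differently.
suites = ["huggingface", "timm_models", "torchbench"]
dtypes = ["amp_bf16", "amp_fp16", "bfloat16", "float16", "float32"]
modes = ["inference-no-freezing", "inference", "training"]

# One artifact name per matrix combination.
jobs = [
    f"logs-{suite}-{dtype}-{mode}-performance"
    for suite, dtype, mode in product(suites, dtypes, modes)
]
print(len(jobs))  # 45 combinations; this run uploaded 44 artifacts
```

The 3 × 5 × 3 cross product yields 45 expected artifacts, one more than the 44 the run actually produced, consistent with one job failing to finish.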

Annotations

2 errors
Run test matrix (torchbench, inference, float32) / Test torchbench float32 inference performance
The job running on runner triton-1550-2 has exceeded the maximum execution time of 720 minutes.

Artifacts

Produced during runtime
Name  Size
logs-huggingface-amp_bf16-inference-no-freezing-performance  4.03 KB
logs-huggingface-amp_bf16-inference-performance  3.86 KB
logs-huggingface-amp_bf16-training-performance  4.08 KB
logs-huggingface-amp_fp16-inference-no-freezing-performance  4.03 KB
logs-huggingface-amp_fp16-inference-performance  3.87 KB
logs-huggingface-amp_fp16-training-performance  4.09 KB
logs-huggingface-bfloat16-inference-no-freezing-performance  4.02 KB
logs-huggingface-bfloat16-inference-performance  3.87 KB
logs-huggingface-bfloat16-training-performance  4.07 KB
logs-huggingface-float16-inference-no-freezing-performance  4.02 KB
logs-huggingface-float16-inference-performance  3.87 KB
logs-huggingface-float16-training-performance  4.07 KB
logs-huggingface-float32-inference-no-freezing-performance  4.01 KB
logs-huggingface-float32-inference-performance  3.87 KB
logs-huggingface-float32-training-performance  4.08 KB
logs-timm_models-amp_bf16-inference-no-freezing-performance  3.28 KB
logs-timm_models-amp_bf16-inference-performance  3.19 KB
logs-timm_models-amp_bf16-training-performance  3.3 KB
logs-timm_models-amp_fp16-inference-no-freezing-performance  3.29 KB
logs-timm_models-amp_fp16-inference-performance  3.2 KB
logs-timm_models-amp_fp16-training-performance  3.3 KB
logs-timm_models-bfloat16-inference-no-freezing-performance  3.29 KB
logs-timm_models-bfloat16-inference-performance  3.2 KB
logs-timm_models-bfloat16-training-performance  3.31 KB
logs-timm_models-float16-inference-no-freezing-performance  3.28 KB
logs-timm_models-float16-inference-performance  3.19 KB
logs-timm_models-float16-training-performance  3.29 KB
logs-timm_models-float32-inference-no-freezing-performance  3.29 KB
logs-timm_models-float32-inference-performance  3.2 KB
logs-timm_models-float32-training-performance  3.3 KB
logs-torchbench-amp_bf16-inference-no-freezing-performance  2.93 KB
logs-torchbench-amp_bf16-inference-performance  2.84 KB
logs-torchbench-amp_bf16-training-performance  2.99 KB
logs-torchbench-amp_fp16-inference-no-freezing-performance  2.94 KB
logs-torchbench-amp_fp16-inference-performance  2.85 KB
logs-torchbench-amp_fp16-training-performance  3.04 KB
logs-torchbench-bfloat16-inference-no-freezing-performance  2.9 KB
logs-torchbench-bfloat16-inference-performance  2.84 KB
logs-torchbench-bfloat16-training-performance  2.99 KB
logs-torchbench-float16-inference-no-freezing-performance  2.89 KB
logs-torchbench-float16-inference-performance  2.81 KB
logs-torchbench-float16-training-performance  3.02 KB
logs-torchbench-float32-inference-no-freezing-performance  2.91 KB
logs-torchbench-float32-training-performance  3.02 KB