Add flash_attention_benchmark and gemm_benchmark #259
Workflow: `pr.yaml` (trigger: `pull_request`)

- h100-pytorch-test / linux-test-h100: 5m 57s
- h100-triton-main-test / linux-test-h100: 6m 2s
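Both checks come from one workflow file. A minimal sketch of what a `pr.yaml` with this shape might look like, assuming GitHub Actions with a reusable workflow (the "job / linux-test-h100" naming pattern usually indicates a reusable-workflow call); the file path after `uses:` is hypothetical, and only the job names and the `pull_request` trigger come from the check listing above:

```yaml
# Hypothetical sketch of pr.yaml; only the job names and the
# pull_request trigger are taken from the check listing above.
name: pr
on: pull_request

jobs:
  # Each job delegates to a shared H100 test workflow (assumed path),
  # which is why the checks render as "<job> / linux-test-h100".
  h100-pytorch-test:
    uses: ./.github/workflows/linux-test-h100.yaml
  h100-triton-main-test:
    uses: ./.github/workflows/linux-test-h100.yaml
```

With this layout, adding another benchmark configuration is a matter of declaring one more job that reuses the same `linux-test-h100` workflow.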