Fix fp8_attention
xuzhao9 committed Nov 19, 2024
1 parent 850ec97 commit a67aa88
Showing 1 changed file with 3 additions and 0 deletions.
test/test_gpu/skip_tests_h100_pytorch.yaml: 3 additions & 0 deletions
@@ -17,6 +17,9 @@ flash_attention:
 # - flex_attention
 fp8_attention:
 - colfax_fmha
+# triton_flash_v2 now requires the main branch of Triton
+# pytorch version does not work
+- triton_flash_v2
 fp8_fused_quant_gemm_rowwise:
 fp8_gemm:
 - triton_persistent_fp8_gemm
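For context, the sketch below shows one way a test harness could consume a skip-list YAML like this one. The loader, the default path, and the convention that an operator with no listed impls (e.g. fp8_fused_quant_gemm_rowwise:) means "skip every impl" are assumptions for illustration, not TritonBench's actual code.

# Hypothetical sketch of consuming a skip-list YAML; not TritonBench's real loader.
import yaml  # PyYAML


def load_skip_list(path="test/test_gpu/skip_tests_h100_pytorch.yaml"):
    """Parse the YAML into {operator: set of skipped impl names}."""
    with open(path) as f:
        raw = yaml.safe_load(f) or {}
    # Assumption: an operator key with an empty value means "skip all impls".
    return {op: set(impls) if impls else set() for op, impls in raw.items()}


def should_skip(skips, operator, impl):
    """True if the impl is listed, or the operator is listed with no impls."""
    if operator not in skips:
        return False
    impls = skips[operator]
    return not impls or impl in impls


if __name__ == "__main__":
    skips = load_skip_list()
    # After this commit, triton_flash_v2 should be skipped for fp8_attention.
    print(should_skip(skips, "fp8_attention", "triton_flash_v2"))  # expect True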
