Fix flash_attention kernels
Commit 10727f5 (parent 541969a), authored by xuzhao9 on Nov 9, 2024.

1 changed file: test/test_gpu/skip_tests_h100_pytorch.yaml (1 addition, 0 deletions)
@@ -8,6 +8,7 @@ flash_attention:
   - triton_tutorial_flash_v2_tma
   - triton_op_flash_v2
   - xformers_splitk
+  - colfax_cutlass
 fp8_attention:
   - colfax_fmha
 fp8_fused_quant_gemm_rowwise:
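For context, skip_tests_h100_pytorch.yaml maps each operator to a list of kernel implementations excluded from the H100 test run; after this commit, colfax_cutlass is skipped for flash_attention. Below is a minimal sketch of how such a skip file could be consumed. The load_skips and is_skipped helpers are hypothetical illustrations, not the repository's actual test harness.

    import yaml  # PyYAML

    # Hypothetical loader: the real test harness may differ.
    def load_skips(path):
        """Read a YAML mapping of operator -> list of implementations to skip."""
        with open(path) as f:
            data = yaml.safe_load(f) or {}
        return {op: set(impls or []) for op, impls in data.items()}

    def is_skipped(skips, op, impl):
        """Return True if the implementation is excluded for this operator."""
        return impl in skips.get(op, set())

    skips = load_skips("test/test_gpu/skip_tests_h100_pytorch.yaml")
    # After this commit, this prints True.
    print(is_skipped(skips, "flash_attention", "colfax_cutlass"))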
