Add ragged attention to skip list of pytorch
xuzhao9 committed Dec 4, 2024
1 parent 6221771 commit 8c2e6c1
Showing 1 changed file with 6 additions and 0 deletions.
6 changes: 6 additions & 0 deletions test/test_gpu/skip_tests_h100_pytorch.yaml
@@ -44,5 +44,11 @@ jagged_mean:
jagged_softmax:
jagged_sum:
ragged_attention:
# ../../../lib/Tools/LinearLayout.cpp:565: LinearLayout
# mlir::triton::LinearLayout::reshapeOuts(ArrayRef<std::pair<StringAttr, int32_t>>) const:
# Assertion `getTotalOutDimSize() == std::accumulate( newOutDims.begin(), newOutDims.end(),
# 1, [&](int32_t acc, auto &outDim) { return acc * outDim.second; })' failed.
- hstu_triton_ragged_attention
# persistent kernel is not ready for OSS
- hstu_triton_ragged_attention_persistent
test_op:
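For context, a skip list like `skip_tests_h100_pytorch.yaml` maps operator names to lists of skipped variants (or to nothing, to skip the whole operator). The sketch below shows one plausible way a harness could consult such a list once it has been parsed into a dict; the function name and data layout are illustrative assumptions, not TritonBench's actual API.

```python
# Hypothetical sketch: consulting a parsed skip list like
# skip_tests_h100_pytorch.yaml. An operator mapped to None skips
# entirely; mapped to a list, only the listed variants are skipped.

skip_tests = {
    "jagged_softmax": None,          # skip the whole operator
    "ragged_attention": [            # skip only these variants
        "hstu_triton_ragged_attention",
        "hstu_triton_ragged_attention_persistent",
    ],
}

def is_skipped(op: str, variant: str) -> bool:
    """Return True if the given operator/variant pair is on the skip list."""
    if op not in skip_tests:
        return False
    variants = skip_tests[op]
    return variants is None or variant in variants

print(is_skipped("ragged_attention", "hstu_triton_ragged_attention"))  # True
print(is_skipped("ragged_attention", "some_other_impl"))               # False
```

With this commit, both `hstu_triton_ragged_attention` variants would be filtered out of the H100 PyTorch test run until the LinearLayout assertion failure is resolved upstream.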
