This repository has been archived by the owner on Aug 30, 2024. It is now read-only.

Commit f2e9171: disable MHA
yuchengliu1 committed Mar 13, 2024
1 parent 863859b commit f2e9171
Showing 1 changed file with 3 additions and 1 deletion.
neural_speed/core/layers/mha_dense.cpp
@@ -72,7 +72,9 @@ bool bestla_reordered_attn_fp32_support(const attn_shape_t* params) {
   // TODO(Yi): check K V's layout
   if (_cd->AMX_BF16()) return true;
 #endif
-  return !_cd->AVX512F() || _cd->AVX2();  // use avx2 and f16c on avx2 platforms
+  // use avx2 and f16c on avx2 platforms
+  // todo: check avx2 mha on server
+  return !_cd->AVX512F() && _cd->AVX2();
 }
 // kv cache sizes in bytes per layer per batch per beam for;
 void bestla_reordered_attn_fp32_batch_kv_info(const kv_shape_t* params, kv_cache_info_t* out) {

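For readers skimming the diff: the change flips the fallback check from || to &&, so the non-AMX path now reports MHA support only on CPUs that have AVX2 but not AVX512F. Below is a minimal standalone sketch of that predicate change; the CpuFeatures struct and its boolean fields are hypothetical stand-ins for the _cd->AVX512F() / _cd->AVX2() queries in the real code, not part of the neural_speed API.

// Minimal standalone sketch -- NOT the neural_speed API. CpuFeatures and its
// fields are hypothetical stand-ins for the _cd->AVX512F()/_cd->AVX2() checks.
#include <cstdio>

struct CpuFeatures {
  bool avx512f;
  bool avx2;
};

// Old predicate: true on any AVX2 machine (and on machines without AVX512F).
bool mha_supported_old(const CpuFeatures& cpu) { return !cpu.avx512f || cpu.avx2; }

// New predicate: true only on AVX2 machines that lack AVX512F, so AVX512F
// machines that do not take the AMX_BF16 early return report no MHA support.
bool mha_supported_new(const CpuFeatures& cpu) { return !cpu.avx512f && cpu.avx2; }

int main() {
  const CpuFeatures avx2_client{/*avx512f=*/false, /*avx2=*/true};
  const CpuFeatures avx512_server{/*avx512f=*/true, /*avx2=*/true};

  std::printf("AVX2-only client : old=%d new=%d\n",
              mha_supported_old(avx2_client), mha_supported_new(avx2_client));
  std::printf("AVX512F server   : old=%d new=%d\n",
              mha_supported_old(avx512_server), mha_supported_new(avx512_server));
  // Expected output:
  //   AVX2-only client : old=1 new=1
  //   AVX512F server   : old=1 new=0
  return 0;
}

On an AVX2-only client the result is unchanged, while an AVX512F server that does not hit the AMX_BF16 early return now reports no MHA support, which appears to be what the commit title "disable MHA" refers to.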