add _transformer_encoder_layer_fwd
yucai-intel committed Nov 11, 2024
1 parent e250a26 commit 199b4ce
Showing 2 changed files with 8 additions and 1 deletion.
test/xpu/skip_list_common.py: 1 addition, 1 deletion
@@ -810,7 +810,7 @@
# https://github.com/intel/torch-xpu-ops/issues/761
# AssertionError: False is not true
# CPU fallback failure. To support aten::transformer_encoder_layer_forward with proper priority.
"test_disable_fastpath_xpu",
# "test_disable_fastpath_xpu",
# We have no mechanism to handle SDPBackend::ERROR so far. Will give a fully support when we support all SDPBackends.
"test_dispatch_fails_no_backend_xpu",
# Could not run 'aten::_to_copy' with arguments from the 'NestedTensorXPU' backend
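Commenting out this entry re-enables test_disable_fastpath_xpu, which exercises the TransformerEncoderLayer fastpath that the new operator supports on XPU. A minimal sketch of the kind of call that fastpath covers (not the actual test body; an XPU-enabled build with an "xpu" device is assumed):

import torch
import torch.nn as nn

# Fastpath preconditions: batch_first=True, eval mode, and no autograd.
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True).to("xpu").eval()
src = torch.randn(2, 16, 64, device="xpu")

with torch.no_grad():
    # With default arguments this takes the fused fastpath, which now
    # dispatches aten::_transformer_encoder_layer_fwd on XPU instead of
    # falling back to CPU.
    out = layer(src)

print(out.shape)  # torch.Size([2, 16, 64])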
yaml/native/native_functions.yaml: 7 additions, 0 deletions
@@ -5585,6 +5585,13 @@
    XPU: _dirichlet_grad_xpu
  autogen: _dirichlet_grad.out

+# Apparently, putting "forward" in the name will cause Python bindings to be skipped, so "fwd" it is.
+- func: _transformer_encoder_layer_fwd(Tensor src, int embed_dim, int num_heads, Tensor qkv_weight, Tensor qkv_bias, Tensor proj_weight, Tensor proj_bias, bool use_gelu, bool norm_first, float eps, Tensor norm_weight_1, Tensor norm_bias_1, Tensor norm_weight_2, Tensor norm_bias_2, Tensor ffn_weight_1, Tensor ffn_bias_1, Tensor ffn_weight_2, Tensor ffn_bias_2, Tensor? mask=None, int? mask_type=None) -> Tensor
+  variants: function
+  dispatch:
+    XPU: transformer_encoder_layer_forward
+  autogen: _transformer_encoder_layer_fwd.out

# Fused implementation detail for transformers. Adds in-projection bias to QKV and divides Q by sqrt(D/num_heads).
- func: _transform_bias_rescale_qkv(Tensor qkv, Tensor qkv_bias, int num_heads) -> (Tensor, Tensor, Tensor)
  dispatch:
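The schema added above fixes the argument order, so the operator can be invoked directly once registered. A hedged sketch under assumed shapes (embed dim E, feed-forward width F; an "xpu" device is assumed to exist):

import torch

E, H, F = 64, 4, 256
dev = "xpu"
src = torch.randn(2, 16, E, device=dev)

# Arguments follow the yaml schema order exactly.
out = torch.ops.aten._transformer_encoder_layer_fwd(
    src, E, H,
    torch.randn(3 * E, E, device=dev),  # qkv_weight
    torch.randn(3 * E, device=dev),     # qkv_bias
    torch.randn(E, E, device=dev),      # proj_weight
    torch.randn(E, device=dev),         # proj_bias
    True,                               # use_gelu
    False,                              # norm_first
    1e-5,                               # eps
    torch.ones(E, device=dev), torch.zeros(E, device=dev),      # norm_weight_1, norm_bias_1
    torch.ones(E, device=dev), torch.zeros(E, device=dev),      # norm_weight_2, norm_bias_2
    torch.randn(F, E, device=dev), torch.randn(F, device=dev),  # ffn_weight_1, ffn_bias_1
    torch.randn(E, F, device=dev), torch.randn(E, device=dev),  # ffn_weight_2, ffn_bias_2
    None,                               # mask
    None,                               # mask_type
)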
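For context, the comment on the neighboring _transform_bias_rescale_qkv entry above describes its math: add the in-projection bias to the packed QKV and divide Q by sqrt(D/num_heads). A hedged pure-PyTorch reference of just that arithmetic (the fused op also reshapes per head, which is omitted here; the (B, T, 3*D) layout is an assumption for illustration):

import math
import torch

def transform_bias_rescale_qkv_ref(qkv, qkv_bias, num_heads):
    # qkv: (B, T, 3*D) packed projections; qkv_bias: (3*D,)
    D = qkv.shape[-1] // 3
    q, k, v = (qkv + qkv_bias).chunk(3, dim=-1)
    q = q / math.sqrt(D / num_heads)  # divide Q by sqrt(head_dim)
    return q, k, v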
