
ViT model conversion error #5744

Open
sungerk opened this issue Oct 17, 2024 · 5 comments

@sungerk

sungerk commented Oct 17, 2024

A .pth model trained with deit_tiny_patch16_224 was exported to ONNX; converting that ONNX model with pnnx and then to ncnn both fail with errors.
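For reference, a minimal sketch of the ONNX export step, assuming the checkpoint loads into timm's deit_tiny_patch16_224 ("model.pth" and the class count are hypothetical placeholders, not taken from the report above):

import torch
import timm

# Load the trained DeiT-tiny weights (hypothetical checkpoint path) and export to ONNX.
model = timm.create_model("deit_tiny_patch16_224", pretrained=False, num_classes=1000)
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.eval()

dummy = torch.rand(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["in0"], output_names=["out0"],
                  opset_version=13)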

pnnx error:

./pnnx model.onnx inputshape=[1,3,224,224]
pnnxparam = model.pnnx.param
pnnxbin = model.pnnx.bin
pnnxpy = model_pnnx.py
pnnxonnx = model.pnnx.onnx
ncnnparam = model.ncnn.param
ncnnbin = model.ncnn.bin
ncnnpy = model_ncnn.py
fp16 = 1
optlevel = 2
device = cpu
inputshape = [1,3,224,224]f32
inputshape2 =
customop =
moduleop =
############# pass_level0 onnx
inline_containers ... 0.00ms
eliminate_noop ... 0.29ms
fold_constants ... 0.12ms
canonicalize ... 0.57ms
shape_inference ... 77.85ms
fold_constants_dynamic_shape ... 0.11ms
inline_if_graph ... 0.01ms
fuse_constant_as_attribute ... 0.18ms
eliminate_noop_with_shape ... 0.14ms
┌──────────────────┬──────────┬──────────┐
│ │ orig │ opt │
├──────────────────┼──────────┼──────────┤
│ node │ 580 │ 580 │
│ initializer │ 164 │ 159 │
│ functions │ 0 │ 0 │
├──────────────────┼──────────┼──────────┤
│ nn module op │ 0 │ 0 │
│ custom module op │ 0 │ 0 │
│ aten op │ 0 │ 0 │
│ prims op │ 0 │ 0 │
│ onnx native op │ 580 │ 580 │
├──────────────────┼──────────┼──────────┤
│ Add │ 135 │ 135 │
│ Concat │ 1 │ 1 │
│ Conv │ 1 │ 1 │
│ Div │ 37 │ 37 │
│ Erf │ 12 │ 12 │
│ Gather │ 1 │ 1 │
│ Gemm │ 1 │ 1 │
│ MatMul │ 72 │ 72 │
│ Mul │ 73 │ 73 │
│ Pow │ 25 │ 25 │
│ ReduceMean │ 50 │ 50 │
│ Reshape │ 25 │ 25 │
│ Softmax │ 12 │ 12 │
│ Split │ 12 │ 12 │
│ Sqrt │ 25 │ 25 │
│ Squeeze │ 36 │ 36 │
│ Sub │ 25 │ 25 │
│ Transpose │ 37 │ 37 │
└──────────────────┴──────────┴──────────┘
############# pass_level1 onnx
############# pass_level2
############# pass_level3
open failed
############# pass_level4
############# pass_level5
############# pass_ncnn
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_158 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_158 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_159 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_159 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_160 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_160 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_161 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_161 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_162 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_162 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_163 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_163 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_164 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_164 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_165 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_165 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_166 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_166 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_167 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_167 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_168 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_168 param is_causal=False
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_169 param dropout_p=0.000000e+00
ignore F.scaled_dot_product_attention F.scaled_dot_product_attention_169 param is_causal=False

Running the converted model with the test script also fails:
import numpy as np
import ncnn
import torch

def test_inference():
    # run one forward pass through the converted ncnn model with a fixed random input
    torch.manual_seed(0)
    in0 = torch.rand(1, 3, 224, 224, dtype=torch.float)
    out = []

    with ncnn.Net() as net:
        net.load_param("model.ncnn.param")
        net.load_model("model.ncnn.bin")

        with net.create_extractor() as ex:
            ex.input("in0", ncnn.Mat(in0.squeeze(0).numpy()).clone())

            _, out0 = ex.extract("out0")
            out.append(torch.from_numpy(np.array(out0)).unsqueeze(0))

    if len(out) == 1:
        return out[0]
    else:
        return tuple(out)

if __name__ == "__main__":
    print(test_inference())

layer F.scaled_dot_product_attention not exists or registered

@wzyforgit
Contributor

You could also try exporting the .pth to TorchScript first, and then converting directly to ncnn with pnnx.
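A minimal sketch of that route, assuming the same timm model and a hypothetical "model.pth" checkpoint:

import torch
import timm

# Trace the model to TorchScript, then feed the .pt file to pnnx directly.
model = timm.create_model("deit_tiny_patch16_224", pretrained=False)
model.load_state_dict(torch.load("model.pth", map_location="cpu"))
model.eval()

traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))
traced.save("model.pt")

# then, from the shell:
#   ./pnnx model.pt inputshape=[1,3,224,224]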

@sungerk
Author

sungerk commented Oct 18, 2024

You could also try exporting the .pth to TorchScript first, and then converting directly to ncnn with pnnx.

I tried that; it fails with the same error.

@MarginGitHub

The scaled_dot_product_attention operator just isn't supported.

@sungerk
Author

sungerk commented Oct 26, 2024

@MarginGitHub I know; when will it be supported?

@wzyforgit
Contributor

@MarginGitHub I know; when will it be supported?

Since you've already pinpointed the cause, there are two options: either implement the operator yourself in C++ and register it with ncnn, or find an equivalent sequence of smaller operators that PyTorch and ncnn already support and decompose it into those... Of course, you can also just wait for nihui to add support.
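A minimal sketch of the second option, decomposing the op into matmul/softmax before export (this assumes dropout_p=0 and is_causal=False, matching the pnnx log above; the monkey-patching step is an assumption for illustration, not an official API):

import math
import torch
import torch.nn.functional as F

def sdpa_decomposed(query, key, value, attn_mask=None,
                    dropout_p=0.0, is_causal=False, scale=None):
    # Same math as scaled_dot_product_attention, written with ops ncnn supports.
    scale = scale if scale is not None else 1.0 / math.sqrt(query.size(-1))
    attn = torch.matmul(query, key.transpose(-2, -1)) * scale
    if attn_mask is not None:
        attn = attn + attn_mask
    attn = torch.softmax(attn, dim=-1)
    return torch.matmul(attn, value)

# Hypothetical usage: patch before tracing/exporting so the exported graph
# contains MatMul/Softmax instead of F.scaled_dot_product_attention.
F.scaled_dot_product_attention = sdpa_decomposed

Some timm versions also expose a switch to disable the fused attention path, which would achieve the same result without patching.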
