ONNX Upsample model compile error #166

Open
SCP-tatsunami opened this issue Nov 12, 2020 · 0 comments


SCP-tatsunami commented Nov 12, 2020

Hi!

I have a compilation problem with SageMaker Neo. May I report it here?
When I try to compile an ONNX model whose exported graph looks like the one below, the conversion fails with an error.

graph(%input : Float(1:519168, 3:173056, 416:416, 416:1),
%basenet.slice1.0.weight : Float(64:27, 3:9, 3:3, 3:1),
%basenet.slice1.0.bias : Float(64:1),
%basenet.slice1.1.weight : Float(64:1),
%basenet.slice1.1.bias : Float(64:1),
%basenet.slice1.1.running_mean : Float(64:1),
%basenet.slice1.1.running_var : Float(64:1),
%basenet.slice1.3.weight : Float(64:576, 64:9, 3:3, 3:1),
%basenet.slice1.3.bias : Float(64:1),
%basenet.slice1.4.weight : Float(64:1),
%basenet.slice1.4.bias : Float(64:1),
%basenet.slice1.4.running_mean : Float(64:1),
%basenet.slice1.4.running_var : Float(64:1),
%basenet.slice1.7.weight : Float(128:576, 64:9, 3:3, 3:1),
%basenet.slice1.7.bias : Float(128:1),
%basenet.slice1.8.weight : Float(128:1),
%basenet.slice1.8.bias : Float(128:1),
%basenet.slice1.8.running_mean : Float(128:1),
%basenet.slice1.8.running_var : Float(128:1),
%basenet.slice1.10.weight : Float(128:1152, 128:9, 3:3, 3:1),
%basenet.slice1.10.bias : Float(128:1),
%basenet.slice1.11.weight : Float(128:1),
%basenet.slice1.11.bias : Float(128:1),
%basenet.slice1.11.running_mean : Float(128:1),
%basenet.slice1.11.running_var : Float(128:1),
%basenet.slice2.14.weight : Float(256:1152, 128:9, 3:3, 3:1),
%basenet.slice2.14.bias : Float(256:1),
%basenet.slice2.15.weight : Float(256:1),
%basenet.slice2.15.bias : Float(256:1),
%basenet.slice2.15.running_mean : Float(256:1),
%basenet.slice2.15.running_var : Float(256:1),
%basenet.slice2.17.weight : Float(256:2304, 256:9, 3:3, 3:1),
%basenet.slice2.17.bias : Float(256:1),
%basenet.slice2.18.weight : Float(256:1),
%basenet.slice2.18.bias : Float(256:1),
%basenet.slice2.18.running_mean : Float(256:1),
%basenet.slice2.18.running_var : Float(256:1),
%basenet.slice3.20.weight : Float(256:2304, 256:9, 3:3, 3:1),
%basenet.slice3.20.bias : Float(256:1),
%basenet.slice3.21.weight : Float(256:1),
%basenet.slice3.21.bias : Float(256:1),
%basenet.slice3.21.running_mean : Float(256:1),
%basenet.slice3.21.running_var : Float(256:1),
%basenet.slice3.24.weight : Float(512:2304, 256:9, 3:3, 3:1),
%basenet.slice3.24.bias : Float(512:1),
%basenet.slice3.25.weight : Float(512:1),
%basenet.slice3.25.bias : Float(512:1),
%basenet.slice3.25.running_mean : Float(512:1),
%basenet.slice3.25.running_var : Float(512:1),
%basenet.slice3.27.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice3.27.bias : Float(512:1),
%basenet.slice3.28.weight : Float(512:1),
%basenet.slice3.28.bias : Float(512:1),
%basenet.slice3.28.running_mean : Float(512:1),
%basenet.slice3.28.running_var : Float(512:1),
%basenet.slice4.30.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice4.30.bias : Float(512:1),
%basenet.slice4.31.weight : Float(512:1),
%basenet.slice4.31.bias : Float(512:1),
%basenet.slice4.31.running_mean : Float(512:1),
%basenet.slice4.31.running_var : Float(512:1),
%basenet.slice4.34.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice4.34.bias : Float(512:1),
%basenet.slice4.35.weight : Float(512:1),
%basenet.slice4.35.bias : Float(512:1),
%basenet.slice4.35.running_mean : Float(512:1),
%basenet.slice4.35.running_var : Float(512:1),
%basenet.slice4.37.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice4.37.bias : Float(512:1),
%basenet.slice4.38.weight : Float(512:1),
%basenet.slice4.38.bias : Float(512:1),
%basenet.slice4.38.running_mean : Float(512:1),
%basenet.slice4.38.running_var : Float(512:1),
%basenet.slice5.1.weight : Float(1024:4608, 512:9, 3:3, 3:1),
%basenet.slice5.1.bias : Float(1024:1),
%basenet.slice5.2.weight : Float(1024:1024, 1024:1, 1:1, 1:1),
%basenet.slice5.2.bias : Float(1024:1),
%upconv1.conv.0.weight : Float(512:1536, 1536:1, 1:1, 1:1),
%upconv1.conv.0.bias : Float(512:1),
%upconv1.conv.1.weight : Float(512:1),
%upconv1.conv.1.bias : Float(512:1),
%upconv1.conv.1.running_mean : Float(512:1),
%upconv1.conv.1.running_var : Float(512:1),
%upconv1.conv.3.weight : Float(256:4608, 512:9, 3:3, 3:1),
%upconv1.conv.3.bias : Float(256:1),
%upconv1.conv.4.weight : Float(256:1),
%upconv1.conv.4.bias : Float(256:1),
%upconv1.conv.4.running_mean : Float(256:1),
%upconv1.conv.4.running_var : Float(256:1)):
%103 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input, %basenet.slice1.0.weight, %basenet.slice1.0.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%104 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%103, %basenet.slice1.1.weight, %basenet.slice1.1.bias, %basenet.slice1.1.running_mean, %basenet.slice1.1.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%105 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Relu(%104) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%106 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%105, %basenet.slice1.3.weight, %basenet.slice1.3.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%107 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%106, %basenet.slice1.4.weight, %basenet.slice1.4.bias, %basenet.slice1.4.running_mean, %basenet.slice1.4.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%108 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Relu(%107) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%109 : Float(1:2768896, 64:43264, 208:208, 208:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%108) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%110 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%109, %basenet.slice1.7.weight, %basenet.slice1.7.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%111 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%110, %basenet.slice1.8.weight, %basenet.slice1.8.bias, %basenet.slice1.8.running_mean, %basenet.slice1.8.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%112 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Relu(%111) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%113 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%112, %basenet.slice1.10.weight, %basenet.slice1.10.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%114 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%113, %basenet.slice1.11.weight, %basenet.slice1.11.bias, %basenet.slice1.11.running_mean, %basenet.slice1.11.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%115 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Relu(%114) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%116 : Float(1:1384448, 128:10816, 104:104, 104:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%115) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%117 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%116, %basenet.slice2.14.weight, %basenet.slice2.14.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%118 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%117, %basenet.slice2.15.weight, %basenet.slice2.15.bias, %basenet.slice2.15.running_mean, %basenet.slice2.15.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%119 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Relu(%118) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%120 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%119, %basenet.slice2.17.weight, %basenet.slice2.17.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%121 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%120, %basenet.slice2.18.weight, %basenet.slice2.18.bias, %basenet.slice2.18.running_mean, %basenet.slice2.18.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%122 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Relu(%121) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%123 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%122, %basenet.slice3.20.weight, %basenet.slice3.20.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%124 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%123, %basenet.slice3.21.weight, %basenet.slice3.21.bias, %basenet.slice3.21.running_mean, %basenet.slice3.21.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%125 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Relu(%124) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%126 : Float(1:692224, 256:2704, 52:52, 52:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%125) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%127 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%126, %basenet.slice3.24.weight, %basenet.slice3.24.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%128 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%127, %basenet.slice3.25.weight, %basenet.slice3.25.bias, %basenet.slice3.25.running_mean, %basenet.slice3.25.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%129 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Relu(%128) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%130 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%129, %basenet.slice3.27.weight, %basenet.slice3.27.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%131 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%130, %basenet.slice3.28.weight, %basenet.slice3.28.bias, %basenet.slice3.28.running_mean, %basenet.slice3.28.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%132 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Relu(%131) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%133 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%132, %basenet.slice4.30.weight, %basenet.slice4.30.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%134 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%133, %basenet.slice4.31.weight, %basenet.slice4.31.bias, %basenet.slice4.31.running_mean, %basenet.slice4.31.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%135 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Relu(%134) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%136 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]](%135) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%137 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%136, %basenet.slice4.34.weight, %basenet.slice4.34.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%138 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%137, %basenet.slice4.35.weight, %basenet.slice4.35.bias, %basenet.slice4.35.running_mean, %basenet.slice4.35.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%139 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Relu(%138) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%140 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%139, %basenet.slice4.37.weight, %basenet.slice4.37.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%141 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%140, %basenet.slice4.38.weight, %basenet.slice4.38.bias, %basenet.slice4.38.running_mean, %basenet.slice4.38.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%142 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::MaxPool[kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%141) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%143 : Float(1:692224, 1024:676, 26:26, 26:1) = onnx::Conv[dilations=[6, 6], group=1, kernel_shape=[3, 3], pads=[6, 6, 6, 6], strides=[1, 1]](%142, %basenet.slice5.1.weight, %basenet.slice5.1.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%144 : Float(1:692224, 1024:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%143, %basenet.slice5.2.weight, %basenet.slice5.2.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%145 : Float(1:1038336, 1536:676, 26:26, 26:1) = onnx::Concat[axis=1](%144, %141) # /home/ec2-user/SageMaker/CRAFT-pytorch/craft.py:63:0
%146 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%145, %upconv1.conv.0.weight, %upconv1.conv.0.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%147 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%146, %upconv1.conv.1.weight, %upconv1.conv.1.bias, %upconv1.conv.1.running_mean, %upconv1.conv.1.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%148 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Relu(%147) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%149 : Float(1:173056, 256:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%148, %upconv1.conv.3.weight, %upconv1.conv.3.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%150 : Float(1:173056, 256:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%149, %upconv1.conv.4.weight, %upconv1.conv.4.bias, %upconv1.conv.4.running_mean, %upconv1.conv.4.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%151 : Float(1:173056, 256:676, 26:26, 26:1) = onnx::Relu(%150) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%152 : Tensor = onnx::Shape(%132)
%153 : Tensor = onnx::Constant[value={2}]()
%154 : Long() = onnx::Gather[axis=0](%152, %153) # /home/ec2-user/SageMaker/CRAFT-pytorch/craft.py:66:0
%155 : Tensor = onnx::Shape(%132)
%156 : Tensor = onnx::Constant[value={3}]()
%157 : Long() = onnx::Gather[axis=0](%155, %156) # /home/ec2-user/SageMaker/CRAFT-pytorch/craft.py:66:0
%158 : Tensor = onnx::Unsqueeze[axes=[0]](%154)
%159 : Tensor = onnx::Unsqueeze[axes=[0]](%157)
%160 : Tensor = onnx::Concat[axis=0](%158, %159)
%161 : Tensor = onnx::Constant[value= 1  1 [ CPUFloatType{2} ]]()
%162 : Tensor = onnx::Cast[to=1](%160)
%163 : Tensor = onnx::Shape(%151)
%164 : Tensor = onnx::Slice[axes=[0], ends=[9223372036854775807], starts=[2]](%163)
%165 : Tensor = onnx::Cast[to=1](%164)
%166 : Tensor = onnx::Div(%162, %165)
%167 : Tensor = onnx::Concat[axis=0](%161, %166)
%168 : Float(1:692224, 256:2704, 52:52, 52:1) = onnx::Upsample[mode="linear"](%151, %167) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:3163:0
return (%168)
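
For reference, I believe the failing conversion step can be reproduced locally with TVM's ONNX frontend along these lines (just a sketch; it assumes tvm and onnx are installed, and the file name is a placeholder for my exported model):

import onnx
import tvm
from tvm import relay

# Load the exported model (file name is a placeholder).
onnx_model = onnx.load("craft.onnx")

# The trace above has a fixed 1x3x416x416 input, so pass it explicitly.
shape_dict = {"input": (1, 3, 416, 416)}

# This should hit the same ONNX-to-Relay conversion that the error message
# below refers to ("TVM cannot convert ONNX model").
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)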

I'm getting the following error from the AWS compilation job:

ClientError: InputConfiguration: TVM cannot convert ONNX model. Please make sure the framework you selected is correct. <class 'tvm.tir.expr.Any'> has no attribute value
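
For what it's worth, the scales input of the Upsample (%167) is produced at runtime by the Shape/Gather/Div subgraph above rather than stored as a constant, which is presumably what ends up as a symbolic tvm.tir.expr.Any during conversion. A quick way to check this on the exported file (sketch only; the file name is a placeholder):

import onnx

model = onnx.load("craft.onnx")  # placeholder path
const_names = {init.name for init in model.graph.initializer}
const_names |= {n.output[0] for n in model.graph.node if n.op_type == "Constant"}

for node in model.graph.node:
    if node.op_type in ("Upsample", "Resize"):
        # Inputs after the first are scales/sizes; False means they are
        # computed at runtime instead of being baked-in constants.
        print(node.op_type, [(name, name in const_names) for name in node.input[1:]])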

When I tried this on Oct 9, 2020, the model compiled without any problem even with Upsample, but as of Nov 12, 2020 the error above occurs.
The same thing happens with Resize.
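
A workaround I'm considering (just a sketch, not verified against Neo): the dynamic sizes come from the F.interpolate(..., size=...) call at craft.py:66, and since the two feature maps differ by exactly 2x here (26x26 vs 52x52 in the trace), exporting with a constant scale_factor instead should produce an Upsample/Resize node with fixed scales:

import torch
import torch.nn.functional as F

# Dummy tensor standing in for the 1x256x26x26 feature map (%151) in the trace.
y = torch.randn(1, 256, 26, 26)

# Original pattern (roughly): the target size is read from another tensor at
# runtime, which exports as the Shape -> Gather -> Div chain feeding Upsample.
#   y = F.interpolate(y, size=feature.size()[2:], mode='bilinear', align_corners=False)

# Sketch of the workaround: a constant scale_factor keeps the scales static.
y_up = F.interpolate(y, scale_factor=2, mode='bilinear', align_corners=False)
print(y_up.shape)  # torch.Size([1, 256, 52, 52])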
