MatMul op is not supported #61
Put in a PR adding the op 😉

---
I'm 100% a noob here. I've added something like:

```julia
function load_node!(tape::Tape, ::OpConfig{:ONNX, :MatMul}, args::VarVec, attrs::AttrDict)
    return push_call!(tape, mul, args...)
end
```

The `mul` function (

---
I'm not exactly sure what would be equivalent, but if it can match the examples in https://numpy.org/doc/stable/reference/generated/numpy.matmul.html then you're probably on the right track.

---
So, according to those examples, Julia does the same thing:

```julia
julia> a = [1 0; 0 1]
2×2 Matrix{Int64}:
 1  0
 0  1

julia> b = [4 1; 2 2]
2×2 Matrix{Int64}:
 4  1
 2  2

julia> a * b
2×2 Matrix{Int64}:
 4  1
 2  2
```

This means the implementation should be:

```julia
function load_node!(tape::Tape, ::OpConfig{:ONNX, :MatMul}, args::VarVec, attrs::AttrDict)
    return push_call!(tape, *, args...)
end
```

Right? I don't know if this makes sense.

Also, I'm trying to load the following .onnx file. It first asks me:

```julia
julia> ONNX.load("model_C.onnx")
ERROR: AssertionError: Neither initializer, nor argument is provided for input dense_1_input
```

I understand that I have to input some dummy data? I try to, but get a bunch of dimension mismatch errors. I understand that another option is to run

---
That's just one example. Vector-vector and >2D arrays don't work in

I don't understand the API terribly well myself, but have a look at how this is done in the tests.

---
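The limitation mentioned above is easy to see in the REPL: Julia's `*` covers the matrix-matrix and matrix-vector cases, but has no method for two vectors or for 3-D (batched) arrays, both of which NumPy's `matmul` handles. A Base-only sketch:

```julia
A = [1 2; 3 4]
v = [1, 2]

A * A            # matrix-matrix: works, gives [7 10; 15 22]
A * v            # matrix-vector: works, gives [5, 11]

# Vector-vector: NumPy's matmul returns the dot product,
# but Julia's `*` has no method for two vectors:
err = try
    v * v
catch e
    e
end
println(err isa MethodError)   # true

# The same MethodError occurs for 3-D (batched) arrays,
# where NumPy would broadcast over the leading batch dimension.
```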
That makes sense. I'm not entirely sure how to implement this. Oh well.

---
I didn't mean to discourage you! Perhaps as a start we could just consider the 2D case by importing

---
I think the correct thing to do would be to define

In terms of loading the operation,

```julia
function load_node!(tape::Tape, ::OpConfig{:ONNX, :MatMul}, args::VarVec, attrs::AttrDict)
    return push_call!(tape, matmul, args...)
end
```

should work. For running your model, you need to pass in some dummy data. This can be done by initializing an input that's the same size as your input data:

```julia
# dummy input that is 224 x 224 x 3 x 1 (the last dimension is 1 batch)
x = rand(Float32, 224, 224, 3, 1)
model = ONNX.load("model_C.onnx", x)
```

---
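A NumPy-style `matmul` along the lines suggested above could be sketched in plain Julia. This is a hypothetical illustration, not ONNX.jl's actual implementation; in particular, putting the batch along the trailing dimension (Julia's column-major convention, rather than ONNX's leading batch axis) is an assumption here:

```julia
# Hypothetical NumPy-style matmul sketch (not ONNX.jl's actual code).
matmul(a::AbstractVector, b::AbstractVector) = sum(a .* b)   # vec-vec: dot product
matmul(a::AbstractMatrix, b::AbstractVecOrMat) = a * b       # 2-D cases: plain *

# Batched case: multiply matching slices; the batch is assumed to be
# the trailing dimension (column-major convention).
function matmul(a::AbstractArray{T,3}, b::AbstractArray{T,3}) where {T}
    size(a, 3) == size(b, 3) || error("batch dimensions must match")
    out = similar(a, size(a, 1), size(b, 2), size(a, 3))
    for k in axes(a, 3)
        out[:, :, k] = a[:, :, k] * b[:, :, k]
    end
    return out
end
```

NNlib's `batched_mul` covers the batched branch with an optimized implementation, so a real version would likely dispatch to it instead of the loop.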
My model is supposed to work on a 2D case, so I've implemented the

```julia
# dummy input that is 224 x 224 x 3 x 1 (the last dimension is 1 batch)
x = rand(Float32, 224, 224, 3, 1)
model = ONNX.load("model_C.onnx", x)
```

Did you actually download the model and test these lines, or are these just an example? Because I get:

```julia
julia> model = ONNX.load("model_C.onnx", x)
ERROR: MethodError: no method matching *(::Array{Float32, 4}, ::Matrix{Float32})
```

It somehow has to do with the

---
You'd need to fork the repo first, create a branch, and then file a PR with that. Happy to help walk through the steps if you're interested. I think Kyle's snippet was just an example. Judging from the MethodError, model_C actually needs the higher-dim case and not the matrix-matrix multiplication that

---
I'm currently working on forking the thing. I understand; those dims were not in accordance with what I expected, so I also think it was an example. I've changed the

---
Yes, these are just an example. Do you know what size input your model expects? These lines will need to be changed to match what your model expects. Or, if this is indeed the correct dimensionality for your model, then you appear to need the higher-dim matmul like Brian mentioned.

---
Okay, I downloaded your model and took a look. It seems like your input should be "400 x N" where N is the batch size? In that case, the dummy input should be:

```julia
x = rand(Float32, 400, 1)
```

---
Yes, the dimensions should be 400 x 1. In the meantime I have a PR with these changes, but I don't think it makes sense to merge it, since it's not currently working. Ok, so:

```julia
julia> model = ONNX.load("model_C.onnx", x)
ERROR: DimensionMismatch("arrays could not be broadcast to a common size; got a dimension with lengths 400 and 32")
```

If the

```julia
julia> model = ONNX.load("model_C.onnx", x)
ERROR: DimensionMismatch("dot product arguments have lengths 400 and 12800")
```

Does this give you any useful information?

---
Try this:

```julia
ENV["JULIA_DEBUG"] = Main # this will help to debug the loading

function load_node!(tape::Tape, ::OpConfig{:ONNX, :MatMul}, args::VarVec, attrs::AttrDict)
    return push_call!(tape, *, args[2], args[1])
end

function load_node!(tape::Tape, ::OpConfig{:ONNX, :Sigmoid}, args::VarVec, attrs::AttrDict)
    return push_call!(tape, Broadcast.broadcast, NNlib.sigmoid, args...)
end

load(filename, rand(Float32, 400, 1))
```

Your matrices are indeed two-dimensional, but the order of arguments in Python/C and Julia is different (e.g. see the implementation for Gemm, which is just `*` on steroids).

---
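One way to see why the arguments get swapped: ONNX stores data row-major, so each matrix read directly into column-major Julia appears as its transpose, and since `(A*B)' == B'*A'`, multiplying the transposed operands in swapped order reproduces the (transposed) ONNX result. A Base-only illustration of that identity:

```julia
A = [1 2 3; 4 5 6]                # 2×3
B = collect(reshape(1:12, 3, 4))  # 3×4
C = A * B                         # 2×4 "row-major" product

# If each operand is stored transposed (as row-major ONNX data appears
# in column-major Julia), multiplying with the arguments swapped
# yields the transposed product:
At, Bt = permutedims(A), permutedims(B)
@assert Bt * At == permutedims(C)
```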
Also, it looks like we can't load the model with batch size > 1, but it may be a limitation of the saved model itself.

---
@dfdx, this worked! I think I had everything the same, but my implementation of the

Again, thank you so much. Should I try to push these changes in a PR, or will you push them directly? 😄

---
MatMul needs a bit more work to be useful in the general case, so let's leave it as is for now. I'm currently close to finishing a large update in some other packages and coming back to ONNX.jl, so it won't be long before MatMul gets to the repo anyway.

---
Awesome, can't wait! Great work, and again, thank you so much.

---
What's the status on MatMul? After reading the thread a bit, I think basically we need

---
According to the spec, ONNX MatMul behaves like numpy matmul, i.e.:

So yes, it's a mix of

---
Hey, sorry to bother, but do we have an update on this topic? I've been using my ops-fix PR, but for some reason it's giving me headaches:

I have no idea why all of a sudden it's giving me this pre-compilation error. Does anyone have any clue?

---
Do you use

---
I think ONNX.jl does, that's why it's giving this error.

```julia
module T
model = ONNX.load(...)
end
T.model
```

I load the models during runtime, in the

```julia
module T
function __init__()
    @eval(T, model = ONNX.load(...))
end
end
T.model
```

Maybe I'm wrong and doing a really bad thing, I don't know ahah. But it works. For now! ahah

---
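A common alternative to the `@eval`-in-`__init__` workaround described above is to keep the runtime-loaded object in a `Ref` that `__init__` fills in, so no global is defined by `eval` at all. A minimal sketch with a placeholder standing in for the real `ONNX.load` call (the names here are illustrative, not from ONNX.jl):

```julia
module T

# Filled in at runtime, not at precompilation time.
const MODEL = Ref{Any}(nothing)

function __init__()
    # The real code would do something like: MODEL[] = ONNX.load("model_C.onnx", x)
    MODEL[] = "loaded model"   # placeholder so the sketch runs standalone
end

get_model() = MODEL[]

end # module

T.get_model()
```

Because the `Ref` cell is constant but its contents are mutable, the precompiled image only bakes in the empty cell, and each fresh session repopulates it.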
I can't reproduce the issue using either of these snippets, so perhaps there's something else in between. Other notes:

Exact code I successfully used from the branch of this PR:

```julia
module T
import ONNX
args = (rand(Float32, 224, 224, 3, 1),)
model = ONNX.load(expanduser("~/data/onnx/resnet18-v1-7.onnx"), args...)
end
T.model
```

and

```julia
module T
import ONNX
function __init__()
    args = (rand(Float32, 224, 224, 3, 1),)
    @eval(T, model = ONNX.load(expanduser("~/data/onnx/resnet18-v1-7.onnx"), $args...))
end
end
T.model
```

---
I added a MatMul implementation for several of the most popular cases in #69. It's incomplete, but perhaps it makes more sense to open a new issue for improvements.

---
That's probably what's happening. I saw somewhere that `@eval` allows to the

---
I'm trying to load an .onnx file, and get the error in the title.

Also:

If I understand correctly, there's not a `load_node!` dispatch for the `:MatMul` operation? How can I fix this? Thank you.