[cpu] Integrate IStaticShapeInfer with IShapeInfer #27770
Conversation
- NgraphShapeInfer replaced by IStaticShapeInfer instances Signed-off-by: Pawel Raasz <[email protected]>
@@ -582,18 +612,50 @@ const IStaticShapeInferFactory::TRegistry IStaticShapeInferFactory::registry{
#undef _OV_OP_SHAPE_INFER_MASK_REG
#undef _OV_OP_SHAPE_INFER_VA_REG

class ShapeInferCustomMask : public IShapeInfer {
public:
    ShapeInferCustomMask(ShapeInferPtr shape_infer, port_mask_t port_mask)
Just curious, why do we need a custom mask? At first glance, the whole idea behind this subsystem is to decouple the plugin and shape inference as much as possible, so ideally no custom masks should be allowed. The mask is also the contract between the shape inference implementation and the caller: the shape inference defines the data it requires via the mask. Once the mask is defined outside the shape inference subsystem, how can its correctness be ensured? In the general case an arbitrary custom mask may not match the inputs that are really required.
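For context, here is a minimal, self-contained sketch of the port-mask contract described in this comment. The `port_mask_t` alias, the `make_port_mask` helper and the `needs_data` check are illustrative assumptions, not the actual OpenVINO intel_cpu API.

```cpp
// Sketch only: a port mask is a bitmask in which bit i being set means the
// shape inference needs the *data* of input port i, not just its shape.
// Because the mask is declared next to the shape inference implementation,
// the caller can only query it, never redefine which inputs are required.
#include <cstdint>
#include <initializer_list>

using port_mask_t = std::uint32_t;

// Hypothetical helper that builds a mask from a list of data-dependent ports.
constexpr port_mask_t make_port_mask(std::initializer_list<unsigned> ports) {
    port_mask_t mask = 0;
    for (auto p : ports)
        mask |= port_mask_t{1} << p;
    return mask;
}

// Example: a Reshape-like operation whose output shape depends on the data
// held in input port 1.
constexpr port_mask_t reshape_like_mask = make_port_mask({1});

// The caller checks the bit before fetching input data for shape inference.
constexpr bool needs_data(port_mask_t mask, unsigned port) {
    return ((mask >> port) & 1u) != 0;
}

static_assert(needs_data(reshape_like_mask, 1), "port 1 data is required");
static_assert(!needs_data(reshape_like_mask, 0), "port 0 data is not needed");
```

The key property is that the mask value is produced by (or next to) the shape inference implementation itself, so the caller only reads it.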
I agree with the approach that the shape inference itself should provide the mask.
The custom mask exists to fit the current usage in custom CPU shape inference, where the mask depends on runtime properties of the operator, while the shape inference provides a compile-time value.
The affected CPU nodes:
- eye.cpp
- deconv.cpp
- reference.cpp
If you can confirm it is not required in these implementations, I will remove it. Alternatively, I can do this in a separate PR to see potential issues in the tests. I will follow your recommendation.
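As a rough, hedged illustration of the pattern being discussed (a node-supplied mask overriding the one declared by the shape inference), here is a sketch; `ShapeInferBase` and `CustomMaskDecorator` are invented names that only mirror the idea of the `ShapeInferCustomMask` wrapper shown in the diff above.

```cpp
// Sketch of the decorator idea: wrap an existing shape inference and report a
// mask chosen by the CPU node at construction time (e.g. based on its actual
// number of inputs), delegating everything else to the wrapped object.
// Names are illustrative, not the OpenVINO classes.
#include <cstdint>
#include <memory>
#include <utility>

struct ShapeInferBase {
    using port_mask_t = std::uint32_t;
    virtual ~ShapeInferBase() = default;
    virtual port_mask_t get_port_mask() const = 0;
    // infer(...) and the rest of the interface are omitted for brevity.
};

class CustomMaskDecorator : public ShapeInferBase {
public:
    CustomMaskDecorator(std::shared_ptr<ShapeInferBase> inner, port_mask_t mask)
        : m_inner(std::move(inner)), m_mask(mask) {}

    // The node-supplied mask overrides whatever the wrapped inference declares,
    // which is exactly the coupling questioned in the review comment above.
    port_mask_t get_port_mask() const override { return m_mask; }

private:
    std::shared_ptr<ShapeInferBase> m_inner;
    port_mask_t m_mask;
};
```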
Regarding the affected CPU nodes (eye.cpp, deconv.cpp, reference.cpp):
- eye.cpp - the custom factory may be removed, as it looks like the only factor defining the mask is the number of inputs. In reality there can be a single mask covering all the inputs the shape inference may depend on; if the 4th input is absent, that is not a problem, because the routines that analyze input dependencies and prepare the data take the actual number of inputs into account.
- deconv.cpp - indeed a tricky part, as the deconvolution layer may fuse a bias, which becomes the 3rd or 4th input depending on the optional input of the ngraph operation. In such scenarios we would probably have to write a wrapper over the default shape inference to adjust the behavior for plugin specifics, rather than keep custom masks in the generic shape inference part.
- reference.cpp - an even more interesting case, as the node may be of any arbitrary type (internally and externally dynamic) if the op is provided by the user. If it is an operation from the opset, we could use the existing port mask for that operation, but that would require some modifications of the Reference node implementation itself.
So, to summarize: we can remove the custom shape inference factory for eye. For deconv and reference, it looks more logical to write a simple wrapper class over the shape inference (see the sketch below) rather than allow custom masks in the generic shape inference mechanism. What do you think?
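A possible shape of such a wrapper, as a sketch only and assuming a simplified `ShapeInferBase` interface; the class and type names are invented for illustration and do not reproduce the actual deconvolution shape inference.

```cpp
// Illustrative wrapper: adapt the default deconvolution shape inference to the
// CPU node's fused inputs (e.g. a bias appended by the plugin) without touching
// the generic shape-inference code or its port mask.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

using Dims = std::vector<std::size_t>;

struct ShapeInferBase {
    using port_mask_t = std::uint32_t;
    virtual ~ShapeInferBase() = default;
    virtual std::vector<Dims> infer(const std::vector<Dims>& input_shapes) = 0;
    virtual port_mask_t get_port_mask() const = 0;
};

class FusedDeconvShapeInfer : public ShapeInferBase {
public:
    FusedDeconvShapeInfer(std::shared_ptr<ShapeInferBase> inner, std::size_t num_op_inputs)
        : m_inner(std::move(inner)), m_num_op_inputs(num_op_inputs) {}

    std::vector<Dims> infer(const std::vector<Dims>& input_shapes) override {
        // Forward only the inputs of the original operation; plugin-only inputs
        // such as a fused bias do not take part in shape inference.
        const auto count = std::min(m_num_op_inputs, input_shapes.size());
        std::vector<Dims> trimmed(input_shapes.begin(), input_shapes.begin() + count);
        return m_inner->infer(trimmed);
    }

    // The mask remains the one defined by the shape inference itself.
    port_mask_t get_port_mask() const override { return m_inner->get_port_mask(); }

private:
    std::shared_ptr<ShapeInferBase> m_inner;
    std::size_t m_num_op_inputs;
};
```

The point of this arrangement is that the plugin-specific adaptation (dropping the fused input) lives in the wrapper, while the port mask and the shape math stay with the generic shape inference.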
If this PR passes all checks, I will make another one where I will try to:
- Remove the custom shape inference factory for eye, deconv and reference.
- Introduce simple wrappers if there are any issues with deconv and reference, or apply another solution depending on the cause of the errors.
Ok. Let's agree to remove the custom mask in a follow-up PR.
Details:
- IStaticShapeInfer interface extends IShapeInfer.
- Remove the NgraphShapeInfer class, as its functionality is replaced by IStaticShapeInfer.
- Use ov::Shape to avoid interpretation as intel_cpu::Shape.
- Rename ShapeVector to StaticShapeVector.

Tickets:
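For orientation, a simplified sketch of the interface relationship described in the details above; the member signatures are assumptions for illustration and do not reproduce the actual OpenVINO headers.

```cpp
// Sketch: IStaticShapeInfer extends IShapeInfer, so a single instance can serve
// both the plugin-facing contract and the static-shape entry point that
// previously went through NgraphShapeInfer. Signatures are illustrative.
#include <cstddef>
#include <cstdint>
#include <vector>

using StaticShape = std::vector<std::size_t>;
using StaticShapeVector = std::vector<StaticShape>;

struct IShapeInfer {
    using port_mask_t = std::uint32_t;
    virtual ~IShapeInfer() = default;
    // Declares which input ports' data the inference depends on.
    virtual port_mask_t get_port_mask() const = 0;
    // The generic infer() over plugin memory/dims is omitted here.
};

struct IStaticShapeInfer : public IShapeInfer {
    // Additional static-shape entry point (illustrative signature).
    virtual StaticShapeVector infer_static(const StaticShapeVector& input_shapes) = 0;
};
```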