
[cpu] Integrate IStaticShapeInfer with IShapeInfer #27770

Conversation

praasz
Copy link
Contributor

@praasz praasz commented Nov 27, 2024

Details:

  • The IStaticShapeInfer interface extends IShapeInfer.
  • Remove NgraphShapeInfer class as its functionality is replaced by IStaticShapeInfer.
  • Refactor shape inference unit tests to avoid name clashes with CPU plugin types:
    • use ov::Shape to avoid interpretation as intel_cpu::Shape.
    • rename test type ShapeVector to StaticShapeVector.

Tickets:

- NgraphShapeInfer replaced by IStaticShapeInfer instances

Signed-off-by: Pawel Raasz <[email protected]>
@praasz praasz added this to the 2025.0 milestone Nov 27, 2024
@praasz praasz requested a review from maxnick November 27, 2024 10:44
@praasz praasz requested review from a team as code owners November 27, 2024 10:44
@github-actions github-actions bot added the category: CPU OpenVINO CPU plugin label Nov 27, 2024
src/plugins/intel_cpu/src/nodes/rnn.cpp (outdated review thread, resolved)
@@ -582,18 +612,50 @@ const IStaticShapeInferFactory::TRegistry IStaticShapeInferFactory::registry{
#undef _OV_OP_SHAPE_INFER_MASK_REG
#undef _OV_OP_SHAPE_INFER_VA_REG

class ShapeInferCustomMask : public IShapeInfer {
public:
ShapeInferCustomMask(ShapeInferPtr shape_infer, port_mask_t port_mask)
Contributor:

Just curious, why do we need a custom mask? At first glance, the whole idea behind this subsystem is to decouple the plugin from shape inference as much as possible, so ideally no custom masks should be allowed. The mask is also the contract between the shape inference implementation and the caller: the shape inference defines the data it requires via the mask. Once the mask is defined outside the shape inference subsystem, how can correctness be ensured? There may be an arbitrary custom mask that does not match the actually required inputs in the general case.
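The mask contract described in this comment can be pictured as a bitmask over input ports — a minimal sketch, assuming `port_mask_t` is a plain integer bitmask (the helper functions are hypothetical, introduced only for illustration):

```cpp
#include <cstdint>
#include <initializer_list>

using port_mask_t = uint32_t;

// Bit i of the mask set means shape inference needs the *values* of
// input port i, not just its shape.
inline port_mask_t make_port_mask(std::initializer_list<unsigned> ports) {
    port_mask_t mask = 0;
    for (unsigned p : ports)
        mask |= (port_mask_t{1} << p);
    return mask;
}

inline bool needs_input_data(port_mask_t mask, unsigned port) {
    return (mask >> port) & 1u;
}
```

A mask constructed outside the shape inference implementation can silently disagree with what the implementation actually reads, which is exactly the correctness concern raised here.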

Contributor Author:

I agree with the approach that the shape inference should provide the mask.
The custom mask exists to fit the current usage in custom CPU shape inference, which defines the mask depending on runtime properties of the operator, while the shape inference provides a compile-time value.
The affected CPU nodes:

  • eye.cpp
  • deconv.cpp
  • reference.cpp

If you can confirm it is not required in these implementations, I will remove it. Alternatively, I can do it in a separate PR to surface any potential issues in tests. I will follow your recommendation.

Contributor:

The affected CPU nodes: eye.cpp, deconv.cpp, reference.cpp

  1. eye.cpp — the custom factory may be removed, as the only factor defining the mask appears to be the number of inputs. In reality there can be a single mask covering all possible inputs the shape inference depends on; if the 4th input is absent, that is not a problem, because the routines that analyze input dependencies and prepare the data also take the actual number of inputs into account.
  2. deconv.cpp — indeed a tricky part, as the deconvolution layer may fuse bias, which becomes the 3rd or 4th input depending on the optional input of the ngraph operation. In such scenarios we should probably write a wrapper over the default shape inference to modify the behavior for plugin specifics, rather than keep custom masks in the generic shape inference part.
  3. reference.cpp — an even more interesting case, as there may be an arbitrary type of node (internally and externally dynamic) if the op is provided by the user. If it is an operation from the opset, we could use the existing port mask for the operation, but that would require some modifications of the reference node implementation itself.

So to summarize: we can remove the custom shape inference factory for eye. For deconv and reference, it looks more logical to write a simple wrapper class for shape inference than to keep custom masks allowed in the generic shape inference mechanism. What do you think?
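The wrapper suggested for deconv and reference could look roughly like this — a sketch with simplified stand-in interfaces (`IShapeInfer`, `ShapeInferPtr`, and the class name are illustrative, not the real CPU plugin API):

```cpp
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

using port_mask_t = uint32_t;
using StaticShape = std::vector<size_t>;

// Simplified stand-in for the plugin's shape inference contract.
struct IShapeInfer {
    virtual ~IShapeInfer() = default;
    virtual std::vector<StaticShape> infer(const std::vector<StaticShape>& in) = 0;
    virtual port_mask_t get_port_mask() const = 0;
};
using ShapeInferPtr = std::shared_ptr<IShapeInfer>;

// Trivial default inference used for illustration: identity on shapes,
// depending on the data of port 0 only.
struct IdentityShapeInfer : IShapeInfer {
    std::vector<StaticShape> infer(const std::vector<StaticShape>& in) override { return in; }
    port_mask_t get_port_mask() const override { return 0b01; }
};

// Decorator over the default shape inference: delegates the actual
// inference but widens the port mask for plugin specifics (e.g. a
// deconvolution node that fuses bias as an extra input), keeping
// custom masks out of the generic shape inference subsystem.
class PluginShapeInferWrapper : public IShapeInfer {
public:
    PluginShapeInferWrapper(ShapeInferPtr base, port_mask_t extra_mask)
        : m_base(std::move(base)), m_extra_mask(extra_mask) {}

    std::vector<StaticShape> infer(const std::vector<StaticShape>& in) override {
        return m_base->infer(in);  // inference behavior is unchanged
    }
    port_mask_t get_port_mask() const override {
        return m_base->get_port_mask() | m_extra_mask;  // mask is only widened
    }

private:
    ShapeInferPtr m_base;
    port_mask_t m_extra_mask;
};
```

The design choice here is that the plugin-specific adjustment only ever widens the mask over the generic implementation's own mask, so the shape inference still receives everything it declared it needs.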

Contributor Author:

If this PR passes all checks, I will open a follow-up PR that tries to:

  1. Remove the custom shape inference factory for eye, deconv, and reference.
  2. If there are any issues with deconv or reference, introduce the simple wrappers, or apply another solution depending on the cause of the errors.

Contributor:

OK. Let's agree to remove the custom mask in a follow-up PR.

@praasz praasz requested a review from maxnick December 4, 2024 11:23
@praasz praasz enabled auto-merge December 4, 2024 16:32
@praasz praasz added this pull request to the merge queue Dec 4, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 4, 2024
@praasz praasz added this pull request to the merge queue Dec 4, 2024
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Dec 4, 2024
@praasz praasz added this pull request to the merge queue Dec 4, 2024
Merged via the queue into openvinotoolkit:master with commit c975788 Dec 4, 2024
169 checks passed
@praasz praasz deleted the shape-infer/integrate-istaiticshapeinfer-with-ishapeinfer branch December 4, 2024 21:11