
Non-standard reshape fail during reshape_lazy lowering #2419

Open
attila-dusnoki-htec opened this issue Nov 9, 2023 · 4 comments
@attila-dusnoki-htec (Collaborator):
AWSRN-BAM fails with the following:

migraphx-driver: /code/AMDMIGraphX/src/targets/gpu/lowering.cpp:76: void migraphx::gpu::miopen_apply::check_shape(shape, instruction_ref): Assertion 'x == i->get_shape()' failed.

This script can be used for creating the model.

The issue occurs during lowering:

BEFORE
@619 = gpu::precompile_op[op=pointwise,additional_args=1,ignore_modules=0](@571,@617,@599,@615,@145,@618), [main:pointwise127] -> float_type, {1, 3, 85, 2, 85, 2}, {86700, 28900, 85, 14450, 1, 7225}, target_id=0
@620 = reshape[dims={1, 3, 170, 170}](@619) -> float_type, {1, 3, 170, 170}, {86700, 28900, 14450, 7225}, target_id=0
@621 = @return(@620), target_id=0

AFTER
@619 = gpu::precompile_op[op=pointwise,additional_args=1,ignore_modules=0](@571,@617,@599,@615,@145,@618), [main:pointwise127] -> float_type, {1, 3, 85, 2, 85, 2}, {86700, 28900, 85, 14450, 1, 7225}, target_id=0
@620 = allocate[shape=float_type, {1, 3, 85, 2, 85, 2}, {86700, 28900, 85, 14450, 1, 7225},buf_type=nullopt] -> float_type, {1, 3, 85, 2, 85, 2}, {86700, 28900, 85, 14450, 1, 7225}, target_id=0
@621 = gpu::contiguous(@619,@620) -> float_type, {1, 3, 85, 2, 85, 2}, {86700, 28900, 340, 170, 2, 1}, target_id=0
@622 = allocate[shape=float_type, {1, 3, 170, 170}, {86700, 28900, 170, 1},buf_type=nullopt] -> float_type, {1, 3, 170, 170}, {86700, 28900, 170, 1}, target_id=0
@623 = reshape_lazy[dims={1, 3, 170, 170}](@621) -> float_type, {1, 3, 170, 170}, {86700, 28900, 170, 1}, target_id=0
@624 = gpu::contiguous(@623,@622) -> float_type, {1, 3, 170, 170}, {86700, 28900, 170, 1}, target_id=0
@625 = @return(@624), target_id=0

The original reshape output is in non-standard format, while the gpu::contiguous inserted by add_reshape_lazy_op produces a standard-format shape.

{1, 3, 170, 170}, {86700, 28900, 14450, 7225} // before
{1, 3, 170, 170}, {86700, 28900, 170, 1} // after

cc @causten @pfultz2

@gyulaz-htec (Collaborator):

Could be related to: #2487

@causten (Collaborator):

causten commented Nov 30, 2023

@bpickrel see if this is a duplicate

@bpickrel (Contributor):

Not a clear duplicate: the same assertion fails, but with two different model files and a different sequence of operators. I'll wait and see whether they both have the same underlying cause, though.

@bpickrel bpickrel linked a pull request Dec 6, 2023 that will close this issue
@bpickrel (Contributor):

bpickrel commented Dec 6, 2023

The code branch for #2038 seems to fix this problem. It runs all three of these models successfully, without needing the MIGRAPHX_DISABLE_MLIR setting.
