
Tensor Size Mismatch During Training #1949

Answered by AdeelH
natedailey7 asked this question in Q&A

I can reproduce your error with this:

import torch

# The model is built with out_size=(326, 326) ...
model = torch.hub.load(
    'AdeelH/pytorch-fpn:0.3',
    'make_fpn_resnet',
    name='resnet18',
    fpn_type='panoptic',
    num_classes=2,
    fpn_channels=128,
    in_channels=3,
    out_size=(326, 326),
    pretrained=True)

# ... but the input is 163x163, which triggers the size mismatch.
x = torch.randn((1, 3, 163, 163))
out = model(x)

Assuming this is correct, the problem is the mismatch between the out_size passed to the model and the size of the input going into the model. You need to make sure that out_size matches the size of the inputs to the model; in your case, the inputs are currently 163x163. If you want to resize the 163x163 chips to e.g. 256x256 before they are passed to the model, you can do t…
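The reply is cut off here, but as a rough sketch of the two fixes it points at: either make out_size match the 163x163 chips, or resize the chips to whatever out_size you want to use. The build_model helper and the use of torch.nn.functional.interpolate for resizing are illustrative assumptions, not the exact snippet from the original reply.

import torch
import torch.nn.functional as F

def build_model(out_size):
    # Hypothetical helper: same hub call as above, parameterized by out_size.
    return torch.hub.load(
        'AdeelH/pytorch-fpn:0.3',
        'make_fpn_resnet',
        name='resnet18',
        fpn_type='panoptic',
        num_classes=2,
        fpn_channels=128,
        in_channels=3,
        out_size=out_size,
        pretrained=True)

x = torch.randn((1, 3, 163, 163))

# Option 1: build the model with an out_size matching the 163x163 chips.
model = build_model(out_size=(163, 163))
out = model(x)

# Option 2: keep a larger out_size (e.g. 256x256) and resize the chips to
# that size before they go into the model.
model = build_model(out_size=(256, 256))
x_resized = F.interpolate(x, size=(256, 256), mode='bilinear', align_corners=False)
out = model(x_resized)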

Answer selected by natedailey7