Thanks a lot for your work.
```python
point_embedding = F.grid_sample(image_embedding, point_coord, align_corners=False).squeeze(2).squeeze(2)
```
I notice that if the batch sizes of image_embedding (torch.Size([4, 512, 1, 20, 20])) and point_coord (torch.Size([1, 1, 1, 40, 3])) are different, grid_sample does not work.
I added a copy operation so that the batch size of point_embeddings becomes 4. Is that correct?
```python
if image_embedding.shape[0] != point_coord.shape[0]:
    b, c, d, h, w = image_embedding.shape
    point_coord = torch.repeat_interleave(point_coord, image_embedding.shape[0], dim=0)
```
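As a sanity check of this fix (a minimal sketch using the shapes from this issue; the random tensors are only placeholders), repeating the single grid along the batch dimension makes the 5-D grid_sample call accept the mismatched inputs:

```python
import torch
import torch.nn.functional as F

# Shapes from this issue: a batch of volumetric features, but a single point grid.
image_embedding = torch.randn(4, 512, 1, 20, 20)  # (B, C, D, H, W)
point_coord = torch.rand(1, 1, 1, 40, 3) * 2 - 1  # (1, d_out, h_out, w_out, 3), coords in [-1, 1]

# grid_sample requires input and grid to share the batch dimension,
# so repeat the grid once per image in the batch.
if image_embedding.shape[0] != point_coord.shape[0]:
    point_coord = torch.repeat_interleave(point_coord, image_embedding.shape[0], dim=0)

# Output is (B, C, d_out, h_out, w_out) = (4, 512, 1, 1, 40);
# the two squeezes drop the singleton spatial dims.
point_embedding = F.grid_sample(image_embedding, point_coord, align_corners=False).squeeze(2).squeeze(2)
print(point_embedding.shape)  # torch.Size([4, 512, 40])
```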
In the forward pass of the transformer, I reshape point_embeddings to batch size 1 so it can be concatenated with global_query:
```python
b, n, c = point_embed.shape
point_embed = point_embed.reshape(1, -1, c)
# Self attention block
q = torch.cat([self.global_query, point_embed], dim=1)
```
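To see what this reshape does to the shapes (a minimal sketch; the global_query size is assumed for illustration, since it is not given in this issue):

```python
import torch

num_global, c = 10, 512                       # assumed sizes for illustration
global_query = torch.randn(1, num_global, c)  # (1, n_global, C)

point_embed = torch.randn(4, 40, c)           # (B, n_points, C), e.g. from grid_sample
b, n, c = point_embed.shape
# Collapse the batch into the token dimension so it matches global_query's batch of 1.
point_embed = point_embed.reshape(1, -1, c)   # (1, B * n_points, C)

q = torch.cat([global_query, point_embed], dim=1)
print(q.shape)  # torch.Size([1, 170, 512])  (10 global + 4*40 point tokens)
```

One thing to note about this design: collapsing the batch this way puts point tokens from all four images into a single sequence, so self-attention will mix tokens across the batch. Whether that is intended depends on the model.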