
Doubt about the DeformableTransformerDecoder of BatchFormerV2 #17

Open

wanliliuxiansen opened this issue Nov 24, 2022 · 4 comments

@wanliliuxiansen
I found that the two parameters bbox_embed and class_embed of DeformableTransformerDecoder are not set. Can you give me the complete code of DeformableTransformerDecoder? Thank you very much!

@zhihou7 (Owner) commented Nov 24, 2022

Hi @wanliliuxiansen,
I do not quite get your point. Does Deformable-DETR require setting those two parameters by hand, or inside DeformableTransformerDecoder? The directory batchformer-v2 contains the complete code.
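
For reference, in the upstream Deformable-DETR codebase these two attributes are assigned on the model side rather than inside the decoder itself. A paraphrased excerpt (roughly as in upstream `deformable_detr.py`, not this repository's exact code):

```python
# In DeformableDETR.__init__ (upstream Deformable-DETR, paraphrased):
# the decoder does not construct bbox_embed/class_embed itself;
# the model attaches them after building the transformer.
if with_box_refine:
    # hack implementation for iterative bounding box refinement
    self.transformer.decoder.bbox_embed = self.bbox_embed
else:
    self.transformer.decoder.bbox_embed = None
if two_stage:
    # hack implementation for two-stage Deformable-DETR
    self.transformer.decoder.class_embed = self.class_embed
```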

Regards,
Zhi Hou

@wanliliuxiansen (Author) commented Nov 24, 2022 via email

@zhihou7 (Owner) commented Nov 25, 2022

Hi @wanliliuxiansen,
Thanks for your interest. You do not need to set bbox_embed and class_embed. I implement BatchFormerV2 as a two-stream pipeline: I copy the original batch, apply BatchFormerV2 to the copy, and then concatenate the two batches before they enter the next modules. Meanwhile, I also repeat the labels so they match the doubled batch of features (c.f. `if len(outputs['pred_logits']) > len(targets):`).
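
As a rough illustration only, the two-stream idea can be sketched like this. The module and names below are hypothetical, assuming features shaped `(batch, tokens, dim)`; this is not the repository's exact code:

```python
import torch
import torch.nn as nn

class TwoStreamBatchAttention(nn.Module):
    """Sketch of a BatchFormerV2-style block: attention applied across
    the batch dimension, with the untouched batch kept as a second stream."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        # A plain transformer encoder layer reused for batch attention.
        self.batch_attn = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads)

    def forward(self, x):
        # x: (B, N, D) -- B images, N tokens per image.
        old_x = x  # stream 1: the original, unmodified batch
        # TransformerEncoderLayer (batch_first=False) expects (seq, batch, dim).
        # Feeding (B, N, D) directly lets the batch axis B play the role of
        # the sequence, so attention mixes information across images at
        # each token position.
        x = self.batch_attn(x)
        # Concatenate the two streams along the batch dimension;
        # every module downstream sees a doubled batch.
        return torch.cat([old_x, x], dim=0)
```

On the loss side, the target repetition guarded by the quoted condition could look like the sketch below (`targets` is assumed to be the usual DETR-style list of per-image dicts):

```python
def align_targets(outputs, targets):
    # If the two-stream pipeline doubled the batch, repeat the
    # ground-truth targets so predictions and labels stay paired.
    if len(outputs['pred_logits']) > len(targets):
        targets = targets * 2
    return targets
```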

I have added more comments to the corresponding parts of the code. Feel free to ask if you have further questions.

Regards,

@wanliliuxiansen (Author) commented Nov 25, 2022 via email
