doubt about the DeformableTransformerDecoder of batchformerv2 #17

wanliliuxiansen (issue author):
I found that the two parameters bbox_embed and class_embed of DeformableTransformerDecoder are never set. Can you give me the complete code of DeformableTransformerDecoder? Thank you very much!

zhihou7:
Hi @wanliliuxiansen,
I do not get your point. Does Deformable-DETR require the two parameters to be set by hand, or are they set inside DeformableTransformerDecoder?
Regards,
Zhi Hou
wanliliuxiansen:
OK, thank you very much! I'm currently working on image harmonization, a topic that belongs to image generation. I want to use BatchFormer to address this problem, but I don't know how to set the two parameters. Image harmonization is also limited by the features of a single image, so it needs features from other images. I have read your paper, which covers the related details. However, I do not understand the two shared classifiers, and I don't know whether BatchFormer is helpful for image harmonization.
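For context on the "two shared classifiers": in BatchFormer (v1), the features before and after the batch transformer are both scored by one classification head, so the auxiliary pre-BatchFormer stream and the transformed stream share a single classifier. A minimal sketch adapted from the PyTorch-style pseudocode in the BatchFormer paper; the function and argument names here are illustrative, not taken from this repository:

```python
import torch
import torch.nn as nn

def batchformer_v1(x, y, batch_encoder, classifier, is_training):
    """BatchFormer v1 with a shared classifier (illustrative names).

    x: [N, C] features for a batch of N samples; y: [N] labels.
    batch_encoder: e.g. nn.TransformerEncoderLayer(d_model=C, nhead=4),
        applied across the batch dimension so samples attend to each other.
    classifier: a single nn.Linear head; because both streams are
        concatenated and scored by this one module, the two streams
        effectively share one classifier.
    """
    if not is_training:
        # BatchFormer is removed at test time; only the shared head remains,
        # so inference runs on single samples as usual.
        return classifier(x), y
    old_x = x
    # [N, C] -> [N, 1, C]: the batch axis becomes the attention sequence.
    x = batch_encoder(x.unsqueeze(1)).squeeze(1)
    # Concatenate both streams and duplicate the labels to match.
    x = torch.cat([old_x, x], dim=0)   # [2N, C]
    y = torch.cat([y, y], dim=0)       # [2N]
    return classifier(x), y

# Usage during training (hypothetical names):
#   logits, labels = batchformer_v1(feats, labels, encoder, head, model.training)
```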
zhihou7:
Hi @wanliliuxiansen,
Thanks for your interest. You do not need to set bbox_embed and class_embed. I implement BatchFormerV2 with a two-stream pipeline. Specifically, I copy the original batch, apply BatchFormerV2 to the new batch, and then concatenate the two batches as the input to the next modules (c.f. https://github.com/zhihou7/BatchFormer/blob/e22f5ff895b04d10f3aa4e745f9a226f3d9b7641/batchformer-v2/models/deformable_transformer.py#L278). Meanwhile, I also repeat the ground truth to match the dimension of the features (c.f. https://github.com/zhihou7/BatchFormer/blob/e22f5ff895b04d10f3aa4e745f9a226f3d9b7641/batchformer-v2/engine.py#L43).
For the corresponding part of the code, I have added more annotations. Feel free to ask if you have further questions.
Regards,
Zhi Hou
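To make the two-stream pipeline above concrete, here is a minimal sketch; the class name, tensor shapes, and exact insertion point are illustrative assumptions, not the repository's code:

```python
import torch
import torch.nn as nn

class BatchFormerV2Block(nn.Module):
    """Sketch of a two-stream BatchFormerV2 insertion (illustrative names).

    src: flattened encoder features of shape [B, L, C]
         (batch size, number of tokens, channels).
    """
    def __init__(self, dim, num_heads=8):
        super().__init__()
        # With batch_first=False (the default), the first input dimension is
        # the attention sequence, i.e. the B images of the batch.
        self.batch_encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads)

    def forward(self, src):
        if not self.training:
            # Inference keeps only the original stream, so BatchFormerV2
            # adds no test-time cost.
            return src
        old = src
        # Each token position attends to the same position in the other
        # images of the batch: [B, L, C] read as (seq=B, batch=L, feat=C).
        new = self.batch_encoder(src)
        # Two-stream pipeline: original batch plus the transformed copy.
        return torch.cat([old, new], dim=0)   # [2B, L, C]

# Training-loop counterpart (what engine.py L43 does for Deformable-DETR's
# list-of-dict targets): repeat the ground truth to supervise both streams.
#   targets = targets + targets
```

Because only the batch is enlarged, the decoder and the existing bbox_embed/class_embed heads need no extra configuration, which is why those two parameters never have to be set by hand.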
wanliliuxiansen:
Thank you for your help!