
About the model explanation #10

Open
ohwi opened this issue Mar 17, 2021 · 9 comments
@ohwi

ohwi commented Mar 17, 2021

Hi. Thank you for your impressive work.

I've read your work and want to understand your model clearly.

From #2, I know there is no paper, but I found a paper that is similar to your work.

Does the figure below explain your work?

[Figure: architecture diagram from the linked paper]

Thank you!

@ohwi
Author

ohwi commented Mar 17, 2021

I see a small difference in the backbone: the paper uses a ViT, while this work uses a CNN.

@saahiluppal
Owner

Hey, thanks for the feedback.

This work is inspired by Facebook AI's DETR (Detection Transformer), which aims to do object detection with transformers.

The paper you've attached is very recent work on a similar topic, but the authors have not provided any implementation.

@ohwi
Author

ohwi commented Mar 17, 2021

Thank you for your reply.

I think I understand the structure of your work. Thank you!!

@parthskansara

parthskansara commented Mar 28, 2021

Hi @saahiluppal, I am trying to understand where the object detection part occurs in the code and which exact algorithm you're using.

@saahiluppal
Owner

Hey,
The model is not doing object detection at any phase.

The image is fed to a ResNet, and this backbone gives us the feature embeddings along with the corresponding mask for the image.
These features and the mask are then fed to the transformer,
and the rest is handled by attention.

That is the versatility of the attention mechanism.
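
Roughly, the flow looks like the sketch below. This is only an illustration of the idea (ResNet features + padding mask flattened into a sequence and decoded by a transformer into caption logits), not the exact code in this repository; the class name, dimensions, and the use of PyTorch's built-in nn.Transformer are my own assumptions.

```python
# Illustrative sketch of: image -> ResNet backbone -> features + mask -> transformer.
# Not the repository's actual implementation.
import torch
import torch.nn as nn
import torchvision


class CaptioningSketch(nn.Module):
    def __init__(self, vocab_size=30522, hidden_dim=256, max_len=128):
        super().__init__()
        # Backbone: ResNet-50 without the average pool and classification head.
        resnet = torchvision.models.resnet50(weights=None)
        self.backbone = nn.Sequential(*list(resnet.children())[:-2])
        # Project the 2048-channel feature map to the transformer width.
        self.input_proj = nn.Conv2d(2048, hidden_dim, kernel_size=1)
        self.transformer = nn.Transformer(d_model=hidden_dim)
        self.token_embed = nn.Embedding(vocab_size, hidden_dim)
        self.pos_embed = nn.Embedding(max_len, hidden_dim)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, image_mask, captions):
        # images: (B, 3, H, W); image_mask: (B, H, W), True = padded pixel;
        # captions: (B, T) token ids of the caption generated so far.
        feats = self.input_proj(self.backbone(images))          # (B, C, h, w)
        B, C, h, w = feats.shape
        src = feats.flatten(2).permute(2, 0, 1)                 # (h*w, B, C)

        # Downsample the padding mask to the feature-map resolution.
        mask = nn.functional.interpolate(
            image_mask[None].float(), size=(h, w))[0].bool()    # (B, h, w)
        src_key_padding = mask.flatten(1)                       # (B, h*w)

        T = captions.shape[1]
        positions = torch.arange(T, device=captions.device)
        tgt = (self.token_embed(captions) + self.pos_embed(positions)).permute(1, 0, 2)
        # Causal mask so each caption position only attends to earlier tokens.
        causal = self.transformer.generate_square_subsequent_mask(T).to(captions.device)

        out = self.transformer(
            src, tgt,
            tgt_mask=causal,
            src_key_padding_mask=src_key_padding,
            memory_key_padding_mask=src_key_padding,
        )                                                       # (T, B, C)
        return self.head(out.permute(1, 0, 2))                  # (B, T, vocab_size)
```

Cross-attention inside the decoder is what lets each caption token look at the relevant image regions, which is why no explicit object-detection stage is needed.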

@saahiluppal
Owner

PS: Recent research suggests that doing object detection prior to image captioning doesn't bring any additional improvement; it only increases complexity.

@ohwi
Copy link
Author

ohwi commented Apr 1, 2021

PS: Recent research suggests that doing object detection prior to image captioning doesn't bring any additional improvement; it only increases complexity.

Hi. Would you let me know which paper you referenced? Thank you.

@saahiluppal
Owner

I read it in the ablation studies of some paper, but I'm not sure which one.
I'll share the name of the paper as soon as I come across it again.

@Tough-Stone

Have you found which paper the structure of this code refers to? Thanks.
