Hi,
Thank you for releasing your model and code!
I have a question about your recent CCLM-X2VLM models. The configs in the X2-VLM repository (https://github.com/zengyan-97/X2-VLM/blob/main/configs/pretrain/multilingual_cclm_x2vlm_base.yaml) suggest (if I understand the code correctly) that the model is trained like X2-VLM, i.e., with region/object annotations and a bounding-box regression objective, in addition to the parallel sentence data. However, your recent paper (https://aclanthology.org/2023.acl-long.315) does not mention this and only lists the "standard" image-caption data.
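For reference, by "bounding-box regression objective" I mean something along the lines of the sketch below (an X-VLM-style L1 + GIoU loss on normalized box coordinates). This is my own rough reading of the config, not the actual X2-VLM implementation; all names here are illustrative.

```python
# Rough sketch of an X-VLM-style bounding-box regression loss
# (L1 + GIoU on normalized (cx, cy, w, h) boxes). Illustrative only;
# the actual X2-VLM code may differ.

import torch


def box_cxcywh_to_xyxy(boxes: torch.Tensor) -> torch.Tensor:
    """Convert (cx, cy, w, h) boxes to (x1, y1, x2, y2)."""
    cx, cy, w, h = boxes.unbind(-1)
    return torch.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], dim=-1)


def generalized_iou(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Element-wise GIoU for (x1, y1, x2, y2) boxes of shape (N, 4)."""
    inter_lt = torch.max(pred[:, :2], target[:, :2])
    inter_rb = torch.min(pred[:, 2:], target[:, 2:])
    inter_wh = (inter_rb - inter_lt).clamp(min=0)
    inter = inter_wh[:, 0] * inter_wh[:, 1]

    area_pred = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_target = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_pred + area_target - inter
    iou = inter / union.clamp(min=1e-6)

    # Smallest box enclosing both prediction and target.
    encl_lt = torch.min(pred[:, :2], target[:, :2])
    encl_rb = torch.max(pred[:, 2:], target[:, 2:])
    encl_wh = (encl_rb - encl_lt).clamp(min=0)
    encl = encl_wh[:, 0] * encl_wh[:, 1]

    return iou - (encl - union) / encl.clamp(min=1e-6)


def bbox_regression_loss(pred_cxcywh: torch.Tensor, target_cxcywh: torch.Tensor) -> torch.Tensor:
    """L1 + (1 - GIoU), averaged over the predicted regions."""
    loss_l1 = torch.nn.functional.l1_loss(pred_cxcywh, target_cxcywh, reduction="mean")
    giou = generalized_iou(box_cxcywh_to_xyxy(pred_cxcywh), box_cxcywh_to_xyxy(target_cxcywh))
    loss_giou = (1.0 - giou).mean()
    return loss_l1 + loss_giou
```

If the CCLM-X2VLM checkpoints were indeed trained with such an objective on region/object annotations, that would be an important detail for reproducing the paper's results.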
Can you clarify how you trained those models?
Thank you very much!