I find the results in Table 3 for xFlickr&CO under the "-w/o parallel sentence pairs" setting hard to understand.
As described in the paper, the "-w/o parallel sentence pairs" setting uses only CC3M with translated text during pre-training, which means the model sees only about 3 of the 7 xFlickr&CO languages during pre-training. How, then, does it achieve roughly 43% average Recall@1?
Could you please provide the per-language performance of this setting on xFlickr&CO?
Or am I misunderstanding the setting? Thanks!
Arthurizijar changed the title from "On the ablation results on xFlickr&CO" to "The ablation results on xFlickr&CO" on Apr 26, 2023.