Dear authors,
I wonder whether there is a script for automatically evaluating CLIP models on your benchmark. I found the benchmark data in the OHD-Caps dataset, but I couldn't find the code to evaluate models on it. It also seems that main_aro.py in evaluate_clip has a path problem when evaluating models on OHD.
Is there a script for this?
Thanks
The file paths may need to be modified. The script for evaluating OHD performance is `main_aro.py`. The subset is selected with the `--dataset` argument: `COCO_Object`, `Flickr_Object`, and `Nocaps_Object` correspond to the three subsets of OHD.
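A minimal sketch of how this could be run, assuming `main_aro.py` is invoked from the `evaluate_clip` directory and that the dataset paths inside the script have already been edited to point at your local copy of the OHD-Caps data (only `--dataset` is confirmed above; any other flags the script requires, such as a model or data path, are not shown here):

```bash
# Hypothetical invocation: loop over the three OHD subsets mentioned above.
# Paths hard-coded inside main_aro.py may still need to be adjusted first.
cd evaluate_clip
for subset in COCO_Object Flickr_Object Nocaps_Object; do
    python main_aro.py --dataset "$subset"
done
```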