This repository has been archived by the owner on Jun 15, 2023. It is now read-only.

question about accuracy #11

Open
pengzhiliang opened this issue Sep 13, 2020 · 2 comments

Comments

@pengzhiliang

Thanks for the great work!

I have a question: is the accuracy reported in your paper measured on the validation set or on the test set? If it is the validation set, then using the stage-1 ResNet-50 pretrained model you provided on GitHub, the second stage with cRT and LWS reaches 48.0%, slightly higher than the results in the paper. But if it is the test set, the second-stage results with cRT and LWS are 46.3% and 46.6% respectively, which is quite far from the results reported in the paper. I am very confused about this.
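For reference, this is roughly what the two second-stage methods mentioned above do, following their descriptions in the "Decoupling Representation and Classifier" paper. This is a minimal illustrative sketch, not this repo's actual code; all names here are my own:

```python
import numpy as np

# Illustrative sketch of the two second-stage (classifier re-balancing)
# methods, cRT and LWS, as described in the Decoupling paper.
# Names and shapes here are assumptions for the sake of the example.

rng = np.random.default_rng(0)
feat_dim, num_classes = 4, 3

# Stage-1 classifier weights (the backbone is frozen in both methods).
W1 = rng.standard_normal((num_classes, feat_dim))
b1 = np.zeros(num_classes)

# cRT: discard the stage-1 classifier, re-initialize it, and retrain it
# on class-balanced batches while keeping the backbone frozen.
W_crt = rng.standard_normal((num_classes, feat_dim)) * 0.01

# LWS: keep W1 fixed and learn only one scaling factor per class.
scales = np.ones(num_classes)  # these are what stage 2 learns

def lws_logits(feats):
    # Scaling the weight vector of class c by scales[c] is equivalent to
    # scaling that class's logit.
    return feats @ (W1 * scales[:, None]).T + b1

feats = rng.standard_normal((2, feat_dim))
print(lws_logits(feats).shape)  # one logit vector per sample
```

With all scales initialized to 1, LWS starts out identical to the stage-1 classifier and only re-weights class logits as training proceeds.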

@pengzhiliang
Author

The dataset is ImageNet-LT

@pengzhiliang
Author

OK, I see. Using the second-stage weights (ResNeXt-50 cRT) you provided on GitHub, I evaluated the accuracy on the test set, and the output is as follows:

Phase: test
100%|████████████████████████████████████████████████████████████████████████| 98/98 [01:36<00:00,  1.12it/s]


 Phase: test 

 Evaluation_accuracy_micro_top1: 0.481 
 Averaged F-measure: 0.467 
 Many_shot_accuracy_top1: 0.602 Median_shot_accuracy_top1: 0.445 Low_shot_accuracy_top1: 0.266 

Many: 60.2   Medium: 44.5   Low: 26.6   Overall: 48.1
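In case it helps with reproducing these numbers, this is a sketch of how the shot-split metrics are usually computed for ImageNet-LT (not necessarily this repo's exact evaluation code). The standard convention in the long-tailed recognition literature is: many-shot classes have more than 100 training images, medium-shot 20 to 100, and low-shot fewer than 20:

```python
# Sketch of shot-split top-1 accuracy for ImageNet-LT-style evaluation.
# The 100/20 thresholds follow the usual long-tailed convention; the
# function name and signature are illustrative, not this repo's API.

def shot_split_accuracy(preds, labels, train_class_counts):
    """Return (overall, many, medium, low) top-1 accuracies."""
    def acc(pairs):
        pairs = list(pairs)
        return sum(p == y for p, y in pairs) / len(pairs) if pairs else float("nan")

    samples = list(zip(preds, labels))
    overall = acc(samples)
    # Bin each test sample by its class's *training* frequency.
    many   = acc((p, y) for p, y in samples if train_class_counts[y] > 100)
    medium = acc((p, y) for p, y in samples if 20 <= train_class_counts[y] <= 100)
    low    = acc((p, y) for p, y in samples if train_class_counts[y] < 20)
    return overall, many, medium, low

# Toy usage: class 0 is many-shot, class 1 medium-shot, class 2 low-shot.
counts = [500, 60, 10]
preds  = [0, 1, 1, 1, 2, 0]
labels = [0, 0, 1, 1, 2, 2]
print(shot_split_accuracy(preds, labels, counts))
```

The key detail is that the split is determined by training-set class counts, so the same test sample lands in the same bin regardless of which checkpoint is evaluated.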

which is noticeably different from Table 7 in the paper:

Many: 61.8   Medium: 46.2   Low: 27.4   Overall: 49.6

Therefore, the accuracy in the paper is on the validation set.
