
About Triplet prediction Accuracy #7

Open
tangchen2 opened this issue Nov 7, 2019 · 12 comments

Comments

@tangchen2

I have implemented the code in PyTorch and trained it on the whole dataset. However, my triplet prediction accuracy is much lower than the 81% reported in the paper.
What triplet accuracy are you getting?
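(For concreteness, triplet prediction accuracy can be computed as the fraction of triplets where the annotated similar pair is closest in embedding space. A minimal NumPy sketch, assuming precomputed embeddings and the convention that the "negative" is the rater-chosen odd one out; the function name is illustrative, not from the paper's code:)

```python
import numpy as np

def triplet_accuracy(anchors, positives, negatives):
    """Fraction of triplets where the annotated similar pair (anchor,
    positive) is closer to each other than either image is to the
    odd-one-out (negative). Inputs are (N, D) embedding arrays."""
    d_ap = np.linalg.norm(anchors - positives, axis=1)
    d_an = np.linalg.norm(anchors - negatives, axis=1)
    d_pn = np.linalg.norm(positives - negatives, axis=1)
    correct = (d_ap < d_an) & (d_ap < d_pn)
    return correct.mean()
```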

@druefena

@tangchen2 Have you used the pretrained weights for FaceNet, as they mention in the paper? Otherwise, the results are likely quite a bit worse if you are training only on the FEC dataset.

@tangchen2

> @tangchen2 Have you used the pretrained weights for FaceNet, as they mention in the paper? Otherwise, the results are likely quite a bit worse if you are training only on the FEC dataset.

I used the pretrained FaceNet weights and froze the FaceNet layers exactly as in the paper, but I only got around 60% accuracy, which really confuses me...

@druefena

Assuming that you are following the same training schedule (and sampling) as reported in the paper, it sounds like there is a bug in your implementation. Is 60% your validation accuracy? What about training accuracy: is it also around 60%, or higher?
(For reference: I am using a different face feature extractor that takes 112x112 input, made some modifications to the DenseLayer to reflect the smaller input size, and am getting around 76% validation accuracy.)

@druefena

Another thing worth checking is the order of the color channels you feed into FaceNet (RGB versus BGR).
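(This mix-up is easy to hit because OpenCV's `imread` returns images in BGR order, while most FaceNet checkpoints expect RGB. A one-line sanity fix, sketched here with a hypothetical helper name:)

```python
import numpy as np

def bgr_to_rgb(image):
    """Reverse the channel axis of an (..., H, W, 3) channels-last array.
    The same slice converts RGB back to BGR; it is its own inverse."""
    return image[..., ::-1]
```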

@tangchen2

> Another thing worth checking is the order of the color channels you feed into FaceNet (RGB versus BGR).

Well, I really didn't think about the color channels; I will check that.
Also, I replaced the FaceNet backbone with InsightFace, which requires 112x112 input, and got 68% validation accuracy. Maybe I do need to check the color channels. Thank you so much!

@druefena

Last (obvious) thing to check is that you are only considering what they refer to as "strong" triplets, i.e., the ones where at least 66% of the raters agree. The 76% validation accuracy I get is when I only consider the strong triplets (both in training and validation). Hope this helps.
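(The agreement check above can be sketched as follows, assuming you have a list of per-rater odd-one-out choices for each triplet; the function name and input format are illustrative, not the FEC release's actual schema:)

```python
from collections import Counter

def is_strong(rater_labels, threshold=2 / 3):
    """A triplet is 'strong' when at least `threshold` of the raters
    agree on which image is the odd one out.
    `rater_labels` is one choice per rater, e.g. [1, 1, 2, 1, 1, 1]."""
    _, count = Counter(rater_labels).most_common(1)[0]
    return count / len(rater_labels) >= threshold
```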

@tangchen2

> Last (obvious) thing to check is that you are only considering what they refer to as "strong" triplets, i.e., the ones where at least 66% of the raters agree. The 76% validation accuracy I get is when I only consider the strong triplets (both in training and validation). Hope this helps.

Oh, I think that's the key issue! I used all triplets in training before; now I am filtering for strong triplets and will report the result later!
By the way, I have two more questions, and I hope you can give me some advice. First, did you crop and align the images in the dataset (e.g. with MTCNN or RetinaFace) before training? Second, how large is your training set? The number of images I downloaded successfully is much lower than reported in the paper: around 80k training images versus the paper's 130,516, and 20k test images versus 25,427. Maybe my smaller training set also hurts my performance?
Thanks again!

@tangchen2

> Last (obvious) thing to check is that you are only considering what they refer to as "strong" triplets, i.e., the ones where at least 66% of the raters agree. The 76% validation accuracy I get is when I only consider the strong triplets (both in training and validation). Hope this helps.

Finally, I got 75% accuracy on validation. The main reason was switching the training set to strong triplets instead of all triplets. Thank you very much for that reminder!

@SvenSu

SvenSu commented Dec 27, 2019

Because some of the crawled data is missing, I cannot obtain all the triplet pairs. How can I get the original data? I would appreciate it if you could share the data with me! Thanks!

@lnguyen

lnguyen commented Dec 31, 2019

same!

@tangchen2

> Because some of the crawled data is missing, I cannot obtain all the triplet pairs. How can I get the original data? I would appreciate it if you could share the data with me! Thanks!

I downloaded the data directly from the provided URLs and compared it with the official dataset; it is incomplete because some URLs are broken. In any case, my results were obtained with the incomplete dataset. Hope that helps.

@YunShen66666

> Because some of the crawled data is missing, I cannot obtain all the triplet pairs. How can I get the original data? I would appreciate it if you could share the data with me! Thanks!

> I downloaded the data directly from the provided URLs and compared it with the official dataset; it is incomplete because some URLs are broken. In any case, my results were obtained with the incomplete dataset. Hope that helps.

Hi, I tested the pretrained model in PyTorch and the result differs from the paper. Did you test the official pretrained model?
