
Why is the AUPRO lower than WinCLIP's? #14

Open
Ringhu opened this issue Nov 23, 2023 · 1 comment

Comments


Ringhu commented Nov 23, 2023

Thank you for your work, first of all. I found that both your AUROC and F1-max scores for zero-shot segmentation on the MVTec-AD dataset are higher than WinCLIP's, but the AUPRO is lower (64.6 for WinCLIP vs. 44 for your work). Can you provide some explanation for this? Thank you.
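For context on what AUPRO measures differently from AUROC: AUPRO averages the overlap (recall) with each connected ground-truth region, so small anomalous regions count as much as large ones, and integrates that average against the false-positive rate up to a 30% limit. Below is a rough, simplified sketch of such a computation; it is not the evaluation code used by WinCLIP or this repository, and the function name and parameters are illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.integrate import trapezoid

def aupro(score_maps, gt_masks, fpr_limit=0.3, n_thresholds=200):
    """Simplified per-region-overlap AUC; names and defaults are illustrative."""
    scores = np.stack(score_maps)             # (N, H, W) anomaly score maps
    masks = np.stack(gt_masks).astype(bool)   # (N, H, W) ground-truth masks
    thresholds = np.linspace(scores.max(), scores.min(), n_thresholds)

    fprs, pros = [], []
    for t in thresholds:
        pred = scores >= t
        # false-positive rate over all normal pixels
        fprs.append((pred & ~masks).sum() / max((~masks).sum(), 1))
        # overlap (recall) averaged over every connected anomalous region
        overlaps = []
        for p, m in zip(pred, masks):
            labeled, n_regions = ndimage.label(m)
            for r in range(1, n_regions + 1):
                region = labeled == r
                overlaps.append((p & region).sum() / region.sum())
        pros.append(np.mean(overlaps) if overlaps else 0.0)

    fprs, pros = np.array(fprs), np.array(pros)
    keep = fprs <= fpr_limit                  # integrate only up to 30% FPR
    return trapezoid(pros[keep], fprs[keep]) / fpr_limit
```

Because every region contributes equally, a method that misses small defect regions can score well on pixel-level AUROC yet poorly on AUPRO.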

ByChelsea (Owner) commented

Due to overfitting issues, especially when training and testing on different datasets, it is difficult to decide when to stop training our method. Different epochs yield different results, and some epochs show higher PRO metrics.

Additionally, I have observed that existing metrics have their own preferences (e.g., AUROC is insensitive to a large number of normal instances being predicted as anomalous). Evaluating anomaly detection with existing metrics may therefore not be entirely suitable. Designing more appropriate metrics, or thoroughly discussing the preferences of the various metrics, could be an interesting direction for future research.
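As a concrete illustration of the AUROC preference mentioned above, here is a small synthetic sketch (hypothetical numbers, not results from either paper): under the heavy pixel imbalance typical of segmentation, scoring thousands of normal pixels as anomalous barely moves the false-positive rate, so AUROC stays high while a precision-based metric collapses.

```python
# Synthetic illustration only: ~99% of pixels are normal, as in typical
# anomaly-segmentation benchmarks. 5% of the normal pixels are given
# scores in the anomalous range (false positives).
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n_normal, n_anomalous = 100_000, 1_000
labels = np.concatenate([np.zeros(n_normal), np.ones(n_anomalous)])

scores = np.concatenate([rng.normal(0.2, 0.1, n_normal),
                         rng.normal(0.8, 0.1, n_anomalous)])
noisy = rng.choice(n_normal, size=n_normal // 20, replace=False)
scores[noisy] = rng.normal(0.8, 0.1, noisy.size)

# AUROC stays near 0.97 despite ~5,000 false positives outnumbering the
# 1,000 true anomalous pixels; average precision drops to roughly 0.2.
print(f"AUROC: {roc_auc_score(labels, scores):.3f}")
print(f"AP:    {average_precision_score(labels, scores):.3f}")
```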

Please note that the above is my personal viewpoint, provided for reference only. However, I very much welcome discussion and counterarguments. :)
