Bug in evaluation script during training #252
Labels: bug
Thanks, excellent catch. The good thing is that this bug only influenced the validation metrics reported by Casanovo directly (e.g. during training/validation). The results and figures included in the papers were obtained with external evaluation scripts, so normally those should still be correct.
bittremieux added a commit that referenced this issue on Oct 13, 2023.
melihyilmaz added a commit that referenced this issue on Oct 24, 2023: "Fixes #252." Co-authored-by: Melih Yilmaz <[email protected]>
melihyilmaz added a commit that referenced this issue on Nov 2, 2023. Its squashed commit message:

* Remove unused custom_encoder option (#254): resolves issue #238; lint fixes, including a revert of commit bd1366c; consistently format changelog. Co-authored-by: Isha Gokhale <[email protected]> and Wout Bittremieux <[email protected]>
* Correctly report AA precision and recall during validation (#253): fixes #252. Co-authored-by: Melih Yilmaz <[email protected]>
* Remove gradient calculation during inference (#258): remove force_grad in inference; upgrade required PyTorch version; update CHANGELOG.md; fix typo in torch version; specify correct PyTorch version change. Co-authored-by: Wout Bittremieux <[email protected]>
* Add label smoothing; modify config file; minor fix to config.yaml; run black; lint casanovo.py.

Co-authored-by: ishagokhale <[email protected]>, Isha Gokhale <[email protected]>, Wout Bittremieux <[email protected]>
There are some inconsistencies in how arguments are passed to the various evaluation functions. This might be causing amino acid precision and recall to be switched.
In the validation step, the peptide precision is calculated as follows:
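Under the assumption that `aa_match_batch` also returns the per-peptide match information that `aa_match_metrics` consumes, the call chain described here looks roughly like this minimal sketch (variable names not taken from this report are illustrative, and extra arguments such as the amino acid mass dictionary are omitted):

```python
# Sketch of the validation-step metric computation described in this issue.
# Argument lists are abbreviated; names not mentioned in the report are illustrative.
aa_matches_batch, n_aa1, n_aa2 = evaluate.aa_match_batch(
    peptides1=peptides_pred,  # predictions passed as the first peptide list
    peptides2=peptides_true,  # ground truth passed as the second peptide list
)
aa_precision, aa_recall, pep_precision = evaluate.aa_match_metrics(
    aa_matches_batch,
    n_aa1,  # received by aa_match_metrics as n_aa_true
    n_aa2,  # received by aa_match_metrics as n_aa_pred
)
```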
The `evaluate.aa_match_metrics` function expects the arguments in the order of first `n_aa_true` and then `n_aa_pred`. The `evaluate.aa_match_batch` function returns `n_aa1` and `n_aa2` in the same order as `peptides1` and `peptides2`.
The problem is that in the validation step we pass `peptides1=peptides_pred` and `peptides2=peptides_true`, thereby passing `n_aa_true` as the number of predicted amino acids, and vice versa for `n_aa_pred`. This shouldn't cause an issue for the current peptide-level precision calculation, but it may affect older versions of the codebase.
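To see why the swap matters for the amino acid-level metrics: precision normalizes the number of matched residues by the number of predicted residues, while recall normalizes by the number of ground-truth residues. The following is a simplified, self-contained illustration (not Casanovo's implementation) of how exchanging the two counts exchanges the two reported values; all counts are hypothetical:

```python
# Simplified illustration (not the Casanovo code) of why swapping the residue
# counts swaps the reported amino acid precision and recall.
def aa_metrics(n_aa_matched: int, n_aa_true: int, n_aa_pred: int) -> tuple[float, float]:
    """Amino acid-level precision and recall from match counts."""
    aa_precision = n_aa_matched / n_aa_pred  # correct residues among the predictions
    aa_recall = n_aa_matched / n_aa_true     # correct residues among the ground truth
    return aa_precision, aa_recall

# Hypothetical counts: 80 matched, 100 true, 90 predicted residues.
print(aa_metrics(80, n_aa_true=100, n_aa_pred=90))   # roughly (0.889, 0.8), as intended
print(aa_metrics(80, n_aa_true=90, n_aa_pred=100))   # roughly (0.8, 0.889): values swapped
```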