I observed a significant discrepancy in CLAP scores when evaluating Musicgen with different pretrained CLAP models. Specifically, I used two distinct pretrained CLAP checkpoints to score Musicgen on the MusicCaps test set and saw a large variation in the results:
- `630k-audioset-fusion-best.pt` (general audio with variable length; used by Stable Audio Open for evaluation)
- `music_speech_audioset_epoch_15_esc_89.98.pt` (speech, music, and general audio; used by Musicgen for evaluation)
| CLAP score | `630k-audioset-fusion-best.pt` | `music_speech_audioset_epoch_15_esc_89.98.pt` |
|---|---|---|
| Stable Audio Open | 0.292 | 0.277 |
| Musicgen-medium | 0.129 | 0.312 |
| Musicgen-large | 0.130 | 0.311 |
Given this, what could explain such a large difference in Musicgen's CLAP scores between these two pretrained CLAP checkpoints?
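For context, here is a minimal sketch of how such a CLAP score is typically computed with the `laion_clap` package, with each checkpoint loaded in the configuration suggested by the LAION-CLAP README (fusion model for `630k-audioset-fusion-best.pt`, non-fusion `HTSAT-base` audio encoder for `music_speech_audioset_epoch_15_esc_89.98.pt`). The audio paths and captions are placeholders, and this may differ in details from the exact evaluation pipelines used by Stable Audio Open or Musicgen:

```python
# Sketch: CLAP score as mean cosine similarity between audio and caption embeddings.
# Checkpoint paths, audio files, and captions below are placeholders.
import numpy as np
import laion_clap

def clap_score(model, audio_files, captions):
    """Mean cosine similarity between audio embeddings and caption embeddings."""
    audio_emb = model.get_audio_embedding_from_filelist(x=audio_files, use_tensor=False)
    text_emb = model.get_text_embedding(captions, use_tensor=False)
    # Normalize so the dot product is a cosine similarity.
    audio_emb = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    return float(np.mean(np.sum(audio_emb * text_emb, axis=1)))

# 630k-audioset checkpoint: fusion model, intended for variable-length general audio.
general_model = laion_clap.CLAP_Module(enable_fusion=True)
general_model.load_ckpt('630k-audioset-fusion-best.pt')

# music_speech_audioset checkpoint: no fusion, larger HTSAT-base audio encoder.
music_model = laion_clap.CLAP_Module(enable_fusion=False, amodel='HTSAT-base')
music_model.load_ckpt('music_speech_audioset_epoch_15_esc_89.98.pt')

audio_files = ['generated/sample_000.wav']  # placeholder generated audio
captions = ['A lively folk tune played on acoustic guitar.']  # placeholder MusicCaps-style caption

print('630k-audioset-fusion :', clap_score(general_model, audio_files, captions))
print('music_speech_audioset:', clap_score(music_model, audio_files, captions))
```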