Question about SelfSupervisedLoss #632
-
Hi Kevin! First of all, thanks for the contribution, nice repository. I want to ask you about the new SelfSupervisedLoss. Could you explain in general how it works (if I understand correctly, the labels are generated randomly) and how the augmented data should be generated? It would be interesting and useful to have a code example with this new feature. Thanks in advance!
-
It assumes that:

- `ref_emb[i]` is an augmented version of `embeddings[i]`.
- `ref_emb[i]` is the only augmented version of `embeddings[i]` in the batch.

Here's how it's implemented: see `pytorch-metric-learning/src/pytorch_metric_learning/losses/self_supervised_loss.py`, lines 48 to 62 in c38c07c.

A pseudo-code example of how you can use it is sketched below. I've added "example for SelfSupervisedLoss" to my todo list: #633
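A minimal usage sketch, assuming the documented wrapper API; `encoder` and `augment` here are hypothetical stand-ins for your real model and augmentation pipeline:

```python
import torch
from pytorch_metric_learning import losses

# Wrap any loss that normally takes (embeddings, labels);
# SelfSupervisedLoss generates the labels internally.
loss_fn = losses.SelfSupervisedLoss(losses.TripletMarginLoss())

# Hypothetical stand-ins: replace with your real encoder and augmentations.
encoder = torch.nn.Linear(128, 64)

def augment(x):
    # Placeholder augmentation (additive noise); in practice use crops,
    # flips, color jitter, etc. appropriate to your data.
    return x + 0.1 * torch.randn_like(x)

data = torch.randn(32, 128)          # a batch of inputs
embeddings = encoder(data)           # shape (32, 64)
ref_emb = encoder(augment(data))     # ref_emb[i] augments embeddings[i]

loss = loss_fn(embeddings, ref_emb)  # positives: (i, i); negatives: (i, j != i)
loss.backward()
```

The key constraint is the one in the bullets above: the two embedding batches must be aligned index by index.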
-
I think that this is implied in all of the comments and documentation so far, but just to be sure: each batch index gets its turn as the index for the positive pair, with all other indices in the batch treated as negatives. Just double-checking, because the alternative would be that only the 0th index forms a positive pair and all the others are treated as negatives. But I don't think the documentation (or my review of the code) indicates that this is the case. Rather, I think the former (each batch index gets its turn as the index for the positive pair) is how things work.
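For what it's worth, a quick way to see that structure, assuming the labels are generated as `torch.arange(batch_size)` for both embedding sets (which is what the linked lines appear to do):

```python
import torch

# With arange labels attached to both `embeddings` and `ref_emb`,
# index i matches only ref_emb[i]: exactly one positive per row
# (the diagonal), and every j != i is a negative.
batch_size = 4
labels = torch.arange(batch_size)
ref_labels = torch.arange(batch_size)

positive_mask = labels.unsqueeze(1) == ref_labels.unsqueeze(0)
print(positive_mask.int())
# tensor([[1, 0, 0, 0],
#         [0, 1, 0, 0],
#         [0, 0, 1, 0],
#         [0, 0, 0, 1]], dtype=torch.int32)
# Every batch index gets its turn as the anchor of a positive pair,
# not just index 0.
```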