Hi, I have some questions about this loss function, because there are some differences between the paper and the code.
First, the paper:
The loss is the sum of the average log-softmax between the query point and the other prototypes and the average distance between the query point and its corresponding prototype.
I had the same confusion when replicating the paper's results.
You could basically train with plain cross-entropy and the results would be the same.
In this formula, the author applies log_softmax to the negative distances, then selects only the entries for the correct classes with the gather function, and takes the mean over those.
Since -log_softmax(-d)_k expands to d_k plus the log-sum-exp of the negative distances, this is exactly the paper's two-term loss, so the two forms are equivalent.
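To make the equivalence concrete, here is a minimal pure-Python sketch (the distance values and shapes are made-up toy data, not from the repository): gathering the target entry of a log-softmax over negative distances gives the same number as the paper's "target distance plus log-sum-exp" form.

```python
import math

def log_softmax(logits):
    # numerically stable log-softmax over a list of logits
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

# toy squared distances from 3 query points to 2 prototypes (hypothetical values)
dists = [[0.5, 2.0], [1.5, 0.2], [0.1, 3.0]]
targets = [0, 1, 0]

# code's form: log_softmax over negative distances, "gather" the target entry, mean
loss_code = -sum(log_softmax([-d for d in row])[t]
                 for row, t in zip(dists, targets)) / len(dists)

# paper's form: average target distance plus average log-sum-exp of negative distances
loss_paper = sum(row[t] + math.log(sum(math.exp(-d) for d in row))
                 for row, t in zip(dists, targets)) / len(dists)

assert abs(loss_code - loss_paper) < 1e-9
```

The same identity is why running a standard cross-entropy loss on the negative distances (treated as logits) reproduces the paper's loss.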
But in the code:
The loss is only the mean of the log-softmax terms between the query points and their corresponding prototypes.
I'm confused. Is the problem with my understanding of the code, or with my understanding of the paper?