In the paper, the authors highlight that using contrastive learning can carry richer environmental information, and the t-SNE visualization indicates this method's stronger representation ability.
However, when I attempted to run the open-source code, the swap loss didn't drop, regardless of whether I trained for a short or long number of iterations.
I am wondering if this is an expected behavior for contrastive learning algorithms, or if there might be an issue with my implementation.
Thanks for sharing! I have run into the same problem with the swap loss. I also tried an ablation without contrastive learning by setting the swap loss to 0, and found that the linear velocity tracking reward during training seems to make no difference compared with the original version.
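For anyone reproducing this ablation, a minimal sketch of a SwAV-style swapped-prediction loss may help clarify what "setting swap loss = 0" means here. This is not the repository's actual implementation; the function name `swap_loss`, the `weight` parameter, and the shapes are assumptions for illustration only.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over prototype scores.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def swap_loss(scores_a, scores_b, codes_a, codes_b,
              temperature=0.1, weight=1.0):
    """Hypothetical SwAV-style swapped-prediction loss.

    Predict view B's cluster codes from view A's prototype scores and
    vice versa (cross-entropy between codes and softened predictions).
    Passing weight=0.0 reproduces the ablation where the contrastive
    term is switched off entirely.
    """
    p_a = softmax(scores_a / temperature)
    p_b = softmax(scores_b / temperature)
    ce = -0.5 * (np.mean(np.sum(codes_b * np.log(p_a + 1e-12), axis=1))
                 + np.mean(np.sum(codes_a * np.log(p_b + 1e-12), axis=1)))
    return weight * ce

# Toy usage with random scores and one-hot codes (illustrative only).
rng = np.random.default_rng(0)
scores_a = rng.normal(size=(8, 4))
scores_b = rng.normal(size=(8, 4))
codes = np.eye(4)[rng.integers(0, 4, size=8)]
print(swap_loss(scores_a, scores_b, codes, codes))        # positive value
print(swap_loss(scores_a, scores_b, codes, codes, weight=0.0))  # 0.0
```

If zeroing this term truly leaves the tracking reward unchanged, that would suggest the policy is not relying on the contrastive representation, which matches the observation that the swap loss never drops in the first place.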