ClashLuke opened this issue on May 24, 2022 · 2 comments
Labels: core (Improves core model while keeping core idea intact), ML (Requires machine-learning knowledge; can be built up on the fly), research (Creative project that might fail but could give high returns)
Currently, we use only the softmax-classification (cross-entropy) loss as our language-modeling objective for next-token prediction. However, other works such as T-Few showed that adding auxiliary losses during training, for example explicit length penalties, can improve downstream-task performance. Additionally, works like DCL and InfoLOOB demonstrated that changing the fundamental structure of the loss away from softmax classification can speed up convergence. A similar approach could therefore be beneficial for us.
In this issue, we'll explore whether swapping InfoLOOB's classification loss into the language-modeling objective helps, or whether we should change the entire objective.
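For reference, the two losses differ only in whether the target logit appears in the normalizer: cross-entropy is `-z_y + logsumexp(z)`, while the InfoLOOB/DCL-style variant drops the target term, `-z_y + logsumexp(z_{j≠y})`. A minimal sketch, assuming PyTorch and logits of shape `[batch, seq, vocab]` (function names are illustrative, not code from this repo):

```python
import torch
import torch.nn.functional as F


def cross_entropy_lm(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Standard softmax cross-entropy: -z_y + logsumexp(z)."""
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())


def infoloob_lm(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """InfoLOOB/DCL-style loss: the target logit is excluded from the
    normalizer, i.e. -z_y + logsumexp(z_{j != y}). Note this is unbounded
    below, since raising z_y no longer saturates the objective."""
    logits = logits.flatten(0, 1)                              # [N, V]
    targets = targets.flatten()                                # [N]
    pos = logits.gather(1, targets[:, None]).squeeze(1)        # target logit z_y
    masked = logits.scatter(1, targets[:, None], float("-inf"))  # drop z_y
    return (masked.logsumexp(dim=1) - pos).mean()
```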
ClashLuke added the research, ML, and core labels on May 25, 2022
PolyLoss (green) performs quite a bit worse than CrossEntropy (bisque) in the training curves.
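For context, the simplest PolyLoss variant (Poly-1 from the paper) just adds a first-order term ε · (1 − p_target) on top of cross-entropy. A sketch under the same shape assumptions as above (ε = 1.0 is illustrative, not necessarily the value used in this run):

```python
import torch
import torch.nn.functional as F


def poly1_lm(logits: torch.Tensor, targets: torch.Tensor, eps: float = 1.0) -> torch.Tensor:
    """Poly-1 from the PolyLoss paper: CE + eps * (1 - p_target).
    eps = 1.0 is an illustrative default, not this run's setting."""
    logits = logits.flatten(0, 1)                              # [N, V]
    targets = targets.flatten()                                # [N]
    ce = F.cross_entropy(logits, targets, reduction="none")
    p_t = logits.softmax(dim=1).gather(1, targets[:, None]).squeeze(1)
    return (ce + eps * (1.0 - p_t)).mean()
```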
We could still try InfoLOOB (DCL), as it appeared promising before.
However, after reaching a loss of -100, InfoLOOB ran into NaNs, which halted training. Because InfoLOOB (like DCL) removes the target term from the normalizer, the loss is unbounded below, so the optimizer keeps pushing it toward -inf until the computation overflows. Nothing like this happened with CrossEntropy, which is why CrossEntropy still produced the better final model even though its initial convergence was slower.
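If we revisit InfoLOOB, one possible (untested) guard would be to bound the per-token loss from below, so the optimizer stops being rewarded for driving it toward -inf. A hypothetical sketch, reusing the shapes from above:

```python
import torch


def infoloob_lm_clamped(logits: torch.Tensor, targets: torch.Tensor,
                        bound: float = 10.0) -> torch.Tensor:
    """Hypothetical mitigation: clamp each token's InfoLOOB loss at -bound.
    Gradients vanish below the bound, which halts the runaway behavior;
    bound = 10.0 is an arbitrary illustrative choice."""
    logits = logits.flatten(0, 1)                              # [N, V]
    targets = targets.flatten()                                # [N]
    pos = logits.gather(1, targets[:, None]).squeeze(1)
    masked = logits.scatter(1, targets[:, None], float("-inf"))
    per_token = masked.logsumexp(dim=1) - pos
    return per_token.clamp(min=-bound).mean()
```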