Alternative Losses #54

Open
ClashLuke opened this issue May 24, 2022 · 2 comments
Labels
- core: Improves core model while keeping core idea intact
- ML: Requires machine-learning knowledge (can be built up on the fly)
- research: Creative project that might fail but could give high returns

Comments

@ClashLuke
Member

Currently, we use only the softmax classification/cross-entropy loss as the language-modeling loss for next-token prediction. However, other works such as T-Few showed that adding auxiliary losses during training, such as an explicit length penalty, can improve downstream-task performance. Additionally, works like DCL and InfoLOOB demonstrated that changing the fundamental structure of the loss from softmax classification to something different can speed up convergence. A similar approach could therefore be beneficial for us.
In this issue, we'll explore whether InfoLOOB's classification loss helps for the language-modeling objective, or whether we should change the entire objective.
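For concreteness, a minimal sketch of the two candidate per-token losses, assuming logits of shape [batch, vocab] and integer targets; the function names and shapes here are illustrative, not the repo's actual API:

```python
import jax.numpy as jnp
from jax.scipy.special import logsumexp


def cross_entropy(logits: jnp.ndarray, targets: jnp.ndarray) -> jnp.ndarray:
    # Standard softmax cross-entropy: -log p(target). The target logit is part
    # of the partition function, so the per-token loss is bounded below by 0.
    logz = logsumexp(logits, axis=-1)
    target_logit = jnp.take_along_axis(logits, targets[:, None], axis=-1)[:, 0]
    return jnp.mean(logz - target_logit)


def infoloob_like(logits: jnp.ndarray, targets: jnp.ndarray) -> jnp.ndarray:
    # InfoLOOB/DCL-style variant: the target logit is dropped from the
    # normalizer (leave-one-out), so the loss is no longer bounded below.
    mask = jnp.arange(logits.shape[-1])[None, :] == targets[:, None]
    neg_logz = logsumexp(jnp.where(mask, -jnp.inf, logits), axis=-1)
    target_logit = jnp.take_along_axis(logits, targets[:, None], axis=-1)[:, 0]
    return jnp.mean(neg_logz - target_logit)
```

The only structural difference is whether the target logit participates in the normalizer; dropping it (as InfoLOOB/DCL do) removes the zero lower bound of the per-token loss.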

@ClashLuke ClashLuke added research Creative project that might fail but could give high returns ML Requires machine-learning knowledge (can be built up on the fly) core Improves core model while keeping core idea intact labels May 25, 2022
@ClashLuke
Member Author

PolyLoss vs CrossEntropy
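For reference, the Poly-1 formulation from the PolyLoss paper adds a single polynomial correction term, epsilon * (1 - p_target), on top of cross-entropy. A minimal sketch in the same style as above (the default epsilon is illustrative; the paper tunes it per task):

```python
import jax.numpy as jnp
from jax.nn import log_softmax


def poly1_cross_entropy(logits: jnp.ndarray, targets: jnp.ndarray,
                        epsilon: float = 1.0) -> jnp.ndarray:
    # Poly-1: cross-entropy plus epsilon * (1 - p_target), the leading term of
    # the polynomial expansion of CE around the target probability.
    log_probs = log_softmax(logits, axis=-1)
    target_logp = jnp.take_along_axis(log_probs, targets[:, None], axis=-1)[:, 0]
    ce = -target_logp
    poly1 = epsilon * (1.0 - jnp.exp(target_logp))
    return jnp.mean(ce + poly1)
```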

@ClashLuke
Member Author

PolyLoss (green) performs quite a bit worse than CrossEntropy (bisque):
[figure: training-loss curves, PolyLoss (green) vs CrossEntropy (bisque)]

We could still try InfoLOOB (DCL) as it appeared promising before:
[figure: earlier training-loss curves where InfoLOOB (DCL) looked promising against CrossEntropy]
However, after reaching a loss of -100, InfoLOOB ran into NaNs, which halted training. Nothing like this happened with CrossEntropy, which is why CrossEntropy produced the better final model even though its initial convergence was slower.
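One plausible reading of the divergence, given the loss definitions sketched above (with $z$ the per-token logits and $y$ the target index; this is a hypothesis, not a confirmed root cause): cross-entropy is bounded below by zero, while the leave-one-out variant is not, so a model that keeps pushing the target logit far above all others drives the loss toward minus infinity until the arithmetic breaks down:

$$
\mathcal{L}_{\mathrm{CE}} = \log\sum_{j} e^{z_j} - z_y \;\ge\; 0,
\qquad
\mathcal{L}_{\mathrm{LOOB}} = \log\sum_{j \ne y} e^{z_j} - z_y \;\longrightarrow\; -\infty
\quad \text{as } z_y - \max_{j \ne y} z_j \to \infty .
$$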
