ClashLuke opened this issue on Apr 30, 2022 · 0 comments

Labels:
- core: Improves core model while keeping core idea intact
- ML: Requires machine-learning knowledge (can be built up on the fly)
- research: Creative project that might fail but could give high returns
Many modern architectures, such as Memorizing Transformers, RETRO, and PKM, have an explicit memory from which the model can retrieve information and, optionally, into which it can store information. Some hypothesise that Mixture-of-Experts embeds fuzzy representations of books and other material it must memorise into its weights.
That's why adding an explicit memory to our models could give them a considerable performance boost. Instead of storing this information in dense layers, where the weights have to fight over whether to store concepts or memorise sequences, our model would be able to do both.
This issue is about implementing such an explicit memory (be it PKM, MoE, or even a new architecture) and thereby improving the convergence of our language model at the same runtime.
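For reference, here is a minimal sketch of what a PKM-style retrieval layer could look like, roughly following the product-key memory idea from Lample et al. (2019). All sizes, names, and the class itself are illustrative assumptions, not a commitment to any particular design for this issue:

```python
# Illustrative product-key memory sketch (PyTorch). Sizes and names are placeholders.
import torch
import torch.nn.functional as F


class ProductKeyMemory(torch.nn.Module):
    def __init__(self, dim: int, n_keys: int = 256, topk: int = 8):
        super().__init__()
        self.topk = topk
        half = dim // 2
        # Two sub-key tables of n_keys entries each span n_keys**2 memory slots.
        self.keys = torch.nn.Parameter(torch.randn(2, n_keys, half) * half ** -0.5)
        self.values = torch.nn.EmbeddingBag(n_keys ** 2, dim, mode="sum")
        self.query = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [batch, dim] (tokens flattened into the batch dimension).
        q = self.query(x)
        q1, q2 = q.chunk(2, dim=-1)                 # split query into two halves
        s1 = q1 @ self.keys[0].t()                  # [batch, n_keys]
        s2 = q2 @ self.keys[1].t()
        v1, i1 = s1.topk(self.topk, dim=-1)         # top-k per half
        v2, i2 = s2.topk(self.topk, dim=-1)
        # Combine both halves into topk**2 candidate slots, then re-select top-k.
        scores = (v1.unsqueeze(-1) + v2.unsqueeze(-2)).flatten(1)
        idx = (i1.unsqueeze(-1) * self.keys.shape[1] + i2.unsqueeze(-2)).flatten(1)
        best, pos = scores.topk(self.topk, dim=-1)
        slots = idx.gather(-1, pos)
        weights = F.softmax(best, dim=-1)
        # Weighted sum over the retrieved value embeddings.
        return self.values(slots, per_sample_weights=weights)
```

The key property is that retrieval cost scales with the number of sub-keys (sqrt of the slot count), so the memory can be made very large without a proportional runtime increase, which is what makes "same runtime, better convergence" plausible.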
ClashLuke added the research and ML labels on Apr 30, 2022.