Hi,
very cool project, thanks for making this!
Playing around with this, I was wondering: at what rate should new learning items be introduced with Ebisu?
Say I have an effectively infinite (or at least very large) collection of items, like the 10k most common words in a target language. How do I handle that? If I just set all of them to due now, will the Bayesian statistics take care of it in a reasonable way? And will this result in more of a depth-first approach (drilling the same items until perfection) or a breadth-first one (learning many items a little)? Or should the wrapper learning app take care of introducing new items gradually?
To be clear, this is not a complaint or anything, and I totally understand if this is out of scope. After all, Anki doesn't handle this either; instead it prompts the user to manually set a rate of introduction for new cards.
So I'm just asking: is there any kind of recommendation, or any interesting findings, regarding this?
Cheerz :)
Yeahhh, great question, but alas this is out of scope for Ebisu. If you initialize the same model for 1000 flashcards, then in an hour all of them will have the same probability of recall, so presumably your student will do a bunch before passing out. The models that were reviewed will get updated and the rest won't, so repeated quiz sessions will slowly lead to increasing diversity among the models.
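To illustrate that dynamic, here is a toy sketch. It deliberately uses plain exponential forgetting with a halflife that doubles on a pass, which is *not* Ebisu's actual Beta-distribution math; the point is only to show how identical initial models diversify once the student reviews a subset:

```python
# Toy illustration: 1000 cards start with identical models, but only the
# cards the student actually reviews get updated, so the population of
# models diversifies over repeated sessions. Simple exponential forgetting
# stands in for Ebisu's Bayesian model here.

def predict_recall(halflife_hours: float, elapsed_hours: float) -> float:
    """Probability of recall after `elapsed_hours`, given a halflife."""
    return 2.0 ** (-elapsed_hours / halflife_hours)

def update_on_success(halflife_hours: float) -> float:
    """Crude stand-in for a Bayesian update: a pass doubles the halflife."""
    return 2.0 * halflife_hours

# All cards initialized identically, with a 24-hour halflife.
cards = {f"card{i}": 24.0 for i in range(1000)}

# One hour later, every card has exactly the same recall probability...
p0 = predict_recall(cards["card0"], 1.0)
assert all(predict_recall(h, 1.0) == p0 for h in cards.values())

# ...then the student reviews the first 50 (and passes) before stopping.
for name in list(cards)[:50]:
    cards[name] = update_on_success(cards[name])

# A day later, the reviewed cards are less urgent than the rest.
reviewed = predict_recall(cards["card0"], 24.0)
unreviewed = predict_recall(cards["card999"], 24.0)
print(reviewed > unreviewed)  # reviewed cards now have higher predicted recall
```

Sorting all cards by predicted recall and quizzing the lowest ones first then naturally interleaves old weak cards with never-reviewed ones.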
I expect apps that use Ebisu to have opinions about how many items to introduce per day, or to get that information from users, etc.
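One hypothetical app-side policy (this is an assumption on my part, not anything Ebisu provides) is to cap new introductions per day and back off when the review backlog grows; the `max_new_per_day` and `due_budget` parameters below are made-up names:

```python
# Hypothetical app-level policy, not part of Ebisu: introduce new cards
# only when there is headroom left after the already-due reviews.

def cards_to_introduce(due_count: int,
                       max_new_per_day: int = 20,
                       due_budget: int = 50) -> int:
    """How many brand-new cards to show today, given `due_count` reviews
    already due. Backs off linearly as the review backlog grows."""
    headroom = max(0, due_budget - due_count)
    return min(max_new_per_day, headroom)

print(cards_to_introduce(due_count=0))   # empty queue: full batch of 20
print(cards_to_introduce(due_count=40))  # busy day: only 10 new cards
print(cards_to_introduce(due_count=90))  # backlog: introduce none
```

This keeps the breadth-vs-depth tradeoff in the app's hands while Ebisu just answers "what's weakest right now?".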
I do hope that in the future we can find some way to formalize inter-card correlation. It'd be nice to have some mechanism to infer from a bunch of quizzes that "flashcard X and flashcard Y are confusers, so if a user failed a quiz on X, let's automatically weaken the memory on Y" (you could do this with Ebisu's fuzzy-binary quizzes; we talked a bit about this in fasiha/ebisu#63 (comment)). But right now, all the quiz apps I make explicitly track such inter-flashcard relationships themselves, without any help from Ebisu.
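At the app level, that "weaken the confuser too" idea might look like the following sketch. The confuser pairs, halflife bookkeeping, and penalty factors are all invented for illustration, and the multiplicative halflife penalty is a crude stand-in for the softened fuzzy-binary update described above:

```python
# Hypothetical app-level handling of "confuser" pairs; Ebisu itself does
# not track inter-card relationships. On a failed quiz for card X, apply
# a milder penalty to every card linked to X, mimicking a partial/noisy
# quiz result on those cards.

confusers = {"ser": ["estar"], "estar": ["ser"]}  # made-up confuser pairs
halflives = {"ser": 48.0, "estar": 48.0}          # hours

def on_failure(card: str,
               penalty: float = 0.5,
               confuser_penalty: float = 0.8) -> None:
    """Halve the failed card's halflife; nudge its confusers down gently."""
    halflives[card] *= penalty
    for other in confusers.get(card, []):
        halflives[other] *= confuser_penalty

on_failure("ser")
print(halflives)  # 'ser' drops to 24.0 hours; 'estar' softens to 38.4
```

The same structure would work with real Ebisu models by substituting a full-strength update for the failed card and a soft (fractional-success) update for its confusers.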
I do agree, inter-card relationships would be cool (but also a very deep rabbit hole)! I think I need to check out the fuzzy-binary function next; so far I've been lazy and done everything with Anki-style ratings...
Anyway, thanks again, and looking forward to seeing what you come up with in the future.