I was looking into #149 and realized that the underlying problem is the assumption that an `NGramDocument`'s n-grams are made up of individual tokens, when each n-gram is actually just a single, plain string (hence the unexpected stemming behavior).
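For reference, here's a quick illustration of the current behavior (a minimal sketch; the exact dictionary contents may vary by version):

```julia
using TextAnalysis

# Bigrams are stored as single, space-joined strings, not token vectors:
d = NGramDocument("the quick brown fox jumps", 2)
ngrams(d)
# => Dict{String,Int} with entries like "quick brown" => 1, "brown fox" => 1

# Stemming therefore hands each whole n-gram string to the stemmer as if it
# were one word, which is where the surprising results in #149 come from:
stem!(d)
```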
It might be worth considering altering `NGramDocument` to contain vectors of strings (i.e., tokens), or making the n-grams behave more like the way `StringDocument`s are treated. Or, if it would make more sense, we could just make `NGramDocument`s actually consist of `TokenDocument`s or `StringDocument`s. Personally, I lean towards making each n-gram a vector of strings or its own `TokenDocument`; that seems to best match what an n-gram actually is.
With any of these approaches, it looks like all this would fundamentally take to implement is updating how a "token" of an n-gram is defined in `ngramizer.jl` (see the sketch below). I'd be glad to implement this (and fix the other issue mentioned) if people think it's worth pursuing! I'm just not sure which direction would be most beneficial, or whether it would introduce any unforeseen issues.
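To make the vector-of-strings option concrete, here's a hypothetical sketch (`ngramize_tokens` is a made-up name, not the existing `ngramize`; the real change would live in `ngramizer.jl`):

```julia
# Hypothetical: represent each n-gram as a vector of tokens instead of a
# space-joined string, so the tokens stay individually addressable.
function ngramize_tokens(tokens::Vector{String}, n::Int)
    grams = Dict{Vector{String}, Int}()
    for i in 1:(length(tokens) - n + 1)
        g = tokens[i:i+n-1]              # keep the tokens separate
        grams[g] = get(grams, g, 0) + 1
    end
    return grams
end

ngramize_tokens(["the", "quick", "brown", "fox"], 2)
# => Dict(["the","quick"] => 1, ["quick","brown"] => 1, ["brown","fox"] => 1)
```

With this representation, per-token operations like stemming could apply to each token of the n-gram individually rather than to the joined string.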