Enry right now consists of a sequence of matching strategies that narrow down the possible language options based on the different information available (each strategy is sketched in Go right after this list):
filename + extension
first line of the content
regexp heuristics over the raw content
naive Bayesian classifier over the tokenized content
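A minimal sketch of calling the individual strategies through enry's Go API; the function names and the import path are assumptions about the current package layout and should be adjusted to the version in use:

package main

import (
	"fmt"

	"gopkg.in/src-d/enry.v1" // import path is an assumption; adjust to the enry version in use
)

func main() {
	filename := "main.go"
	content := []byte("package main\n\nfunc main() {}\n")

	// Each strategy can be invoked independently and returns a narrowed-down
	// list of candidate languages.
	byExtension := enry.GetLanguagesByExtension(filename, content, nil)   // filename + extension
	byShebang := enry.GetLanguagesByShebang(filename, content, nil)       // first line of the content
	byContent := enry.GetLanguagesByContent(filename, content, nil)       // regexp heuristics
	byClassifier := enry.GetLanguagesByClassifier(filename, content, nil) // naive Bayes over tokens

	fmt.Println(byExtension, byShebang, byContent, byClassifier)
}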
As a user, since each strategy can be used independently, I would like to know how accurate the language detection is for each of the distinct use cases (a sketch of how the cases map onto the API follows the list below).
Use cases
all strategies together (default)
filename-only language detection
content-only language detection
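A minimal sketch of how the three use cases could be driven through the top-level API; passing nil content or an empty filename to force single-signal detection is an assumption about how the evaluation would be run, not documented behaviour:

package main

import (
	"fmt"

	"gopkg.in/src-d/enry.v1" // import path is an assumption; adjust to the enry version in use
)

func main() {
	content := []byte("#!/usr/bin/env python\nprint('hi')\n")

	// All strategies together (default): both filename and content are available.
	all := enry.GetLanguage("parse.py", content)

	// Filename-only: drop the content so only filename/extension strategies can fire.
	nameOnly := enry.GetLanguage("parse.py", nil)

	// Content-only: drop the filename so only content-based strategies can fire.
	contentOnly := enry.GetLanguage("", content)

	fmt.Println(all, nameOnly, contentOnly)
}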
Evaluation
Right now, the only measure of the overall accuracy of the language detection process we have is binary (similar to linguist): whether the files in linguist/examples/ are all classified correctly or not.
This issue is about picking a better way of quantifying the prediction quality for the three use cases above, e.g. overall and per-language accuracy as sketched below.
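One candidate for a finer-grained measure, as a sketch only: compute overall accuracy plus a per-language breakdown from parallel lists of expected and predicted languages. Whether this or some other metric (e.g. macro-averaged F1) is the right one is exactly what the notebook should settle.

package main

import "fmt"

// perLanguageAccuracy computes overall accuracy and a per-language breakdown
// from parallel slices of expected and predicted language names.
func perLanguageAccuracy(expected, predicted []string) (float64, map[string]float64) {
	correct := 0
	perTotal := map[string]int{}
	perCorrect := map[string]int{}

	for i, want := range expected {
		perTotal[want]++
		if predicted[i] == want {
			correct++
			perCorrect[want]++
		}
	}

	perLang := make(map[string]float64, len(perTotal))
	for lang, n := range perTotal {
		perLang[lang] = float64(perCorrect[lang]) / float64(n)
	}
	return float64(correct) / float64(len(expected)), perLang
}

func main() {
	expected := []string{"Go", "Go", "Python", "C"}
	predicted := []string{"Go", "C", "Python", "C"}
	overall, perLang := perLanguageAccuracy(expected, predicted)
	fmt.Println(overall, perLang)
}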
Steps
a notebook with a PoC of the evaluation, to pick the best metric (using the Python API from Python bindings for enry #154)
a script that runs enry for each use case on this dataset with the chosen metric (e.g. as part of CI, sketched below)
The focus of this task is not to get the best possible evaluation, but rather to quickly kick off the automation of having at least some evaluation, which will be improved in subsequent work.
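A minimal sketch of what such a CI script could look like, assuming a linguist-style dataset layout where each sample sits under a directory named after its language (samples/<Language>/<file> is an assumption about the dataset, as is the 0.85 threshold):

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/src-d/enry.v1" // import path is an assumption; adjust to the enry version in use
)

func main() {
	root := "samples"       // assumed layout: samples/<Language>/<file>
	const threshold = 0.85  // placeholder value; the real bar should come out of the notebook PoC

	total, correct := 0, 0
	err := filepath.Walk(root, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() {
			return err
		}
		want := filepath.Base(filepath.Dir(path)) // ground-truth language = parent directory name
		content, readErr := os.ReadFile(path)
		if readErr != nil {
			return readErr
		}
		got := enry.GetLanguage(filepath.Base(path), content) // default use case: all strategies
		total++
		if got == want {
			correct++
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	acc := float64(correct) / float64(total)
	fmt.Printf("accuracy: %.4f (%d/%d)\n", acc, correct, total)
	if acc < threshold {
		os.Exit(1) // fail CI when detection quality regresses
	}
}

The same loop could be repeated for the filename-only and content-only use cases by dropping the content or the filename argument, as in the earlier sketch.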