I have two questions about the pre-processed NQ training data.
How is it possible for 'has_gold_answer' to be False when 'em' is 1 and 'f1' is 1.0?
What criteria were used to select 'positive_ctxs'? For QA tasks, it is mentioned that the context with the highest EM score was chosen, but how were 'positive_ctxs' set when multiple sentences had an EM of 1?
has_gold_answer denotes whether the retrieved documents contain the gold answer, while EM and F1 measure whether the model outputs the correct answer. It is possible for the model to output the correct answer even though it does not appear in the retrieved documents.
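For illustration, here is a minimal sketch of how the two signals can diverge. This is not the repository's actual code; the answer normalization follows the common SQuAD-style convention:

```python
import re
import string


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, gold_answers: list[str]) -> int:
    """EM: does the model output match any gold answer after normalization?"""
    return int(any(normalize(prediction) == normalize(g) for g in gold_answers))


def has_gold_answer(retrieved_passages: list[str], gold_answers: list[str]) -> bool:
    """Does any retrieved passage contain a gold answer as a substring?"""
    return any(
        normalize(g) in normalize(p) for p in retrieved_passages for g in gold_answers
    )


# The two can disagree: the reader may produce the answer from parametric
# knowledge even when no retrieved passage contains it.
passages = ["The Eiffel Tower is a landmark in France."]
golds = ["Paris"]
prediction = "Paris"
print(has_gold_answer(passages, golds))  # False
print(exact_match(prediction, golds))    # 1
```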
When there are multiple sentences with an EM of 1, we select the one with the highest P(gold_answer | sentence), i.e. the sentence that leads to the highest probability of the gold answer when prepended.
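As an illustration of this selection step, a minimal sketch assuming a T5-style seq2seq reader scored with Hugging Face transformers. The prompt format and function names here are hypothetical, not the repository's actual implementation:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base").eval()


@torch.no_grad()
def answer_log_prob(sentence: str, question: str, gold_answer: str) -> float:
    """Sum of token log-probs of the gold answer, conditioned on the
    candidate sentence prepended to the question."""
    inputs = tokenizer(
        f"context: {sentence} question: {question}", return_tensors="pt"
    )
    labels = tokenizer(gold_answer, return_tensors="pt").input_ids
    logits = model(**inputs, labels=labels).logits  # (1, answer_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, labels.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.sum().item()


def pick_positive_ctx(candidates: list[str], question: str, gold: str) -> str:
    """Among sentences that already achieve EM = 1, keep the one whose
    prepending maximizes P(gold_answer | sentence, question)."""
    return max(candidates, key=lambda s: answer_log_prob(s, question, gold))
```

Summing the per-token log-probabilities of the gold answer is one standard way to score log P(gold_answer | sentence, question) under a seq2seq model.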
You mention that, when there are multiple sentences with an EM of 1, you select the sentence that results in the highest probability of generating the gold answer. How is this probability measured exactly? Also, could you share the code used to produce the pre-processed training data? That would be really helpful.