Common Problem: KeyError: 0 for Metrics during "Run ragas metrics for evaluating RAG" #1784
Comments
Hi there~ I ran into the same problem as you. It failed on "Faithfulness", so I commented out that metric and used the others, and then it succeeded! Here is the code snippet:
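(The snippet itself did not come through in this thread; below is a minimal sketch of the workaround on ragas 0.2.x with Faithfulness commented out. The model name, the LangChain wrapper, and the sample record are assumptions, not the original code.)

```python
# Minimal sketch (not the original snippet): evaluate with a subset of metrics,
# leaving Faithfulness commented out as a workaround.
from langchain_openai import ChatOpenAI

from ragas import EvaluationDataset, evaluate
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import (
    FactualCorrectness,
    Faithfulness,
    LLMContextRecall,
)

# Placeholder judge model; any LangChain chat model wrapper should work here.
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini"))

# Placeholder sample in the ragas 0.2.x single-turn schema.
eval_dataset = EvaluationDataset.from_list(
    [
        {
            "user_input": "When was the Eiffel Tower completed?",
            "retrieved_contexts": ["The Eiffel Tower was completed in 1889."],
            "response": "It was completed in 1889.",
            "reference": "The Eiffel Tower was completed in 1889.",
        }
    ]
)

result = evaluate(
    dataset=eval_dataset,
    metrics=[
        LLMContextRecall(),
        FactualCorrectness(),
        # Faithfulness(),  # commented out -- this metric triggered the failure for me
    ],
    llm=evaluator_llm,
)
print(result)
```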
Hi @Austin-QW, thanks for your response!
I got NaN on "Faithfulness" — is anyone else experiencing the same problem?
@neverlatetolearn0, following shahules786's explanation in #1773, NaN means the result is undetermined.
The problem has been fixed since 0.2.9.
[x] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug
I have the same problem as trish11953 from issue #1770 when I use only the FactualCorrectness or LLMContextPrecisionWithReference metric (it fails about 1 out of 5 times). Maybe it's also somehow similar to the issue with Faithfulness?
I also checked the LLMContextRecall and SemanticSimilarity metrics — they work perfectly each time.
UPD: I checked the LLMContextRecall metric again and received the same error shown in the error trace. I think it's a common problem for all metrics.
Ragas version: 0.2.8
Python version: 3.12.2
Code to Reproduce
...
Error trace
Expected behavior
Output the evaluation results without errors.