Error while using Gemini models with RAGAS #1632
Comments
@a-s-poorna thanks for reporting this. This does mean the LLM failed in generation, but it could be an error on our end. Do you use any tracing tools?
Hi @jjmachan. The error I am facing is this, in the updated ragas==0.2.2 version. When we were using ragas earlier, we didn't face this issue with the Gemini model.
Hey @a-s-poorna, I will check it out, but I don't have access to a Gemini model off the bat and will have to set aside some time to configure everything. What would help is having access to the metadata for the response, like so; any tracing tool will have this. This one is from LangSmith, but you can also use Arize, which runs locally.
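For reference, one quick way to look at that response metadata without a full tracing setup is to call the model directly and inspect what LangChain returns. This is only a minimal sketch, assuming the langchain-google-genai integration; the model name and prompt are illustrative:

```python
from langchain_core.messages import HumanMessage
from langchain_google_genai import ChatGoogleGenerativeAI

# Illustrative model name; use whichever Gemini model you evaluate with.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")

# generate() returns an LLMResult, the same object ragas' is_finished_parser receives.
result = llm.generate([[HumanMessage(content="Say hello.")]])
gen = result.generations[0][0]

print(gen.generation_info)            # may contain 'finish_reason', e.g. "STOP"
print(gen.message.response_metadata)  # chat generations also carry metadata here
```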
Hey @jjmachan, I investigated using Arize. Gemini signals a finished generation with finish_reason == "STOP" in generation_info, which the default is_finished check apparently does not recognize, so ragas flags the generation as incomplete. As a workaround, one can bypass the issue by providing the following custom is_finished_parser:

```python
import typing as t

from langchain_core.messages import BaseMessage
from langchain_core.outputs import ChatGeneration, LLMResult
from ragas.llms import LangchainLLMWrapper


def custom_is_finished_parser(response: LLMResult):
    is_finished_list = []
    for g in response.flatten():
        resp = g.generations[0][0]
        if resp.generation_info is not None:
            # generation_info is provided - so we parse that
            # Gemini uses "STOP" to indicate that the generation is finished,
            # stored under the 'finish_reason' key in generation_info
            if resp.generation_info.get("finish_reason") is not None:
                is_finished_list.append(
                    resp.generation_info.get("finish_reason") == "STOP"
                )
        # if generation_info is empty, we parse the response_metadata
        # this is less reliable
        elif (
            isinstance(resp, ChatGeneration)
            and t.cast(ChatGeneration, resp).message is not None
        ):
            resp_message: BaseMessage = t.cast(ChatGeneration, resp).message
            if resp_message.response_metadata.get("finish_reason") is not None:
                is_finished_list.append(
                    resp_message.response_metadata.get("finish_reason") == "STOP"
                )
        # default to True
        else:
            is_finished_list.append(True)
    return all(is_finished_list)

...

ragas_llm = LangchainLLMWrapper(
    llm,
    is_finished_parser=custom_is_finished_parser,
)
```

Best,
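For completeness, the wrapped LLM can then be passed to ragas as usual. A minimal sketch, assuming the ragas 0.2.x evaluate API and an already-built EvaluationDataset (the dataset and the chosen metrics here are illustrative):

```python
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# `dataset` is assumed to be an existing ragas EvaluationDataset;
# `ragas_llm` is the LangchainLLMWrapper built with the custom parser above.
results = evaluate(
    dataset,
    metrics=[faithfulness, answer_relevancy],
    llm=ragas_llm,
)
print(results)
```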
Thank you for the help @cymarechal-devoteam
[ ] I checked the documentation and related resources and couldn't find an answer to my question.
Your Question
How do I integrate Gemini models with RAGAS without running into this error?
Code Examples
Additional context
We are following the official RAGAS documentation and built everything as described there. For the llm we are trying to pass Gemini 1.5 Pro and Gemini 1.5 Flash, but we get

The LLM generation was not completed. Please increase try increasing the max_tokens and try again. (Error)

even for very small token counts.
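For anyone wiring this up from scratch, a minimal end-to-end setup along the lines described in the issue might look like the sketch below. This is an illustration only, not the reporter's actual code: it assumes the langchain-google-genai integration, illustrative model names, and the custom_is_finished_parser shared in the workaround above.

```python
from langchain_google_genai import ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper

# Illustrative model names; Gemini 1.5 Flash can be swapped in the same way.
llm = ChatGoogleGenerativeAI(model="gemini-1.5-pro")
embeddings = GoogleGenerativeAIEmbeddings(model="models/text-embedding-004")

# Wrap for ragas; the custom parser makes Gemini's "STOP" finish reason
# count as a completed generation instead of triggering the max_tokens error.
ragas_llm = LangchainLLMWrapper(llm, is_finished_parser=custom_is_finished_parser)
ragas_embeddings = LangchainEmbeddingsWrapper(embeddings)
```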