Add support to parse VertexAI Model garden Llama2 model output #14718
Conversation
@baskaryan Yes, absolutely. If this has not been accomplished there already, I would love to make that change. Thanks.
@izo0x90 Could you share the exact error you're getting, please? There's an integration test for Llama and it seems to be working fine.
Sorry for the tardy response. I finally had the time to create an updated PR against the new location that houses this functionality: langchain-ai/langchain-google#41. As for why the test passes: the core of the issue is that the generic class interprets each character of the Llama2 response as a "prediction", so it creates a separate Generation object for every character of the response text. The test only checks that the "correct" objects are generated and that the response type is a string, which a single letter of the response text satisfies, so the test technically passes. Of course, this is not what we actually need as end users.
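To make the failure mode above concrete, here is a minimal, hypothetical sketch (function and field names are illustrative, not the actual langchain implementation): when the endpoint returns a single string and the handler iterates over it as if it were a list of predictions, Python iteration yields one character at a time, so one Generation-like object is produced per character.

```python
def parse_predictions_generic(predictions):
    # Generic handler behavior: iterate over `predictions` directly.
    # If `predictions` is a bare string (as Llama2 on Model Garden
    # returns), iteration yields individual characters, producing one
    # "generation" per character.
    return [{"text": p} for p in predictions]


def parse_predictions_llama2(predictions):
    # Llama2-aware behavior: wrap a bare string so the whole response
    # becomes a single generation.
    if isinstance(predictions, str):
        predictions = [predictions]
    return [{"text": p} for p in predictions]


response = "Hello from Llama2"
print(len(parse_predictions_generic(response)))  # one item per character
print(len(parse_predictions_llama2(response)))   # a single item
```

This also shows why a type-only check passes: each per-character generation still holds a `str`, so asserting `isinstance(gen["text"], str)` succeeds even on the broken output.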
Description: The output of Llama2 models in the GCP Vertex AI Model Garden is not compatible with the generic VertexAiModelGarden handler class. This PR adds a class that handles and properly parses the output returned by those models. The handler also adds support for storing and passing model parameters in a way that is consistent with the core VertexAI LLM models.
Dependencies: None
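The shape of the handler described above can be sketched as follows. This is an illustrative outline only, assuming hypothetical names (`Llama2ModelGardenSketch`, `params`); the real class lives in langchain-google and differs in detail. It shows the two behaviors the description mentions: storing model parameters like the core VertexAI LLM classes do, and parsing a plain-string prediction into a single generation rather than one per character.

```python
class Llama2ModelGardenSketch:
    """Illustrative handler for Llama2 output from Vertex AI Model Garden."""

    def __init__(self, temperature=0.7, max_tokens=256):
        # Store model parameters so they can be passed through to the
        # endpoint, mirroring the core VertexAI LLM classes.
        self.params = {"temperature": temperature, "max_tokens": max_tokens}

    def parse(self, prediction):
        # Llama2 endpoints return the whole completion as one string;
        # wrap it so downstream code sees a single generation instead of
        # one generation per character.
        if isinstance(prediction, str):
            return [{"text": prediction}]
        return [{"text": p} for p in prediction]


model = Llama2ModelGardenSketch(temperature=0.2)
print(model.parse("full response text"))  # one generation for the whole string
```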