
[bug] error with openai proxy and lang chain #1052

Open
dtam opened this issue Sep 5, 2024 · 5 comments
Labels
bug Something isn't working

Comments

@dtam
Contributor

dtam commented Sep 5, 2024

Describe the bug
Invoking a chain via chain.invoke, using the OpenAI callable in LangChain with a base URL pointing at a Guardrails server, should work, but it errors on a role of type None.

To Reproduce
Steps to reproduce the behavior:

  1. Set up a Guardrails server with one guard and one validator, such as gibberish text
  2. Configure a LangChain chain with an OpenAI callable whose base_url is set to the Guardrails server's guards/guard_name/openai/v1 endpoint
  3. Invoke the chain synchronously

Expected behavior
The call should complete and the output should be validated.

Library version:
0.5.0

Additional context

@dtam dtam added the bug Something isn't working label Sep 5, 2024
@CalebCourier
Collaborator

@dtam I think they originally were using ainvoke which might explain the behaviour. I believe this comment in the docs means that, for LCEL, ainvoke is equivalent to astream:

Whenever using runnables without LCEL, make sure to call .astream() on LLMs rather than .ainvoke to force the LLM to stream tokens.

Source: https://python.langchain.com/v0.1/docs/expression_language/interface/#async-stream-events-beta

This would explain why it worked when telling the model to activate streaming (that should send the stream=True keyword to the Guardrails server). I'm still not sure why it blew up otherwise, but it probably has something to do with LangChain expecting a text/event-stream response while receiving application/json.


github-actions bot commented Oct 6, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.

@github-actions github-actions bot added the Stale label Oct 6, 2024

This issue was closed because it has been stalled for 14 days with no activity.

@github-actions github-actions bot closed this as not planned Won't fix, can't repro, duplicate, stale Oct 20, 2024
@CalebCourier CalebCourier reopened this Nov 22, 2024
@CalebCourier
Collaborator

CalebCourier commented Nov 22, 2024

I tested this again today to see where we were on being able to support this. The call actually goes through and validates appropriately, but since we are not responding with the entire message structure, including the role, LangChain's OpenAI pydantic model blows up.
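A minimal stdlib sketch (a simplified stand-in, not LangChain's actual pydantic model) of why a chat-completion message without a role field fails client-side parsing:

```python
# Simplified stand-in for the client-side check: an OpenAI-style chat message
# must carry a string "role"; the Guardrails proxy response omitted it.
def parse_message(msg: dict) -> dict:
    role = msg.get("role")
    if not isinstance(role, str):
        raise ValueError(f"invalid role: {role!r}")
    return {"role": role, "content": msg.get("content", "")}

ok = parse_message({"role": "assistant", "content": "validated output"})

try:
    parse_message({"content": "validated output"})  # role missing, as from the server
except ValueError as err:
    print(err)  # invalid role: None
```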

Also, the streaming implementation does not work with ainvoke, with or without setting streaming=True on the OpenAI model. It yields the following error in the API, meaning we're not handling the async structures correctly:
data: {"error": {"message": "object generator can't be used in 'await' expression"}}
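That message is the standard CPython TypeError raised when code awaits a plain (synchronous) generator object instead of an awaitable. A minimal reproduction of the pattern, independent of the actual server code:

```python
import asyncio

def chunk_stream():
    # A plain synchronous generator, like a streaming handler might return
    yield "data: chunk\n\n"

async def handler():
    # Bug pattern: awaiting the generator object itself instead of iterating it
    return await chunk_stream()

try:
    asyncio.run(handler())
except TypeError as err:
    print(err)  # object generator can't be used in 'await' expression
```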
