
convert_messages throws away required data (openai tool calling) #1164

Open
chiradeep opened this issue Nov 11, 2024 · 0 comments
Labels
bug Something isn't working

Comments

chiradeep commented Nov 11, 2024

  • [x] This is actually a bug report.
  • [ ] I am not getting good LLM results.
  • [ ] I have tried asking for help in the community on Discord or discussions and have not received a response.
  • [x] I have tried searching the documentation and have not found an answer.

What Model are you using?

  • [ ] gpt-3.5-turbo
  • [ ] gpt-4-turbo
  • [x] gpt-4o
  • [ ] Other (please specify)

Describe the bug
When coding a ReAct-style agent (the LLM suggests parallel tool calls, the agent makes the suggested calls, and the tool responses are provided back to the LLM), the step of providing the tool responses to the LLM fails because this line

358:       if isinstance(content, str):
359:           converted_messages.append({"role": role, "content": content})  # type: ignore

throws away the essential tool_call_id field of the message that provides the tool response to the LLM. An example tool response message is as follows:

{'role': 'tool', 'content': 'tool_response1', 'tool_call_id': 'call_apQ6WjmVhuxGz3kYXPWCwqOx'}

When the tool_call_id is dropped by line 359, this exception ensues:

instructor.exceptions.InstructorRetryException: Error code: 400 - {'error': {'message': "Missing parameter 'tool_call_id': messages with role 'tool' must have a 'tool_call_id'.", 'type': 'invalid_request_error', 'param': 'messages.[3].tool_call_id', 'code': None}}

If line 359 is replaced with

359:           converted_messages.append(message)  # type: ignore

the error goes away.
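A minimal, self-contained sketch of the behavior described above, assuming a simplified stand-in for instructor's conversion logic (the real function has more branches; only the string-content branch from lines 358–359 is modeled here):

```python
from typing import Any


def convert_messages_buggy(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # Mirrors the branch at lines 358-359: the dict is rebuilt from only
    # `role` and `content`, so extra fields like `tool_call_id` are lost.
    out = []
    for m in messages:
        if isinstance(m.get("content"), str):
            out.append({"role": m["role"], "content": m["content"]})
    return out


def convert_messages_fixed(messages: list[dict[str, Any]]) -> list[dict[str, Any]]:
    # The proposed fix: append the original message dict unchanged,
    # so `tool_call_id` (and any other extra fields) survive.
    out = []
    for m in messages:
        if isinstance(m.get("content"), str):
            out.append(m)
    return out


tool_msg = {
    "role": "tool",
    "content": "tool_response1",
    "tool_call_id": "call_apQ6WjmVhuxGz3kYXPWCwqOx",
}
```

With the buggy version, `"tool_call_id" not in convert_messages_buggy([tool_msg])[0]`, which is exactly the field OpenAI then rejects as missing; the fixed version passes it through intact.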

To Reproduce

  1. Call the API with mode=instructor.Mode.PARALLEL_TOOLS and response_model=None.
  2. model_dump the resulting response.choices[0].message.
  3. Make the suggested tool calls.
  4. Provide the model_dump from step 2 and the tool output to the LLM in {"role": "tool", ...} messages (described here: https://platform.openai.com/docs/guides/function-calling#customizing-function-calling-behavior).
  5. Make the API call with no mode and a response_model.
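The message sequence built in steps 2–4 can be sketched as follows. The tool call id, function name, and user prompt are illustrative placeholders, and the actual client calls are omitted; the point is the shape of the messages handed back to the API in step 5:

```python
import json

# Step 2 (sketch): roughly what model_dump() of the assistant message looks
# like after a PARALLEL_TOOLS call. The id and function name are made up.
assistant_msg = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_apQ6WjmVhuxGz3kYXPWCwqOx",
        "type": "function",
        "function": {"name": "get_weather", "arguments": json.dumps({"city": "SF"})},
    }],
}

# Steps 3-4: one {"role": "tool"} message per tool call, each echoing the
# tool_call_id it answers. This is the field the conversion drops.
tool_msgs = [
    {"role": "tool", "content": "tool_response1", "tool_call_id": tc["id"]}
    for tc in assistant_msg["tool_calls"]
]

# Step 5 passes the whole history back to the API. Every tool message must
# still carry its tool_call_id, or OpenAI returns the 400 error quoted above.
messages = [
    {"role": "user", "content": "What's the weather in SF?"},
    assistant_msg,
    *tool_msgs,
]
```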

Expected behavior
The LLM reasons over the original message and the tool call output correctly.
