- [ ] I have tried asking for help in the community on Discord or discussions and have not received a response.
- [x] I have tried searching the documentation and have not found an answer.
What Model are you using?

- [ ] gpt-3.5-turbo
- [ ] gpt-4-turbo
- [x] gpt-4o
- [ ] Other (please specify)
Describe the bug
When coding a ReAct-style agent (the LLM suggests parallel tool calls, the client makes the suggested tool calls and provides the tool responses back to the LLM), the step of providing the tool response to the LLM fails because this line (`instructor/instructor/multimodal.py`, line 358 in `5dc6a4c`) throws away the essential `tool_call_id` field of the message that provides the tool response to the LLM. An example tool response message is as follows:
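The exact payload from the report is not reproduced here; a representative tool-response message in the OpenAI chat format looks like this (illustrative values only):

```python
# Illustrative tool-response message (values are made up, not from the report).
# The OpenAI API requires "tool_call_id" on every message with role "tool".
tool_response_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",       # must match the id of the model's tool call
    "content": '{"temperature_c": 21}',  # the tool's output, serialized as a string
}
```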
When the `tool_call_id` is dropped by line 359, this exception ensues:
```
instructor.exceptions.InstructorRetryException: Error code: 400 - {'error': {'message': "Missing parameter 'tool_call_id': messages with role 'tool' must have a 'tool_call_id'.", 'type': 'invalid_request_error', 'param': 'messages.[3].tool_call_id', 'code': None}}
```
If line 359 is replaced with a version that preserves the `tool_call_id`, the error goes away.
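A fix along these lines (a hypothetical sketch, not the actual patch; the real code is in `instructor/instructor/multimodal.py`) would preserve extra fields such as `tool_call_id` instead of rebuilding the message from `role` and `content` alone:

```python
# Hypothetical sketch of the fix: copy all original message fields and only
# replace "content", instead of constructing a new dict from scratch.
def convert_message(message: dict, new_content) -> dict:
    converted = {**message}          # keeps "tool_call_id" and any other fields
    converted["content"] = new_content
    return converted

msg = {"role": "tool", "tool_call_id": "call_abc123", "content": "old"}
fixed = convert_message(msg, "new")
```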
To Reproduce

1. Call the completion with `mode=instructor.Mode.PARALLEL_TOOLS` and `response_model=None`.
2. `model_dump` the resulting `response.choices[0].message`.
3. Provide the `model_dump` from step 2 and the tool output to the LLM in `{"role": "tool", ...}` messages (described here: https://platform.openai.com/docs/guides/function-calling#customizing-function-calling-behavior), together with a `response_model`.
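The failure mechanism in the steps above can be sketched without calling the API (a schematic reconstruction, not instructor's actual code): rebuilding a tool message from only `role` and `content` silently drops `tool_call_id`, which the API then rejects with the 400 error shown earlier.

```python
# Schematic reconstruction of the bug (not the library's actual code):
# rebuilding a message from role/content only silently drops tool_call_id.
def lossy_convert(message: dict) -> dict:
    return {"role": message["role"], "content": message["content"]}

tool_msg = {"role": "tool", "tool_call_id": "call_abc123", "content": "42"}
converted = lossy_convert(tool_msg)
print("tool_call_id" in converted)  # → False; the API rejects such a message
```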
Expected behavior
The LLM reasons over the original message and the tool call output correctly.