
Litellm support not working #1269

Open
3 of 8 tasks
pedro-gainlife opened this issue Dec 18, 2024 · 0 comments
Labels
bug Something isn't working

Comments


pedro-gainlife commented Dec 18, 2024

  • This is actually a bug report.
  • I am not getting good LLM Results
  • I have tried asking for help in the community on discord or discussions and have not received a response.
  • I have tried searching the documentation and have not found an answer.

What Model are you using?

  • gpt-3.5-turbo
  • gpt-4-turbo
  • gpt-4
  • Other (please specify): Bedrock-powered us.meta.llama3-1-70b-instruct-v1:0

Describe the bug
Instructor's example code (from the documentation page) for creating a client with the from_litellm method doesn't run.

Code:

```python
from litellm import completion
import instructor
from pydantic import BaseModel

# Enable instructor patches
client = instructor.from_litellm(completion)

class User(BaseModel):
    name: str
    age: int

# Create structured output
user = client.completion(
    model="gpt-3.5-turbo",  # Can use any supported model
    messages=[
        {"role": "user", "content": "Extract: Jason is 25 years old"},
    ],
    response_model=User,
)

print(user)  # User(name='Jason', age=25)
```

Output:

```
AttributeError: 'NoneType' object has no attribute 'completion'
```
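
For what it's worth, the docs snippet may simply be using a stale method name: instructor's other integrations expose the OpenAI-style `chat.completions.create` entry point on the patched client. A minimal sketch of that invocation (I'm assuming the litellm-patched client behaves the same way; not confirmed against Bedrock):

```python
from litellm import completion
import instructor
from pydantic import BaseModel

client = instructor.from_litellm(completion)

class User(BaseModel):
    name: str
    age: int

# Assumed workaround: call the OpenAI-style method instructor exposes for
# its other providers, instead of client.completion from the docs snippet.
user = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Extract: Jason is 25 years old"}],
    response_model=User,
)
print(user)
```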

Replacing the invocation method "completion" with "messages.create" also does not work: it raises an unsupported-parameters error even though the model supports tool calling, i.e.

Code:

```python
from litellm import completion
import instructor
import litellm
from pydantic import BaseModel

client = instructor.from_litellm(completion)

class User(BaseModel):
    name: str
    age: int

# Confirm the model supports tool calling (prints True, see output below)
print(litellm.supports_function_calling('us.meta.llama3-1-70b-instruct-v1:0'))

client.messages.create(
    model="us.meta.llama3-1-70b-instruct-v1:0",
    messages=[
        {"content": "Hello, how are you?", "role": "user"},
    ],
    response_model=User,
)
```

Output:

```
True
instructor.exceptions.InstructorRetryException: litellm.UnsupportedParamsError: bedrock does not support parameters: {'tool_choice': {'type': 'function', 'function': {'name': 'User'}}}, for model=us.meta.llama3-1-70b-instruct-v1:0. To drop these, set `litellm.drop_params=True` or for proxy:
```
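
The error text itself points at a possible workaround: setting `litellm.drop_params = True` so LiteLLM silently drops the `tool_choice` parameter that Bedrock rejects. A sketch of that (reusing the client and User model from above; I haven't verified the model still emits the tool call once tool_choice is dropped):

```python
import litellm

# Suggested by the error message: drop request parameters the provider
# rejects (here, tool_choice) instead of raising UnsupportedParamsError.
litellm.drop_params = True

user = client.messages.create(
    model="us.meta.llama3-1-70b-instruct-v1:0",
    messages=[{"content": "Extract: Jason is 25 years old", "role": "user"}],
    response_model=User,
)
```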

Note that simple function calling works fine for these models on LiteLLM standalone:

Code:

```python
from litellm import completion

response = completion(
    model="us.meta.llama3-1-70b-instruct-v1:0",
    messages=[
        {"content": "Hello, how is the weather today in NYC?", "role": "user"},
    ],
    tools=[
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city and state, e.g. San Francisco, CA",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            },
        }
    ],
)
```

Output:

```python
ModelResponse(id='HASH-ID-HERE', created=TIMESTAMP, model='us.meta.llama3-1-70b-instruct-v1:0', object='chat.completion', system_fingerprint=None, choices=[Choices(finish_reason='tool_calls', index=0, message=Message(content='', role='assistant', tool_calls=[ChatCompletionMessageToolCall(index=0, function=Function(arguments='{"location": "New York City, NY"}', name='get_current_weather'), id='tooluse_PWIxRBQ4RryaaV7V-awq3Q', type='function')], function_call=None))], usage=Usage(completion_tokens=24, prompt_tokens=101, total_tokens=125, completion_tokens_details=None, prompt_tokens_details=None))
```
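
Since the tools themselves go through and only tool_choice is rejected, another option might be to take tool calling out of the equation and let instructor validate raw JSON instead. A sketch using instructor's JSON mode (instructor.Mode.JSON exists in the library; whether Bedrock-hosted Llama reliably produces valid JSON this way is an assumption on my part):

```python
import instructor
from litellm import completion
from pydantic import BaseModel

# Assumption: JSON mode prompts the model for a raw JSON answer and skips
# the tools/tool_choice parameters entirely.
client = instructor.from_litellm(completion, mode=instructor.Mode.JSON)

class User(BaseModel):
    name: str
    age: int

user = client.messages.create(
    model="us.meta.llama3-1-70b-instruct-v1:0",
    messages=[{"content": "Extract: Jason is 25 years old", "role": "user"}],
    response_model=User,
)
```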

To Reproduce
Code:

```python
from litellm import completion
import instructor
from pydantic import BaseModel

# Enable instructor patches
client = instructor.from_litellm(completion)

class User(BaseModel):
    name: str
    age: int

# Create structured output
user = client.completion(
    model="gpt-3.5-turbo",  # Can use any supported model
    messages=[
        {"role": "user", "content": "Extract: Jason is 25 years old"},
    ],
    response_model=User,
)

print(user)
```

Expected behavior
Output:

```
User(name='Jason', age=25)
```
@github-actions github-actions bot added the bug Something isn't working label Dec 18, 2024