jupyter_ai_magics.providers.ChatOpenAIProvider() got multiple values for keyword argument 'max_tokens' #624

Closed
EduardDurech opened this issue Feb 7, 2024 · 5 comments · Fixed by #632

@EduardDurech
Contributor

There is an error when setting max_tokens for openai-chat in jupyter_jupyter_ai_config.json.

Error

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/base.py", line 113, in on_message
    await self.process_message(message)
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/default.py", line 53, in process_message
    self.get_llm_chain()
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/base.py", line 196, in get_llm_chain
    self.create_llm_chain(lm_provider, lm_provider_params)
  File "/usr/local/lib/python3.10/dist-packages/jupyter_ai/chat_handlers/default.py", line 27, in create_llm_chain
    llm = provider(**provider_params, **model_parameters)
TypeError: jupyter_ai_magics.providers.ChatOpenAIProvider() got multiple values for keyword argument 'max_tokens'

def get_model_parameters(
    self, provider: Type[BaseProvider], provider_params: Dict[str, str]
):
    return self.model_parameters.get(
        f"{provider.id}:{provider_params['model_id']}", {}
    )
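
For reference, the failure mode reproduces in isolation: splatting two dicts that share a key raises exactly this TypeError. A minimal sketch (fake_provider is a hypothetical stand-in for ChatOpenAIProvider):

def fake_provider(**kwargs):
    # Stand-in for ChatOpenAIProvider(); just echoes its kwargs.
    return kwargs

provider_params = {"model_id": "gpt-4-1106-preview", "max_tokens": 4095}
model_parameters = {"max_tokens": 4095}

# Mirrors the failing call in create_llm_chain():
#   llm = provider(**provider_params, **model_parameters)
fake_provider(**provider_params, **model_parameters)
# TypeError: fake_provider() got multiple values for keyword argument 'max_tokens'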

Steps to reproduce:

  • Create a file jupyter_jupyter_ai_config.json in a config path specified in jupyter --paths (I picked ~/.jupyter)
  • Add the following configuration
{
    "AiExtension": {
        "model_parameters": {
            "openai-chat:gpt-4-1106-preview": {
                "max_tokens": 4095
            }
        }
    }
}
  • Send a message to JupyterAI Chat in Jupyter Lab

Parameters are loaded successfully:
[I FooDate AiExtension] Configured model parameters: {'openai-chat:gpt-4-1106-preview': {'max_tokens': 4095}}

Searching the codebase, I cannot find max_tokens being set anywhere, and https://api.python.langchain.com/en/latest/chat_models/langchain_community.chat_models.openai.ChatOpenAI.html does not set it by default either.
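
For context, langchain's ChatOpenAI accepts max_tokens as an ordinary optional constructor argument, so passing it a single time works fine (a minimal sketch; the API key is assumed to be set in the environment):

from langchain_community.chat_models import ChatOpenAI

# Passing max_tokens once is accepted without complaint; the error
# only appears when jupyter-ai passes the same key twice.
llm = ChatOpenAI(model_name="gpt-4-1106-preview", max_tokens=4095)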

@EduardDurech EduardDurech added the bug Something isn't working label Feb 7, 2024

welcome bot commented Feb 7, 2024

Thank you for opening your first issue in this project! Engagement like this is essential for open source projects! 🤗

If you haven't done so already, check out Jupyter's Code of Conduct. Also, please try to follow the issue template as it helps other community members to contribute more effectively.
You can meet the other Jovyans by joining our Discourse forum. There is also an intro thread there where you can stop by and say Hi! 👋

Welcome to the Jupyter community! 🎉

@JasonWeill
Collaborator

@EduardDurech Thanks for opening this! I was able to reproduce this by passing a similar override on the command line using the tip of the main branch and JupyterLab 4.1.0.

jupyter lab --AiExtension.model_parameters openai-chat:gpt-4-1106-preview='{"max_tokens": 4095}'

@DzmitrySudnik

I'm getting the same error with Bedrock-chat models. In fact, even passing something nonsensical like --AiExtension.model_parameters='bedrock-chat:anthropic.claude-v2={"anything":200}' produces the same error, except it complains about got multiple values for keyword argument 'anything'.

@JasonWeill
Collaborator

It looks like the parameter is being ingested into both provider_params and model_parameters. I logged the value of each variable immediately before instantiating llm in jupyter_ai/chat_handlers/default.py, and I found the same custom model parameter in both objects:

[I 2024-02-07 15:38:23.871 AiExtension] provider_params: {'model_id': 'gpt-4-1106-preview', 'max_tokens': 4095, 'openai_api_key': 'secret'}
[I 2024-02-07 15:38:23.871 AiExtension] model_parameters: {'max_tokens': 4095}

This is true whether the parameter name is max_tokens, the anything from the previous comment, or any other name.
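
A sketch of one possible way to avoid the collision (not necessarily what the eventual fix does): merge the two dicts before splatting, so the explicitly configured model_parameters wins on duplicate keys:

# Hypothetical change in create_llm_chain(); model_parameters
# overrides any duplicate keys from provider_params.
llm = provider(**{**provider_params, **model_parameters})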

@EduardDurech
Contributor Author

Not sure if relevant, but I realized that whatever I passed as the argument seemed to be autosaved to ~/.local/share/jupyter/jupyter_ai/config.json under a field called "Fields" or something (I cannot access the server to check right now, sorry about that).
