Fixes custom model parameter overrides in config #630
Closed
Removes `model_parameters`, which was redundant with `provider_params`, from the chat and completion code. Fixes #624.
Tested by passing a custom model parameter in my JupyterLab config:
```
jupyter lab --AiExtension.model_parameters openai-chat:gpt-4-1106-preview='{"max_tokens": 4095}'
```
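The same override can also be expressed in a Jupyter config file via the standard traitlets pattern; a minimal sketch, assuming a `jupyter_lab_config.py` in your Jupyter config directory (the file name and location are an assumption, not part of this PR):

```python
# jupyter_lab_config.py -- hypothetical config file location (assumption).
# Maps a provider:model ID to keyword arguments forwarded to that model's LLM class.
c.AiExtension.model_parameters = {
    "openai-chat:gpt-4-1106-preview": {"max_tokens": 4095},
}
```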
In the chat UI and the completion interface, this model now works. Temporarily added log statements confirmed that the custom parameter is included in the `llm` object exactly once.

`model_parameters` was not redundant with `provider_params` in the magic command handler, so that code path is not modified. Magic commands using a model with custom model configuration continue to work.
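For illustration only, a minimal sketch of the kind of merge this change relies on: provider-level parameters combined with per-model overrides from `model_parameters`, applied exactly once when the `llm` object is constructed. The function and variable names below are hypothetical, not the extension's actual API.

```python
from typing import Any, Dict

def build_llm_kwargs(
    provider_params: Dict[str, Any],
    model_parameters: Dict[str, Dict[str, Any]],
    model_id: str,
) -> Dict[str, Any]:
    """Hypothetical helper: merge per-model overrides into provider params.

    Overrides for `model_id` win over provider defaults and are applied
    exactly once, so a value like `max_tokens` is never passed twice.
    """
    merged = dict(provider_params)
    merged.update(model_parameters.get(model_id, {}))
    return merged

# Example using the override from the JupyterLab config above:
kwargs = build_llm_kwargs(
    provider_params={"model_id": "gpt-4-1106-preview"},
    model_parameters={"openai-chat:gpt-4-1106-preview": {"max_tokens": 4095}},
    model_id="openai-chat:gpt-4-1106-preview",
)
# kwargs == {"model_id": "gpt-4-1106-preview", "max_tokens": 4095}
```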