How to switch models, other than redefining the function? #311
Comments
As an aside, but relatedly, it's somewhat annoying to only be able to change the default model by setting an ENV var. It would be nice if this could be done programmatically too.
Would using two functions, one for each model, suit you @benwhalley? See the last code example here: https://magentic.dev/configuration/
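A minimal sketch of that suggestion (the function names are illustrative, and this assumes the `model` argument of `@prompt` accepts a `LitellmChatModel` instance):

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


# One prompt-function per model, so the model is fixed at definition time
@prompt("tell me a joke about {thing}", model=LitellmChatModel("gpt-4o"))
def joke_gpt4(thing: str) -> str: ...


@prompt("tell me a joke about {thing}", model=LitellmChatModel("ollama/llama3.1"))
def joke_llama3(thing: str) -> str: ...


print(joke_gpt4("apples"))
print(joke_llama3("apples"))
```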
I can see that you could do that, but it seems like the model you use shouldn't be tied to the function itself. Likewise for things like temperature etc. In the work I'm doing now I'd like to be able to compare models, settings, etc., so it would take some metaprogramming to define functions on the fly, which isn't ideal (and I really like elements of magentic).
Hi @benwhalley, you can set the model at runtime using the context manager syntax. For your example I think this should work (untested):

```python
import magentic
from magentic.chat_model.litellm_chat_model import LitellmChatModel

gpt4 = LitellmChatModel("gpt-4o")
llama3 = LitellmChatModel("ollama/llama3.1")


@magentic.prompt("tell me a joke about {thing}")  # Do not set model here
def joke(thing: str) -> str: ...


with gpt4:
    print(joke("apples"))

with llama3:
    print(joke("apples"))
```

The reason for using a context manager for this, rather than an argument to the prompt-function, is to allow setting the model for prompt-functions called within another prompt-function or prompt-chain. For example:

```python
from typing import Literal

from magentic import OpenaiChatModel, prompt, prompt_chain


@prompt("Flip a coin")
def flip_coin() -> Literal["Heads", "Tails"]: ...


@prompt_chain(
    "Flip a coin until you get Heads. Then count the occurrence of each side.",
    functions=[flip_coin],
)
def flip_then_summarize() -> str: ...


# Calling `flip_then_summarize` does something like
# - query LLM with: ["Flip a coin until ..."] to get a function call
# - query LLM with: ["Flip a coin"] to get "Heads"
# - query LLM with: ["Flip a coin until ...", <function call>, "Heads"] to get a str

with OpenaiChatModel("gpt-4o"):
    print(flip_then_summarize())  # Uses "gpt-4o" for _both_ flip_then_summarize and flip_coin

with OpenaiChatModel("gpt-4o-mini"):
    print(flip_then_summarize())  # Uses "gpt-4o-mini" for _both_ flip_then_summarize and flip_coin
```

Please let me know if this resolves the issue for you. I agree that the ability to change the default/global settings in code would be useful. I've made a new issue for that: #313
Hi - yes, this makes sense and I can see it's a tradeoff to avoid having to pass the model down the chain. The context managers work for setting both the model and the retry model, btw.
Great! Closing this issue. Separate issue for setting the default model is #313.
Perhaps I'm not understanding, but it seems like there's no way to switch which model is being used, other than to redefine the function with a different decorator?
Is that correct? If so, would it be possible to include a mechanism to pass the model to the function at runtime, so that you could do:
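The original snippet is not preserved here; a hypothetical sketch of the kind of call-time API being asked for might look like the following (the `model` keyword argument at call time is illustrative only, not something magentic's prompt-functions currently accept):

```python
from magentic import prompt
from magentic.chat_model.litellm_chat_model import LitellmChatModel


@prompt("tell me a joke about {thing}")
def joke(thing: str) -> str: ...


# Hypothetical: pass the model when calling the prompt-function
joke("apples", model=LitellmChatModel("gpt-4o"))
joke("apples", model=LitellmChatModel("ollama/llama3.1"))
```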
and this would call the different models?