
feat: models have max_tokens, temperature, closes #330 #384

Open · wants to merge 2 commits into base: main
Conversation

@nevercast commented Dec 26, 2024

  • Added `max_tokens` and `temperature` options to each of the models.
  • Prompter reads `max_tokens` and `temperature` from the profile.
  • These settings are printed at startup if set.
  • Throws if `temperature` is set and out of range.
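A minimal sketch of what the bullets above describe: reading the two settings from a profile object, validating the temperature range, and printing the settings at startup. All names here (`extractModelOptions`, `profile`) are illustrative, not the PR's actual code, and the 0–2 range is an assumption based on OpenAI's documented bounds.

```javascript
// Hypothetical sketch: pull max_tokens/temperature from a profile object.
// Assumes an OpenAI-style temperature range of 0–2; other providers differ.
function extractModelOptions(profile) {
  const { max_tokens, temperature } = profile;

  // Throw if temperature is set and out of range, as the PR describes.
  if (temperature !== undefined && (temperature < 0 || temperature > 2)) {
    throw new Error(`temperature must be between 0 and 2, got ${temperature}`);
  }

  const options = {};
  if (max_tokens !== undefined) options.max_tokens = max_tokens;
  if (temperature !== undefined) options.temperature = temperature;

  // Print the settings at startup, but only if they were set.
  for (const [key, value] of Object.entries(options)) {
    console.log(`Using model option ${key}=${value}`);
  }
  return options;
}
```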

Worth considering in future (another issue/PR):

Expose other options for the LLMs. Having `max_tokens` and `temperature` as top-level settings is perhaps not ideal; I would suggest either:

  • we add a `model.options` object and splat those options into the API call, allowing any option to be set, with appropriate transformations for APIs that do not follow OpenAI conventions, such as Gemini; or
  • we keep a known list of options in `model.*`, such as `temperature` and `max_tokens` (which becomes `max_completion_tokens` for OpenAI specifically, necessary for o1 compatibility).
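The first suggestion above could be sketched roughly as follows: splat arbitrary `model.options` into the request payload, renaming keys for providers that diverge from OpenAI conventions (e.g. `max_tokens` becoming `max_completion_tokens` for o1). `buildPayload` and `KEY_MAP` are hypothetical names, and the key mappings are assumptions for illustration, not an exhaustive list.

```javascript
// Hypothetical per-provider key renames; real mappings would need checking
// against each provider's API reference.
const KEY_MAP = {
  openai: { max_tokens: 'max_completion_tokens' }, // o1-compatible name
  gemini: { max_tokens: 'maxOutputTokens' },
};

// Splat user-supplied options into the request, translating keys where
// the provider does not follow OpenAI conventions.
function buildPayload(provider, baseRequest, options = {}) {
  const map = KEY_MAP[provider] ?? {};
  const mapped = {};
  for (const [key, value] of Object.entries(options)) {
    mapped[map[key] ?? key] = value;
  }
  return { ...baseRequest, ...mapped };
}
```

The trade-off between the two suggestions is flexibility versus safety: splatting `model.options` forwards anything the user writes (including typos), while a known list in `model.*` can be validated but must be extended per option.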

OpenAI and Anthropic are tested, as I have accounts with these providers; I referred to the official documentation for all other changes. Profiles have not been changed, nor has any documentation (README).

Happy to receive criticism.
