The OpenAI API has a couple of params that can be tuned based on what you're looking for; for code lookups and analysis, a tighter constraint on temperature (i.e. don't make things up 😉) would probably be preferable to the default, which is apparently 1 (go be creative). On the flip side, creative writing, brainstorming, marketing and sales material, etc. are all really well served by a high temperature.

This is a pretty straightforward and interesting explanation of how temperature works in an LLM.

I think it'd be a big boost to the utility of this project to add sliders for both temperature and top_p:
API reference for temperature:
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.

API reference for top_p:
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
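For reference, here's a minimal sketch (not project code) of how either value would be passed through to the chat completions endpoint. The endpoint and payload shape come from the OpenAI API reference; the helper name and model choice are just placeholders:

```ts
// Hypothetical helper, not the project's actual API layer.
async function complete(prompt: string, temperature?: number, topP?: number): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
      // Per the docs above: tune one of these and leave the other at its default.
      ...(temperature !== undefined ? { temperature } : {}),
      ...(topP !== undefined ? { top_p: topP } : {}),
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}

// e.g. complete("Explain closures in JS", 0.2) for focused answers,
//      complete("Brainstorm blog post titles", undefined, 0.9) to use nucleus sampling instead.
```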
A couple of UX thoughts here:
- It might be worth making the two options mutually exclusive, so that tuning one resets the other back to its default, given the guidance not to use them together (a rough sketch follows this list).
- There may be value in tying the last-used temperature / top_p / mood combination (i.e. prompt addendum) to a given custom prompt, so that each prompt plus its level of creativity can be persisted across multiple uses and specific use cases tuned that way.
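On the first point, a rough sketch of what mutually exclusive controls could look like; the component, state, and callback names here are made up rather than taken from the project:

```tsx
import { useState } from "react";

// API defaults per the reference above.
const DEFAULT_TEMPERATURE = 1;
const DEFAULT_TOP_P = 1;

export function SamplingControls(props: { onChange: (temperature: number, topP: number) => void }) {
  const [temperature, setTemperature] = useState(DEFAULT_TEMPERATURE);
  const [topP, setTopP] = useState(DEFAULT_TOP_P);

  // Moving one slider snaps the other back to its default, so only one of
  // the two is ever tuned away from stock at a time.
  const updateTemperature = (value: number) => {
    setTemperature(value);
    setTopP(DEFAULT_TOP_P);
    props.onChange(value, DEFAULT_TOP_P);
  };
  const updateTopP = (value: number) => {
    setTopP(value);
    setTemperature(DEFAULT_TEMPERATURE);
    props.onChange(DEFAULT_TEMPERATURE, value);
  };

  return (
    <div>
      <label>
        temperature ({temperature})
        <input type="range" min={0} max={2} step={0.1} value={temperature}
               onChange={(e) => updateTemperature(Number(e.target.value))} />
      </label>
      <label>
        top_p ({topP})
        <input type="range" min={0} max={1} step={0.05} value={topP}
               onChange={(e) => updateTopP(Number(e.target.value))} />
      </label>
    </div>
  );
}
```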
Right now it looks like the mood may be decoupled from the character string in ChatOptionState.

From a cursory search (plus direction from ChatGPT 🎉), it looks like stringifying objects to JSON in React isn't too painful, so hopefully this wouldn't be too disruptive a change (from a string to a slightly more complex string + floats kind of class), assuming it makes sense; a rough sketch is below.
Famous last words. 😀
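To make that concrete, here's a very rough sketch of the shape ChatOptionState could grow into and how it might round-trip through JSON. The field names and the use of localStorage are assumptions on my part, not what's actually in the codebase:

```ts
// Assumed shape; the real ChatOptionState fields may differ.
interface ChatOptionState {
  character: string;     // existing character / prompt string
  mood?: string;         // prompt addendum, kept alongside instead of decoupled
  temperature?: number;  // 0–2, per the API reference
  topP?: number;         // 0–1 nucleus sampling probability mass
}

// Persisting per custom prompt is then just a JSON round trip
// (localStorage stands in for wherever the app already keeps state).
function saveOptions(promptKey: string, options: ChatOptionState): void {
  localStorage.setItem(`chat-options:${promptKey}`, JSON.stringify(options));
}

function loadOptions(promptKey: string): ChatOptionState | null {
  const raw = localStorage.getItem(`chat-options:${promptKey}`);
  return raw ? (JSON.parse(raw) as ChatOptionState) : null;
}
```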
Also love the idea of being able to create some, say, "profiles" bundling all settings (character, prompt, mood,...) into predefined user configs. That would surely be handy!
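If the options end up living in one object like the sketch above, a profile could just be a named copy of it. Purely illustrative, reusing the assumed ChatOptionState shape from the earlier comment:

```ts
// Purely illustrative; ChatOptionState is the assumed shape from the earlier sketch.
interface ChatOptionState {
  character: string;
  mood?: string;
  temperature?: number;
  topP?: number;
}

interface Profile {
  name: string;
  options: ChatOptionState; // character, prompt, mood, temperature / top_p, ...
}

const presetProfiles: Profile[] = [
  { name: "Code review", options: { character: "You are a meticulous code reviewer.", temperature: 0.2 } },
  { name: "Brainstorming", options: { character: "You are a free-wheeling creative partner.", temperature: 1.2 } },
];
```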