First off: great tool, and it saved me the headache of trying to trace the function tokens myself.
A final touch could be to introduce an option for a token estimator class (tokenizer class?) that takes the model type as an attribute and then uses the tiktoken.encoding_for_model() function to retrieve the encoding.
That way, if OpenAI ever changes the encoding or uses a different encoding for newer models, the package can stay up to date.
On a side note, I think the following functions are also useful, e.g. to prevent logging of huge inputs to the model.
Best
Somerandomguy10111