Problems with context of conversation using chatGPT API #22
-
I found out that the OpenAI ChatGPT API does not preserve the context of a conversation with the AI on its own, so the developer has to store the previous questions and answers and resend them in order to use the context the way the web interface does. There is also the problem that once the conversation grows large enough, the limit of roughly 4,000 tokens (about 3,000 words, as I understand it) is exceeded, and old messages have to be removed from the context. The documentation suggests saving the context in a database or writing it to a temporary file. What do you think about this? In the web interface (https://chat.openai.com/chat), for example, the chat keeps the context by default, unlike the API.
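Right now I am keeping the history myself, roughly like this (`call_openai` is just a placeholder for the real client call, e.g. a chat-completions request with the accumulated `messages` list):

```python
# Sketch of keeping conversation context manually; the API itself is
# stateless, so the full message history must be resent on every request.
# call_openai is a placeholder for the real client call that takes the
# messages list and returns the assistant's reply text.

history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question, call_openai):
    history.append({"role": "user", "content": question})
    answer = call_openai(history)  # hypothetical client call
    history.append({"role": "assistant", "content": answer})
    return answer

# Example with a stubbed client that ignores the history and echoes "4":
reply = ask("What is 2+2?", lambda msgs: "4")
```

This works, but the history grows without bound, which is exactly where the token limit problem comes in.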
-
Hi there,
You're correct that the Chat web app and the API work differently, particularly concerning the token size. The web app allows more tokens, while the API provides more control over chat history. To overcome the token limit, you have a couple of options:
Exclude older messages from the prompt data. You can truncate or remove some messages to fit within the token limit.
Summarize earlier messages to compress them and reduce the token count.
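To make the first option concrete, here is a rough sketch of dropping the oldest messages until the history fits a budget. Token counting is only approximated with a characters-per-token heuristic; in practice you would count real tokens (e.g. with tiktoken, as the cookbook examples show). All names here are illustrative:

```python
# Rough sketch: drop the oldest non-system messages until the history
# fits a token budget. approx_tokens is a crude stand-in for a real
# tokenizer (roughly 4 characters per token for English text).

def approx_tokens(message):
    return max(1, len(message["content"]) // 4)

def trim_history(history, budget):
    # Always keep the system message; keep as many recent messages as fit.
    system, rest = history[0], history[1:]
    total = approx_tokens(system)
    kept = []
    for msg in reversed(rest):  # walk from newest to oldest
        cost = approx_tokens(msg)
        if total + cost > budget:
            break
        total += cost
        kept.append(msg)
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "x" * 400},       # ~100 tokens, oldest
    {"role": "assistant", "content": "y" * 400},  # ~100 tokens
    {"role": "user", "content": "latest question"},
]
trimmed = trim_history(history, budget=120)  # oldest message is dropped
```

For the second option, the dropped messages would instead be replaced by a single summary message (produced by another model call) rather than discarded outright.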
For general advice and token counting examples, refer to OpenAI's openai-cookbook.
We leave it to application developers to determine how to handle historical data, but I recommend exploring https://community.openai.com for more relevant discussions and insights from other users. Best regards, Csaba