Enable Alternating Chat Messages Instead of Single Compressed User Message #123

Open
Emil-io opened this issue Oct 17, 2024 · 1 comment
Emil-io (Contributor) commented Oct 17, 2024

Description:

Currently, the conversational RAG implementation compresses all past chat messages into a single user message when using the OpenAIChatGenerator or similar components. This differs from how chat models are typically used: they generally benefit from receiving alternating user and assistant messages as separate entries. Providing this structure might help maintain conversational context more effectively.

Feature Request:

Allow the entire list of past chat messages to be provided to the LLM, rather than compressing them into a single user message, as sketched below. This ensures that each user query and assistant response is treated as a separate ChatMessage object, improving context preservation and enabling the model to follow the conversational flow more naturally.
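For illustration, here is a minimal sketch of the two approaches, assuming Haystack 2.x's ChatMessage dataclass and OpenAIChatGenerator (the model name and the example history contents are placeholders, not from the issue):

```python
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage

# Current behavior as described above: the whole history is flattened
# into one user message, losing the role structure of each turn.
flattened = [
    ChatMessage.from_user(
        "User: What is RAG?\n"
        "Assistant: Retrieval-augmented generation combines retrieval with an LLM.\n"
        "User: How do I use it in a pipeline?"
    )
]

# Requested behavior: keep each turn as its own ChatMessage so the model
# sees the alternating user/assistant structure it expects.
history = [
    ChatMessage.from_user("What is RAG?"),
    ChatMessage.from_assistant("Retrieval-augmented generation combines retrieval with an LLM."),
    ChatMessage.from_user("How do I use it in a pipeline?"),
]

# Requires OPENAI_API_KEY in the environment; "gpt-4o-mini" is an assumed model choice.
generator = OpenAIChatGenerator(model="gpt-4o-mini")
result = generator.run(messages=history)
print(result["replies"][0])  # first assistant reply
```

The second form lets the model attribute each utterance to the correct role instead of parsing role labels out of a single text blob.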

Emil-io (Contributor, Author) commented Oct 17, 2024

@vblagoje feel free to correct me if I'm wrong or if this is already possible out of the box.
