Enable use of Valkey with LLM applications for semantic LLM caching, semantic conversation cache, LLM semantic routing
Description of the feature
- Introduce support for vector data types and similarity search queries.
- Support the following indexing methods and engines (methods: HNSW, FLAT; engines: NMSLIB, Faiss).
- Vector range search (e.g. find all vectors within a given radius of a query vector).
- Support hybrid search (combined lexical and semantic search).
- Document ranking (using TF-IDF, with optional user-provided weights).
- Support for JSON-based representation of vectors.
- APIs for LLM semantic caching and chat session history management.
- Provide a Python client library so the vector database functionality can be used from LLM chains, with integrations for LangChain, Haystack, and LlamaIndex.
- Have default embedding models or allow custom embedding/re-ranking models, with the ability to integrate with HCP-hosted embedding/re-ranking models through configuration.
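To make the range-search item above concrete, here is a minimal pure-Python sketch (no Valkey involved, and the helper names are hypothetical) of returning every stored vector within a cosine-distance radius of a query vector, which an HNSW- or FLAT-backed index would accelerate in practice:

```python
import math

def cosine_distance(a, b):
    # Cosine distance = 1 - cosine similarity.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

def range_search(index, query, radius):
    # Brute-force scan: return (key, distance) pairs for every vector
    # within `radius` of the query, sorted nearest-first.
    hits = [(key, cosine_distance(query, vec)) for key, vec in index.items()]
    return sorted((kd for kd in hits if kd[1] <= radius), key=lambda kd: kd[1])

index = {
    "doc:1": [1.0, 0.0],
    "doc:2": [0.9, 0.1],
    "doc:3": [0.0, 1.0],
}
matches = range_search(index, [1.0, 0.0], radius=0.1)
# doc:1 and doc:2 fall inside the radius; doc:3 is orthogonal and excluded.
```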
The problem/use-case that the feature addresses
Enable use of Valkey with LLM applications for semantic LLM caching, semantic conversation cache, LLM semantic routing
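As a sketch of the semantic-caching use case (plain Python with a toy bag-of-words embedding; no Valkey server or real embedding model, and all names here are illustrative), a cache hit occurs when a new prompt's embedding is close enough to a previously cached prompt's embedding, letting the application skip a repeated LLM call:

```python
import math

def embed(text):
    # Toy embedding: word counts over a tiny fixed vocabulary.
    # A real deployment would call an embedding model instead.
    vocab = ["capital", "france", "paris", "weather"]
    words = text.lower().split()
    return [float(words.count(w)) for w in vocab]

def cosine_sim(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SemanticCache:
    """Return a cached LLM answer when a semantically similar prompt was seen."""

    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, prompt):
        vec = embed(prompt)
        best = max(self.entries, key=lambda e: cosine_sim(vec, e[0]), default=None)
        if best and cosine_sim(vec, best[0]) >= self.threshold:
            return best[1]  # cache hit: reuse the stored answer
        return None         # cache miss: caller falls through to the LLM

    def put(self, prompt, answer):
        self.entries.append((embed(prompt), answer))

cache = SemanticCache()
cache.put("capital of france", "Paris")
hit = cache.get("france capital")  # same words, different order: hit
miss = cache.get("weather")        # unrelated prompt: miss
```

Semantic routing follows the same pattern, except the stored entries are route descriptions and the lookup selects the nearest route instead of a cached answer.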
Alternatives you've considered
Refer to the discussion below:
https://github.com/orgs/valkey-io/discussions/371
Additional information
Consider a port of RedisVL: https://www.redisvl.com/index.html