core: Multimodal Embeddings proposal #28710
Conversation
worth checking out #25356 as well!

I think perhaps we can figure out a way to make this a bit more universal -- applying not just to different types of modalities (audio, video, etc.) but also to interleaved data, or possibly even multiple parallel "tracks" of data (e.g. a few video frames plus the audio of those frames, embedded together).

@efriis From the Voyage perspective, we would love to support interleaved inputs (texts + images) to embedding models. Let us know what makes the most sense.
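To make the interleaved-input idea concrete, here is a minimal sketch of how such a payload could be shaped. The content-block format and the `embed_interleaved` helper are illustrative assumptions, not an API from this PR or from Voyage's SDK:

```python
# Hypothetical sketch (not the PR's actual API): one way interleaved
# text + image inputs could be passed to a multimodal embedding model.
from typing import Union

# A "document" is an ordered sequence of content blocks, so text and
# images can be interleaved and embedded together as a single vector.
TextBlock = dict   # e.g. {"type": "text", "text": "a photo of a cat"}
ImageBlock = dict  # e.g. {"type": "image_url", "image_url": "https://..."}
ContentBlock = Union[TextBlock, ImageBlock]


def embed_interleaved(documents: list[list[ContentBlock]]) -> list[list[float]]:
    """Embed each interleaved document into a single vector (stub)."""
    raise NotImplementedError


docs = [
    [
        {"type": "text", "text": "Figure 3 shows the architecture:"},
        {"type": "image_url", "image_url": "https://example.com/fig3.png"},
        {"type": "text", "text": "Note the residual connections."},
    ],
]
# vectors = embed_interleaved(docs)  # -> one embedding per document
```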
Thank you for contributing to LangChain!

Description: this is a proposal on how to introduce multimodal embedding models to LangChain.
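Since the diff itself is not shown here, the following is only a rough sketch of one plausible shape for a multimodal embeddings interface, mirroring the `embed_documents` / `embed_query` pair on the existing `langchain_core` `Embeddings` class. All names below are assumptions, not the PR's actual proposal:

```python
# A minimal sketch of what a multimodal embeddings interface could look
# like; the class and method names are illustrative assumptions only.
from abc import ABC, abstractmethod
from typing import Any


class MultimodalEmbeddings(ABC):
    """Hypothetical interface for models that embed non-text inputs."""

    @abstractmethod
    def embed_documents(self, documents: list[dict[str, Any]]) -> list[list[float]]:
        """Embed multimodal documents, e.g. {"type": "image", "data": b"..."}."""

    @abstractmethod
    def embed_query(self, query: dict[str, Any]) -> list[float]:
        """Embed a single multimodal query."""
```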
Run `make format`, `make lint`, and `make test` from the root of the package(s) you've modified. See contribution guidelines for more: https://python.langchain.com/docs/contributing/

Additional guidelines:
If no one reviews your PR within a few days, please @-mention one of baskaryan, efriis, eyurtsev, ccurme, vbarda, hwchase17.