Merge pull request #268 from akashAD98/tutoral/multomodal-jina
example: food recommendation using jina-clip-v2
PrashantDixit0 authored Nov 30, 2024
2 parents 081a5a8 + 4ededcf commit b80587f
Showing 3 changed files with 12,989 additions and 0 deletions.
1 change: 1 addition & 0 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -63,6 +63,7 @@ Create a multimodal search application using LanceDB for efficient vector-based
| [Multimodal CLIP: DiffusionDB](/examples/multimodal_clip_diffusiondb/) | <a href="https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_clip_diffusiondb/main.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> [![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)](./examples/multimodal_clip_diffusiondb/main.py) [![LLM](https://img.shields.io/badge/local-llm-green)](#) [![beginner](https://img.shields.io/badge/beginner-B5FF33)](#)| [![Ghost](https://img.shields.io/badge/ghost-000?style=for-the-badge&logo=ghost&logoColor=%23F7DF1E)](https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/)|
| [Multimodal CLIP: Youtube videos](/examples/multimodal_video_search/) | <a href="https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_video_search/main.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> [![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)](./examples/multimodal_video_search/main.py) [![LLM](https://img.shields.io/badge/local-llm-green)](#) [![beginner](https://img.shields.io/badge/beginner-B5FF33)](#)|[![Ghost](https://img.shields.io/badge/ghost-000?style=for-the-badge&logo=ghost&logoColor=%23F7DF1E)](https://blog.lancedb.com/multi-modal-ai-made-easy-with-lancedb-clip-5aaf8801c939/)|
| [Cambrian-1: Vision centric exploration of images](https://www.kaggle.com/code/prasantdixit/cambrian-1-vision-centric-exploration-of-images/) | [![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://www.kaggle.com/code/prasantdixit/cambrian-1-vision-centric-exploration-of-images/) [![LLM](https://img.shields.io/badge/local-llm-green)](#) [![intermediate](https://img.shields.io/badge/intermediate-FFDA33)](#)| [![Ghost](https://img.shields.io/badge/ghost-000?style=for-the-badge&logo=ghost&logoColor=%23F7DF1E)](https://blog.lancedb.com/cambrian-1-vision-centric-exploration/)|
| [Multimodal Jina CLIP-v2: Food Search](/examples/multimodal_jina_clipv2/) | <a href="https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_jina_clipv2/main.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> [![Python](https://img.shields.io/badge/python-3670A0?style=for-the-badge&logo=python&logoColor=ffdd54)](./examples/multimodal_jina_clipv2/main.py) [![beginner](https://img.shields.io/badge/beginner-B5FF33)](#)||
||||

### RAG
Expand Down
29 changes: 29 additions & 0 deletions examples/multimodal_jina_clipv2/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,29 @@
# Multimodal Search Engine with Jina-CLIP v2 and LanceDB

Welcome to the **Multimodal Search Engine** project! It uses [Jina-CLIP v2](https://jina.ai/news/jina-clip-v2-multilingual-multimodal-embeddings-for-text-and-images/) and [LanceDB](https://lancedb.dev)
to enable robust search across both text and image data in 89 languages.

---

## Features

- **Multimodal Search**: Search across both image and text inputs.
- **Multilingual Support**: Supports 89 languages for text queries and captions.
- **Efficient Retrieval**: Powered by [LanceDB](https://lancedb.dev), ensuring low latency and high throughput.
- **Matryoshka Representations**: Enables hierarchical embedding structures for fine-grained similarity.
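
Matryoshka-style truncation means keeping only a leading prefix of an embedding and re-normalizing it, trading a little accuracy for smaller storage and faster search. A minimal sketch of the idea (the 1024-dim vector below is a random stand-in for a real Jina-CLIP v2 embedding, not model output):

```python
import numpy as np

def truncate_embedding(vec, dim):
    """Keep the leading `dim` dimensions and re-normalize to unit length.

    Matryoshka-trained models pack most of the useful signal into the
    leading dimensions, so the truncated vector remains a cheap but
    usable stand-in for the full embedding.
    """
    v = np.asarray(vec, dtype=float)[:dim]
    return v / np.linalg.norm(v)

# Random stand-in for a 1024-dim Jina-CLIP v2 embedding.
full = np.random.default_rng(1).normal(size=1024)
full /= np.linalg.norm(full)

small = truncate_embedding(full, 128)
print(small.shape)  # (128,)
```

A table of truncated vectors indexes and searches faster; the full-length vectors can still be kept around for re-ranking the top hits.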

---

## How It Works

- **Input**: Accepts either a text query or an image.
- **Encoding**: Jina-CLIP v2 maps text and images into a shared embedding space.
- **Storage**: The embeddings are stored in LanceDB for efficient retrieval.
- **Search**: A query is embedded, matched against the most similar stored embeddings, and the top results are returned.
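
The matching step above can be sketched with stand-in vectors. In the real example, Jina-CLIP v2 produces the embeddings and LanceDB stores and searches them; plain NumPy is used here (with random vectors and hypothetical `dish_*` captions) so the snippet runs without a model download:

```python
import numpy as np

# Stand-in "embeddings": in the real pipeline these come from Jina-CLIP v2,
# which maps text and images into one shared vector space.
rng = np.random.default_rng(0)
db_vectors = rng.normal(size=(5, 8))            # 5 stored items, 8 dims
db_vectors /= np.linalg.norm(db_vectors, axis=1, keepdims=True)
captions = [f"dish_{i}" for i in range(5)]      # hypothetical captions

def search(query_vec, k=2):
    """Return the k captions whose embeddings are most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = db_vectors @ q                     # cosine similarity
    top = np.argsort(-scores)[:k]
    return [captions[i] for i in top]

# A query near stored item 3 should retrieve its caption first.
query = db_vectors[3] + 0.01 * rng.normal(size=8)
print(search(query))
```

LanceDB replaces the brute-force `argsort` with an on-disk vector index, so the same nearest-neighbor lookup stays fast at millions of rows.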

---

## Code

Colab walkthrough <a href="https://colab.research.google.com/github/lancedb/vectordb-recipes/blob/main/examples/multimodal_jina_clipv2/main.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a>

---