GRAG - Good RAG

GRAG is a simple Python package that provides an easy end-to-end solution for implementing Retrieval-Augmented Generation (RAG).

The package offers an easy way to run various LLMs locally, thanks to LlamaCpp, and supports vector stores such as Chroma and DeepLake. It also makes it straightforward to add support for other vector stores.

Diagram of a basic RAG pipeline
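The diagram boils down to two stages: retrieve the chunks most relevant to a query from a vector store, then pass them to an LLM as context for generation. Below is a minimal, illustrative sketch of that pattern using LangChain and Chroma (both listed under Requirements). It is not GRAG's own API, and the embedding model name and example texts are placeholders.

```python
# Illustrative retrieve-then-generate sketch (not GRAG's API).
# Assumes langchain-community, chromadb and sentence-transformers are installed.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import Chroma

# 1. Embed a few documents and index them in a local Chroma collection.
docs = [
    "GRAG provides an end-to-end RAG pipeline.",
    "Chroma and DeepLake are supported vector stores.",
]
embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")  # placeholder model
store = Chroma.from_texts(docs, embedding=embeddings)

# 2. Retrieve the chunks most relevant to the query.
query = "Which vector stores does GRAG support?"
context = store.similarity_search(query, k=2)

# 3. Build a prompt with the retrieved context; in a full pipeline this prompt
#    would be handed to an LLM (e.g. a local model served through LlamaCpp).
prompt = (
    "Answer using the context below.\n\n"
    + "\n".join(d.page_content for d in context)
    + f"\n\nQuestion: {query}"
)
print(prompt)
```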

Project Overview

  • A ready-to-deploy RAG pipeline for document retrieval.
  • Basic GUI (Under Development)
  • Evaluation Suite (Under Development)
  • RAG enhancement using Graphs (Under Development)

Getting Started

To run the project, follow the instructions below.

Further customization can be made in the config file, src/config.ini (see the sketch after the installation steps below).

  • git clone the repository
  • Change directory into the cloned repository, then run pip install .
  • For development installs: pip install -e .
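This README only mentions the host and port keys under the chroma section of src/config.ini; the snippet below is a minimal sketch of reading them with the standard-library configparser, and any other sections or keys in the file are project-specific.

```python
# Minimal sketch: reading GRAG's config file with the standard library.
# Only the [chroma] host/port keys are mentioned in this README.
import configparser

config = configparser.ConfigParser()
config.read("src/config.ini")

chroma_host = config.get("chroma", "host", fallback="localhost")
chroma_port = config.getint("chroma", "port", fallback=8000)
print(f"Chroma server expected at {chroma_host}:{chroma_port}")
```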

Requirements

Required packages include (refer to pyproject.toml):

  • PyTorch
  • LangChain
  • Chroma
  • Unstructured.io
  • sentence-embedding
  • instructor-embedding

LLM Models

To quantize a model, run: python -m grag.quantize.quantize

For more details, refer to ./llm_quantize/readme.md.

Tested models:

  1. Llama-2 7B, 13B
  2. Mixtral 8x7B
  3. Gemma 7B

Model Compatibility

Refer to llama.cpp Supported Models (under Description) for a list of compatible models.

Supported Vector Databases

1. Chroma

Since Chroma is a client-server vector database, make sure the server is running.

  • To run Chroma locally, move to src/scripts and run source run_chroma.sh. By default, the server runs on port 8000.
  • If Chroma is not running locally, change host and port under chroma in src/config.ini (see the connectivity check below).
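As a quick way to confirm the server is reachable at the configured host and port, the sketch below uses the chromadb client directly; the localhost/8000 values are the defaults mentioned above, and this check is an assumption about your setup rather than something GRAG requires.

```python
# Quick connectivity check against a running Chroma server.
# Host and port should match the values under [chroma] in src/config.ini.
import chromadb

client = chromadb.HttpClient(host="localhost", port=8000)
print(client.heartbeat())  # returns a nanosecond timestamp if the server is up
```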

2. DeepLake

For more information, refer to the DeepLake documentation.
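As with Chroma, LangChain ships a DeepLake vector-store wrapper; the sketch below indexes a single string in a local dataset and runs a similarity search. The dataset path and embedding model are placeholders, and this illustrates the integration in general rather than GRAG's own API.

```python
# Illustrative DeepLake usage through LangChain (not GRAG's API).
# Requires the deeplake package; the dataset path is a local placeholder.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import DeepLake

embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")  # placeholder model
store = DeepLake.from_texts(
    ["GRAG supports DeepLake as a vector store."],
    embedding=embeddings,
    dataset_path="./deeplake_store",  # local dataset; a hub:// path also works
)
print(store.similarity_search("Which vector store?", k=1))
```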