
This project (RAG) focuses on operationalizing LLMs by integrating OpenAI, MLflow, FastAPI, and RAGAS for evaluation. It allows users to deploy and manage LLMs, track model runs, and log evaluation metrics in MLflow. The project also features MLflow traces, which log all user inputs, responses, retrieved contexts, and other essential metrics.

Retrieval Augmented Generation (RAG)

RAG LLMOps using OpenAI, MLflow, FastAPI, and RAGAS (evaluation)

Step I: Install the requirements and run the RAG_mlflow.py file.

Your LLM run is now registered in MLflow along with its evaluation metrics.


The RAGAS evaluation metrics are logged in MLflow; refer to the code for further details.
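To make this concrete, here is a minimal sketch of evaluating a RAG answer with RAGAS and logging the scores to MLflow. It is illustrative only, not the project's RAG_mlflow.py: the sample data is made up, and it assumes a ragas release whose evaluate() result behaves like a dict of metric name to score (RAGAS itself calls OpenAI, so OPENAI_API_KEY must be set).

```python
# Hedged sketch: evaluate one hypothetical RAG sample with RAGAS and log
# the aggregate scores to MLflow. Not this project's actual code.
import mlflow
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, faithfulness

# One made-up evaluation sample: question, generated answer, and the
# contexts that were retrieved for it.
samples = Dataset.from_dict({
    "question": ["What does MLflow track?"],
    "answer": ["MLflow tracks runs, parameters, metrics, and artifacts."],
    "contexts": [["MLflow records runs along with parameters, metrics, and artifacts."]],
})

# RAGAS scores each sample; the result maps metric names to scores.
scores = evaluate(samples, metrics=[faithfulness, answer_relevancy])

with mlflow.start_run(run_name="rag_evaluation"):
    for name, value in scores.items():
        mlflow.log_metric(name, value)
```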

Step II: Run app.py (python app.py) from the command line, then navigate to the FastAPI UI in your browser.


Click "Try it out" and enter your question.
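For orientation, here is a minimal sketch of what an app.py like this can look like; the /ask route, Query model, and answer_question() helper are hypothetical stand-ins for the project's own chain.

```python
# Hedged sketch of a FastAPI app serving a RAG chain. The endpoint and
# helper names are illustrative, not this project's actual code.
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="RAG LLMOps")

class Query(BaseModel):
    question: str

def answer_question(question: str) -> str:
    # Placeholder for the retrieval + OpenAI generation chain.
    return f"(stub) answer to: {question}"

@app.post("/ask")
def ask(query: Query) -> dict:
    return {"answer": answer_question(query.question)}

if __name__ == "__main__":
    # Swagger UI (the "Try it out" page) is served at /docs.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```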

Step III: Model monitoring/tracing

Navigate back to the MLflow UI and click the 'Traces' tab. There you'll find the question you asked in FastAPI logged along with its response.


Click on the request ID to see more details about the run.
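As a rough illustration of how such traces are produced, here is a minimal sketch using MLflow's tracing decorator (available in MLflow 2.14 and later); the retrieval and generation steps are stubs, not this project's pipeline.

```python
# Hedged sketch of MLflow tracing (requires MLflow >= 2.14). Every call to
# a @mlflow.trace-decorated function is recorded with its inputs, outputs,
# and timing, and appears under the 'Traces' tab in the MLflow UI.
import mlflow

@mlflow.trace
def retrieve(question: str) -> list[str]:
    return ["stub context"]  # placeholder for the vector-store lookup

@mlflow.trace
def rag_pipeline(question: str) -> str:
    contexts = retrieve(question)  # nested call shows up as a child span
    return f"(stub) answer using {len(contexts)} context(s)"

rag_pipeline("What gets logged in a trace?")
```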
