# Retrieval Augmented Generation (RAG)

RAG LLMOps using OpenAI, MLflow, FastAPI, and RAGAS (evaluation).

Step I: Install all the requirements and execute the RAG_mlflow.py file.

Your LLM run is now registered in MLflow along with its evaluation metrics.

*(Screenshot: MLflow run with the logged evaluation metrics)*

RAGAS evaluation metrics are logged in MLflow; refer to the code for further details.
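As a rough illustration (not the repo's exact code), RAGAS scores for a small evaluation set could be computed and logged to MLflow along the lines of the sketch below; the experiment name, sample records, and metric choice are assumptions:

```python
# Illustrative sketch only: the experiment name, sample data, and metric
# choice are assumptions, not taken from RAG_mlflow.py.
import mlflow
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import faithfulness, answer_relevancy

# Hypothetical records produced by the RAG pipeline (question, generated
# answer, and the retrieved contexts used to generate it).
records = {
    "question": ["What does MLflow do?"],
    "answer": ["MLflow manages the machine learning lifecycle."],
    "contexts": [["MLflow is an open source platform for the ML lifecycle."]],
}

mlflow.set_experiment("rag-evaluation")  # assumed experiment name
with mlflow.start_run():
    # RAGAS calls the OpenAI API under the hood, so OPENAI_API_KEY must be set.
    result = evaluate(Dataset.from_dict(records),
                      metrics=[faithfulness, answer_relevancy])
    scores = result.to_pandas()
    # Log the aggregate score for each metric against the MLflow run.
    for metric in ["faithfulness", "answer_relevancy"]:
        mlflow.log_metric(metric, scores[metric].mean())
```

After the run finishes, the metrics appear on the run in the MLflow UI (`mlflow ui`).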

Step II: Execute app.py (`python app.py`) from the command line, then navigate to the FastAPI UI in your browser.

*(Screenshot: FastAPI Swagger UI)*

Click on 'Try it out' and enter your question.
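For orientation, a minimal app.py endpoint could look like the sketch below; the route name `/ask`, the model, and the port are assumptions, and the real app.py also injects retrieved context into the prompt, so refer to the repo for the actual implementation:

```python
# Minimal illustrative sketch of a FastAPI question-answering endpoint.
# Route, model, and port are assumptions; the real app adds retrieval.
from fastapi import FastAPI
from openai import OpenAI
import uvicorn

app = FastAPI(title="RAG demo")
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/ask")
def ask(question: str):
    # In the real app, retrieved context documents would be added to the prompt.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return {"answer": response.choices[0].message.content}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```

The interactive page with the 'Try it out' button is FastAPI's auto-generated Swagger UI, served at `/docs`.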

Step III: Model monitoring/tracing.

Navigate back to the MLflow UI and click on the 'Traces' tab. There you'll find that the question you asked in FastAPI has been logged along with its response.

*(Screenshot: MLflow 'Traces' tab showing the logged question and response)*

Click on the request ID to see more details about the run.

*(Screenshot: trace detail view for the selected request)*
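For reference, a hedged sketch of how tracing could be switched on in app.py (assuming a recent MLflow release with OpenAI autologging; the experiment name is illustrative):

```python
# Illustrative sketch: enabling MLflow tracing for the FastAPI app.
# Assumes a recent MLflow version with OpenAI autologging support.
import mlflow

mlflow.set_experiment("rag-fastapi")  # assumed experiment name
mlflow.openai.autolog()               # records each OpenAI call as a trace

# With autologging enabled before the endpoint handles requests, every
# question answered by the app shows up under the 'Traces' tab, with the
# request and response payloads attached.
```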