Step I: Install all the requirements and run the RAG_mlflow.py file.
This registers your LLM run in MLflow together with its evaluation metrics. The RAGAS evaluation metrics are logged to MLflow; refer to the code for further details.
Step II: Run app.py (python app.py) from the command line, then navigate to the FastAPI Swagger UI in your browser.
Click "Try it out" and enter your question.
Step III: Model monitoring/tracing.
Navigate back to the MLflow UI and click the "Traces" tab. There you'll find the question you asked through FastAPI logged along with its response.
Click the request ID to see more details about the run.