SOTA metrics for evaluating Retrieval Augmented Generation (RAG)
ragas is a framework that helps you evaluate your Retrieval Augmented Generation (RAG) pipelines. RAG denotes a class of LLM applications that use external data to augment the LLM's context. There are existing tools and frameworks that help you build these pipelines, but evaluating them and quantifying your pipeline's performance can be hard. This is where ragas (RAG Assessment) comes in.
ragas provides you with tools based on the latest research for evaluating LLM-generated text to give you insights about your RAG pipeline. ragas can be integrated with your CI/CD to provide continuous checks and ensure performance.
pip install ragas
If you want to install from source:
git clone https://github.com/explodinggradients/ragas && cd ragas
pip install -e .
This is a small example program you can run to see ragas in action!
from ragas import evaluate
from datasets import Dataset
import os
os.environ["OPENAI_API_KEY"] = "your-openai-key"
# prepare your huggingface dataset in the format
# Dataset({
# features: ['question','contexts','answer'],
# num_rows: 25
# })
dataset: Dataset  # your evaluation dataset, prepared in the format above
results = evaluate(dataset)
# {'ragas_score': 0.860, 'context_relevancy': 0.817,
# 'faithfulness': 0.892, 'answer_relevancy': 0.874}
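If you don't already have your data in this format, one way to put it together is with Dataset.from_dict from the datasets library (a minimal sketch; the single row below is a made-up placeholder, and you would fill in outputs from your own pipeline):

from datasets import Dataset

# illustrative placeholder row: replace with real questions, the contexts your
# retriever returned, and the answers your generator produced
data = {
    "question": ["What is the capital of France?"],
    "contexts": [["Paris is the capital and largest city of France."]],
    "answer": ["The capital of France is Paris."],
}
dataset = Dataset.from_dict(data)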
If you want a more in-depth explanation of core components, check out our quick-start notebook
Ragas measures your pipeline's performance along two dimensions:
- Faithfulness: measures the information consistency of the generated answer against the given context. Any claims made in the answer that cannot be deduced from the context are penalized.
- Relevancy: measures how relevant the retrieved contexts and the generated answer are to the question. The presence of extra or redundant information is penalized.
Through repeated experiments, we have found that the quality of a RAG pipeline is highly dependent on these two dimensions. The final ragas_score is the harmonic mean of these two factors.
To read more about our metrics, check out the docs.
If you want to get more involved with Ragas, check out our discord server. It's a fun community where we geek out about LLMs, retrieval, production issues, and more.
We track very basic usage metrics to help us figure out what our users want, what is working, and what's not. As a young startup, we have to be brutally honest about this, which is why we are tracking these metrics. But as an Open Startup, we open-source all the data we collect. You can read more about this here. Ragas does not track any information that can be used to identify you or your company. You can take a look at exactly what we track in the code.
To disable usage tracking, set the RAGAS_DO_NOT_TRACK flag to true.
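For example, you can set the flag from Python before ragas is imported (a minimal sketch; exporting RAGAS_DO_NOT_TRACK=true in your shell has the same effect):

import os

# opt out of usage tracking before importing ragas
os.environ["RAGAS_DO_NOT_TRACK"] = "true"

from ragas import evaluate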
- Why harmonic mean?
Harmonic mean penalizes extreme values. For example, if your generated answer is fully factually consistent with the context (faithfulness = 1) but is not relevant to the question (relevancy = 0), a simple average would give you a score of 0.5, but a harmonic mean will give you 0.0.
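Written out as a quick sketch (plain arithmetic, independent of the ragas API):

faithfulness, relevancy = 1.0, 0.0

# a simple average hides the complete failure on relevancy
simple_average = (faithfulness + relevancy) / 2  # 0.5

# the harmonic mean drops to zero as soon as either dimension is zero
denominator = faithfulness + relevancy
harmonic_mean = 2 * faithfulness * relevancy / denominator if denominator else 0.0  # 0.0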
- How to use Ragas to improve your pipeline?
"Measurement is the first step that leads to control and eventually to improvement" - James Harrington
Here we assume that you already have your RAG pipeline ready. When it comes to RAG pipelines, there are mainly two parts: the retriever and the generator. A change to either of them will affect your pipeline's quality.
- First, decide on one parameter that you're interested in adjusting, for example the number of retrieved documents, K.
- Collect a set of sample prompts (a minimum of 20) to form your test set.
- Run your pipeline on the test set before and after the change. Each time, record the prompts along with the retrieved contexts and the generated output.
- Run the ragas evaluation on each of them to generate evaluation scores.
- Compare the scores and you will know how much the change has affected your pipeline's performance (a minimal sketch of this comparison follows below).
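Putting the steps above together, the comparison might look like the sketch below. Here dataset_before and dataset_after are assumed to be Hugging Face Datasets with 'question', 'contexts' and 'answer' columns, built by running your pipeline on the same test prompts before and after the change:

from ragas import evaluate
from datasets import Dataset

def compare_runs(dataset_before: Dataset, dataset_after: Dataset) -> None:
    # evaluate both runs of the test set and print the score dictionaries side by side
    print("before change:", evaluate(dataset_before))
    print("after change: ", evaluate(dataset_after))

# compare_runs(dataset_before, dataset_after)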