feat: add faithfulness metric based on Bespoke Labs MiniCheck model #1269

Closed
wants to merge 25 commits into from

Conversation

vutrung96

This PR adds a faithfulness metric based on the Bespoke-MiniCheck-7B model.

Users can use the metric either by calling the model through the Bespoke Labs API, or by running the model locally.

I verified that the metric works in this Colab notebook: https://colab.research.google.com/drive/1OcL8-LkeKp-_7-_8_l7ysO8O6_AIz6jd#scrollTo=Jbg0gon7uXII.

@dosubot dosubot bot added the size:L This PR changes 100-499 lines, ignoring generated files. label Sep 10, 2024
@shahules786
Member

Thanks for the PR @vutrung96 , I will take a look at it shortly.

@vutrung96
Author

@shahules786 thanks! One thing I'm running into: when I run `make type`, I get an import-not-found error for this line:

"import einops as einops"

I think it's because CI installs ragas with a plain `pip install`, which doesn't pull in the optional dependencies.

@jjmachan
Member

@vutrung96 you can add that as part of the dev dependencies in requirements/dev.txt?

@vutrung96
Author

@jjmachan thanks for the suggestion! I've added the dependencies to requirements/dev.txt.
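
For readers hitting the same `make type` failure, the change amounts to listing the optional import in the dev requirements file. An illustrative fragment (the rest of the file's contents are not shown here):

```
# requirements/dev.txt -- illustrative fragment, not the full file
einops
```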

@shahules786 shahules786 self-requested a review September 12, 2024 14:48
@vutrung96
Author

@shahules786 ping on the review. Please let me know if you need any clarifications :)

@shahules786
Member

@vutrung96 I'm rethinking and reworking parts of the faithfulness metric. I'm also considering the best way to let users plug in any NLI model without adding code to ragas.
Please bear with me on this.

@shahules786
Member

shahules786 commented Sep 21, 2024


Hi @vutrung96 , thanks again for the PR.

We want to enable developers to use any model of their choice, regardless of the metric they use. There are two types of models in this context:

  1. General-purpose models (e.g., OpenAI, Anthropic, LLaMA, etc.)
  2. Specialized models that are limited to one or more tasks (e.g., Vectara HHEM, Bespoke)

For the former, we already support the use of any model with ragas. For the latter, either we or the user currently has to modify ragas's code to integrate the specialized model. It is fine if the user does this in their own copy of ragas, but merging that code into the main ragas repository transfers the responsibility of maintaining and updating it to us (as is the case with this PR), which is not something we can take on.

Therefore, we are introducing components that allow developers to plug in their specialized models for use with metrics. This is still experimental but can be integrated into ragas after a few iterations and feedback. #1339


In your case, I think the model can be used as a component by passing it as a HuggingfacePipeline to LLMComponent. Please take a look at it, and perhaps we can add documentation to help users with this integration.
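
As a rough illustration of that direction, here is a minimal sketch of keeping a specialized NLI model behind a thin adapter instead of inside ragas. The `check_faithfulness` helper and its label names are illustrative assumptions, not ragas or MiniCheck API; in practice the `LLMComponent` wiring from #1339 would sit where the adapter is.

```python
# Hedged sketch: keep a specialized NLI model (e.g. Bespoke-MiniCheck-7B)
# behind a thin adapter so it can be swapped without modifying ragas.
# `check_faithfulness` and the label names are illustrative assumptions.

def check_faithfulness(nli_pipeline, claim: str, context: str) -> float:
    """Return a support probability for `claim` given `context`.

    `nli_pipeline` is any callable that, like a HuggingFace
    text-classification pipeline, returns [{"label": ..., "score": ...}].
    """
    result = nli_pipeline(f"{context} </s> {claim}")[0]
    # Treat a "supported"/"entailment"-style label as faithful; otherwise
    # report the complementary probability.
    if result["label"].lower() in ("supported", "entailment", "yes"):
        return result["score"]
    return 1.0 - result["score"]

# The real pipeline might be loaded along these lines (not executed here,
# since it downloads a 7B model):
#   from transformers import pipeline
#   nli = pipeline("text-classification",
#                  model="bespokelabs/Bespoke-MiniCheck-7B",
#                  trust_remote_code=True)
```

Because the adapter only assumes a pipeline-shaped callable, any NLI backend (local HuggingFace pipeline, Bespoke Labs API client, etc.) can be dropped in without touching the metric code.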


3 participants