From fce937d52893b237a5334b8572fb96a3a657e769 Mon Sep 17 00:00:00 2001
From: Ruan Chaves Rodrigues
Date: Sun, 10 Sep 2023 20:39:16 -0400
Subject: [PATCH] relocate paragraph

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index b0b081b..c9e9fa1 100644
--- a/README.md
+++ b/README.md
@@ -12,8 +12,6 @@ The **Napolab** is your go-to collection of Portuguese datasets with the followi
 * 👩‍🔧 **Human**: Expert human annotations only. No automatic or unreliable annotations.
 * 🎓 **General**: No domain-specific knowledge or advanced preparation is needed to solve dataset tasks.
 
-Napolab is structured similarly to benchmarks like GLUE and [PLUE](https://github.com/ju-resplande/PLUE). All datasets come with either two or three fields: `'sentence1', 'sentence2', 'label'` or just `'sentence1', 'label'`. To evaluate LLMs using Napolab, you simply need to design prompts to get label predictions from the model.
-
 Napolab currently includes the following datasets:
 
 | | | |
@@ -48,6 +46,8 @@ benchmark = napolab["datasets"]
 translated_benchmark = napolab["translations"]
 ```
 
+Napolab is structured similarly to benchmarks like GLUE and [PLUE](https://github.com/ju-resplande/PLUE). All datasets come with either two or three fields: `'sentence1', 'sentence2', 'label'` or just `'sentence1', 'label'`. To evaluate LLMs using Napolab, you simply need to design prompts to get label predictions from the model.
+
 ## 🤖 Models
 
 We've made several models, fine-tuned on this benchmark, available on Hugging Face Hub:
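
The paragraph this patch relocates describes the evaluation protocol: every Napolab example carries `sentence1`, optionally `sentence2`, and `label`, so evaluating an LLM reduces to templating those fields into a prompt and mapping the reply back to a label. A minimal sketch of that flow follows; `query_llm` is a hypothetical stand-in for any model client (prompt string in, text reply out), and the label names are illustrative rather than taken from a specific Napolab dataset:

```python
from typing import Callable, List

def build_prompt(example: dict, labels: List[str]) -> str:
    """Template a Napolab-style example into a classification prompt."""
    lines = [f"Sentence 1: {example['sentence1']}"]
    if "sentence2" in example:  # pair tasks carry a second sentence
        lines.append(f"Sentence 2: {example['sentence2']}")
    lines.append(f"Answer with exactly one of: {', '.join(labels)}.")
    return "\n".join(lines)

def predict_label(example: dict, labels: List[str],
                  query_llm: Callable[[str], str]) -> str:
    """Map the model's free-text reply onto one of the allowed labels."""
    reply = query_llm(build_prompt(example, labels)).strip().lower()
    # Substring matching is a simple fallback for off-template replies;
    # default to the first label when nothing matches.
    return next((label for label in labels if label in reply), labels[0])
```

The substring fallback is one simple design choice among many; constrained decoding or a stricter output parser would be drop-in replacements for `predict_label`.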