From 2dd824d30197d53f41b3c5db72b59238152bf9ef Mon Sep 17 00:00:00 2001
From: Yaroslav Golubev
Date: Thu, 6 Jun 2024 02:18:33 +0200
Subject: [PATCH] Update README.md

---
 module_summarization/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/module_summarization/README.md b/module_summarization/README.md
index fb1e593..e2b7b9c 100644
--- a/module_summarization/README.md
+++ b/module_summarization/README.md
@@ -19,7 +19,7 @@ We provide dependencies via the [Poetry](https://python-poetry.org/docs/) manage
 
 #### Generation
 
-In order to generate your predictions, add your parameters in the [config](configs) directory and run:
+In order to generate your predictions, add your parameters in the [configs](configs) directory and run:
 * `poetry run python chatgpt.py --config="configs/config_openai.yaml"` if you use [OpenAI](https://platform.openai.com/docs/overview) models;
 * `poetry run python togetherai.py --config="configs/config_together.yaml"` if you use [Together.AI](https://www.together.ai/) models.
 
@@ -34,7 +34,7 @@ To compare predicted and ground truth metrics we introduce the new metric based
 CompScore = \frac{ P(pred | LLM(code, pred, gold)) + P(pred | LLM(code, gold, pred))}{2}
 ```
 
-In order to evaluate predictions, add your parameters in the (config)[configs/config_eval.yaml] and run:
+In order to evaluate predictions, add your parameters in the [config](configs/config_eval.yaml) and run:
 * `poetry run python metrics.py --config="configs/config_eval.yaml"`
 
 The script will evaluate the predictions and save the results into the `results.json` file.