On Leakage of Code Generation Evaluation Datasets

Authors: Matton, Alexandre and Sherborne, Tom and Aumiller, Dennis and Tommasone, Elena and Alizadeh, Milad and He, Jingyi and Ma, Raymond and Voisin, Maxime and Gilsenan-McMahon, Ellen and Gallé, Matthias

Abstract:

In this paper, we consider contamination by code generation test sets, in particular in their use in modern large language models. We discuss three possible sources of such contamination and show findings supporting each of them: (i) direct data leakage, (ii) indirect data leakage through the use of synthetic data, and (iii) overfitting to evaluation sets during model selection. To address this, we release Less Basic Python Problems (LBPP): an uncontaminated new benchmark of 161 prompts with their associated Python solutions. LBPP is released at https://huggingface.co/datasets/CohereForAI/lbpp.
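Since the benchmark is hosted on the Hugging Face Hub, it can presumably be loaded with the standard `datasets` library. The sketch below assumes the default configuration and a `test` split; the actual split and column names may differ from the published schema.

```python
# Minimal sketch: loading LBPP via the Hugging Face `datasets` library.
# The split name ("test") and the act of printing raw examples are assumptions,
# not details confirmed by the paper abstract above.
from datasets import load_dataset

lbpp = load_dataset("CohereForAI/lbpp", split="test")  # assumed split name

# Inspect a few examples to see the prompt/solution fields as published.
for example in lbpp.select(range(3)):
    print(example)
```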

Link: Read Paper

Labels: code generation, program synthesis, benchmark