No Need to Lift a Finger Anymore? Assessing the Quality of Code Generation by ChatGPT

Authors: Zhijie Liu, Yutian Tang, Xiapu Luo, Yuming Zhou, and Liang Feng Zhang

Abstract:

Large language models (LLMs) have demonstrated impressive capabilities across various natural language processing (NLP) tasks, such as machine translation, question answering, and summarization. LLMs are also highly valuable in supporting software engineering tasks, particularly code generation. Automatic code generation is the process of producing source code or executable code from given specifications or requirements, improving developer productivity. In this study, we perform a systematic empirical assessment of the quality of code generated by *ChatGPT*, a recent state-of-the-art LLM product. We leverage 728 algorithm problems in five languages (i.e., C, C++, Java, Python, and JavaScript) and 18 CWEs with 54 code scenarios for the code generation task. Our evaluation encompasses a comprehensive analysis of the code snippets generated by *ChatGPT*, focusing on three critical aspects: correctness, complexity, and security. We also specifically investigate *ChatGPT*'s ability to engage in a multi-round fixing process (i.e., its dialogue ability, in which users chat with *ChatGPT* to fix generated buggy code) to facilitate code generation. By delving into the generated code and examining the experimental results, this work provides valuable insights into *ChatGPT*'s performance on code generation tasks across these three aspects. The experimental results demonstrate that (1) *ChatGPT* is better at generating functionally correct code for problems from before 2021 than for problems from after 2021 across languages, with a 48.14% advantage in *Accepted* rate on the judging platform, but its ability to directly fix erroneous code through the multi-round fixing process to achieve correct functionality is relatively weak; (2) the distribution of cyclomatic and cognitive complexity levels for code snippets varies across languages, and the multi-round fixing process generally preserves or increases the complexity of code snippets; (3) in algorithm scenarios with C, C++, and Java, and in CWE scenarios with C and Python3, the code generated by *ChatGPT* contains relevant vulnerabilities; however, the multi-round fixing process for vulnerable code snippets shows promising results, with more than 89% of the vulnerabilities successfully addressed; and (4) code generation may be affected by *ChatGPT*'s non-determinism, resulting in variations in the functional correctness, complexity, and security of the generated code snippets. Overall, our findings uncover potential issues and limitations of *ChatGPT*-based code generation and lay the groundwork for improving AI- and LLM-based code generation techniques.
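To make the multi-round fixing process concrete, here is a minimal sketch, not the paper's actual harness: `ask_llm` and `run_tests` are hypothetical stand-ins for a chat-LLM call and a judge that reports failing tests.

```python
# Minimal sketch of a multi-round fixing loop (illustrative only).
# `ask_llm` and `run_tests` are hypothetical placeholders, not the
# evaluation harness used in the paper.

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around a chat LLM; returns a code snippet."""
    raise NotImplementedError  # replace with a real API call

def run_tests(code: str) -> list[str]:
    """Hypothetical judge: returns failure messages (empty list = accepted)."""
    raise NotImplementedError  # replace with a real test runner

def generate_with_fixing(problem: str, max_rounds: int = 5) -> tuple[str, bool]:
    """Generate code, then repeatedly feed judge failures back to the model."""
    code = ask_llm(f"Solve the following problem:\n{problem}")
    for _ in range(max_rounds):
        failures = run_tests(code)
        if not failures:
            return code, True  # accepted by the judge
        # Feed the errors back, mimicking a user chatting with the model.
        errors = "\n".join(failures)
        code = ask_llm(
            f"The following code fails with these errors:\n{errors}\n\n"
            f"Code:\n{code}\n\nPlease fix it."
        )
    return code, not run_tests(code)  # final verdict after the last round
```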

Link: Read Paper

Labels: code generation, program synthesis, empirical study