MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation
Abstract
Large language models have demonstrated the ability to generate both natural language and programming language text. Although contemporary code generation models are trained on corpora with several programming languages, they are typically tested on monolingual benchmarks: the most widely used code generation benchmarks target only Python, so there is little quantitative evidence of how these models perform on other programming languages. We propose MultiPL-E, a system for translating unit test-driven code generation benchmarks to new languages. Using MultiPL-E, we create the first massively multilingual code generation benchmark by extending two popular Python benchmarks, HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021), to 18 additional programming languages that encompass a range of programming paradigms and popularity. With these new parallel benchmarks, we evaluate the multi-language performance of three state-of-the-art code generation models: Codex (Chen et al., 2021), CodeGen (Nijkamp et al., 2022), and InCoder (Fried et al., 2022). We find that Codex matches or even exceeds its Python performance on several other languages. The range of programming languages represented in MultiPL-E allows us to explore the impact of language frequency and language features on model performance. Finally, the MultiPL-E approach of compiling code generation benchmarks to new programming languages is both scalable and extensible, making it straightforward to evaluate new models, benchmarks, and languages.
Repository Citation
Cassano, Federico, John Gouwar, Daniel Nguyen, et al. 2023. "MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation." IEEE Transactions on Software Engineering 49(7): 3675–3691.
Publisher
IEEE Computer Society
Publication Date
July 1, 2023
Publication Title
IEEE Transactions on Software Engineering
Department
Computer Science
Document Type
Article
DOI
https://dx.doi.org/10.1109/TSE.2023.3267446
Keywords
B.2.3 Reliability, Testing, and Fault-Tolerance; I.5.1.d Neural nets
Language
English
Format
text