Author Affiliations: Wuzhou Univ, Guangxi Key Lab Machine Vis & Intelligent Control, Wuzhou 543003, Peoples R China; Macau Univ Sci & Technol, Fac Humanities & Arts, Macau 999078, Peoples R China; Macau Univ Sci & Technol, Fac Innovat Engn, Macau 999078, Peoples R China
Publication: JOURNAL OF SUPERCOMPUTING (J Supercomput)
Year/Volume/Issue: 2025, Vol. 81, Issue 8
Pages: 1-23
Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]
Funding: We are grateful to the anonymous reviewers for their valuable comments.
Keywords: Source code optimization; Parameter-efficient training; Instruction fine-tuning
Abstract: Source code optimization enables developers to enhance programs at the human-computer interaction level, thereby improving development efficiency and product quality. With the rise of large language models (LLMs), fine-tuning and prompting have become the mainstream solutions for this task. However, both approaches present challenges: fine-tuning is resource-intensive due to the exponential growth in the scale of LLMs, whereas prompting, although resource-efficient, struggles to generate high-quality optimized programs. In this paper, we present CodeOPT, a LoRA-driven approach for fine-tuning LLMs to optimize C/C++ code. Instead of fine-tuning all LLM parameters, CodeOPT leverages LoRA to fine-tune only an optimization adapter, significantly reducing the number of trainable parameters. Additionally, we incorporate prior optimization knowledge during fine-tuning and introduce optimization-based instruction fine-tuning, enabling LLMs to effectively learn from external knowledge sources to improve program optimization. To evaluate the effectiveness of CodeOPT, we benchmarked it against several baselines on challenging programming tasks drawn from different code-completion platforms. Experimental results demonstrate that CodeOPT outperforms all baselines, including the state of the art, while keeping modifications to the original program minimal.
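The following is a minimal sketch of the two ideas the abstract describes: LoRA-based adapter fine-tuning (training only a small low-rank adapter instead of all LLM parameters) and instruction-style examples that inject prior optimization knowledge. It uses the Hugging Face `transformers` and `peft` libraries; the base model name, adapter rank, target modules, prompt template, and the example itself are illustrative assumptions, not the paper's actual configuration.

```python
# Sketch of LoRA adapter fine-tuning for code optimization (assumptions noted inline).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

BASE_MODEL = "codellama/CodeLlama-7b-hf"  # assumed base LLM, not specified in the abstract

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Freeze the base model and train only a low-rank adapter, mirroring the
# "optimization adapter" idea: far fewer trainable parameters than full fine-tuning.
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                  # adapter rank (hypothetical value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections (assumed placement)
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()         # typically well under 1% of all parameters

# Hypothetical instruction-tuning example: the instruction carries prior
# optimization knowledge, the input is the unoptimized C++ code, and the
# target output is a minimally modified optimized version.
example = {
    "instruction": "Optimize this C++ snippet; prefer reserve() to avoid "
                   "repeated vector reallocations.",
    "input": "std::vector<int> v; for (int i = 0; i < n; ++i) v.push_back(i);",
    "output": "std::vector<int> v; v.reserve(n); "
              "for (int i = 0; i < n; ++i) v.push_back(i);",
}
prompt = (f"### Instruction:\n{example['instruction']}\n\n"
          f"### Input:\n{example['input']}\n\n### Response:\n")
# Standard causal-LM fine-tuning on (prompt + output) would follow, e.g. with
# transformers.Trainer; only the LoRA adapter weights receive gradient updates.
```

Under this setup, serving the optimizer amounts to loading the frozen base model plus the small adapter, which is what makes the approach parameter-efficient.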