ISBN (digital): 9783642145339
ISBN (print): 9783642145322
With the technical advancement of digital media and communication in recent years, there is widespread interest in digital entertainment. An emerging technical research area, edutainment, or educational entertainment, has been accepted as education using digital entertainment. Edutainment has been recognized as an effective way of learning through modern digital media tools, such as computers, games, mobile phones, televisions, and virtual reality applications, emphasizing the use of entertainment in the education domain.

The Edutainment conference series was established in 2006 and subsequently organized as a special event for researchers working in this new area of e-learning and digital entertainment. The main purpose of the Edutainment conferences is to facilitate discussion, presentation, and information exchange on scientific and technological developments in this new community. The conference series has become a valuable opportunity for researchers, engineers, and graduate students to communicate at these annual international events. The conference series includes plenary invited talks, workshops, tutorials, paper presentation tracks, and panel discussions.

The Edutainment conference series was initiated in Hangzhou, China in 2006. Following the success of the first event, the second (Edutainment 2007 in Hong Kong, China), third (Edutainment 2008 in Nanjing, China), and fourth editions (Edutainment 2009 in Banff, Canada) were organized. Edutainment 2010 was held during August 16–18, 2010 in Changchun, China. Two workshops were organized jointly with Edutainment 2010.
Large Language Models (LLMs) have shown impressive In-Context Learning (ICL) ability in code generation. LLMs take a prompt context consisting of a few demonstration examples and a new requirement as input, and output new programs without any parameter update. Existing studies have found that the performance of ICL-based code generation depends heavily on the quality of the demonstration examples, which has motivated research on demonstration selection: given a new requirement, a few demonstration examples are selected from a candidate pool, and LLMs are expected to learn the pattern hidden in these selected examples. Existing approaches mostly rely on heuristics or select examples at random. However, the distribution of randomly selected examples usually varies greatly, making the performance of LLMs less robust, while the heuristics retrieve examples by considering only the textual similarity of requirements, leading to sub-optimal results.

To fill this gap, we propose a Large language model-Aware selection approach for In-context-Learning-based code generation, named LAIL. LAIL uses LLMs themselves to select examples: it requires the LLM to label a candidate example as a positive or a negative example for a given requirement. Positive examples help LLMs generate correct programs, while negative examples are trivial and should be ignored. Based on the labeled positive and negative data, LAIL trains a model-aware retriever to learn the preference of LLMs and select the demonstration examples that LLMs need. During inference, given a new requirement, LAIL uses the trained retriever to select a few examples and feeds them into the LLM to generate the desired program. We apply LAIL to four widely used LLMs and evaluate it on five code generation datasets. Extensive experiments demonstrate that LAIL outperforms the state-of-the-art (SOTA) baselines by 11.58%, 3.33%, and 5.07% on CodeGen-Multi-16B, and by 1.32%, 2.29%, and 1.20% on CodeLlama-3
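The select-then-prompt pipeline the abstract describes can be illustrated with a minimal sketch. This is not LAIL itself: the abstract's retriever is a trained, model-aware encoder learned from LLM-labeled positive/negative examples, whereas the `score` function below is a stand-in bag-of-words cosine similarity (the kind of textual heuristic the paper improves upon). The function names, the pool structure, and the prompt format are all hypothetical; only the overall flow (score candidates, pick top-k, build an ICL prompt) follows the description above.

```python
# Hypothetical sketch of retrieval-based demonstration selection for ICL
# code generation. The trained model-aware retriever in LAIL would replace
# the `score` function below; everything else here is illustrative.
from collections import Counter
import math


def embed(text):
    # Bag-of-words Counter as a stand-in for a learned retriever encoding.
    return Counter(text.lower().split())


def score(query_vec, cand_vec):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(query_vec[t] * cand_vec[t] for t in query_vec)
    nq = math.sqrt(sum(v * v for v in query_vec.values()))
    nc = math.sqrt(sum(v * v for v in cand_vec.values()))
    return dot / (nq * nc) if nq and nc else 0.0


def select_demonstrations(requirement, pool, k=2):
    # Rank candidate (requirement, program) pairs and keep the top-k.
    q = embed(requirement)
    ranked = sorted(pool, key=lambda ex: score(q, embed(ex["requirement"])),
                    reverse=True)
    return ranked[:k]


def build_prompt(requirement, demos):
    # Concatenate selected demonstrations with the new requirement,
    # forming the ICL prompt context fed to the LLM.
    parts = [f"# Requirement: {d['requirement']}\n{d['program']}" for d in demos]
    parts.append(f"# Requirement: {requirement}\n")
    return "\n\n".join(parts)


pool = [
    {"requirement": "reverse a string", "program": "def solve(s):\n    return s[::-1]"},
    {"requirement": "sum a list of numbers", "program": "def solve(xs):\n    return sum(xs)"},
    {"requirement": "reverse a list", "program": "def solve(xs):\n    return xs[::-1]"},
]
demos = select_demonstrations("reverse the characters of a string", pool, k=2)
prompt = build_prompt("reverse the characters of a string", demos)
print([d["requirement"] for d in demos])
```

In LAIL, the ranking step is where the learned preference of the LLM enters: the retriever is trained so that examples the LLM labeled as positive score higher than those it labeled as negative, rather than relying on surface textual overlap as this sketch does.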