Details
ISBN:
(Print) 9781538626528
Programming exercises are time-consuming activities for many students. Therefore, many classes provide meticulous support for students through teaching assistants (TAs). However, individual students' programming behaviors are quite different from each other's, even when they are solving the same problem. It can be hard for TAs to understand the unique features of each student's programming behavior. We have used data mining to analyze students' programming behaviors in order to identify their various features. The purpose of this study is to present such behavioral features to TAs to improve the effectiveness of the assistance they can provide. To help TAs judge when to offer guidance, we estimated students' grades from their programming behavior histories.
Details
ISBN:
(Print) 9783319396903; 9783319396897
Programming exercises are time-consuming activities for many students. Therefore, most classes provide meticulous support for students by employing teaching assistants (TAs). However, a particular student's programming behavior can differ markedly from other students', even when they are solving the same problem. It is hard for TAs to understand the detailed features of each student's programming behavior. We have performed data mining over records of students' programming behaviors in order to elicit these detailed features. The purpose of this study is to present the elicited features to TAs so that they can provide effective assistance. We mined the chronological records of each student's compilations and executions. As a result, we found a correlation between programming activities and the time taken to solve problems. Based on this data mining, we provided TAs with guidelines for each particular group of students. We confirmed through experiments on programming exercises that our classifications and guidelines are reasonable, and observed that students who received appropriate guidance based on our data mining improved their programming performance.
Details
ISBN:
(Print) 9798400706004
Small, auto-gradable programming exercises provide a useful tool with which to assess students' programming skills in introductory computer science. To reduce the time needed to produce programming exercises of similar difficulty, previous research has applied a permutation strategy to existing questions. Prior work has left several open questions: is prior exposure to a question typically indicative of higher student performance? Are observed changes in difficulty due to the specific surface feature permutations applied? How is student performance impacted by the first version of a question to which they may be exposed? In this work, we pursue this permutation strategy in multiple semesters of an introductory Python course to investigate these open questions. We use linear regression models to tease out the impacts of different surface feature changes and usage conditions. Our analysis finds similar tendencies as in prior work: question versions available in study materials tend to score 5-11 percentage points higher than novel permutations, and more "substantial" surface feature changes tend to produce harder questions. Our results suggest this last finding is sensitive to how evenly permutations are applied across existing questions, as the precise impact of individual permutations changes between semesters.
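For a single binary condition such as "version seen in study materials vs. novel permutation", ordinary least squares reduces to a difference in group means. The sketch below illustrates that with fabricated scores (the numbers are illustrative only, not the study's data, and the real models include more regressors):

```python
from statistics import mean

# Hypothetical per-submission data: score (0-100) and whether the
# question version appeared in study materials (1) or was novel (0).
scores = [78, 84, 80, 82, 70, 74, 72, 76]
seen = [1, 1, 1, 1, 0, 0, 0, 0]

# OLS with one binary regressor: the slope equals the difference in
# group means; the intercept is the mean score of the "novel" group.
x_bar, y_bar = mean(seen), mean(scores)
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(seen, scores))
         / sum((x - x_bar) ** 2 for x in seen))
intercept = y_bar - slope * x_bar
print(f"score = {intercept:.1f} + {slope:.1f} * seen_in_study_materials")
```

Here the fitted slope (8 points) happens to fall inside the 5-11 percentage-point range the paper reports for previously available versions.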
Details
ISBN:
(Print) 9781509003792
We describe the ViPLab plug-in for the ILIAS learning management system (LMS) that offers students a virtual programming laboratory and hence allows them to run programming exercises without ever having to leave the browser. In particular, this article introduces a new component of the system that automatically corrects programming exercises and hence significantly simplifies running programming classes in freshman courses.
Details
ISBN:
(Print) 9798350336429
This full paper reports the results of the development of a Moodle plugin, Programming Exercise Teaching Assistant (PETA), created to support teachers of introductory programming courses in identifying problematic exercises by analyzing how students interact with them. Moodle is an open-source learning management system whose functionality can be personalized through plugins. In introductory programming courses, it is common to propose problems for which students need to write solution code. VPL (Virtual Programming Lab) is a popular plugin that allows teachers to create this kind of problem, with automatic evaluation built on top of test cases. iAssign (interactive Assignment) is another plugin of the same sort, but with block-based programming. Our plugin is an open-source project, initially developed to integrate with iAssign and designed to integrate easily with any similar plugin used in programming tasks. Our primary goal is to empower teachers and students by supplying a tool that improves the quality of programming education. Over the last few years, the University of São Paulo has expanded its public policy for social inclusion, bringing in even more learners with diverse educational backgrounds. Understanding students' profiles and being able to modify problems for these new contexts is fundamental to improving the quality of teaching. This plugin aims to meet the needs of teachers and learners, exploring data from their educational context to improve the learning experience. The plugin automatically extracts data from students' interaction with coding exercises, e.g., the time between submissions (TBS), code changes between submissions (CCBS), and grades. Using these variables, we calculated derived metrics in an attempt to classify exercises according to different parameters. First, we considered time-related metrics, such as the highest time between submissions (HT). Secondly, we evaluated metrics related to code change
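The time-related metrics the abstract names, TBS and HT, can be derived directly from submission timestamps. A minimal sketch, using hypothetical timestamps rather than PETA's actual data model:

```python
from datetime import datetime

# Hypothetical submission timestamps for one student on one exercise.
submissions = [
    datetime(2023, 5, 2, 14, 0),
    datetime(2023, 5, 2, 14, 12),
    datetime(2023, 5, 2, 14, 45),
    datetime(2023, 5, 2, 15, 5),
]

# Time between submissions (TBS), in minutes, for consecutive pairs,
# and the highest time between submissions (HT).
tbs = [(b - a).total_seconds() / 60 for a, b in zip(submissions, submissions[1:])]
ht = max(tbs)
print(f"TBS: {tbs}  HT: {ht} min")
```

An unusually large HT across many students could be one signal that an exercise is problematic, which is the kind of classification the plugin aims at.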
Details
ISBN:
(Print) 9798350378986; 9798350378979
Lecturers are increasingly attempting to use large language models (LLMs) to make the creation of exercises for students simpler and more efficient. Efforts are also being made to automate the exercise creation process in software engineering (SE) education. This study explores the use of advanced LLMs, including GPT-4 and LaMDA, for automated programming exercise creation in higher education and compares the results with related work using GPT-3.5-turbo. Utilizing applications such as ChatGPT, Bing AI Chat, and Google Bard, we identify LLMs capable of initiating different exercise designs. However, manual refinement is crucial for accuracy. Common error patterns across LLMs highlight challenges in complex programming concepts, while specific strengths in various topics showcase model distinctions. This research underscores LLMs' value in exercise generation, emphasizing the critical role of human supervision in refining these processes. Our concise insights cater to educators, practitioners, and other researchers seeking to enhance SE education through LLM applications.
Details
ISBN:
(Print) 9798400706004
Integrating AI-driven tools in higher education is an emerging area with transformative potential. This paper introduces Iris, a chat-based virtual tutor integrated into the interactive learning platform Artemis that offers personalized, context-aware assistance in large-scale educational settings. Iris supports computer science students by guiding them through programming exercises and is designed to act as a tutor in a didactically meaningful way. Its calibrated assistance avoids revealing complete solutions, offering subtle hints or counter-questions to foster independent problem-solving skills. For each question, it issues multiple prompts in a Chain-of-Thought to GPT-3.5-Turbo. The prompts include a tutor role description and examples of meaningful answers through few-shot learning. Iris employs contextual awareness by accessing the problem statement, student code, and automated feedback to provide tailored advice. An empirical evaluation shows that students perceive Iris as effective because it understands their questions, provides relevant support, and contributes to the learning process. While students consider Iris a valuable tool for programming exercises and homework, they also feel confident solving programming tasks in computer-based exams without Iris. The findings underscore students' appreciation for Iris' immediate and personalized support, though students predominantly view it as a complement to, rather than a replacement for, human tutors. Nevertheless, Iris creates a space for students to ask questions without being judged by others.
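The prompting setup the abstract describes, a tutor role description, few-shot examples of hint-style answers, and contextual data, can be pictured as assembling a message list before any model call. The sketch below is an illustrative assumption about that structure, not Iris' actual prompts or message format:

```python
# Hypothetical few-shot, hint-only tutor prompt assembly, in the style
# the abstract describes. All role text and examples are invented.
def build_messages(problem: str, student_code: str, feedback: str, question: str):
    system = ("You are a programming tutor. Never reveal a complete "
              "solution; respond with subtle hints or counter-questions.")
    few_shot = [
        {"role": "user", "content": "My loop never stops. What is wrong?"},
        {"role": "assistant", "content": "What condition would make your "
         "while-loop terminate, and does it ever change inside the loop?"},
    ]
    # Contextual awareness: problem statement, student code, automated feedback.
    context = (f"Problem statement:\n{problem}\n\n"
               f"Student code:\n{student_code}\n\n"
               f"Automated feedback:\n{feedback}")
    return ([{"role": "system", "content": system}]
            + few_shot
            + [{"role": "user", "content": context + "\n\nQuestion: " + question}])

msgs = build_messages("Sum a list.", "def s(xs): return 0",
                      "test_sum failed", "Why does my test fail?")
print(len(msgs), msgs[0]["role"])
```

The key design point the paper stresses is calibration: the system message forbids full solutions, so the model's context steers it toward hints rather than answers.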
Details
ISBN:
(Print) 9798400701399
Introductory courses usually teach only a small subset of a programming language and its library, in order to focus on general concepts rather than overwhelm students with the syntactic, semantic and API minutiae of a particular language. This paper presents courseware that checks whether a program uses only the subset of the Python language and library defined by the instructor. This makes it possible to automatically check that programming examples, exercises and assessments use only the taught constructs. It also helps detect student code with advanced constructs, possibly copied from Q&A sites or generated by large language models. The tool is easy to install, configure and use. It also checks Python code in Jupyter notebooks, a popular format for interactive textbooks and assessment handouts.
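One way such a subset check can work is to walk the program's abstract syntax tree and flag any node type or function call outside an instructor-defined allowlist. The sketch below uses Python's standard ast module; the allowlists and message format are illustrative assumptions, not the actual tool's configuration:

```python
import ast

# Instructor-defined allowlists (illustrative): permitted AST node
# types and permitted built-in calls.
ALLOWED_NODES = {
    ast.Module, ast.Expr, ast.Call, ast.Name, ast.Load, ast.Store,
    ast.Constant, ast.Assign, ast.For, ast.If, ast.Compare, ast.Lt,
    ast.BinOp, ast.Add,
}
ALLOWED_CALLS = {"print", "len", "range"}

def check_subset(source: str) -> list[str]:
    """Return messages for constructs outside the taught subset."""
    problems = []
    for node in ast.walk(ast.parse(source)):
        if type(node) not in ALLOWED_NODES:
            problems.append(f"line {getattr(node, 'lineno', '?')}: "
                            f"{type(node).__name__} not in taught subset")
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name) \
                and node.func.id not in ALLOWED_CALLS:
            problems.append(f"line {node.lineno}: call to "
                            f"'{node.func.id}' not allowed")
    return problems

print(check_subset("for i in range(3):\n    print(i)"))  # within subset
print(check_subset("import os"))                         # flags the import
```

Because the check runs on the AST rather than on text, it also flags advanced constructs (comprehensions, decorators, imports) that a regex-based scan could miss, which matches the paper's motivation of detecting copied or LLM-generated code.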
Details
ISBN:
(Print) 9783031330223; 9783031330230
Over the years, several systematic literature reviews have been published reporting advances in tools and techniques for automated assessment in Computer Science. However, there is not yet a major bibliometric study that examines the relationships and influence of publications, authors, and journals to make these research trends visible. This paper presents a bibliometric study of automated assessment of programming exercises, including a descriptive analysis using various bibliometric measures and data visualizations. The data was collected from the Web of Science Core Collection. The results allow us to identify the most influential authors and their affiliations, monitor the evolution of publications and citations, establish relationships between emerging themes in publications, discover research trends, and more. This paper provides deeper knowledge of the literature and helps future researchers get started in this field.
Details
ISBN:
(Print) 9781450394314
The teaching and assessment of introductory programming involves writing code that solves a problem described by text. Previous research found that OpenAI's Codex, a natural language machine learning model trained on billions of lines of code, performs well on many programming problems, often generating correct and readable Python code. GitHub's version of Codex, Copilot, is freely available to students. This raises pedagogic and academic integrity concerns. Educators need to know what Copilot is capable of, in order to adapt their teaching to AI-powered programming assistants. Previous research evaluated the most performant Codex model quantitatively, e.g. how many problems have at least one correct suggestion that passes all tests. Here I evaluate Copilot instead, to see if and how it differs from Codex, and look qualitatively at the generated suggestions, to understand the limitations of Copilot. I also report on the experience of using Copilot for other activities asked of students in programming courses: explaining code, generating tests and fixing bugs. The paper concludes with a discussion of the implications of the observed capabilities for the teaching of programming.