ISBN:
(Print) 9798400706004
Artificial Intelligence-driven Development Environments (AIDEs) offer developers revolutionary computer programming assistance. There is great potential in incorporating AIDEs into Computer Science education; however, the effects of these tools should be fully examined before doing so. Here, a within-subjects study was conducted to compare the programming performance, workload, emotion, and self-efficacy of seventeen novices coding with and without the GitHub Copilot AIDE under time pressure. Results showed that using the AIDE significantly increased programming efficiency and reduced effort and mental workload, but did not significantly impact emotion or self-efficacy. However, participants' performance improved with more experience using the AI, and their self-efficacy followed. The results suggest that students who try AIDEs will likely be tempted to use them for time-sensitive work. There is no evidence that providing AIDEs will aid struggling students, but there is a clear need for students to practice with AI to become competent and confident using it.
Evaluating the correctness of code generated by AI is a challenging open problem. In this paper, we propose a fully automated method, named ACCA, to evaluate the correctness of AI-generated code for security purposes. The method uses symbolic execution to assess whether the AI-generated code behaves as a reference implementation. We use ACCA to assess four state-of-the-art models trained to generate security-oriented assembly code and compare the results of the evaluation with different baseline solutions, including output similarity metrics, widely used in the field, and the well-known ChatGPT, the AI-powered language model developed by OpenAI. Our experiments show that our method outperforms the baseline solutions and assesses the correctness of the AI-generated code similarly to the human-based evaluation, which is considered the ground truth for the assessment in the field. Moreover, ACCA has a very strong correlation with the human evaluation (Pearson's correlation coefficient = 0.84 on average). Finally, since it is a fully automated solution that requires no human intervention, the proposed method assesses each code snippet in ~0.17 s on average, which is far lower than the average time required by human analysts to manually inspect the code, based on our experience.
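The core idea described in the abstract, deciding whether AI-generated code behaves like a reference implementation, can be illustrated with a minimal sketch. ACCA itself applies symbolic execution to assembly code; the simplified stand-in below instead uses concrete differential testing on hypothetical Python functions (`reference_add`, `candidate_add`, and the input list are illustrative assumptions, not part of the paper's method):

```python
# Toy illustration of equivalence checking between a candidate and a
# reference implementation. ACCA uses symbolic execution on assembly;
# here we substitute concrete differential testing for simplicity.

def reference_add(a: int, b: int) -> int:
    """Hypothetical reference implementation."""
    return a + b

def candidate_add(a: int, b: int) -> int:
    """Hypothetical AI-generated candidate to validate."""
    return b + a

def behaves_like_reference(candidate, reference, inputs) -> bool:
    """Return True iff the candidate matches the reference on every input."""
    return all(candidate(*args) == reference(*args) for args in inputs)

inputs = [(0, 0), (1, 2), (-5, 7), (123, -123)]
print(behaves_like_reference(candidate_add, reference_add, inputs))  # True
```

Unlike this concrete sampling, symbolic execution explores input constraints exhaustively along program paths, which is why it can serve as a proxy for human judgment of correctness rather than a spot check.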