In November 2022, OpenAI introduced ChatGPT, a chatbot based on supervised and reinforcement learning. Not only can it answer questions with human-like responses, but it can also generate code from scratch or complete coding templates provided by the user. ChatGPT generates unique responses that render traditional anti-plagiarism tools useless. Its release has ignited a heated debate about its use in academia, especially by students. We found, to our surprise, that our students at POLITEHNICA University of Bucharest (UPB) had been using generative AI tools (ChatGPT and its predecessors) to solve homework for at least six months. We therefore set out to explore the capabilities of ChatGPT and assess its value for educational purposes. We used ChatGPT to solve all the coding assignments from the semester of our UPB Functional Programming course. We discovered that, although ChatGPT provides correct answers in 68% of the cases, only around half of those are legible solutions that could benefit students in some form. On the other hand, ChatGPT is very good at performing code review on student programming homework. Based on these findings, we discuss the pros and cons of ChatGPT in a teaching environment, as well as ways to integrate GPT models for generating code reviews in order to improve the code-writing skills of students.
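The code-review integration mentioned above can be sketched as a single prompt to a chat-completion API. The following minimal example uses the current OpenAI Python client; the model name, system prompt, and helper function are illustrative assumptions, not the setup described in the paper:

```python
# Hypothetical sketch: asking a GPT model to review a student's homework submission.
# Model name, prompt wording, and function names are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def review_submission(source_code: str) -> str:
    """Return a textual code review for one homework submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[
            {"role": "system",
             "content": ("You are a teaching assistant for a functional programming "
                         "course. Review the student's submission: point out style "
                         "issues, possible bugs, and clearer alternatives, but do "
                         "not rewrite the solution for them.")},
            {"role": "user", "content": source_code},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example: review a small Haskell snippet.
    print(review_submission("myLength xs = foldr (\\_ acc -> acc + 1) 0 xs"))
```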
Several popular cost estimation models, such as COCOMO and function points, use adjustment variables like software complexity and platform to modify original estimates and arrive at final estimates. Using data on 666 programs from 15 software projects, this study empirically tests a research model of the influence of three adjustment variables (software complexity, computer platform, and program type, i.e., batch or online) on software effort. The results confirm that all three adjustment variables have a significant effect on effort. Further, multiple comparisons of means point to two additional results for the data examined: batch programs involve significantly higher software effort than online programs, and programs rated as complex have significantly higher effort than programs rated as average.
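To make the role of adjustment variables concrete, here is a minimal sketch in the style of intermediate COCOMO, where a size-based nominal estimate is scaled by an effort adjustment factor (EAF), the product of cost-driver multipliers such as complexity and platform. The coefficients and multiplier values are generic textbook figures, not parameters or data from this study:

```python
# Sketch of an intermediate-COCOMO-style estimate: nominal effort from program
# size, multiplied by an effort adjustment factor built from cost drivers.
# All numbers below are illustrative textbook values, not results from the paper.
from math import prod

def estimated_effort(kloc: float, cost_drivers: dict[str, float],
                     a: float = 3.2, b: float = 1.05) -> float:
    """Person-months = a * KLOC**b * EAF (organic-mode coefficients assumed)."""
    eaf = prod(cost_drivers.values())  # effort adjustment factor
    return a * kloc ** b * eaf

# Example: a 10 KLOC program rated "high" on complexity and platform volatility.
drivers = {"CPLX": 1.15, "VIRT": 1.15}  # sample cost-driver multipliers
print(f"{estimated_effort(10.0, drivers):.1f} person-months")
```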