Author affiliations: CSIRO's Data61, Research Way, Clayton VIC 3168, Australia; University of Melbourne, School of Mathematics & Statistics, Parkville VIC 3010, Australia
Publication: JOURNAL OF MACHINE LEARNING RESEARCH
Year/Volume/Issue: 2023, Vol. 24, No. 1
Pages: 1-52
Core indexing:
Subject classification: 08 [Engineering]; 0811 [Engineering - Control Science and Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]
Funding: Australian Research Council [FL140100012]; ARC Training Centre in Optimisation Technologies, Integrated Methodologies and Applications (OPTIMA) [IC200100009]. Funding source: Australian Research Council.
Keywords: Item Response Theory; algorithm evaluation; algorithm portfolios; classification; machine learning; algorithm selection; instance space analysis; explainable algorithm evaluation
Abstract: Item Response Theory (IRT) was proposed within the field of Educational Psychometrics to assess student ability as well as test question difficulty and discrimination power. More recently, IRT has been applied to evaluate machine learning algorithm performance on a single classification dataset, where the student is now an algorithm, and the test question is an observation to be classified by the algorithm. In this paper, we present a modified IRT-based framework for evaluating a portfolio of algorithms across a repository of datasets, while simultaneously eliciting a richer suite of characteristics - such as algorithm consistency and anomalousness - that describe important aspects of algorithm performance. These characteristics arise from a novel inversion and reinterpretation of the traditional IRT model without requiring additional dataset feature computations. We test this framework on algorithm portfolios for a wide range of applications, demonstrating the broad applicability of this method as an insightful algorithm evaluation tool. Furthermore, the explainable nature of IRT parameters yields an increased understanding of algorithm portfolios.
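For context on the abstract above: the classical two-parameter logistic (2PL) IRT model gives the probability that a respondent with ability theta answers correctly an item with discrimination a and difficulty b; under the paper's reinterpretation, the "respondent" is an algorithm and the "item" is an observation to classify. A minimal sketch of the 2PL response function (function and parameter names are illustrative, not taken from the paper):

```python
import math

def irt_2pl(theta: float, a: float, b: float) -> float:
    """Probability of a correct response under the 2PL IRT model.

    theta -- ability of the respondent (here: an algorithm)
    a     -- discrimination of the item (here: an observation)
    b     -- difficulty of the item
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An algorithm of ability 2.0 on an easy (b = -1.0), fairly
# discriminating (a = 1.5) observation: high success probability.
p_easy = irt_2pl(theta=2.0, a=1.5, b=-1.0)

# When ability exactly matches difficulty, the probability is 0.5
# regardless of discrimination.
p_match = irt_2pl(theta=0.0, a=1.0, b=0.0)
```

The paper's framework inverts which side of the model carries the "ability" and "difficulty" roles; this sketch shows only the standard parameterization that it starts from.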