How Many Ratings per Item are Necessary for Reliable Significance Testing?

Authors: Homan, Christopher M.; Korn, Flip; Welty, Chris

Affiliations: Department of Computer Science, Rochester Institute of Technology, Rochester, NY 14607, United States; Google Research, New York, NY 10011, United States

Publication: arXiv

Year: 2024

Subject: Stochastic models

Abstract: Most approaches to machine learning evaluation assume that machine and human responses are repeatable enough to be measured against data with unitary, authoritative, gold-standard responses, via simple metrics such as accuracy, precision, and recall that assume scores are independent given the test item. However, AI models have multiple sources of stochasticity, and the human raters who create gold standards tend to disagree with each other, often in meaningful ways; hence a single output response per input item may not provide enough information. We introduce methods for determining whether an (existing or planned) evaluation dataset has enough responses per item to reliably compare the performance of one model to another. We apply our methods to several of the very few extant gold-standard test sets with multiple disaggregated responses per item and show that there are usually not enough responses per item to reliably compare the performance of one model against another. Our methods also allow us to estimate the number of responses per item needed for hypothetical datasets with response distributions similar to the existing datasets we study. When two models are very far apart in their predictive performance, fewer raters are needed to confidently compare them, as expected. However, as the models draw closer, we find that a larger number of raters than is currently typical in annotation collection is needed to ensure that the power analysis correctly reflects the difference in performance. © 2024, CC BY.
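The abstract describes a power analysis for deciding how many human responses per item an evaluation set needs before two models can be compared reliably. The paper's own procedure is not reproduced here; the sketch below is a minimal, hypothetical simulation of the general idea, assuming a simplified binomial rater model and a paired t-test over per-item agreement scores (the function simulated_power and its parameters are illustrative, not taken from the paper).

```python
import numpy as np
from scipy.stats import ttest_rel

def simulated_power(p_a, p_b, n_items, n_raters, n_sims=2000, alpha=0.05, seed=0):
    """Estimate the fraction of simulated test sets in which a paired t-test
    on per-item agreement scores detects the difference between models A and B.

    Assumption: each rater agrees with model A (resp. B) on an item with fixed
    probability p_a (resp. p_b); the per-item score for a model is the fraction
    of its n_raters responses that agree with it.
    """
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        score_a = rng.binomial(n_raters, p_a, size=n_items) / n_raters
        score_b = rng.binomial(n_raters, p_b, size=n_items) / n_raters
        _, p_value = ttest_rel(score_a, score_b)
        rejections += p_value < alpha
    return rejections / n_sims

if __name__ == "__main__":
    # Two closely matched models: per-item rater agreement 0.72 vs. 0.70.
    for k in (1, 3, 5, 10, 25):
        power = simulated_power(0.72, 0.70, n_items=500, n_raters=k)
        print(f"raters per item = {k:2d}  estimated power = {power:.2f}")
```

Under these simplified assumptions, estimated power grows with the number of raters per item, and closely matched models need many more responses per item than the single response that is typical in annotation collection, mirroring the abstract's conclusion.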
