ESTIMATING THE ALGORITHMIC VARIANCE OF RANDOMIZED ENSEMBLES VIA THE BOOTSTRAP

Author: Lopes, Miles E.

Affiliation: Department of Statistics, University of California, Davis, Mathematical Sciences Building 4118, 399 Crocker Lane, Davis, CA 95616, USA

Journal: Annals of Statistics (Ann. Stat.)

Year/Volume/Issue: 2019, Vol. 47, No. 2

Pages: 1088-1112

Subject Classification: 07 [Natural Sciences], 0714 [Natural Sciences - Statistics (degrees awardable in science or economics)], 0701 [Natural Sciences - Mathematics], 070101 [Natural Sciences - Fundamental Mathematics]

Funding: NSF [DMS-1613218]

Keywords: bootstrap; random forests; bagging; randomized algorithms

Abstract: Although bagging and random forests are among the most widely used prediction methods, relatively little is known about their algorithmic convergence. In particular, there are few theoretical guarantees for deciding when an ensemble is large enough, so that its accuracy is close to that of an ideal infinite ensemble. Because bagging and random forests are randomized algorithms, the choice of ensemble size is closely related to the notion of algorithmic variance (i.e., the variance of prediction error due only to the training algorithm). In the present work, we propose a bootstrap method to estimate this variance for bagging, random forests, and related methods in the context of classification. To be specific, suppose the training dataset is fixed, and let the random variable ERR_t denote the prediction error of a randomized ensemble of size t. Working under a first-order model for randomized ensembles, we prove that the centered law of ERR_t can be consistently approximated via the proposed method as t → ∞. Meanwhile, the computational cost of the method is quite modest, by virtue of an extrapolation technique. As a consequence, the method offers a practical guideline for deciding when the algorithmic fluctuations of ERR_t are negligible.
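The idea in the abstract can be illustrated with a minimal toy sketch (not the paper's implementation): randomized threshold classifiers stand in for trees, a pool of learners is trained once on a fixed dataset, and the spread of ERR_t is estimated by bootstrap-resampling t learners from the pool and recomputing the majority-vote error. All names and the data-generating setup below are illustrative assumptions.

```python
import random
import statistics

random.seed(0)

# Toy fixed dataset: 1-D points, labels are a noisy sign of x.
X = [random.gauss(0.0, 1.0) for _ in range(200)]
y = [1 if x + random.gauss(0.0, 0.5) > 0 else 0 for x in X]

def train_stump():
    """A randomized weak learner (stand-in for a tree in bagging):
    a classification threshold drawn at random near zero."""
    thr = random.gauss(0.0, 0.3)
    return lambda x: 1 if x > thr else 0

def ensemble_error(learners, X, y):
    """Misclassification rate of the majority vote of `learners`."""
    wrong = 0
    for x, label in zip(X, y):
        votes = sum(f(x) for f in learners)
        pred = 1 if 2 * votes > len(learners) else 0
        wrong += (pred != label)
    return wrong / len(X)

def bootstrap_sd_of_err(pool, t, reps=200):
    """Resample t learners (with replacement) from the trained pool,
    recompute the ensemble error each time, and return the sd of those
    errors: a bootstrap estimate of the algorithmic sd of ERR_t."""
    errs = [ensemble_error(random.choices(pool, k=t), X, y)
            for _ in range(reps)]
    return statistics.pstdev(errs)

pool = [train_stump() for _ in range(400)]   # one pass of training
sd_small = bootstrap_sd_of_err(pool, t=5)    # small ensemble: noisy
sd_large = bootstrap_sd_of_err(pool, t=100)  # large ensemble: stable

# Roughly in the spirit of the paper's extrapolation technique: under a
# first-order model the algorithmic sd shrinks like 1/sqrt(t), so an
# estimate at a cheap small size t0 can be rescaled by sqrt(t0 / t).
print(sd_small, sd_large)
```

The point of the bootstrap here is that no retraining is needed: the pool is trained once, and all resampling happens over the already-fitted learners, which is why the cost stays modest.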
