ISBN: (Print) 9781467321006
An algorithm is called stable at a training set S if any change of a single point in S yields only a small change in the output of the learning algorithm. Stability of the learning algorithm is necessary for learnability in the supervised classification and regression setting. In this paper, we give formal definitions of strong and weak stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error.
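For illustration only (not taken from the paper above): a minimal Python sketch of how the stability notion in this abstract can be probed empirically, by replacing a single training point and measuring the resulting change in predictions. The base learner (closed-form ridge regression), the toy Gaussian data, and the perturbation scheme are all placeholder assumptions.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def empirical_stability(X, y, X_test, lam=1.0, seed=None):
    """Crude empirical proxy for stability: the largest change in test
    predictions when a single training point is replaced (S vs. S^i)."""
    rng = np.random.default_rng(seed)
    w_full = fit_ridge(X, y, lam)
    worst = 0.0
    for i in range(len(y)):
        X_i, y_i = X.copy(), y.copy()
        # Replace point i with a slightly perturbed draw (assumed scheme).
        X_i[i] = X[i] + rng.normal(scale=0.1, size=X.shape[1])
        y_i[i] = y[i] + rng.normal(scale=0.1)
        w_i = fit_ridge(X_i, y_i, lam)
        worst = max(worst, np.max(np.abs(X_test @ w_full - X_test @ w_i)))
    return worst

# Toy usage: the measured sensitivity shrinks as regularization grows.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)
X_test = rng.normal(size=(10, 3))
for lam in (0.01, 1.0, 100.0):
    print(lam, empirical_stability(X, y, X_test, lam, seed=1))
```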
We extend existing theory on stability, namely how much changes in the training data influence the estimated models, and generalization performance of deterministic learning algorithms to the case of randomized algorithms. We give formal definitions of stability for randomized algorithms and prove non-asymptotic bounds on the difference between the empirical and expected error as well as the leave-one-out and expected error of such algorithms that depend on their random stability. The setup we develop for this purpose can also be used for generally studying randomized learning algorithms. We then use these general results to study the effects of bagging on the stability of a learning method and to prove non-asymptotic bounds on the predictive performance of bagging which have not been possible to prove with the existing theory of stability for deterministic learning algorithms.
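For illustration only (not the paper's construction): a minimal Python sketch of bagging a base learner and comparing how much a single changed training point moves the predictions of one learner versus the bagged ensemble. The ridge base learner, toy data, number of bootstrap rounds, and the label corruption are assumptions; fixing the random seed across the two bagged calls keeps the bootstrap draws identical, so only the data change is measured.

```python
import numpy as np

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression base learner."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def bagged_predict(X, y, X_test, n_bags=50, lam=1e-3, seed=0):
    """Average test predictions of base learners fit on bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n = len(y)
    preds = np.zeros(len(X_test))
    for _ in range(n_bags):
        idx = rng.integers(0, n, size=n)   # bootstrap sample with replacement
        preds += X_test @ fit_ridge(X[idx], y[idx], lam)
    return preds / n_bags

# Change a single training label and compare the resulting prediction shift.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=40)
X_test = rng.normal(size=(10, 3))
y_pert = y.copy()
y_pert[0] += 5.0                            # perturb one point in S

single = np.abs(X_test @ fit_ridge(X, y) - X_test @ fit_ridge(X, y_pert)).max()
bagged = np.abs(bagged_predict(X, y, X_test) - bagged_predict(X, y_pert, X_test)).max()
print(f"single learner change: {single:.3f}, bagged ensemble change: {bagged:.3f}")
```

In this toy setting the averaging over bootstrap samples typically dampens the effect of the corrupted point, which is the kind of stabilizing behavior of bagging that the paper's bounds formalize.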