As one of the efficient approaches to dealing with big data, divide-and-conquer distributed algorithms, such as distributed kernel regression, bootstrap, and structured perceptron training algorithms, have been proposed and are broadly used in learning systems. Some learning theories have been developed to analyze the feasibility, approximation, and convergence bounds of these distributed learning algorithms. However, much less work has addressed the stability of these distributed learning algorithms. In this paper, we discuss the generalization bounds of distributed learning algorithms from the viewpoint of algorithmic stability. First, we introduce a definition of uniform distributed stability for distributed algorithms and study their generalization risk bounds. Then, we analyze the stability properties and generalization risk bounds of a class of regularization-based distributed algorithms. The two generalization distributed risk bounds obtained show that the bounds on the difference between the generalization distributed risk and the empirical distributed (or leave-one-computer-out) risk are closely related to the sample size n and the number of working computers m, on the order of O(m/n^{1/2}). Furthermore, the results in this paper indicate that, for a regularized distributed kernel algorithm to generalize well, the regularization parameter lambda should be adjusted as the term m/n^{1/2} changes. These theoretical findings provide useful guidance for deploying distributed algorithms on practical big data platforms. We illustrate our theoretical analyses with two simulation experiments. Finally, we discuss some problems concerning the sufficient number of working computers, nonequivalence, and generalization for distributed learning, and show that rules that hold for computation on a single computer may not always hold for distributed learning.
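To make the divide-and-conquer setting concrete, the following is a minimal sketch (not the authors' implementation) of a distributed kernel ridge regression: the n samples are split across m working computers, a regularized kernel estimator is fit locally on each machine, and the m local predictors are averaged. The coupling of the regularization parameter lambda to the term m/n^{1/2} is only meant to illustrate the qualitative guidance of the O(m/n^{1/2}) bound above; the constant c, the Gaussian kernel, and all function names are assumptions for illustration.

```python
# Hedged sketch of divide-and-conquer kernel ridge regression (KRR).
# All names and the lambda scaling rule are illustrative assumptions.
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the row vectors of A and B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def fit_local_krr(X, y, lam, gamma=1.0):
    """Kernel ridge regression on one working computer's local sample."""
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return X, alpha

def distributed_krr(X, y, m, gamma=1.0, c=1.0):
    """Split the n samples across m machines, fit KRR locally, and
    average the m local predictors.  lambda is scaled with m/sqrt(n),
    mirroring (only qualitatively) the O(m/sqrt(n)) term in the bounds."""
    n = len(X)
    lam = c * m / np.sqrt(n)          # assumed scaling rule, not the paper's exact choice
    parts = np.array_split(np.random.permutation(n), m)
    models = [fit_local_krr(X[idx], y[idx], lam, gamma) for idx in parts]

    def predict(X_test):
        preds = [gaussian_kernel(X_test, Xi, gamma) @ ai for Xi, ai in models]
        return np.mean(preds, axis=0)  # average the m local predictions

    return predict

# Example usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(2000)
predict = distributed_krr(X, y, m=10)
X_test = np.linspace(-1, 1, 5)[:, None]
print(predict(X_test))
```

In this sketch, increasing the number of machines m while holding n fixed enlarges lam, reflecting the intuition from the bounds that a larger m (smaller local samples) calls for stronger regularization to keep the distributed estimator stable.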