Author affiliations: Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720, USA; Microsoft Res, New York, NY, USA; Royal Inst Technol (KTH), Sch Elect Engn, Stockholm, Sweden
Publication: SIAM JOURNAL ON OPTIMIZATION
Year/Volume/Issue: 2012, Vol. 22, No. 4
Pages: 1549-1578
Subject classification: 07 [Science]; 0701 [Science - Mathematics]; 070104 [Science - Applied Mathematics]
Funding: NDSEG fellowship; U.S. Army Research Laboratory and U.S. Army Research Office [W911NF-11-1-0391]; Microsoft Research fellowship; Google graduate fellowship
Keywords: convex programming; stochastic optimization; Markov chain Monte Carlo; sampling; mixing; mirror descent algorithm
Abstract: We generalize stochastic subgradient descent methods to situations in which we do not receive independent samples from the distribution over which we optimize, instead receiving samples coupled over time. We show that as long as the source of randomness is suitably ergodic (it converges quickly enough to a stationary distribution), the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for stochastic optimization in high-dimensional spaces, peer-to-peer distributed optimization schemes, decision problems with dependent data, and stochastic optimization problems over combinatorial spaces.
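The setting described in the abstract can be illustrated with a minimal sketch, which is an assumption-laden toy example and not the paper's algorithm: plain projected subgradient descent on a one-dimensional quadratic objective, where samples are drawn from a two-state Markov chain (so they are coupled over time, not i.i.d.) whose stationary distribution defines the objective. The chain, objective, and step sizes here are all invented for illustration.

```python
import random

def run_markov_sgd(steps=200000, seed=0):
    """Subgradient descent driven by Markov-chain (non-i.i.d.) samples.

    Toy objective: f(x) = E_pi[0.5 * (x - xi)^2], where xi follows a
    two-state Markov chain on {0, 1} with transition probabilities
    P(0->0)=0.9, P(1->1)=0.8, whose stationary distribution is
    pi = (2/3, 1/3). The minimizer is E_pi[xi] = 1/3.
    """
    rng = random.Random(seed)
    state = 0          # current state of the sample-generating chain
    x = 0.0            # optimization iterate
    avg = 0.0          # running average of iterates
    for t in range(1, steps + 1):
        # Step the Markov chain: successive samples are dependent.
        if state == 0:
            state = 0 if rng.random() < 0.9 else 1
        else:
            state = 1 if rng.random() < 0.8 else 0
        xi = float(state)
        g = x - xi                  # subgradient of 0.5 * (x - xi)^2
        x -= 0.5 / t ** 0.5 * g     # diminishing step size ~ 1/sqrt(t)
        avg += (x - avg) / t        # incremental iterate averaging
    return avg
```

Because the chain mixes quickly to its stationary distribution, the averaged iterate still approaches the stationary-distribution optimum (1/3 here), illustrating the ergodicity condition under which the paper's guarantees hold.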