
ERGODIC MIRROR DESCENT


Authors: Duchi, John C.; Agarwal, Alekh; Johansson, Mikael; Jordan, Michael I.

Author affiliations: Univ Calif Berkeley, Dept Elect Engn & Comp Sci, Berkeley, CA 94720, USA; Microsoft Res, New York, NY, USA; Royal Inst Technol (KTH), Sch Elect Engn, Stockholm, Sweden

Published in: SIAM JOURNAL ON OPTIMIZATION

Year/Volume/Issue: 2012, Vol. 22, No. 4

Pages: 1549-1578


Subject classification: 07 [Science]; 0701 [Science - Mathematics]; 070104 [Science - Applied Mathematics]

Funding: NDSEG fellowship; U.S. Army Research Laboratory and U.S. Army Research Office [W911NF-11-1-0391]; Microsoft Research fellowship; Google graduate fellowship

Keywords: convex programming; stochastic optimization; Markov chain Monte Carlo sampling; mixing; mirror descent algorithm

Abstract: We generalize stochastic subgradient descent methods to situations in which we do not receive independent samples from the distribution over which we optimize, instead receiving samples coupled over time. We show that, as long as the source of randomness is suitably ergodic (it converges quickly enough to a stationary distribution), the method enjoys strong convergence guarantees, both in expectation and with high probability. This result has implications for stochastic optimization in high-dimensional spaces, peer-to-peer distributed optimization schemes, decision problems with dependent data, and stochastic optimization problems over combinatorial spaces.
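The setting in the abstract can be sketched in a few lines of Python: mirror descent with the Euclidean mirror map (i.e. plain projected subgradient descent), fed samples from a slowly mixing two-state Markov chain instead of i.i.d. draws. This is only an illustrative toy, not the paper's construction: the quadratic objective, the chain, the projection radius, and the 1/sqrt(t) step-size schedule are all assumed for the example.

```python
import random

def ergodic_mirror_descent(grad, chain_step, s0, x0, steps, radius=1.0):
    """Projected subgradient descent (Euclidean mirror descent) on
    samples produced by an ergodic Markov chain. Minimal 1-D sketch;
    the step sizes eta_t = 1/sqrt(t) and the averaging are standard
    choices, not the paper's exact scheme."""
    x, s = x0, s0
    running_sum, n = 0.0, 0
    for t in range(1, steps + 1):
        g = grad(x, s)                      # subgradient at the current (dependent) sample
        x = x - g / (t ** 0.5)              # eta_t = 1 / sqrt(t)
        x = max(-radius, min(radius, x))    # projection onto [-radius, radius]
        running_sum += x
        n += 1
        s = chain_step(s)                   # advance the ergodic chain
    return running_sum / n                  # averaged iterate

# Two-state chain on {0, 1} that switches with probability 0.1 (slowly
# mixing); its stationary distribution is uniform, so minimizing
# E[(x - s)^2 / 2] under the stationary law gives x* = 0.5.
random.seed(0)
chain = lambda s: s if random.random() < 0.9 else 1 - s
xhat = ergodic_mirror_descent(lambda x, s: x - s, chain, 0, 0.0, 20000)
```

Even though consecutive samples are strongly correlated, the averaged iterate `xhat` drifts toward the stationary optimum 0.5, which is the point of the paper's guarantees: only the chain's mixing rate, not independence, is needed.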
