Multidimensional Gains for Stochastic Approximation

Authors: Saab, Samer S.; Shen, Dong

Affiliations: Lebanese American University, Department of Electrical & Computer Engineering, Byblos 2038, Lebanon; Beijing University of Chemical Technology, College of Information Science & Technology, Beijing, People's Republic of China

Published in: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS (IEEE Trans. Neural Netw. Learn. Syst.)

Year/Volume/Issue: 2020, Vol. 31, No. 5

Pages: 1602-1615

Subject Classification: 0808 [Engineering - Electrical Engineering]; 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees conferrable in Engineering or Science)]

Funding: Lebanese American University; National Natural Science Foundation of China

Keywords: convergence; Jacobian matrices; noise measurement; covariance matrices; approximation algorithms; estimation; optimization; gradient descent; Newton's method; noisy function measurements; root finding; Rosenbrock function; stochastic approximation (SA); stochastic optimization; variance reduction

Abstract: This paper deals with an iterative Jacobian-based recursion technique for the root-finding problem of a vector-valued function whose evaluations are contaminated by noise. Instead of a scalar step size, we use an iterate-dependent matrix gain to effectively weigh the different elements associated with the noisy observations. The analytical development of the matrix gain is built on an iterate-dependent linear function perturbed by additive zero-mean white noise, where the dimension of the function is $M \ge 1$ and the dimension of the unknown variable is $N \ge 1$. Necessary and sufficient conditions pertaining to algorithm stability and convergence of the estimate error covariance matrix are presented for $M \ge N$ algorithms. Two algorithms are proposed: one for the case $M \ge N$ and a second for its antithesis, $M < N$; both assume full knowledge of the Jacobian. Recursive algorithms are proposed for generating the optimal iterate-dependent matrix gain, aiming at per-iteration minimization of the mean square estimate error. We show that the proposed algorithm satisfies the presented conditions for stability and convergence of the covariance, and that the convergence rate of the estimation error covariance is inversely proportional to the number of iterations. For the antithesis $M < N$, contraction of the error covariance is guaranteed; such underdetermined systems of equations can be helpful in training neural networks. Numerical examples illustrate the performance of the proposed multidimensional gain on nonlinear functions.
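
For a concrete feel of the matrix-gain idea sketched in the abstract, the following Python snippet is a minimal illustration, not the authors' algorithm: it runs a Jacobian-based stochastic approximation recursion on a hypothetical noisy linear model with M = N = 3, using a simple Newton-type gain schedule G_k = A^{-1}/(k+1) in place of the paper's per-iteration MSE-optimal gain. All names and values below are made up for illustration.

import numpy as np

# Toy matrix-gain stochastic approximation for root finding:
# f(x) = A (x - x_star), observed with additive zero-mean white noise.
# Gain schedule G_k = A^{-1} / (k + 1) is a simple Newton-type choice
# for this linear model, NOT the optimal gain derived in the paper.

rng = np.random.default_rng(0)

N = 3                                   # dimension of the unknown (M = N here)
A = np.array([[4.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 2.0]])         # known Jacobian of the linear model
x_star = np.array([1.0, -2.0, 0.5])     # true root (hypothetical)
noise_std = 0.1

def noisy_f(x):
    """Noisy evaluation of f(x) = A (x - x_star)."""
    return A @ (x - x_star) + noise_std * rng.standard_normal(N)

A_inv = np.linalg.inv(A)
x = np.zeros(N)                         # initial estimate

for k in range(2000):
    y = noisy_f(x)                      # noisy function measurement
    G = A_inv / (k + 1)                 # iterate-dependent matrix gain
    x = x - G @ y                       # SA update with matrix gain

print("estimate:", x)
print("error norm:", np.linalg.norm(x - x_star))

With this schedule the estimate is an average of noisy Newton corrections, so the error covariance decays roughly as 1/k, which matches the inverse-proportional convergence rate claimed in the abstract for the M >= N case.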
