Author affiliations: Isfahan Univ Technol, Dept Elect & Comp Engn, Esfahan, Iran; Queens Univ, Dept Elect & Comp Engn, Kingston, ON K7L 3N6, Canada; Univ Toronto, Dept Elect & Comp Engn, Toronto, ON M5S 3G4, Canada
Publication: IEE Proceedings - Vision, Image and Signal Processing (IEE Proc Vision Image Signal Proc)
Year/Volume/Issue: 2000, Vol. 147, No. 3
Pages: 231-237
Keywords: prediction theory; least mean squares methods; adaptive signal processing; deterministic least squares minimisation; normalised LMS algorithm; optimisation techniques; stochastic least squares minimisation; exact solutions; weight speed estimation; predicted errors; H∞ optimisation; weight prediction; filtered errors; second-order H∞-optimal LMS algorithm; Markov processes; approximate solutions; weight increment vector; second-order Markov model; filtering methods in signal processing; second-order H∞-optimal NLMS algorithm; least mean squares algorithm; interpolation and function approximation (numerical analysis); adaptive filtering; second-order algorithms; second-order LMS; smoothly time-varying models; tracking; filtering theory; maximum energy gain; tracking performance; Kalman filter; time-varying systems; adaptive Kalman filters; second-order NLMS; signal processing theory
Abstract: It is shown that two algorithms obtained by simplifying a Kalman filter designed for a second-order Markov model are H∞ suboptimal. Like the least mean squares (LMS) and normalised LMS (NLMS) algorithms, these second-order algorithms can be viewed as approximate solutions to stochastic or deterministic least squares minimisation. It is proved that second-order LMS and NLMS are exact solutions that keep the maximum energy gain from the disturbances to the predicted and filtered errors, respectively, below one. The algorithms are implemented in two steps. The first step operates like the conventional LMS/NLMS algorithms, and the second step estimates the weight increment vector and predicts the weights for the next iteration; it applies simple smoothing to the increments of the estimated weights to estimate the speed of the weights. The algorithms are also cost-effective, robust, and attractive for improving the tracking performance of smoothly time-varying models.
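A minimal Python sketch of the two-step operation described in the abstract. The exponential smoothing used to estimate the weight speed, the prediction rule, and the parameter names mu and beta are illustrative assumptions, not the paper's exact update equations.

    import numpy as np

    def second_order_lms(x, d, order=4, mu=0.01, beta=0.9):
        # Sketch of a second-order LMS update (assumed form):
        # step 1 computes the conventional LMS weight increment; step 2
        # smooths the increments to estimate the weight "speed" and uses
        # it to predict the weights for the next iteration.
        w = np.zeros(order)            # current weight estimate
        v = np.zeros(order)            # smoothed weight increment (speed estimate)
        e = np.zeros(len(x))
        for k in range(order, len(x)):
            u = x[k - order:k][::-1]   # regressor (most recent sample first)
            e[k] = d[k] - w @ u        # a priori (filtered) error
            delta = mu * e[k] * u      # step 1: conventional LMS increment
            v = beta * v + (1.0 - beta) * delta   # step 2: smooth the increments
            w = w + delta + v          # predicted weights for the next iteration
        return w, e

Under the same assumptions, an NLMS-style variant would divide the increment delta by the regressor energy (u @ u plus a small constant) before the smoothing step.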