A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancellation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
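The core idea, dropping the constraint of a zero a posteriori error vector so that no input-matrix inversion is needed, can be illustrated with a minimal system-identification sketch. The gradient-style update below, normalized by the squared Frobenius norm of the input matrix, is an illustrative stand-in for the paper's APL recursion rather than its exact update, and all parameter values are hypothetical:

```python
import numpy as np

def apl_identify(x, d, order, mu=0.5, proj=4):
    """Illustrative affine-projection-like (APL) update: the weights are
    driven by the last `proj` regressors and errors directly, so no
    inversion of the input-signal matrix is required (unlike conventional
    AP, which solves with inv(X.T @ X)). Hedged sketch, not the paper's
    exact recursion."""
    w = np.zeros(order)
    for n in range(order + proj - 1, len(x)):
        # Input matrix X: columns are the `proj` most recent regressors,
        # each with the newest sample first.
        X = np.column_stack([x[n - j - order + 1 : n - j + 1][::-1]
                             for j in range(proj)])
        e = d[n - np.arange(proj)] - X.T @ w          # a priori error vector
        # Gradient-style step: mu * X @ e replaces mu * X @ inv(X.T X) @ e
        w = w + mu * (X @ e) / (np.trace(X.T @ X) + 1e-8)
    return w
```

In a quick identification experiment with a short FIR plant and white input, this inversion-free update drives the weight vector close to the true impulse response, which is the behavior the abstract describes at reduced per-iteration cost.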
A family of adaptive-filtering algorithms that uses a variable step size is proposed. A variable step size is obtained by minimizing the energy of the noise-free a posteriori error signal, which is obtained by using a known L-1-L-2 minimization formulation. Based on this methodology, a shrinkage affine projection (SHAP) algorithm, a shrinkage least-mean-squares (SHLMS) algorithm, and a shrinkage normalized least-mean-squares (SHNLMS) algorithm are proposed. The SHAP algorithm yields a significantly reduced steady-state misalignment as compared to the conventional affine projection (AP), variable-step-size AP, and set-membership AP algorithms for the same convergence speed, although the improvement is achieved at the cost of an 11% to 14% increase in the average computational effort per iteration. The SHLMS algorithm yields a significantly reduced steady-state misalignment and faster convergence as compared to the conventional LMS and variable-step-size LMS algorithms. Similarly, the SHNLMS algorithm yields a significantly reduced steady-state misalignment and faster convergence as compared to the conventional normalized least-mean-squares (NLMS) and set-membership NLMS algorithms.
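As one illustration of the shrinkage idea, the sketch below estimates the noise-free a priori error by soft-thresholding, which is the closed-form solution of the L-1-L-2 shrinkage-denoising problem, and uses that estimate in place of the raw error in an NLMS step, so the ratio of the two acts as the variable step size. The threshold value is a hypothetical stand-in for the paper's noise-dependent choice:

```python
import numpy as np

def shnlms(x, d, order, t=0.01, eps=1e-8):
    """Sketch of a shrinkage NLMS (SHNLMS) update. The noise-free a priori
    error is estimated via soft-thresholding (L1-L2 shrinkage), and the
    NLMS step is scaled by it; when |e| falls below the threshold `t`, the
    update stops, which lowers steady-state misalignment. Hedged sketch
    with a hypothetical threshold, not the paper's exact rule."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1 : n + 1][::-1]          # regressor, newest first
        e = d[n] - u @ w                            # a priori error
        e_nf = np.sign(e) * max(abs(e) - t, 0.0)    # shrinkage estimate of noise-free error
        w = w + (e_nf / (u @ u + eps)) * u          # NLMS step, variable step size e_nf/e
    return w
```

Because updates shrink to zero once the error is comparable to the noise floor, the filter effectively trades its step size down as it converges, which is the mechanism behind the reduced steady-state misalignment claimed for the shrinkage family.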
Two new improved recursive least-squares adaptive-filtering algorithms, one with a variable forgetting factor and the other with a variable convergence factor, are proposed. Optimal forgetting and convergence factors are obtained by minimizing the mean square of the noise-free a posteriori error signal. The determination of the optimal forgetting and convergence factors requires information about the noise-free a priori error, which is obtained by solving a known L-1-L-2 minimization problem. Simulation results in system-identification and channel-equalization applications are presented which demonstrate that improved steady-state misalignment, tracking capability, and readaptation can be achieved relative to those of some state-of-the-art competing algorithms.
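A hedged sketch of the variable-forgetting-factor idea: a shrinkage estimate of the noise-free a priori error drives the forgetting factor between a tracking-oriented lower bound (when the error estimate is large) and a misalignment-oriented upper bound (when it is small). The mapping used below is an illustrative heuristic, not the paper's optimal closed form, and all bounds and thresholds are hypothetical:

```python
import numpy as np

def vff_rls(x, d, order, lam_min=0.95, lam_max=0.9999, t=0.01, delta=100.0):
    """Sketch of RLS with a variable forgetting factor. The noise-free a
    priori error is estimated by soft-thresholding (L1-L2 shrinkage); a
    large estimate pushes the forgetting factor toward lam_min for fast
    tracking/readaptation, a small one toward lam_max for low steady-state
    misalignment. Illustrative heuristic mapping, not the paper's rule."""
    w = np.zeros(order)
    P = delta * np.eye(order)                       # inverse correlation matrix
    for n in range(order - 1, len(x)):
        u = x[n - order + 1 : n + 1][::-1]          # regressor, newest first
        e = d[n] - u @ w                            # a priori error
        e_nf = np.sign(e) * max(abs(e) - t, 0.0)    # shrinkage noise-free-error estimate
        ratio = abs(e_nf) / (abs(e) + 1e-12)        # near 1 -> far from convergence
        lam = lam_max - (lam_max - lam_min) * ratio # variable forgetting factor
        k = (P @ u) / (lam + u @ P @ u)             # RLS gain vector
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam          # inverse-correlation update
    return w
```

The recursion itself is conventional RLS; only the per-iteration choice of the forgetting factor changes, which is why the approach preserves RLS convergence speed while improving tracking and steady-state behavior.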
ISBN:
(print) 9781479930999
A partial-update NLMS (PU-NLMS) algorithm is proposed that uses a variable step size which is obtained by solving a constrained minimization problem. The proposed algorithm can be used with two different known updates of the inherent diagonal matrix. Simulation results in a system identification application demonstrate that the proposed PU-NLMS algorithm yields reduced steady-state misalignment as compared to the known PU-NLMS, the set-membership PU-NLMS, and the M-max NLMS algorithms. The proposed PU-NLMS algorithm requires approximately the same number of iterations to converge as the conventional and set-membership PU-NLMS algorithms and somewhat fewer iterations relative to the M-max NLMS algorithm. Furthermore, it is shown that through the use of one of the two known updates of the inherent diagonal matrix, reduced computational effort can also be achieved relative to those of the known PU-NLMS and M-max NLMS algorithms.
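The M-max-style coefficient selection underlying partial-update NLMS can be sketched as follows: at each iteration only the taps whose regressor entries have the largest magnitude are updated, which is the role of the inherent diagonal selection matrix mentioned in the abstract. A fixed step size is used here, whereas the proposed algorithm derives a variable step size from a constrained minimization, so this is only an illustrative baseline with hypothetical parameter values:

```python
import numpy as np

def pu_nlms(x, d, order, m, mu=0.5, eps=1e-8):
    """Sketch of a partial-update NLMS step with M-max selection: only the
    m coefficients paired with the largest-magnitude regressor entries are
    updated each iteration, reducing per-iteration computation. Fixed mu
    here; the proposed algorithm uses a variable step size instead."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1 : n + 1][::-1]          # regressor, newest first
        e = d[n] - u @ w                            # a priori error
        sel = np.argsort(np.abs(u))[-m:]            # indices of the m largest |u_i|
        us = u[sel]
        w[sel] += mu * e * us / (us @ us + eps)     # update only the selected taps
    return w
```

With m close to the filter order, the partial update behaves much like full NLMS while touching fewer coefficients per iteration, which is the computational trade-off the abstract quantifies against the known PU-NLMS and M-max NLMS algorithms.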