ISBN (print): 9781479980840
The diffusion least mean squares (LMS) [1] algorithm converges faster than the original LMS in a distributed network, and it outperforms other distributed LMS algorithms such as spatial LMS and incremental LMS [2]. However, neither LMS nor diffusion-LMS is applicable in non-linear environments where the data may not be linearly separable [3]. A variant of LMS called kernel LMS (KLMS) was proposed in [3] to handle such non-linearities. In this paper, we propose a kernelised version of diffusion-LMS.
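A minimal NumPy sketch of the kernel-LMS recursion of [3] may help fix ideas: each incoming sample becomes a kernel centre whose coefficient is the step size times the prediction error. The Gaussian kernel width, step size, and toy non-linear channel below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel between two input vectors."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Minimal kernel-LMS filter: every input is stored as a kernel
    centre with coefficient mu * (prediction error)."""

    def __init__(self, step_size=0.5, sigma=1.0):
        self.mu = step_size
        self.sigma = sigma
        self.centres = []   # stored inputs u_i
        self.coeffs = []    # expansion coefficients a_i

    def predict(self, u):
        # filter output: sum_i a_i * k(c_i, u)
        return sum(a * gaussian_kernel(c, u, self.sigma)
                   for a, c in zip(self.coeffs, self.centres))

    def update(self, u, d):
        e = d - self.predict(u)        # prediction error
        self.centres.append(u)         # grow the kernel dictionary
        self.coeffs.append(self.mu * e)
        return e

# illustrative use: identify a mildly non-linear channel (hypothetical model)
rng = np.random.default_rng(0)
klms = KLMS(step_size=0.5, sigma=1.0)
for _ in range(500):
    u = rng.standard_normal(3)
    d = np.tanh(u @ np.array([0.5, -0.3, 0.8])) + 0.01 * rng.standard_normal()
    klms.update(u, d)
```

Note that the dictionary grows with every sample; practical KLMS variants typically prune or sparsify it to bound complexity.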
ISBN (print): 9781479903566
We develop a gradient-descent distributed adaptive estimation strategy that compensates for errors in both the input and output data. To this end, we combine the concepts of total least-squares estimation and gradient-descent optimization with a recently proposed framework for diffusion adaptation over networks. The proposed strategy requires no prior knowledge of the noise variances and has a computational complexity comparable to that of the diffusion least mean squares (DLMS) strategy. Simulation results demonstrate that the proposed strategy provides significantly improved estimation performance over the DLMS and bias-compensated DLMS (BC-DLMS) strategies when both the input and output signals are noisy.
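For context, the baseline the abstract compares against is diffusion LMS. Below is a sketch of the standard adapt-then-combine (ATC) diffusion-LMS recursion, not the proposed TLS-based strategy itself; the ring topology, uniform combination weights, step size, and toy data model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 10, 4                      # network size, filter length
w_true = rng.standard_normal(M)   # common parameter to estimate

# ring network: each node combines with itself and its two neighbours;
# combination matrix A has columns summing to 1 (a_{lk} = weight node k
# assigns to neighbour l)
A = np.zeros((N, N))
for k in range(N):
    for l in (k - 1, k, k + 1):
        A[l % N, k] = 1.0 / 3.0

W = np.zeros((N, M))   # current estimates, one row per node
mu = 0.02              # step size (illustrative)
for _ in range(2000):
    # adaptation step: local LMS update at every node
    Psi = np.empty_like(W)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy measurement
        Psi[k] = W[k] + mu * (d - u @ W[k]) * u
    # combination step: convex averaging over each neighbourhood,
    # w_k = sum_l a_{lk} * psi_l
    W = A.T @ Psi

print(np.linalg.norm(W.mean(axis=0) - w_true))  # network-average error
```

Per the abstract, the proposed strategy modifies the local adaptation with a total-least-squares-based gradient step that also accounts for noise on the input regressors, while operating within the same diffusion (combination) framework.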