Sparse recovery algorithms have been applied to space-time adaptive processing over the past 15 years to reduce the number of samples required. However, many sparse recovery algorithms are not robust and require accurately tuned user parameters. Conventional sparse Bayesian learning (SBL) algorithms are insensitive to user parameters but converge slowly. To remedy these limitations, two iterative reweighted algorithms based on SBL are proposed. In order to minimise the SBL penalty function, we construct its upper-bounding surrogate function via the concave conjugate function and apply iterative reweighted algorithms to minimise the surrogate function. Theoretical analysis and numerical experiments both demonstrate the strong performance of the proposed algorithms.
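The surrogate-minimisation idea in this abstract can be illustrated with a generic iterative reweighted least-squares scheme (FOCUSS-style): each pass solves a weighted minimum-norm problem whose weights come from the previous iterate, so small coefficients are progressively shrunk toward zero. This is a minimal sketch of the general technique, not the paper's exact SBL-based update; the weight rule, regulariser `lam`, and floor `eps` are illustrative assumptions.

```python
import numpy as np

def reweighted_l2(A, y, n_iter=50, lam=1e-3, eps=1e-8):
    # Hypothetical surrogate-minimisation sketch: iterative reweighted
    # least squares. The diagonal weights W come from the previous
    # iterate, so each step minimises a quadratic upper bound of a
    # concave sparsity penalty and small entries decay toward zero.
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        W = np.diag(np.abs(x) + eps)          # reweighting from last iterate
        G = A @ W @ A.T + lam * np.eye(m)     # regularised weighted Gram matrix
        x = W @ A.T @ np.linalg.solve(G, y)   # weighted minimum-norm update
    return x

# Toy problem: a 2-sparse vector observed through 10 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
y = A @ x_true
x_hat = reweighted_l2(A, y)
```

On easy instances like the one above, the iterates concentrate on the true support without any hand-tuned sparsity parameter, which is the practical appeal of reweighted schemes highlighted in the abstract.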
In this paper, we survey and compare different algorithms that, given an overcomplete dictionary of elementary functions, solve the problem of simultaneous sparse signal approximation with a common sparsity profile induced by an l(p)-l(q) mixed norm. This problem is also known in the statistical learning community as the group lasso problem. We have gathered and detailed different algorithmic results concerning these two equivalent approximation problems, and we have enriched the discussion by providing relations between several algorithms. Experimental comparisons of the detailed algorithms have also been carried out. The main lesson learned from these experiments is that, depending on the performance measure (computational complexity, sparsity recovery, or mean-square error), either greedy approaches or iterative reweighted algorithms are the most efficient. (C) 2011 Elsevier B.V. All rights reserved.
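The l(1)-l(2) instance of the mixed norm (the classic group lasso) admits a closed-form proximal operator: each group is shrunk toward zero in Euclidean norm, and groups whose norm falls below the threshold are zeroed entirely. A minimal sketch, assuming rows of `X` are the groups; the function name and threshold `tau` are illustrative, not from the surveyed papers.

```python
import numpy as np

def group_soft_threshold(X, tau):
    # Proximal operator of tau * sum_g ||X[g]||_2 with rows as groups:
    # scale each row by max(1 - tau / ||row||, 0), which shrinks large
    # rows and sets small rows exactly to zero (group sparsity).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * X

X = np.array([[3.0, 4.0],    # norm 5 > tau: shrunk, direction preserved
              [0.3, 0.4]])   # norm 0.5 < tau: whole group zeroed
out = group_soft_threshold(X, 1.0)
```

This one-line shrinkage is the building block of the proximal and iterative-thresholding methods that the survey compares against greedy and reweighted approaches.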
The fact that log-sum minimization needs fewer measurements for sparse signal recovery than l(1)-minimization has been observed in extensive experiments. Nevertheless, this benefit of the log-sum penalty function has not been rigorously proved. This paper provides a theoretical justification for adopting the log-sum as an alternative sparsity-encouraging function. We prove that minimizing the log-sum penalty function subject to Az = y yields the exact solution, provided that a certain condition is satisfied. Specifically, our analysis suggests that, for a properly chosen regularization parameter, exact reconstruction can be attained when the restricted isometry constant delta(3K) is smaller than one, a less restrictive isometry condition than that required by conventional l(1)-type methods.
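In practice, the log-sum penalty sum_i log(|z_i| + eps) subject to Az = y is typically minimized by majorize-minimize reweighted l1: each step solves a weighted l1 problem with weights 1/(|z_i| + eps) taken from the previous iterate. The sketch below is a generic illustration of that scheme, not the paper's own algorithm; the LP formulation via `scipy.optimize.linprog`, the smoothing parameter `eps`, and the iteration count are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    # Solve  min_z sum_i w_i |z_i|  s.t.  A z = y  as an LP over [z; t],
    # where |z_i| <= t_i is encoded by the pair  z - t <= 0,  -z - t <= 0.
    m, n = A.shape
    I = np.eye(n)
    res = linprog(
        c=np.concatenate([np.zeros(n), w]),
        A_ub=np.block([[I, -I], [-I, -I]]),
        b_ub=np.zeros(2 * n),
        A_eq=np.hstack([A, np.zeros((m, n))]),
        b_eq=y,
        bounds=[(None, None)] * n + [(0, None)] * n,
        method="highs",
    )
    return res.x[:n]

def log_sum_min(A, y, n_iter=4, eps=0.1):
    # Majorize-minimize for sum_i log(|z_i| + eps): each iteration
    # reweights by 1 / (|z_i| + eps), so already-large entries are
    # penalized less, mimicking the log-sum's stronger sparsity push.
    z = weighted_l1(A, y, np.ones(A.shape[1]))
    for _ in range(n_iter):
        z = weighted_l1(A, y, 1.0 / (np.abs(z) + eps))
    return z

# Toy problem: exact recovery of a 2-sparse vector from 12 measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((12, 20))
x_true = np.zeros(20)
x_true[[4, 15]] = [2.0, -1.0]
z_hat = log_sum_min(A, A @ x_true)
```

The first iteration is plain l(1)-minimization; the subsequent reweighted passes are what let the log-sum surrogate succeed at measurement counts where l(1) alone starts to fail, which is the empirical observation the paper sets out to justify.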