In dictionary learning, sparse regularization is used to promote sparsity and has played a major role in the development of dictionary learning algorithms. The ℓ1-norm is one of the most popular sparse regularizers due to its convexity and the tractable convex optimization problems it induces. However, the ℓ1-norm leads to biased solutions and provides inferior performance in certain applications compared with nonconvex sparse regularizations. In this work, we propose a dictionary learning model based on the generalized minimax-concave (GMC) sparse regularization, which is nonconvex, to promote sparsity. Applying the alternating optimization scheme, we use the forward-backward splitting (FBS) algorithm to solve the sparse coding problem. As improvements, we incorporate Nesterov's acceleration technique and an adaptive threshold scheme into the FBS algorithm to increase convergence efficiency and performance. In the dictionary update step, we apply difference-of-convex (DC) programming and the DC algorithm (DCA). Two dictionary update algorithms are designed; one updates the dictionary atoms one by one, and the other updates them simultaneously. The presented dictionary learning algorithms perform robustly in dictionary recovery. Numerical experiments verify the performance of the proposed algorithms and compare them with state-of-the-art algorithms. (C) 2020 Published by Elsevier B.V.
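As a rough illustration of the sparse coding step, the sketch below implements plain forward-backward splitting with ℓ1 soft-thresholding as a stand-in for the GMC proximal step (which the paper handles differently); the function names and parameters are hypothetical.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code_fbs(D, y, lam, n_iter=200):
    """Forward-backward splitting for min_x 0.5*||y - D x||^2 + lam*||x||_1.

    A stand-in for the paper's GMC-regularized sparse coding step: the
    forward (gradient) step is identical, and only the backward
    (proximal/thresholding) step would differ for the GMC penalty.
    """
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)         # forward step on the data term
        x = soft_threshold(x - step * grad, step * lam)  # backward step
    return x
```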
The notion of soft thresholding plays a central role in problems from various areas of applied mathematics, in which the ideal solution is known to possess a sparse decomposition in some orthonormal basis. Using convex-analytical tools, we extend this notion to that of proximal thresholding and investigate its properties, providing, in particular, several characterizations of such thresholders. We then propose a versatile convex variational formulation for optimization over orthonormal bases that covers a wide range of problems, and we establish the strong convergence of a proximal thresholding algorithm to solve it. Numerical applications to signal recovery are demonstrated.
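A minimal sketch of thresholding over an orthonormal basis: when the synthesis operator W is orthonormal, soft-thresholding the analysis coefficients solves the ℓ1-penalized least-squares problem in closed form. The basis, data, and threshold below are illustrative, not from the paper.

```python
import numpy as np

def soft_threshold(x, t):
    """prox of t*||.||_1: shrinks coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# For an orthonormal basis W (W.T @ W = I), the minimizer of
#   0.5 * ||y - W @ x||^2 + lam * ||x||_1
# is obtained by thresholding the analysis coefficients W.T @ y.
rng = np.random.default_rng(0)
W, _ = np.linalg.qr(rng.standard_normal((64, 64)))   # random orthonormal basis
x_true = np.zeros(64)
x_true[:5] = 3.0                                     # sparse ideal solution
y = W @ x_true + 0.1 * rng.standard_normal(64)       # noisy observation
x_hat = soft_threshold(W.T @ y, 0.3)                 # proximal thresholding step
```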
This paper reveals that a residual measure operator plays a common and central role in many error bound (EB) conditions and in a variety of gradient-type methods. On one hand, by linking this operator with other optimality measures, we define a group of abstract EB conditions and then analyze the interplay between them; on the other hand, by using this operator as an ascent direction, we propose an abstract gradient-type method and then derive EB conditions that are necessary and sufficient for its linear convergence. The former provides a unified framework that not only allows us to find new connections between many existing EB conditions but also paves a way to construct new ones. The latter allows us to identify the weakest conditions guaranteeing linear convergence for a number of fundamental algorithms, including the gradient method, the proximal point algorithm, and the forward-backward splitting algorithm. In addition, we show linear convergence for the proximal alternating linearized minimization algorithm under a group of equivalent EB conditions, which are strictly weaker than the traditional strong convexity condition. Moreover, by defining a new EB condition, we show Q-linear convergence of Nesterov's accelerated forward-backward algorithm without strong convexity. Finally, we verify EB conditions for a class of dual objective functions.
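For a forward-backward setting with an ℓ1 regularizer, one common instance of such a residual measure operator is the gap between a point and its proximal-gradient update; the sketch below, with hypothetical data, illustrates this instance rather than the paper's abstract definition.

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fb_residual(x, grad_f, lam, step=1.0):
    """Residual measure operator for F = f + lam*||.||_1:
    r(x) = x - prox_{step*lam*||.||_1}(x - step*grad_f(x)).
    r(x) = 0 exactly at minimizers; an EB condition then asks that
    dist(x, argmin F) <= kappa * ||r(x)|| near the solution set.
    """
    return x - soft_threshold(x - step * grad_f(x), step * lam)

# Example with f(x) = 0.5 * ||A @ x - b||^2 (hypothetical data).
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 1.0])
grad = lambda x: A.T @ (A @ x - b)
print(fb_residual(np.zeros(2), grad, lam=0.1))
```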
We derive strongly convergent algorithms to solve inverse problems involving elastic-net regularization. Moreover, using functional-analytic techniques, we provide a rigorous study of the asymptotic properties of the regularized solutions that allows us to cast ℓ1, elastic-net, and classical Tikhonov regularization in a unified framework.
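A hedged sketch of how such an algorithm might look: the proximal operator of the elastic-net penalty is soft-thresholding followed by uniform shrinkage, and iterating it in a forward-backward scheme recovers ℓ1 regularization when lam2 = 0 and a Tikhonov-type scheme when lam1 = 0, mirroring the unified framework above. The function names are mine, not the paper's.

```python
import numpy as np

def prox_elastic_net(v, t, lam1, lam2):
    """Proximal operator of t * (lam1*||x||_1 + 0.5*lam2*||x||^2):
    soft-thresholding followed by a uniform shrinkage."""
    st = np.sign(v) * np.maximum(np.abs(v) - t * lam1, 0.0)
    return st / (1.0 + t * lam2)

def elastic_net_ista(A, b, lam1, lam2, n_iter=500):
    """Iterative forward-backward scheme for
    min_x 0.5*||A x - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2   # step from the Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = prox_elastic_net(x - t * A.T @ (A @ x - b), t, lam1, lam2)
    return x
```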
We study an extension of the proximal gradient algorithm where only a stochastic gradient estimate is available and a relaxation step is allowed. We establish convergence rates for function values in the convex case, as well as almost sure convergence and convergence rates for the iterates under further convexity assumptions. Our analysis avoids averaging the iterates and error summability assumptions, which might not be satisfied in applications, e.g., in machine learning. Our proof technique extends classical ideas from the analysis of deterministic proximal gradient algorithms.
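A minimal sketch of the scheme, instantiated for ℓ1-regularized least squares: a minibatch gradient supplies the stochastic forward step, soft-thresholding the backward step, and a convex combination the relaxation. The step size, relaxation value, and batch size below are illustrative assumptions.

```python
import numpy as np

def relaxed_stoch_prox_grad(A, b, lam, gamma=0.01, relax=0.8,
                            n_iter=2000, batch=8, seed=0):
    """Relaxed stochastic proximal gradient for
    min_x 0.5*||A x - b||^2 / batch-average + lam*||x||_1."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_iter):
        idx = rng.choice(n, size=batch, replace=False)
        g = A[idx].T @ (A[idx] @ x - b[idx]) / batch     # stochastic gradient
        v = x - gamma * g                                # forward step
        x_half = np.sign(v) * np.maximum(np.abs(v) - gamma * lam, 0.0)
        x = (1 - relax) * x + relax * x_half             # relaxation step
    return x
```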
This paper is devoted to the optimal selection of the relaxation parameter sequence for the Krasnosel'skii-Mann iteration. Firstly, we establish the optimal relaxation parameter sequence of the Krasnosel'skii-Mann iteration, with which the algorithm is proved to achieve the optimal convergence rate. Then we present an approximation to the optimal relaxation parameter sequence, since the optimal sequence involves fixed points of the operators and cannot be used in actual computation. Thirdly, we apply our results to operator splitting methods, such as the forward-backward splitting method and the Douglas-Rachford splitting method. Finally, numerical experiments illustrate that the Krasnosel'skii-Mann iteration and the relaxed projection method with our proposed relaxation parameter sequence behave better than other choices.
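For concreteness, here is a generic Krasnosel'skii-Mann iteration with a user-supplied relaxation sequence, applied to the (averaged) forward-backward operator of an ℓ1-regularized least-squares problem; the constant relaxation value is a placeholder, not the paper's optimal sequence.

```python
import numpy as np

def km_iteration(T, x0, relax_seq, n_iter=100):
    """Krasnosel'skii-Mann iteration x_{k+1} = x_k + lam_k*(T(x_k) - x_k)
    for a nonexpansive operator T; relax_seq(k) supplies lam_k in (0, 1]."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        x = x + relax_seq(k) * (T(x) - x)
    return x

# Example: T is the forward-backward operator of
# min_x 0.5*||A x - b||^2 + lam*||x||_1, which is averaged, so KM applies.
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2

def T(x):
    v = x - step * A.T @ (A @ x - b)
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

x_star = km_iteration(T, np.zeros(2), relax_seq=lambda k: 0.9)
```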
First-order methods such as proximal gradient, which use forward-backward splitting techniques, have proved to be very effective in solving nonsmooth convex minimization problems, which arise in practical fields such as machine learning and image processing. In this paper, we propose a few new forward-backward splitting algorithms that require fewer iterations to converge to an optimum. In addition, we derive convergence rates for the proposed formulations and show that their speed of convergence is significantly better than that of the traditional forward-backward algorithm. To demonstrate practical applicability, we apply them to two real-world problems in machine learning and image processing. The first is regression on high-dimensional datasets, whereas the second is image deblurring. Numerical experiments on several publicly available real datasets verify the obtained theoretical results. The results demonstrate the superiority of our algorithms over classical first-order methods in terms of accuracy, the number of iterations required to converge, and the rate of convergence.
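As a point of reference for such accelerated schemes, below is a textbook FISTA-style accelerated forward-backward iteration for ℓ1-regularized least squares; it is a generic sketch, not the specific algorithms proposed in the paper.

```python
import numpy as np

def fista(A, b, lam, n_iter=300):
    """Accelerated forward-backward splitting (FISTA-style) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1.

    The gradient step is taken from an extrapolated point z, which
    improves the objective rate from O(1/k) to O(1/k^2)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    z = x.copy()
    t = 1.0
    for _ in range(n_iter):
        v = z - step * A.T @ (A @ z - b)                         # forward step at z
        x_new = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # backward step
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x_new + ((t - 1.0) / t_new) * (x_new - x)            # extrapolation
        x, t = x_new, t_new
    return x
```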
ISBN (print): 9781479952557
This paper deals with the quadrature amplitude modulation (QAM) detection problem for the multiple-input multiple-output (MIMO) channel. Based on maximum likelihood estimation, the QAM detection problem is formulated as an integer quadratic program, a combinatorial problem for which exact solutions are difficult to obtain. To overcome these combinatorial difficulties, this paper formulates the QAM detection problem as an ℓ0-norm minimization problem and relaxes it into a quadratic program with ℓ1-norm regularization. Utilizing and modifying the forward-backward splitting (FOBOS) algorithm, a new QAM detection algorithm is proposed. This algorithm offers a trade-off between computational cost and detection accuracy, which depends on a parameter of the algorithm. Numerical simulations show that the proposed algorithm works well and achieves good detection performance with less computational cost compared with the semidefinite relaxation (SDR) based algorithm.
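A hypothetical sketch of the general approach: solve an ℓ1-relaxed quadratic program by forward-backward splitting, then round to the QAM alphabet. The paper's exact formulation, its FOBOS modification, and its parameter choices differ; everything below, including the alphabet and lam, is illustrative.

```python
import numpy as np

def fobos_qam_detect(H, y, lam=0.1, n_iter=200, alphabet=(-3, -1, 1, 3)):
    """l1-relaxed MIMO detection via forward-backward splitting:
    solve min_s 0.5*||y - H s||^2 + lam*||s||_1 over real s, then round
    each entry to the nearest QAM level. The regularization weight lam
    trades detection accuracy against computational cost."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2
    s = np.zeros(H.shape[1])
    for _ in range(n_iter):
        v = s - step * H.T @ (H @ s - y)                          # forward step
        s = np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)  # shrinkage step
    levels = np.asarray(alphabet, dtype=float)
    return levels[np.argmin(np.abs(s[:, None] - levels[None, :]), axis=1)]
```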