This paper aims to find an approximate true sparse solution of an underdetermined linear system. For this purpose, we propose two types of iterative thresholding algorithms with the continuation technique and the truncation technique, respectively. We introduce a notion of limited shrinkage thresholding operator and apply it, together with the restricted isometry property, to show that the proposed algorithms converge to an approximate true sparse solution within a tolerance relevant to the noise level and the limited shrinkage magnitude. Applying the obtained results to nonconvex regularization problems with SCAD, MCP and $\ell_p$ penalty ($0 \le p \le 1$) and utilizing the recovery bound theory, we establish the convergence of their proximal gradient algorithms to an approximate global solution of nonconvex regularization problems. The established results include the existing convergence theory for $\ell_1$ or $\ell_0$ regularization problems for finding a true sparse solution as special cases.
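To give a concrete flavor of this family of methods (a minimal sketch under illustrative assumptions, not the paper's exact algorithm), the snippet below runs iterative soft thresholding with a simple continuation schedule on the threshold; soft thresholding is one example of a limited shrinkage thresholding operator, and the function names, parameters, and test data are all made up for illustration.

```python
import numpy as np

def soft_threshold(x, tau):
    # limited shrinkage thresholding: each entry is moved toward zero by at most tau
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ist_continuation(A, b, tau0=1.0, tau_min=1e-3, rho=0.5, inner_iters=50):
    """Iterative soft thresholding with a decreasing (continuation) threshold schedule."""
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient of 0.5*||Ax - b||^2
    tau = tau0
    while tau >= tau_min:
        for _ in range(inner_iters):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, tau / L)
        tau *= rho                          # continuation: gradually reduce the threshold
    return x

# toy example: recover a 3-sparse vector from 40 noisy measurements in R^100
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100); x_true[[3, 17, 60]] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.linalg.norm(ist_continuation(A, b) - x_true))
```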
This paper studies a nonsmooth resource allocation problem with network resource constraints and local set constraints, where the interaction graphs among agents are general strongly connected digraphs. First, we design a centralized continuous-time proximal gradient algorithm, in which each agent uses the global Lagrangian multipliers and the global values of the constraint functions. For the case in which the agents' private information must not be leaked and the global Lagrangian multipliers are not available, the agents are endowed with additional variables to estimate this global information via consensus protocols. Then, we construct a class of continuous-time distributed proximal gradient algorithms by using a two-time-scale mechanism to integrate the proposed proximal gradient algorithm and the consensus protocols. By adopting Lyapunov stability theory and convex optimization theory, we prove that the decision variables asymptotically converge to the optimal solution of the nonsmooth resource allocation problem. Finally, numerical simulations illustrate the effectiveness of the proposed algorithms.
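The consensus-based estimation of global quantities can be illustrated with a toy example. The sketch below (an illustration only, not the authors' distributed algorithm) forward-Euler discretizes the standard continuous-time consensus protocol dx/dt = -Lx over a strongly connected directed ring; the graph, step size, and initial estimates are assumptions chosen for the demonstration.

```python
import numpy as np

# Directed ring over 4 agents (strongly connected digraph); L is its graph Laplacian.
A_adj = np.array([[0, 0, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]], dtype=float)
L = np.diag(A_adj.sum(axis=1)) - A_adj

x = np.array([4.0, -1.0, 2.5, 0.5])   # each agent's local estimate of a global quantity
dt = 0.05                             # Euler step for dx/dt = -L x
for _ in range(2000):
    x = x - dt * (L @ x)
print(x)  # all entries agree (here on the average 1.5, since the ring is balanced)
```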
We consider an accelerated proximal gradient algorithm for composite optimization with independent errors (errors largely unrelated to historical information) for solving linear inverse problems. We present a new inexact version of the FISTA algorithm that accounts for deterministic and stochastic noise. We prove convergence rates for the algorithm and connect it with the existing catalyst framework used by many algorithms in machine learning. We show that a catalyst can be regarded as a special case of the FISTA algorithm in which the smooth part of the function vanishes. Our framework gives a more generic formulation that provides convergence results for the deterministic and stochastic noise cases as well as for the catalyst framework. Some of our results provide a simpler alternative analysis of existing results in the literature, while others extend the results to more generic situations.
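As a rough sketch of an inexact accelerated proximal gradient scheme (not the paper's analysis or algorithm), the code below runs standard FISTA on a lasso problem while perturbing each gradient with independent Gaussian noise; the noise level, problem, and parameter names are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def inexact_fista(A, b, lam=0.1, sigma=1e-3, iters=300, seed=0):
    """FISTA for lasso where each gradient evaluation carries an independent error."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the smooth part
    x_prev = np.zeros(n)
    y = np.zeros(n)
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b) + sigma * rng.standard_normal(n)   # independent error term
        x = soft_threshold(y - grad / L, lam / L)                   # inexact proximal gradient step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)                 # Nesterov extrapolation
        x_prev, t = x, t_next
    return x_prev
```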
In this paper, for two different forms of nonsmooth convex optimization problems, we investigate self-adaptive algorithms with inertial acceleration. First, we propose a self-adaptive proximal gradient algorithm with an inertial step and establish a strong convergence theorem under reasonable parameter choices. Second, we propose a self-adaptive split proximal algorithm with inertial acceleration and prove that it converges strongly under suitable conditions. Notably, both inertial algorithms are extended to multi-step inertial versions to accelerate convergence. Finally, numerical results illustrate the performance of our algorithms.
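The papers' precise self-adaptive rules are not reproduced here, but the following sketch conveys the general shape of an inertial proximal gradient method whose step size adjusts itself by backtracking, applied to a lasso objective; the extrapolation weight, backtracking factor, and test setting are assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def inertial_prox_grad(A, b, lam=0.1, theta=0.3, step0=1.0, shrink=0.5, iters=200):
    """Inertial proximal gradient for lasso with a self-adjusting (backtracking) step size."""
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    n = A.shape[1]
    x_prev = x = np.zeros(n)
    step = step0
    for _ in range(iters):
        y = x + theta * (x - x_prev)                 # single inertial extrapolation step
        grad = A.T @ (A @ y - b)
        while True:                                  # backtracking: shrink step until the quadratic bound holds
            x_new = soft_threshold(y - step * grad, lam * step)
            d = x_new - y
            if f(x_new) <= f(y) + grad @ d + d @ d / (2 * step) + 1e-12:
                break
            step *= shrink
        x_prev, x = x, x_new
    return x
```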
The polyhedral projection problem arises in various applications. To solve its dual problem efficiently, one of the crucial issues is to safely identify the zero elements, as well as the signs of the nonzero elements, of the optimal solution. In this paper, relying on the nonsmooth dual problem and active-set techniques, we first propose a Duality-Gap-Active-Set strategy (DGASS) to effectively identify the indices of the zero elements and the signs of the nonzero entries of the optimal solution. Serving as an efficient acceleration strategy, DGASS can be embedded into certain iterative methods. In particular, by applying DGASS to both the proximal gradient algorithm (PGA) and the proximal semismooth Newton algorithm (PSNA), we obtain the methods PGA-DGASS and PSNA-DGASS, respectively. Global convergence and local quadratic convergence rates are discussed. We report numerical results on both synthetic and real data sets to demonstrate the high efficiency of the two DGASS-accelerated methods.
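For orientation, the baseline setting can be sketched as follows (the DGASS screening itself is not reproduced; this is only the plain dual proximal gradient step under assumed notation): projecting c onto {x : Ax <= b} has the nonnegatively constrained dual min_{y >= 0} 0.5||A^T y||^2 - y^T(Ac - b), whose zero entries correspond to inactive constraints.

```python
import numpy as np

def project_polyhedron_dual(A, b, c, iters=5000):
    """Projected (proximal) gradient on the dual of: min 0.5||x - c||^2 s.t. Ax <= b."""
    m = A.shape[0]
    y = np.zeros(m)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        grad = A @ (A.T @ y - c) + b
        y = np.maximum(y - step * grad, 0.0)   # prox of the nonnegativity indicator
    return c - A.T @ y, y                      # primal recovery and dual multipliers

# project c = (2, 0.5) onto the unit box |x_i| <= 1
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
b = np.ones(4)
x, y = project_polyhedron_dual(A, b, np.array([2.0, 0.5]))
print(x)   # approximately (1, 0.5); only y[0] is nonzero (the active constraint x_1 <= 1)
```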
This paper proposes a generalized accelerated proximal gradient (GAPG) approach for solving total variation (TV)-based image restoration problems. The GAPG algorithm generalizes the original APG algorithm by replacing the Lipschitz constant with an appropriate positive-definite matrix, resulting in faster convergence. For TV-based image restoration problems, we further introduce two auxiliary variables that approximate the partial derivatives. Constraints on the variables can easily be imposed without modifying the algorithm much, and the TV regularization can be either isotropic or anisotropic. Compared with the recently developed APG-based methods for TV-based image restoration, i.e., the monotone version of the two-step iterative shrinkage/thresholding algorithm (MTwIST) and the monotone version of the fast IST algorithm (MFISTA), our GAPG is much simpler, as it does not require solving an image denoising subproblem. Moreover, the convergence rate of O(k^{-2}) is maintained by GAPG, where k is the number of iterations, and the cost of each iteration in GAPG is also lower. As a result, in our experiments, our GAPG approach can be much faster than MTwIST and MFISTA. The experiments also verify that GAPG converges faster than the original APG and MTwIST when solving identical problems.
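To illustrate the core idea of replacing the scalar Lipschitz constant with a positive-definite matrix (only an analogy under simplifying assumptions, not the paper's TV restoration algorithm), the sketch below runs accelerated proximal gradient on an l1-regularized least-squares problem with a diagonal majorizer P of A^T A, so the metric proximal step remains a componentwise soft threshold.

```python
import numpy as np

def gapg_diag_metric(A, b, lam=0.1, iters=300):
    """Accelerated proximal gradient with the Lipschitz constant replaced by a diagonal majorizer."""
    n = A.shape[1]
    M = A.T @ A
    P = np.sum(np.abs(M), axis=1)          # diagonal matrix with P >= A^T A in the Loewner order
    x_prev = np.zeros(n)
    y = np.zeros(n)
    t = 1.0
    for _ in range(iters):
        grad = A.T @ (A @ y - b)
        z = y - grad / P                   # gradient step in the P-weighted metric
        x = np.sign(z) * np.maximum(np.abs(z) - lam / P, 0.0)   # prox under the diagonal metric
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)
        x_prev, t = x, t_next
    return x_prev
```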
ISBN (digital): 9781510617421; ISBN (print): 9781510617421
This paper proposes a blockwise accelerated proximal gradient (BAPG) approach. It chooses a block-diagonal Lipschitz matrix in the generalized APG algorithm, such that the subproblems can be solved either by the fast Fourier transform (FFT) or in closed form. Experiments verify the substantial speed advantage of BAPG for total variation-based image restoration.
Many applications in machine learning and signal processing involve nonsmooth optimization problems. This nonsmoothness brings a low-dimensional structure to the optimal solutions. In this paper, we propose a randomized proximal gradient method harnessing this underlying structure. We introduce two key components: (i) a random subspace proximal gradient algorithm; and (ii) an identification-based sampling of the subspaces. Their interplay brings a significant performance improvement on typical learning problems in terms of dimensions explored.
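Roughly, the random-subspace component can be illustrated on lasso by updating only a randomly drawn coordinate block at each iteration; the identification-based sampling is omitted, and the block size, step size, and function name below are assumptions for the sketch.

```python
import numpy as np

def random_subspace_prox_grad(A, b, lam=0.1, block=10, iters=2000, seed=0):
    """Proximal gradient for lasso restricted, at each iteration, to a random coordinate subspace."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2
    for _ in range(iters):
        S = rng.choice(n, size=block, replace=False)     # random subspace (coordinate block)
        grad_S = A[:, S].T @ (A @ x - b)                 # gradient restricted to the block
        z = x[S] - grad_S / L
        x[S] = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # blockwise soft thresholding
    return x
```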
We consider the problem of minimizing a finite sum of convex functions over the set of minimizers of a convex differentiable function. To solve this problem, we propose an algorithm combining the incremental proximal gradient method with a smooth penalization technique. We show convergence of the generated sequence of iterates to an optimal solution of the optimization problem, provided that a condition expressed via the Fenchel conjugate of the constraint function is fulfilled. Finally, the functionality of the method is illustrated by numerical experiments addressing image inpainting problems and generalized Heron problems with least-squares constraints.
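In the spirit of the generalized Heron experiments, a toy version of such a scheme might look as follows (a sketch under ad hoc step-size and penalty schedules, not the paper's algorithm or conditions): minimize the sum of distances to given points, with the least-squares constraint handled by a smooth penalty whose weight grows across iterations.

```python
import numpy as np

def prox_dist(z, p, alpha):
    """Prox of alpha * ||. - p|| (distance to the point p): block soft thresholding."""
    d = z - p
    nrm = np.linalg.norm(d)
    return p.copy() if nrm <= alpha else p + (1.0 - alpha / nrm) * d

def incremental_penalized(points, C, d, iters=3000):
    """Incremental proximal gradient for min sum_i ||x - p_i|| over argmin 0.5||Cx - d||^2,
    with the constraint replaced by the smooth penalty beta_k * 0.5||Cx - d||^2."""
    x = np.zeros(C.shape[1])
    for k in range(1, iters + 1):
        alpha = 1.0 / (k + 1)           # diminishing step size
        beta = np.sqrt(k)               # growing penalty weight
        for p in points:                # one incremental pass over the components
            x = x - alpha * beta * (C.T @ (C @ x - d))   # gradient step on the penalty term
            x = prox_dist(x, p, alpha)                   # proximal step on ||x - p_i||
    return x

points = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([-1.0, 1.0])]
C = np.array([[1.0, 1.0]]); d = np.array([0.0])          # constraint set: the line x_1 + x_2 = 0
print(incremental_penalized(points, C, d))
```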
The L-band digital aeronautical communication system 1 (L-DACS1) is a promising candidate data link for future air-ground communication, but it suffers severe interference from the pulse pairs (PPs) generated by distance measuring equipment. A novel PP mitigation approach is proposed in this paper. First, a deformed PP detection (DPPD) method that combines a filter bank, correlation detection, and rescanning is proposed to detect the deformed PPs (DPPs) caused by the multiple filters in the receiver. Second, a finite impulse response (FIR) model is used to approximate the overall characteristic of the filters, so that the waveform of a DPP can be obtained from the original PP waveform and the FIR model. Finally, sparse representation is used to estimate the position and amplitude of each DPP and then reconstruct it; the reconstructed DPPs are subtracted from the contaminated signal to mitigate the interference. Numerical experiments show that the bit error rate performance of our approach is about 5 dB better than that of recent works and is closer to the interference-free case.
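A toy one-dimensional version of the final step (estimate pulse-pair positions and amplitudes via a sparse representation over shifted templates, then subtract them) is sketched below; the Gaussian pulse shape, dictionary construction, and greedy matching-pursuit solver are simplifying assumptions, not the paper's DPPD/FIR modelling.

```python
import numpy as np

def pulse_pair(n, center, spacing=12, width=3.0):
    """Toy DME-like pulse pair: two Gaussian pulses separated by `spacing` samples."""
    t = np.arange(n)
    return (np.exp(-0.5 * ((t - center) / width) ** 2)
            + np.exp(-0.5 * ((t - center - spacing) / width) ** 2))

n = 512
D = np.stack([pulse_pair(n, c) for c in range(n - 16)], axis=1)   # dictionary of shifted pairs
D /= np.linalg.norm(D, axis=0)

rng = np.random.default_rng(1)
signal = 0.2 * rng.standard_normal(n)                  # desired signal, modelled here as noise
interference = 3.0 * pulse_pair(n, 100) + 2.0 * pulse_pair(n, 300)
received = signal + interference

# matching pursuit: greedily estimate the position and amplitude of each pulse pair
residual = received.copy()
estimate = np.zeros(n)
for _ in range(2):                                     # assume at most two interfering pairs
    corr = D.T @ residual
    j = np.argmax(np.abs(corr))                        # best-matching shift
    estimate += corr[j] * D[:, j]
    residual -= corr[j] * D[:, j]

cleaned = received - estimate                          # subtract the reconstructed interference
print(np.linalg.norm(cleaned - signal) / np.linalg.norm(received - signal))
```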