Proximal gradient algorithms are widely used for convex optimization with nonsmooth regularization. Obtaining the exact solution of the proximal operator for nonsmooth regularization is challenging because errors exist in the computation of the gradient; consequently, the design and application of inexact proximal gradient algorithms have attracted considerable attention from researchers. This paper proposes computationally efficient basic and inexact proximal gradient descent algorithms with random reshuffling. The proposed stochastic algorithms process randomly reshuffled data through successive gradient descent steps and apply only one proximal operator after each full pass over the data. We prove convergence of the proposed proximal gradient algorithms under the sampling-without-replacement reshuffling scheme. When computational errors exist in the gradients and proximal operations, the proposed inexact proximal gradient algorithms converge to a neighborhood of an optimal solution. Finally, we apply the proposed algorithms to compressed sensing and compare their efficiency with some popular algorithms.
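As a rough illustration of the reshuffling scheme this abstract describes (a minimal sketch under our own assumptions, not the paper's implementation), the following Python code applies the idea to a LASSO-type objective; the step size and the per-epoch prox scaling are illustrative choices.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gd_random_reshuffle(A, b, lam, step, epochs, seed=None):
    """Minimize (1/(2n)) * ||A x - b||^2 + lam * ||x||_1 with random
    reshuffling: shuffle the samples each epoch, take one gradient step
    per sample, and apply a single proximal operator per epoch."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):              # sampling without replacement
            x -= step * (A[i] @ x - b[i]) * A[i]  # gradient step on the i-th term
        # One prox after the full pass; scaling its parameter by n * step
        # over the n inner steps is an illustrative choice, not the paper's.
        x = soft_threshold(x, step * n * lam)
    return x
```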
This paper is concerned with the analysis of geometrical properties and behaviors of the optimal value and global optimal solutions for a class of nonsmooth optimization problems. We provide conditions under which the solution set of a nonsmooth and nonconvex optimization problem is non-empty and/or compact. We also examine related properties, such as the compactness of the sublevel sets, boundedness from below, and coercivity of the objective function, to characterize the non-emptiness and compactness of the solution set even when the associated feasible set is unbounded.
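For reference, the classical Weierstrass-type fact linking these notions (standard background, not a result of this paper) can be stated as follows:

```latex
% f : \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\} proper and lower semicontinuous.
% If f is coercive, i.e.
\lim_{\|x\| \to \infty} f(x) = +\infty,
% then every sublevel set
\{\, x \in \mathbb{R}^n : f(x) \le \alpha \,\}
% is compact, f is bounded from below, and
\operatorname*{arg\,min}_{x \in \mathbb{R}^n} f(x) \ne \emptyset.
```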
An inexact alternating direction method of multipliers (I-ADMM) with an expansion linesearch step was developed for solving a family of separable minimization problems subject to linear constraints, where the objective function is the sum of a smooth but possibly nonconvex function and a possibly nonsmooth nonconvex function. Global convergence and a linear convergence rate of the I-ADMM were established under suitable conditions, while an inexact relative-error criterion was used for solving the subproblems. In addition, a unified proximal gradient (UPG) method with momentum acceleration was proposed for solving the smooth but possibly nonconvex subproblem. This UPG method guarantees global convergence and automatically reduces to an optimal accelerated gradient method when the smooth function in the objective is convex. Our numerical experiments on nonconvex quadratic programming problems and sparse optimization problems from statistical learning show that the proposed I-ADMM is very effective compared with other state-of-the-art algorithms in the literature.
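For orientation, the basic two-block ADMM iteration that I-ADMM builds on can be sketched as follows (our generic skeleton: the paper's expansion linesearch and relative-error inexactness tests are omitted, and the penalty parameter is assumed to be captured inside the subproblem oracles).

```python
import numpy as np

def admm(argmin_x, argmin_z, A, B, c, iters=100):
    """Generic two-block ADMM in scaled form for
        min f(x) + g(z)   s.t.   A x + B z = c.
    argmin_x(z, u) and argmin_z(x, u) solve the two subproblems
    (exactly or inexactly); u is the scaled dual variable."""
    x = np.zeros(A.shape[1])
    z = np.zeros(B.shape[1])
    u = np.zeros(A.shape[0])
    for _ in range(iters):
        x = argmin_x(z, u)          # f-subproblem (may be solved inexactly)
        z = argmin_z(x, u)          # g-subproblem (may be solved inexactly)
        u += A @ x + B @ z - c      # scaled dual update
    return x, z, u
```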
In this paper we present GSSN, a globalized SCD semismooth* Newton method for solving nonsmooth nonconvex optimization problems. The global convergence properties of the method are ensured by the proximal gradient method, whereas locally superlinear convergence is established via the SCD semismooth* Newton method under quite weak assumptions. The Newton direction is based on the SC (subspace containing) derivative of the subdifferential mapping and can be computed by the (approximate) solution of an equality-constrained quadratic program. Special attention is given to the efficient numerical implementation of the overall method.
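The globalization pattern, in which a proximal gradient step safeguards a Newton-type candidate, can be sketched as below (a toy in our notation: GSSN's actual direction comes from an equality-constrained QP built on the SC derivative, and its acceptance test may differ).

```python
import numpy as np

def globalized_newton_step(x, phi, grad_f, prox_g, newton_dir, step, sigma=1e-4):
    """One iteration of a prox-gradient-safeguarded Newton scheme (toy).
    phi: full objective f + g; grad_f: gradient of the smooth part;
    prox_g(v, t): proximal operator of t*g; newton_dir(x): a Newton-type
    direction (not the SC-derivative-based direction of GSSN)."""
    x_pg = prox_g(x - step * grad_f(x), step)   # globally convergent safeguard
    x_nt = x + newton_dir(x)                    # fast local candidate
    # Accept the candidate only if it sufficiently improves on the safeguard
    # (one common acceptance pattern; the paper's test may differ).
    if phi(x_nt) <= phi(x_pg) - sigma * np.linalg.norm(x_nt - x) ** 2:
        return x_nt
    return x_pg
```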
In this paper, we propose a generalized framework for developing learning-rate-free momentum stochastic gradient descent (SGD) methods in the minimization of nonsmooth nonconvex functions, especially in training nonsmooth neural networks. Our framework adaptively generates learning rates based on the historical data of stochastic subgradients and iterates. Under mild conditions, we prove that our proposed framework enjoys global convergence to the stationary points of the objective function in the sense of the conservative field, hence providing convergence guarantees for training nonsmooth neural networks. Based on our proposed framework, we propose a novel learning-rate-free momentum SGD method (LFM). Preliminary numerical experiments reveal that LFM performs comparably to the state-of-the-art learning-rate-free methods (which have not been shown theoretically to be convergent) across well-known neural network training benchmarks.
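As a generic stand-in (not the paper's LFM update), the following sketch shows one well-known way to generate step sizes from the history of stochastic subgradients, AdaGrad-norm style, combined with heavy-ball momentum.

```python
import numpy as np

def adaptive_momentum_sgd(sub_grad, x0, beta=0.9, eps=1e-8, iters=1000):
    """Momentum SGD whose step sizes are generated from the history of
    stochastic subgradients (AdaGrad-norm style) -- a generic stand-in
    illustrating the idea, not the LFM method proposed in the paper."""
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)
    acc = eps                                # accumulated squared subgradient norms
    for _ in range(iters):
        g = sub_grad(x)                      # stochastic subgradient oracle
        m = beta * m + (1.0 - beta) * g      # heavy-ball momentum averaging
        acc += float(np.dot(g, g))
        x = x - m / np.sqrt(acc)             # step size adapts to gradient history
    return x
```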
We apply nonsmooth optimization techniques to classification problems, with particular reference to the Transductive Support Vector Machine (TSVM) approach, where the considered decision function is nonconvex and nondifferentiable, and hence difficult to minimize. We present numerical results obtained by running the proposed method on standard test problems drawn from the binary classification literature.
This paper proposes a way to combine the Mesh Adaptive Direct Search (MADS) algorithm, which extends the Generalized Pattern Search (GPS) algorithm, with the Variable Neighborhood Search (VNS) metaheuristic for nonsmooth constrained optimization. The resulting algorithm retains the convergence properties of MADS and uses the far-reaching exploration features of VNS to move away from local solutions. The paper also proposes a generic way to use surrogate functions in the VNS search. Numerical results illustrate the advantages and limitations of this method.
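A toy sketch of the combination follows (our simplification: MADS uses a richer set of poll directions and a mesh update that preserves its convergence theory, neither of which is reproduced here).

```python
import numpy as np

def poll(f, x, delta):
    """Evaluate f at coordinate poll points on the current mesh."""
    best, fbest = x, f(x)
    for i in range(len(x)):
        for s in (delta, -delta):
            y = x.copy()
            y[i] += s
            fy = f(y)
            if fy < fbest:
                best, fbest = y, fy
    return best, fbest

def direct_search_vns(f, x0, delta=1.0, shake=1.0, iters=100, seed=None):
    """Toy direct search with a VNS-style shake step (illustrative only)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y, fy = poll(f, x, delta)
        if fy < f(x):
            x, shake = y, 1.0      # poll success: move and reset neighborhood
        else:
            delta *= 0.5           # poll failure: refine the mesh
            z = x + shake * rng.standard_normal(len(x))  # VNS shake
            if f(z) < f(x):
                x = z              # escaped toward a better region
            shake *= 2.0           # enlarge the neighborhood next time
    return x
```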
Multiband frequency domain synthesis consists in the minimization of a finite family of closed-loop transfer functions on prescribed frequency intervals. This is an algorithmically difficult problem due to its inherent nonsmoothness and nonconvexity. We extend our previous work on nonsmooth H-infinity synthesis to develop a nonsmooth optimization technique to compute local solutions to multiband synthesis problems. The proposed method is shown to perform well on illustrative examples.
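In rough terms (our notation, not necessarily the paper's), the multiband problem reads

```latex
\min_{K \in \mathcal{K}} \;\; \max_{i = 1,\dots,N} \;
\sup_{\omega \in [\underline{\omega}_i,\, \overline{\omega}_i]}
\bar{\sigma}\big( T_i(K, j\omega) \big),
```

where K is the controller, the T_i are the closed-loop transfer channels, the intervals are the prescribed frequency bands, and σ̄ denotes the largest singular value; the max/sup structure is precisely the source of the nonsmoothness mentioned above.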
Using a nonconvex nonsmooth optimization approach, we introduce a model for semisupervised clustering (SSC) with pairwise constraints. In this model, the objective function is represented as a sum of three terms: the first term reflects the clustering error for unlabeled data points, the second term expresses the error for data points with must-link (ML) constraints, and the third term represents the error for data points with cannot-link (CL) constraints. This function is nonconvex and nonsmooth. To find its optimal solutions, we introduce an adaptive SSC (A-SSC) algorithm. This algorithm is based on the combination of a nonsmooth optimization method and an incremental approach, which involves the auxiliary SSC problem. The algorithm constructs clusters incrementally, starting from one cluster and adding one cluster center at each iteration. The solutions to the auxiliary SSC problem are utilized as starting points for solving the nonconvex SSC problem. The discrete gradient method (DGM) of nonsmooth optimization is applied to solve the underlying nonsmooth optimization problems. This method does not require subgradient evaluations and uses only function values. The performance of the A-SSC algorithm is evaluated and compared with four benchmark SSC algorithms on one synthetic and 12 real-world datasets. Results demonstrate that the proposed algorithm outperforms the other four algorithms in identifying compact and well-separated clusters while satisfying most constraints.
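Schematically (our notation; the concrete penalty terms e_ML and e_CL are left abstract here), the objective over cluster centers x_1, …, x_k has the form

```latex
f(x_1,\dots,x_k) \;=\;
\sum_{a \in U} \min_{1 \le j \le k} \|x_j - a\|^2
\;+\; \gamma_1 \sum_{(a,b) \in \mathrm{ML}} e_{\mathrm{ML}}(a,b;\,x)
\;+\; \gamma_2 \sum_{(a,b) \in \mathrm{CL}} e_{\mathrm{CL}}(a,b;\,x),
```

where U is the set of unlabeled points and γ₁, γ₂ weight the must-link and cannot-link violation terms; the inner min over centers already makes the first term nonconvex and nonsmooth.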
It is known that the design of optimal transmit beamforming vectors for cognitive radio multicast transmission can be formulated as indefinite quadratic optimization programs. Given the challenges of such nonconvex problems, the conventional approach in the literature is to recast them as convex semidefinite programs (SDPs) together with rank-one constraints. These nonconvex and discontinuous constraints are then dropped, yielding a pool of relaxed candidate solutions, from which various randomization techniques are applied in the hope of recovering the optimal solutions. However, this approach has been shown to deliver unsatisfactory outcomes in many practical settings, where the determined solutions are unacceptably far from actual optimality. In this contribution, by contrast, we tackle the aforementioned optimal beamforming problems differently by representing them as SDPs with additional reverse convex (but continuous) constraints. Nonsmooth optimization algorithms are then proposed to locate the optimal solutions of such design problems efficiently. Our thorough numerical examples verify that the proposed algorithms offer near-global optimality while requiring a relatively low computational load.
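One standard way to express the dropped rank-one constraint as a continuous reverse convex constraint (our rendering; the paper's exact formulation may differ) is

```latex
\mathbf{X} \succeq 0, \qquad
\operatorname{trace}(\mathbf{X}) - \lambda_{\max}(\mathbf{X}) \;\le\; 0
\quad\Longleftrightarrow\quad
\operatorname{rank}(\mathbf{X}) \le 1,
```

since trace(X) ≥ λ_max(X) holds for every X ⪰ 0, with equality exactly when at most one eigenvalue is nonzero. Because λ_max is convex, the inequality carves out the complement of a convex set, which is reverse convex yet continuous, in contrast to the discontinuous rank constraint.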