We introduce two new algorithms to minimise smooth difference-of-convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms are based on a combination of DCA together with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the Łojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Łojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperform DCA, being on average more than four times faster in both computational time and number of iterations. Numerical experiments show that the algorithms are globally convergent to a non-equilibrium steady state of various biochemical networks, with only chemically consistent restrictions on the network topology.
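To make the boosted-DCA idea concrete, here is a minimal sketch assuming a smooth DC function f = g - h with both components and their gradients supplied by the caller; all names are illustrative, and the DCA subproblem is solved numerically even though in structured applications it often has a closed form:

```python
import numpy as np
from scipy.optimize import minimize

def bdca(g, grad_g, h, grad_h, x0, tbar=2.0, sigma=0.1, beta=0.5,
         tol=1e-8, max_iter=200):
    """Boosted-DCA sketch for a smooth DC function f = g - h (g, h convex)."""
    f = lambda z: g(z) - h(z)
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        gh = grad_h(x)
        # DCA step: y = argmin_z g(z) - <grad_h(x), z>, a convex subproblem.
        y = minimize(lambda z: g(z) - gh @ z, x,
                     jac=lambda z: grad_g(z) - gh).x
        d = y - x                      # descent direction for f at y
        if np.linalg.norm(d) < tol:
            return y
        # Armijo-type backtracking line search along d, starting from tbar.
        t = tbar
        while f(y + t * d) > f(y) - sigma * t**2 * (d @ d) and t > 1e-12:
            t *= beta
        x = y + t * d                  # the boosted (accelerated) iterate
    return x

# Toy DC function: f(x) = sum(x_i^4) - sum(x_i^2), both components convex.
g = lambda z: np.sum(z**4); grad_g = lambda z: 4 * z**3
h = lambda z: z @ z;        grad_h = lambda z: 2 * z
print(bdca(g, grad_g, h, grad_h, x0=np.array([3.0, -2.0])))
```

The key design point is that the line search starts from the DCA point y rather than from x, so every accepted step can only improve on plain DCA.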
We introduce a new approach to apply the boosted difference of convex functions algorithm (BDCA) for solving non-convex and non-differentiable problems involving the difference of two convex functions (DC functions). Supposing the first DC component is differentiable and the second possibly non-differentiable, the main idea of BDCA is to use the point computed by the subproblem of the DC algorithm (DCA) to define a descent direction of the objective from that point; a monotone line search starting from it is then performed in order to find a new point whose objective value is smaller than that of the point generated by the DCA subproblem. This procedure improves the performance of DCA. However, if the first DC component is non-differentiable, then the direction computed by BDCA can be an ascent direction and a monotone line search cannot be performed. Our approach uses a non-monotone line search in the BDCA (nmBDCA) to enable a possible growth in the objective function values controlled by a parameter. Under suitable assumptions, we show that any cluster point of the sequence generated by nmBDCA is a critical point of the problem under consideration, and we provide some iteration-complexity bounds. Furthermore, if the first DC component is differentiable, we present different iteration-complexity bounds and prove the full convergence of the sequence under the Kurdyka-Łojasiewicz property of the objective function. Numerical experiments show that nmBDCA outperforms DCA as well as its monotone version.
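A minimal sketch of the non-monotone acceptance rule, reusing the shapes of the BDCA sketch above; the parameter names and fallback behaviour are assumptions, not the paper's exact specification:

```python
import numpy as np

def nonmonotone_search(f, y, d, nu_k, tbar=2.0, sigma=0.1, beta=0.5):
    """Non-monotone Armijo-type search in the spirit of nmBDCA (sketch).

    The monotone BDCA rule f(y + t d) <= f(y) - sigma*t^2*||d||^2 is
    relaxed by a tolerance nu_k >= 0, so the objective may grow by up to
    nu_k per iteration.  This matters when d may fail to be a descent
    direction (non-differentiable first DC component).  Taking nu_k
    summable, e.g. nu_k = nu_0 * 0.5**k, keeps the total allowed growth
    finite, which is what convergence analyses of this type rely on.
    """
    t, dd = tbar, d @ d
    while f(y + t * d) > f(y) + nu_k - sigma * t**2 * dd:
        t *= beta
        if t < 1e-12:          # no acceptable step: keep the DCA point y
            return 0.0
    return t
```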
Linear discriminant analysis (LDA) has attracted much attention as a classical tool for both classification and dimensionality reduction. Classical LDA performs quite well in simple, low-dimensional settings, but it is not suitable for small-sample-size (SSS) data. Feature selection is an effective way to address this problem. As a variant of LDA, sparse optimal scoring (SOS) with ℓ0-norm regularization is considered in this paper. By using a new continuous nonconvex nonsmooth function to approximate the ℓ0-norm, we propose a novel difference of convex functions algorithm (DCA) for sparse optimal scoring. The most favorable property of the proposed DCA is that its subproblem admits an analytical solution. The effectiveness of the proposed method is validated via theoretical analysis as well as illustrative numerical experiments.
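The paper's own surrogate and SOS model are not reproduced here, but the mechanism of a DCA whose subproblem is analytical can be illustrated with the standard capped-ℓ1 surrogate on a least-squares fit; everything below is a stand-in sketch under that assumption:

```python
import numpy as np

def soft(z, tau):
    """Soft-thresholding: the analytical prox of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def dca_capped_l1(A, b, lam=0.1, theta=5.0, max_iter=500, tol=1e-8):
    """Prox-linear DCA sketch for an l0-surrogate least-squares problem.

    The l0 norm is approximated by the capped-l1 function
        min(theta*|t|, 1) = theta*|t| - max(theta*|t| - 1, 0),
    a DC decomposition whose DCA subproblem, after a quadratic
    majorization of the fit term, reduces to one soft-thresholding.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the fit term
    for _ in range(max_iter):
        # Subgradient of the subtracted convex part lam*sum max(theta|x_i|-1, 0).
        xi = lam * theta * np.sign(x) * (theta * np.abs(x) > 1.0)
        grad = A.T @ (A @ x - b)         # gradient of 0.5*||Ax - b||^2
        # Linearize the concave part, majorize the fit term: analytic update.
        x_new = soft(x - (grad - xi) / L, lam * theta / L)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```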
This paper considers DC (difference-of-convex functions) algorithms in a Hilbert space setting and discusses their convergence. Applied to indefinite quadratic programs under linear constraints in Hilbert spaces, the DCA yields, among other things, two basic types of iteration algorithms, called the Projection DC decomposition algorithm and the Proximal DC decomposition algorithm. It is proved that, under some mild conditions, any iterative sequence generated by either of these algorithms converges R-linearly to a Karush-Kuhn-Tucker point of the quadratic programming problem.
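A finite-dimensional sketch of the projection-type decomposition, assuming a box constraint set so that the projection is a simple clip; the split constant rho and the starting point are illustrative choices:

```python
import numpy as np

def projection_dca_qp(Q, q, lo, hi, rho=None, max_iter=1000, tol=1e-9):
    """Projection DC decomposition sketch: min 0.5 x'Qx + q'x over a box.

    DC split f = [rho/2*||x||^2 + indicator_C] - [rho/2*||x||^2 - f(x)],
    valid whenever rho*I - Q is positive semidefinite.  Each DCA
    iteration then collapses to one projected step; the box projection
    (np.clip) stands in for the projection onto a general set C.
    """
    if rho is None:
        rho = np.linalg.norm(Q, 2) + 1.0      # makes rho*I - Q >= 0
    x = np.clip(np.zeros(Q.shape[0]), lo, hi)
    for _ in range(max_iter):
        x_new = np.clip(x - (Q @ x + q) / rho, lo, hi)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Indefinite example: Q has a negative eigenvalue, so the QP is nonconvex.
Q = np.array([[2.0, 0.0], [0.0, -1.0]])
q = np.array([-1.0, 0.5])
print(projection_dca_qp(Q, q, lo=-1.0, hi=1.0))   # a KKT point of the QP
```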
We consider a class of generalized DC (difference-of-convex functions) programming, which refers to the problem of minimizing the sum of two convex (possibly nonsmooth) functions minus a smooth convex function. To efficiently exploit the structure of the problem under consideration, we introduce a unified Douglas-Rachford method in Hilbert space. As an interesting byproduct of the unified framework, we can easily show that our proposed algorithm is able to deal with convex composite optimization models. Despite the nonconvexity of DC programming, we prove that the proposed method converges to a critical point of the problem under some assumptions. Finally, we demonstrate numerically that our proposed algorithm performs better than the state-of-the-art DC algorithm and the alternating direction method of multipliers (ADMM) on DC regularized sparse recovery problems.
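One plausible way to weave the gradient of the smooth concave part into a Douglas-Rachford iteration is sketched below, instantiated on an ℓ1 - ℓ2 regularized sparse-recovery model; this is not necessarily the paper's unified scheme, but a fixed point x = y of the loop does satisfy 0 ∈ ∇f(y) + ∂g(y) - ∇h(y), i.e. criticality:

```python
import numpy as np

def dr_dc(A, b, lam=0.2, gamma=0.5, max_iter=1000, tol=1e-8):
    """Douglas-Rachford-type sketch for min f(x) + g(x) - h(x).

    Instantiated with f = 0.5*||Ax - b||^2, g = lam*||x||_1 and
    h = lam*||x||_2 (the l1 - l2 sparse-recovery regularizer).
    """
    n = A.shape[1]
    M = np.eye(n) + gamma * (A.T @ A)   # prox of the quadratic f = a solve
    Atb = A.T @ b
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    grad_h = lambda v: lam * v / max(np.linalg.norm(v), 1e-12)
    z = np.zeros(n)
    x = z
    for _ in range(max_iter):
        y = np.linalg.solve(M, z + gamma * Atb)            # prox_{gamma*f}(z)
        x = soft(2*y - z + gamma * grad_h(y), gamma*lam)   # prox_{gamma*g}
        if np.linalg.norm(x - y) < tol:
            return x
        z = z + x - y                   # standard DR reflection update
    return x
```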
Some new properties of the Projection DC decomposition algorithm (we call it algorithm A) and the Proximal DC decomposition algorithm (we call it algorithm B) of Pham Dinh et al. [Optim Methods Softw 23(4): 609-629 (2008)] for solving the indefinite quadratic programming problem under linear constraints are proved in this paper. Among other things, we show that DCA sequences generated by algorithm A converge to a locally unique solution if the initial points are taken from a neighborhood of it, and that DCA sequences generated by either algorithm A or algorithm B are bounded if a condition guaranteeing solution existence for the given problem is satisfied.
In this paper, we investigate the use of DC (difference of convex functions) models and algorithms in the application of trust-region methods to a class of nonlinear optimization problems where the constraint set is closed and convex (and, from a practical point of view, where projecting onto the feasible region is computationally affordable). We consider DC local models for the quadratic model of the objective function used to compute the trust-region step, and apply a primal-dual subgradient method to the solution of the corresponding trust-region subproblems. We prove that the resulting scheme is globally convergent to first-order stationary points. The theory requires exact second-order derivatives but, in return, the computation of the trust-region step requires only one projection onto the feasible region (whereas the calculation of the generalized Cauchy point may require more). The numerical efficiency and robustness of the proposed scheme applied to bound-constrained problems are measured by comparing its performance against some of the current state-of-the-art nonlinear programming solvers on a large collection of test problems.
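The abstract does not spell out which DC model is used; a natural example is the eigenvalue split of the model Hessian, sketched here as one of several possibilities:

```python
import numpy as np

def dc_split_quadratic(H):
    """One possible DC splitting of an indefinite model Hessian (sketch).

    Splitting H = H_plus - H_minus with both parts positive semidefinite
    turns the trust-region quadratic model g's + 0.5 s'Hs into the DC
    model (g's + 0.5 s'H_plus s) - (0.5 s'H_minus s).  The eigenvalue
    split below is only one choice; the paper does not prescribe it.
    """
    w, V = np.linalg.eigh(H)
    H_plus = V @ np.diag(np.maximum(w, 0.0)) @ V.T
    H_minus = V @ np.diag(np.maximum(-w, 0.0)) @ V.T
    return H_plus, H_minus
```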
In this work, two novel formulations for embedded feature selection are presented. A second-order cone programming approach for Support Vector Machines is extended by adding a second regularizer to encourage feature elimination. The one-norm and zero-norm penalties are used in combination with Tikhonov regularization under a robust setting designed to correctly classify instances, up to a predefined error rate, even under the worst data distribution. The use of the zero norm leads to a nonconvex formulation, which is solved using difference of convex (DC) functions, extending DC programming to second-order cones. Experiments on high-dimensional microarray datasets show that our approaches outperform well-known feature selection methods for Support Vector Machines.
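How DC programming tames the zero norm can be illustrated with the standard exponential surrogate; the paper's exact surrogate may differ, and the helper below is hypothetical:

```python
import numpy as np

def reweighted_l1_weights(w, alpha=5.0):
    """DCA view of a zero-norm surrogate as reweighted l1 (sketch).

    A common concave surrogate is ||w||_0 ~ sum(1 - exp(-alpha*|w_i|)),
    which admits the DC split
        alpha*||w||_1 - [alpha*||w||_1 - sum(1 - exp(-alpha*|w_i|))].
    Linearizing the surrogate at the current w leaves a weighted l1
    penalty with the weights below, so each subproblem remains a convex
    second-order cone program.
    """
    return alpha * np.exp(-alpha * np.abs(w))
```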
Motivated by a class of applied problems arising from physical-layer-based security in digital communication systems, in particular by a secrecy sum-rate maximization problem, this paper studies a nonsmooth difference-of-convex (DC) minimization problem. The contributions of this paper are: (i) to clarify several kinds of stationary solutions and their relations; (ii) to develop and establish the convergence of a novel algorithm for computing a d-stationary solution of a problem with a convex feasible set, arguably the sharpest kind among the various stationary solutions; (iii) to extend the algorithm in several directions, including a randomized choice of the subproblems that could help the practical convergence of the algorithm, a distributed penalty approach for problems whose objective functions are sums of DC functions, and problems with a specially structured (nonconvex) DC constraint. For the latter class of problems, a pointwise Slater constraint qualification is introduced that facilitates the verification and computation of a B(ouligand)-stationary point.
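For objectives of the form f = g - max_i h_i, d-stationarity algorithms of this kind enumerate the near-active pieces of the max rather than picking a single subgradient; the sketch below is in that spirit, with all names and the proximal weight c being illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def d_stationary_step(g, h_list, grad_h_list, x, eps=1e-3, c=1.0):
    """One iteration sketch for f = g - max_i h_i (g convex, h_i smooth
    convex): linearize every eps-active piece h_i separately, solve one
    convex proximal subproblem per piece, and keep the best candidate.
    """
    hvals = np.array([h(x) for h in h_list])
    active = np.where(hvals >= hvals.max() - eps)[0]
    best, best_val = x, np.inf
    for i in active:
        gi = grad_h_list[i](x)
        # Convex subproblem: g minus the linearization of h_i, plus prox term.
        sub = lambda z: g(z) - gi @ (z - x) + 0.5 * c * ((z - x) @ (z - x))
        cand = minimize(sub, x).x
        val = g(cand) - max(h(cand) for h in h_list)
        if val < best_val:
            best, best_val = cand, val
    return best
```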
In this article, we study a merit function based on sub-additive functions for solving the non-linear complementarity problem (NCP). This leads us to consider an optimization problem that is equivalent to the NCP. In the case of a concave NCP, this optimization problem is a difference of convex (DC) program, and we can therefore use the DC algorithm (DCA) to solve it locally. We prove that, in the case of a concave monotone NCP, it is sufficient to compute a stationary point of the optimization problem to obtain a solution of the complementarity problem. In the case of a general NCP, assuming that a DC decomposition of the complementarity problem is known, we propose a penalization technique to reformulate the optimization problem as a DC program, and we prove that local minima of this penalized problem are also local minima of the merit problem. Numerical results on linear complementarity problems, absolute value equations and non-linear complementarity problems show that our method is promising.
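The paper works with a general family of sub-additive functions; the sketch below evaluates one concrete member of that style, assuming f(t) = sqrt(t), to show how sub-additivity yields a merit function that vanishes exactly at NCP solutions:

```python
import numpy as np

def merit_subadditive(x, Fx):
    """Merit-function sketch for the NCP: x >= 0, F(x) >= 0, x'F(x) = 0.

    With the subadditive choice f(t) = sqrt(t),
        theta(x) = sum_i [ f(x_i) + f(F_i(x)) - f(x_i + F_i(x)) ].
    Subadditivity makes every term nonnegative on the feasible region,
    and sqrt(a) + sqrt(b) = sqrt(a + b) holds iff a*b = 0, so theta
    vanishes exactly at solutions of the complementarity problem.
    """
    x, Fx = np.maximum(x, 0.0), np.maximum(Fx, 0.0)   # guard the sqrt domain
    return float(np.sum(np.sqrt(x) + np.sqrt(Fx) - np.sqrt(x + Fx)))
```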