The boosted difference of convex functions algorithm (BDCA) was recently proposed for minimizing smooth difference of convex (DC) functions. BDCA accelerates the convergence of the classical difference of convex functions algorithm (DCA) thanks to an additional line search step. The purpose of this paper is twofold. First, we show that this scheme can be generalized and successfully applied to certain types of nonsmooth DC functions, namely, those that can be expressed as the difference of a smooth function and a possibly nonsmooth one. Second, we show that there is complete freedom in the choice of the trial step size for the line search, which can further improve its performance. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration and that the corresponding objective value is monotonically decreasing and convergent. The global convergence and convergence rate of the iterations are obtained under the Kurdyka-Łojasiewicz property. Applications and numerical experiments for two problems in data science are presented, demonstrating that BDCA outperforms DCA. Specifically, for the minimum sum-of-squares clustering problem, BDCA was on average 16 times faster than DCA, and for the multidimensional scaling problem, BDCA was 3 times faster than DCA.
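A minimal Python sketch of the BDCA iteration described in this abstract, under illustrative assumptions: the DC decomposition (a smooth g and a nonsmooth h built from random data), the use of SciPy's general-purpose minimizer for the convex subproblem, and the parameter values are choices made here for illustration, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative DC decomposition phi = g - h with g smooth and h nonsmooth
# (random data; not an example from the paper).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)

def g(x):   # smooth convex component
    return 0.5 * x @ x + np.sum((A @ x - b) ** 2)

def h(x):   # nonsmooth convex component (l1 norm)
    return np.sum(np.abs(x))

def phi(x):
    return g(x) - h(x)

def bdca_step(x, alpha=0.1, lam_trial=2.0, beta=0.5):
    """One DCA subproblem followed by the BDCA backtracking line search."""
    u = np.sign(x)                               # a subgradient of h at x
    y = minimize(lambda z: g(z) - u @ z, x).x    # convex DCA subproblem
    d = y - x                                    # descent direction for phi at y
    lam = lam_trial                              # trial step size (free choice)
    while lam > 1e-12 and phi(y + lam * d) > phi(y) - alpha * lam**2 * (d @ d):
        lam *= beta                              # backtrack until sufficient decrease
    return y + lam * d

x = np.ones(5)
for _ in range(50):
    x = bdca_step(x)
```

The trial step size `lam_trial` may be chosen anew at every iteration, which is the freedom highlighted as the second contribution of the abstract.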
We introduce a new approach to apply the boosted difference of convex functions algorithm (BDCA) for solving non-convex and non-differentiable problems involving the difference of two convex functions (DC functions). Supposing that the first DC component is differentiable and the second one is possibly non-differentiable, the main idea of BDCA is to use the point computed by the subproblem of the DC algorithm (DCA) to define a descent direction of the objective from that point; a monotone line search is then performed from it to find a new point whose objective value is lower than that of the point generated by the DCA subproblem. This procedure improves the performance of the DCA. However, if the first DC component is non-differentiable, then the direction computed by BDCA can be an ascent direction and a monotone line search cannot be performed. Our approach uses a non-monotone line search in the BDCA (nmBDCA) to enable a possible growth in the objective function values controlled by a parameter. Under suitable assumptions, we show that any cluster point of the sequence generated by the nmBDCA is a critical point of the problem under consideration, and we provide some iteration-complexity bounds. Furthermore, if the first DC component is differentiable, we present different iteration-complexity bounds and prove the full convergence of the sequence under the Kurdyka-Łojasiewicz property of the objective function. Some numerical experiments show that the nmBDCA outperforms the DCA, as does its monotone version.
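A minimal sketch of the non-monotone acceptance test described in this abstract, based only on the abstract itself: the function name, the parameters, and the quadratic sufficient-decrease term are assumptions made for illustration, not the authors' exact rule.

```python
import numpy as np

def nonmonotone_search(phi, y, d, nu_k, alpha=0.1, lam_trial=2.0, beta=0.5):
    """Backtrack until phi(y + lam*d) <= phi(y) + nu_k - alpha * lam^2 * ||d||^2.

    The tolerance nu_k > 0 permits a controlled growth of the objective,
    which is what makes the search usable when d may not be a descent
    direction at y.
    """
    lam, dd = lam_trial, float(d @ d)
    while lam > 1e-12 and phi(y + lam * d) > phi(y) + nu_k - alpha * lam**2 * dd:
        lam *= beta
    return y + lam * d

# Toy usage: a summable choice such as nu_k = 1 / (k + 1)**2 keeps the total
# allowed increase over all iterations finite.
phi = lambda z: float(z @ z - np.sum(np.abs(z)))
y, d = np.array([1.0, -2.0]), np.array([-0.5, 1.0])
z_new = nonmonotone_search(phi, y, d, nu_k=0.25)
```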
The boosted difference of convex functions algorithm (BDCA) has been recently introduced to accelerate the performance of the classical difference of convex functions algorithm (DCA). This acceleration is achieved thanks to an extrapolation step from the point computed by DCA via a line search procedure. In this work, we propose an extension of BDCA that can be applied to difference of convex functions programs with linear constraints, and prove that every cluster point of the sequence generated by this algorithm is a Karush-Kuhn-Tucker point of the problem if the feasible set has a Slater point. When the objective function is quadratic, we prove that any sequence generated by the algorithm is bounded and R-linearly (geometrically) convergent. Finally, we present some numerical experiments where we compare the performance of DCA and BDCA on some challenging problems: to test the copositivity of a given matrix, to solve one-norm and infinity-norm trust-region subproblems, and to solve piecewise quadratic problems with box constraints. Our numerical results demonstrate that this new extension of BDCA outperforms DCA.
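One ingredient implied by this abstract is that the extrapolation step must respect the linear constraints. The sketch below illustrates one way to keep the line search feasible by capping the trial step size; the helper name and the cap rule are assumptions for illustration, not necessarily the paper's exact mechanism.

```python
import numpy as np

def max_feasible_step(A, b, y, d):
    """Largest lam >= 0 with A @ (y + lam * d) <= b componentwise, i.e. how far
    the extrapolation from the DCA point y along d = y - x can go without
    leaving the polyhedron {z : A z <= b}."""
    Ad, slack = A @ d, b - A @ y
    with np.errstate(divide="ignore", invalid="ignore"):
        ratios = np.where(Ad > 1e-12, slack / Ad, np.inf)
    return float(np.min(ratios))

# Example: the box 0 <= z <= 1 written as A z <= b.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
y, d = np.array([0.4, 0.5]), np.array([0.2, 0.1])
lam_max = max_feasible_step(A, b, y, d)   # 3.0: y + 3*d touches the face z1 = 1
```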
The difference of convex functions algorithm (DCA) is widely used for minimizing the difference of two convex functions. A recently proposed accelerated version, termed BDCA for boosted DC algorithm, incorporates a line search step to achieve a larger decrease of the objective value at each iteration. Thanks to this step, BDCA usually converges much faster than DCA in practice. The solutions found by DCA are guaranteed to be critical points of the problem, but these may not be local minima. Although BDCA tends to improve the objective value of the solutions it finds, these are frequently just critical points as well. In this paper we combine BDCA with a simple Derivative-Free Optimization (DFO) algorithm to force d-stationarity (lack of descent direction) at the point obtained. The potential of this approach is illustrated through some computational experiments on a minimum-sum-of-squares clustering problem. Our numerical results demonstrate that the new method provides better solutions while still remaining faster than DCA in the majority of test cases.
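A small sketch of the kind of post-processing this abstract describes: after BDCA stops at a critical point, a simple derivative-free poll (here a compass search, chosen for illustration; the paper's DFO routine may differ) looks for a remaining descent direction, and BDCA can be restarted from any improved point found.

```python
import numpy as np

def compass_search(phi, x, step=1.0, tol=1e-8, shrink=0.5):
    """Poll +/- coordinate directions around x; shrink the step when none improves.
    If the step shrinks below tol without improvement, x is (numerically)
    d-stationary with respect to this poll set."""
    x = np.asarray(x, dtype=float)
    directions = np.vstack([np.eye(x.size), -np.eye(x.size)])
    while step > tol:
        better = [x + step * d for d in directions if phi(x + step * d) < phi(x)]
        if better:
            x = better[0]          # accept the first improving poll point
        else:
            step *= shrink         # no descent found at this mesh size
    return x

# Toy usage on a nonsmooth objective whose minimizers are (+-0.5, 0).
phi = lambda z: float(z @ z - abs(z[0]))
x_star = compass_search(phi, np.array([0.0, 0.3]))
```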