We introduce a new approach to applying the boosted difference-of-convex-functions algorithm (BDCA) to non-convex, non-differentiable problems involving differences of two convex functions (DC functions). Supposing the first DC component is differentiable and the second possibly non-differentiable, the main idea of BDCA is to use the point computed by the subproblem of the DC algorithm (DCA) to define a descent direction of the objective from that point; a monotone line search starting from it is then performed to find a new point that decreases the objective value relative to the point generated by the DCA subproblem. This procedure improves the performance of the DCA. However, if the first DC component is non-differentiable, the direction computed by BDCA can be an ascent direction, and a monotone line search cannot be performed. Our approach uses a non-monotone line search in the BDCA (nmBDCA) to permit a possible growth in the objective function values, controlled by a parameter. Under suitable assumptions, we show that any cluster point of the sequence generated by the nmBDCA is a critical point of the problem under consideration, and we provide some iteration-complexity bounds. Furthermore, if the first DC component is differentiable, we present different iteration-complexity bounds and prove full convergence of the sequence under the Kurdyka-Łojasiewicz property of the objective function. Numerical experiments show that the nmBDCA outperforms the DCA as well as its monotone version.
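To make the scheme concrete, here is a minimal numerical sketch of a non-monotone boosted DCA step on a toy DC objective; the instance, the function name nmbdca, and the slack schedule nu0*delta**k are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy DC objective phi(x) = g(x) - h(x) with
#   g(x) = 0.5 x^T A x   (smooth, strongly convex)
#   h(x) = ||x||_1       (convex, non-differentiable)
A = np.array([[2.0, 0.5], [0.5, 2.0]])

def g(x): return 0.5 * x @ A @ x
def h(x): return np.abs(x).sum()
def phi(x): return g(x) - h(x)

def nmbdca(x, iters=50, rho=0.1, beta=0.5, nu0=1.0, delta=0.5):
    for k in range(iters):
        s = np.sign(x)                 # a subgradient of h at x
        y = np.linalg.solve(A, s)      # DCA subproblem: min g(.) - <s, .>
        d = y - x                      # direction used by the boosting step
        nu = nu0 * delta**k            # summable slack permitting controlled growth
        lam = 1.0
        # non-monotone line search: accept values up to phi(y) + nu
        while lam > 1e-8 and phi(y + lam * d) > phi(y) + nu - rho * lam**2 * (d @ d):
            lam *= beta
        x = y + lam * d
    return x

x = nmbdca(np.array([1.0, -2.0]))
print(x, phi(x))
```

Because the slack nu is nonnegative, the backtracking loop always terminates even when d is an ascent direction, which is exactly the situation the non-monotone search is designed to handle.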
Linear discriminant analysis (LDA) has attracted much attention as a classical tool for both classification and dimensionality reduction. Classical LDA performs quite well in simple, low-dimensional settings but is not suitable for small-sample-size (SSS) data. Feature selection is an effective way to address this problem. As a variant of LDA, sparse optimal scoring (SOS) with ℓ0-norm regularization is considered in this paper. Using a new continuous nonconvex nonsmooth function to approximate the ℓ0-norm, we propose a novel difference-of-convex-functions algorithm (DCA) for sparse optimal scoring. The most favorable property of the proposed DCA is that its subproblem admits an analytical solution. The effectiveness of the proposed method is validated via theoretical analysis as well as some illustrative numerical experiments.
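As an illustration of the kind of surrogate involved (the abstract does not give the paper's exact function), the capped-ℓ1 penalty is one standard continuous nonconvex approximation of the ℓ0-norm with an explicit DC split; theta below is an assumed cap parameter.

```python
import numpy as np

theta = 0.1  # cap parameter; smaller -> tighter l0 approximation

def capped_l1(x):
    # sum_i min(|x_i|, theta)/theta approximates ||x||_0
    return np.minimum(np.abs(x), theta).sum() / theta

# DC split: min(|t|, theta) = |t| - max(|t| - theta, 0),
# so the surrogate equals (g - h)/theta with g, h both convex:
def g(x): return np.abs(x).sum()
def h(x): return np.maximum(np.abs(x) - theta, 0.0).sum()

x = np.array([0.0, 0.03, -2.0, 5.0])
print(capped_l1(x), (g(x) - h(x)) / theta)  # identical surrogate values
print(np.count_nonzero(x))                  # exact l0 count for comparison
```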
We introduce two new algorithms to minimise smooth difference-of-convex (DC) functions that accelerate the convergence of the classical DC algorithm (DCA). We prove that the point computed by DCA can be used to define a descent direction for the objective function evaluated at this point. Our algorithms combine DCA with a line search step that uses this descent direction. Convergence of the algorithms is proved and the rate of convergence is analysed under the Łojasiewicz property of the objective function. We apply our algorithms to a class of smooth DC programs arising in the study of biochemical reaction networks, where the objective function is real analytic and thus satisfies the Łojasiewicz property. Numerical tests on various biochemical models clearly show that our algorithms outperform DCA, being on average more than four times faster in both computational time and number of iterations. Numerical experiments show that the algorithms are globally convergent to a non-equilibrium steady state of various biochemical networks, with only chemically consistent restrictions on the network topology.
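A minimal sketch of a boosted-DCA iteration of this flavour on a smooth, coercive, real-analytic toy DC instance (so the Łojasiewicz property holds); the instance and the constants alpha, beta, lam0 are assumptions for illustration only.

```python
import numpy as np

# Smooth DC toy instance: phi(x) = g(x) - h(x) with
#   g(x) = 0.25 ||x||^2            (strongly convex, smooth)
#   h(x) = sum_i sqrt(x_i^2 + 1)   (convex, smooth)
# phi is nonconvex, coercive, and real analytic.
def phi(x): return 0.25 * x @ x - np.sqrt(x**2 + 1).sum()
def grad_h(x): return x / np.sqrt(x**2 + 1)

def bdca(x, iters=50, alpha=0.1, beta=0.5, lam0=2.0, tol=1e-12):
    for _ in range(iters):
        y = 2.0 * grad_h(x)        # DCA subproblem: grad g(y) = grad h(x)
        d = y - x                  # descent direction for phi at y
        if d @ d < tol:
            break
        lam = lam0
        # monotone Armijo backtracking along d, starting from y
        while lam > 1e-8 and phi(y + lam * d) > phi(y) - alpha * lam**2 * (d @ d):
            lam *= beta
        x = y + lam * d
    return x

print(bdca(np.array([0.2, -3.0])))  # converges near (sqrt(3), -sqrt(3))
```

The per-coordinate minimisers of this instance sit at ±sqrt(3), so the printed point can be checked against the first-order condition 0.5*x = x/sqrt(x^2+1).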
This paper considers DC (difference-of-convex-functions) algorithms in a Hilbert space setting and discusses their convergence. Applied to indefinite quadratic programs under linear constraints in Hilbert spaces, the DCA yields, among other things, two basic types of iterative algorithms, called the Projection DC decomposition algorithm and the Proximal DC decomposition algorithm. It is proved that, under some mild conditions, any iterative sequence generated by either of these algorithms converges R-linearly to a Karush-Kuhn-Tucker point of the QP problem.
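A finite-dimensional sketch of the projection-style DC decomposition idea for an indefinite QP over a box: choosing rho no smaller than the spectral radius of Q makes rho*I - Q positive semidefinite, so each convex subproblem collapses to a single projected step. The instance and the constraint set are illustrative, not from the paper.

```python
import numpy as np

# Indefinite QP: min 0.5 x^T Q x + q^T x  s.t.  lo <= x <= hi
Q = np.array([[1.0, 0.0], [0.0, -2.0]])   # indefinite
q = np.array([0.5, 0.3])
lo, hi = -1.0, 1.0

rho = 1.1 * max(abs(np.linalg.eigvalsh(Q)))  # rho*I - Q is then PSD

def proj(x):                                  # projection onto the box C
    return np.clip(x, lo, hi)

x = np.zeros(2)
for _ in range(200):
    # DC split: (0.5*rho||x||^2 + q.x) minus 0.5 x^T (rho I - Q) x;
    # the convex subproblem reduces to one projection step:
    x = proj(x - (Q @ x + q) / rho)
print(x, 0.5 * x @ Q @ x + q @ x)   # KKT point (-0.5, -1) for this instance
```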
We consider a class of generalized DC (difference-of-convex-functions) programming, which refers to the problem of minimizing the sum of two convex (possibly nonsmooth) functions minus a smooth convex function. To efficiently exploit the structure of the problem under consideration, we introduce a unified Douglas-Rachford method in Hilbert space. As an interesting byproduct of the unified framework, we can easily show that the proposed algorithm is able to handle convex composite optimization models. Owing to the nonconvexity of DC programming, we prove that the proposed method converges to a critical point of the problem under some assumptions. Finally, we demonstrate numerically that our algorithm outperforms the state-of-the-art DC algorithm (DCA) and the alternating direction method of multipliers (ADMM) for DC-regularized sparse recovery problems.
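For reference, the classical two-operator Douglas-Rachford iteration that such a unified framework builds on, applied here to the purely convex model min ||x||_1 + 0.5||x - b||^2; the paper's third, smooth concave part and its handling are not reproduced, and gamma is an assumed step parameter.

```python
import numpy as np

b = np.array([3.0, 0.2, -1.5])
gamma = 1.0

def prox_l1(z, t):            # prox of t*||.||_1 : soft-thresholding
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_quad(z, t):          # prox of t*0.5||. - b||^2
    return (z + t * b) / (1.0 + t)

z = np.zeros_like(b)
for _ in range(100):          # classical two-block Douglas-Rachford iteration
    y = prox_l1(z, gamma)
    w = prox_quad(2 * y - z, gamma)
    z = z + w - y
print(y)                      # shadow sequence: soft-threshold of b, here (2, 0, -0.5)
```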
In this paper, an NP-hard problem of minimizing a sum of pointwise minima of two functions is considered. Using a new equivalent formulation, we propose a smooth approximation and an ADMM algorithm for solving the problem. In numerical experiments, we compare four methods, including the algorithms proposed in this paper and known methods. The results indicate that the performance of each algorithm can depend strongly on the problem and simulation settings.
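The abstract does not spell out the smoothing; one standard choice, shown here as an assumption rather than the paper's construction, uses min(a, b) = (a + b - |a - b|)/2 and replaces |t| by sqrt(t^2 + mu^2), giving an O(mu) approximation error.

```python
import numpy as np

mu = 1e-2  # smoothing parameter; smaller -> tighter approximation

def smooth_min(a, b):
    # min(a,b) = (a + b - |a-b|)/2, with |t| ~ sqrt(t^2 + mu^2)
    return 0.5 * (a + b - np.sqrt((a - b) ** 2 + mu**2))

a = np.array([1.0, -2.0, 0.5])
b = np.array([0.3, 1.0, 0.5])
print(np.minimum(a, b))   # exact pointwise minima
print(smooth_min(a, b))   # smooth surrogate; worst error mu/2 at a = b
```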
The proximal support vector machine (PSVM), a variant of the support vector machine (SVM), generates a pair of non-parallel hyperplanes for classification. Although PSVM is a powerful classification tool, its feature-selection ability is weak. To overcome this defect, we introduce ℓ0-norm regularization into PSVM, which enables it to select important features and remove redundant ones simultaneously for classification. The resulting model is called a sparse proximal support vector machine (SPSVM). Due to the presence of the ℓ0-norm, the optimization problem of SPSVM is neither convex nor smooth and is thus difficult to solve. In this paper, we introduce a continuous nonconvex function to approximate the ℓ0-norm and propose a novel difference-of-convex-functions algorithm (DCA) to solve SPSVM. The main merit of the proposed method is that all subproblems are smooth and admit closed-form solutions. The effectiveness of the proposed method is illustrated by theoretical analysis as well as numerical experiments on both simulated and real-world datasets.
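To illustrate what a smooth DCA subproblem with a closed-form solution looks like (the paper's exact subproblem is not given in the abstract), consider a ridge-type model whose concave part has already been linearized into a vector v; the subproblem is strongly convex and reduces to a single linear solve.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
lam = 0.5
v = rng.standard_normal(5)   # linearization of the concave part (assumed)

# Subproblem  min_x 0.5||Ax - y||^2 + lam||x||^2 - v^T x  is smooth and
# strongly convex; its unique stationary point is the closed-form solution:
x = np.linalg.solve(A.T @ A + 2 * lam * np.eye(5), A.T @ y + v)
print(x)
```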
ISBN (print): 9798350301243
Trajectory planning for a swarm of UAVs is a challenging nonconvex optimization problem, particularly due to the large number of collision-avoidance constraints required for individual pairs of UAVs in the swarm. In this paper, we tackle this nonconvexity by leveraging difference-of-convex-functions (DC) programming. We introduce slack variables to relax and reformulate the collision-avoidance conditions and employ a penalty term to equivalently convert the problem into DC form. Consequently, we construct a penalty DC algorithm in which we sequentially solve a set of convex optimization problems obtained by linearizing the collision-avoidance constraint. The algorithm iteratively tightens the safety condition and reduces both the objective cost of the planning problem and the additional penalty term. Numerical results demonstrate the effectiveness of the proposed approach in planning for a large number of UAVs in congested space.
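A sketch of the standard linearization behind such sequential convexification: since ||v|| >= a^T v for any unit vector a, enforcing the half-space constraint below conservatively implies the original separation condition. The penalty weights and slack handling from the paper are omitted, and the function and variable names are assumptions.

```python
import numpy as np

def linearized_collision(p_i_k, p_j_k, d_min):
    """Convexify ||p_i - p_j|| >= d_min around the current iterates.

    With a = (p_i_k - p_j_k)/||p_i_k - p_j_k||, the linear constraint
    a @ (p_i - p_j) >= d_min implies ||p_i - p_j|| >= d_min, so it is a
    safe (inner) approximation usable in the next convex subproblem.
    """
    diff = p_i_k - p_j_k
    a = diff / np.linalg.norm(diff)
    return a, d_min  # enforce: a @ (p_i - p_j) >= d_min

a, rhs = linearized_collision(np.array([0.0, 0.0]), np.array([1.0, 0.5]), 1.5)
print(a, rhs)   # half-space coefficients for one UAV pair
```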
In this paper, we consider a class of sparse group ℓ0-regularized optimization problems. First, we give a continuous relaxation model of the considered problem and define a class of stationary points of the relaxation problem. Then, we establish the equivalence of these two problems in the sense of global minimizers, and prove that the defined stationary point is equivalent to a local minimizer of the considered sparse group ℓ0-regularized problem with a desirable bound from its global minimizers. Further, based on the difference-of-convex (DC) structure of the relaxation problem, we design two DC algorithms to solve it. We prove that any accumulation point of the iterates generated by either algorithm is a local minimizer with a desirable bound for the considered sparse group ℓ0 problem. In particular, all accumulation points share a common support set, and their zero entries are attained within finitely many iterations. Moreover, we give a global convergence analysis of the proposed algorithms. Finally, we perform numerical experiments to show their efficiency.
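For context (this is not the paper's relaxation or either of its DC algorithms), the exact group ℓ0 proximal step admits a simple closed form that relaxed solutions are often compared against: a group survives iff its norm exceeds sqrt(2*lam).

```python
import numpy as np

def group_hard_threshold(z, groups, lam):
    """Prox of lam * (number of nonzero groups) at z (closed form).

    For each group g:  min_x 0.5||x - z_g||^2 + lam*[x != 0]
    keeps z_g iff 0.5||z_g||^2 > lam, i.e. ||z_g|| > sqrt(2*lam).
    """
    x = np.zeros_like(z)
    for g in groups:
        if np.linalg.norm(z[g]) > np.sqrt(2 * lam):
            x[g] = z[g]
    return x

z = np.array([0.1, -0.2, 2.0, 1.5, 0.05])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4])]
print(group_hard_threshold(z, groups, lam=0.5))  # [0, 0, 2, 1.5, 0]
```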
Image reconstruction is important for activities that rely on optical and comparative data analysis. Signals obtained through acquisition systems can be inconsistent due to several factors, such as camera movement and noise, but they can be modeled mathematically. Consequently, continuous optimization tools have become popular in recent years for image problems. This work reconstructs noisy images using a non-monotone boosted DC algorithm (nmBDCA), an accelerated variant of the difference-of-convex algorithm (DCA), with a non-convex version of the total variation (TV) model, to obtain better computational performance than DCA and superior quality of the restored image. Specifically, the performance of the non-convex TV model is compared to its convex formulation. Two sets of experiments are included, one with black-and-white images and another with grayscale medical computed tomography (CT) images. The results of the first set show that nmBDCA performs reconstructions with higher PSNR in all experiments, lower CPU time, and SSIM greater than or equal to that of DCA in 91.67% of tests. The non-convex TV model is more robust than the convex one in the presence of stronger noise, presenting higher SSIM and PSNR in all experiments performed. In the experiments with CT images, nmBDCA outperforms DCA in reconstruction quality and CPU time in all tests performed. The non-convex model applied with nmBDCA outperforms the convex model in SSIM and CPU time in all experiments, and in PSNR in 77.78% of the tests.
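Since the abstract does not specify the non-convex TV model, the sketch below contrasts the convex anisotropic TV with one common non-convex surrogate, a log penalty on the image gradients, which admits an explicit DC split; theta is an assumed shape parameter.

```python
import numpy as np

theta = 0.5  # shape parameter of the nonconvex penalty (assumed)

def grad_img(u):
    # forward differences (anisotropic discretization)
    dx = np.diff(u, axis=1).ravel()
    dy = np.diff(u, axis=0).ravel()
    return np.concatenate([dx, dy])

def tv_convex(u):
    return np.abs(grad_img(u)).sum()

def tv_nonconvex(u):
    # log penalty: DC split  log(1 + |t|/theta) = |t|/theta - r(t),
    # where r(t) = |t|/theta - log(1 + |t|/theta) is convex (r'(0) = 0).
    t = np.abs(grad_img(u)) / theta
    return np.log1p(t).sum()

u = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
print(tv_convex(u), tv_nonconvex(u))
```

Unlike the convex TV, the log penalty grows sublinearly in the gradient magnitude, which is what makes such non-convex models penalize large, genuine edges less and behave more robustly under heavy noise.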