Multi-target localization in a distributed multiple-input multiple-output radar is quite challenging as the correct measurement-target associations in each transmitter-receiver pair are unknown. In this paper, we address this difficult problem from a joint optimization perspective. The measurement-target association and multi- target localization are jointly formulated as an intractable mixed-integer optimization problem, which contains both discrete and continuous variables. We first develop an equivalent difference of convex functions (DC) representation for the non-convex Boolean constraint imposed on the association variables, making the problem tractable. Then, a DC algorithm is derived to efficiently solve the resulting optimization problem. Simulation results demonstrate that the proposed DC method is numerically accurate when compared to state-of-the-art methods.
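The core DC algorithm step described above can be sketched generically: linearize the concave part of the objective at the current iterate and minimize the resulting convex surrogate. The following minimal Python sketch (the toy one-dimensional problem and all function names are illustrative, not the paper's radar formulation) shows the iteration:

```python
import numpy as np

def dca(x0, grad_h, solve_convex, tol=1e-8, max_iter=100):
    """Classical DCA for min f(x) = g(x) - h(x), with g, h convex:
    linearize h at x_k and minimize the convex surrogate
    g(x) - <grad_h(x_k), x>."""
    x = x0
    for _ in range(max_iter):
        y = grad_h(x)            # (sub)gradient of the concave part
        x_new = solve_convex(y)  # argmin_x g(x) - y * x
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy example: f(x) = x**2 - 2|x|, with g(x) = x**2, h(x) = 2|x|.
# Global minima lie at x = +1 and x = -1.
grad_h = lambda x: 2.0 * np.sign(x) if x != 0 else 0.0
solve_convex = lambda y: y / 2.0   # closed form for argmin x**2 - y*x
x_star = dca(0.5, grad_h, solve_convex)
```

Each surrogate here has a closed-form minimizer, so the loop converges in one step on this toy problem; in the paper's setting the convex subproblem would instead carry the localization variables and relaxed association constraints.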
In this work, necessary optimality conditions of KKT type for (weak) Pareto optimality are derived and a DC-Dinkelbach-type algorithm is proposed for vector fractional mathematical programs with ratios of difference of convex (DC) functions and DC constraints, by reducing the latter to a system of scalar parametric problems and using DC tools. The special case of vector fractional programs with ratios of convex functions is also analysed.
Author: Karcher, Cody J. (MIT, Dept. of Aeronautics & Astronautics, 77 Massachusetts Ave, Cambridge, MA 02139, USA)
Signomial Programming (SP) has proven to be a powerful tool for engineering design optimization, striking a balance between the computational efficiency of Geometric Programming (GP) and the extensibility of more general methods for optimization. While techniques exist for fitting GP compatible models to data, no models have been proposed that take advantage of the increased modeling flexibility available in SP. Here, a new difference of Softmax Affine function is constructed by utilizing existing methods of GP compatible fitting in difference of convex (DC) functions. This new function class is fit to data in log-log space and becomes either a signomial or a set of signomials upon inverse transformation. Examples presented here include simple test cases in 1D and 2D, and a fit to the performance data of the NACA 24xx family of airfoils. In each case, RMS error is driven to less than 1%.
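The function class described above can be sketched directly: a softmax-affine function is a smoothed maximum of affine functions, and the new class is the difference of two such terms. The sketch below (parameter names and the scalar test values are illustrative; the fitting procedure itself is not shown) evaluates this DC class:

```python
import numpy as np

def softmax_affine(x, A, b, alpha):
    """Softmax-affine function:
    (1/alpha) * log( sum_i exp(alpha * (A[i] @ x + b[i])) ).
    As alpha grows it approaches max_i (A[i] @ x + b[i])."""
    z = alpha * (A @ x + b)
    return np.log(np.sum(np.exp(z))) / alpha

def diff_softmax_affine(x, A1, b1, a1, A2, b2, a2):
    """Difference of two softmax-affine functions -- the DC class
    the abstract fits to data in log-log space."""
    return softmax_affine(x, A1, b1, a1) - softmax_affine(x, A2, b2, a2)

# Single-term sanity check: with one affine term each, the softmax
# reduces to the affine function itself, so the difference is
# (1.0*x) - (0.5*x) = 0.5*x, i.e. 1.0 at x = 2.
val = diff_softmax_affine(np.array([2.0]),
                          np.array([[1.0]]), np.array([0.0]), 1.0,
                          np.array([[0.5]]), np.array([0.0]), 1.0)
```

Because the fit is performed in log-log space, exponentiating such a model turns each softmax-affine term into a posynomial-like expression, which is what makes the inverse transform a signomial.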
The boosted difference of convex functions algorithm (BDCA) was recently proposed for minimizing smooth difference of convex (DC) functions. BDCA accelerates the convergence of the classical difference of convex functions algorithm (DCA) thanks to an additional line search step. The purpose of this paper is twofold. First, we show that this scheme can be generalized and successfully applied to certain types of nonsmooth DC functions, namely, those that can be expressed as the difference of a smooth function and a possibly nonsmooth one. Second, we show that there is complete freedom in the choice of the trial step size for the line search, which is something that can further improve its performance. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration and that the corresponding objective value is monotonically decreasing and convergent. The global convergence and convergence rate of the iterations are obtained under the Kurdyka-Lojasiewicz property. Applications and numerical experiments for two problems in data science are presented, demonstrating that BDCA outperforms DCA. Specifically, for the minimum sum-of-squares clustering problem, BDCA was on average 16 times faster than DCA, and for the multidimensional scaling problem, BDCA was 3 times faster than DCA.
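The BDCA idea described above — a classical DCA step followed by a line search along the displacement direction — can be sketched as follows. This is a minimal illustration on a toy smooth-plus-nonsmooth DC function, not the paper's implementation; the step-size rule and all names are assumptions:

```python
import numpy as np

def bdca(x0, grad_h, solve_convex, f, lam0=2.0, beta=0.5,
         alpha=1e-4, tol=1e-8, max_iter=200):
    """BDCA sketch for min f(x) = g(x) - h(x): take the classical
    DCA point y, then backtrack along d = y - x_k, accepting any
    trial step that gives sufficient decrease (the trial step size
    lam0 can be chosen freely, per the abstract)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        y = solve_convex(grad_h(x))        # classical DCA step
        d = y - x
        if np.linalg.norm(d) < tol:
            return y
        lam = lam0
        while f(y + lam * d) > f(y) - alpha * lam * np.dot(d, d):
            lam *= beta                    # backtracking line search
            if lam < 1e-12:
                lam = 0.0                  # fall back to the DCA point
                break
        x = y + lam * d
    return x

# Toy: f(x) = ||x||^2 - ||x||_1, g(x) = ||x||^2, h(x) = ||x||_1.
f = lambda x: np.dot(x, x) - np.sum(np.abs(x))
x_min = bdca([0.3, 0.7], lambda x: np.sign(x), lambda y: y / 2.0, f)
```

When the line search finds no improving step, lam falls back to zero and the iteration reduces to plain DCA, which preserves the monotone decrease of the objective.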
We are concerned in this paper with minimax fractional programs whose objective functions are the maximum of finitely many ratios of differences of convex functions, with constraints also described by differences of convex functions. Like Dinkelbach-type algorithms, the method of centers for generalized fractional programs fails to work for such problems, since the parametric subproblems may be nonconvex, whereas these methods need a global optimal solution of the subproblems. We first give necessary optimality conditions for these problems, by means of convex analysis tools, and then extend the latter method to solve such programs. The method is based on solving a sequence of parametric convex problems. We show that every cluster point of the sequence of optimal solutions of these subproblems satisfies necessary optimality conditions of Karush-Kuhn-Tucker criticality type.
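The parametric scheme underlying such Dinkelbach-type methods can be sketched generically: solve a parametric subproblem for a fixed ratio value, then update the ratio at the solution. The sketch below uses a single-ratio toy problem with a closed-form parametric solver; the function names and the toy instance are illustrative, not the paper's method of centers:

```python
def dinkelbach(fs, gs, solve_param, lam0, tol=1e-10, max_iter=100):
    """Dinkelbach-type sketch for min_x max_i f_i(x)/g_i(x):
    solve the parametric problem min_x max_i (f_i(x) - lam*g_i(x)),
    then update lam with the worst ratio at the solution."""
    lam = lam0
    x = solve_param(lam)
    for _ in range(max_iter):
        x = solve_param(lam)   # user-supplied parametric solver
        new_lam = max(f(x) / g(x) for f, g in zip(fs, gs))
        if abs(new_lam - lam) < tol:
            return x, new_lam
        lam = new_lam
    return x, lam

# Toy single-ratio example: minimize (x^2 + 1)/x over x in (0, 2].
# The parametric problem min x^2 + 1 - lam*x has solution x = lam/2,
# clipped to the feasible interval; the optimum is x = 1, value 2.
fs = [lambda x: x * x + 1.0]
gs = [lambda x: x]
x_opt, lam_opt = dinkelbach(
    fs, gs, lambda lam: min(max(lam / 2.0, 1e-6), 2.0), lam0=1.0)
```

In the paper's setting the ratios are DC rather than convex/concave, so the parametric subproblems are convexified first; here the convex subproblem simply has a closed form.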
In this paper, we consider the minimization of a class of nonconvex composite functions with difference of convex structure under linear constraints. While this kind of problem can in theory be solved by the celebrated alternating direction method of multipliers (ADMM), a direct application of ADMM often leads to difficult nonconvex subproblems. To address this issue, we propose to convexify the subproblems through a linearization technique as done in the difference of convex functions algorithm (DCA). By assuming the Kurdyka-Łojasiewicz property, we prove that the resulting algorithm sequentially converges to a critical point. It turns out that in applications in signal and image processing such as compressed sensing and image denoising, the proposed algorithm usually enjoys closed-form solutions of the subproblems and thus can be very efficient. We provide numerical experiments to demonstrate the effectiveness of our algorithm.
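The closed-form subproblems mentioned above can be illustrated on a small unconstrained analogue: an l1-minus-l2 regularized denoising model whose concave part is linearized as in DCA, leaving an l1 problem solvable by soft-thresholding. The model, step size, and names below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding: prox of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_minus_l2_denoise(b, mu=1.0, tol=1e-10, max_iter=500):
    """DCA-style convexified iteration for the nonconvex model
    min_x ||x||_1 - ||x||_2 + (mu/2)||x - b||^2:
    linearize -||x||_2 at x_k and solve the resulting convex l1
    subproblem in closed form via soft-thresholding."""
    x = b.copy()
    for _ in range(max_iter):
        nrm = np.linalg.norm(x)
        w = x / nrm if nrm > 0 else np.zeros_like(x)  # grad of ||x||_2
        x_new = soft(b + w / mu, 1.0 / mu)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Small entries are shrunk to zero while the dominant entry survives.
b = np.array([3.0, 0.1, -0.05])
x_hat = l1_minus_l2_denoise(b)
```

Because every iteration is a single soft-thresholding, the per-step cost is linear in the signal length, which is what makes such convexified schemes attractive for compressed sensing and denoising.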
A global optimization algorithm is presented for maximizing the sum of ratios of differences of convex functions over a nonconvex feasible region. The algorithm is based on a branch and bound framework. To obtain a difference of convex program, the considered problem is first reformulated by introducing as few new variables as possible. By using subgradients and convex envelopes, the fundamental problem of estimating a lower bound in the branch and bound algorithm is transformed into a relaxed linear programming problem which can be solved efficiently. Furthermore, the size of the relaxed linear programming problem does not change during the algorithm's search. Lastly, the convergence of the algorithm is analyzed and numerical results are reported.
Maximum hands-off control is the optimal solution to the L0 optimal control problem. While convex approximation is typically used to relax this problem, it does not necessarily result in maximum hands-off control. Therefore, this study introduces a nonconvex approximation method and a class of nonconvex optimal control problems that are always equivalent to the maximum hands-off control problem. A computation method based on difference of convex functions optimization is then derived and numerically validated.
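One standard way to bring the L0 cost into DC form (shown here only as an illustration of the general idea; the paper's specific approximation may differ) is the capped-l1 surrogate, which counts entries above a threshold and admits an explicit difference-of-convex decomposition:

```python
import numpy as np

def capped_l1(u, eps):
    """Capped-l1 surrogate for the L0 norm, written as a
    difference of two convex functions of u:
    sum_i min(|u_i|/eps, 1)
      = ( sum_i |u_i|  -  sum_i max(|u_i| - eps, 0) ) / eps.
    Entries with |u_i| >= eps contribute exactly 1, so the surrogate
    approaches the count of nonzero entries as eps -> 0."""
    a = np.abs(u)
    return np.sum(a - np.maximum(a - eps, 0.0)) / eps

# Entry 2.0 counts as one nonzero; 0.001 is below the threshold
# and contributes only 0.1; the exact zero contributes nothing.
value = capped_l1(np.array([0.0, 2.0, 0.001]), 0.01)
```

Both summands are convex in u (an l1 norm and a sum of hinge terms), so a DC algorithm can linearize the subtracted term and solve convex subproblems, mirroring the computation method the abstract describes.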
The strategy presented in this paper differs significantly from existing approaches as we formulate the problem as a standard optimization problem of difference of convex functions. We have developed the necessary and sufficient conditions for global solutions in this standard form. The main challenge in the standard form arises from a constraint of the form g(t) = 1, where g is a convex function. We utilize the classical inequality between the weighted arithmetic and harmonic means to overcome this challenge. This enables us to express the optimality conditions as a convex geometric programming problem and employ a predictor-corrector primal-dual interior point method for its solution, with weights updated during the predictor phase. The interior point method solves the dual problem of geometric programming and obtains the primal solution through exponential transformation. We have implemented the algorithm in Fortran 90 and validated it using a set of test problems from the literature. The proposed method successfully solved all the test problems, and the computational results are presented alongside the tested problems and the corresponding solutions found.
We introduce a bundle method for the unconstrained minimization of nonsmooth difference-of-convex (DC) functions, based on the calculation of a special type of descent direction called a descent-ascent direction. The algorithm only requires evaluations of the minuend component function at each iterate, and it can be considered a parsimonious bundle method, as accumulation of information takes place only when the descent-ascent direction does not provide a sufficient decrease. No line search is performed, and proximity control is pursued independently of whether a decrease in the objective function is achieved. Termination of the algorithm at a point satisfying a weak criticality condition is proved, and numerical results on a set of benchmark DC problems are reported.