This thesis studies the theory and implementation of infeasible-start primal-dual interior-point methods for convex optimization problems. Convex optimization has applications in many fields of engineering and science, such as data analysis, control theory, signal processing, relaxation and randomization, and robust optimization. In addition to strong and elegant theories, the potential for creating efficient and robust software has made convex optimization very popular. Primal-dual algorithms have yielded efficient solvers for convex optimization problems in conic form over symmetric cones (linear programming (LP), second-order cone programming (SOCP), and semidefinite programming (SDP)). However, many other highly demanded convex optimization problems lack comparable solvers. To close this gap, we have introduced a general optimization setup, called Domain-Driven, that covers many interesting classes of optimization. Domain-Driven means our techniques are applied directly to the given "good" formulation, without a forced reformulation in conic form. Moreover, this approach also naturally handles cone constraints and hence the conic form. A problem is in the Domain-Driven setup if it can be formulated as minimizing a linear function over a convex set, where the convex set is equipped with an efficient self-concordant barrier whose Legendre-Fenchel conjugate is easy to evaluate. We show how general this setup is by providing several interesting classes of examples. LP, SOCP, and SDP are covered by the Domain-Driven setup. More generally, consider all convex cones with the property that both the cone and its dual admit efficiently computable self-concordant barriers. Then, our Domain-Driven setup can handle any conic optimization problem formulated using direct sums of these cones and their duals. Then, we show how to construct interesting convex sets as the direct sum of the epigraphs of univariate convex functions. This construction, as a special case, con...
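In symbols, the setup described in this abstract can be stated as below; the notation (A, D, Φ) is our illustrative rendering of the standard formulation, not necessarily the thesis's own.

```latex
% Illustrative statement of the Domain-Driven setup (our notation):
\min_{x \in \mathbb{R}^n} \ \langle c, x \rangle
\quad \text{subject to} \quad Ax \in D,
% where D is a convex set whose interior carries an efficiently
% computable self-concordant barrier \Phi, and the Legendre-Fenchel
% conjugate
\Phi_*(y) = \sup_{z \in \operatorname{int} D}
            \bigl\{ \langle y, z \rangle - \Phi(z) \bigr\}
% is assumed easy to evaluate.
```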
The classical k-means algorithm for partitioning n points in R^d into k clusters is one of the most popular and widely used clustering methods. The need to respect prescribed lower bounds on the cluster sizes has been observed in many scientific and business applications. In this paper, we present and analyze a generalization of k-means that is capable of handling weighted point sets and prescribed lower and upper bounds on the cluster sizes. We call it weight-balanced k-means. The key difference to existing models lies in the ability to handle the combination of weighted point sets with prescribed bounds on the cluster sizes. This imposes the need to perform partial-membership clustering, and leads to significant differences. For example, while finite termination of all k-means variants for unweighted point sets is a simple consequence of the existence of only finitely many partitions of a given set of points, the situation is more involved for weighted point sets, as there are infinitely many partial-membership clusterings. Using polyhedral theory, we show that the number of iterations of weight-balanced k-means is bounded above by n^{O(dk)}, so in particular it is polynomial for fixed k and d. This is similar to the known worst-case upper bound for classical k-means for unweighted point sets and unrestricted cluster sizes, despite the much more general framework. We conclude with the discussion of some additional favorable properties of our method. (C) 2017 Elsevier B.V. All rights reserved.
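To make the partial-membership assignment step concrete, here is a minimal sketch of one weight-balanced k-means iteration. The function names and the use of scipy.optimize.linprog to solve the transportation LP are our illustrative choices, not the authors' implementation.

```python
# Hypothetical sketch of one weight-balanced k-means iteration (our
# illustration, not the authors' code): the assignment step is a
# transportation LP over partial memberships, followed by the usual
# weighted centroid update.
import numpy as np
from scipy.optimize import linprog

def assign(points, weights, centers, lower, upper):
    """Minimize sum_ij z_ij * ||p_i - c_j||^2 over z >= 0, with row sums
    equal to the point weights (each point fully assigned, possibly split)
    and column sums within [lower_j, upper_j] (cluster size bounds)."""
    n, k = len(points), len(centers)
    # squared distances, flattened row-major to match z_ij -> z[i*k + j]
    cost = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1).ravel()
    A_eq = np.zeros((n, n * k))          # each point's memberships sum to w_i
    for i in range(n):
        A_eq[i, i * k:(i + 1) * k] = 1.0
    col = np.zeros((k, n * k))           # column-sum (cluster weight) rows
    for j in range(k):
        col[j, j::k] = 1.0
    A_ub = np.vstack([col, -col])        # col_sum <= upper, -col_sum <= -lower
    b_ub = np.concatenate([np.asarray(upper), -np.asarray(lower)])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=weights,
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, k)

def update_centers(points, z):
    """Weighted centroid of each cluster's (partial) members."""
    return (z.T @ points) / z.sum(axis=0)[:, None]
```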
Proximal distance algorithms combine the classical penalty method of constrained minimization with distance majorization. If f(x) is the loss function and C is the constraint set in a constrained minimization problem, then the proximal distance principle mandates minimizing the penalized loss f(x) + (ρ/2) dist(x, C)^2 and following the solution x_ρ to its limit as ρ tends to ∞. At each iteration the squared Euclidean distance dist(x, C)^2 is majorized by the spherical quadratic ‖x − P_C(x_k)‖^2, where P_C(x_k) denotes the projection of the current iterate x_k onto C. The minimum of the surrogate function f(x) + (ρ/2)‖x − P_C(x_k)‖^2 is given by the proximal map prox_{ρ^{-1}f}[P_C(x_k)]. The next iterate x_{k+1} automatically decreases the original penalized loss for fixed ρ. Since many explicit projections and proximal maps are known, it is straightforward to derive and implement novel optimization algorithms in this setting. These algorithms can take hundreds if not thousands of iterations to converge, but the simple nature of each iteration makes proximal distance algorithms competitive with traditional algorithms. For convex problems, proximal distance algorithms reduce to proximal gradient algorithms and therefore enjoy well-understood convergence properties. For nonconvex problems, one can attack convergence by invoking Zangwill's theorem. Our numerical examples demonstrate the utility of proximal distance algorithms in various high-dimensional settings, including a) linear programming, b) constrained least squares, c) projection to the closest kinship matrix, d) projection onto a second-order cone constraint, e) calculation of Horn's copositive matrix index, f) linear complementarity programming, and g) sparse principal components analysis. The proximal distance algorithm in each case is competitive with or superior in speed to traditional methods such as the interior-point method and the alternating direction method of multipliers (ADMM). Source code for the numerical examples can be found...
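As a concrete instance of the iteration x_{k+1} = prox_{ρ^{-1}f}(P_C(x_k)), the following sketch applies it to nonnegative least squares, a simple variant of example b) above; the penalty schedule and function name are our assumptions, not the paper's code.

```python
# A minimal proximal distance sketch (our illustration, not the paper's
# code): nonnegative least squares with f(x) = 0.5*||Ax - b||^2 and
# C = {x : x >= 0}, iterating x <- prox_{f/rho}(P_C(x)) while rho grows.
import numpy as np

def proximal_distance_nnls(A, b, rho=1.0, rho_max=1e6, growth=1.2,
                           inner_iters=20):
    x = np.zeros(A.shape[1])
    AtA, Atb = A.T @ A, A.T @ b
    I = np.eye(A.shape[1])
    while rho <= rho_max:                      # classical penalty schedule
        for _ in range(inner_iters):
            v = np.clip(x, 0.0, None)          # P_C: project onto x >= 0
            # prox_{f/rho}(v) solves (A'A + rho I) x = A'b + rho v
            x = np.linalg.solve(AtA + rho * I, Atb + rho * v)
        rho *= growth
    return np.clip(x, 0.0, None)
```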
The starting point used by an interior-point algorithm for linear and convex quadratic programming may significantly influence the behaviour of the method. A widely used heuristic to construct such a point consists of dropping the variable nonnegativity constraints and computing a solution which minimizes the Euclidean norm of the primal (or dual) point while satisfying the appropriate primal (or dual) equality constraints, followed by shifting the variables so that all their components are positive and bounded away from zero. In this Short Communication a new approach for finding a starting point is proposed. It relies on a few inexact Newton steps performed at the start of the solution process. A formal justification of the new heuristic is given, and computational results are presented to demonstrate its advantages in practice. Computational experience with a large collection of small- and medium-size test problems reveals that the new starting point is superior to the old one and saves 20-40% of the iterations needed by the primal-dual method. For larger and more difficult problems this translates into remarkable savings in solution time. (C) 2016 Elsevier B.V. All rights reserved.
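For reference, the widely used heuristic this abstract describes (minimum-norm point plus shift, in the spirit of Mehrotra's starting point) can be sketched as follows. This is our simplified rendering; it omits the complementarity-based correction used in production codes, and it is not the paper's new inexact-Newton construction.

```python
# Sketch of the classical starting-point heuristic for min c'x s.t.
# Ax = b, x >= 0 (our simplified rendering, not the paper's new method).
import numpy as np

def naive_start(A, b, c, delta=1.0):
    AAt = A @ A.T
    x = A.T @ np.linalg.solve(AAt, b)    # min ||x|| s.t. Ax = b
    y = np.linalg.solve(AAt, A @ c)      # least-squares dual point
    s = c - A.T @ y
    # shift so all components are positive and bounded away from zero
    x = x + max(-1.5 * x.min(), delta)
    s = s + max(-1.5 * s.min(), delta)
    return x, y, s
```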
We introduce and begin to explore the mean and median of finite sets of shapes represented as integral currents. The median can be computed efficiently in practice, and we focus most of our theoretical and computation...
The increase of computer performance continues to support the practice of large-scale optimization. Computers with multiple computing cores and vector processing capabilities are now widely available. We investigate how the recently introduced Advanced Vector Extensions (AVX) instruction set on Intel-compatible architectures can be exploited in interior-point methods for linear and nonlinear optimization. We focus on data structures and implementation techniques that utilize the new vector instructions. Our numerical experiments demonstrate that the AVX instruction set provides a significant performance boost in our implementation on large-scale problems that have significant fill-in in the sparse Cholesky factorization, achieving up to 100 gigaflops on a standard desktop computer on linear optimization problems for which the required Cholesky factorization is relatively dense.
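As a rough illustration of where the vectorizable work lives, each interior-point iteration factors a normal-equations matrix whose dense Cholesky kernels are exactly what AVX-style vector units accelerate. The sketch below is our example in NumPy/SciPy rather than AVX intrinsics; the vectorization happens inside the underlying BLAS/LAPACK kernels.

```python
# Rough illustration (ours, not the paper's code) of the dominant,
# vector-unit-friendly step in an interior-point iteration: the Cholesky
# factorization of the normal-equations matrix A diag(d) A^T.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def normal_equations_step(A, d, r):
    """Solve (A diag(d) A^T) y = r via Cholesky."""
    M = (A * d) @ A.T                    # A diag(d) A^T, without forming diag(d)
    return cho_solve(cho_factor(M), r)
```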
In this paper, we present a primal-dual interior-point method for linear optimization problems based on a new efficient kernel function with a trigonometric barrier term. We derive the complexity bounds for large- and small-update methods, respectively. We obtain the best known complexity bound for large-update methods, which significantly improves on the complexity results obtained so far for a trigonometric kernel function, given by Peyghami et al. The results obtained in this paper are the first to reach this goal.
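For context, this line of work fits the generic kernel-function framework sketched below; the abstract does not give the new kernel's formula, so none is invented here.

```latex
% Generic kernel-function framework (our summary of the standard setup).
% With scaled variables v_i = \sqrt{x_i s_i / \mu}, a kernel \psi
% defines the proximity/barrier measure
\Psi(v) = \sum_{i=1}^{n} \psi(v_i),
\qquad \psi(1) = \psi'(1) = 0, \quad \psi''(t) > 0,
% and the search direction is the scaled Newton step with right-hand
% side -\nabla\Psi(v). The best known large-update bound in this
% framework is O(\sqrt{n}\,\log n \,\log(n/\epsilon)) iterations.
```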
In this paper, we propose a new predictor-corrector interior-point algorithm for linear programming based on a wide neighbourhood. In each iteration, the algorithm computes Ai and Zhang's predictor direction (SIAM J. Optim. 16(2):400-417, 2005) and a new corrector direction, in an attempt to improve its performance. We show that the duality gap decreases in both the predictor and the corrector steps. Moreover, we also prove that the complexity of the algorithm coincides with the best iteration bound for small-neighbourhood algorithms. Finally, some numerical experiments are provided which reveal the capability and effectiveness of the proposed method.
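The wide neighbourhoods used in this line of work are variants of the standard negative-infinity neighbourhood; the rendering below is ours, not the paper's exact definition.

```latex
% Standard wide (negative-infinity) neighbourhood of the central path:
\mathcal{N}_\infty^-(\tau) =
  \bigl\{ (x, y, s) \in \mathcal{F}^0 : x_i s_i \ge \tau \mu
          \ \text{for all } i \bigr\},
\qquad \mu = \frac{x^{\mathsf{T}} s}{n}, \quad 0 < \tau < 1,
% where \mathcal{F}^0 is the set of strictly feasible primal-dual
% points; the best small-neighbourhood iteration bound referred to
% above is O(\sqrt{n}\,\log(1/\epsilon)).
```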
ISBN (print): 9781509007516
The max-cut problem is one of many NP-hard graph theory problems that have attracted many researchers over the years. Maximum cuts have applications in fields including theoretical physics and electronics, but they are best known through the algorithmic problem of finding a maximum cut, commonly called MAX-CUT, a relatively well-studied problem, particularly in the context of approximation. Various heuristics, or combinations of optimization and heuristic methods, have been developed to solve this problem. Among them is the efficient algorithm of Goemans and Williamson. Their algorithm combines semidefinite programming and a rounding procedure to produce an approximate solution to the max-cut problem. Semidefinite programming (SDP) is currently the most sophisticated area of conic programming that is polynomially solvable. The SDP problem is solved with interior-point methods. In parallel, the development of efficient SDP solvers, based on interior-point algorithms, also contributed to the success of this method. In this paper we use a new variant of the solver CSDP (C library for semidefinite programming) to solve this problem. It is based on a Majorize-Minimize line search algorithm for barrier function optimization. A tangent majorant function is built to approximate a scalar criterion containing a barrier function. The comparison of the results obtained with the classic CSDP and our new variant is promising.
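For readers unfamiliar with the Goemans-Williamson pipeline mentioned above, here is a short sketch; it is our example using cvxpy, not the paper's modified CSDP solver.

```python
# Goemans-Williamson for max-cut (our illustration): solve the SDP
# relaxation, then round with a random hyperplane.
import numpy as np
import cvxpy as cp

def goemans_williamson(W, seed=0):
    """W: symmetric nonnegative weight matrix; returns a +/-1 cut vector."""
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    # SDP relaxation: maximize (1/4) sum_ij w_ij (1 - X_ij), diag(X) = 1
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, [cp.diag(X) == 1]).solve()
    # factor X = V V^T (clip tiny negative eigenvalues from the solver)
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.clip(vals, 0, None))   # row i embeds vertex i
    # random-hyperplane rounding: sign of projection on a random direction
    return np.sign(V @ np.random.default_rng(seed).normal(size=n))
```

The rounding step is the source of the well-known guarantee that the expected cut value is at least 0.878 times the SDP optimum.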