The main goals of this paper are to: i) relate two iteration-complexity bounds derived for the Mizuno-Todd-Ye predictor-corrector (MTY P-C) algorithm for linear programming (LP), and ii) study the geometrical structure of the LP central path. The first iteration-complexity bound for the MTY P-C algorithm considered in this paper is expressed in terms of the integral of a certain curvature function over the traversed portion of the central path. The second iteration-complexity bound, derived recently by the authors using the notion of crossover events introduced by Vavasis and Ye, is expressed in terms of a scale-invariant condition number associated with the $m \times n$ constraint matrix of the LP. In this paper, we establish a relationship between these bounds by showing that the first one can be majorized by the second one. We also establish a geometric result about the central path which gives a rigorous justification, based on the curvature of the central path, of a claim made by Vavasis and Ye in view of the behavior of their layered least-squares path-following LP method: that the central path consists of $O(n^2)$ long but straight continuous parts, while the remaining curved part is relatively "short".
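To make the central-path geometry concrete: the path is the set of solutions $(x(\mu), y(\mu), s(\mu))$ of $Ax = b$, $A^T y + s = c$, $x_i s_i = \mu$ for $\mu > 0$, and the claim above concerns how sharply $x(\mu)$ turns as $\mu$ decreases. The sketch below traces the path of a small made-up LP with damped Newton steps and prints the turning angle between consecutive path segments; it illustrates the object being studied, not the paper's analysis.

```python
import numpy as np

def central_path_point(A, b, c, mu, x, y, s, iters=30):
    """Newton's method on the central-path system
       Ax = b, A^T y + s = c, x_i * s_i = mu, warm-started from (x, y, s)."""
    m, n = A.shape
    for _ in range(iters):
        F = np.concatenate([A @ x - b, A.T @ y + s - c, x * s - mu])
        if np.linalg.norm(F) < 1e-12:
            break
        J = np.block([
            [A,                np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T,              np.eye(n)],
            [np.diag(s),       np.zeros((n, m)), np.diag(x)],
        ])
        d = np.linalg.solve(J, -F)
        dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
        a = 1.0
        while np.any(x + a * dx <= 0) or np.any(s + a * ds <= 0):
            a *= 0.5                          # damp to keep x, s > 0
        x, y, s = x + a * dx, y + a * dy, s + a * ds
    return x, y, s

# Illustrative LP (not from the paper): trace x(mu) along a grid of mu values
# and measure how much the path turns between consecutive points.
A = np.array([[1.0, 1.0, 1.0], [1.0, 2.0, 0.0]])
b = np.array([3.0, 2.0])
c = np.array([1.0, 3.0, 2.0])
x, y, s = np.ones(3), np.zeros(2), np.ones(3)
points = []
mus = np.geomspace(1.0, 1e-4, 30)
for mu in mus:
    x, y, s = central_path_point(A, b, c, mu, x, y, s)
    points.append(x.copy())
steps = np.diff(np.array(points), axis=0)
for k in range(1, len(steps)):
    u, v = steps[k - 1], steps[k]
    cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-30)
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    print(f"mu={mus[k]:.1e}  turning angle={angle:6.2f} deg")
```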
The regularization of a convex program is exact if all solutions of the regularized problem are also solutions of the original problem for all values of the regularization parameter below some positive threshold. For a general convex program, we show that the regularization is exact if and only if a certain selection problem has a Lagrange multiplier. Moreover, the regularization parameter threshold is inversely related to the Lagrange multiplier. We use this result to generalize an exact regularization result of Ferris and Mangasarian [Appl. Math. Optim., 23 (1991), pp. 266-273] involving a linearized selection problem. We also use it to derive necessary and sufficient conditions for exact penalization, similar to those obtained by Bertsekas [Math. Programming, 9 (1975), pp. 87-99] and by Bertsekas, Nedic, and Ozdaglar [Convex Analysis and Optimization, Athena Scientific, Belmont, MA, 2003]. When the regularization is not exact, we derive error bounds on the distance from the regularized solution to the original solution set. We also show that existence of a "weak sharp minimum" is in some sense close to being necessary for exact regularization. We illustrate the main result with numerical experiments on the $\ell_1$ regularization of benchmark (degenerate) linear programs and semidefinite/second-order cone programs. The experiments demonstrate the usefulness of $\ell_1$ regularization in finding sparse solutions.
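For standard-form problems with $x \ge 0$, the $\ell_1$ term is linear ($\|x\|_1 = e^T x$), so the regularized problem $\min\ c^T x + \delta \|x\|_1$ subject to $Ax = b$, $x \ge 0$ is again an LP. A minimal sketch of this kind of experiment, with made-up data and scipy's LP solver rather than the paper's benchmark set:

```python
import numpy as np
from scipy.optimize import linprog

def l1_regularized_lp(A, b, c, delta):
    """Solve min c^T x + delta*||x||_1  s.t.  Ax = b, x >= 0.
    Since x >= 0, the l1 term is delta * sum(x), so this is still an LP."""
    c_reg = c + delta * np.ones_like(c)
    res = linprog(c_reg, A_eq=A, b_eq=b, bounds=(0, None), method="highs")
    return res.x

# Toy degenerate LP: every feasible point is optimal (c^T x is constant on
# the feasible set), so the original solution set is a whole segment.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 1.0])
c = np.array([1.0, 2.0, 1.0])
for delta in [1e-1, 1e-3, 1e-6]:
    x = l1_regularized_lp(A, b, c, delta)
    print(delta, np.round(x, 4))   # the sparse solution (0, 1, 0) every time
```

For every sufficiently small $\delta$ the regularized problem returns the same sparse point, which is the behavior exact regularization predicts.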
We describe some concepts from the theory of Euclidean Jordan algebras and their use in optimization theory. This includes primal-dual algorithms, optimality conditions, convexity of spectral functions, proofs of some inequalities, and a Jordan-algebraic version of the Schur-Horn theorem.
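As one concrete instance of these concepts (an illustration, not material from the paper): in the Jordan algebra of real symmetric matrices the product is $X \circ Y = (XY + YX)/2$, the unit element is the identity matrix, and the spectral values are the ordinary eigenvalues, so the algebraic trace and determinant coincide with the matrix trace and determinant.

```python
import numpy as np

def jordan_product(X, Y):
    """Jordan product on the algebra of real symmetric matrices."""
    return 0.5 * (X @ Y + Y @ X)

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X = B + B.T                                  # a random symmetric matrix
I = np.eye(4)

# The identity matrix is the unit element: X o I = X.
assert np.allclose(jordan_product(X, I), X)

# Spectral values of X in the Jordan-algebraic sense are its eigenvalues,
# and trace/determinant agree with the usual matrix trace and determinant.
eigvals = np.linalg.eigvalsh(X)
print("spectral values:", np.round(eigvals, 3))
print("trace agrees:", np.isclose(eigvals.sum(), np.trace(X)))
print("det agrees:  ", np.isclose(eigvals.prod(), np.linalg.det(X)))
```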
Two interior-point algorithms are proposed and analyzed for the (local) solution of (possibly) indefinite quadratic programming problems. They are of the Newton-KKT variety in that (much like in the case of primal-dual algorithms for linear programming) search directions for the "primal" variables and the Karush-Kuhn-Tucker (KKT) multiplier estimates are components of the Newton (or quasi-Newton) direction for the solution of the equalities in the first-order KKT conditions of optimality or a perturbed version of these conditions. Our algorithms are adapted from previously proposed algorithms for convex quadratic programming and general nonlinear programming. First, inspired by recent work by P. Tseng based on a "primal" affine-scaling algorithm (à la Dikin) [J. Global Optim., 30 (2004), no. 2, pp. 285-300], we consider a simple Newton-KKT affine-scaling algorithm. Then, a "barrier" version of the same algorithm is considered, which reduces to the affine-scaling version when the barrier parameter is set to zero at every iteration, rather than to the prescribed value. Global and local quadratic convergence are proved under nondegeneracy assumptions for both algorithms. Numerical results on randomly generated problems suggest that the proposed algorithms may be of great practical interest.
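To illustrate the Newton-KKT idea in the simplest setting (a generic sketch with convex, made-up data, not the algorithms analyzed in the paper): for $\min\ \tfrac12 x^T H x + c^T x$ subject to $x \ge 0$, the perturbed first-order conditions are $Hx + c - s = 0$, $x_i s_i = \mu$, $x, s > 0$, and a damped Newton iteration on them gives a basic primal-dual interior-point method.

```python
import numpy as np

def newton_kkt_qp(H, c, mu0=1.0, sigma=0.2, tol=1e-8, max_iter=100):
    """Damped Newton iteration on the perturbed KKT conditions of
       min 0.5*x^T H x + c^T x  s.t. x >= 0:
           H x + c - s = 0,   x_i * s_i = mu,   x, s > 0."""
    n = len(c)
    x = np.ones(n); s = np.ones(n); mu = mu0
    for _ in range(max_iter):
        r_d = H @ x + c - s                  # dual residual
        if max(np.linalg.norm(r_d), x @ s) < tol:
            break
        # Eliminate ds using  S dx + X ds = mu e - X S e:
        #   (H + X^{-1} S) dx = -r_d + X^{-1}(mu e - X S e)
        Xinv = 1.0 / x
        M = H + np.diag(Xinv * s)
        rhs = -r_d + Xinv * (mu - x * s)
        dx = np.linalg.solve(M, rhs)
        ds = Xinv * (mu - x * s) - Xinv * s * dx
        # Fraction-to-boundary step keeps (x, s) strictly positive.
        alpha = 1.0
        for v, dv in ((x, dx), (s, ds)):
            neg = dv < 0
            if neg.any():
                alpha = min(alpha, 0.995 * np.min(-v[neg] / dv[neg]))
        x, s = x + alpha * dx, s + alpha * ds
        mu = sigma * (x @ s) / n             # shrink the barrier parameter
    return x

H = np.array([[2.0, 0.0], [0.0, 2.0]])       # illustrative convex data
c = np.array([-2.0, 1.0])
print(newton_kkt_qp(H, c))                   # expect roughly [1, 0]
```

Setting the barrier parameter to zero in every iteration, instead of shrinking it by the factor sigma, gives the affine-scaling variant described above.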
We present complexity results on solving real-number standard linear programs LP(A, b, c), where the constraint matrix $A \in \mathbb{R}^{m \times n}$, the right-hand-side vector $b \in \mathbb{R}^m$, and the objective coefficient vector $c \in \mathbb{R}^n$ are real. In particular, we present a two-layered interior-point method and show that LP(A, b, 0), i.e., the linear feasibility problem $Ax = b$ and $x \ge 0$, can be solved in $O(n^{2.5} c(A))$ interior-point method iterations. Here 0 is the vector of all zeros and c(A) is the condition measure of the matrix A defined in [25]. This iteration-complexity bound is smaller by a factor of n than that for a general LP(A, b, c) in [25]. We also prove that the iteration bound can be further reduced to $O(n^{1.5} c(A))$ for LP(A, 0, 0), i.e., for the homogeneous linear feasibility problem. These results are surprising, since the classical view has been that linear feasibility is as hard as linear programming.
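In this notation the feasibility problem is just LP(A, b, 0), an LP whose objective vector is the all-zeros vector. The sketch below reproduces that viewpoint with an off-the-shelf LP solver on made-up data; it does not implement the paper's two-layered interior-point method.

```python
import numpy as np
from scipy.optimize import linprog

def linear_feasibility(A, b):
    """Find x with Ax = b, x >= 0 by solving LP(A, b, 0),
    i.e. an LP with the all-zeros objective vector."""
    n = A.shape[1]
    res = linprog(np.zeros(n), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
    return res.x if res.success else None

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 1.0, 3.0]])
b = np.array([4.0, 6.0])
x = linear_feasibility(A, b)
print(x, np.allclose(A @ x, b), np.all(x >= -1e-9))
```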
Recently, interior-point algorithms have been applied to nonlinear and nonconvex optimization. Most of these algorithms are either primal-dual path-following or affine-scaling in nature, and some of them are conjectured to converge to a local minimum. We give several examples to show that this may be untrue and we suggest some strategies for overcoming this difficulty.
In this paper we present a new iteration-complexity bound for the Mizuno-Todd-Ye predictor-corrector (MTY P-C) primal-dual interior-point algorithm for linear programming. The analysis of the paper is based on the important notion of crossover events introduced by Vavasis and Ye. For a standard-form linear program $\min\{c^T x : Ax = b,\ x \ge 0\}$ with decision variable $x \in \mathbb{R}^n$, we show that the MTY P-C algorithm, started from a well-centered interior-feasible solution with duality gap $n\mu_0$, finds an interior-feasible solution with duality gap less than $n\eta$ in $O(T(\mu_0/\eta) + n^{3.5} \log \bar{\chi}^*_A)$ iterations, where $T(t) = \min\{n^2 \log(\log t), \log t\}$ for all $t > 0$ and $\bar{\chi}^*_A$ is a scaling-invariant condition number associated with the matrix A. More specifically, $\bar{\chi}^*_A$ is the infimum of all the condition numbers $\bar{\chi}_{AD}$, where D varies over the set of positive diagonal matrices. Under the setting of the Turing machine model, our analysis yields an $O(n^{3.5} L_A + \min\{n^2 \log L, L\})$ iteration-complexity bound for the MTY P-C algorithm to find a primal-dual optimal solution, where $L_A$ and $L$ are the input sizes of the matrix A and the data (A, b, c), respectively. This contrasts well with the classical iteration-complexity bound for the MTY P-C algorithm, which depends linearly on L instead of $\log L$.
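A compact sketch of the MTY P-C iteration itself is given below: the predictor step aims at $\mu = 0$ and is truncated so the iterate stays in the wider 2-norm neighborhood $\mathcal{N}_2(0.5)$, and the corrector takes a full centering step back toward $\mathcal{N}_2(0.25)$. The toy data, the backtracking rule for the predictor step length, and the stopping tolerance are illustrative choices, not part of the complexity analysis above.

```python
import numpy as np

def newton_direction(A, x, s, mu_target):
    """Solve the primal-dual Newton system
       A dx = 0,  A^T dy + ds = 0,  S dx + X ds = mu_target*e - X S e."""
    m, n = A.shape
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([np.zeros(n), np.zeros(m), mu_target - x * s])
    d = np.linalg.solve(K, rhs)
    return d[:n], d[n:n + m], d[n + m:]

def in_neighborhood(x, s, beta):
    """2-norm centrality test: ||X S e - mu e|| <= beta * mu."""
    mu = x @ s / len(x)
    return np.linalg.norm(x * s - mu) <= beta * mu

def mty_predictor_corrector(A, x, y, s, tol=1e-8, max_iter=200):
    """Sketch of the Mizuno-Todd-Ye predictor-corrector iteration,
       alternating between the N2(0.25) and N2(0.5) neighborhoods."""
    n = len(x)
    for _ in range(max_iter):
        if x @ s / n < tol:
            break
        # Predictor: aim at mu = 0, take a long step staying in N2(0.5).
        dx, dy, ds = newton_direction(A, x, s, 0.0)
        alpha = 1.0
        while alpha > 1e-12:
            xt, st = x + alpha * dx, s + alpha * ds
            if np.all(xt > 0) and np.all(st > 0) and in_neighborhood(xt, st, 0.5):
                break
            alpha *= 0.9
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
        # Corrector: full centering step back toward N2(0.25).
        mu = x @ s / n
        dx, dy, ds = newton_direction(A, x, s, mu)
        x, y, s = x + dx, y + dy, s + ds
    return x, y, s

# Tiny illustrative LP in standard form with a perfectly centered start.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([3.5])
c = np.array([2.0, 1.0, 4.0])
x = np.array([1.0, 2.0, 0.5])                # A x = b and x * s = 2e below
y = np.zeros(1); s = c - A.T @ y             # A^T y + s = c, s > 0
assert np.allclose(A @ x, b)
x, y, s = mty_predictor_corrector(A, x, y, s)
print("x =", np.round(x, 4), " duality gap =", x @ s)
```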
It is shown that a multi-target linear-quadratic control problem can be reduced to the classical tracking problem in which the target is a convex combination of the original ones. Finding the coefficients of this convex combination reduces to solving a second-order cone programming problem, which can be easily solved using modern interior-point algorithms.
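As a purely generic illustration of the final step (not the paper's actual reduction), the sketch below solves a small second-order cone program over convex-combination weights with cvxpy, which dispatches to a conic interior-point solver; the matrix Q, the vector r, and the norm objective are hypothetical placeholders.

```python
import numpy as np
import cvxpy as cp

# Hypothetical data: columns of Q stand in for the individual targets,
# r is a reference vector; we look for convex-combination weights lam.
rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 3))
r = rng.standard_normal(5)

lam = cp.Variable(3)
objective = cp.Minimize(cp.norm(Q @ lam - r, 2))        # second-order cone objective
constraints = [lam >= 0, cp.sum(lam) == 1]              # convex-combination weights
prob = cp.Problem(objective, constraints)
prob.solve()                                            # conic interior-point solver

print("weights:", np.round(lam.value, 4))
print("optimal value:", round(prob.value, 4))
```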
A real square matrix is said to be a P-matrix if all its principal minors are positive. It is well known that this property is equivalent to: the nonsign-reversal property based on the componentwise product of vectors, the order P-property based on the minimum and maximum of vectors, the uniqueness property in the standard linear complementarity problem, and the (Lipschitzian) homeomorphism property of the normal map corresponding to the nonnegative orthant. In this article, we extend these notions to a linear transformation defined on a Euclidean Jordan algebra. We study some interconnections between these extended concepts and specialize them to the space $S^n$ of all $n \times n$ real symmetric matrices with the semidefinite cone $S^n_+$, and to the space $\mathbb{R}^n$ with the Lorentz cone.
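On a concrete matrix, the defining property can be tested directly by enumerating principal minors; the brute-force sketch below (exponential in n, so only for small examples) is illustrative and unrelated to the paper's Jordan-algebraic extensions.

```python
import numpy as np
from itertools import combinations

def is_p_matrix(M, tol=1e-12):
    """Check whether every principal minor of the square matrix M is positive."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in combinations(range(n), k):
            sub = M[np.ix_(idx, idx)]        # principal submatrix
            if np.linalg.det(sub) <= tol:
                return False
    return True

# A symmetric positive definite matrix is a P-matrix...
print(is_p_matrix(np.array([[2.0, -1.0], [-1.0, 2.0]])))   # True
# ...and so is this nonsymmetric example, while the last one is not.
print(is_p_matrix(np.array([[1.0, 3.0], [0.0, 1.0]])))     # True
print(is_p_matrix(np.array([[0.0, 1.0], [1.0, 0.0]])))     # False (zero diagonal minors)
```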
We propose a method of outer approximations, with each approximate problem smoothed using entropic regularization, to solve continuous min-max problems. By using a well-known uniform error estimate for entropic regularization, convergence of the overall method is shown while allowing each smoothed problem to be solved inexactly. In the case of a convex objective function and linear constraints, an interior-point algorithm is proposed to solve the smoothed problem inexactly. Numerical examples are presented to illustrate the behavior of the proposed method.
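For a finite max, $\max_i f_i(x)$, the entropic smoothing is the log-sum-exp function $\mu \log \sum_i \exp(f_i(x)/\mu)$, whose uniform error is at most $\mu \log m$ for $m$ terms. The sketch below minimizes such a smoothed finite min-max objective with a generic smooth optimizer on made-up data; the paper's setting is continuous min-max handled by outer approximation, which is not implemented here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

# Finite min-max instance: minimize over x the maximum of a few smooth functions.
def fs(x):
    return np.array([(x[0] - 1.0)**2 + x[1]**2,
                     x[0]**2 + (x[1] + 1.0)**2,
                     (x[0] + 1.0)**2 + (x[1] - 1.0)**2])

def smoothed(x, mu):
    """Entropic (log-sum-exp) smoothing of max_i f_i(x); the uniform error
       satisfies 0 <= smoothed - max <= mu * log(m)."""
    return mu * logsumexp(fs(x) / mu)

x = np.zeros(2)
for mu in [1.0, 0.1, 0.01]:                  # tighten the smoothing gradually
    res = minimize(lambda z: smoothed(z, mu), x, method="BFGS")
    x = res.x
    print(f"mu={mu:<5} x={np.round(x, 4)}  max_i f_i(x)={fs(x).max():.4f}")
```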