We develop an interior-point polynomial-time algorithm for a generalized linear-fractional problem. The latter problem can be regarded as a nonpolyhedral extension of the usual linear-fractional programming; a typical example (of interest in control theory) is the minimization of the generalized eigenvalue of a pair of symmetric matrices that depend linearly on the decision variables.
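For concreteness, the generalized eigenvalue minimization mentioned above is commonly written as the following quasiconvex program (standard notation assumed here, not quoted from the paper):

```latex
\begin{aligned}
\min_{x,\;\lambda}\quad & \lambda\\
\text{s.t.}\quad & \lambda B(x) - A(x) \succeq 0,\qquad B(x) \succ 0,\\
& A(x) = A_0 + \textstyle\sum_i x_i A_i,\qquad B(x) = B_0 + \textstyle\sum_i x_i B_i.
\end{aligned}
```

At the optimum, λ is the smallest achievable largest generalized eigenvalue of the pencil (A(x), B(x)); with scalar data the constraints reduce to ratios of affine functions, which is the sense in which the problem extends linear-fractional programming beyond the polyhedral case.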
We propose analyzing interior-point methods using notions of problem-instance size which are direct generalizations of the condition number of a matrix. The notions pertain to linear programming quite generally; the underlying vector spaces are not required to be finite-dimensional and, more importantly, the cones defining nonnegativity are not required to be polyhedral. Thus, for example, the notions are appropriate in the context of semi-definite programming. We prove various theorems to demonstrate how the notions can be used in analyzing interior-point methods. These theorems assume little more than that the interiors of the cones (defining nonnegativity) are the domains of self-concordant barrier functions.
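As a concrete instance of the kind of measure involved (a sketch under assumed notation; the paper's own definitions may differ in detail), a matrix-style condition number for a problem instance with data d can be taken as the scaled reciprocal of its distance to ill-posedness:

```latex
C(d) \;=\; \frac{\lVert d \rVert}{\rho(d)},
\qquad
\rho(d) \;=\; \inf\bigl\{\lVert \Delta d \rVert \;:\; \text{the instance } d + \Delta d \text{ is ill-posed}\bigr\}.
```

Complexity bounds are then naturally stated in terms of log C(d) rather than bit length, which remains meaningful when the underlying spaces are infinite-dimensional or the cones are non-polyhedral.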
The proximal point method for convex optimization has been extended recently through the use of generalized distances (e.g., Bregman distances) instead of the Euclidean one. One advantage of these extensions is the possibility of eliminating certain constraints (mainly positivity) from the subproblems, transforming an inequality constrained problem into a sequence of unconstrained or equality constrained problems. We consider here methods obtained using a certain class of Bregman functions applied to convex quadratic (including linear) programming, which are of the interior-point type. We prove that the limit of the sequence generated by the method lies in the relative interior of the solution set, and furthermore is the closest optimal solution to the initial point, in the sense of the Bregman distance. These results do not hold for the standard proximal point method, i.e., when the square of the Euclidean norm is used as the Bregman distance.
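As an illustration of the mechanism described above (a minimal sketch, not the paper's algorithm; the kernel, solver, and toy problem are assumptions), the proximal subproblems built from an entropy-type Bregman distance stay strictly inside the positive orthant, so the positivity constraint never appears explicitly:

```python
# Minimal sketch: Bregman proximal point iterations with the negative-entropy
# kernel h(x) = sum_j x_j log x_j, whose Bregman distance keeps iterates
# strictly positive, so "x >= 0" is never imposed explicitly.
import numpy as np
from scipy.optimize import minimize

def bregman_entropy(x, y):
    # D_h(x, y) = sum_j [ x_j log(x_j / y_j) - x_j + y_j ]
    return float(np.sum(x * np.log(x / y) - x + y))

def prox_step(f, xk, lam=1.0):
    # x^{k+1} = argmin_x  f(x) + lam * D_h(x, x^k); subproblem solved numerically here
    obj = lambda x: f(x) + lam * bregman_entropy(x, xk)
    bounds = [(1e-12, None)] * len(xk)          # keep the solver inside the domain of h
    return minimize(obj, xk, bounds=bounds).x

# Toy convex quadratic whose nonnegatively constrained minimizer lies on the boundary x2 = 0:
c = np.array([2.0, -1.0])
f = lambda x: 0.5 * np.sum((x - c) ** 2)        # min f(x) s.t. x >= 0, optimum (2, 0)

x = np.array([1.0, 1.0])
for _ in range(30):
    x = prox_step(f, x)
print(x)  # tends to (2, 0), approaching the boundary only in the limit
```

The subproblems are effectively unconstrained in the interior, which is exactly the constraint-elimination advantage the abstract describes.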
We describe a potential reduction method for convex optimization problems involving matrix inequalities. The method is based on the theory developed by Nesterov and Nemirovsky and generalizes Gonzaga and Todd's method for linear programming. A worst-case analysis shows that the number of iterations grows as the square root of the problem size, but in practice it appears to grow more slowly. As in other interior-point methods, the overall computational effort is therefore dominated by the least-squares system that must be solved in each iteration. A type of conjugate-gradient algorithm can be used for this purpose, which results in important savings for two reasons. First, it allows us to take advantage of the special structure the problems often have (e.g., Lyapunov or algebraic Riccati inequalities). Second, we show that the polynomial bound on the number of iterations remains valid even if the conjugate-gradient algorithm is not run until completion, which in practice can greatly reduce the computational effort per iteration. We describe in detail how the algorithm works for optimization problems with L Lyapunov inequalities, each of size m. We prove an overall worst-case operation count of O(m^5.5 L^1.5). The average-case complexity appears to be closer to O(m^4 L^1.5). This estimate is justified by extensive numerical experimentation, and is consistent with other researchers' experience with the practical performance of interior-point algorithms for linear programming. This result means that the computational cost of extending current control theory based on the solution of Lyapunov or Riccati equations to a theory based on the solution of (multiple, coupled) Lyapunov or Riccati inequalities is modest.
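For orientation, problems "involving matrix inequalities" of the kind treated above are usually put in the standard linear-matrix-inequality form below (generic notation, assumed rather than quoted from the paper):

```latex
\min_{x \in \mathbb{R}^{n}} \; c^{T}x
\qquad \text{s.t.} \qquad
F(x) \;=\; F_0 + x_1 F_1 + \cdots + x_n F_n \;\succeq\; 0,
```

with the Lyapunov case obtained by choosing the F_i so that F(x) bundles constraints of the form A^T P(x) + P(x) A ⪯ 0 and P(x) ≻ 0 for an affine matrix function P(x). Broadly, the least-squares system mentioned above is what determines the search direction at each iterate of such a method, which is why its structure dominates the per-iteration cost.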
Authors:
CHENG, Z. Y.; MITCHELL, J. E.
Graduate Student, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York; Associate Professor, Department of Mathematical Sciences, Rensselaer Polytechnic Institute, Troy, New York.
One motivation for the standard primal-dual direction used in interior-point methods is that it can be obtained by solving a least-squares problem. In this paper, we propose a primal-dual interior-point method derived through a modified least-squares problem. The direction used is equivalent to the Newton direction for a weighted barrier function method with the weights determined by the current primal-dual iterate. We demonstrate that the Newton direction for the usual, unweighted barrier function method can be derived through a weighted modified least-squares problem. The algorithm requires a polynomial number of iterations. It enjoys quadratic convergence if the optimal vertex is nondegenerate.
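For context, the standard (unweighted) primal-dual direction referred to above is usually defined, for the pair min c^T x s.t. Ax = b, x ≥ 0 and its dual, as the solution of the Newton/centering system below (common textbook notation, not taken from the paper):

```latex
\begin{aligned}
A\,\Delta x &= b - Ax,\\
A^{T}\Delta y + \Delta s &= c - A^{T}y - s,\\
S\,\Delta x + X\,\Delta s &= \sigma\mu e - XSe,
\end{aligned}
\qquad
X = \operatorname{diag}(x),\;\; S = \operatorname{diag}(s),\;\; \mu = \frac{x^{T}s}{n},\;\; \sigma \in [0,1).
```

The least-squares viewpoint mentioned in the opening sentence recovers this direction from a projection problem in a scaled space; the modification proposed above changes the weighting of that problem using the current primal-dual iterate.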
An optimization method is developed based on ellipsoidal trust regions that are defined by conic functions. It provides a powerful unifying theory from which a variety of interesting and potentially useful optimization algorithms can be derived, in particular conjugate-gradient-like algorithms for nonlinear minimization and Karmarkar-like interior-point algorithms for linear programming.
作者:
LIN, CYNEKOOIE, BFAN, MKHGraduate Student
School of Electrical and Computer Engineering Georgia Institute of Technology Atlanta Georgia. Assistant Professor
School of Electrical and Computer Engineering Georgia Institute of Technology Atlanta Georgia.
In this paper, we propose an interior-point method for minimizing a convex function subject to linear constraints. Our method employs ideas from a method previously studied by Fan and Nekooie in a different context. Under certain assumptions, we show that the proposed method has a fast rate of convergence. A numerical example is included to illustrate the method.
A local analysis of the Iri-Imai algorithm for linear programming is given to demonstrate quadratic convergence under degeneracy. Specifically, we show that the algorithm with an exact line search either terminates after a finite number of iterations, yielding a point on the set of optimal solutions, or converges quadratically to one of the relative analytic centers of the faces of the set of optimal solutions, including vertices. In most cases, the generated sequence converges to one of the optimal vertices; it is rare for the sequence to converge to the relative analytic center of a face whose dimension is greater than or equal to one.
Semi-infinite linear programs often arise as the limit of a sequence of approximating linear programs. Hence, studying the behavior of extensions of linear programming algorithms to semi-infinite problems can yield valuable insight into the behavior of the underlying linear programming algorithm when the number of constraints or the number of variables is very large. In this paper, we study the behavior of the affine-scaling algorithm on a particular semi-infinite linear programming problem. We show that the continuous trajectories converge to the optimal solution but that, for any strictly positive step size, there are starting points for which the discrete algorithm converges to nonoptimal boundary points.
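For readers less familiar with the finite-dimensional algorithm being extended, the sketch below shows a plain primal affine-scaling step on a toy LP (the problem data, step fraction, and code organization are illustrative assumptions, not the semi-infinite construction studied above):

```python
# Minimal sketch of primal affine scaling on a finite toy LP:
#     min c^T x   s.t.   A x = b,  x >= 0
# (an illustrative stand-in for the semi-infinite setting discussed above).
import numpy as np

def affine_scaling_step(A, c, x, gamma=0.9):
    """One iteration: scale by diag(x), estimate the dual, step a fraction gamma toward the boundary."""
    X2 = np.diag(x ** 2)
    y = np.linalg.solve(A @ X2 @ A.T, A @ X2 @ c)   # dual estimate from the scaled normal equations
    d = -X2 @ (c - A.T @ y)                         # search direction; satisfies A d = 0
    neg = d < 0
    alpha = gamma * np.min(x[neg] / -d[neg]) if np.any(neg) else 1.0
    return x + alpha * d

# Toy problem: min x1  s.t.  x1 + x2 = 1,  x >= 0   (optimal vertex (0, 1))
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 0.0])
x = np.array([0.5, 0.5])                            # strictly feasible interior start
for _ in range(25):
    x = affine_scaling_step(A, c, x)
print(x)  # tends toward the optimal vertex (0, 1)
```

On this toy problem the fixed-fraction step is harmless; the point of the analysis above is that once the index set is infinite, any strictly positive step size admits starting points from which the discrete iterates are driven to nonoptimal boundary points.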
We present a log-barrier-based algorithm for linearly constrained convex differentiable programming problems in nonnegative variables, but where the objective function may not be differentiable at points having a zero coordinate. We use an approximate centering condition as a basis for decreasing the positive parameter of the log-barrier term and show that the total number of iterations needed to achieve an ε-tolerance optimal solution is O(|log ε|) × (number of inner-loop iterations). When applied to the n-variable dual geometric programming problem, this bound becomes O(n^2 U/ε), where U is an upper bound on the maximum magnitude of the iterates generated during the computation.
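A generic version of the scheme described above (standard log-barrier notation, assumed here rather than quoted) alternates between approximately solving a barrier subproblem and shrinking the barrier parameter:

```latex
x^{k} \;\approx\; \arg\min_{x}\Bigl\{\, f(x) - \mu_k \sum_{j} \log x_j \;:\; Ax = b \,\Bigr\},
\qquad
\mu_{k+1} = \sigma\,\mu_k,\quad 0 < \sigma < 1,
```

where "approximately" means the inner loop stops once an approximate centering test is satisfied (for instance, a residual of the barrier optimality conditions falling below a multiple of μ_k). Since μ_k decreases geometrically, the number of outer iterations needed to reach an ε-tolerance solution is O(|log ε|), matching the factor in the bound above.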