The problem of maximizing the determinant of a matrix subject to linear matrix inequalities (LMIs) arises in many fields, including computational geometry, statistics, system identification, experiment design, and information and communication theory. It can also be considered as a generalization of the semidefinite programming problem. We give an overview of the applications of the determinant maximization problem, pointing out simple cases where specialized algorithms or analytical solutions are known. We then describe an interior-point method, with a simplified analysis of the worst-case complexity and numerical results that indicate that the method is very efficient, both in theory and in practice. Compared to existing specialized algorithms (where they are available), the interior-point method will generally be slower; the advantage is that it handles a much wider variety of problems.
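For orientation, the determinant maximization (maxdet) problem discussed in this abstract is commonly stated in the following form; this is the standard formulation from the literature, not a quotation of the paper:
\[
\begin{array}{ll}
\text{minimize}   & c^{T}x + \log\det G(x)^{-1} \\
\text{subject to} & G(x) \succ 0, \quad F(x) \succeq 0,
\end{array}
\]
where G(x) = G_0 + x_1 G_1 + \cdots + x_m G_m and F(x) = F_0 + x_1 F_1 + \cdots + x_m F_m are affine symmetric-matrix-valued functions of the variable x \in \mathbf{R}^m. Taking G(x) \equiv 1 makes the log-det term vanish and leaves an ordinary semidefinite program, which is the sense in which the problem generalizes SDP.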
In a second-order cone program (SOCP) a linear function is minimized over the intersection of an affine set and the product of second-order (quadratic) cones. SOCPs are nonlinear convex problems that include linear and (convex) quadratic programs as special cases, but are less general than semidefinite programs (SDPs). Several efficient primal-dual interior-point methods for SOCP have been developed in the last few years. After reviewing the basic theory of SOCPs, we describe general families of problems that can be recast as SOCPs. These include robust linear programming and robust least-squares problems, problems involving sums or maxima of norms, or with convex hyperbolic constraints. We discuss a variety of engineering applications, such as filter design, antenna array weight design, truss design, and grasping force optimization in robotics. We describe an efficient primal-dual interior-point method for solving SOCPs, which shares many of the features of primal-dual interior-point methods for linear programming (LP): Worst-case theoretical analysis shows that the number of iterations required to solve a problem grows at most as the square root of the problem size, while numerical experiments indicate that the typical number of iterations ranges between 5 and 50, almost independent of the problem size. (C) 1998 Elsevier Science Inc. All rights reserved.
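As a point of reference, an SOCP in the standard form used in this line of work can be written as follows (generic notation assumed here, not quoted from the paper):
\[
\begin{array}{ll}
\text{minimize}   & f^{T}x \\
\text{subject to} & \|A_i x + b_i\|_2 \le c_i^{T}x + d_i, \quad i = 1,\ldots,N,
\end{array}
\]
with variable x \in \mathbf{R}^n. Each constraint requires the affine image (A_i x + b_i,\; c_i^{T}x + d_i) to lie in a second-order cone; setting every A_i and b_i to zero reduces the problem to a linear program, which is why LP appears as a special case.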
This paper deals with regularized penalty-barrier methods for convex programming problems. In the spirit of an iterative proximal regularization approach, an interior-point method is constructed, in which at each step a strongly convex function has to be minimized and the prox-term can be scaled by a variable scaling factor. The convergence of the method is studied for an axiomatically given class of barrier functions. According to the results, a wide class of barrier functions (in particular, logarithmic and exponential functions) can be applied to design special algorithms. For the method with a logarithmic barrier, the rate of convergence is investigated and assumptions that ensure linear convergence are given.
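A schematic of the kind of iteration described above, with notation assumed here for illustration: given the current iterate x^k, a barrier parameter \mu_k > 0, and a prox scaling factor \chi_k > 0, the next iterate solves a strongly convex subproblem such as
\[
x^{k+1} \approx \arg\min_{x}\;\Big\{ f(x) + \mu_k B(x) + \tfrac{\chi_k}{2}\,\|x - x^{k}\|^{2} \Big\},
\]
where f is the convex objective and B is a barrier for the feasible set, for example the logarithmic barrier B(x) = -\sum_i \log(-g_i(x)) for constraints g_i(x) \le 0, or an exponential penalty-barrier. The prox term keeps each subproblem strongly convex even when f is merely convex, and \chi_k plays the role of the variable scaling factor mentioned in the abstract; this is a sketch of the general scheme, not the paper's exact method.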
Extending our previous work (Monteiro and Pang 1996), this paper studies properties of two fundamental mappings associated with the family of interior-point methods for solving monotone nonlinear complementarity problems over the cone of symmetric positive semidefinite matrices. The first of these maps leads to a family of new continuous trajectories which include the central trajectory as a special case. These trajectories completely "fill up" the set of interior feasible points of the problem in the same way as the weighted central paths do the interior of the feasible region of a linear program. Unlike the approach based on the theory of maximal monotone maps taken by Shida and Shindoh (1996) and Shida, Shindoh, and Kojima (1995), our approach is based on the theory of local homeomorphic maps in nonlinear analysis.
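For context, the central trajectory referred to here is usually characterized, for the monotone complementarity problem of finding X, Y \succeq 0 with Y = F(X) and \langle X, Y \rangle = 0, by conditions of the form (standard notation, assumed here rather than taken from the paper)
\[
Y = F(X), \qquad X \succ 0,\; Y \succ 0, \qquad XY = \mu I, \quad \mu > 0.
\]
The analogy drawn in the abstract is with the weighted central paths of linear programming, where the conditions x_i s_i = w_i for a fixed positive weight vector w sweep out the entire interior of the primal-dual feasible region as w varies; the paper's new trajectories play the same space-filling role for the semidefinite problem.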
We discuss here generalized proximal point methods applied to variational inequality problems. These methods differ from the classical proximal point method in that a so-called Bregman distance substitutes for the Euclidean distance and forces the sequence generated by the algorithm to remain in the interior of the feasible region, assumed to be nonempty. We consider here the case in which this region is a polyhedron (which includes linear and nonlinear programming, monotone linear complementarity problems, and also certain nonlinear complementarity problems), and present two alternatives to deal with linear equality constraints. We prove that the sequences generated by any of these alternatives, which in general are different, converge to the same point, namely the solution of the problem which is closest, in the sense of the Bregman distance, to the initial iterate, for a certain class of operators. This class consists essentially of point-to-point and differentiable operators such that their Jacobian matrices are positive semidefinite (not necessarily symmetric) and their kernels are constant in the feasible region and invariant through symmetrization. For these operators, the solution set of the problem is also a polyhedron. Thus, we extend a previous similar result which covered only linear operators with symmetric and positive-semidefinite matrices.
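For the reader's convenience, the Bregman distance induced by a strictly convex, differentiable function h is
\[
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
\]
and a generic Bregman proximal step for a variational inequality with operator T finds x^{k+1} in the interior of the feasible set such that
\[
0 \in \lambda_k T(x^{k+1}) + \nabla h(x^{k+1}) - \nabla h(x^{k}), \qquad \lambda_k > 0.
\]
This is a standard statement of the method, given here for orientation rather than quoted from the paper: with h(x) = \tfrac{1}{2}\|x\|^2 it reduces to the classical proximal point iteration, while choosing an h whose gradient blows up at the boundary (for example h(x) = \sum_j x_j \log x_j on the nonnegative orthant) is what keeps the iterates interior.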
The computation of the analytic center of the solution set can be important in linear programming applications where it is desirable to obtain a solution that is not near the relative boundary of the solution set. In this work we discuss the effective computation of the analytic center solution by the use of primal-dual interior-point methods. A primal-dual interior-point algorithm designed for effectively computing the analytic-center solution is proposed, and theory and numerical results are presented.
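Concretely, for a primal problem \min\{c^{T}x : Ax = b,\ x \ge 0\} with optimal value z^{*} and a bounded optimal face, the analytic center of the solution set can be described as the solution of
\[
\max \Big\{ \sum_{j \in B} \log x_j \;:\; Ax = b,\; c^{T}x = z^{*},\; x_j = 0 \ (j \notin B) \Big\},
\]
where B is the index set of variables that are positive in some optimal solution. This is a standard characterization given here for orientation; the contribution described in the abstract is a primal-dual interior-point algorithm whose iterates are steered toward this particular optimal solution rather than an arbitrary point of the optimal face.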
Authors: Todd, M.J.; Ye, Y.Y. Affiliations: Cornell Univ, Sch Operat Res & Ind Engn, Engn & Theory Ctr, Ithaca, NY 14853, USA; Univ Iowa, Dept Management Sci, Iowa City, IA 52242, USA.
In exact arithmetic, the simplex method applied to a particular linear programming problem instance with real data either shows that it is infeasible, shows that its dual is infeasible, or generates optimal solutions to both problems. Most interior-point methods, on the other hand, do not provide such clear-cut information. If the primal and dual problems have bounded nonempty sets of optimal solutions, they usually generate a sequence of primal or primal-dual iterates that approach feasibility and optimality. But if the primal or dual instance is infeasible, most methods give less precise diagnostics. There are methods with finite convergence to an exact solution even with real data. Unfortunately, bounds on the required number of iterations for such methods applied to instances with real data are very hard to calculate and often quite large. Our concern is with obtaining information from inexact solutions after a moderate number of iterations. We provide general tools (extensions of the Farkas lemma) for concluding that a problem or its dual is likely (in a certain well-defined sense) to be infeasible, and apply them to develop stopping rules for a homogeneous self-dual algorithm and for a generic infeasible-interior-point method for linear programming. These rules allow precise conclusions to be drawn about the linear programming problem and its dual: either near-optimal solutions are produced, or we obtain "certificates" that all optimal solutions, or all feasible solutions to the primal or dual, must have large norm. Our rules thus allow more definitive interpretation of the output of such an algorithm than previous termination criteria. We give bounds on the number of iterations required before these rules apply. Our tools may also be useful for other iterative methods for linear programming. (C) 1998 The Mathematical Programming Society, Inc. Published by Elsevier Science B.V.
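The classical Farkas lemma that these tools extend states that, for given A and b, exactly one of the two systems
\[
\text{(i)}\;\; Ax = b,\ x \ge 0 \qquad\qquad \text{(ii)}\;\; A^{T}y \le 0,\ b^{T}y > 0
\]
has a solution, so a solution of (ii) is a certificate that the primal is infeasible. The paper's extensions (not reproduced here) are approximate counterparts of such certificates: from an inexact interior-point iterate one concludes that every feasible or optimal point of the primal or dual must have large norm, which is the precise sense in which infeasibility becomes "likely".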
Interior-point algorithms are a new class of optimization routines which exhibit several theoretical and practical advantages over other methods for linear and quadratic programming. One goal of this work was to investigate the question of whether interior-point methods would prove as efficient when used to solve nonlinear engineering design problems. This was tested by using a primal-dual interior-point solver in sequential linear programming and sequential quadratic programming paradigms. The results show that interior-point methods do perform well for large-scale nonlinear engineering optimization. Interior-point methods take fewer iterations in less time to solve large-scale problems than simplex-based algorithms, and are immune to constraint degeneracy. These two features make interior-point methods well suited to engineering optimization. A second focus of this work was the development of fuzzy heuristics to make the sequential linear and quadratic programming algorithms adaptable. While most optimization algorithms are derived from mathematical theory, certain algorithm parameters are often chosen from some acceptable range of values based on a programmer's judgement. Fuzzy logic was chosen to adaptively modify these values "on-the-fly". The resulting adaptive sequential linear programming algorithm not only performed better than the original sequential linear programming algorithm, but also performed as well or better than state-of-the-art sequential quadratic programming algorithms when solving large-scale problems. The simplicity of the sequential linear programming algorithm, coupled with adaptability from the fuzzy rules, resulted in a robust, high-performance algorithm.
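To make the sequential linear programming paradigm concrete, here is a minimal sketch of an SLP loop with a simple trust region, written against SciPy's linprog. The toy problem, step-acceptance rule, and parameter values are illustrative assumptions made here; the thesis uses its own primal-dual interior-point LP solver and its own (fuzzy) update rules, which are not reproduced.

import numpy as np
from scipy.optimize import linprog

def f(x):        # illustrative nonlinear objective
    return (x[0] - 2.0)**2 + (x[1] - 1.0)**2

def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def g(x):        # illustrative nonlinear inequality constraint, g(x) <= 0
    return np.array([x[0]**2 + x[1]**2 - 2.0])

def jac_g(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])

x = np.array([0.5, 0.5])
radius = 1.0                         # trust-region radius on the step d
for k in range(30):
    # LP subproblem: minimize grad_f(x)^T d  s.t.  g(x) + jac_g(x) d <= 0,  |d_i| <= radius
    res = linprog(c=grad_f(x),
                  A_ub=jac_g(x), b_ub=-g(x),
                  bounds=[(-radius, radius)] * len(x),
                  method="highs")
    if not res.success:
        break
    d = res.x
    if np.linalg.norm(d) < 1e-8:     # negligible step: stop
        break
    # Crude acceptance rule: keep the step if it reduces f and stays feasible.
    if f(x + d) < f(x) and np.all(g(x + d) <= 1e-8):
        x = x + d
    else:
        radius *= 0.5                # otherwise shrink the trust region and retry
print("approximate solution:", x)

The adaptive algorithm described in the abstract replaces fixed choices such as the trust-region shrink factor with values tuned on the fly by fuzzy rules, and swaps the LP subproblem solver for an interior-point method.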
In this paper, an interior-point algorithm for linear programs is adapted for solving multistage stochastic linear programs. The algorithm is based on Monteiro and Adler's path-following algorithm for deterministic linear programs. In practice, the complexity of the algorithm is linear with respect to the size of the sample space. The algorithm starts from a feasible solution of the problem and proceeds along a path of random vectors. The cubic polynomial complexity of the algorithm for deterministic linear programs is derived from the calculations of the Newton steps. In the algorithm developed in this paper, the probabilistic structure of the problem is taken into consideration while calculating a Newton step and the size of the sample space appears linearly in the complexity. The development of an algorithm that requires a relatively small number of arithmetic operations, in terms of the sample space size, allows the use of the algorithm for multistage stochastic linear programs with a very large number of scenarios.
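For reference, path-following methods of the Monteiro-Adler type applied to a linear program \min\{c^{T}x : Ax = b,\ x \ge 0\} follow the central path defined by
\[
Ax = b, \qquad A^{T}y + s = c, \qquad x_i s_i = \mu \ \ (i = 1,\ldots,n), \qquad x, s > 0,
\]
and each iteration computes a Newton step (\Delta x, \Delta y, \Delta s) for these equations with a reduced \mu, which amounts to solving a linear system built from A, X = \mathrm{diag}(x), and S = \mathrm{diag}(s). In the multistage stochastic setting the constraint matrix has a block structure indexed by scenarios, and the abstract's claim is that exploiting this structure in the Newton-step computation keeps the per-iteration work linear in the number of scenarios; the equations above are the generic ones, not the paper's specific factorization.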