A new approach that uses interior-point methods is presented for teaching optimization theory to electrical and computer engineers. The approach lets students tackle design projects, such as VLSI circuit layout, that reduce to solving linear systems of equations. Because the computation reduces to solving systems of symmetric positive-definite equations, the material can be taught as part of a numerical methods course for electrical engineers. A simple junior-level project is described for teaching interior-point optimization and the solution of linear systems of equations. Additional material suitable for senior- or graduate-level courses on this topic is also suggested.
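The pedagogical point here is that each interior-point iteration boils down to one symmetric positive-definite solve. A minimal sketch of that reduction, using a primal affine-scaling step for a standard-form LP (the data layout and the step fraction are illustrative assumptions, not the article's course material):

```python
import numpy as np

def affine_scaling_step(A, c, x, step_frac=0.66):
    """One primal affine-scaling iteration for  min c^T x  s.t.  A x = b, x > 0,
    starting from a strictly feasible x.

    The only linear algebra required is a solve with the symmetric
    positive-definite matrix  A D^2 A^T,  where D = diag(x).
    """
    d2 = x ** 2                          # diagonal of D^2
    M = (A * d2) @ A.T                   # A D^2 A^T (SPD when A has full row rank)
    L = np.linalg.cholesky(M)            # the SPD factorization students already know
    y = np.linalg.solve(L.T, np.linalg.solve(L, (A * d2) @ c))   # dual estimate
    z = c - A.T @ y                      # reduced costs
    dx = -d2 * z                         # primal search direction
    neg = dx < 0
    alpha = step_frac * np.min(x[neg] / -dx[neg]) if neg.any() else 1.0
    return x + alpha * dx                # new iterate, still strictly positive
```

Iterating this step drives c^T x downward while keeping x interior; the only factorization involved is the Cholesky decomposition of A D^2 A^T, which is exactly the kind of SPD solve covered in a numerical methods course.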
We give two results related to Gonzaga's recent paper showing that lower bounds derived from the Todd-Burrell update can be obtained by solving a one-variable linear programming problem involving the centering direction and the affine direction. We show how these results may be used to update the primal solution when using the dual affine variant of Karmarkar's algorithm. This leads to a dual projective algorithm.
Karmarkar's algorithm for linear programming was published in 1984, and it is highly important to both theory and practice. On the practical side, some of its variants have been found to be far more efficient than the simplex method on a wide range of very large calculations, while its polynomial-time properties are fundamental to research on complexity. These properties depend on the fact that each iteration reduces a "potential function" by an amount that is bounded away from zero, the bound being independent of all the coefficients that occur. It follows that, under mild conditions on the initial vector of variables, the number of iterations needed to achieve a prescribed accuracy in the final value of the linear objective function is at most a multiple of n, where n is the number of inequality constraints. By considering a simple example that allows n to be arbitrarily large, we deduce analytically that the magnitude of this complexity bound is correct. Specifically, we prove that the solution of the example by Karmarkar's original algorithm can require about n/20 iterations. Further, we find that the algorithm makes changes to the variables that are closely related to the steps of the simplex method.
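For context, the potential function referred to is, in Karmarkar's standard-form setting (stated here as a sketch rather than a quotation from the paper),

```latex
\[
  \phi(x) \;=\; n \ln\!\bigl(c^{\top}x\bigr) \;-\; \sum_{j=1}^{n} \ln x_j ,
\]
```

and each iteration guarantees a decrease \(\phi(x^{k+1}) \le \phi(x^{k}) - \delta\) for a fixed constant \(\delta > 0\) independent of the data. Reducing \(\phi\) by an amount proportional to n, as a prescribed accuracy in c^T x requires, therefore takes at most a multiple of n iterations, which is the bound whose sharpness the paper establishes.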
The purpose of this study is to broaden the scope of projective transformation methods in mathematical programming, both in terms of theory and algorithms. We start by generalizing the concept of the analytic center of a polyhedral system of constraints to the w-center of a polyhedral system, which stands for weighted center, where there is a positive weight on the logarithmic barrier term for each inequality constraint defining the polyhedron X. We prove basic results regarding contained and containing ellipsoids centered at the w-center of the system X. We next shift attention to projective transformations, and we exhibit an elementary projective transformation that transforms the polyhedron X to another polyhedron Z, and that transforms the current interior point to the w-center of the transformed polyhedron Z. We work throughout with a polyhedral system of the most general form, namely one with both inequality and equality constraints. This theory is then applied to the problem of finding the w-center of a polyhedral system X. We present a projective transformation algorithm, which is an extension of Karmarkar's algorithm, for finding the w-center of the system X. At each iteration, the algorithm either achieves a fixed constant objective function improvement or converges superlinearly to the optimal solution. The algorithm produces upper bounds on the optimal value at each iteration. The direction chosen at each iteration is shown to be a positively scaled Newton direction. This broadens a result of Bayer and Lagarias regarding the connection between projective transformation methods and Newton's method. Furthermore, the algorithm specializes to Vaidya's algorithm when used with a line search, and so shows that Vaidya's algorithm is superlinearly convergent as well. Finally, we show how the algorithm can be used to construct well-scaled containing and contained ellipsoids at near-optimal solutions to the w-center problem.
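In the notation the abstract suggests (a sketch of the definition; the paper gives the precise general form including equality constraints), the w-center of a system X = {x : Ax ≤ b} with weights w_i > 0 is the maximizer of the weighted logarithmic barrier,

```latex
\[
  x_w \;=\; \arg\max_{Ax < b} \;\sum_{i} w_i \,\ln\!\bigl(b_i - a_i^{\top} x\bigr),
\]
```

which reduces to the analytic center when all weights w_i equal one.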
ISBN (digital): 9781611970951
ISBN (print): 9780898713220
Developments in optimization theory, including emphasis on large problems and on interior-point methods for linear programming, have begun to appear in production software. Here is a reference tool that includes discussions of these areas and names software packages that incorporate the results of theoretical research. After an introduction to the major problem areas in optimization and an outline of the algorithms used to solve them, a data sheet is presented for each of the 75 software packages and libraries in the authors' survey. These data sheets include information on the capabilities of the packages, how to obtain them, and addresses for further information.
Standard optimization paradigms are addressed: linear, quadratic, and nonlinear programming; network optimization; unconstrained and bound-constrained optimization; least-squares problems; nonlinear equations; and integer programming. The most practical algorithms for the major fields of numerical optimization are outlined, and the software packages in which they are implemented are described.
This format will aid current and potential users of optimization software in classifying the optimization problem to be solved, determining appropriate algorithms, and obtaining the software that implements those algorithms. Readers need only a basic knowledge of vector calculus and linear algebra to understand this book.
Recently, Zhang, Tapia, and Dennis (Ref. 1) produced a superlinear and quadratic convergence theory for the duality gap sequence in primal-dual interior-point methods for linear programming. In this theory, a basic assumption for superlinear convergence is the convergence of the iteration sequence, and a basic assumption for quadratic convergence is nondegeneracy. Several recent research projects have either used or built on this theory under one or both of the above-mentioned assumptions. In this paper, we remove both assumptions from the Zhang-Tapia-Dennis theory.
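For reference, with duality gap \(\mu_k = (x^k)^{\top} s^k\) at iteration k, the rates at issue are (a sketch of the standard definitions, not a statement taken from the paper):

```latex
\[
  \text{superlinear:}\quad \lim_{k\to\infty} \frac{\mu_{k+1}}{\mu_k} = 0,
  \qquad\qquad
  \text{quadratic:}\quad \mu_{k+1} \le C\,\mu_k^{2} \;\;\text{for some constant } C > 0 .
\]
```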
The objective of this paper is to present, in a tutorial way, the basic ideas involved in Narendra Karmarkar's projective scaling algorithm for solving linear programming problems and to show two of its most important extensions, the dual and the dual affine algorithms. Implementation issues regarding the practical computation of the search direction are also discussed. A new parallel preconditioned conjugate gradient solution technique is introduced. This parallel search-direction solver provides nearly perfect speedup on p processors. A summary of application results on very large scale integration (VLSI) circuit layout and economic dispatch in power systems is presented. In all these applications, the interior-point methods exhibit average speedups of about six to 100 times over the well-known MINOS simplex code.
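The search-direction computation discussed here is a solve with the symmetric positive-definite matrix A D^2 A^T, which preconditioned conjugate gradients handles without forming a factorization. A serial sketch of such a solver (the Jacobi preconditioner and the interface below are illustrative assumptions, not the paper's parallel implementation):

```python
import numpy as np

def pcg(M, rhs, apply_precond, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for an SPD system M y = rhs.

    In interior-point codes M is typically A D^2 A^T; `apply_precond`
    applies an approximate inverse of M.  The matrix-vector products and
    the preconditioner are the natural places to introduce parallelism.
    """
    y = np.zeros_like(rhs)
    r = rhs - M @ y
    z = apply_precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rz / (p @ Mp)        # step length along conjugate direction
        y += alpha * p
        r -= alpha * Mp              # updated residual
        if np.linalg.norm(r) < tol:
            break
        z = apply_precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # next conjugate direction
        rz = rz_new
    return y

# Example with a Jacobi (diagonal) preconditioner -- an illustrative choice:
# y = pcg(M, rhs, lambda r: r / np.diag(M))
```

In a parallel setting, the matrix-vector products and the preconditioner application are the pieces distributed across the p processors.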
Interior methods for optimization were widely used in the 1960s, primarily in the form of barrier methods. However, they were not seriously applied to linear programming because of the dominance of the simplex method...
The nonlinear rescaling principle employs monotone and sufficiently smooth functions to transform the constraints and/or the objective function into an equivalent problem whose classical Lagrangian has important properties on the primal and dual spaces. The application of the nonlinear rescaling principle to constrained optimization problems leads to a class of modified barrier functions (MBFs) and modified barrier function methods (MBFMs). Being classical Lagrangians (CLs) for an equivalent problem, the MBFs combine the best properties of the CLs and classical barrier functions (CBFs) while at the same time being free of their most essential deficiencies. Due to the excellent MBF properties, new characteristics of the dual pair of convex programming problems have been found, and a duality theory for nonconvex constrained optimization has been developed. The MBFMs have up to a superlinear rate of convergence and are to the classical barrier function (CBF) method as the multiplier method for augmented Lagrangians is to the classical penalty function method. Based on the duality theory associated with MBFs, methods for the simultaneous solution of the dual pair of convex programming problems with up to quadratic rates of convergence have been developed. The application of the MBF to linear programming (LP) and quadratic programming (QP) leads to a new type of multiplier method with a much better rate of convergence and lower computational complexity per step than the CBF methods. The numerical realization of the MBFM leads to the Newton Modified Barrier Method (NMBM). The excellent MBF properties allow us to discover that, for any nondegenerate constrained optimization problem, there exists a "hot" start from which the NMBM has a better rate of convergence, a better complexity bound, and greater stability than the interior-point methods based on classical barrier functions.
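One standard form of the modified barrier for min f(x) subject to g_i(x) ≥ 0 (given here as a sketch; the paper contains the precise definitions and assumptions) is

```latex
\[
  F(x,\lambda,k) \;=\; f(x) \;-\; \frac{1}{k}\sum_{i} \lambda_i \,\ln\!\bigl(1 + k\,g_i(x)\bigr),
\]
```

i.e. the classical Lagrangian of the equivalent problem with rescaled constraints \(k^{-1}\ln(1 + k\,g_i(x)) \ge 0\). Unlike the classical logarithmic barrier, it remains finite on the boundary g_i(x) = 0, which is what makes the "hot" start behavior described above possible.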
This paper describes a primal-dual interior-point algorithm for convex nonlinear programming problems subject to linear constraints. The algorithm is based on the path-following idea. Each iteration updates a penalty parameter and finds a Newton step associated with the simplified Karush-Kuhn-Tucker system of equations which characterizes a solution of the logarithmic barrier function problem for that parameter. It is shown that the duality gap is reduced at each iteration by a factor of (1 - δ/n), where δ is positive and depends on some parameters associated with the objective function.
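In a generic path-following setting for min f(x) subject to Ax = b, x ≥ 0, with f convex (a sketch; the paper's exact formulation may differ), the simplified KKT system at barrier parameter μ whose Newton step is computed reads

```latex
\[
  \nabla f(x) - A^{\top}y - s = 0, \qquad Ax = b, \qquad x_j s_j = \mu \;\;(j=1,\dots,n), \qquad x,\,s > 0 .
\]
```

Each iteration decreases μ, takes one Newton step toward the corresponding point on the central path, and thereby contracts the duality gap x^T s by the stated factor.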