We prove a linear bound on the average total curvature of the central path of linear programming theory in terms of the number of independent variables of the primal problem, and independent of the number of constraints.
Phenotype phase plane analysis is a linear optimization procedure which can be used to study the value of the objective function (a desired phenotype) as two variables (external substrates) vary simultaneously. Existing methods for phenotype phase plane analysis are based on computing shadow prices as defined in classical linear programming duality theory. Since different bases may produce different "shadow prices", we present here an alternative way to determine the phenotype phase plane based on so-called interior-point methods, which unambiguously calculates the correct shadow prices. In addition, this work constitutes the first attempt at producing a mathematically rigorous algorithm to compute phenotype phase planes in a systematic way. (c) 2004 Elsevier Ltd. All rights reserved.
We study the geometry of the central paths of linear programming theory. These paths are the solution curves of the Newton vector field of the logarithmic barrier function. This vector field extends to the boundary of the polytope, and we study the main properties of this extension: continuity, analyticity, and singularities.
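The Newton vector field of the logarithmic barrier can be traced numerically. The sketch below, under assumed data, follows central-path points of a standard-form LP min c^T x s.t. Ax = b, x >= 0 by taking damped Newton steps on the barrier function c^T x - mu * sum_i log x_i restricted to Ax = b; the tiny example LP is an illustration, not taken from the paper.

```python
import numpy as np

def newton_barrier_step(A, c, x, mu):
    """One Newton step for min c^T x - mu * sum(log x) s.t. A x = b,
    from a strictly feasible x > 0 (the step keeps A x = b)."""
    n, m = x.size, A.shape[0]
    g = c - mu / x                        # gradient of the barrier objective
    H = np.diag(mu / x**2)                # (diagonal) Hessian
    # KKT system for the equality-constrained Newton step
    K = np.block([[H, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    return np.linalg.solve(K, rhs)[:n]

def central_path_point(A, c, x0, mu, iters=40):
    """Approximate the central-path point for barrier parameter mu."""
    x = x0.copy()
    for _ in range(iters):
        dx = newton_barrier_step(A, c, x, mu)
        t, neg = 1.0, dx < 0
        if neg.any():                     # fraction-to-boundary damping: keep x > 0
            t = min(1.0, 0.9 * np.min(-x[neg] / dx[neg]))
        x = x + t * dx
    return x

# Illustrative LP: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0 (optimum at (1, 0))
A = np.array([[1.0, 1.0]])
c = np.array([1.0, 2.0])
x = np.array([0.5, 0.5])                  # strictly feasible starting point
for mu in [1.0, 0.1, 0.01, 1e-3]:
    x = central_path_point(A, c, x, mu)
    print(f"mu={mu:g}  x={x}")
```

As mu shrinks, the iterate traces the path toward the optimal vertex (1, 0) while staying strictly inside the polytope.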
This study details the implementation of various warm-start strategies. We are interested in the situation in which we have solved one linear programming problem (LP) by an interior-point method, and we then want to solve another LP (the second). The second has the same dimensions as the original, but the data has been perturbed. We use the interior-point iterates for the original LP to determine a warm-start point for the second LP. Thus, when we apply an interior-point method to the second LP from this starting point, we can solve it in fewer iterations than if we had solved it in the typical manner using a "cold start." We give a brief overview and history of linear programming in Chapter 1. We discuss interior-point methods, closely following the notation used by Wright [54], in Chapter 2. Here, we detail Mehrotra's predictor-corrector interior-point algorithm. We describe the least-squares warm-start strategy and the Newton step correction discussed by Yildirim and Wright [60] in Chapter 3. We also outline several other strategies that we used in our implementation. We describe the main aspects of the software package PCx in Chapter 4. This interior-point code based on Mehrotra's predictor-corrector method is our main resource for applying warm-start strategies and running experiments. We detail the exact formulation applied in the code, as well as provide an outline of the main steps of the algorithm. In Chapter 5, we illustrate how we first implemented the least-squares strategy from theory into practice by augmenting the PCx code as necessary. We discuss how we then adapted other strategies based on results from preliminary testing by expanding the PCx code. In Chapter 6, we state the final results obtained by testing our warm-start strategies on the NETLIB problems. We also give some preliminary results for our experiments with random instances of the transportation problem. Finally, we conclude Chapter 6 with a discussion on some improvements to our warm-
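One ingredient of a least-squares warm start can be illustrated in a few lines: given an interior iterate for the original LP and a perturbed right-hand side, a minimum-norm correction restores primal feasibility. This is a simplified sketch of the idea, not the Yildirim-Wright strategy itself (which also adjusts the dual iterate and guards positivity); the data below are assumptions.

```python
import numpy as np

def least_squares_primal_correction(A_new, b_new, x_old):
    """Minimum-norm correction dx so that x_old + dx satisfies the
    perturbed equality constraints A_new x = b_new."""
    residual = b_new - A_new @ x_old
    # lstsq on an underdetermined system returns the minimum-norm solution
    dx, *_ = np.linalg.lstsq(A_new, residual, rcond=None)
    return x_old + dx

A = np.array([[1.0, 1.0, 1.0]])
b_old, b_new = np.array([1.0]), np.array([1.1])   # RHS perturbed from 1.0 to 1.1
x_old = np.array([0.3, 0.3, 0.4])                 # interior iterate, feasible for b_old
x_warm = least_squares_primal_correction(A, b_new, x_old)
print(x_warm)
```

Because the correction is minimum-norm, the warm-start point stays close to the old iterate, which is what makes reusing the interior-point information worthwhile.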
One of the main ingredients of interior-point methods is the generation of iterates in a neighborhood of the central path. Measuring how close the iterates are to the central path is an important aspect of such methods, and it is accomplished by using proximity measure functions. In this paper, we propose a unified presentation of the proximity measures and a study of their relationships and computational role when using a generic primal-dual interior-point method for computing the analytic center of a standard linear optimization problem. We demonstrate that the choice of the proximity measure can greatly affect the performance of the method. It is shown that we may be able to choose the algorithmic parameters and the central-path neighborhood radius (size) in such a way as to obtain comparable results for several measures. We discuss briefly how to relate some of these results to nonlinear programming problems.
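Proximity measures of this kind are easy to state concretely. The sketch below shows two common choices, the 2-norm and infinity-norm deviations of the scaled products x_i s_i / mu from 1; these are standard measures offered as an illustration, not necessarily the exact family studied in the paper.

```python
import numpy as np

def proximity_measures(x, s, mu):
    """Two standard proximity measures of a primal-dual iterate (x, s)
    to the central-path point for barrier parameter mu; a point is
    exactly central when x_i * s_i = mu for every i."""
    v = x * s / mu
    e = np.ones_like(v)
    return np.linalg.norm(v - e), np.max(np.abs(v - e))  # 2-norm, inf-norm

x = np.array([1.0, 2.0])
s = np.array([2.0, 1.0])
print(proximity_measures(x, s, 2.0))   # exactly central: both measures are 0
print(proximity_measures(x, s, 1.0))   # off the path: both measures positive
```

Neighborhoods of the central path are then defined by bounding one of these measures, and the choice of measure (and radius) changes which iterates the method accepts.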
It is well known that solving general integer programs is NP-hard [248]. Most current state-of-the-art commercial software implements the traditional linear/nonlinear programming based branch-and-bound method or its variants. In the worst case, this method, branching on one or more fractional variables, generates a search tree with an exponential number of nodes in the input size of the problem. In 1983 Lenstra [198] proposed an algorithm of branching on hyperplanes for solving mixed integer linear programming problems. His seminal theoretical result shows that the proposed algorithm can solve a general mixed integer linear programming problem in polynomial time in the input size of the problem with a fixed number of integer variables. However, his algorithm has found limited value in practice. In this thesis, we propose generalized branching methods for solving mixed integer linear programs. Our approach integrates recent developments in interior-point methods and lattice basis reduction techniques. Special attention is paid to the problem of finding good branching hyperplanes. We show that the problem of finding a good branching hyperplane can be formulated as a shortest-lattice-vector problem on the adjoint lattice of the kernel lattice of the equality constraints. Basis reduction techniques are employed to solve the shortest lattice vector problem. We show that the reduced adjoint lattice basis available at the root node can be used to bound the total number of nodes in the branch-and-bound tree. Implementation issues are discussed and computational results are presented on a set of hard pure integer linear programs. We also present a primal-dual interior-point algorithm with higher-order corrections, which can be used to solve the analytic center finding problem. Convergence conditions for this algorithm are introduced and a Krylov subspace based higher-order correction is proposed. Numerical results on the Netlib problems are presented to show that this met
Authors: Lee, KS; Geem, ZW
Affiliations: NIST, Mat & Construct Res Div, Bldg & Fire Res Lab, Gaithersburg MD 20899 USA; Univ Maryland, Dept Civil & Environm Engn, College Pk MD 20742 USA
Most engineering optimization algorithms are based on numerical linear and nonlinear programming methods that require substantial gradient information and usually seek to improve the solution in the neighborhood of a starting point. These algorithms, however, reveal a limited approach to complicated real-world optimization problems. If there is more than one local optimum in the problem, the result may depend on the selection of the initial point, and the obtained optimal solution may not necessarily be the global optimum. This paper describes a new harmony search (HS) meta-heuristic algorithm-based approach for engineering optimization problems with continuous design variables. This recently developed HS algorithm is conceptualized using the musical process of searching for a perfect state of harmony. It uses a stochastic random search instead of a gradient search, so that derivative information is unnecessary. Various engineering optimization problems, including mathematical function minimization and structural engineering optimization problems, are presented to demonstrate the effectiveness and robustness of the HS algorithm. The results indicate that the proposed approach is a powerful search and optimization technique that may yield better solutions to engineering problems than those obtained using current algorithms. (c) 2004 Elsevier B.V. All rights reserved.
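The HS procedure sketched in the abstract can be written compactly. Below is a minimal, illustrative implementation assuming the standard components of the algorithm (harmony memory, a memory-considering rate, pitch adjustment, random re-initialization); the parameter values and the sphere test function are assumptions, not those used in the paper.

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                   iters=5000, seed=0):
    """Minimal harmony search sketch for minimizing f over box bounds.
    hms: harmony memory size; hmcr: memory-considering rate;
    par: pitch-adjusting rate; bw: pitch bandwidth (fraction of range)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    hm = rng.uniform(lo, hi, size=(hms, dim))       # harmony memory
    vals = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        for j in range(dim):
            if rng.random() < hmcr:                 # pick a stored value
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:              # pitch adjustment
                    new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
            else:                                   # random re-initialization
                new[j] = rng.uniform(lo[j], hi[j])
        new = np.clip(new, lo, hi)
        v = f(new)
        worst = np.argmax(vals)
        if v < vals[worst]:                         # replace the worst harmony
            hm[worst], vals[worst] = new, v
    best = np.argmin(vals)
    return hm[best], vals[best]

sphere = lambda x: float(np.sum(x**2))              # minimum 0 at the origin
bounds = np.array([[-5.0, 5.0]] * 3)
x_best, f_best = harmony_search(sphere, bounds)
print(x_best, f_best)
```

Note that no derivative of `f` is ever evaluated, which is the point the abstract makes about replacing gradient search with stochastic search.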
Implementations of the primal-dual approach to solving linear programming problems still face issues in maintaining numerical stability and in attaining high accuracy. The major source of numerical problems occurs during the solving of a highly ill-conditioned linear system within the algorithm. We perform a numerical investigation to better understand the numerical behavior related to the solution accuracy of an implementation of an infeasible primal-dual interior-point (IPDIP) algorithm in LIPSOL, a linear programming solver. From our study, we learned that most test problems can achieve higher than the standard 10^-8 accuracy used in practice, and that a high condition number of the ill-conditioned coefficient matrix does not solely determine the attainable solution accuracy. Furthermore, we learned that the convergence of the primal residual is usually the most affected by numerical errors. Most importantly, early satisfaction of the primal equality constraints is often conducive to eventually achieving high solution accuracy.
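The ill-conditioned system mentioned above can be illustrated schematically: primal-dual codes repeatedly solve normal equations with the matrix A D^2 A^T, where D^2 = diag(x/s), and near optimality the ratios x_i/s_i split into very large and very small values. The dimensions, the random A, and the ratio pattern below are assumptions for illustration; this is not LIPSOL's actual system.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))
for mu in [1.0, 1e-4, 1e-8]:
    # Schematic nearly-optimal iterate: a few ratios x_i/s_i blow up like
    # 1/mu while the rest vanish like mu.  With fewer large-ratio columns
    # than rows (a degenerate-looking split), the large columns do not span
    # the row space, so the eigenvalues of A D^2 A^T spread across ~1/mu^2.
    ratios = np.array([1/mu, 1/mu, mu, mu, mu, mu])
    M = (A * ratios) @ A.T            # normal-equations matrix A D^2 A^T
    print(f"mu={mu:g}  cond(A D^2 A^T) = {np.linalg.cond(M):.2e}")
```

The printed condition numbers grow rapidly as mu shrinks, which is the regime in which the abstract's accuracy questions arise.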
ISBN: 9780471787778 (digital)
ISBN: 9780471486008 (print)
COMPREHENSIVE COVERAGE OF NONLINEAR PROGRAMMING THEORY AND ALGORITHMS, THOROUGHLY REVISED AND EXPANDED. Nonlinear Programming: Theory and Algorithms, now in an extensively updated Third Edition, addresses the problem of optimizing an objective function in the presence of equality and inequality constraints. Many realistic problems cannot be adequately represented as a linear program owing to the nature of the nonlinearity of the objective function and/or the nonlinearity of any constraints. The Third Edition begins with a general introduction to nonlinear programming with illustrative examples and guidelines for model construction. Concentration on the three major parts of nonlinear programming is provided: convex analysis, with discussion of topological properties of convex sets, separation and support of convex sets, polyhedral sets, extreme points and extreme directions of polyhedral sets, and linear programming; optimality conditions and duality, with coverage of the nature, interpretation, and value of the classical Fritz John (FJ) and Karush-Kuhn-Tucker (KKT) optimality conditions, the interrelationships between various proposed constraint qualifications, and Lagrangian duality and saddle point optimality conditions; and algorithms and their convergence, with a presentation of algorithms for solving both unconstrained and constrained nonlinear programming problems. Important features of the Third Edition include: new topics such as second interior-point methods, nonconvex optimization, nondifferentiable optimization, and more; updated discussion and new applications in each chapter; detailed numerical examples and graphical illustrations; essential coverage of modeling and formulating nonlinear programs; simple numerical problems; and advanced theoretical exercises. The book is a solid reference for professionals as well as a useful text for students in the fields of operations research, management science, industrial engineering, applied mathematics, and also in engineering d
Linear programming (LP) is the most widely used mathematical model for real-world applications that involve optimization. In the past fifteen years, interior-point methods (IPMs) have become highly successful in solving LP problems, especially large-scale ones, while enjoying good theoretical convergence and complexity properties. Nevertheless, for the IPM that is implemented in most codes, the Mehrotra Predictor-Corrector (MPC) algorithm, no global convergence or complexity theory is available. We construct an algorithm similar to the MPC algorithm, the Primal-Dual Corrector (PDC). On each iteration, this algorithm computes an additional direction, a corrector, to augment the direction of the Primal-Dual (PD) algorithm, the standard primal-dual path-following IPM, in an attempt to improve performance. We present examples, however, that show that the PDC algorithm fails to converge to a solution of the LP problem, in both exact and finite arithmetic, regardless of the choice of stepsize that is employed. The cause of the bad behaviour of the PDC algorithm is that the correctors exert too much influence on the direction in which the iterates move. We attempt to reduce their impact by multiplying them by the square of the stepsize in the expression for the new iterates. The resulting algorithm, the Primal-Dual Second-Order Corrector (PDSOC), overcomes the failure that the PDC algorithm experienced on the aforementioned examples. While the outline of the PDSOC algorithm is known, we present a substantive theoretical interpretation of its construction. Further, we investigate the convergence and complexity properties of the PD and PDSOC algorithms. Using standard terminology, we are concerned with long-step versions of these algorithms, where the iterates belong to a large neighbourhood of the primal-dual central path. We assume that a primal-dual strictly feasible starting point is available. In both algorithms, we use a new stepsize technique suggested by M. J. D. P
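The stepsize-squared weighting of the corrector can be sketched as follows, under simplifying assumptions (feasible iterates, a fixed centering parameter sigma, dense linear algebra). This is an illustrative reading of the update x_+ = x + alpha*dx + alpha^2*dx_corrector, not the actual PDSOC algorithm with its stepsize rule.

```python
import numpy as np

def directions(A, x, y, s, sigma, mu):
    """Predictor direction for the primal-dual Newton system at a feasible
    iterate, plus a corrector whose right-hand side cancels the
    second-order term dx * ds (a simplified sketch)."""
    m, n = A.shape
    K = np.block([
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    rhs = np.concatenate([np.zeros(n + m), sigma * mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, rhs)
    dx, dy, ds = d[:n], d[n:n + m], d[n + m:]
    rhs_c = np.concatenate([np.zeros(n + m), -dx * ds])
    dc = np.linalg.solve(K, rhs_c)
    return (dx, dy, ds), (dc[:n], dc[n:n + m], dc[n + m:])

def pdsoc_style_step(A, x, y, s, alpha, sigma=0.1):
    """One update in which the corrector enters weighted by alpha**2."""
    mu = (x @ s) / x.size
    (dx, dy, ds), (cx, cy, cs) = directions(A, x, y, s, sigma, mu)
    return (x + alpha * dx + alpha**2 * cx,
            y + alpha * dy + alpha**2 * cy,
            s + alpha * ds + alpha**2 * cs)

# Illustrative feasible pair for min x1 s.t. x1 + x2 = 1, x >= 0
# (primal x, dual y with A^T y + s = c for c = (1, 0))
A = np.array([[1.0, 1.0]])
x, y, s = np.array([0.5, 0.5]), np.array([-1.0]), np.array([2.0, 1.0])
mu0 = (x @ s) / 2
x1, y1, s1 = pdsoc_style_step(A, x, y, s, alpha=0.3)
print(mu0, (x1 @ s1) / 2)
```

Because the corrector carries the alpha^2 factor, its influence vanishes faster than the predictor's as the stepsize shrinks, which is exactly the damping the abstract describes.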