This paper presents a wide class of globally convergent interior-point algorithms for the nonlinear complementarity problem with a continuously differentiable monotone mapping, in terms of a unified global convergence theory given by Polak in 1971 for general nonlinear programs. The class of algorithms is characterized as follows: at each iteration, move in a Newton direction for approximating a point on the path of centers of the complementarity problem. Starting from a strictly positive but infeasible initial point, each algorithm in the class either generates an approximate solution with a given accuracy or provides the information that the complementarity problem has no solution in a given bounded set. We present three typical examples of our interior-point algorithms: a horn neighborhood model, a constrained potential reduction model using the standard potential function, and a pure potential reduction model using a new potential function.
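The path-following idea can be illustrated on a one-dimensional monotone complementarity problem. The mapping F(x) = x − a and all numerical choices below are hypothetical, a minimal sketch (not the paper's algorithm) of tracing the path of centers x·F(x) = μ as μ → 0, with one Newton step per centering target:

```python
def solve_scalar_ncp(a=2.0, x=10.0, mu=8.0, tol=1e-12):
    # Monotone NCP: find x >= 0 with F(x) >= 0 and x * F(x) = 0,
    # for the illustrative mapping F(x) = x - a (solution: x = a).
    # Follow the path of centers x * F(x) = mu as mu -> 0, taking
    # one Newton step on h(x) = x * (x - a) - mu per target mu.
    while mu > tol:
        h = x * (x - a) - mu   # residual of the centering condition
        dh = 2.0 * x - a       # its derivative
        x -= h / dh            # Newton step toward the current center
        mu *= 0.5              # shrink the centering parameter
    return x

print(solve_scalar_ncp())  # -> approximately 2.0
```

The starting point x = 10 is strictly positive but not on the path, mirroring the strictly positive, infeasible starts allowed by the class of algorithms.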
A new complexity analysis for constant potential reduction algorithms for linear programming is considered. Using Karmarkar's primal-based potential function, it is shown that for a class of linear programs the number of iterations required is at most O(mt + n log(n)), where m is the number of equations and n is the number of variables in the standard linear programming form, and t can be interpreted as the number of significant figures required in the final solution. It is also shown that this result holds in expectation for some randomly generated problems. At the same time, it is shown that Karmarkar's algorithm requires at least Ω(n) iterations to solve Anstreicher's linear program, if the initial solution is poor but valid.
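As a back-of-the-envelope illustration (not taken from the paper), a constant reduction δ per iteration in a potential function immediately caps the iteration count; the sketch below uses Karmarkar's primal potential with the optimal objective value assumed to be zero:

```python
import math

def karmarkar_potential(c, x):
    # Karmarkar's primal potential function (optimal value taken as 0):
    #   f(x) = n * ln(c . x) - sum_j ln(x_j)
    cx = sum(ci * xi for ci, xi in zip(c, x))
    return len(x) * math.log(cx) - sum(math.log(xj) for xj in x)

def iteration_bound(f_start, f_target, delta):
    # If every iteration reduces the potential by at least delta, at most
    # ceil((f_start - f_target) / delta) iterations are needed -- this
    # arithmetic is where constant-reduction complexity bounds come from.
    return math.ceil((f_start - f_target) / delta)

print(iteration_bound(100.0, 0.0, 0.25))  # -> 400
```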
This article develops an affine-scaling method for linear programming in standard primal form. Its descent search directions are formulated in terms of the null-space of the linear programming matrix, which, in turn, is defined by a suitable basis matrix. We describe some basic properties of the method and an experimental implementation that employs a periodic basis change strategy in conjunction with inexact computation of the search direction by an iterative method, specifically, the conjugate-gradient method with diagonal preconditioning. The results of a numerical study on a number of nontrivial problems representative of those that arise in practice are reported and discussed. A key advantage of the primal null-space affine-scaling method is its compatibility with the primal simplex method. This is considered in the concluding section, along with implications for the development of a more practical implementation.
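The inexact-direction idea can be sketched in a few lines. This is not the paper's implementation, only a generic diagonally preconditioned conjugate-gradient solve on a small symmetric positive-definite system of the kind that arises when computing a search direction:

```python
def pcg_diag(A, b, tol=1e-10, max_iter=100):
    # Conjugate gradients with diagonal (Jacobi) preconditioning.
    # A is a dense SPD matrix given as nested lists; b is the right-hand side.
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]   # inverse of diag(A)
    x = [0.0] * n
    r = b[:]                                   # residual b - A x for x = 0
    z = [Minv[i] * r[i] for i in range(n)]     # preconditioned residual
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

print(pcg_diag([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]))
# -> approximately [0.0909, 0.6364]  (i.e., [1/11, 7/11])
```

Stopping the loop early (a loose `tol`) is exactly what "inexact computation of the search direction" means in this context.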
The $\ell_p$-norm discrete estimation problem $\min_{x \in \mathbb{R}^n} \|b - A^T x\|_p^p$ is troublesome when p is close to unity because the objective function approaches a nonsmooth form as p converges to one. This paper presents an efficient algorithm for solving $\ell_p$-norm problems for all $1 \le p < 2$. When p = 1 it is essentially the method presented by T. F. Coleman and Y. Li [Math. Programming, 56 (1992), pp. 189-222], which is a globally and quadratically convergent algorithm under some nondegeneracy assumptions. The existing iteratively reweighted least-squares (IRLS) method can be obtained from the new approach by updating some dual multipliers in a special fashion. The new method is globally convergent, and it is superlinearly convergent when there is no zero residual at the solution. At each iteration the main computational cost of the new method is the same as that of the IRLS method: solving a reweighted least-squares problem. Numerical experiments indicate that this method is significantly faster than popular iteratively reweighted least-squares methods when p is close or equal to one.
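The IRLS baseline can be sketched in the simplest possible setting: fitting a single constant to data (a hypothetical example, not the paper's algorithm). Each pass solves a trivially weighted least-squares problem with weights $|r_i|^{p-2}$, guarded near zero residuals:

```python
def irls_lp_constant(b, p=1.2, iters=60, eps=1e-8):
    # Fit one constant c minimizing sum_i |b_i - c|**p by IRLS.
    # Each pass reweights by w_i = |b_i - c|**(p - 2); the eps guard
    # avoids division blow-up as a residual approaches zero.
    # For p = 2 the weights are all 1 and c stays at the mean.
    c = sum(b) / len(b)   # plain least-squares start
    for _ in range(iters):
        w = [max(abs(bi - c), eps) ** (p - 2) for bi in b]
        c = sum(wi * bi for wi, bi in zip(w, b)) / sum(w)
    return c

print(irls_lp_constant([0.0, 1.0, 10.0], p=1.1))
# -> approximately 1.0 (near the median-like ell_1 fit)
```

The slowdown the paper addresses is visible here: as p → 1 the weights $|r_i|^{p-2}$ become extreme near small residuals, which is why plain IRLS degrades precisely where the new method is reported to be faster.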
The efficiency of interior-point algorithms for linear programming is related to the effort required to factorize the matrix used to solve for the search direction at each iteration. When the linear program is in symmetric form (i.e., the constraints are $Ax \le b$, $x \ge 0$), there are two mathematically equivalent forms of the search direction, involving different matrices. One form necessitates factoring a matrix whose sparsity pattern has the same form as that of $AA^T$. The other form necessitates factoring a matrix whose sparsity pattern has the same form as that of $A^TA$. Depending on the structure of the matrix A, one of these two forms may produce significantly less fill-in than the other. Furthermore, by analyzing the fill-in of both forms prior to starting the iterative phase of the algorithm, the form with the least fill-in can be computed and used throughout the algorithm. Finally, this methodology can be applied to linear programs that are not in symmetric form, i.e., that contain both equality and inequality constraints.
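The pattern comparison is easy to prototype. The sketch below (illustrative, not the paper's code) counts structural nonzeros of both forms for a pattern with one dense column, a case where $AA^T$ fills in completely while $A^TA$ does not:

```python
from collections import defaultdict

def nnz_AAT(pattern):
    # pattern: set of (i, j) positions where A is structurally nonzero.
    # (i, k) is nonzero in the pattern of A A^T iff rows i and k of A
    # share at least one column.
    cols = defaultdict(set)
    for i, j in pattern:
        cols[j].add(i)
    out = set()
    for rows in cols.values():
        for i in rows:
            for k in rows:
                out.add((i, k))
    return len(out)

def nnz_ATA(pattern):
    # A^T A has the pattern of (A^T)(A^T)^T: swap the row/column roles.
    return nnz_AAT({(j, i) for i, j in pattern})

# A 5x5 pattern: a dense first column plus the diagonal.
A = {(i, 0) for i in range(5)} | {(i, i) for i in range(5)}
print(nnz_AAT(A), nnz_ATA(A))  # -> 25 13
```

Counting both patterns this way before the iterative phase is exactly the cheap pre-analysis the abstract describes: pick whichever form is sparser and keep it for all iterations.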
Recently, various interior-point algorithms related to the Karmarkar algorithm have been developed for linear programming. In this paper, we first show how this "interior-point" philosophy can be adapted to the linear $\ell_1$ problem (in which there are no feasibility constraints) to yield a globally and linearly convergent algorithm. We then show that the linear algorithm can be modified to provide a globally and ultimately quadratically convergent algorithm. This modified algorithm appears to be significantly more efficient in practice than a more straightforward interior-point approach via a linear programming formulation; we present numerical results to support this claim.
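The difference between the two convergence rates claimed above can be seen in a toy iteration (hypothetical, unrelated to the $\ell_1$ algorithm itself): a linearly convergent map cuts the error by a constant factor per step, while Newton's method roughly squares it once close:

```python
def error_trace(update, x0, steps):
    # Track the error |x_k - 1| of an iteration whose fixed point is 1.
    xs = [x0]
    for _ in range(steps):
        xs.append(update(xs[-1]))
    return [abs(x - 1.0) for x in xs]

# Linear convergence: the error halves each step.
lin = error_trace(lambda x: 1.0 + 0.5 * (x - 1.0), 2.0, 5)

# Quadratic convergence: Newton's method for x**2 = 1.
quad = error_trace(lambda x: 0.5 * (x + 1.0 / x), 2.0, 5)

print(lin)   # errors shrink by a factor of 2 per step
print(quad)  # errors roughly square once small
```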
This paper presents extensions and further analytical properties of algorithms for linear programming based only on primal scaling and projected gradients of a potential function. The paper contains extensions and analysis of two polynomial-time algorithms for linear programming. We first present an extension of Gonzaga's $O(nL)$ iteration algorithm, which computes dual variables and does not assume a known optimal objective function value. This algorithm uses only affine scaling, and is based on computing the projected gradient of the potential function
$$F(x, s) = q \ln(x^T s) - \sum_{j=1}^{n} \ln(x_j),$$
where x is the vector of primal variables and s is the vector of dual slack variables, and $q = n + \sqrt{n}$. The algorithm takes either a primal step or recomputes dual variables at each iteration. We next present an alternate form of Ye's $O(\sqrt{n}\,L)$ iteration algorithm, which is an extension of the first algorithm of the paper, but uses the potential function
$$G(x, s) = q \ln(x^T s) - \sum_{j=1}^{n} \ln(x_j s_j),$$
where $q = n + \sqrt{n}$. We use this alternate form of Ye's algorithm to show that Ye's algorithm is optimal with respect to the choice of the parameter q in the following sense. Suppose that $q = n + n^t$ where $t \ge 0$. Then the algorithm will solve the linear program in $O(n^r L)$ iterations, where $r = \max\{t, 1-t\}$. Thus the value of t that minimizes the complexity bound is $t = 1/2$, yielding Ye's $O(\sqrt{n}\,L)$ iteration bound.
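The optimality claim about q reduces to simple arithmetic over the exponent $r = \max\{t, 1-t\}$; a quick grid check (illustrative only):

```python
def iteration_exponent(t):
    # With q = n + n**t, the iteration bound is O(n**r * L),
    # where r = max(t, 1 - t).
    return max(t, 1.0 - t)

# Scanning t over [0, 1]: the exponent is minimized at t = 0.5,
# recovering the O(sqrt(n) L) bound.
best_t = min((t / 100.0 for t in range(101)), key=iteration_exponent)
print(best_t, iteration_exponent(best_t))  # -> 0.5 0.5
```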