Linear programming (LP) is the most widely used mathematical model for real-world applications that involve optimization. In the past fifteen years, interior-point methods (IPMs) have become highly successful in solving LP problems, especially large-scale ones, while enjoying good theoretical convergence and complexity properties. Nevertheless, for the IPM that is implemented in most codes, the Mehrotra Predictor-Corrector (MPC) algorithm, no global convergence or complexity theory is available. We construct an algorithm similar to the MPC algorithm, the Primal-Dual Corrector (PDC) algorithm. On each iteration, this algorithm computes an additional direction, a corrector, to augment the direction of the Primal-Dual (PD) algorithm, the standard primal-dual path-following IPM, in an attempt to improve performance. We present examples, however, showing that the PDC algorithm fails to converge to a solution of the LP problem, in both exact and finite arithmetic, regardless of the choice of stepsize that is employed. The cause of this poor behaviour is that the correctors exert too much influence on the direction in which the iterates move. We attempt to reduce their impact by multiplying them by the square of the stepsize in the expression of the new iterates. The resulting algorithm, the Primal-Dual Second-Order Corrector (PDSOC), overcomes the failure that the PDC algorithm experienced on the aforementioned examples. While the outline of the PDSOC algorithm is known, we present a substantive theoretical interpretation of its construction. Further, we investigate the convergence and complexity properties of the PD and PDSOC algorithms. Using standard terminology, we are concerned with long-step versions of these algorithms, where the iterates belong to a large neighbourhood of the primal-dual central path. We assume that a primal-dual strictly feasible starting point is available. In both algorithms, we use a new stepsize technique suggested by M. J. D. Powell.
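To make the construction concrete, the sketch below (in Python/NumPy, with the function name, the unreduced block solve, and the MPC-style corrector right-hand side chosen here for illustration rather than taken from the thesis) shows how a PDSOC-type step would weight the corrector by the square of the stepsize, whereas a PDC-type step would add predictor and corrector with the same stepsize.

```python
import numpy as np

def pdsoc_step(x, y, s, A, b, c, sigma, alpha):
    """Hypothetical sketch of one PDSOC iteration for min c'x s.t. Ax = b, x >= 0.

    The predictor is the usual primal-dual path-following (PD) direction; the
    corrector reuses the same matrix with a right-hand side built from the
    predictor, and enters the new point with alpha**2, so its influence shrinks
    with the stepsize.  This is an illustration, not the thesis implementation.
    """
    m, n = A.shape
    mu = x @ s / n
    # Unreduced 3x3 block Newton matrix of the primal-dual system
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [np.diag(s),       np.zeros((n, m)), np.diag(x)],
    ])
    # Predictor: standard PD right-hand side with centring parameter sigma
    rhs_pd = np.concatenate([b - A @ x,
                             c - A.T @ y - s,
                             sigma * mu * np.ones(n) - x * s])
    dx, dy, ds = np.split(np.linalg.solve(K, rhs_pd), [n, n + m])
    # Corrector: same matrix, second-order right-hand side -dx*ds
    rhs_c = np.concatenate([np.zeros(m), np.zeros(n), -dx * ds])
    cx, cy, cs = np.split(np.linalg.solve(K, rhs_c), [n, n + m])
    # New iterate: the corrector is weighted by the SQUARE of the stepsize
    return (x + alpha * dx + alpha**2 * cx,
            y + alpha * dy + alpha**2 * cy,
            s + alpha * ds + alpha**2 * cs)
```

In this notation, a PDC-type update would instead return x + alpha*(dx + cx) (and similarly for y and s), which is the variant the examples above show can fail to converge.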
In this paper we present optimization algorithms for image restoration based on the total variation (TV) minimization framework of Rudin, Osher, and Fatemi (ROF). Our approach formulates TV minimization as a second-order cone program which is then solved by interior-point algorithms that are efficient both in practice (using nested dissection and domain decomposition) and in theory (i.e., they obtain solutions in polynomial time). In addition to the original ROF minimization model, we show how to apply our approach to other TV models, including ones that are not solvable by PDE-based methods. Numerical results on a varied set of images are presented to illustrate the effectiveness of our approach.
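As an illustration of the second-order cone reformulation (not the authors' nested-dissection/domain-decomposition solver), the following sketch states a discretized ROF-type model in CVXPY; the fidelity weight lam, the forward-difference stencil, and the unconstrained (Lagrangian) form of the data term are assumptions made here for brevity.

```python
import cvxpy as cp
import numpy as np

def rof_socp(f, lam):
    """Discretized ROF-type model as an SOCP (illustrative sketch).

    minimize  sum(t) + (lam/2) * ||u - f||_2^2
    s.t.      ||(du/dx, du/dy)_ij||_2 <= t_ij   (one small second-order cone
                                                  per pixel)
    The per-pixel Python loop is for clarity only and suits small images.
    """
    m, n = f.shape
    u = cp.Variable((m, n))
    t = cp.Variable((m - 1, n - 1))
    dx = u[1:, :-1] - u[:-1, :-1]          # forward difference in x
    dy = u[:-1, 1:] - u[:-1, :-1]          # forward difference in y
    cones = [cp.SOC(t[i, j], cp.hstack([dx[i, j], dy[i, j]]))
             for i in range(m - 1) for j in range(n - 1)]
    prob = cp.Problem(cp.Minimize(cp.sum(t) + (lam / 2) * cp.sum_squares(u - f)),
                      cones)
    prob.solve()    # delegated to a conic interior-point solver
    return u.value
```

Writing each TV term as an explicit cp.SOC constraint makes the cone structure that an interior-point solver exploits visible; for a small test image one would call, e.g., rof_socp(noisy_image, lam=0.1).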
The mixed integer bilevel linear programming problem (MIBLPP), in which the upper-level decision maker controls all zero-one variables and the lower-level decision maker controls all continuous variables, is discussed. By enumerating the extreme points of the follower's dual problem, the MIBLPP is decomposed into a series of mixed integer linear programming problems. Using mixed integer linear programming methods, a global optimal solution to the MIBLPP can be obtained.
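A minimal sketch of such a decomposition, assuming the lower level is max_y d2'y subject to A x + B y <= b, y >= 0, and that the extreme points of the follower's dual feasible region {u >= 0 : B'u >= d2} have already been enumerated (that enumeration is the expensive step and is omitted). The function and its signature are invented here for illustration; they are not the paper's code.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def solve_miblpp_by_dual_extreme_points(c1, d1, d2, A, B, b, dual_extreme_points):
    """Illustrative decomposition: one MILP per extreme point of the follower's dual.

    Upper level:  max_x  c1@x + d1@y,  x binary
    Lower level:  max_y  d2@y  s.t.  A@x + B@y <= b,  y >= 0
    For each dual extreme point u^k, we require u^k to be dual-optimal and
    impose strong duality, then solve the resulting MILP over (x, y).
    """
    n1, n2 = len(c1), len(d1)
    best = None
    for k, uk in enumerate(dual_extreme_points):
        obj = -np.concatenate([c1, d1])                  # milp minimizes
        cons = [LinearConstraint(np.hstack([A, B]), -np.inf, b)]   # A@x + B@y <= b
        # Strong duality with u^k:  d2@y = uk@(b - A@x)
        cons.append(LinearConstraint(
            np.concatenate([uk @ A, d2])[None, :], uk @ b, uk @ b))
        # u^k must attain the dual minimum:  uk@(b - A@x) <= uj@(b - A@x) for all j
        for j, uj in enumerate(dual_extreme_points):
            if j == k:
                continue
            row = np.concatenate([(uj - uk) @ A, np.zeros(n2)])[None, :]
            cons.append(LinearConstraint(row, -np.inf, (uj - uk) @ b))
        res = milp(obj,
                   constraints=cons,
                   integrality=np.concatenate([np.ones(n1), np.zeros(n2)]),
                   bounds=Bounds(np.zeros(n1 + n2),
                                 np.concatenate([np.ones(n1), np.full(n2, np.inf)])))
        if res.success and (best is None or -res.fun > best[0]):
            best = (-res.fun, res.x[:n1].round(), res.x[n1:])
    return best   # (value, x*, y*), or None if every subproblem is infeasible
```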
A major objective of modelling geophysical features, biological objects, financial processes and many other irregular surfaces and functions is to develop “shape-preserving” methodologies for smoothly interpolating bivariate data with sudden changes in magnitude or spacing. Shape preservation usually means the elimination of extraneous non-physical oscillations. Classical splines do not preserve shape well in this sense. Empirical experiments have shown that the recently proposed cubic L1 splines are capable of providing C1-smooth, shape-preserving, multi-scale interpolation of arbitrary data, including data with abrupt changes in spacing and magnitude, with no need for node adjustment or other user input. However, a theoretical treatment of bivariate cubic L1 splines is still lacking. The currently available approximation algorithms are not able to generate the exact coefficients of a bivariate cubic L1 spline. For theoretical treatment and algorithm development, we propose to solve bivariate cubic L1 spline problems in a generalized geometric programming framework. Our framework includes a primal problem, a geometric dual problem with a linear objective function and convex cubic constraints, and a linear system for dual-to-primal transformation. We show that bivariate cubic L1 splines indeed preserve linearity under some mild conditions. Since solving the dual geometric program involves heavy computation, to improve computational efficiency we further develop three methods for generating bivariate cubic L1 splines: a tensor-product approach that generates a good approximation for large-scale bivariate cubic L1 splines; a primal-dual interior-point method that obtains discretized bivariate cubic L1 splines robustly for small and medium-size problems; and a compressed primal-dual method that efficiently and robustly generates discretized bivariate cubic L1 splines of large size.
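For orientation, cubic L1 splines are defined by minimizing an L1 (rather than L2) measure of second derivatives over C1 piecewise-cubic interpolants; schematically, the primal problem behind the bivariate framework has a form like the display below. The exact weighting of the second-derivative terms and the finite-dimensional parametrization follow the authors' formulation and are not reproduced here; the display is only meant to indicate why the objective is nonsmooth and why a geometric-programming dual is attractive.

```latex
\min_{z \in \mathcal{S}} \; \int_{\Omega}
  \Bigl( \Bigl|\tfrac{\partial^{2} z}{\partial x^{2}}\Bigr|
       + \Bigl|\tfrac{\partial^{2} z}{\partial x\,\partial y}\Bigr|
       + \Bigl|\tfrac{\partial^{2} z}{\partial y^{2}}\Bigr| \Bigr)\, dx\, dy
\qquad \text{s.t.} \quad z(x_i, y_j) = f_{ij} \ \text{at the data sites},
```

where S denotes the C1 piecewise-cubic functions on the chosen grid.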
One of the critical problems in the call center industries is the staffing problem, since they must face variable demands and because staff costs represent a major part of the costs of these industries. From a modelin...
We study a local feature of two interior-point methods: a logarithmic barrier function method and a primal-dual method. In particular, we provide an asymptotic analysis of the radius of the sphere of convergence of Newton's method on two equivalent systems associated with the two aforementioned interior-point methods for nondegenerate nonlinear programs. We show that the radii of the spheres of convergence have different asymptotic behavior as the two methods attempt to follow a solution trajectory {x(μ)} that, under suitable conditions, converges to a solution as μ → 0. We show that, in the case of the barrier function method, the radius of the sphere of convergence of Newton's method is Θ(μ), while for the primal-dual method the radius is bounded away from zero as μ → 0. This work is an extension of the authors' earlier work (Ref. 1) on linear programs.
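For concreteness, in the linear programming case treated in the authors' earlier work (Ref. 1), the two equivalent systems to which Newton's method is applied can be written schematically as follows (a standard recollection, not the nonlinear-programming systems analysed in this paper):

```latex
% Log-barrier optimality conditions in (x, y):
A^{\mathsf T} y + \mu X^{-1} e = c, \qquad A x = b
% Equivalent primal-dual system in (x, y, s), with s = \mu X^{-1} e:
A^{\mathsf T} y + s = c, \qquad A x = b, \qquad X S e = \mu e
```

Here X = diag(x), S = diag(s), and e is the vector of ones; both systems characterize the same point x(μ) on the central path, yet Newton's method behaves very differently on them, which is what the asymptotic radii above quantify.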
In practice, many large-scale linear programming problems are too large to be solved effectively because of the computer's speed and/or memory limitations, even though today's computers have many more capabilities than before. Algorithms are exploited to solve such large linear programming problems in either a sequential or a parallel computing environment. This study focuses on two parallel algorithms for solving large-scale linear programming problems efficiently. The first parallel decomposition algorithm discussed in this study is based on the decomposition principle for problems with a special block-angular structure. The theory of the decomposition principle is first examined. Since the subproblems of a linear programming problem can fall into any of three possible cases (an optimal solution, an unbounded solution, or no solution), examples are provided for solving the problem when its subproblems are in any of these cases. The concept of extreme directions is discussed because of its direct connection with the unbounded solution case. A parallel computation code that can handle all these cases is implemented in this study using the decomposition principle, and its performance is tested on large-scale linear programming problems. Only problems with the special block-angular structure can be solved with the decomposition principle. For general linear programming problems, this study proposes a new decomposition algorithm named “division by the interior point”. The idea of this new algorithm is as follows: given an interior point found inside the feasible region, divide the feasible region into multiple subregions and use multiple processors to solve the problem on each subregion. This new algorithm is first demonstrated with a few small numerical examples. A parallel computation code based on this new idea is implemented and tested on large-scale linear programming problems.
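For reference, the decomposition principle replaces a block-angular LP with a restricted master problem posed over the extreme points and extreme directions of the blocks' feasible regions, which is where the three subproblem cases and the extreme directions mentioned above enter; a schematic master problem (notation chosen here, not taken from the study) reads:

```latex
\min \; \sum_{j}\sum_{k} \bigl(c_j^{\mathsf T} x_j^{k}\bigr)\,\lambda_{jk}
      + \sum_{j}\sum_{\ell} \bigl(c_j^{\mathsf T} d_j^{\ell}\bigr)\,\mu_{j\ell}
\quad \text{s.t.} \quad
\sum_{j}\sum_{k} \bigl(A_j x_j^{k}\bigr)\lambda_{jk}
  + \sum_{j}\sum_{\ell} \bigl(A_j d_j^{\ell}\bigr)\mu_{j\ell} = b_0,
\qquad \sum_{k} \lambda_{jk} = 1 \ (\forall j),
\qquad \lambda,\ \mu \ge 0,
```

where x_j^k and d_j^ℓ are the extreme points and extreme directions of block j's feasible set, A_j the coupling-constraint blocks, and b_0 the coupling right-hand side; pricing the λ and μ columns reduces to independent block subproblems, which is what makes the parallel implementation natural.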
Implementations of the primal-dual approach to solving linear programming problems still face issues in maintaining numerical stability and in attaining high accuracy. The major source of numerical problems occurs during the solution of a highly ill-conditioned linear system within the algorithm. We perform a numerical investigation to better understand the numerical behavior related to the solution accuracy of an implementation of an infeasible primal-dual interior-point (IPDIP) algorithm in LIPSOL, a linear programming solver. From our study, we learned that most test problems can achieve higher than the standard 10^-8 accuracy used in practice, and that a high condition number of the ill-conditioned coefficient matrix does not solely determine the attainable solution accuracy. Furthermore, we learned that the convergence of the primal residual is usually most affected by numerical errors. Most importantly, early satisfaction of the primal equality constraints is often conducive to eventually achieving high solution accuracy.