ISBN (print): 9780898716436
This textbook provides a self-contained introduction to linear programming using MATLAB software to elucidate the development of algorithms and theory. Early chapters cover linear algebra basics, the simplex method, duality, the solving of large linear problems, sensitivity analysis, and parametric linear programming. In later chapters, the authors discuss quadratic programming, linear complementarity, interior-point methods, and selected applications of linear programming to approximation and classification problems. Exercises are interwoven with the theory presented in each chapter, and two appendices provide additional information on linear algebra, convexity, nonlinear functions, and on available MATLAB commands, respectively. Readers can access MATLAB codes and associated mex files at a Web site maintained by the authors. Only a basic knowledge of linear algebra and calculus is required to understand this textbook, which is geared toward junior- and senior-level undergraduate students, first-year graduate students, and researchers unfamiliar with linear programming.
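The textbook itself works in MATLAB; as a rough analogue of the kind of linear program it develops, a minimal instance can be set up and solved with SciPy's `linprog` (illustrative data, not the book's code):

```python
from scipy.optimize import linprog
import numpy as np

# Minimize c^T x  subject to  A_ub @ x <= b_ub,  x >= 0.
# Small illustrative instance: maximize x1 + 2*x2 via negated objective.
c = [-1.0, -2.0]
A_ub = [[1.0, 1.0],
        [1.0, 3.0]]
b_ub = [4.0, 6.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)   # optimal vertex (3, 1) with objective value -5
```

The default HiGHS backend returns a vertex solution, matching the simplex-method viewpoint the early chapters take.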
A wide variety of problems involving the analysis of systems can be rewritten as semidefinite programs, which are then solved with optimization algorithms. For large problem sizes, however, generic solvers become impractical and computationally more efficient solvers are needed. This paper investigates how to exploit structure and problem knowledge in some control applications. It is shown that inexact search directions are useful for reducing the computational burden and that an operator formalism can be utilized to derive tailored computations.
During the iterations of interior-point methods, symmetric indefinite systems are decomposed by LDL^T factorization. This step can be performed in a special way in which the symmetric indefinite system is transformed to a positive definite one, called the normal equations system. This approach has proved efficient in most cases and numerically reliable, owing to the positive definite property. It has been recognized, however, that when the linear program contains "dense" columns, this approach results in undesirable fill-in during the computations, and the direct factorization of the symmetric indefinite system is more advantageous. The paper describes a new approach to detect cases where the system of normal equations is not preferable for interior-point methods and presents a new algorithm for detecting the set of columns responsible for the excessive fill-in in the matrix AA^T. By solving large-scale linear programming problems, we demonstrate that our heuristic is reliable in practice.
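The fill-in effect described here is easy to reproduce: a single dense column of A makes the whole normal-equations matrix AA^T dense. The following NumPy sketch (illustrative random data, not the paper's detection heuristic) counts nonzeros with and without one dense column:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 20

# Sparse A: roughly two nonzeros per column.
A_sparse = np.zeros((n, m))
for j in range(m):
    rows = rng.choice(n, size=2, replace=False)
    A_sparse[rows, j] = rng.standard_normal(2)

# Same matrix, but with column 0 made fully dense.
A_dense = A_sparse.copy()
A_dense[:, 0] = rng.standard_normal(n)

nnz = np.count_nonzero
print(nnz(A_sparse @ A_sparse.T), nnz(A_dense @ A_dense.T))
```

Every entry of the second product receives a contribution from the dense column, so AA^T fills in completely, which is exactly why such columns are better left in the symmetric indefinite system.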
This article introduces a new method for computing regression quantile functions. This method applies a finite smoothing algorithm based on smoothing the nondifferentiable quantile regression objective function ρ_τ. The smoothing can be done for all τ ∈ (0, 1), and the convergence is finite for any finite number of τ_i ∈ (0, 1), i = 1, ..., N. Numerical comparison shows that the finite smoothing algorithm outperforms the simplex algorithm in computing speed. Compared with the powerful interior-point algorithm, which was introduced in an earlier article, it is competitive overall; however, it is significantly faster than the interior-point algorithm when the design matrix in quantile regression has a large number of covariates. Additionally, the new algorithm provides the same accuracy as the simplex algorithm. In contrast, the interior-point algorithm gives only approximate solutions in theory, and rounding may be necessary to improve the accuracy of these solutions in practice.
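The basic idea, replacing the nondifferentiable check loss ρ_τ(u) = u(τ − 1{u < 0}) with a smooth surrogate and then applying a smooth optimizer, can be sketched as follows. The Huber-style quadratic bridge below is one common smoothing choice, not necessarily the article's smoothing function:

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_check(u, tau, eps=1e-3):
    """Smooth surrogate for the check loss rho_tau(u) = u*(tau - (u < 0)).
    Quadratic bridge on |u| <= eps makes the function C^1 (one common
    smoothing choice; the article's smoothing function may differ)."""
    return np.where(u >= eps, tau * u,
           np.where(u <= -eps, (tau - 1.0) * u,
                    u**2 / (4 * eps) + (tau - 0.5) * u + eps / 4))

# Median regression (tau = 0.5) on synthetic data y = 1 + 2x + noise.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(200)
X = np.column_stack([np.ones_like(x), x])

obj = lambda beta: smoothed_check(y - X @ beta, 0.5).sum()
beta = minimize(obj, x0=np.zeros(2)).x
print(beta)   # close to the true coefficients [1, 2]
```

Because the surrogate is convex and C^1, a quasi-Newton method converges without the special pivoting machinery the exact (nonsmooth) problem requires.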
Interior-point methods (IPMs) are not only very effective in practice for solving linear optimization problems but also have polynomial-time complexity. Despite the practical efficiency of large-update algorithms, from a theoretical point of view these algorithms have a weaker iteration bound than small-update algorithms. In fact, there is a significant gap between theory and practice for large-update algorithms. By introducing self-regular barrier functions, Peng, Roos and Terlaky narrowed this gap up to a factor of log n. However, checking these self-regular functions is not simple, and proofs of theorems involving them are very complicated. Roos et al., by presenting a new class of barrier functions that are not necessarily self-regular, achieved very good results through some much simpler theorems. In this paper we introduce a new kernel function in this class which yields the best known complexity bound, both for large-update and small-update methods.
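For context, a kernel function ψ builds the barrier Ψ(v) = Σ_i ψ(v_i) that an IPM minimizes to stay centered. The paper's new kernel is not reproduced here; this sketch only verifies the defining properties (ψ(1) = 0, minimum at t = 1) for the classical logarithmic kernel:

```python
import numpy as np

def psi_log(t):
    """Classical logarithmic kernel: psi(t) = (t^2 - 1)/2 - ln t.
    (Only the classical kernel; the paper's new kernel differs.)"""
    return (t**2 - 1.0) / 2.0 - np.log(t)

t = np.linspace(0.2, 3.0, 1000)
vals = psi_log(t)
print(psi_log(1.0), t[np.argmin(vals)])   # 0 at t = 1, minimum near t = 1
```

Different kernels penalize small and large coordinates differently, which is what drives the large-update versus small-update iteration bounds discussed above.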
Given a data matrix, we find its nearest symmetric positive-semidefinite Toeplitz matrix. In this paper, we formulate the problem as an optimization problem with a quadratic objective function and semidefinite constraints. In particular, instead of solving the so-called normal equations, our algorithm eliminates the linear feasibility equations from the start to maintain exact primal and dual feasibility during the course of the algorithm. Subsequently, the search direction is found using an inexact Gauss-Newton method rather than a Newton method on a symmetrized system and is computed using a diagonal preconditioned conjugate-gradient-type method. Computational results illustrate the robustness of the algorithm.
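For the unconstrained version of this problem (dropping the Toeplitz structure the paper enforces), the nearest symmetric PSD matrix in the Frobenius norm has a closed form via eigenvalue clipping, which makes a useful baseline sketch:

```python
import numpy as np

def nearest_psd(M):
    """Nearest symmetric PSD matrix in Frobenius norm, by eigenvalue
    clipping. NOTE: this ignores the Toeplitz constraint the paper adds;
    it only illustrates the PSD projection step."""
    S = (M + M.T) / 2.0                   # symmetrize first
    w, V = np.linalg.eigh(S)
    return (V * np.clip(w, 0.0, None)) @ V.T

M = np.array([[ 2.0, -1.0,  0.5],
              [-1.0,  0.1, -1.0],
              [ 0.5, -1.0,  0.3]])
X = nearest_psd(M)
print(np.linalg.eigvalsh(X))              # all eigenvalues >= 0 (up to rounding)
```

Adding the Toeplitz (and exact feasibility) requirements couples the entries, which is why the paper resorts to an interior-point scheme with an inexact Gauss-Newton direction instead of a one-shot projection.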
Semidefinite programming is an extension of linear programming in which the linear inequality constraints are replaced by the linear matrix inequalities. Semidefinite programs can be solved by interior-point methods that extend interior-point methods for linear programming and have a similar worst-case complexity. Since the 1990s, numerous applications of semidefinite programming have been discovered in control, signal processing, machine learning, combinatorial optimization, and other areas. Several high-quality general-purpose solver packages have also been developed. Much of the recent work in this field has centered around optimization problems involving nonnegative polynomial constraints. The basic observation is that sum-of-squares formulations (or relaxations) of such problems can be solved by semidefinite programming. In practice, however, the semidefinite programs that result from this approach are often challenging for general-purpose solvers due to the presence of large auxiliary matrix variables. It is therefore of interest to develop specialized algorithms for semidefinite programs derived from sum-of-squares formulations. In this thesis it is shown that a substantial reduction in the complexity can be achieved by exploiting problem structure. A fast method is presented, based on reformulating the sum-of-squares constraints using discrete transforms. This leads to semidefinite programs with a low-rank structure that is easily exploited in interior-point algorithms. For optimization problems involving trigonometric polynomials the idea can be implemented in a fast and stable way via the discrete Fourier transform. The advantages of this approach are demonstrated with examples of one- and two-dimensional filter design problems.
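The sum-of-squares observation mentioned above can be seen in a tiny hand-worked instance: a polynomial p is a sum of squares exactly when p(x) = z^T Q z for a monomial vector z and some positive semidefinite Gram matrix Q. In a real SOS program Q is the SDP variable; here it is written down by hand for p(x) = x^4 + 2x^2 + 1 = (x^2 + 1)^2:

```python
import numpy as np

# Monomial vector z = [1, x, x^2]; Q is a hand-picked PSD Gram matrix
# certifying that p(x) = x^4 + 2x^2 + 1 is a sum of squares.
Q = np.array([[1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])

print(np.linalg.eigvalsh(Q))   # eigenvalues {0, 0, 2}: Q is PSD

# Verify the Gram identity p(x) = z^T Q z on a few sample points.
for x in np.linspace(-2.0, 2.0, 9):
    z = np.array([1.0, x, x**2])
    assert abs(z @ Q @ z - (x**4 + 2 * x**2 + 1)) < 1e-9
```

The auxiliary matrix Q is what grows large in practical SOS programs; the thesis's transform-based reformulation is aimed precisely at taming that cost.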
This paper gives a theory and method that specifies how the optimal solution of a linear program changes when the right-hand side of the original problem is changed and when the original optimal solution exhibits primal degeneracy. The method determines an optimal change vector as the resource availabilities change, and it calculates a range for which this vector is valid. Resource availabilities are allowed to change simultaneously in any arbitrary proportion, and the concept of an "efficient resource bundle" is introduced. The geometry of the optimal change vector is presented from which the desired results are derived. The connection between the geometrical results and their algebraic calculation in tableau-form is shown. Our method uses a pivoting algorithm and the relationship with post-optimality results from interior-point methods will be established. (c) 2005 Elsevier Ltd. All rights reserved.
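A crude way to see right-hand-side sensitivity is simply to re-solve a perturbed LP and difference the optimal values, as the sketch below does with SciPy on illustrative, nondegenerate data (not the paper's pivoting method; under degeneracy the one-sided rates can differ, which is exactly the complication the paper addresses):

```python
from scipy.optimize import linprog

# Sensitivity of the optimal value to the right-hand side b, by re-solving.
c = [-3.0, -5.0]                 # maximize 3*x1 + 5*x2 via negation
A = [[1.0, 0.0],
     [0.0, 2.0],
     [3.0, 2.0]]
b = [4.0, 12.0, 18.0]

def opt(b):
    return linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2).fun

base = opt(b)                    # optimal value -36 at x = (2, 6)
eps = 1e-4
for i in range(len(b)):
    bp = list(b)
    bp[i] += eps
    print(i, (opt(bp) - base) / eps)   # approximate shadow price of row i
```

The finite-difference rates reproduce the dual prices (0, -1.5, -1 for the minimized objective here); the paper's contribution is the exact change vector and its validity range when such derivatives are not unique.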