A nonlinear convex programming problem is solved by methods of interval arithmetic that take the input errors and the round-off errors into account. The problem is reduced to the solution of a nonlinear parameter-dependent system of equations. Moreover, error estimates are developed for special problems with uniformly convex cost functions.
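The abstract's core tool, interval arithmetic with outward rounding, can be illustrated with a minimal sketch (not the paper's method; the `Interval` class and the 0.1 example are assumptions for illustration, using Python 3.9+ `math.nextafter`):

```python
import math

class Interval:
    """Closed interval [lo, hi] with outward rounding to enclose round-off error."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, hi if hi is not None else lo

    def __add__(self, other):
        # Round the lower bound down and the upper bound up (outward rounding),
        # so the exact real-arithmetic result is always enclosed.
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# An uncertain input 0.1 (not exactly representable in binary) enclosed by an interval:
x = Interval(math.nextafter(0.1, 0.0), math.nextafter(0.1, 1.0))
y = x * x + x   # every arithmetic step widens the bounds outward
assert y.lo <= 0.11 <= y.hi   # the exact value 0.1*0.1 + 0.1 is guaranteed inside
```

The same principle, applied to every operation of a solver, yields intervals that are guaranteed to contain the exact solution despite input and round-off errors.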
Approximating a given continuous probability distribution of the data of a linear program by a discrete one yields solution methods for the stochastic linear programming problem with complete fixed recourse. For a procedure along the lines of [8], the reduction in computational work compared with the usual revised simplex method is worked out. Furthermore, an alternative method is proposed in which the optimal value is approximated by refining particular discrete distributions.
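A toy illustration of the discretization idea (the newsvendor-style recourse function, the parameters c and q, and the uniform demand are assumptions, not the paper's model): refining an equal-weight discrete approximation of a continuous distribution drives the discrete optimal value toward the continuous one.

```python
def expected_cost(x, scenarios, c=1.0, q=3.0):
    """First-stage cost c*x plus expected recourse q*max(d - x, 0)
    under an equal-weight discrete scenario set."""
    recourse = sum(max(d - x, 0.0) for d in scenarios) / len(scenarios)
    return c * x + q * recourse

def discrete_optimum(n):
    # Midpoint discretization of demand d ~ Uniform(0, 1) into n scenarios.
    scenarios = [(i + 0.5) / n for i in range(n)]
    xs = [j / 1000 for j in range(1001)]   # crude search over first-stage x
    return min(expected_cost(x, scenarios) for x in xs)

coarse, fine = discrete_optimum(4), discrete_optimum(256)
# Refining the discretization moves the optimal value toward the continuous one
# (analytically 5/6 here: x* = 2/3, value = x* + 3*(1 - x*)**2 / 2 = 5/6).
assert abs(fine - 5/6) < abs(coarse - 5/6)
```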
Large practical linear and integer programming problems are not always presented in a form which is the most compact representation of the problem. Such problems are likely to possess generalized upper bound (GUB) and related structures which may be exploited by algorithms designed to solve them. The steps of an algorithm which, by repeated application, reduces the rows, columns, and bounds in a problem matrix and leads to the freeing of some variables are first presented. The 'unbounded solution' and 'no feasible solution' conditions may also be detected by this. Computational results of applying this algorithm are presented and discussed. An algorithm to detect structure is then described. This algorithm identifies sets of variables and the corresponding constraint relationships so that the total number of GUB-type constraints is maximized. Comparisons of computational results of applying different heuristics in this algorithm are presented and discussed.
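A minimal sketch of the kind of reduction step described above (not the authors' algorithm; the row representation and helper names are illustrative): variables whose bounds coincide are fixed and substituted out, rows that become vacuous are dropped, and trivial infeasibility is flagged.

```python
def presolve_pass(rows, lb, ub):
    """One pass of a toy LP presolve: fix variables whose bounds coincide,
    substitute them into the rows, and drop rows that become vacuous.
    rows: list of (coeffs_dict, sense, rhs) with sense in {'<=', '='}.
    Raises ValueError when a row reduces to a trivially infeasible condition."""
    fixed = {j: lb[j] for j in lb if lb[j] == ub[j]}   # free (fix) variables
    reduced = []
    for coeffs, sense, rhs in rows:
        coeffs = dict(coeffs)
        for j, v in fixed.items():                     # substitute fixed vars
            rhs -= coeffs.pop(j, 0.0) * v
        if not coeffs:                                 # row is now 0 (sense) rhs
            if (sense == '=' and rhs != 0) or (sense == '<=' and rhs < 0):
                raise ValueError("no feasible solution")
            continue                                   # redundant row: drop it
        reduced.append((coeffs, sense, rhs))
    return reduced, fixed

rows = [({'x': 1.0, 'y': 2.0}, '<=', 8.0),
        ({'y': 1.0}, '=', 3.0)]
lb, ub = {'x': 0.0, 'y': 3.0}, {'x': 10.0, 'y': 3.0}  # y fixed by its bounds
reduced, fixed = presolve_pass(rows, lb, ub)
# y = 3 is substituted out; the equality row becomes 0 = 0 and is dropped.
assert fixed == {'y': 3.0}
assert reduced == [({'x': 1.0}, '<=', 2.0)]
```

Repeating such passes until nothing changes is what "repeated application" amounts to: each freed variable can make further rows and bounds reducible.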
Necessary and sufficient conditions are given for a linear programming problem whose parameters (both in the constraints and in the objective function) are prescribed by intervals, under which every linear programming problem with parameters fixed in these intervals has a finite optimum.
In this paper, sufficient conditions for local and superlinear convergence to a Kuhn-Tucker point are established for a broadly defined class of algorithms that combine a quadratic programming algorithm for the repeated solution of a subproblem with a variable metric update to develop the Hessian in the subproblem. In particular, the DFP update and an update attributed to Powell are shown to provide a superlinearly convergent subclass of algorithms, provided the start is made sufficiently close to the solution and the initial Hessian in the subproblem is sufficiently close to the Hessian of the Lagrangian at this point.
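A toy sketch of the algorithm class, under strong simplifying assumptions (two variables, one linear equality constraint, no line search or merit function; all names are illustrative, not the paper's analysis): each iteration solves an equality-constrained QP subproblem via its KKT system and refreshes the Hessian approximation with the DFP update.

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Toy problem: minimize x1^2 + x2^2  subject to  x1 + x2 - 1 = 0.
grad_f = lambda x: [2 * x[0], 2 * x[1]]
con    = lambda x: x[0] + x[1] - 1.0
A      = [1.0, 1.0]                          # constraint Jacobian (constant here)

x = [2.0, -1.0]                              # start near the solution
B = [[1.0, 0.0], [0.0, 1.0]]                 # initial Hessian approximation
for _ in range(20):
    g = grad_f(x)
    # QP subproblem: solve the KKT system [B A^T; A 0][d; lam] = [-g; -c(x)].
    K = [[B[0][0], B[0][1], A[0]],
         [B[1][0], B[1][1], A[1]],
         [A[0],    A[1],    0.0]]
    d0, d1, lam = solve3(K, [-g[0], -g[1], -con(x)])
    x = [x[0] + d0, x[1] + d1]
    # DFP update of B from step s = d and gradient change y (the constraint is
    # linear, so the change in the Lagrangian gradient equals the change in grad f).
    g_new = grad_f(x)
    s, y = [d0, d1], [g_new[0] - g[0], g_new[1] - g[1]]
    ys = y[0] * s[0] + y[1] * s[1]
    if ys > 1e-12:
        rho = 1.0 / ys
        E = [[(1.0 if i == j else 0.0) - rho * y[i] * s[j]
              for j in range(2)] for i in range(2)]
        Et = [[E[j][i] for j in range(2)] for i in range(2)]
        BM = matmul(E, matmul(B, Et))
        B = [[BM[i][j] + rho * y[i] * y[j] for j in range(2)] for i in range(2)]

# converges to the minimizer (0.5, 0.5)
assert abs(x[0] - 0.5) < 1e-6 and abs(x[1] - 0.5) < 1e-6
```

Note how the initial B = I would make plain SQP steps oscillate on this problem; the DFP update corrects B along the directions actually visited, which is the mechanism behind the superlinear rate studied in the paper.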
The object of this paper is to prove duality theorems for quasiconvex programming problems. The principal tool used is the transformation introduced by Manas for reducing a nonconvex programming problem to a convex programming problem. Duality in the cases of linear, quadratic, and linear-fractional programming follows as a particular case of this general result.
In this paper, the numerical solution of the basic problem of mathematical programming is considered. This is the problem of minimizing a function f(x) subject to a constraint φ(x) = 0. Here, f is a scalar, x is an n-vector, and φ is a q-vector. The constrained minimization is replaced by the unconstrained minimization of the augmented penalty function W(x, λ, k) = f(x) + λ^T φ(x) + k φ^T(x) φ(x). Here, the q-vector λ is an approximation to the Lagrange multiplier, and the scalar k > 0 is the penalty constant. Previously, the augmented penalty function W(x, λ, k) was used by Hestenes in his method of multipliers. In Hestenes' version, the method of multipliers involves cycles, in each of which the multiplier and the penalty constant are held constant. After the minimum of the augmented penalty function is achieved in any given cycle, the multiplier λ is updated, while the penalty constant k is held unchanged. In this paper, two modifications of the method of multipliers are presented in order to improve its convergence characteristics. The improved convergence is achieved by (i) increasing the updating frequency so that the number of iterations in a cycle is shortened to ΔN = 1 for the ordinary-gradient algorithm and the modified-quasilinearization algorithm and ΔN = n for the conjugate-gradient algorithm, (ii) embedding Hestenes' updating rule for the multiplier λ in a one-parameter family and determining the scalar parameter β so that the error in the optimum condition is minimized, and (iii) updating the penalty constant k so as to cause some desirable effect in the ordinary-gradient algorithm, the conjugate-gradient algorithm, and the modified-quasilinearization algorithm. For the sake of identification, Hestenes' method of multipliers is called Method MM-1, the modification including (i) and (ii) is called Method MM-2, and the modification including (i), (ii), and (iii) is called Method MM-3. Evaluation of the theory is accomplished w...
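A minimal sketch of the baseline method of multipliers (Method MM-1 above) on a toy problem; the quadratic f, the linear φ, and the inner gradient-descent minimizer are assumptions for illustration. Each outer cycle minimizes W(x, λ, k) for fixed λ and k, then applies the multiplier update λ ← λ + 2kφ(x).

```python
def W_grad(x, lam, k):
    """Gradient of W(x, λ, k) = f(x) + λ φ(x) + k φ(x)^2 for the toy problem
    f(x) = x1^2 + 2 x2^2, φ(x) = x1 + x2 - 1."""
    phi = x[0] + x[1] - 1.0
    return [2 * x[0] + lam + 2 * k * phi,
            4 * x[1] + lam + 2 * k * phi]

def minimize_W(x, lam, k, steps=2000, lr=0.05):
    for _ in range(steps):             # inner cycle: plain gradient descent
        g = W_grad(x, lam, k)
        x = [x[0] - lr * g[0], x[1] - lr * g[1]]
    return x

x, lam, k = [0.0, 0.0], 0.0, 1.0
for _ in range(10):                    # outer cycles: λ updated, k held fixed
    x = minimize_W(x, lam, k)
    lam += 2 * k * (x[0] + x[1] - 1.0)

# converges to x* = (2/3, 1/3) with multiplier λ* = -4/3
assert abs(x[0] - 2/3) < 1e-3 and abs(x[1] - 1/3) < 1e-3
```

The modifications MM-2 and MM-3 sketched in the abstract would, in these terms, shorten the inner loop to a few iterations, generalize the λ update with a parameter β, and adapt k between cycles.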
In dealing with dynamic economic policy models one encounters optimization problems whose objective function is an integral of a linear function of a finite number of continuous variables and whose constraints are lin...
The problem under consideration consists in maximizing a separable concave objective functional on a class of non-negative Lebesgue integrable functions satisfying a system of linear constraints. The problem is approx...
Optimization of physical systems consisting of interrelated subsystems which can be formulated as geometric programming problems is considered. Necessary and sufficient conditions are derived for decomposing the optim...