In the hierarchical optimization and control of large-scale steady-state systems, the Interaction Balance Method (IBM) is of great importance. However, there are many practical problems to which IBM is not applicable. This paper introduces an objective-convexifying technique, the Sequential Convexifying Method (SCM), which turns most IBM-unsolvable problems into solvable ones. Unlike the Augmented Lagrangian Method, SCM preserves the separability of the objective after convexification, which significantly eases the task of decomposition. A convergence proof of SCM, together with an estimate of the convergence ratio, is presented. Simulation results are also provided.
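To make the separability point concrete: with a separable objective $\sum_i f_i(x_i)$ and interconnection constraints $g(x)=0$ linking the subsystems, the augmented Lagrangian adds a squared penalty that couples the subsystems, whereas a separability-preserving convexification adds only subsystem-wise terms. The abstract does not specify SCM's convexifying term; the proximal-style quadratic on the right is shown purely as an illustration of the structural difference:
$$
\sum_i f_i(x_i) + \lambda^{\top} g(x) + \frac{\rho}{2}\,\|g(x)\|^2
\qquad\text{versus}\qquad
\sum_i \Bigl[\, f_i(x_i) + \frac{\rho}{2}\,\|x_i - x_i^{k}\|^2 \Bigr] + \lambda^{\top} g(x).
$$
The squared penalty $\|g(x)\|^2$ mixes variables from different subsystems, while the right-hand form decomposes into one convexified subproblem per subsystem (the linear term $\lambda^{\top} g(x)$ does not destroy separability when the interconnection constraints are linear in the subsystem variables).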
The optimization of a linear function on a closed convex set, F, can be stated as a linear semi-infinite program, since F is the solution set of (usually) infinite linear inequality systems, the so-called linear repre...
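For reference, a linear semi-infinite program of the kind the (truncated) entry above refers to has the generic textbook form
$$
\min_{x \in \mathbb{R}^n} \; c^{\top}x
\quad\text{s.t.}\quad
a(t)^{\top}x \ge b(t) \;\;\text{for all } t \in T,
$$
where the index set $T$ may be infinite; the feasible set $F$ is then a closed convex set described by an infinite system of linear inequalities, a so-called linear representation.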
We present a greedy algorithm for solving a special class of convex programming problems and establish a connection with polymatroid theory which yields a theoretical explanation and verification of the algorithm via some recent results of S. Fujishige.
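As a concrete, if naive, illustration of the kind of greedy scheme the abstract refers to, the sketch below allocates integer units one at a time to the cheapest feasible coordinate of a separable convex objective over a polymatroid. The feasibility test enumerates all subsets, so it is only meant for tiny examples, and all function names are our own, not the paper's.

from itertools import combinations

def greedy_on_polymatroid(ground, rank, cost_inc, total):
    # Greedy allocation of `total` integer units for a separable convex
    # objective sum_i f_i(x_i) over the polymatroid defined by `rank`
    # (assumed monotone, submodular, rank(empty set) = 0).
    # cost_inc(i, k) = f_i(k + 1) - f_i(k) is the marginal cost of the
    # (k+1)-th unit of element i, nondecreasing in k by convexity.
    x = {i: 0 for i in ground}

    def can_increase(i):
        # x + e_i stays in the polymatroid iff x(S) + 1 <= rank(S) for every
        # subset S containing i; brute-force check, so tiny ground sets only.
        for m in range(1, len(ground) + 1):
            for S in combinations(ground, m):
                if i in S and sum(x[j] for j in S) + 1 > rank(frozenset(S)):
                    return False
        return True

    for _ in range(total):
        candidates = [i for i in ground if can_increase(i)]
        if not candidates:
            break
        # give the next unit to the element with the smallest marginal cost
        best = min(candidates, key=lambda i: cost_inc(i, x[i]))
        x[best] += 1
    return x

# toy example: rank(S) = min(2|S|, 3), objective sum_i w_i * x_i^2
w = {"a": 1.0, "b": 2.0, "c": 0.5}
print(greedy_on_polymatroid(
    ground=["a", "b", "c"],
    rank=lambda S: min(2 * len(S), 3),
    cost_inc=lambda i, k: w[i] * ((k + 1) ** 2 - k ** 2),
    total=3))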
We give three models for smoothing empirical functions. The first model is the well-known graduating model given by Whittaker [7]; the second and third models are new. New iterative procedures are presented for solving these models. These iterative procedures are based on the equilibrium conditions of $l_p$ programming, so our solution procedure for solving Whittaker's model is also new.
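For orientation, Whittaker's graduating model trades fidelity to the data against smoothness of the differences of the fitted values. The sketch below solves it directly through the normal equations rather than by the paper's $l_p$-programming iterations; the parameter names are ours.

import numpy as np

def whittaker_smooth(y, lam=10.0, order=2):
    # Whittaker graduation: minimize ||y - u||^2 + lam * ||D u||^2, where D is
    # the `order`-th difference operator.  Solved here directly via the normal
    # equations (I + lam * D^T D) u = y.
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), n=order, axis=0)   # (n - order) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# noisy sample data
t = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(0).normal(size=50)
u = whittaker_smooth(y, lam=50.0, order=2)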
Author: THAKUR, L. S. (Yale Univ., Sch. of Management, New Haven, CT, USA)
Estimation of a convex function interpolating its known values and satisfying certain smoothness properties is needed in some applications and has been investigated in many studies from various perspectives. Without the convexity assumption, Karlin's theorem characterizes the solution to the problem studied here, while under convexity the analogous result is due to Smith and Ward. The aim of this paper is to give a convex programming characterization that can be used to calculate an optimal spline which solves a given problem of degree 2. Specifically, our aim is to determine a smooth ($f$, $f^{(1)}$ absolutely continuous) convex spline $f$ of second degree, which interpolates $(r + 2)$ given points in $[a,b]$ and minimizes the Tchebycheff norm $\| {f^{(2)} } \|_\infty$, with $f^{(2)}$ essentially bounded in $[a,b]$. The convexity of the formulation enables us to calculate an optimal spline using widely available computer routines for nonlinear optimization. The approach is illustrated by providing the convex programming formulations and the computer-obtained optimal solutions for two numerical examples.
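Schematically, and only as a restatement of the abstract rather than the paper's actual discretized formulation, the optimal-spline problem can be written with a bound variable $z$ replacing the Tchebycheff norm:
$$
\begin{aligned}
\min_{f,\,z}\quad & z\\
\text{s.t.}\quad & f(x_i) = y_i, \qquad i = 0,\dots,r+1,\\
& 0 \le f^{(2)}(t) \le z \quad \text{a.e. on } [a,b],
\end{aligned}
$$
where $0 \le f^{(2)}$ encodes convexity, $f$ ranges over second-degree splines with $f^{(1)}$ absolutely continuous, and the constraints are convex in the spline parameters, which is why standard nonlinear optimization routines apply.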
We give a bound on the distance between an arbitrary point and the solution set of a monotone linear complementarity problem in terms of a condition constant that depends on the problem data only and a residual function of the violations of the complementarity problem conditions by the point considered. When the point satisfies the linear inequalities of the complementarity problem, the residual consists of the complementarity condition plus its square root. This latter term is essential; without it the error bound cannot hold. We also show that another natural residual, which has been employed to bound errors for strictly monotone linear complementarity problems, fails to bound errors for the monotone case considered here.
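A minimal sketch of the kind of residual the abstract describes is given below. The particular norms and weights are our choice, not necessarily the paper's, but the complementarity gap plus its square root appears exactly as stated for points satisfying the linear inequalities.

import numpy as np

def lcp_residual(M, q, x):
    # Residual for the monotone LCP:  x >= 0,  w = M x + q >= 0,  x^T w = 0.
    # Violations of the linear inequalities plus the complementarity gap and
    # its square root; the square-root term is the one the abstract calls
    # essential for the error bound.
    w = M @ x + q
    infeasibility = (np.linalg.norm(np.minimum(x, 0.0))
                     + np.linalg.norm(np.minimum(w, 0.0)))
    gap = abs(x @ w)
    return infeasibility + gap + np.sqrt(gap)

# example with a positive semidefinite M (monotone case)
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
print(lcp_residual(M, q, np.array([0.5, 0.0])))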
We give a detailed proof, under slightly weaker conditions on the objective function, that a modified Frank-Wolfe algorithm based on Wolfe's ‘away step’ strategy can achieve geometric convergence, provided a strict complementarity assumption holds.
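The following sketch shows the away-step idea on a simple instance, minimizing a convex quadratic over the probability simplex. It is a generic textbook-style implementation, not the authors' algorithm or step-size rules; exact line search is used because the objective is quadratic.

import numpy as np

def away_step_frank_wolfe(A, b, n_iter=500, tol=1e-8):
    # Minimize 0.5 * ||A x - b||^2 over the probability simplex with the
    # away-step variant of Frank-Wolfe.
    n = A.shape[1]
    x = np.ones(n) / n                         # start at the barycentre
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        s = int(np.argmin(grad))               # Frank-Wolfe vertex
        d_fw = -x.copy(); d_fw[s] += 1.0       # toward direction e_s - x
        gap_fw = -grad @ d_fw
        if gap_fw < tol:                       # Frank-Wolfe duality gap
            break
        active = np.flatnonzero(x > 1e-12)
        v = int(active[np.argmax(grad[active])])
        d_aw = x.copy(); d_aw[v] -= 1.0        # away direction x - e_v
        if gap_fw >= -grad @ d_aw or x[v] >= 1.0 - 1e-12:
            d, gamma_max = d_fw, 1.0
        else:
            d, gamma_max = d_aw, x[v] / (1.0 - x[v])
        Ad = A @ d
        denom = Ad @ Ad
        gamma = gamma_max if denom <= 0.0 else min(gamma_max, -(grad @ d) / denom)
        x = x + max(gamma, 0.0) * d
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10)); b = rng.normal(size=30)
x_star = away_step_frank_wolfe(A, b)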
A linear state and control constrained problem arising in optimal routing in communication networks is investigated by Fenchel duality methods. The problem reduces to a dual program having a particularly simple solution.
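For context, the general Fenchel duality scheme invoked above (the standard Fenchel-Rockafellar statement, not the paper's specific routing model) reads
$$
\inf_{x}\bigl\{\,f(x) + g(Ax)\,\bigr\}
\;=\;
\sup_{y}\bigl\{\,-f^{*}(A^{\top}y) - g^{*}(-y)\,\bigr\},
$$
valid under a suitable constraint qualification, with $f^{*}$ and $g^{*}$ the convex conjugates of $f$ and $g$.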
Stochastic quasi-gradient methods for solving convex problems of stochastic optimization are considered. The principal idea of these methods is to use random estimates of the gradient of the objective function to search for an extremum. To control the algorithm parameters, adaptive recurrent procedures are suggested, which are themselves quasi-gradient algorithms with respect to the parameters. Convergence is proved and estimates of the rate of convergence of such algorithms are given. Results of computations for several stochastic optimization problems are presented.
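A minimal sketch of a projected stochastic quasi-gradient iteration is given below. The simple diminishing step $\rho_k = a/(b+k)$ stands in for the adaptive recurrence rules the abstract describes, and all names are ours.

import numpy as np

def stochastic_quasi_gradient(sample_grad, project, x0, n_iter=5000, a=1.0, b=10.0):
    # Projected stochastic quasi-gradient iteration
    #   x_{k+1} = proj_X( x_k - rho_k * xi_k ),
    # where xi_k is a random estimate of the (sub)gradient at x_k and
    # rho_k = a / (b + k) is a simple diminishing step size.
    x = np.array(x0, dtype=float)
    for k in range(n_iter):
        x = project(x - a / (b + k) * sample_grad(x))
    return x

# toy use: minimize E[(a_i^T x - b_i)^2] over the unit box from single samples
rng = np.random.default_rng(0)
A_data = rng.normal(size=(200, 5))
x_true = rng.uniform(size=5)
b_data = A_data @ x_true

def sample_grad(x):
    i = rng.integers(len(b_data))
    return 2.0 * (A_data[i] @ x - b_data[i]) * A_data[i]

x_hat = stochastic_quasi_gradient(sample_grad, lambda z: np.clip(z, 0.0, 1.0),
                                  x0=np.zeros(5))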