We extend Clarkson's randomized algorithm for linear programming to a general scheme for solving convex optimization problems. The scheme can be used to speed up existing algorithms on problems which have many more constraints than variables. In particular, we give a randomized algorithm for solving convex quadratic and linear programs, which uses that scheme together with a variant of Karmarkar's interior point method. For problems with n constraints, d variables, and input length L, if n = Ω(d²), the expected total number of major Karmarkar iterations is O(d²(log n)L), compared to the best known deterministic bound of O(√n L). We also present several other results which follow from the general scheme.
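The sampling scheme itself is not spelled out in the abstract. As a rough illustration of the idea behind Clarkson's algorithm, here is a deliberately simplified sketch on a toy one-dimensional "LP" (minimize x subject to x ≥ a for each constraint a), where a round's violated constraints are promoted into all later subproblems only when the round produces few violators. The function names, the sample size, and the 2√n acceptance threshold are illustrative choices, not the paper's algorithm.

```python
import random

def solve_subproblem(cons):
    # Toy 1-D "LP": minimize x subject to x >= a for every a in cons.
    return max(cons)

def clarkson_1d(constraints, sample_size=8, seed=0):
    """Simplified sketch of Clarkson-style constraint sampling on a toy 1-D LP."""
    rng = random.Random(seed)
    n = len(constraints)
    active = set()  # violators promoted into every later subproblem
    while True:
        sample = rng.sample(constraints, min(sample_size, n))
        x = solve_subproblem(list(active) + sample)
        violated = [a for a in constraints if a > x]
        if not violated:
            return x  # x satisfies all n constraints, so it is optimal
        if len(violated) <= 2 * n ** 0.5:  # accept only "lucky" rounds
            active.update(violated)

bounds = list(range(1000))
best = clarkson_1d(bounds)  # optimum is max(bounds) = 999
```

The point of the scheme is that each subproblem involves only a small subset of the n constraints, while violated constraints are fed back so the loop terminates quickly in expectation.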
A Bregman function is a strictly convex, differentiable function that induces a well-behaved distance measure or D-function on Euclidean space. This paper shows that, for every Bregman function, there exists a "nonlinear" version of the proximal point algorithm, and presents an accompanying convergence theory. Applying this generalization of the proximal point algorithm to convex programming, one obtains the D-function proximal minimization algorithm of Censor and Zenios, and a wide variety of new multiplier methods. These multiplier methods are different from those studied by Kort and Bertsekas, and include nonquadratic variations on the proximal method of multipliers.
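For intuition about such "nonlinear" proximal steps, consider the Bregman function given by negative entropy, whose D-function is the Kullback-Leibler divergence. For a linear objective the corresponding proximal step has a closed form. This is a sketch of one concrete Bregman function, not the paper's general construction; the function name is made up.

```python
import math

def entropy_prox_step(y, c, lam):
    """One Bregman ("nonlinear") proximal step for a linear objective
    f(x) = <c, x>, using the D-function of negative entropy, i.e. the
    KL divergence D(x, y) = sum_j x_j*log(x_j/y_j) - x_j + y_j:
        x = argmin_{x > 0}  <c, x> + (1/lam) * D(x, y).
    Setting the gradient to zero, c_j + (1/lam)*log(x_j/y_j) = 0, gives
    the closed-form multiplicative update below.
    """
    return [yj * math.exp(-lam * cj) for yj, cj in zip(y, c)]

y = [1.0, 1.0, 1.0]   # current iterate (must be strictly positive)
c = [0.0, 1.0, 2.0]   # linear cost
x = entropy_prox_step(y, c, lam=1.0)
# coordinates with larger cost shrink multiplicatively; x stays positive
```

Unlike the quadratic proximal step, this update keeps the iterate in the positive orthant automatically, which is exactly what makes D-function methods attractive for constrained problems.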
This paper is concerned with the design of gain-scheduled controllers with guaranteed H∞ performance for a class of linear parameter-varying (LPV) plants. Here the plant state-space matrices are assumed to depend affinely on a vector theta of time-varying real parameters. Assuming real-time measurement of these parameters, they can be fed to the controller to optimize the performance and robustness of the closed-loop system. The resulting controller is time-varying and automatically "gain-scheduled" along parameter trajectories. Based on the notion of quadratic H∞ performance, solvability conditions are obtained for continuous- and discrete-time systems. In both cases the synthesis problem reduces to solving a system of linear matrix inequalities (LMIs). The main benefit of this approach is to bypass most difficulties associated with more classical schemes such as gain-interpolation or gain-scheduling techniques. The methodology presented in this paper is applied to the gain scheduling of a missile autopilot. The missile has a large operating range and high angles of attack. The difficulty of the problem is reinforced by tight performance requirements as well as the presence of flexible modes that limit the control bandwidth.
We analyze an algorithm for the problem min f(x) s.t. x ≥ 0 suggested, without convergence proof, by Eggermont. The iterative step is given by x_j^{k+1} = x_j^k (1 - λ_k ∇f(x^k)_j), with λ_k > 0 determined through a line search. This method can be seen as a natural extension of the steepest descent method for unconstrained optimization, and we establish convergence properties similar to those known for steepest descent, namely weak convergence to a KKT point for a general f, weak convergence to a solution for convex f, and full convergence to the solution for strictly convex f. Applying this method to a maximum likelihood estimation problem, we obtain an additively overrelaxed version of the EM Algorithm. We extend the full convergence results known for EM to this overrelaxed version by establishing local Fejér monotonicity to the solution set.
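The stated iteration is concrete enough to sketch directly. Below, the line search is a crude backtracking rule (halve λ until the new point is nonnegative and decreases f); both that rule and the toy quadratic objective are illustrative assumptions, since the abstract does not specify the paper's line search.

```python
def eggermont_step(x, f, grad_f, lam0=1.0, beta=0.5, max_halvings=60):
    """One iteration x_j <- x_j * (1 - lam * grad f(x)_j), with lam > 0
    chosen by halving until the new point is nonnegative and decreases f."""
    g = grad_f(x)
    fx = f(x)
    lam = lam0
    for _ in range(max_halvings):
        xn = [xj * (1.0 - lam * gj) for xj, gj in zip(x, g)]
        if all(v >= 0.0 for v in xn) and f(xn) < fx:
            return xn
        lam *= beta
    return x  # no acceptable step; iterate is (numerically) stationary

# Toy problem: minimize f(x) = 0.5*||x - t||^2 subject to x >= 0, t = (2, -1);
# the KKT point is x = (2, 0).
t = [2.0, -1.0]
f = lambda x: 0.5 * sum((xj - tj) ** 2 for xj, tj in zip(x, t))
grad_f = lambda x: [xj - tj for xj, tj in zip(x, t)]

x = [1.0, 1.0]
for _ in range(200):
    x = eggermont_step(x, f, grad_f)
```

Note how the multiplicative form enforces x ≥ 0 without any projection: a coordinate that reaches zero stays there, mirroring the KKT conditions of the constrained problem.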
In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm, except that it uses a logarithmic/entropy "proximal" term in place of a quadratic. We strengthen substantially the available convergence results for these methods, and we derive the convergence rate of these methods when applied to linear programs.
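A minimal sketch of the exponential multiplier idea on a one-variable toy problem, assuming the standard exponential augmented Lagrangian and multiplicative multiplier update; the penalty parameter, the inner gradient-descent solver, and the iteration counts are illustrative choices, not the paper's setup.

```python
import math

def exp_multiplier_method(c=1.0, mu=1.0, outer=30, inner=2000, step=0.1):
    """Exponential method of multipliers on the toy problem
        minimize x^2   subject to g(x) = 1 - x <= 0,
    whose solution is x* = 1 with multiplier mu* = 2. The augmented
    Lagrangian uses an exponential penalty in place of the quadratic:
        L(x, mu) = x^2 + (mu / c) * (exp(c * (1 - x)) - 1),
    and the multiplier update is multiplicative: mu <- mu * exp(c * (1 - x)).
    """
    x = 0.0
    for _ in range(outer):
        for _ in range(inner):  # crude inner minimization by gradient descent
            grad = 2.0 * x - mu * math.exp(c * (1.0 - x))
            x -= step * grad
        mu *= math.exp(c * (1.0 - x))
    return x, mu

x_star, mu_star = exp_multiplier_method()
```

Because the update is multiplicative, the multiplier stays positive without any projection, the analogue of how the entropy "proximal" term keeps dual iterates interior.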
We propose analyzing interior-point methods using notions of problem-instance size which are direct generalizations of the condition number of a matrix. The notions pertain to linear programming quite generally; the underlying vector spaces are not required to be finite-dimensional and, more importantly, the cones defining nonnegativity are not required to be polyhedral. Thus, for example, the notions are appropriate in the context of semi-definite programming. We prove various theorems to demonstrate how the notions can be used in analyzing interior-point methods. These theorems assume little more than that the interiors of the cones (defining nonnegativity) are the domains of self-concordant barrier functions.
A proximal bundle method is presented for minimizing a nonsmooth convex function f. At each iteration, it requires only one approximate evaluation of f and its ε-subgradient, and it finds a search direction via quadratic programming. When applied to the Lagrangian decomposition of convex programs, it allows for inexact solutions of decomposed subproblems; yet, increasing their required accuracy automatically, it asymptotically finds both the primal and dual solutions. It is an implementable approximate version of the proximal point algorithm. Some encouraging numerical experience is reported.
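The bundle method itself needs a QP solver, but for intuition about the proximal point algorithm it approximates, here is the exact proximal iteration on the simple nonsmooth function f(u) = |u|, whose proximal step (soft-thresholding) is available in closed form. This is an illustration of the underlying algorithm, not the paper's method.

```python
def prox_abs(x, t):
    """Exact proximal step for f(u) = |u|:
       argmin_u |u| + (1 / (2 * t)) * (u - x) ** 2,
    i.e. soft-thresholding of x by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

x = 5.0
for _ in range(10):
    x = prox_abs(x, t=1.0)
# reaches the minimizer x = 0 after finitely many steps
```

A bundle method replaces this exact step, which is rarely computable in practice, with the minimizer of a cutting-plane model of f plus the same quadratic proximal term, which is a QP.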
This note considers the simultaneous stabilization of a collection of linear systems by periodic output feedback. Convex programming can be used to compute a simultaneously stabilizing output injection matrix, which serves as a boundary condition for a minimization problem that yields a stabilizing output feedback gain. The procedure is illustrated by a numerical example.
We study proximal level methods for convex optimization that use projections onto successive approximations of level sets of the objective corresponding to estimates of the optimal value. We show that they enjoy almost optimal efficiency estimates. We give extensions for solving convex constrained problems, convex-concave saddle-point problems and variational inequalities with monotone operators. We present several variants, establish their efficiency estimates, and discuss possible implementations. In particular, our methods require bounded storage in contrast to the original level methods of Lemaréchal, Nemirovskii and Nesterov.
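A minimal one-dimensional sketch of the level-set projection idea, assuming a cutting-plane model of the objective and the common level parameter λ = 1/2; the function names, the toy objective, and the bounding interval are all illustrative, not the authors' method (in 1-D the level set is an interval, so the projection reduces to clipping).

```python
def level_method(f, fprime, a, b, x0, lam=0.5, iters=40):
    """1-D sketch of a level method: keep a cutting-plane model of f on [a, b],
    bracket the optimum between the best value seen (f_up) and the model
    minimum (f_low), and project the iterate onto the level set
        {z : model(z) <= f_low + lam * (f_up - f_low)}."""
    cuts = []  # linear minorants f(xi) + f'(xi)(z - xi), stored as (slope, intercept)
    x, x_best, f_up, gap = x0, x0, float("inf"), float("inf")
    for _ in range(iters):
        fx = f(x)
        if fx < f_up:
            f_up, x_best = fx, x
        g = fprime(x)
        cuts.append((g, fx - g * x))
        model = lambda z: max(gi * z + hi for gi, hi in cuts)
        # the model's minimum over [a, b] lies at an endpoint or a cut crossing
        cands = [a, b]
        for i in range(len(cuts)):
            for j in range(i + 1, len(cuts)):
                gi, hi = cuts[i]
                gj, hj = cuts[j]
                if abs(gi - gj) > 1e-12:
                    zc = (hj - hi) / (gi - gj)
                    if a <= zc <= b:
                        cands.append(zc)
        f_low = min(model(z) for z in cands)
        gap = f_up - f_low
        level = f_low + lam * gap
        # the level set of the model is an interval; projection is clipping
        lo, hi = a, b
        for gi, hi_cut in cuts:
            if gi > 1e-12:
                hi = min(hi, (level - hi_cut) / gi)
            elif gi < -1e-12:
                lo = max(lo, (level - hi_cut) / gi)
        x = min(max(x, lo), hi)
    return x_best, gap

x_best, gap = level_method(lambda z: (z - 3.0) ** 2, lambda z: 2.0 * (z - 3.0),
                           a=-10.0, b=10.0, x0=-8.0)
```

The gap f_up - f_low certifies the accuracy of x_best at every iteration, which is the feature the efficiency estimates in the paper exploit.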
This paper considers robust performance analysis and state feedback design for systems with time-varying parameter uncertainties. The notion of a strongly robust H∞ performance criterion is introduced, and its applications in robust performance analysis and synthesis for nominally linear systems with time-varying uncertainties are discussed and compared with the constant scaled small gain criterion. It is shown that most robust performance analysis and synthesis problems under this strongly robust H∞ performance criterion can be transformed into linear matrix inequality problems, and can be solved through finite-dimensional convex programming. The results are in general less conservative than those using small gain type criteria.