Characterizations of optimality are presented for infinite-dimensional convex programming problems, where the number of constraints is not restricted to be finite and where no constraint qualification is assumed. The optimality conditions are given in asymptotic forms using subdifferentials and epsilon-subdifferentials. They are obtained by employing a version of the Farkas lemma for systems involving convex functions. An extension of the results to problems with a semiconvex objective function is also given.
This paper addresses the issue of how to solve convex programming problems by analog artificial neural networks (ANN's), with applications in asynchronous transfer mode (ATM) resource management. We first show that the essential and difficult optimization problem of dimensioning the system of virtual subnetworks in ATM networks can be modeled as a convex programming task. Here the transformation of the problem into a convex programming task is a nontrivial step. We also present and analyze an analog ANN architecture that is capable of solving such convex programming tasks with time-varying penalty multipliers. The latter property makes it possible to perform quick sensitivity analysis with respect to the constraints in order to identify the bottleneck capacities in the network or those which give the highest return if we invest in extending them.
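The penalty-based dynamics described above can be caricatured in discrete time as plain gradient descent on a quadratically penalized objective. This is a minimal sketch with a fixed multiplier `mu` rather than the paper's time-varying one; the function names and the toy problem are illustrative, not taken from the paper.

```python
import numpy as np

def penalty_gradient_descent(grad_f, constraints, grad_constraints, x0,
                             mu=10.0, step=0.01, iters=5000):
    """Discrete-time analogue of penalty-based gradient dynamics:
    descend on f(x) + mu * sum_i max(0, g_i(x))^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        for gi, dgi in zip(constraints, grad_constraints):
            viol = max(0.0, gi(x))          # constraint violation, if any
            g = g + 2.0 * mu * viol * dgi(x)
        x = x - step * g
    return x

# Toy problem: minimize (x-2)^2 + (y-2)^2 subject to x + y <= 2.
x = penalty_gradient_descent(
    grad_f=lambda v: 2.0 * (v - 2.0),
    constraints=[lambda v: v[0] + v[1] - 2.0],
    grad_constraints=[lambda v: np.array([1.0, 1.0])],
    x0=[0.0, 0.0])
# The unconstrained optimum (2, 2) is infeasible; the penalized dynamics
# settle near the feasible point (1, 1), up to a penalty bias of order 1/mu.
```

Raising `mu` over time, as the abstract suggests, drives the bias toward zero; the sensitivity of the optimum to `mu` is what exposes the binding (bottleneck) constraints.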
We consider dual coordinate ascent methods for minimizing a strictly convex (possibly nondifferentiable) function subject to linear constraints. Such methods are useful in large-scale applications (e.g., entropy maximization, quadratic programming, network flow), because they are simple, can exploit sparsity, and in certain cases are highly parallelizable. We establish their global convergence under weak conditions and a free-steering order of relaxation. Previous comparable results were restricted to special problems with separable costs and equality constraints. Our convergence framework unifies to a certain extent the approaches of Bregman, Censor and Lent, De Pierro and Iusem, and Luo and Tseng, and complements that of Bertsekas and Tseng.
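A minimal instance of dual coordinate ascent, for the special case min ½‖x‖² s.t. Ax = b: exact maximization of the dual over one coordinate at a time gives closed-form updates (and recovers the classical Kaczmarz scheme). All names and the toy data below are illustrative, not from the paper.

```python
import numpy as np

def dual_coordinate_ascent(A, b, sweeps=200):
    """Dual coordinate ascent for  min 1/2 ||x||^2  s.t.  A x = b.
    The dual is  -1/2 ||A^T y||^2 + b^T y;  maximizing it in one
    coordinate y_i has a closed form, and the primal iterate
    x = A^T y is maintained incrementally."""
    m, n = A.shape
    y = np.zeros(m)
    x = np.zeros(n)            # invariant: x = A^T y
    for _ in range(sweeps):
        for i in range(m):     # cyclic order; a free-steering order also fits
            delta = (b[i] - A[i] @ x) / (A[i] @ A[i])
            y[i] += delta
            x += delta * A[i]
    return x, y

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x, y = dual_coordinate_ascent(A, b)
# x converges to the minimum-norm solution of A x = b, here (0, 1, 1)
```

Each update touches only the nonzeros of one row of A, which is where the sparsity exploitation mentioned in the abstract comes from.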
Forward-backward splitting methods provide a range of approaches to solving large-scale optimization problems and variational inequalities in which structures conducive to decomposition can be utilized. Apart from special cases where the forward step is absent and a version of the proximal point algorithm comes out, efforts at evaluating the convergence potential of such methods have so far relied on Lipschitz properties and strong monotonicity, or inverse strong monotonicity, of the mapping involved in the forward step, the perspective mainly being that of projection algorithms. Here, convergence is analyzed by a technique that allows properties of the mapping in the backward step to be brought in as well. For the first time in such a general setting, global and local contraction rates are derived; moreover, they are derived in a form which makes it possible to determine the optimal step size relative to certain constants associated with the given problem. Insights are thereby gained into the effects of shifting strong monotonicity between the forward and backward mappings when a splitting is selected.
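Forward-backward splitting for the model problem min F(x) + G(x), with smooth F handled by the forward (gradient) step and nonsmooth G by the backward (proximal) step, can be sketched on a toy lasso instance. The step size below is chosen from the Lipschitz constant of ∇F, in the spirit of the step-size analysis the abstract describes; the problem data are illustrative.

```python
import numpy as np

def forward_backward(grad_F, prox_G, x0, step, iters=500):
    """Forward-backward splitting for  min F(x) + G(x):
    an explicit gradient step on smooth F, then a proximal
    (implicit) step on possibly nonsmooth G."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = prox_G(x - step * grad_F(x), step)
    return x

# Toy lasso:  min 1/2 ||A x - b||^2 + lam ||x||_1
A = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 0.2])
lam = 0.5
grad_F = lambda x: A.T @ (A @ x - b)
# prox of t*lam*||.||_1 is componentwise soft-thresholding
prox_G = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)
L = np.linalg.norm(A.T @ A, 2)          # Lipschitz constant of grad_F
x = forward_backward(grad_F, prox_G, np.zeros(2), step=1.0 / L)
# x converges to (0.5, 0.0): the l1 term zeroes out the weak coordinate
```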
We consider the problem min f(x) s.t. x is an element of C, where C is a closed and convex subset of R-n with nonempty interior, and introduce a family of interior point methods for this problem, which can be seen as approximate versions of generalized proximal point methods. Each step consists of a one-dimensional search along either a curve or a segment in the interior of C. The information about the boundary of C is contained in a generalized distance which defines the segment or the curve, and whose gradient diverges at the boundary of C. The objective of the search is either f or f plus a regularizing term. When C = R-n, the usual steepest descent method is a particular case of our general scheme, and we manage to extend known convergence results for the steepest descent method to our family: for nonregularized one-dimensional searches, under a level set boundedness assumption on f, the sequence is bounded, the difference between consecutive iterates converges to 0 and every cluster point of the sequence satisfies first-order optimality conditions for the problem, i.e. is a solution if f is convex. For the regularized search and convex f, no boundedness condition on f is needed and full and global convergence of the sequence to a solution of the problem is established.
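One member of such a family can be sketched for C equal to the positive orthant: the entropic generalized distance induces the interior curve x(t) = x ∘ exp(−t ∇f(x)), which stays in int C for all t, and each step is a one-dimensional search of f along it. This is a rough illustration under my own choice of distance and line-search grid, not the paper's exact scheme.

```python
import numpy as np

def interior_curve_search(f, grad_f, x0, iters=60,
                          ts=np.linspace(0.0, 1.0, 201)):
    """Interior point method on the positive orthant: at each step,
    minimize f over a grid of t along the interior curve
    x(t) = x * exp(-t * grad f(x)).  Positivity of the iterates is
    automatic, since exp(.) > 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad_f(x)
        curve = lambda t: x * np.exp(-t * g)   # stays in the open orthant
        t_best = min(ts, key=lambda t: f(curve(t)))
        x = curve(t_best)
    return x

# min (x-1)^2 + (y-0.5)^2 over the positive orthant (interior optimum)
f = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 0.5) ** 2
grad_f = lambda v: 2.0 * (v - np.array([1.0, 0.5]))
x = interior_curve_search(f, grad_f, np.array([3.0, 3.0]))
```

For small t the curve moves in the direction −x ∘ ∇f(x), so on C = R-n with a quadratic distance this degenerates to steepest descent with a line search, matching the special case noted in the abstract.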
Simultaneous exploitation of task and data parallelism provides significant benefits for many applications. The basic approach for exploiting task and data parallelism is to use a task graph representation (Macro Dataflow Graph) for programs to decide on the degree of data parallelism to be used for each task (allocation) and an execution order for the tasks (scheduling). Previously, we presented a two-step approach for allocation and scheduling by considering the two steps to be independent of each other. In this paper, we present a new simultaneous approach which uses constraints to model the scheduler during allocation. The new simultaneous approach provides significant benefits over our earlier approach for the benchmark task graphs that we have considered.
A computational study of some logarithmic barrier decomposition algorithms for semi-infinite programming is presented in this paper. The conceptual algorithm is a straightforward adaptation of the logarithmic barrier ...
In this paper, directional differentiability properties of the optimal value function of a parameterized semi-infinite programming problem are studied. It is shown that if the unperturbed semi-infinite programming problem is convex, then the corresponding optimal value function is directionally differentiable under mild regularity assumptions. A max-min formula for the directional derivatives, well-known in the finite convex case, is given.
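For orientation, the classical finite-dimensional convex version of such a max-min formula can be stated as follows (a standard Gol'shtein-type formula; the notation here is my own, not the paper's):

```latex
v'(u_0; d) \;=\; \min_{x \in S(u_0)} \; \max_{\lambda \in \Lambda(u_0)}
\; \nabla_u L(x, \lambda, u_0)^{\top} d ,
```

where $v(u)$ is the optimal value of the perturbed problem, $L$ its Lagrangian, $S(u_0)$ the set of optimal solutions, and $\Lambda(u_0)$ the set of Lagrange multipliers of the unperturbed problem.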
The phi-divergence proximal method is an extension of the proximal minimization algorithm, where the usual quadratic proximal term is substituted by a class of convex statistical distances, called phi-divergences. In this paper, we study the convergence rate of this nonquadratic proximal method for convex and particularly linear programming. We identify a class of phi-divergences for which superlinear convergence is attained both for optimization problems with strongly convex objectives at the optimum and linear programming problems, when the regularization parameters tend to zero. We also prove that, with regularization parameters bounded away from zero, convergence is at least linear for a wider class of phi-divergences, when the method is applied to the same kinds of problems. We further analyze the associated class of augmented Lagrangian methods for convex programming with nonquadratic penalty terms, and prove convergence of the dual generated by these methods for linear programming problems under a weak nondegeneracy assumption.
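In one dimension the entropic member of the phi-divergence class, phi(t) = t log t − t + 1, gives a concrete proximal step: each iterate solves f'(x) + (1/λ) log(x/x_prev) = 0. This is a sketch; the choice of phi, the bisection solver, and the toy objective are my assumptions, not taken from the paper.

```python
import numpy as np

def phi_prox_step(fprime, x_prev, lam, lo=1e-12, hi=1e6, iters=200):
    """One entropic phi-divergence proximal step in 1-D: solve
    f'(x) + (1/lam) * log(x / x_prev) = 0 by bisection.  With
    phi(t) = t*log(t) - t + 1, the derivative of the divergence
    D_phi(x, x_prev) in x is exactly log(x / x_prev)."""
    F = lambda x: fprime(x) + np.log(x / x_prev) / lam
    for _ in range(iters):       # F is strictly increasing on (0, inf)
        mid = 0.5 * (lo + hi)
        if F(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# minimize f(x) = (x - 2)^2 over x > 0 with the entropic proximal method
fprime = lambda x: 2.0 * (x - 2.0)
x = 5.0
for _ in range(50):
    x = phi_prox_step(fprime, x, lam=1.0)
# x converges to the minimizer 2; the log term keeps every iterate positive
```

Driving `lam` upward (i.e., the regularization parameter 1/lam toward zero) is the regime in which the abstract obtains superlinear rates.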
We give several results, some new and some old, but apparently overlooked, that provide useful characterizations of barrier functions and their relationship to problem function properties. In particular, we show t...