We present a method to solve constrained convex stochastic optimization problems in which the objective is a finite sum of convex functions. Our method is based on incremental stochastic subgradient algorithms and string-averaging techniques, under the assumption that the subgradient directions are affected by random errors in each iteration. Our analysis allows the method to perform approximate projections onto the feasible set in each iteration. We provide convergence results for the case where a diminishing step-size rule is used. We test our method on a large set of random instances of a stochastic convex programming problem and compare its performance with the robust mirror descent stochastic approximation algorithm proposed in Nemirovski et al. (Robust stochastic approximation approach to stochastic programming, SIAM J Optim 19 (2009), pp. 1574–1609).
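To make the iteration concrete, the following is a minimal sketch (not the authors' implementation) of a string-averaging incremental stochastic subgradient step with noisy subgradient directions, a projection onto the feasible set, and a diminishing step-size rule alpha_k = 1/k. The test problem (a sum of absolute-deviation terms over a Euclidean ball), the noise level, and the number of strings are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem (an assumption, not the paper's test set):
# minimize f(x) = sum_i |a_i . x - b_i| over the Euclidean ball ||x|| <= r.
m, n, r = 40, 10, 5.0
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def subgrad(i, x):
    """Subgradient of the i-th summand f_i(x) = |a_i . x - b_i|."""
    return np.sign(A[i] @ x - b[i]) * A[i]

def project_ball(x):
    """Euclidean projection onto {x : ||x|| <= r}; stands in here for the
    approximate-projection operator analyzed in the paper."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

# Strings: a partition of the indices {0, ..., m-1} (four strings, assumed).
strings = np.array_split(rng.permutation(m), 4)

x = np.zeros(n)
for k in range(1, 501):
    alpha = 1.0 / k                              # diminishing step-size rule
    endpoints = []
    for s in strings:                            # each string is independent
        y = x.copy()
        for i in s:                              # incremental pass along the string
            g = subgrad(i, y)
            g += 0.01 * rng.standard_normal(n)   # random subgradient error
            y = project_ball(y - alpha * g)
        endpoints.append(y)
    # String averaging: the next iterate is the mean of the string end-points.
    x = np.mean(endpoints, axis=0)

print("f(x) =", np.abs(A @ x - b).sum())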
In this doctoral thesis, we propose new iterative methods for solving a class of convex optimization problems. In general, we consider problems in which the objective function is a finite sum of convex functions and the constraint set is, at least, convex and closed. The iterative methods we propose are designed by combining incremental subgradient methods with string-averaging algorithms. Furthermore, to obtain methods able to solve optimization problems with many constraints (and possibly in high dimensions), generally given by convex functions, our analysis includes an operator that computes approximate projections onto the feasible set instead of the exact Euclidean projection. This feature is employed in both methods we propose, one deterministic and the other stochastic. A convergence analysis is provided for both methods, and numerical experiments are performed to verify their applicability, especially to large-scale problems.
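An approximate-projection operator can be realized in several ways; one standard choice, sketched below under our own assumptions (the thesis' exact operator may differ), is a few sweeps of cyclic subgradient projections, which replace the Euclidean projection onto {x : g_j(x) <= 0 for all j} with cheap Polyak-style halfspace projections onto violated constraints.

```python
import numpy as np

def approx_project(x, gs, grads, sweeps=1):
    """A few sweeps of cyclic subgradient projections: a cheap surrogate
    for the Euclidean projection onto {x : g_j(x) <= 0 for all j}.
    gs[j](x) evaluates the j-th convex constraint; grads[j](x) returns a
    subgradient of it. (A sketch of one standard approximate-projection
    operator, not necessarily the one analyzed in the thesis.)"""
    for _ in range(sweeps):
        for g, dg in zip(gs, grads):
            v = g(x)
            if v > 0.0:                      # constraint violated
                d = dg(x)
                x = x - (v / (d @ d)) * d    # project onto the cutting halfspace
    return x

# Tiny usage example with two halfspace constraints a_j . x <= 1 (assumed data).
a1, a2 = np.array([1.0, 1.0]), np.array([1.0, -1.0])
gs = [lambda x: a1 @ x - 1.0, lambda x: a2 @ x - 1.0]
grads = [lambda x: a1, lambda x: a2]
print(approx_project(np.array([3.0, 0.0]), gs, grads, sweeps=5))
```

For affine constraints each step is an exact projection onto the violated halfspace, so a single sweep already yields a point much closer to the feasible set at the cost of one constraint evaluation per index, which is what makes the operator attractive when the number of constraints is large.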
We present a method for non-smooth convex minimization based on subgradient directions and string-averaging techniques. In this approach, the set of available data is split into sequences (strings), and a given iterate is processed independently along each string, possibly in parallel, by an incremental subgradient method (ISM). The end-points of all strings are then averaged to form the next iterate. The method is useful for solving sparse and large-scale non-smooth convex optimization problems, such as those arising in tomographic imaging. A convergence analysis is provided under realistic, standard conditions. Numerical tests are performed on a tomographic image reconstruction application, showing good convergence speed, measured as the rate of decrease of the objective function, in comparison with classical ISM.
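A minimal sketch of the string-averaging structure, assuming an illustrative least-absolute-deviation objective in place of a tomographic model: each string is processed independently by incremental subgradient steps (here dispatched to a thread pool to mimic the parallelism mentioned above), and the string end-points are averaged to form the next iterate. The problem data and the number of strings are assumptions for illustration.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)

# Illustrative non-smooth problem (assumed): f(x) = sum_i |a_i . x - b_i|,
# a least-absolute-deviation fit standing in for a tomographic model.
m, n = 60, 12
A = rng.standard_normal((m, n))
b = A @ rng.standard_normal(n)

def run_string(x, string, alpha):
    """Process one string with incremental subgradient steps.
    Strings are independent, so they can run in parallel."""
    y = x.copy()
    for i in string:
        y = y - alpha * np.sign(A[i] @ y - b[i]) * A[i]
    return y

strings = np.array_split(np.arange(m), 6)   # six strings (assumed split)
x = np.zeros(n)
with ThreadPoolExecutor() as pool:
    for k in range(1, 301):
        alpha = 1.0 / k
        ends = list(pool.map(lambda s: run_string(x, s, alpha), strings))
        x = np.mean(ends, axis=0)           # average the string end-points

print("objective:", np.abs(A @ x - b).sum())
```

Classical ISM corresponds to the special case of a single string containing all indices; the averaged variant trades one long sequential pass for several shorter, parallelizable ones.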