This study aims to merge the well-established ideas of bundle and Gradient Sampling (GS) methods to develop an algorithm for locating a minimizer of a nonsmooth convex function. In the proposed method, with the help of the GS technique, we sample a number of auxiliary points around the current iterate at which the objective is differentiable. Then, by applying the standard techniques used in bundle methods, we construct a polyhedral (piecewise linear) model of the objective function. Moreover, by performing quasi-Newton updates on the set of auxiliary points, this polyhedral model is augmented with a regularization term that carries second-order information. If required, this initial model is improved by techniques frequently used in GS and bundle methods. We analyse the global convergence of the proposed method. As opposed to the original GS method and some of its variants, our convergence analysis is independent of the sample size. In our numerical experiments, various aspects of the proposed method are examined on a variety of test problems. In particular, in contrast with many variants of bundle methods, the user may supply gradients only approximately. Moreover, we compare the proposed method with some efficient variants of GS and bundle methods.
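For flavor, here is a minimal sketch of the gradient-sampling descent step that this kind of method builds on: sample gradients at auxiliary points near the iterate and take the minimum-norm element of their convex hull as a search direction. This is only the basic GS ingredient under simplifying assumptions, not the paper's algorithm; it omits the polyhedral model and the quasi-Newton regularization, and names such as `gs_direction` and the toy objective are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def gs_direction(grad, x, radius, m, rng):
    """Min-norm element of the convex hull of gradients sampled near x."""
    pts = x + radius * rng.standard_normal((m, x.size))   # auxiliary points
    G = np.vstack([grad(x)] + [grad(p) for p in pts])     # stacked sampled gradients
    k = G.shape[0]
    # Solve min ||G^T lam||^2  s.t.  lam >= 0, sum(lam) = 1  (a small QP).
    obj = lambda lam: float((G.T @ lam) @ (G.T @ lam))
    cons = ({'type': 'eq', 'fun': lambda lam: lam.sum() - 1.0},)
    res = minimize(obj, np.full(k, 1.0 / k), bounds=[(0.0, 1.0)] * k,
                   constraints=cons, method='SLSQP')
    return -(G.T @ res.x)                                 # tentative descent direction

# Toy nonsmooth convex objective f(x) = |x1| + 2|x2|, minimized at the origin;
# it is differentiable off the axes, so sampled points are differentiable a.s.
f = lambda x: abs(x[0]) + 2.0 * abs(x[1])
grad = lambda x: np.array([np.sign(x[0]), 2.0 * np.sign(x[1])])

rng = np.random.default_rng(0)
x = np.array([3.0, -2.0])
for _ in range(100):
    d = gs_direction(grad, x, radius=1e-3, m=6, rng=rng)
    if np.linalg.norm(d) < 1e-8:
        break                          # approximate stationarity at this radius
    t = 1.0                            # simple Armijo backtracking on f
    while f(x + t * d) > f(x) - 1e-4 * t * np.linalg.norm(d) ** 2 and t > 1e-12:
        t *= 0.5
    x = x + t * d
print(x, f(x))
```

Taking the minimum-norm convex combination of sampled gradients is what makes the direction a descent direction for the sampled approximation of the subdifferential; the paper replaces this plain step with a polyhedral bundle model plus a quasi-Newton regularizer.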
This paper studies distributed algorithms for the nonsmooth extended monotropic optimization problem, a general convex optimization problem with a certain separable structure. The nonsmooth objective function is the sum of local objective functions assigned to agents in a multiagent network, subject to local set constraints and affine equality constraints. Each agent knows only its local objective function, its local set constraint, and the information exchanged with its neighbors. To solve this constrained convex optimization problem, we propose two novel distributed continuous-time subgradient-based algorithms, with projected output feedback and derivative feedback, respectively. Moreover, we prove the convergence of the proposed algorithms to the optimal solutions under mild conditions and analyze their convergence rates, using techniques from variational inequalities, decomposition methods, and differential inclusions. Finally, we give an example to illustrate the efficacy of the proposed algorithms.
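As a rough illustration of continuous-time distributed subgradient dynamics of this general kind, the toy forward-Euler simulation below has each agent follow a local subgradient while Laplacian coupling drives consensus and a projection enforces a local set constraint. This is a simplified consensus variant assuming a common decision variable, without the paper's affine equality constraints or derivative feedback; the four-agent ring network, `targets`, and all other names are illustrative.

```python
import numpy as np

n_agents, dt, T = 4, 0.01, 5000
L = np.array([[ 2, -1,  0, -1],        # ring-graph Laplacian (communication)
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [-1,  0, -1,  2]], dtype=float)
targets = np.array([1.0, 3.0, -2.0, 4.0])       # local costs f_i(x) = |x - t_i|
subgrad = lambda x, t: np.sign(x - t)           # a subgradient of |x - t_i|
proj = lambda x: np.clip(x, -5.0, 5.0)          # projection onto local box set

x = np.array([5.0, -5.0, 2.0, 0.0])             # initial agent states
for _ in range(T):
    # x_i' = -sum_j L_ij x_j - df_i(x_i), then project onto the local set
    xdot = -L @ x - np.array([subgrad(x[i], targets[i]) for i in range(n_agents)])
    x = proj(x + dt * xdot)

# Agents cluster near a common minimizer of sum_i |x - t_i| (any point in [1, 3]).
print(x)
```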
This paper focuses on a distributed nonsmooth composite optimization problem over a multiagent networked system, in which each agent is equipped with a local Lipschitz-differentiable function and two possibly nonsmooth functions, one of which incorporates a linear mapping. To address this problem, we introduce a synchronous distributed algorithm featuring uncoordinated relaxed factors, which serves as a generalized relaxed version of the recent method TriPD-Dist. Notably, this problem remains relatively unexplored in the presence of asynchrony and delays. In response, a new asynchronous distributed primal-dual proximal algorithm is proposed, rooted in a comprehensive asynchronous model. It operates under the assumption that agents use possibly outdated information from their neighbors, with arbitrary, time-varying, yet bounded delays. With some special adjustments, new asynchronous distributed extensions of existing centralized methods are obtained via the proposed asynchronous algorithm. Theoretically, a new convergence analysis technique for the proposed algorithms is provided. Specifically, a sublinear convergence rate is derived explicitly by showing that the iteration behaves as a nonexpansive operator. In addition, the proposed asynchronous algorithm converges to the optimal solution in expectation under the same step-size conditions as its synchronous counterpart. Finally, numerical studies substantiate the efficacy of the proposed algorithms and validate their performance in practical scenarios.
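As context for the primal-dual proximal structure, the sketch below runs a centralized Condat-Vu-type iteration for the three-term composite model min_x f(x) + g(x) + h(Lx), with f smooth and g, h prox-friendly; this is the shape of problem each agent holds locally, while the paper's synchronous and asynchronous distributed schemes add network communication, uncoordinated relaxed factors, and delay handling on top of an operator of this kind. The toy fused-lasso instance, step-size choices, and all names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 30, 20
A, b = rng.standard_normal((m, n)), rng.standard_normal(m)
D = np.eye(n - 1, n, 1) - np.eye(n - 1, n)       # linear mapping L: differences
lam, mu = 0.1, 0.1                               # g = lam*||.||_1, h = mu*||.||_1

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1
grad_f = lambda x: A.T @ (A @ x - b)             # f(x) = 0.5*||Ax - b||^2

Lf = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of grad f
nL = np.linalg.norm(D, 2)                        # operator norm of L
tau = 1.0 / (Lf / 2 + nL)                        # step sizes satisfying the usual
sigma = 0.99 * (1.0 / tau - Lf / 2) / nL ** 2    # condition tau*(Lf/2 + sigma*||L||^2) <= 1

x, y = np.zeros(n), np.zeros(n - 1)
for _ in range(2000):
    x_new = soft(x - tau * (grad_f(x) + D.T @ y), tau * lam)   # primal prox step
    v = y + sigma * D @ (2 * x_new - x)                        # extrapolated dual input
    y = v - sigma * soft(v / sigma, mu / sigma)                # prox of h* via Moreau
    x = x_new

print(np.round(x, 3))
```

The fixed-point map of this iteration is (averaged) nonexpansive in a suitable metric, which is exactly the property the abstract's convergence analysis exploits to obtain a sublinear rate.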