In this paper, we consider the distributed online optimization problem on a time-varying network, where each agent on the network has its own time-varying objective function and the goal is to minimize the accumulated overall loss. Moreover, we focus on distributed algorithms that use neither gradient information nor projection operators, so as to improve applicability and computational efficiency. By introducing deterministic differences and randomized differences to replace the gradient information of the objective functions, and by removing the projection operator of traditional algorithms, we design two kinds of gradient-free distributed online optimization algorithms without a projection step, which save considerable computational resources and impose fewer restrictions on applicability. We prove that both algorithms achieve consensus of the estimates and a regret of $O(\log(T))$ for locally strongly convex objectives. Finally, a simulation example is provided to verify the theoretical results.
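The abstract's key idea is replacing the gradient oracle with function-value differences. A minimal sketch of a two-point randomized-difference estimator of this general kind is shown below; the function name, parameters, and the choice of a uniform direction on the unit sphere are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def randomized_difference_estimate(f, x, delta=1e-3, rng=None):
    """Two-point randomized-difference surrogate for the gradient of f at x.
    Uses only function evaluations (no gradient oracle): the directional
    difference along a random unit vector u, scaled by the dimension d,
    is an unbiased estimate of the gradient of a smoothed version of f."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # random direction on the unit sphere
    return d * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
```

For a quadratic objective the symmetric difference is exact along the sampled direction, so averaging many independent estimates recovers the true gradient.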
This letter considers online distributed convex constrained optimization over a time-varying multi-agent network. Agents in this network cooperate to minimize the global objective function through information exchange with their neighbors and local computation. Since the capacity or bandwidth of communication channels is often limited, a random quantizer is introduced to reduce the number of transmitted bits. By incorporating this quantizer, we develop a quantized distributed online projection-free optimization algorithm, which saves both communication resources and computational costs. For different parameter settings of the quantizer, we establish the corresponding dynamic regret upper bounds of the proposed algorithm and reveal the trade-off between convergence performance and the quantization effect. Finally, the theoretical results are illustrated by a simulation of a distributed online linear regression problem.
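A common form of random quantizer in this setting is stochastic (dithered) rounding to a uniform grid, which is unbiased in expectation. The sketch below illustrates the idea under assumed parameters (`levels`, `bound`); the letter's actual quantizer and its parameterization may differ.

```python
import numpy as np

def random_quantize(x, levels=16, bound=1.0, rng=None):
    """Unbiased stochastic quantizer: each entry of x (clipped to
    [-bound, bound]) is mapped to one of `levels` uniformly spaced grid
    points, rounding up or down at random with probabilities chosen so
    that E[q(x)] equals the clipped input. Each entry can then be
    transmitted with log2(levels) bits."""
    rng = np.random.default_rng() if rng is None else rng
    step = 2 * bound / (levels - 1)
    y = np.clip(x, -bound, bound)
    k = np.floor((y + bound) / step)   # index of the lower grid point
    p = (y + bound) / step - k         # probability of rounding up
    k = k + (rng.random(y.shape) < p)
    return k * step - bound
```

Fewer levels mean fewer transmitted bits but larger quantization variance, which is the communication/convergence trade-off the regret bounds quantify.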