This article proposes an extended zero-gradient-sum (EZGS) approach for solving constrained distributed optimization with free initialization and desired convergence properties. A Newton-based continuous-time algorithm is first designed for general constrained optimization, which is adapted to handle inequality constraints by using log-barrier penalty functions. Then, a general class of EZGS dynamics is developed to address equality-constrained distributed optimization, where an auxiliary dynamics is introduced to ensure the final ZGS property from any initialization. It is demonstrated that, for typical consensus protocols and auxiliary dynamics, the proposed EZGS dynamics can achieve exponential/finite/fixed/prescribed-time (PT) convergence. In particular, the nonlinear consensus protocols for finite-time EZGS algorithms allow for heterogeneous power coefficients. Significantly, the proposed PT EZGS dynamics is continuous, uniformly bounded, and capable of reaching the optimal solution in a single stage. Furthermore, the barrier method is employed to handle the inequality constraints effectively. Finally, the efficiency and performance of the proposed algorithms are validated through numerical examples, highlighting their superiority over existing methods. In particular, by selecting appropriate protocols, the proposed EZGS dynamics can achieve the desired convergence performance.
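The log-barrier treatment of inequality constraints mentioned in this abstract can be illustrated with a minimal sketch. The toy problem, function names, and parameter values below are illustrative assumptions, not taken from the paper; the inner loop uses a damped Newton step, in the spirit of the Newton-based design the abstract describes.

```python
def barrier_newton(x=0.0, t=1.0, mu=10.0, outer=8, inner=50):
    """Log-barrier method for the toy problem:
        minimize (x - 3)^2  subject to  x <= 1   (optimum x* = 1).
    Each outer stage minimizes phi(x) = (x-3)^2 - (1/t)*log(1-x)
    by damped Newton steps, then tightens the barrier by t *= mu."""
    for _ in range(outer):
        for _ in range(inner):
            grad = 2.0 * (x - 3.0) + 1.0 / (t * (1.0 - x))
            hess = 2.0 + 1.0 / (t * (1.0 - x) ** 2)
            step = grad / hess
            while x - step >= 1.0:   # damp the step to stay strictly feasible
                step *= 0.5
            x -= step
        t *= mu                      # barrier weight 1/t shrinks each stage
    return x

x_star = barrier_newton()            # approaches the boundary optimum x* = 1
```

As t grows, the minimizer of the barrier-augmented objective approaches the constraint boundary from the interior (here, 1 - x ≈ 1/(4t)), which is the standard mechanism barrier methods exploit.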
We develop an exponentially convergent distributed algorithm to minimize a sum of nonsmooth cost functions with a set constraint. The set constraint generally introduces nonlinearity into distributed algorithms, which makes it difficult to derive an exponential rate. In this article, we remove the consensus constraints by an exact penalty method, and then propose a distributed projected subgradient algorithm by virtue of a differential inclusion and a differentiated projection operator. Resorting to nonsmooth analysis, we prove the convergence of this algorithm and, moreover, establish both sublinear and exponential rates under some mild assumptions.
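A minimal discrete-time sketch of a distributed projected subgradient iteration of the kind this abstract studies is given below. The network, cost functions, step sizes, and constraint set are illustrative assumptions; the paper's actual algorithm is continuous-time and uses an exact penalty, which this sketch does not reproduce.

```python
import numpy as np

# Three agents, local costs f_i(x) = |x - a_i|, shared constraint set X = [0, 10].
a = np.array([1.0, 4.0, 9.0])       # sum |x - a_i| is minimized at the median, x* = 4
W = np.array([[2/3, 1/3, 0.0],      # doubly stochastic mixing matrix (path graph)
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
x = np.array([0.0, 5.0, 10.0])      # arbitrary feasible initial states

for k in range(5000):
    alpha = 1.0 / (k + 1)           # diminishing step size
    mixed = W @ x                   # consensus (mixing) step
    sub = np.sign(mixed - a)        # a subgradient of |x - a_i| at the mixed state
    x = np.clip(mixed - alpha * sub, 0.0, 10.0)   # projected subgradient step
```

With diminishing steps, all agents cluster around the global minimizer x* = 4 (the median of the a_i), despite each agent only knowing its own nonsmooth cost.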
This paper investigates a class of constrained distributed zeroth-order optimization (ZOO) problems over time-varying unbalanced graphs while ensuring privacy preservation among individual agents. Since recent progress has addressed these concerns only separately, there remains a lack of solutions offering theoretical guarantees for both privacy protection and constrained ZOO over time-varying unbalanced graphs. We hereby propose a novel algorithm, termed the differential-privacy (DP) distributed push-sum-based zeroth-order constrained optimization algorithm (DP-ZOCOA). Operating over time-varying unbalanced graphs, DP-ZOCOA obviates the need for supplemental sub-optimization computations, thereby reducing overhead in comparison to distributed primal-dual methods. DP-ZOCOA is specifically tailored to tackle constrained ZOO problems over time-varying unbalanced graphs, offering a guarantee of convergence to the optimal solution while robustly preserving privacy. Moreover, we provide rigorous proofs of convergence and privacy for DP-ZOCOA, underscoring its efficacy in attaining optimal convergence without constraints. To enhance its applicability, we incorporate DP-ZOCOA into the federated learning framework and formulate a decentralized zeroth-order constrained federated learning algorithm (ZOCOA-FL) to address challenges stemming from the time-varying imbalance of the communication topology. Finally, the performance and effectiveness of the proposed algorithms are thoroughly evaluated through simulations on distributed least-squares (DLS) and decentralized federated learning (DFL) tasks.
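The core primitive behind zeroth-order optimization is a gradient estimate built only from function evaluations. The following sketch shows the standard two-point estimator on a toy quadratic; it is a generic illustration of ZOO, not the DP-ZOCOA update, and the function and constants are assumptions.

```python
import numpy as np

def zo_grad(f, x, delta=1e-4, samples=20000, rng=None):
    """Two-point zeroth-order gradient estimate:
        g = (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u
    averaged over random unit directions u; in expectation g ~ grad f(x)."""
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.size
    g = np.zeros(d)
    for _ in range(samples):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)       # uniform random direction on the sphere
        g += (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return g / samples

f = lambda x: float(x @ x)           # f(x) = ||x||^2, true gradient is 2x
x0 = np.array([1.0, -2.0, 3.0])
g_hat = zo_grad(f, x0)               # close to [2, -4, 6] after averaging
```

Because only f-values are queried, such estimators fit settings where gradients are unavailable, which is also what makes them natural carriers for differential-privacy noise.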
A nonsmooth distributed optimization problem subject to affine equality and convex inequality constraints is considered in this paper. All the local objective functions in the distributed optimization problem share a common decision variable. Taking privacy into consideration, each agent does not share its local information with other agents, including information about its local objective function and constraint set. To solve this distributed optimization problem, a neurodynamic approach based on penalty-like methods is proposed. It is proved that the presented neurodynamic approach converges to an optimal solution of the considered distributed optimization problem. The proposed approach has lower model complexity and computational load, achieved by reducing the number of auxiliary variables. In the end, two illustrative examples are given to show the effectiveness and practical application of the proposed neural network. (c) 2019 Elsevier B.V. All rights reserved.
A weight-balanced network plays an important role in the exact convergence of a distributed optimization algorithm, but is not always available in practice. Different from most existing works focusing on designing distributed algorithms, in this article, we analyze the convergence of a well-known distributed projected subgradient algorithm over time-varying general graph sequences, i.e., the weight matrices of the network are only required to be row stochastic instead of doubly stochastic. We first show that there may exist a graph sequence such that the algorithm is not convergent when the network switches freely within finitely many graphs. Then, to guarantee its convergence under any uniformly jointly strongly connected graph sequence, we provide a necessary and sufficient condition on the cost functions, i.e., the intersection of the optimal solution sets of all local optimization problems is nonempty. Furthermore, we surprisingly find that the algorithm is convergent for any periodically switching graph sequence, but the converged solution minimizes a weighted sum of local cost functions, where the weights depend on the Perron vectors of some product matrices of the underlying switching graphs. Finally, we consider a slightly broader class of quasi-periodically switching graph sequences, and show that the algorithm is convergent for any quasi-periodic graph sequence if and only if the network switches between only two graphs. This work helps us understand the impact of communication networks on the convergence of distributed algorithms, and complements existing results from a different viewpoint.
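The role of the Perron vector highlighted in this abstract is easy to see even for a single fixed row-stochastic matrix: plain consensus converges to a Perron-weighted average rather than the simple mean. The matrix and initial states below are illustrative assumptions.

```python
import numpy as np

# Row-stochastic (but not doubly stochastic) weight matrix of a strongly
# connected three-agent digraph with self-loops.
W = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.1, 0.1, 0.8]])

# Left Perron vector pi: pi @ W = pi, entries positive, summing to one.
vals, vecs = np.linalg.eig(W.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

x0 = np.array([1.0, 5.0, 9.0])
x = x0.copy()
for _ in range(300):
    x = W @ x        # plain consensus iteration x_{k+1} = W x_k
# every agent ends at the pi-weighted average pi @ x0, not the plain mean
```

For this W the Perron weights are roughly (0.26, 0.30, 0.44), so the consensus value is biased toward agent 3, which is exactly the kind of weighted-sum distortion the article analyzes for switching graphs.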
A distributed optimization problem (DOP) with affine equality and convex inequality constraints is studied in this article. First, the consensus constraint of the considered DOP is relaxed and a related approximate DOP (ADOP) is presented. It is proved that the optimal solutions of the ADOP (i.e., the near-optimal solutions of the original DOP) approach the optimal solutions of the original DOP. A continuous-time algorithm is proposed for the ADOP, and it is demonstrated that the state solution of the presented algorithm converges to the critical point set of the ADOP for general locally Lipschitz continuous objective functions. This means the presented algorithm is effective for distributed nonconvex optimization problems. In particular, when the objective functions are convex, the state solution of the presented algorithm is further proved to converge to a near-optimal solution of the original DOP. One illustrative example and an application to load sharing problems are shown to validate the effectiveness of the proposed algorithm.
ISBN (Print): 9781665436977
Stochastic subgradient algorithms (SSAs) are widely studied owing to their applications in distributed and online learning. However, in a distributed setting, their sub-linear convergence rates typically require a large number of information exchanges, which raises the overall communication burden. To reduce this burden, in this paper, we design two static stochastic event-based broadcasting protocols that operate in conjunction with SSAs to address a set-constrained distributed optimization problem (DOP). We address two notions of stochastic convergence, namely, almost sure and mean convergence; for each of these notions we design event-based broadcasting protocols, specifically, the stochastic event-thresholds. Subsequently, we illustrate the design via a numerical example and provide comparisons to evaluate its performance against existing event-based protocols.
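The basic mechanism of event-based broadcasting can be sketched as follows: each agent rebroadcasts its state only when it has drifted beyond a threshold since its last broadcast, and neighbors mix the stale broadcast values. This is a deterministic toy with an assumed decaying threshold, not the paper's stochastic event-thresholds.

```python
import numpy as np

W = np.array([[2/3, 1/3, 0.0],      # doubly stochastic mixing matrix
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
x = np.array([0.0, 5.0, 10.0])      # local states
xb = x.copy()                       # last-broadcast values held by neighbors
broadcasts = 0

for k in range(200):
    threshold = 0.5 * 0.95 ** k     # decaying event threshold (assumed schedule)
    for i in range(3):
        if abs(x[i] - xb[i]) > threshold:
            xb[i] = x[i]            # event: agent i broadcasts its current state
            broadcasts += 1
    x = W @ xb                      # agents only ever mix broadcast values
```

Consensus is still reached up to a threshold-sized error, while communication happens only at event times; the design question studied in the paper is how to pick the thresholds so that the accompanying SSA retains almost-sure or mean convergence.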
ISBN (Print): 9781538655832
Electric vehicles (EVs) are controllable loads from which the distribution grid operator can benefit in order to minimize load profile variations. In this paper, we propose a hierarchical distributed optimization framework in which the EV management system (EVMS), as part of the distribution grid management system, minimizes the load variance of the grid in communication with the EV aggregators that control the EV charging load of the distribution system feeders. The hierarchical distributed framework, based on the alternating direction method of multipliers (ADMM), increases the scalability of the EV charging infrastructure while decreasing the computational burden. In our proposed approach, each EV aggregator schedules the EV charging profiles of its feeder in a distributed fashion, which avoids sharing the EV owners' desired charging profile information and thus preserves privacy. To show the performance of our approach, we apply it to a case study with 100% EV penetration, including 4 feeders and 60 EVs, and show how the load variance of the system and the charging cost of individual EVs decrease.
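The ADMM machinery underlying such hierarchical schemes can be sketched in its simplest consensus form; the quadratic local costs, penalty parameter, and three-agent setup below are illustrative assumptions and do not reproduce the paper's EVMS/aggregator decomposition.

```python
import numpy as np

# Each "aggregator" i holds a quadratic local cost f_i(x) = (x - a_i)^2, and
# all must agree on a common decision x (consensus form of ADMM, scaled dual).
a = np.array([1.0, 4.0, 7.0])
rho = 1.0                # ADMM penalty parameter
x = np.zeros(3)          # local copies of the decision variable
z = 0.0                  # consensus (coordinator) variable
u = np.zeros(3)          # scaled dual variables

for _ in range(100):
    # x_i-update: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2, in closed form
    x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
    z = np.mean(x + u)   # z-update: simple averaging by the coordinator
    u = u + x - z        # dual update (scaled dual ascent)
# z converges to the minimizer of sum_i (x - a_i)^2, i.e. the mean of a
```

The appeal for EV scheduling is that each x-update uses only local cost data, while the coordinator only ever sees the aggregated quantities x + u, which is what enables the privacy-preserving property the abstract describes.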
This technical note studies the distributed optimization problem of minimizing a sum of nonsmooth convex cost functions with local constraints. At first, we propose a novel distributed continuous-time projected algorithm for the constrained optimization problem, in which each agent knows only its local cost function and local constraint set. Then we prove that all agents of the algorithm find the same optimal solution while keeping their states bounded throughout. We conduct a complete convergence analysis by employing nonsmooth Lyapunov functions for the stability analysis of differential inclusions. Finally, we provide a numerical example for illustration.