ISBN (print): 9781538613955
In this paper, we study the optimization problems for a group of agents whose individual objective functions and constraints may depend on the variables of neighboring agents. Several algorithms are proposed based on operator splitting techniques that can iteratively converge to an optimal primal (or dual) solution of the optimization problems. Then, via random coordinate updates, asynchronous implementations of the algorithms are developed with low computation and communication complexity and guaranteed almost sure convergence to an optimal solution. Numerical results are presented to illustrate the proposed algorithms.
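The random coordinate updates mentioned above can be illustrated with a toy sketch in which, per iteration, a single randomly chosen agent (coordinate) performs its update. A plain gradient step on a smooth objective stands in for the paper's operator-splitting update; `random_coordinate_descent` and its parameters are illustrative assumptions, not the authors' algorithm.

```python
import random

def random_coordinate_descent(grad, x0, step, iters=2000, seed=0):
    """Toy asynchronous flavor: each iteration, one randomly chosen
    coordinate (agent) takes a gradient step while the others stay put.
    Stand-in for the paper's operator-splitting updates."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(iters):
        i = rng.randrange(len(x))      # random coordinate = random agent
        x[i] -= step * grad(x)[i]
    return x
```

On a separable quadratic each coordinate contracts toward its minimizer independently, so the iterates converge to the optimum with probability one, mirroring the almost sure convergence claimed for the asynchronous schemes.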
In this paper, we provide a distributed methodology to allow a network of agents, tasked to execute a distributed algorithm, to overcome Man-in-the-Middle (MITM) attacks that aim at steering the result of the algorithm toward inconsistent values or dangerous configurations. We want the agents to be able to restore the correct result of the algorithm despite the attacks. To this end, we provide a distributed algorithm to let the set of agents, interconnected by an undirected network topology, construct several edge-disjoint spanning trees by assigning labels to their incident edges. The ultimate objective is to use these spanning trees to run multiple instances of the same distributed algorithm in parallel, in order to detect MITM attacks or other faulty or malicious link behavior (e.g., when the instances yield different results) and to restore the correct result (when the majority of instances is unaffected). The proposed algorithm is lightweight and asynchronous, and is based on iterated depth-first visits on the graph. We complement this paper with a thorough analysis of the performance of the proposed algorithms.
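The recovery rule described above (run the same algorithm once per edge-disjoint spanning tree, then keep the majority outcome) can be sketched in a few lines; `majority_result` is a hypothetical helper, not the paper's protocol.

```python
from collections import Counter

def majority_result(results):
    """Keep the outcome returned by a strict majority of the parallel
    instances (one per edge-disjoint spanning tree). If no strict
    majority exists, the attack is detected but not recoverable."""
    value, count = Counter(results).most_common(1)[0]
    if count > len(results) // 2:
        return value
    raise ValueError("no majority: inconsistent instances detected")
```

With k trees, a MITM attacker corrupting links on fewer than k/2 trees is out-voted; corrupting more only triggers detection.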
Two asynchronous distributed algorithms are presented for solving a linear equation of the form Ax = b with at least one solution. The equation is simultaneously and asynchronously solved by m agents assuming that each agent knows only a subset of the rows of the partitioned matrix [A b], the estimates of the equation's solution generated by its neighbors, and nothing more. Neighbor relationships are characterized by a time-dependent directed graph whose vertices correspond to agents and whose arcs depict neighbor relationships. Each agent recursively updates its estimate of a solution at its own event times by utilizing estimates generated by its neighbors, which are transmitted with delays. The event time sequences of different agents are not assumed to be synchronized. It is shown that for any matrix-vector pair (A, b) for which the equation has a solution, and any repeatedly jointly strongly connected sequence of neighbor graphs defined on the merged sequence of all agents' event times, the algorithms cause all agents' estimates to converge exponentially fast to the same solution of Ax = b. The first algorithm requires a specific initialization step at each agent, while the second works for arbitrary initializations. Explicit expressions for convergence rates are provided, and a relation between local initializations and limiting consensus solutions is established, which is used to obtain the least 2-norm solution.
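A minimal synchronous sketch of this style of algorithm: each agent starts at a solution of its own rows and, each round, moves toward its neighbors' average while projecting the move onto the kernel of its local block, so its local equation stays satisfied. The function name, the synchronous timing, and the fully reliable links are all simplifications of the asynchronous, delayed scheme described above.

```python
import numpy as np

def projection_consensus(A_blocks, b_blocks, neighbors, iters=2000):
    """Agent i knows only its rows (A_i, b_i). It initializes at a
    solution of A_i x = b_i and projects disagreement with neighbors
    onto ker(A_i), keeping A_i x_i = b_i invariant."""
    m = len(A_blocks)
    xs = [np.linalg.lstsq(Ai, bi, rcond=None)[0]
          for Ai, bi in zip(A_blocks, b_blocks)]
    # P_i: orthogonal projection onto the kernel of A_i.
    Ps = [np.eye(Ai.shape[1]) - np.linalg.pinv(Ai) @ Ai for Ai in A_blocks]
    for _ in range(iters):
        avgs = [np.mean([xs[j] for j in neighbors[i]] + [xs[i]], axis=0)
                for i in range(m)]
        xs = [xs[i] - Ps[i] @ (xs[i] - avgs[i]) for i in range(m)]
    return xs
```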
Principal component analysis (PCA) is a fundamental primitive of many data analysis, array processing, and machine learning methods. In applications where extremely large arrays of data are involved, particularly in distributed data acquisition systems, distributed PCA algorithms can harness local communications and network connectivity to overcome the need to communicate and access the entire array at a single location. A key feature of distributed PCA algorithms is that they defy the conventional notion that the first step toward computing the principal vectors is to form a sample covariance matrix. This paper surveys methodologies for performing distributed PCA on different data sets, their performance, and their applications in the context of distributed data acquisition systems.
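A small sketch of why the sample covariance never needs to be formed: distributed power iteration only ever applies the local operator v → Y_iᵀ(Y_i v) on each node's rows. The exact sum below stands in for in-network consensus averaging, zero-mean data is assumed, and `distributed_power_iteration` is illustrative rather than a method from the survey.

```python
import numpy as np

def distributed_power_iteration(local_data, iters=100, seed=0):
    """Leading principal direction without forming the global covariance:
    each node applies only its local map v -> Y_i.T @ (Y_i @ v); the sum
    stands in for in-network averaging. Assumes zero-mean data."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(local_data[0].shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = sum(Y.T @ (Y @ v) for Y in local_data)  # aggregated local products
        v = w / np.linalg.norm(w)
    return v
```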
In this study, we investigate the problem of accelerating distributed algorithms for solving linear equations over multi-agent networks. While almost all distributed algorithms in the literature assume that the equations are not shared with neighboring agents, we show that this assumption is not restrictive and propose an algorithm that can be used to determine the equations of the neighbors. We also present a numerical example illustrating that the convergence rate of a distributed algorithm can be significantly improved by using the proposed algorithm.
The Lovász local lemma (LLL), introduced by Erdős and Lovász in 1975, is a powerful tool of the probabilistic method that allows one to prove that, with non-zero probability, none of a set of n "bad" events occurs, provided that the events have limited dependence. However, the LLL itself does not suggest how to find a point avoiding all bad events. Since the works of Alon (Random Struct Algorithms 2(4): 367-378, 1991) and Beck (Random Struct Algorithms 2(4): 343-365, 1991) there has been a sustained effort to find a constructive proof (i.e. an algorithm) for the LLL or weaker versions of it. In a major breakthrough, Moser and Tardos (J ACM 57(2): 11, 2010) showed that a point avoiding all bad events can be found efficiently. They also proposed a distributed/parallel version of their algorithm that requires O(log^2 n) rounds of communication in a distributed network. In this paper we provide two new distributed algorithms for the LLL that improve on both the efficiency and simplicity of the Moser-Tardos algorithm. For clarity we express our results in terms of the symmetric LLL, though both algorithms deal with the asymmetric version as well. Let p bound the probability of any bad event and d be the maximum degree in the dependency graph of the bad events. When epd^2 < 1 we give a truly simple LLL algorithm running in O(log_{1/epd^2} n) rounds. Under the weaker condition ep(d + 1) < 1, we give a slightly slower algorithm running in O(log^2 d · log_{1/ep(d+1)} n) rounds. Furthermore, we give an algorithm that runs in sublogarithmic rounds under the condition p · f(d) < 1, where f(d) is an exponential function of d. Although the conditions of the LLL are locally verifiable, we prove that any distributed LLL algorithm requires Ω(log* n) rounds. In many graph coloring problems the existence of a valid coloring is established by one or more applications of the LLL. Using our LLL algorithms, we give logarithmic-time distributed algorithms for frugal coloring, defective c
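The Moser-Tardos procedure referenced above is simple enough to sketch sequentially: draw all variables at random and, while some bad event holds, resample exactly the variables it depends on. The interface below is an illustrative assumption (the paper's contributions are the distributed versions).

```python
import random

def moser_tardos(n_vars, bad_events, sampler, seed=0):
    """Sequential Moser-Tardos resampling. `bad_events` is a list of
    (predicate, variable-index set) pairs; `sampler(i, rng)` draws a
    fresh random value for variable i."""
    rng = random.Random(seed)
    x = [sampler(i, rng) for i in range(n_vars)]
    while True:
        violated = next((vs for pred, vs in bad_events if pred(x)), None)
        if violated is None:
            return x          # no bad event holds: done
        for i in violated:    # resample only the offending variables
            x[i] = sampler(i, rng)
```

On a sparse k-SAT instance (bad event = all literals of a clause false) this terminates quickly in expectation whenever the LLL condition holds.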
Consensus-based distributed algorithms have been the key to many problems arising in multi-agent systems, including reinforcement learning [1], [2], formation control [3], [4], task allocation [5], and so on. By consensus here is meant that all agents in the network reach an agreement regarding a certain quantity of interest [6], [7]. By distributed here is meant that the whole multi-agent system achieves global objectives through only local coordination among nearby neighbors [8]. On one hand, the absence of central controllers in multi-agent systems makes them inherently robust against individual agent/link failures. On the other hand, the high dependence of the whole system on local coordination raises a significant concern: algorithms for multi-agent networks may break down in the presence of even one malicious agent [9]. This has motivated us to develop methodologies to achieve resilience, in order to guarantee good performance for consensus-based distributed algorithms, especially in hostile environments. One challenge along this direction comes from the fact that each agent usually has only locally available information, which makes it very difficult to identify or isolate malicious agents [10]. The authors of [11]-[13] have achieved significant progress by showing that, given N adversarial nodes under Byzantine attacks, there exists a strategy for normal agents to achieve consensus if the network connectivity is 2N+1. These results are usually computationally expensive, assume the network topology to be all-to-all, or require normal agents to be aware of non-local information. Most recently, the authors of [14], [15] have investigated consensus-based distributed optimization under adversarial agents. They introduced a local filtering mechanism which allows each agent to discard the most extreme values in its neighborhood at each step. This is not directly applicable to consensus-based distributed computation algorithms [16]-[19], in which extreme val
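The local filtering mechanism of [14], [15] (discarding the most extreme neighborhood values) can be sketched as a W-MSR-style trimmed update; `wmsr_step`, the scalar states, and the synchronous rounds are simplifying assumptions for illustration.

```python
def wmsr_step(values, neighbors, f):
    """One synchronous round of a W-MSR-style trimmed update: each agent
    discards up to f neighbor values strictly above its own and up to f
    strictly below, then averages what remains together with its own."""
    new = {}
    for i, vi in values.items():
        above = sorted(values[j] for j in neighbors[i] if values[j] > vi)
        below = sorted(values[j] for j in neighbors[i] if values[j] < vi)
        equal = [values[j] for j in neighbors[i] if values[j] == vi]
        kept = below[f:] + equal + (above[:-f] if f else above) + [vi]
        new[i] = sum(kept) / len(kept)
    return new
```

With f = 1, one attacker broadcasting an extreme constant is filtered out by every normal agent, and the normal agents still agree on a value inside the range of their initial states.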
In 1994, Thomassen proved that every planar graph is 5-list-colorable. In 1995, Thomassen proved that every planar graph of girth at least five is 3-list-colorable. His proofs naturally lead to quadratic-time algorithms to find such colorings. Here, we provide the first linear-time algorithms to find such colorings. For a fixed surface S, Thomassen showed in 1997 that there exists a linear-time algorithm to decide if a graph embedded in S is 5-colorable and similarly in 2003 if a graph of girth at least five embedded in S is 3-colorable. Using the theory of hyperbolic families, the author and Thomas showed such algorithms exist for list-colorings. Around the same time, Dvorak and Kawarabayashi also provided such algorithms. Moreover, they gave an algorithm to find such colorings (if they exist). Here we provide the first such algorithm which is fixed parameter tractable with genus as the parameter; indeed, we provide a linear-time algorithm to find such colorings. In 1988, Goldberg, Plotkin and Shannon provided a deterministic distributed algorithm for 7-coloring n-vertex planar graphs in O(log n) rounds. In 2018, Aboulker, Bonamy, Bousquet, and Esperet provided a deterministic distributed algorithm for 6-coloring n-vertex planar graphs in polylogarithmic rounds. Their algorithm in fact works for 6-list-coloring. They also provided a polylogarithmic algorithm for 4-list-coloring triangle-free planar graphs. Chechik and Mukhtar independently obtained such algorithms for ordinary coloring in O(log n) rounds, which is best possible in terms of running time. Here we provide the first polylogarithmic deterministic distributed algorithms for 5-coloring n-vertex planar graphs and similarly for 3-coloring planar graphs of girth at least five. Indeed, these algorithms run in O(log n) rounds, work also for list-colorings, and even work on a fixed surface (assuming such a coloring exists).
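For intuition, the classical sequential idea behind such colorability results (not the paper's distributed or linear-time algorithms) is a degeneracy argument: a planar graph always has a vertex of degree at most 5, so one can peel vertices off and color greedily from a 6-color palette in reverse order. A sketch:

```python
def six_color_planar(adj):
    """Peel off a vertex of degree <= 5 among the remaining vertices
    (one always exists in a planar graph), then color in reverse peel
    order: each vertex sees at most 5 colored neighbors, so one of 6
    colors is always free."""
    neigh = {v: set(ns) for v, ns in adj.items()}
    remaining = set(neigh)
    order = []
    while remaining:
        v = next(u for u in remaining if len(neigh[u] & remaining) <= 5)
        order.append(v)
        remaining.remove(v)
    color = {}
    for v in reversed(order):
        used = {color[u] for u in neigh[v] if u in color}
        color[v] = next(c for c in range(6) if c not in used)
    return color
```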
ISBN (digital): 9781728155494
ISBN (print): 9781728155500
We study distributed composite optimization over networks: agents minimize the sum of a smooth (strongly) convex function (the agents' sum-utility) plus a nonsmooth (extended-valued) convex one. We propose a general algorithmic framework for this class of problems and provide a unified convergence analysis leveraging the theory of operator splitting. Our results unify several approaches proposed in the literature of distributed optimization for special instances of our formulation. Distinguishing features of our scheme are: (i) when the agents' functions are strongly convex, the algorithm converges at a linear rate, whose dependencies on the agents' functions and the network topology are decoupled, matching the typical rates of centralized optimization; (ii) the step-size does not depend on the network parameters but only on the optimization ones; and (iii) the algorithm can adjust the ratio between the number of communications and computations to achieve the same rate as the centralized proximal gradient scheme (in terms of computations). This is the first time that a distributed algorithm applicable to composite optimization enjoys such properties.
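The centralized proximal gradient scheme that serves as the benchmark in (iii) can be sketched for the l1-regularized case, where the proximal operator is elementwise soft-thresholding; `prox_gradient` and the fixed step size are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(grad_f, lam, x0, step, iters=500):
    """Centralized proximal gradient for min_x f(x) + lam*||x||_1,
    the single-machine benchmark the distributed schemes match."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = soft_threshold(x - step * grad_f(x), step * lam)
    return x
```

For f(x) = 0.5*||x - c||^2 and step 1, a single iteration already lands on the minimizer soft_threshold(c, lam), which makes the fixed point easy to check.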
ISBN (print): 9781450361842
In this brief announcement we summarize our results concerning distributed algorithms for LP-type problems in the well-known gossip model. LP-type problems include many important classes of problems such as (integer) linear programming, geometric problems like smallest enclosing ball and polytope distance, and set problems like hitting set and set cover. In the gossip model, a node can only push information to or pull information from nodes chosen uniformly at random. Protocols for the gossip model are usually very practical due to their fast convergence, their simplicity, and their stability under stress and disruptions. Our algorithms are very efficient (logarithmic rounds or better with just polylogarithmic communication work per node per round) whenever the combinatorial dimension of the given LP-type problem is constant, even if the size of the given LP-type problem is polynomially large in the number of nodes.
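For background, a minimal sketch of pairwise gossip averaging in this push/pull spirit, with uniformly random peers (illustrative only; the paper's LP-type protocols are considerably more involved):

```python
import random

def gossip_average(values, rounds=5000, seed=0):
    """Pairwise gossip: repeatedly a node contacts a uniformly random
    peer and both replace their values with the pair's average; the
    global mean is (essentially) conserved while the spread shrinks."""
    rng = random.Random(seed)
    vals = list(values)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            vals[i] = vals[j] = (vals[i] + vals[j]) / 2.0
    return vals
```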