With the development of big data and artificial intelligence, distributed optimization has emerged as an indispensable tool for solving large-scale problems. In particular, multi-agent systems based on distributed information processing can be elaborately designed for distributed optimization, in which the agents collaboratively minimize a global objective function, composed of a sum of local cost functions, subject to local and/or global constraints. Inspired by applications in resource allocation, machine learning, power systems, sensor networks and cloud computing, a variety of distributed optimization models and algorithms have been investigated and developed. The optimization models include unconstrained and constrained problems in continuous and discontinuous systems with undirected and directed communication topology graphs. The constraints include bounded constraints as well as separable and inseparable equality and inequality constraints. Meanwhile, in distributed algorithms, every agent executes its local computation and updating on the basis of its own data and the information exchanged with its neighboring agents over the underlying communication network, in order to solve the optimization problem in a distributed way. This paper provides a comprehensive overview of extant distributed models and algorithms for distributed optimization. (c) 2021 Elsevier B.V. All rights reserved.
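The class of algorithms this survey covers can be illustrated with a minimal consensus-based distributed gradient sketch. Everything below is a toy stand-in, not any specific algorithm from the survey: the ring topology, Metropolis mixing weights and quadratic local costs are all made up. Four agents minimize the sum of (x - b_i)^2, whose global optimum is the mean of the b_i, using only local gradients and neighbor averaging.

```python
# Illustrative setup: 4 agents on a ring, local costs f_i(x) = (x - b_i)^2.
b = [1.0, 2.0, 3.0, 6.0]
# Doubly stochastic mixing weights for the ring (Metropolis weights).
W = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]

x = [0.0] * 4                            # each agent's local estimate
for k in range(2000):
    step = 1.0 / (k + 10)                # diminishing step size
    # mix with neighbors, then take a step along the local gradient only
    mixed = [sum(W[i][j] * x[j] for j in range(4)) for i in range(4)]
    x = [mixed[i] - step * 2.0 * (x[i] - b[i]) for i in range(4)]
# every agent's estimate approaches the global minimizer mean(b) = 3.0
```

The doubly stochastic weight matrix is what makes the agents' average track the centralized gradient step, while the mixing term drives them toward consensus.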
Radio access network (RAN) slicing is an effective methodology to dynamically allocate networking resources in 5G networks. One of the main challenges of RAN slicing is that it is provably an NP-Hard problem. For this reason, we design near-optimal low-complexity distributed RAN slicing algorithms. First, we model the slicing problem as a congestion game, and demonstrate that such game admits a unique Nash equilibrium (NE). Then, we evaluate the Price of Anarchy (PoA) of the NE, i.e., the efficiency of the NE as compared with the social optimum, and demonstrate that the PoA is upper-bounded by 3/2. Next, we propose two fully-distributed algorithms that provably converge to the unique NE without revealing privacy-sensitive parameters from the slice tenants. Moreover, we introduce an adaptive pricing mechanism of the wireless resources to improve the network owner's profit. We evaluate the performance of our algorithms through simulations and an experimental testbed deployed on the Amazon EC2 cloud, both based on a real-world dataset of base stations from the OpenCellID project. The results show that our algorithms converge rapidly to the NE and achieve near-optimal performance, while our pricing mechanism effectively improves the profit of the network owner.
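The congestion-game viewpoint can be sketched with a generic best-response iteration. This is a toy singleton congestion game, not the paper's RAN-slicing model: the player count, identical resources and unit delay weights are illustrative. Because congestion games are potential games, round-robin best-response dynamics reaches a pure Nash equilibrium.

```python
def best_response_dynamics(n_players, delays, max_rounds=100):
    """delays[r] is the per-unit-load latency of resource r; a player's
    cost for using r is delays[r] times the total load on r."""
    choice = [0] * n_players                  # everyone starts on resource 0
    for _ in range(max_rounds):
        moved = False
        for p in range(n_players):
            load = [0] * len(delays)
            for r in choice:
                load[r] += 1
            def cost(r):                      # p's cost after moving to r
                return delays[r] * (load[r] + (0 if choice[p] == r else 1))
            best = min(range(len(delays)), key=cost)
            if cost(best) < cost(choice[p]):
                choice[p] = best              # profitable deviation: take it
                moved = True
        if not moved:                         # no one can improve: a pure NE
            return choice
    return choice

# 6 players over 3 identical resources: the NE balances the load at 2 each
eq = best_response_dynamics(6, [1.0, 1.0, 1.0])
```

Each player needs only the current loads, not the other tenants' private parameters, which is the flavor of distributed, privacy-preserving convergence the paper pursues.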
The Internet of Things (IoT) is rapidly gaining ground in future wireless communications. Transmission reliability and latency are two significant metrics for IoT deployments. In this paper, we aim to meet reliability and latency requirements by solving the link scheduling problem. Under the Rayleigh fading model, a more realistic interference model, we first localize the global interference by ignoring the interference beyond a certain distance, and show that the success probability of a transmission is at least 1 - ε, where ε is an acceptable error probability of a transmission. Based on this key result, we then design two localized and distributed algorithms for the one-slot scheduling problem (i.e., how to ensure that the selected links have high transmission reliability or can be scheduled successfully). In addition, we design a localized and distributed algorithm with time complexity O(Δ_max^{T,r} log n) for the latency minimization problem (i.e., minimizing the number of time slots until all transmissions are successful), where Δ_max^{T,r} is the maximum number of senders within distance R_T of a receiver in the network, and R_T is the transmission range of a node. Theoretical analysis and extensive simulations demonstrate that the proposed algorithms significantly improve reliability and latency. (C) 2019 Elsevier B.V. All rights reserved.
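The localization idea rests on a standard Rayleigh-fading fact: the success probability of a link factors into a noise term times one factor per interferer, so dropping interferers beyond a cutoff changes the bound only marginally. The sketch below uses that standard closed form with made-up distances and parameters; it is an illustration of the principle, not the paper's algorithm.

```python
import math

def success_probability(d, interferer_dists, alpha=4.0, beta=2.0,
                        noise_over_power=1e-3, cutoff=float("inf")):
    """Rayleigh-fading success probability for a link of length d with
    SINR threshold beta, counting only interferers within `cutoff`."""
    p = math.exp(-beta * noise_over_power * d ** alpha)   # noise term
    for di in interferer_dists:
        if di <= cutoff:
            # averaging over the exponential fading gives one factor per interferer
            p *= 1.0 / (1.0 + beta * (d / di) ** alpha)
    return p

p_all = success_probability(1.0, [3.0, 5.0, 40.0])
p_loc = success_probability(1.0, [3.0, 5.0, 40.0], cutoff=10.0)
# the far interferer at distance 40 contributes almost nothing to the bound
```

Since each dropped factor is at most 1, the truncated estimate p_loc is always an upper bound on p_all, and the gap can be bounded by ε by choosing the cutoff appropriately.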
This paper presents a probabilistic performance analysis of a deadlock detection algorithm in distributed systems. Although deadlock detection algorithms in distributed systems have been studied extensively, little attention has been paid to their performance. Most performance studies have relied on simulation rather than analytic models. To the best of our knowledge, Min [14] made the sole attempt to evaluate the performance of distributed deadlock detection algorithms analytically. Unlike Min's [14], our analytic approach takes the time-dependent behavior of each process into consideration rather than relying on mean-value estimation. Furthermore, the relation among the times at which deadlocked processes become blocked is studied, which enhances the accuracy of the analysis. We measure performance metrics such as the duration of deadlock, the number of algorithm invocations, and the mean waiting time of a blocked process. It is shown that the analytic estimates are nearly consistent with simulation results.
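The event whose duration and frequency the analysis measures is the classical one: under a single-resource model, a set of processes is deadlocked exactly when the wait-for graph contains a cycle. A minimal sketch (process names and graph shape are made up):

```python
def has_deadlock(wait_for):
    """wait_for[p] is the process p is blocked on, or None if p is running.
    Returns True iff the wait-for graph contains a cycle."""
    for start in wait_for:
        seen = set()
        p = start
        while p is not None:
            if p in seen:          # revisited a process: cycle found
                return True
            seen.add(p)
            p = wait_for.get(p)    # follow the single wait-for edge
        # the walk ended at a running process, so no cycle through `start`
    return False

# A and B wait on each other (deadlocked); C is running
assert has_deadlock({"A": "B", "B": "A", "C": None})
```

A detection algorithm invoked whenever a process blocks would run this check starting from the newly blocked process, which is the invocation count the paper's analysis estimates.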
Model checking has advanced over the last decades to become an effective formal technique for verifying distributed and concurrent systems. As computers grew in memory and processing capacity, it became possible to exhaustively verify systems with billions of states, making it practical to model and verify real-world protocols and algorithms. However, writing a model is a manual task that can introduce defects which the model checker may nevertheless report as fulfilling the formal specification (e.g., an incorrect model that fulfills an incomplete specification). Furthermore, this kind of formal verification technique is limited by the well-known state-space explosion problem. This paper aims to provide a set of generic template models, appropriate for distributed round-based algorithms, to be used to focus modeling effort on algorithm-specific details. To mitigate state-space explosion, the paper proposes two reduction techniques, named partition symmetry reduction and message order reduction, that exploit symmetries in the state space to avoid expanding equivalent states. The reusable framework for verifying round-based algorithms and the two proposed reduction techniques provide the means for reducing by orders of magnitude the number of states required to analyze common distributed algorithms.
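The symmetry idea can be sketched generically (this is not the paper's partition symmetry reduction, just the underlying principle): when processes are interchangeable, a global state need only be explored once per multiset of local states rather than once per ordered tuple, which is achieved by canonicalizing each state before the visited-set lookup. The tiny transition system below is made up.

```python
def explore(n_procs, step, init, canonical=True):
    """DFS over global states; step(s) yields the successors of a local state."""
    key = (lambda s: tuple(sorted(s))) if canonical else tuple
    seen, frontier = {key(init)}, [init]
    while frontier:
        state = frontier.pop()
        for i in range(n_procs):
            for nxt in step(state[i]):
                succ = state[:i] + (nxt,) + state[i + 1:]
                if key(succ) not in seen:      # skip symmetric duplicates
                    seen.add(key(succ))
                    frontier.append(succ)
    return len(seen)

# each (interchangeable) process cycles idle -> trying -> critical -> idle
step = {"idle": ["trying"], "trying": ["critical"], "critical": ["idle"]}.get
full = explore(3, step, ("idle",) * 3, canonical=False)      # all 3^3 tuples
reduced = explore(3, step, ("idle",) * 3, canonical=True)    # multisets only
```

Even in this toy example the canonical form shrinks the space from 27 states to 10; for n identical processes over k local states the saving grows roughly like n!.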
In this paper, we describe distributed algorithms for combinational fault simulation assuming the classical stuck-at fault model. Our algorithms have been implemented on a network of Sun workstations under the Parallel Virtual Machine (PVM) environment. Two techniques are used for subdividing work among processors: test set partition and fault set partition. The sequential fault simulation algorithm, used on individual nodes of the network, is based on a novel path compression technique proposed in this paper. We describe experimental results on a number of ISCAS'85 benchmark circuits.
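The core of stuck-at fault simulation can be sketched on a made-up three-gate netlist (not an ISCAS'85 circuit, and not the paper's path-compression algorithm): evaluate the fault-free circuit on each test pattern, re-evaluate with one net forced to 0 or 1, and mark the fault detected if any pattern distinguishes the outputs. Either the test loop or the fault loop can then be partitioned across workers, which is exactly the test set / fault set partition split.

```python
def simulate(a, b, c, fault=None):
    """Tiny illustrative NAND netlist with one output.
    `fault` = (net_name, stuck_value) forces one net to a constant."""
    def net(name, value):
        return fault[1] if fault and fault[0] == name else value
    n1 = net("n1", 1 - (a & b))          # NAND(a, b)
    n2 = net("n2", 1 - (b & c))          # NAND(b, c)
    return net("out", 1 - (n1 & n2))     # NAND(n1, n2) = ab OR bc

tests = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
faults = [(n, v) for n in ("n1", "n2", "out") for v in (0, 1)]
detected = {f for f in faults
            if any(simulate(*t, fault=f) != simulate(*t) for t in tests)}
# with an exhaustive test set, every stuck-at fault on this netlist is detected
```

Under test set partition each worker runs all faults on a slice of `tests`; under fault set partition each worker runs all tests on a slice of `faults`, and the detected sets are unioned.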
Job shop scheduling belongs to the class of NP-hard problems. There are a number of algorithms in the literature for finding near-optimal solutions to the job shop scheduling problem. Many of these algorithms exploit problem-specific information and hence are less general. Simulated annealing for job shop scheduling, however, is general and produces better results than other similar algorithms. One of its major drawbacks is its high execution time, which makes it inapplicable to large-scale problems. One possible approach to reducing the execution time is to develop distributed simulated annealing algorithms. In this paper, we discuss approaches to developing distributed simulated annealing algorithms for the job shop scheduling problem. Three algorithms have been developed: the Temperature Modifier, the Locking Edges and the Modified Locking Edges algorithms. They have been implemented on the Distributed Task Sharing System (DTSS) running on a network of 18 Sun workstations. The observed performance shows that each of these algorithms performs well depending on the problem size.
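For reference, the sequential simulated annealing loop being distributed looks like the generic skeleton below. The objective is a toy 1-D function rather than a job-shop makespan, and the cooling parameters are illustrative; the Temperature Modifier idea amounts to workers running this loop with coordinated temperature schedules.

```python
import math, random

def anneal(cost, neighbor, x0, t0=10.0, cooling=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(steps):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if cost(x) < best_cost:
                best, best_cost = x, cost(x)
        t *= cooling                      # geometric cooling schedule
    return best, best_cost

# illustrative objective with minimum 2.0 at x = 3.0
cost = lambda x: (x - 3.0) ** 2 + 2.0
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
sol, val = anneal(cost, neighbor, x0=0.0)
```

The high execution time the abstract mentions comes from the many `cost` evaluations in this loop; the distributed variants spread those evaluations across workers.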
In this article, we propose distributed continuous-time algorithms to solve the optimal resource allocation problem with certain time-varying quadratic cost functions for multiagent systems. The objective is to allocate a quantity of resources while optimizing the sum of all the local time-varying cost functions. Here, the optimal solutions are trajectories rather than some fixed points. We consider a large number of agents that are connected through a network, and our algorithms can be implemented using only local information. By making use of the prediction-correction method and the nonsmooth consensus idea, we first design two distributed algorithms to deal with the case when the time-varying cost functions have identical Hessians. We further propose an estimator-based algorithm which uses distributed average tracking theory to estimate certain global information. With the help of the estimated global information, the case of nonidentical constant Hessians is addressed. In each case, it is proved that the solutions of the proposed dynamical systems with certain initial conditions asymptotically converge to the optimal trajectories. We illustrate the effectiveness of the proposed distributed continuous-time optimal resource allocation algorithms through simulations.
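The sense in which the optimal solutions are trajectories can be made concrete with a small centralized illustration (not the paper's distributed algorithm): for quadratic costs (x_i - c_i(t))^2 under a fixed budget, the KKT conditions give a closed form that moves with t. The demand functions and budget below are made up.

```python
import math

def optimal_allocation(c, R):
    """Minimize sum_i (x_i - c_i)^2 subject to sum_i x_i = R (KKT solution)."""
    shift = (R - sum(c)) / len(c)   # common shift from the Lagrange multiplier
    return [ci + shift for ci in c]

# time-varying demands c_i(t): the minimizer is a trajectory, not a fixed point
c = lambda t: [math.sin(t), math.cos(t), 1.0]
x_t = {t: optimal_allocation(c(t), R=3.0) for t in (0.0, 1.0, 2.0)}
```

A tracking algorithm of the prediction-correction type aims to follow this moving optimum asymptotically; the distributed versions in the paper do so using only neighbor information.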
We introduce a novel consensus mechanism by which the agents of a network can reach an agreement on the value of a shared logical vector function depending on binary input events. Based on results on the convergence of finite-state iteration systems, we provide a technique to design logical consensus systems that minimize the number of messages to be exchanged and the number of steps before consensus is reached, while tolerating a bounded number of failed or malicious agents. We provide sufficient joint conditions on input visibility and communication topology for the method's applicability. We describe the application of our method to two distributed network intrusion detection problems. (C) 2013 Elsevier Ltd. All rights reserved.
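A minimal instance of a logical consensus iteration (a generic illustration, not the paper's optimized design): each agent repeatedly ORs its binary state vector with its neighbors', so after a number of rounds equal to the graph diameter every agent holds the global OR of the inputs. The path topology and input bits are made up.

```python
def logical_or_consensus(adj, inputs, rounds):
    """adj[i] lists the neighbors of agent i; inputs[i] is i's binary vector."""
    state = [list(v) for v in inputs]
    for _ in range(rounds):
        nxt = [list(v) for v in state]
        for i, nbrs in enumerate(adj):
            for j in nbrs:
                # synchronous update: OR in each neighbor's previous state
                nxt[i] = [a | b for a, b in zip(nxt[i], state[j])]
        state = nxt
    return state

# path graph 0-1-2-3 (diameter 3); only agent 0 observed event bit 1
adj = [[1], [0, 2], [1, 3], [2]]
final = logical_or_consensus(adj, [[0, 1], [0, 0], [0, 0], [0, 0]], rounds=3)
```

The design problem the paper addresses is choosing which agents must see which inputs and which edges to use so that such an iteration converges in as few steps and messages as possible, even with some faulty agents.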
ISBN:
(Print) 9783031814037; 9783031814044
In the dispersion problem, a group of k ≤ n mobile robots, initially placed on the vertices of an anonymous graph G with n vertices, must redistribute themselves so that each vertex hosts no more than one robot. We address this challenge on an anonymous triangular grid graph, where each vertex can connect to up to six adjacent vertices. We propose a distributed deterministic algorithm that achieves dispersion on an unoriented triangular grid graph in O(√n) time, where n is the number of vertices. Each robot requires O(log n) bits of memory. Both the time complexity of our algorithm and the memory usage per robot are optimal. This work builds on previous studies by Kshemkalyani et al. [WALCOM 2020 [17]] and Banerjee et al. [ALGOWIN 2024 [3]]. Importantly, our algorithm terminates without requiring prior knowledge of n and resolves a question posed by Banerjee et al. [ALGOWIN 2024 [3]].
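The dispersion objective itself can be sketched with a much simpler baseline than the paper's O(√n) grid algorithm: robots starting at one vertex follow a DFS, and each settles at the first unvisited vertex reached. The graph (a 3x2 grid) and robot count below are illustrative.

```python
def disperse(adj, start, k):
    """k co-located robots walk a DFS from `start`; one settles per vertex.
    Returns the set of occupied vertices."""
    occupied = set()
    stack, seen = [start], {start}
    while stack and len(occupied) < k:
        v = stack.pop()
        occupied.add(v)               # the next unsettled robot stops here
        for u in adj[v]:
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return occupied

# 3x2 grid as an adjacency dict; 5 robots start together at vertex 0
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
placed = disperse(adj, 0, 5)
```

This baseline takes time proportional to the number of vertices explored; the contribution of the paper is exploiting the triangular grid's geometry to disperse in O(√n) time instead, with O(log n) memory per robot.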