ISBN (print): 9781450329446
Following [4], we extend and generalize the game-theoretic model of distributed computing, identifying different utility functions that encompass different potential preferences of players in a distributed system. A good distributed algorithm in the game-theoretic context is one that prohibits the agents (processors with interests) from deviating from the protocol; any deviation would result in the agent losing, i.e., reducing its utility at the end of the algorithm. We distinguish between different utility functions in the context of distributed algorithms, e.g., utilities based on communication preference, solution preference, and output preference. Given these preferences, we construct two basic building blocks for game-theoretic distributed algorithms: a wake-up building block resilient to any preference, and in particular to the communication preference (to which previous wake-up solutions were not resilient), and a knowledge-sharing building block resilient to any preference, and in particular to the solution and output preferences. Using these building blocks we present several new algorithms for consensus and renaming, as well as a modular presentation of the leader election algorithm of [4].
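The three preference classes are easiest to see as concrete utility functions over the outcome of a run. Below is a minimal sketch; the Outcome fields and the particular utility definitions are illustrative assumptions, not the formalization of [4].

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Hypothetical summary of a run, as seen by one agent."""
    messages_sent: int   # messages this agent transmitted
    solved: bool         # did the run reach a valid solution (e.g., consensus)?
    output: int          # this agent's output value (e.g., its name after renaming)

def u_communication(o: Outcome) -> float:
    # Communication preference: fewer messages sent is better.
    return -o.messages_sent

def u_solution(o: Outcome) -> float:
    # Solution preference: the agent only cares that the task was solved.
    return 1.0 if o.solved else 0.0

def u_output(o: Outcome) -> float:
    # Output preference: the agent wants a particular output (here, 0),
    # but only in runs that actually solve the task.
    return 1.0 if o.solved and o.output == 0 else 0.0

def is_profitable_deviation(u, honest: Outcome, deviant: Outcome) -> bool:
    # An algorithm is resilient to preference u if this is always False
    # for every unilateral deviation an agent could attempt.
    return u(deviant) > u(honest)
```

Read this way, the wake-up building block must make staying silent (which improves u_communication) unprofitable, and the knowledge-sharing building block must make misreporting inputs (which could improve u_output) unprofitable.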
The last decade has seen increased attention on large-scale data analysis, caused mainly by the availability of new sources of data and the development of programming models that allow their analysis. Since many of these sources can be modeled as graphs, many large-scale graph processing frameworks have been developed, from vertex-centric models such as Pregel to more complex programming models that allow asynchronous computation, can tackle dynamism in the data, and permit the use of different amounts of resources. This thesis presents theoretical and practical results in the area of distributed large-scale graph analysis by giving an overview of the entire pipeline. Data must first be pre-processed to obtain a graph, which is then partitioned into subgraphs of similar size. To analyze this graph, the user must choose a system and a programming model that match her available resources, the type of data, and the class of algorithm to execute. Aside from an overview of all these steps, this research presents three novel contributions. The first is DFEP, a novel distributed partitioning algorithm that divides the edge set into similarly sized partitions; DFEP obtains partitions of good quality in only a few iterations. The output of DFEP can then be used by ETSCH, a graph processing framework that uses partitions of edges as the focus of its programming model. ETSCH's programming model is shown to be flexible and can easily reuse classical sequential graph algorithms as part of its workflow. Implementations of ETSCH in Hadoop, Spark, and Akka allow for a comparison of those systems and a discussion of their advantages and disadvantages. The Akka implementation of ETSCH is by far the fastest and is able to process billion-edge graphs faster than competitors such as GPS, Blogel, and Giraph++, while using only a few computing nodes. A final contribution is an application study of graph-centric approaches to word sense...
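DFEP itself is not described in the abstract beyond its goal; as a point of reference, a centralized greedy baseline for the same problem — assigning edges to k similarly sized partitions while keeping neighboring edges together — could look like the following sketch (the scoring heuristic and all names are invented here, not DFEP).

```python
from collections import defaultdict

def greedy_edge_partition(edges, k):
    """Assign each edge to one of k partitions, preferring partitions that
    already hold edges on the same vertices (locality) while penalizing
    imbalance. A centralized toy baseline, not the DFEP algorithm."""
    sizes = [0] * k
    vertex_parts = defaultdict(set)  # vertex -> partitions touching it
    assignment = {}
    target = max(1, len(edges) // k)
    for (u, v) in edges:
        def score(p):
            locality = (p in vertex_parts[u]) + (p in vertex_parts[v])
            balance_penalty = sizes[p] / target
            return locality - balance_penalty
        best = max(range(k), key=score)
        assignment[(u, v)] = best
        sizes[best] += 1
        vertex_parts[u].add(best)
        vertex_parts[v].add(best)
    return assignment

# Two triangles joined by one bridge edge: expect one triangle per partition.
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3)]
print(greedy_edge_partition(edges, 2))
```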
The Internet of Things (IoT) represents a rapidly growing field in which billions of intelligent devices are interconnected through the Internet, enabling seamless sharing of data and resources. These smart devices are typically employed to sense various environmental characteristics, including temperature, motion of objects, and occupancy, and to transfer their readings to the nearest access points for further analysis. The exponential growth in sensor availability and deployment, powered by recent advances in sensor fabrication, has greatly increased the complexity of IoT network architecture. As the market for these sensors grows, so does the problem of ensuring that IoT networks meet high requirements for availability, dependability, flexibility, and scalability. Unlike traditional networks, IoT systems must be able to handle massive amounts of data generated by diverse and often resource-constrained devices while ensuring efficient and dependable communication. This places strong constraints on the design of IoT networks, mainly in terms of the required availability, reliability, flexibility, and scalability. To this end, this work deploys the recent technology of distributed edge computing to enable IoT applications over dense networks with the stated requirements. The proposed network depends on distributed edge computing at two levels: multi-access edge computing and fog computing. The proposed structure increases network scalability, availability, reliability, and flexibility. The network model and the energy model of the distributed nodes are introduced. An energy-offloading method is used to manage IoT data over the network energy-efficiently. The developed network was evaluated using a purpose-built IoT testbed, with heterogeneous evaluation scenarios and metrics. The proposed model achieved higher energy efficiency by 19%, resource utilization by 54%, latency efficiency by 86%, and reduced network congestion by...
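The energy model's equations are not given in the abstract; the sketch below only illustrates the general shape of an energy-aware offloading decision between local computation and transmission to a MEC/fog node. The decision rule and all parameters are assumptions, not the paper's model.

```python
def should_offload(cycles, data_bits, e_cpu_per_cycle, e_tx_per_bit, battery_j):
    """Offload a sensing task from an IoT node to a MEC/fog node when
    transmitting its input data costs less energy than computing locally.
    All parameters are illustrative, not taken from the paper."""
    local_cost = cycles * e_cpu_per_cycle      # joules to compute on-device
    offload_cost = data_bits * e_tx_per_bit    # joules to transmit the input
    if min(local_cost, offload_cost) >= battery_j:
        raise RuntimeError("insufficient energy for either option")
    return offload_cost < local_cost

# Example: 5e7 CPU cycles at 1 nJ/cycle (0.05 J locally) vs. sending
# 2e5 bits at 50 nJ/bit (0.01 J to offload) -> offloading wins.
print(should_offload(5e7, 2e5, 1e-9, 50e-9, battery_j=10.0))  # True
```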
A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. Resource allocation is a combinatorial optimization process under a series of complex constraints, which is important for enhancing the matching between resources and requirements. A complex algorithm is not feasible because the LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single-paternal inheritance method, is designed to support distributed computation and to enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation process. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s with a success rate of more than 91% in a typical scenario. The response time is decreased by 40% compared with the conventional GA.
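The abstract suggests that an individual is a two-dimensional assignment (e.g., beams × time slots) and that "uncorrelated single-paternal inheritance" means offspring are derived from a single parent by mutation rather than crossover. A toy sketch under those assumptions follows; the dimensions, fitness function, and constraints are invented for illustration.

```python
import random

BEAMS, SLOTS, TASKS = 4, 6, 10   # illustrative sizes, not the paper's

def random_individual():
    # 2-D individual: which task (or -1 = idle) each beam serves per slot.
    return [[random.randrange(-1, TASKS) for _ in range(SLOTS)]
            for _ in range(BEAMS)]

def fitness(ind):
    # Toy objective: distinct tasks served, penalizing conflicts
    # (two beams assigned the same task in the same slot).
    served = {t for row in ind for t in row if t >= 0}
    conflicts = sum(
        len([ind[b][s] for b in range(BEAMS) if ind[b][s] >= 0])
        - len({ind[b][s] for b in range(BEAMS) if ind[b][s] >= 0})
        for s in range(SLOTS))
    return len(served) - 2 * conflicts

def mutate(parent, rate=0.2):
    # Single-paternal inheritance: a child is its one parent plus
    # point mutations -- no crossover between individuals.
    return [[random.randrange(-1, TASKS) if random.random() < rate else g
             for g in row] for row in parent]

pop = [random_individual() for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(10)]
print("best fitness:", fitness(max(pop, key=fitness)))
```

Because each child depends on only one parent, islands of the population can evolve on separate embedded devices with minimal exchange of individuals, which is plausibly what makes the scheme amenable to on-board distributed computation.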
With the establishment and development of integrated monitoring platforms and communication information platforms for intelligent substations, the data volume of the power system is showing explosive growth. However, ...
One of the main bottlenecks in distributed computing systems is the straggler problem. Error correction codes have been proposed to alleviate this problem at the cost of coding complexity for the master node. In this work, we aim to reduce this coding complexity and propose a novel family of binary locally repairable codes (BLRC) to encode the distributed tasks in a linear matrix-vector multiplication problem. In comparison to the widely used maximum distance separable (MDS) codes, our proposed codes (i) eliminate the costly multiplication operations from the encoding and decoding processes, and (ii) allow for low-complexity recovery within the local groups. We analyze the complexity of our proposed codes and show through simulations that, compared to MDS codes, they reduce the overall encoding plus computation plus decoding time by more than 35% in many practical scenarios.
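To make the multiplication-free idea concrete: with a binary code, every coded block is a 0/1 combination — i.e., a plain sum — of data blocks, and a straggler inside a local group can be recovered from that group's parity alone. A minimal one-group sketch (not the paper's exact BLRC construction) follows.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
x = rng.standard_normal(4)

# Split A row-wise into 3 systematic blocks forming one local group.
blocks = np.split(A, 3)
# Binary-coded parity block: just the SUM of the group's blocks --
# the code coefficients are 0/1, so encoding needs no multiplications.
parity = blocks[0] + blocks[1] + blocks[2]

# Each of the 4 workers computes its block times x.
results = [B @ x for B in blocks] + [parity @ x]

# Suppose worker 1 straggles: recover its product within the local group
# by subtracting the surviving systematic results from the parity result.
recovered = results[3] - results[0] - results[2]
assert np.allclose(recovered, blocks[1] @ x)
print(np.concatenate([results[0], recovered, results[2]]))  # equals A @ x
```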
As option trading has become popular, it is important to simulate large numbers of option pricings efficiently. The purpose of this paper is to show the valuation of a large number of options using networked distributed computing resources. We valued 108 options simultaneously on a self-made cluster computer system that is very inexpensive compared to a supercomputer or a GPU-based system. For the numerical valuation of the options, we developed option pricing software that solves the Black-Scholes partial differential equation by the finite element method. This yielded accurate values of the options and the Greeks with reasonable computation times. The software was executed on a single node and then extended to the cluster computer system. We expect that our approach to valuing large numbers of options on distributed computing resources will be a highly attractive alternative for devising hedging strategies or developing new pricing models.
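The model being solved is standard and worth stating: for an option value V(S, t) on an underlying price S with volatility sigma and risk-free rate r, the Black-Scholes PDE with a European call's terminal payoff (strike K, maturity T) is

```latex
\frac{\partial V}{\partial t}
  + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - r V = 0,
\qquad V(S, T) = \max(S - K,\, 0).
```

The Greeks reported alongside the prices are sensitivities of this solution, e.g. \Delta = \partial V/\partial S and \Gamma = \partial^2 V/\partial S^2. Since each option is an independent instance of this PDE, the workload is embarrassingly parallel, which is why it distributes so naturally across cluster nodes.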
Current and future space missions demand highly reliable on-board computing systems that are capable of carrying out high-performance data processing. At present, no single computing scheme satisfies both the highly reliable operation requirement and the high-performance computing requirement. The aim of this paper is to review existing systems and offer a new approach to addressing the problem. In the first part of the paper, a detailed survey of fault-tolerant distributed computing systems for space applications is presented. Fault types and assessment criteria for fault-tolerant systems are introduced. Redundancy schemes for distributed systems are analyzed. A review of the state of the art in fault-tolerant distributed systems is presented and limitations of current approaches are discussed. In the second part of the paper, a new fault-tolerant distributed computing platform with wireless links among the computing nodes is proposed. Novel algorithms enabling important aspects of the architecture, such as time-slot-priority adaptive fault-tolerant channel access and fault-tolerant distributed computing using task migration, are introduced. (C) 2016 COSPAR. Published by Elsevier Ltd. All rights reserved.
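The abstract names task migration without detail; as a sketch of the general mechanism — heartbeat-based failure detection plus restart from a checkpoint on a spare node — consider the following, where the fail-stop fault model and every identifier are assumptions, not the paper's design.

```python
class Node:
    def __init__(self, name):
        self.name, self.alive, self.checkpoint = name, True, None

    def heartbeat_ok(self):
        # Stand-in for a heartbeat check over the wireless link.
        return self.alive

    def run(self, task, state):
        self.checkpoint = state  # periodically persisted task state
        return f"{task} running on {self.name} from state {state}"

def migrate_if_faulty(primary, spare, task):
    """If the primary node misses heartbeats, restart the task on the
    spare from the last checkpoint (fail-stop faults assumed)."""
    if primary.heartbeat_ok():
        return primary.run(task, primary.checkpoint or 0)
    return spare.run(task, primary.checkpoint or 0)

a, b = Node("node-A"), Node("node-B")
print(migrate_if_faulty(a, b, "image-compression"))
a.alive = False  # inject a fault
print(migrate_if_faulty(a, b, "image-compression"))  # migrates to node-B
```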
In a number of distributed computing applications, messages must be transmitted on demand between processes running at different locations on the Internet. The end-to-end delays experienced by the messages have a significant "random" component due to the complicated nature of network traffic. We propose a method based on delay-regression estimation to achieve low end-to-end delays for message transmissions in distributed computing applications. Two paths are realized between the various communicating processes in a transparent manner. Our scheme is implemented over the Internet by a network of NetLets, which communicate with one another to maintain an accurate "state" of delay regressions in the network. NetLets handle all network traffic between the processes and also perform routing at a certain level depending on the underlying network. We present experimental results illustrating that NetLets provide a viable and practical means for achieving low end-to-end delays for distributed computing applications over the Internet.
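The estimator is left unspecified in the abstract; a minimal sketch of the underlying idea — fit a regression to each path's recent delay samples and route each message over whichever of the two realized paths has the lower predicted delay — might look as follows. The linear trend model is an assumption, not necessarily NetLets' estimator.

```python
import numpy as np

def predict_next_delay(delays):
    """Least-squares linear trend over recent end-to-end delay samples,
    extrapolated one step ahead."""
    t = np.arange(len(delays))
    slope, intercept = np.polyfit(t, delays, 1)
    return slope * len(delays) + intercept

def pick_path(history_a, history_b):
    # Route over whichever of the two realized paths looks faster next.
    return "A" if predict_next_delay(history_a) <= predict_next_delay(history_b) else "B"

path_a = [42, 45, 51, 58, 66]     # ms, congestion building up
path_b = [70, 66, 61, 57, 52]     # ms, congestion draining
print(pick_path(path_a, path_b))  # -> "B"
```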
Communication data management (CDM) is an important issue in high-performance distributed computing, where massive amounts of data are frequently exchanged among geographically distributed components. In this paper, we review existing CDM schemes in distributed computing systems and propose more efficient ones. Three types of quantization-based CDM schemes are proposed: fixed quantization-based CDM (FQ-CDM), adaptive quantization-based CDM (AQ-CDM), and mobility-predictive quantization-based CDM (MPQ-CDM). FQ-CDM applies the basic theory of quantized systems to the distributed computing environment. AQ-CDM adds a communication-object clustering mechanism that runs a pattern-recognition clustering algorithm. MPQ-CDM predicts the next states of communication objects from past and current data and controls data communication among communication objects accordingly. The mobile object location monitoring system (MOLMS), based on the High Level Architecture, is designed and developed to apply these CDM schemes to distributed computing. We conduct experiments comparing these CDM schemes with one another on the MOLMS. The experimental results show that AQ-CDM is the more effective scheme for reducing communication messages and MPQ-CDM is the more suitable scheme for reducing mobile location error.
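FQ-CDM is the simplest of the three: transmit an object's state only when it has drifted at least one quantum q from the last transmitted value, letting receivers extrapolate in between. A sketch with an invented quantum and state trace:

```python
def fq_cdm_sender(positions, q=1.0):
    """Fixed quantization-based CDM: emit an update only when the state
    drifts at least q from the last transmitted value."""
    last_sent = None
    for t, x in enumerate(positions):
        if last_sent is None or abs(x - last_sent) >= q:
            last_sent = x
            yield (t, x)  # message put on the wire

track = [0.0, 0.2, 0.5, 1.1, 1.3, 2.4, 2.5, 2.6, 3.8]
updates = list(fq_cdm_sender(track))
print(f"{len(updates)} messages instead of {len(track)}:", updates)
```

AQ-CDM, roughly, adapts the quantization per cluster of communication objects, and MPQ-CDM replaces the last-sent reference with a prediction of the object's next state, trading extra computation for further message reduction.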