Meta-Heuristic Algorithms for Advanced Distributed Systems
Detailed information
ISBN:
(Digital) 9781394188093
ISBN:
(Print) 9781394188062
Meta-Heuristic Algorithms for Advanced Distributed Systems. Discover a collection of meta-heuristic algorithms for distributed systems in different application domains. Meta-heuristic techniques are increasingly gaining favor as tools for optimizing distributed systems, generally to enhance the utility and precision of database searches. Carefully applied, they can increase system effectiveness, streamline operations, and reduce cost. Since many of these techniques are derived from nature, they offer considerable scope for research and development, with the result that this field is growing rapidly. Meta-Heuristic Algorithms for Advanced Distributed Systems offers an overview of these techniques and their applications in various distributed systems. With strategies based on both global and local searching, it covers a wide range of key topics related to meta-heuristic algorithms. Those interested in the latest developments in distributed systems will find this book indispensable. Readers of Meta-Heuristic Algorithms for Advanced Distributed Systems will also find: analysis of security issues, distributed system design, stochastic optimization techniques, and more; detailed discussion of meta-heuristic techniques such as the genetic algorithm, particle swarm optimization, and many others; and applications of optimized distributed systems in healthcare and other key industries. Meta-Heuristic Algorithms for Advanced Distributed Systems is ideal for academics and researchers studying distributed systems, their design, and their applications.
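As a rough illustration of the kind of meta-heuristic the book surveys, the sketch below applies particle swarm optimization to a generic parameter-tuning problem. The objective function, swarm size, and coefficient values are illustrative assumptions, not material from the book.

```python
import random

def pso(objective, dim=2, swarm_size=20, iterations=100,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: minimize `objective` over `dim` dimensions."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]                       # best position seen by each particle
    pbest_val = [objective(p) for p in pos]
    g = min(range(swarm_size), key=lambda i: pbest_val[i])
    gbest_pos, gbest_val = pbest[g][:], pbest_val[g]  # best position seen by the swarm

    for _ in range(iterations):
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest_pos[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest_pos, gbest_val = pos[i][:], val
    return gbest_pos, gbest_val

# Example: tune two hypothetical system parameters to minimize a response-time proxy.
best, cost = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
print(best, cost)
```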
Detailed information
ISBN:
(Print) 9781538623268
It is common for real-world applications to analyze big graphs using distributed graph processing systems. Popular in-memory systems require an enormous amount of resources to handle big graphs. While several out-of-core approaches have been proposed for processing big graphs on disk, the high disk I/O overhead could significantly reduce performance. In this paper, we propose GraphH to enable high-performance big graph analytics in small clusters. Specifically, we design a two-stage graph partition scheme to evenly divide the input graph into partitions, and propose a GAB (Gather-Apply-Broadcast) computation model to make each worker process one partition in memory at a time. We use an edge cache mechanism to reduce the disk I/O overhead, and design a hybrid strategy to improve the communication performance. GraphH can efficiently process big graphs in small clusters or even on a single commodity server. Extensive evaluations have shown that GraphH can be up to 7.8x faster than popular in-memory systems such as Pregel+ and PowerGraph when processing generic graphs, and more than 100x faster than recently proposed out-of-core systems such as GraphD and Chaos when processing big graphs.
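The abstract describes the GAB (Gather-Apply-Broadcast) model only at a high level; the following is a minimal sketch of one plausible reading of it, in which each partition is processed in memory one at a time. The partition layout and the PageRank-style update are assumptions, not GraphH's actual implementation.

```python
def gab_superstep(partitions, values, damping=0.85):
    """One illustrative GAB-style superstep.
    partitions: list of dicts {vertex: [out_neighbors]}; values: {vertex: rank}."""
    inbox = {v: 0.0 for v in values}                  # messages gathered for each vertex
    # Gather + Broadcast: load one partition into memory at a time, scatter each
    # vertex's contribution along its out-edges, then move to the next partition.
    for part in partitions:
        for src, out_edges in part.items():
            if out_edges:
                share = values[src] / len(out_edges)
                for dst in out_edges:
                    inbox[dst] += share
    # Apply: combine the gathered messages into new vertex values.
    return {v: (1.0 - damping) + damping * inbox[v] for v in values}

# Tiny example graph split into two partitions.
parts = [{0: [1, 2], 1: [2]}, {2: [0]}]
ranks = {0: 1.0, 1: 1.0, 2: 1.0}
for _ in range(10):
    ranks = gab_superstep(parts, ranks)
print(ranks)
```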
Detailed information
ISBN:
(Print) 9781467366366
Computing resources in a volunteer computing grid represent a big under-used reserve of processing capacity. However, a task scheduler has no guarantees regarding the deliverable computing power of these resources. Predicting CPU availability can help to better exploit these resources and make effective scheduling decisions. In this paper, we draw up the main guidelines to develop a scalable method to predict CPU availability in a large-scale volunteer computing system. To reduce solution time and ensure precision, we use simple prediction techniques, namely autoregressive models and a tendency-based strategy. To address the limitations of autoregressive models, we propose an automated approach to check whether time series satisfy the assumptions of the models and to construct a prediction model by identifying its appropriate order value. At each prediction, we consider autoregressive models over three different past analyses: first over the recent hours, second during the same hours of the previous days, and third during the same weekly hours of the previous weeks. We analyze the performance of multivariate vector autoregressive models (VAR) and pure autoregressive models (AR), constructed according to our approach, against the tendency prediction technique, using traces of a large-scale Internet-distributed computing system, SETI@home.
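To make the autoregressive part concrete, here is a minimal sketch that fits an AR(p) model to a CPU-availability series by least squares and produces a one-step forecast. The order, the synthetic trace, and the omission of the paper's automated order selection and assumption checks are all illustrative simplifications.

```python
import numpy as np

def fit_ar(series, p):
    """Fit an AR(p) model y_t = c + a_1*y_{t-1} + ... + a_p*y_{t-p} by least squares."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])  # lagged columns
    X = np.column_stack([np.ones(len(X)), X])                             # intercept term
    coeffs, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return coeffs                                                         # [c, a_1, ..., a_p]

def predict_next(series, coeffs):
    """One-step-ahead forecast from the last p observations."""
    p = len(coeffs) - 1
    lags = np.asarray(series[-p:], dtype=float)[::-1]  # most recent value first
    return coeffs[0] + np.dot(coeffs[1:], lags)

# Synthetic hourly CPU-availability trace (fractions between 0 and 1).
trace = 0.6 + 0.2 * np.sin(np.arange(200) / 6.0) + 0.05 * np.random.randn(200)
model = fit_ar(trace, p=3)
print(predict_next(trace, model))
```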
Detailed information
ISBN:
(Print) 9783030185794; 9783030185787
Communication bandwidth is a bottleneck in distributed machine learning and limits system scalability. The transmission of gradients often dominates the communication in distributed SGD. One promising technique is to use gradient compression to reduce the communication cost. Recently, many such approaches have been developed for deep neural networks. However, they still suffer from high memory cost, slow convergence, and serious staleness problems over sparse high-dimensional models. In this work, we propose Sparse Gradient Compression (SGC) to efficiently train both sparse models and deep neural networks. SGC uses momentum approximation to reduce the memory cost with negligible accuracy degradation. It then improves accuracy with long-term gradient compensation, which maintains a global momentum to make up for the information loss caused by the approximation. Finally, to alleviate the staleness problem, SGC updates the model weights with an accumulation of delayed gradients computed locally, called the local update technique. Experiments over sparse high-dimensional models and deep neural networks indicate that SGC can compress 99.99% of the gradients in every iteration without performance degradation, and reduces the communication cost by up to 48x.
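The general mechanics of gradient sparsification with local accumulation can be sketched as follows. The plain top-k selection, the compression ratio, and the residual handling are illustrative assumptions and do not reproduce SGC's momentum approximation or long-term compensation.

```python
import numpy as np

class TopKCompressor:
    """Illustrative gradient sparsifier: send only the largest-magnitude entries,
    keep the rest locally and fold them into the next iteration's gradient."""

    def __init__(self, dim, ratio=0.01):
        self.residual = np.zeros(dim)   # locally accumulated, not-yet-sent gradient mass
        self.k = max(1, int(dim * ratio))

    def compress(self, grad):
        acc = self.residual + grad                     # add back what was withheld before
        idx = np.argsort(np.abs(acc))[-self.k:]        # indices of the top-k entries
        values = acc[idx]
        self.residual = acc.copy()
        self.residual[idx] = 0.0                       # sent entries leave the residual
        return idx, values                             # sparse message to transmit

    @staticmethod
    def decompress(idx, values, dim):
        dense = np.zeros(dim)
        dense[idx] = values
        return dense

# Usage: compress a worker's gradient, then reconstruct it on the parameter server.
comp = TopKCompressor(dim=1000, ratio=0.01)
g = np.random.randn(1000)
idx, vals = comp.compress(g)
g_hat = TopKCompressor.decompress(idx, vals, 1000)
```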
Detailed information
ISBN:
(Print) 9781424431748
As a distributed computing system, a CNC system needs to operate reliably, dependably, and safely. How to design reliable and dependable software and perform effective verification for CNC systems has become an important research problem. In this paper, we propose a new modeling method called TTM/ATRTTL (Timed Transition Models/All-Time Real-Time Temporal Logics) for specifying CNC systems. TTM/ATRTTL provides full support for specifying the hard real-time and feedback behavior needed for modeling CNC systems. We also propose a verification framework with verification rules and theorems and implement it with STeP and SF2STeP. The proposed verification framework can check the reliability, dependability, and safety of systems specified with our TTM/ATRTTL method. We apply our modeling and verification techniques to an open architecture CNC (OAC) system and conduct comprehensive studies on modeling and verifying a logical controller, which is the key part of OAC. The results show that our method can effectively model and verify CNC systems and generate CNC software that meets reliability, dependability, and safety requirements.
Detailed information
ISBN:
(Print) 0819446963
A multi-pipeline optical interconnection network for a distributed computing system has been designed. Each sub-layer network is connected to the ring with an access node (AN), which can transmit data at every wavelength with a tunable laser diode. The data transmission speed at each wavelength is 1.25 Gbit/s. With 8 wavelengths, a total bandwidth of 10 Gbit/s can be obtained. Each AN only receives a certain wavelength; with a band-pass filter, the desired optical signal can be dropped. Pipelined data transmission is achieved among the different wavelengths, giving the network a multi-pipeline structure, so communication latency and overhead can be decreased. Meanwhile, the ring topology has good scalability: the scale of the network can be expanded by adopting more wavelengths at each access node.
Detailed information
ISBN:
(Print) 9781424404285
Collaborative editing enables multiple users who reside remotely to share and edit documents at the same time. It is fundamentally based on operational transformation, which adjusts the position of an operation according to the transformed execution order. Over the last decade, much research has been performed in this area, and the correctness and feasibility of operational transformation have been proven. Even though operational transformation makes collaborative editing possible, it has limitations from the viewpoint of usability and efficiency. In other words, existing operational transformation was devised without considering properties of collaborative editing such as the frequency of operational transformation and the human-centric viewpoint. In this paper, we introduce view-centric operational transformation, which prioritizes the transformation of operations according to the user's viewpoint. In this way, we aim to improve existing operational transformation and provide more useful and efficient collaborative editing.
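A minimal sketch of the position adjustment that operational transformation performs is shown below, handling only concurrent character insertions; the tie-breaking rule by site identifier is an illustrative assumption, and the paper's view-centric prioritization is not modeled.

```python
def transform_insert(op, against):
    """Adjust the position of insert `op` so it can be applied after `against`.
    Each operation is (position, text, site_id); ties break by site id."""
    pos, text, site = op
    other_pos, other_text, other_site = against
    if other_pos < pos or (other_pos == pos and other_site < site):
        pos += len(other_text)
    return (pos, text, site)

def apply_insert(doc, op):
    pos, text, _ = op
    return doc[:pos] + text + doc[pos:]

# Two users concurrently insert into the same document state.
doc = "hello world"
op_a = (5, ",", 1)        # user 1 inserts "," after "hello"
op_b = (6, "big ", 2)     # user 2 inserts "big " before "world"
# Site 1 applies op_a first, then the transformed op_b.
site1 = apply_insert(apply_insert(doc, op_a), transform_insert(op_b, op_a))
# Site 2 applies op_b first, then the transformed op_a.
site2 = apply_insert(apply_insert(doc, op_b), transform_insert(op_a, op_b))
assert site1 == site2 == "hello, big world"
```

Both replicas converge to the same text even though the two inserts were applied in different orders, which is the property operational transformation is designed to guarantee.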
Detailed information
The article describes the problem of scheduling jobs with absolute priorities in a geographically distributed network of supercomputer centers (GDN). In this case, the English auction method can be efficiently applied. The classic market model considers computational resources as the goods (the subject of auction trades), and resource owners act as sellers. Users act as buyers who participate in the auction in order to purchase computing resources for the execution of their jobs. This model assumes that customers have certain budgets in nominal or real money. The priority of a job is actually determined by the price the user can pay to finish the job by a certain time. The GDN model investigated by the authors differs from the known ones in that job priorities are absolute and assigned according to uniform rules. The main goal is the earliest execution of high-priority jobs. In this case, the concept of the user's budget becomes meaningless, and the classic auction models do not work. The authors propose a new approach in which the jobs act as the goods and the buyers are resource owners who pay for jobs with available idle supercomputing resources. For this approach, the authors investigate the features and characteristics of the English auction as the most suitable method for scheduling jobs with absolute priorities in a GDN.
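The inverted model, in which jobs are the goods and resource owners bid with idle capacity, can be sketched as follows. The node-hour "currency", the bid increment, and the outcome rule (highest bidder wins at roughly the second-highest bid) are illustrative assumptions rather than the authors' exact formulation.

```python
def english_auction(bids, increment=1):
    """Illustrative outcome of an ascending (English) auction for one job.
    bids: {owner: max idle node-hours the owner will commit to this job}.
    The highest bidder wins at about the point where the runner-up drops out."""
    if not bids:
        return None, 0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, limit = ranked[0]
    price = (ranked[1][1] + increment) if len(ranked) > 1 else increment
    return winner, min(price, limit)        # never above the winner's own limit

# Jobs are offered in order of absolute priority (highest first); each resource
# owner bids the idle capacity it can devote to the job.
jobs = sorted([("job_a", 10), ("job_b", 3)], key=lambda j: -j[1])
offers = {"center_1": 5, "center_2": 8}
for name, _priority in jobs:
    print(name, english_auction(offers))
```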
Detailed information
ISBN:
(Print) 9782960053265
Computing resources in a volunteer computing grid represent a big under-used reserve of processing capacity. However, a task scheduler has no guarantees regarding the deliverable computing power of these resources. Predicting CPU availability can help to better exploit these resources and make effective scheduling decisions. In this paper, we draw up the main guidelines to develop a method to predict CPU availability in a large-scale volunteer computing system. To reduce solution time and ensure precision, we use simple prediction techniques, namely autoregressive models and a tendency-based strategy. To address the limitations of autoregressive models, we propose an automated approach to check whether time series satisfy the assumptions of the models and to construct the prediction model. At each prediction, we consider autoregressive models over three different past analyses: first over the recent hours, second during the same hours of the previous days, and third during the same weekly hours of the previous weeks. We analyze the performance of multivariate vector autoregressive models (VAR) and pure autoregressive models (AR), constructed according to our approach, against the tendency prediction technique. We study the impact of the cross-correlation between the CPU availability indicators on the performance of VAR models. We used traces of a large-scale Internet-distributed computing system, SETI@home.
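Since this study also examines how cross-correlation between CPU-availability indicators affects VAR performance, the sketch below computes lagged cross-correlations between two hypothetical indicator series; the indicators, lag range, and synthetic data are illustrative assumptions.

```python
import numpy as np

def cross_correlation(x, y, max_lag=24):
    """Normalized cross-correlation between two indicator series at lags 0..max_lag."""
    x = (np.asarray(x, float) - np.mean(x)) / np.std(x)
    y = (np.asarray(y, float) - np.mean(y)) / np.std(y)
    n = len(x)
    return {lag: float(np.dot(x[:n - lag], y[lag:]) / (n - lag))
            for lag in range(max_lag + 1)}

# Hypothetical hourly indicators: fraction of time the host is powered on,
# and fraction of CPU left idle by the owner (two weeks of synthetic data).
hours = np.arange(24 * 14)
availability = 0.7 + 0.2 * np.sin(2 * np.pi * hours / 24) + 0.05 * np.random.randn(len(hours))
idleness = 0.5 + 0.2 * np.sin(2 * np.pi * (hours - 2) / 24) + 0.05 * np.random.randn(len(hours))
corr = cross_correlation(availability, idleness)
strongest = max(corr, key=lambda lag: abs(corr[lag]))   # lag with the strongest coupling
print(strongest, corr[strongest])
```

A strong cross-correlation between indicators is the situation in which a multivariate VAR model can be expected to add value over a pure AR model.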
Detailed information
ISBN:
(Print) 9783030941413; 9783030941406
The main goal of the work was to create a parallel application using a multithreaded execution model that allows the most complete and efficient use of all available computing resources. At the same time, the main attention was paid to maximizing the performance of the multithreaded computing part of the application and making more efficient use of the available hardware. During development, the effectiveness of various methods of software and algorithmic optimization was evaluated, taking into account the behavior of a highly loaded multithreaded application designed to run on systems with a large number of parallel computing threads. The problem of loading all currently available computing resources was solved, including the dynamic distribution of work across the involved CPU cores/threads and the computing accelerators installed in the system.
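A minimal sketch of dynamic work distribution of the kind described above, using a thread pool sized to the currently available cores; the task granularity and the use of Python's concurrent.futures are illustrative assumptions, and offloading to compute accelerators is not shown.

```python
import os
from concurrent.futures import ThreadPoolExecutor, as_completed

def heavy_kernel(chunk):
    """Stand-in for a compute-intensive piece of work on one data chunk."""
    return sum(i * i for i in chunk)

def run_parallel(data, chunk_size=10_000):
    # Size the pool to the hardware that is actually available right now.
    workers = os.cpu_count() or 1
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(heavy_kernel, c) for c in chunks]
        for fut in as_completed(futures):          # chunks finish in any order,
            results.append(fut.result())           # keeping all worker threads busy
    return sum(results)

if __name__ == "__main__":
    print(run_parallel(range(1_000_000)))
```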