ISBN:
(digital) 9788995004388
ISBN:
(print) 9781728198729
Task allocation, the problem of efficiently allocating a set of M tasks to a set of N nodes, is a fundamental issue in distributed computing systems (DCSs). The problem is particularly challenging in the presence of failure-prone nodes. In this work, we design an adaptive task (re-)allocation mechanism named Asocial. Asocial allows us to optimize the utilization of system resources (both computing and network resources) while increasing service-level fault tolerance as well as failure resilience. In addition, we propose a task (re-)allocation approach based on a cooperative game model for cooperation among Physical Nodes (PNs). We consider both the performance states and the reliability levels of candidate PNs when deploying tasks; specifically, we exploit failure prediction techniques to evaluate PNs' reliability levels. As a result, we can utilize resources efficiently and thus improve service reliability (i.e., the probability of serving all tasks before their delivery time). We show by means of numerical evaluations that Asocial can significantly improve service reliability, system availability, and resource utilization. In particular, using failure prediction results (F-measure around 0.8), the application completion rate (ACR), task completion rate (TCR), and computing resource utilization (CRU) reach 85.20%, 85.67%, and 78.34%, respectively. Compared with 67.30%, 70.07%, and 65.79% for the initial allocation scheme, this is a significant improvement (26.60%, 22.26%, and 19.08%, respectively).
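The F-measure quoted above is the harmonic mean of the failure predictor's precision and recall. A minimal sketch of that metric (the confusion counts below are hypothetical, not taken from the paper):

```python
def f_measure(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall for a failure predictor."""
    precision = tp / (tp + fp)   # fraction of predicted failures that were real
    recall = tp / (tp + fn)      # fraction of real failures that were predicted
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 80 failures caught, 20 false alarms, 20 failures missed.
print(round(f_measure(tp=80, fp=20, fn=20), 2))  # 0.8
```

An F-measure near 0.8 therefore means the predictor is simultaneously fairly precise and fairly complete, which is what lets the re-allocation mechanism trust the predicted reliability levels of candidate PNs.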
ISBN:
(digital) 9781728162126
ISBN:
(print) 9781728162133
We present the Pluggable distributed Resource Allocator (PDRA), a middleware for distributed computing in heterogeneous mobile robotic networks. PDRA enables autonomous robotic agents to share computational resources for computationally expensive tasks such as localization and path planning. It sits between an existing single-agent planner/executor and existing computational resources (e.g. ROS packages), intercepts the executor's requests and, if needed, transparently routes them to other robots for execution. PDRA is pluggable: it can be integrated in an existing single-robot autonomy stack with minimal modifications. Task allocation decisions are performed by a mixed-integer programming algorithm, solved in a shared-world fashion, that models CPU resources, latency requirements, and multi-hop, periodic, bandwidth-limited network communications; the algorithm can minimize overall energy usage or maximize the reward for completing optional tasks. Simulation results show that PDRA can reduce energy and CPU usage by over 50% in representative multi-robot scenarios compared to a naive scheduler; runs on embedded platforms; and performs well in delay- and disruption-tolerant networks (DTNs). PDRA is available to the community under an open-source license.
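The core decision PDRA makes can be pictured as choosing, for each task, a robot that minimizes total energy subject to per-robot CPU capacity. A brute-force sketch of that assignment problem (the paper's mixed-integer program also models latency and multi-hop bandwidth; all task names, capacities, and costs below are hypothetical):

```python
from itertools import product

tasks = {"localize": 2, "plan": 3}        # CPU units each task needs (hypothetical)
cpu_cap = {"rover": 4, "drone": 3}        # per-robot CPU capacity (hypothetical)
energy = {                                # energy cost of task t running on robot r
    ("localize", "rover"): 5, ("localize", "drone"): 2,
    ("plan", "rover"): 4,     ("plan", "drone"): 6,
}

def allocate(tasks, cpu_cap, energy):
    """Exhaustively search task->robot assignments and return the
    capacity-feasible assignment with minimum total energy."""
    best, best_cost = None, float("inf")
    for choice in product(cpu_cap, repeat=len(tasks)):
        assign = dict(zip(tasks, choice))
        load = {r: 0 for r in cpu_cap}
        for t, r in assign.items():
            load[r] += tasks[t]
        if any(load[r] > cpu_cap[r] for r in cpu_cap):
            continue  # violates a CPU capacity constraint
        cost = sum(energy[(t, r)] for t, r in assign.items())
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost

print(allocate(tasks, cpu_cap, energy))
# ({'localize': 'drone', 'plan': 'rover'}, 6)
```

Exhaustive search is only viable for toy sizes; a MILP solver, as used by PDRA, handles the same constraint structure at realistic scale.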
ISBN:
(digital) 9781728153179
ISBN:
(print) 9781728153186
This paper introduces a simple method of building a prototype cluster of Raspberry Pis. The Pi cluster is a powerful, low-cost tool for teaching the complex concepts of parallel and distributed computing to undergraduate E&CS students. The performance of the presented Pi cluster is assessed using two computationally expensive applications: face recognition and image encryption. The paper explains how to compare the performance of the Pi cluster against a traditional high-performance cluster; the comparison is designed to help undergraduate students understand the state of the art in clusters. The paper also explains how Pi clusters fit into the CS curriculum at Old Dominion University. The presented project-based learning is an effective teaching approach that helps struggling engineering/computer science students, including women and students from minority groups, learn at ODU.
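A concrete way students can carry out the cluster-versus-cluster comparison is to compute speedup and parallel efficiency from measured runtimes. A small sketch (the timings below are hypothetical, not measurements from the paper):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """How many times faster the parallel run is than the serial run."""
    return t_serial / t_parallel

def efficiency(t_serial: float, t_parallel: float, n_nodes: int) -> float:
    """Speedup per node: 1.0 means ideal linear scaling."""
    return speedup(t_serial, t_parallel) / n_nodes

# Hypothetical: face recognition takes 120 s on one Pi, 20 s on an 8-node cluster.
print(speedup(120, 20))        # 6.0
print(efficiency(120, 20, 8))  # 0.75
```

Plotting efficiency against node count makes the gap between the Pi cluster and a traditional cluster (and the cost of communication overhead) visible to students at a glance.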
ISBN:
(digital) 9781728174457
ISBN:
(print) 9781728174570
Integrating parallel and distributed computing (PDC) topics into core computing courses is of increasing interest to educators. However, there is a question of how best to introduce PDC to undergraduates. Several educators have proposed the use of "unplugged activities", such as role-playing dramatizations and analogies, to introduce PDC concepts. Yet unplugged activities for PDC are widely scattered and often difficult to find, making it challenging for educators to create and incorporate unplugged interventions in their classrooms. The PDCunplugged project seeks to rectify these issues by providing a free repository where educators can find and share unplugged activities related to PDC. The existing curation contains nearly forty unique unplugged activities collected from thirty years of the PDC literature and from all over the Internet, and maps each activity to relevant CS2013 PDC knowledge units and TCPP PDC topic areas. Learn more about the project at ***.
ISBN:
(print) 9781450367936
The 2019 ABET computer science criteria require that all computing students learn parallel and distributed computing (PDC) as undergraduates, and CS2013 recommends at least fifteen hours of PDC in the undergraduate curriculum. Consequently, many educators look for easy ways to integrate PDC into courses at their institutions. This hands-on workshop introduces Message Passing Interface (MPI) basics in C/C++ and Python using clusters of Raspberry Pis. MPI is a multi-language, platform-independent, industry-standard library for parallel and distributed computing. Raspberry Pis are an inexpensive and engaging hardware platform for studying PDC as early as the first course. Participants will experience how to teach distributed computing essentials with MPI by means of reusable, effective "parallel patterns", including single program multiple data (SPMD) execution, send-receive message passing, the master-worker pattern, parallel loop patterns, and other common patterns, plus longer "exemplar" programs that use MPI to solve significant applied problems. The workshop includes: (i) personal experience with the Raspberry Pi (clusters provided for workshop use); (ii) assembly of Beowulf clusters of Raspberry Pis quickly in the classroom; (iii) self-paced hands-on experimentation with the working MPI programs; and (iv) a discussion of how these may be used to achieve the goals of CS2013 and ABET. No prior experience with MPI, PDC, or the Raspberry Pi is expected. All materials from this workshop will be freely available from ***; participants should bring a laptop to access these materials.
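The master-worker pattern taught in the workshop can be previewed even without an MPI installation: below, Python threads and queues stand in for ranks and send/receive calls (in the actual workshop this shape is expressed with MPI_Send/MPI_Recv in C or with mpi4py). The queue-based version is only a single-machine sketch of the message flow.

```python
import queue
import threading

def worker(rank: int, inbox: queue.Queue, outbox: queue.Queue):
    """A worker 'rank': receive work items until a None sentinel, send results back."""
    while True:
        item = inbox.get()
        if item is None:              # sentinel: no more work for this worker
            break
        outbox.put((rank, item * item))  # 'compute' and send result to the master

inbox, outbox = queue.Queue(), queue.Queue()
workers = [threading.Thread(target=worker, args=(r, inbox, outbox)) for r in (1, 2)]
for w in workers:
    w.start()

for task in range(6):                 # the master ('rank 0') distributes work items
    inbox.put(task)
for _ in workers:                     # one sentinel per worker to shut them down
    inbox.put(None)
for w in workers:
    w.join()

results = sorted(val for _, val in [outbox.get() for _ in range(6)])
print(results)  # [0, 1, 4, 9, 16, 25]
```

The structure, a coordinator that scatters tasks, workers that loop on receive-compute-send, and a sentinel-based shutdown, is exactly the master-worker pattern students then re-express with real MPI ranks.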
We evaluate the performance of two types of distributed computing jobs in optically interconnected disaggregated data center, and show the requirements of minimum bandwidth and local memory on CPU blades to achieve ac...
ISBN:
(digital) 9781728186511
ISBN:
(print) 9781728186528
Deep learning has developed over the last decade and is now established as a modern and very promising technique with large potential to be successfully applied in various domains. Although deep learning outperforms alternative techniques on many different problems, its usage raises several concerns and limitations, one of them being the computational complexity of the training process. This paper discusses the possibilities of taking advantage of high-performance and distributed computing for efficiently and effectively training deep learning models for computer vision applications. A comparison of deep learning platforms and frameworks utilized on high-performance computing systems is presented, and their application to the computer vision problem of object detection is evaluated based on a suggested parallel computation architectural model and utilization of the RetinaNet model.
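The most common way distributed computing attacks the training-cost problem is data parallelism: each worker computes gradients on its own shard of a batch, and the gradients are averaged before every weight update. A pure-Python sketch of that averaging step (real frameworks such as Horovod or PyTorch DistributedDataParallel implement it as an all-reduce; the numbers here are illustrative):

```python
def average_gradients(worker_grads):
    """Average per-parameter gradients computed by several workers:
    the all-reduce step of synchronous data-parallel training."""
    n = len(worker_grads)
    n_params = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(n_params)]

def sgd_step(weights, grads, lr):
    """One plain SGD update with the averaged gradient."""
    return [w - lr * g for w, g in zip(weights, grads)]

# Hypothetical gradients from two workers, each trained on its own data shard.
grads = average_gradients([[0.25, -0.5], [0.75, 0.0]])
print(grads)                                 # [0.5, -0.25]
print(sgd_step([1.0, 1.0], grads, lr=0.5))   # [0.75, 1.125]
```

Because every worker applies the same averaged gradient, the replicas stay synchronized, and throughput scales with the number of workers until communication for the averaging step dominates.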
Recent advances in artificial intelligence, edge computing, and big data have enabled extensive reasoning capabilities at the edge of the network. Edge servers are now capable of extracting meaningful intelligence from IoT nodes, which can benefit a very diverse set of IoT applications, including smart carrier and distribution networks (power, people, water, and food), smart agriculture and manufacturing, and healthcare and maintenance. Unfortunately, as the infrastructures become more intelligent, they also become more vulnerable to disruption due to cyberattacks and information leakage. Furthermore, the rich data gathering and analytics involved in driving the intelligent management substantially raise the stakes in terms of privacy violations for the people and organizations it serves.
ISBN:
(print) 9781538656570
The transition from single-node computing to complex distributed computing environments has proven its efficiency in computer networks, especially in executing heavy computational tasks. Meanwhile, sensors on the floor have become smart, with additional computational capabilities compared to passive acquisition nodes. We merged these two developments and evaluated distributing PLC load onto smart sensors in an industrial automation framework. We utilized an SoC (Raspberry Pi) to build a smart sensor, which allowed us to create a distributed and scalable sensor network structure. This structure, in conjunction with a state-of-the-art industrial automation network, improves the performance of automation networks.
ISBN:
(print) 9781538653234
In an autonomous driving system, the field of view spans multiple cameras placed around a car driven through numerous driving scenarios. Sensor data is received by the analyzing unit at high velocity, and the cameras provide millions of images for even a short drive of about half a mile. Moreover, not all the images captured by the cameras can be analyzed: some must be discarded on account of high noise levels or lack of lighting. A simple example is that pictures taken in burst mode often yield more throwaways than usable shots. It is therefore important for the analyzing unit to make a series of decisions before even starting the feature extraction process. Efficient processing of a high volume of images is thus a challenge that autonomous systems such as the driving system face. Given the multiple cameras present on autonomous cars, providing high-resolution pictures through varying driving scenarios, the objective is to process and analyze this huge dataset efficiently. This paper demonstrates the power of distributed computing in image processing algorithms and in the analysis of incredibly large datasets using a distributed approach. It gives a statistical proof of concept of how implementing a distributed parallel programming paradigm can improve autonomous systems, such as the driving system, that deal with high volumes of images.
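The triage-then-analyze pipeline described above can be sketched as a filter followed by a parallel map over the surviving frames: discard frames whose noise is too high or brightness too low, then run the expensive analysis on the rest across a pool of workers. A minimal sketch with synthetic frame records (the field names, thresholds, and the feature function are illustrative, not from the paper):

```python
from concurrent.futures import ThreadPoolExecutor

frames = [  # synthetic frame metadata: id, noise estimate, brightness estimate
    {"id": 0, "noise": 0.1, "brightness": 0.8},
    {"id": 1, "noise": 0.9, "brightness": 0.7},   # too noisy -> discard
    {"id": 2, "noise": 0.2, "brightness": 0.1},   # too dark  -> discard
    {"id": 3, "noise": 0.3, "brightness": 0.6},
]

def usable(frame, max_noise=0.5, min_brightness=0.3):
    """Pre-analysis triage: reject frames the feature extractor cannot use."""
    return frame["noise"] <= max_noise and frame["brightness"] >= min_brightness

def extract_features(frame):
    """Stand-in for an expensive per-frame analysis step."""
    return (frame["id"], round(frame["brightness"] - frame["noise"], 2))

kept = [f for f in frames if usable(f)]
with ThreadPoolExecutor(max_workers=4) as pool:   # distribute analysis across workers
    features = list(pool.map(extract_features, kept))
print(features)  # [(0, 0.7), (3, 0.3)]
```

Because the per-frame work is independent, the same filter-then-map structure scales from a thread pool on one machine to a cluster of workers, which is the distributed approach the paper advocates for camera streams of this volume.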