ISBN: (Print) 9781450328098
The Internet of Things (IoT) concept has been around for some time, and applications such as transportation, health care, education, travel, the smart grid, and retail are, and will continue to be, major beneficiaries of this concept. However, only recently, thanks to technological advances in sensor devices and rich wireless connectivity, has the Internet of Things at scale become a reality. For example, Cisco's Internet of Things Group predicts over 50 billion connected sensory devices by 2020. In this talk, we will discuss the Internet of Mobile Things (IoMT), since several game-changing technological advances have happened on mobile 'things' such as mobile phones, trains, and cars, where rich sets of sensors, connected via diverse wireless Internet technologies, are changing and influencing how people communicate, move, and download and distribute information. In this space, challenges come from the need to determine (1) contextual information such as location, duration of contact, and density of devices, utilizing networked sensory information; (2) higher-level knowledge such as users' activity detection, mood detection, application usage pattern detection, and user interactions on mobile 'things', utilizing contextual information; and (3) adaptive and real-time parallel and distributed architectures that integrate context, activity, mood, and usage patterns into mobile application services on mobile 'things'. Solving these challenges will provide enormous opportunities to improve the utility of mobile 'things', optimizing their scarce resources such as energy, memory, and bandwidth.
The emergence of multicore machines has made exploiting parallelism a necessity to harness the abundant computing resources in both a single machine and clusters. This, however, may hinder programming productivities a...
In the last decade, Graphics Processing Units(GPUs) have gained an increasing popularity as accelerators for High Performance Computing (HPC) applications. Recent GPUs are not only powerful graphics engines but also h...
Recently, Model-Driven Engineering (MDE) has become more and more popular, as it is able to solve complex problems by exploiting the abstraction power of models. As models, metamodels, and model transformations are the heart of MDE, they play a vital role. Nevertheless, existing transformation languages and their accompanying tools cannot deal with very large models, such as those used in the fields of astronomy, genetics, etc. The main problems relate to the storage of very large models, the unreasonable time needed to execute transformations, and the impossibility of transforming distributed or streaming models. We tackle this problem by incorporating the concurrent and distributed mechanisms that Linda (a mature coordination language for parallel processes) provides into model transformation approaches.
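The coordination style the abstract refers to can be sketched with a minimal in-memory tuple space offering Linda-like `out`/`in` primitives (the `TupleSpace` class, `transform_model` helper, and stop-tuple protocol below are illustrative assumptions, not the authors' actual system; real Linda adds associative pattern matching and distribution across machines). Worker threads pull model elements from the space, apply a transformation rule, and deposit results, so the transformation runs concurrently without explicit locking in user code:

```python
# A toy Linda-style tuple space (sketch; assumed names, not the paper's API).
import queue
import threading

class TupleSpace:
    def __init__(self):
        self._q = queue.Queue()

    def out(self, tup):
        """Linda's 'out': deposit a tuple into the space."""
        self._q.put(tup)

    def take(self):
        """Linda's 'in': remove a tuple, blocking until one is available."""
        return self._q.get()

def transform_model(elements, rule, workers=4):
    """Apply `rule` to every model element concurrently via a tuple space."""
    space, results = TupleSpace(), TupleSpace()

    def worker():
        while True:
            tag, elem = space.take()
            if tag == "stop":        # poison pill terminates the worker
                break
            results.out(rule(elem))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for e in elements:
        space.out(("elem", e))
    for _ in threads:                # one stop tuple per worker
        space.out(("stop", None))
    for t in threads:
        t.join()
    return sorted(results.take() for _ in elements)
```

Because producers and consumers only interact through the space, the same pattern extends naturally to distributed or streaming models, which is the scaling problem the abstract targets.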
ISBN: (Print) 9780769551173
The integration of accelerators into cluster systems is currently one of the architectural trends in high performance computing. Usually, those accelerators are manycore compute devices directly connected to individual cluster nodes via PCI Express. Recent accelerators, however, no longer require a host CPU and can even be integrated as self-contained nodes able to communicate via MPI over their own network interfaces. This approach offers new opportunities for application developers, as compute kernels can now span multiple communicating accelerators to better account for larger MPI-based code regions with the potential for massive node-level parallelism. However, it also raises the question of how to program such an environment. An instance of this novel cluster architecture is the DEEP cluster system, currently under development. Based on this hardware concept, we investigate the MPI_Comm_spawn process-creation mechanism for offloading MPI-based distributed-memory compute kernels onto multiple network-attached accelerators. We identify limitations of MPI_Comm_spawn and present an offloading mechanism that incurs only a fraction of the overhead of a pure MPI_Comm_spawn solution.
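The offload pattern described here — a host dynamically starting a group of worker processes that compute a kernel and report back — can be sketched without an MPI installation by using Python subprocesses as a stand-in for MPI_Comm_spawn-started ranks (the worker code, the `offload` function, and the rank/result protocol are all illustrative assumptions, not the DEEP system's interface; a real implementation would pass an intercommunicator rather than pipes):

```python
# Spawn-style offload sketch: subprocesses stand in for dynamically
# spawned MPI ranks (an analogy, not actual MPI semantics).
import json
import subprocess
import sys

WORKER = r"""
import json, sys
rank = int(sys.argv[1])          # the "rank" handed to the spawned process
print(json.dumps({"rank": rank, "result": rank * rank}))
"""

def offload(nprocs):
    """Launch nprocs workers, gather {rank: result}, like a spawned kernel."""
    procs = [subprocess.Popen([sys.executable, "-c", WORKER, str(r)],
                              stdout=subprocess.PIPE, text=True)
             for r in range(nprocs)]
    results = {}
    for p in procs:
        out, _ = p.communicate()
        d = json.loads(out)
        results[d["rank"]] = d["result"]
    return results
```

The per-process startup cost visible even in this toy version is exactly the kind of overhead the abstract's alternative offloading mechanism aims to reduce relative to a pure MPI_Comm_spawn solution.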
ISBN: (Print) 9781450325066
To run search tasks in a parallel and load-balanced fashion, existing parallel BLAST schemes such as mpiBLAST introduce a data-preparation stage that moves database fragments from shared storage to local cluster nodes. Unfortunately, in today's big-data era, a quickly growing sequence database becomes too heavy to move over the network. In this paper, we develop a Scalable Data Access Framework (SDAFT) to solve this problem. It employs a distributed file system (DFS) to provide scalable data access for parallel sequence searches. SDAFT consists of two interlocked components: 1) a data-centric load-balanced scheduler (DC-scheduler) that enforces data-process locality, and 2) a translation layer that translates conventional parallel I/O operations into HDFS I/O. Experimenting with our SDAFT prototype on real-world databases and queries across a wide variety of computing platforms, we found that SDAFT can reduce I/O cost by a factor of 4 to 10 and double overall execution performance compared with existing schemes. Copyright 2013 ACM.
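The data-process locality idea behind the DC-scheduler can be sketched as follows (function and variable names are illustrative assumptions, not SDAFT's actual interface): each database fragment is replicated on some DFS nodes, and a search task is assigned to the least-loaded node that already stores its fragment, so no fragment crosses the network:

```python
# Data-centric load-balanced assignment sketch (assumed names, not SDAFT's API).
def dc_schedule(tasks, replicas):
    """tasks: list of fragment ids to search.
    replicas: fragment id -> set of node names holding a local copy.
    Returns fragment id -> chosen node, always a replica holder."""
    load = {}        # node -> number of tasks assigned so far
    assignment = {}
    for frag in tasks:
        # Only nodes that already store the fragment are candidates,
        # which enforces data-process locality; ties break by node name.
        candidates = sorted(replicas[frag])
        node = min(candidates, key=lambda n: load.get(n, 0))
        assignment[frag] = node
        load[node] = load.get(node, 0) + 1
    return assignment
```

Choosing the least-loaded replica holder gives the load balancing, while restricting candidates to replica holders gives the locality — the two properties the abstract credits for the reduced I/O cost.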
Computational challenges for the one-to-many and many-to-many protein structure comparison (PSC) problem are a result of several factors: constantly expanding large-size structural proteomics databases, high computati...
ISBN: (Print) 9781467364041; 9781467364034
Cloud computing has become increasingly popular over the past decade; it has been under continuous development, with advances in architecture, software, and networking. Hadoop-MapReduce is a common software framework for processing parallelizable problems across big datasets using a distributed cluster of processors or stand-alone computers. Cloud Hadoop-MapReduce can scale incrementally in the number of processing nodes, so Hadoop-MapReduce is designed to provide a processing platform with powerful computation. Network traffic is often the most important bottleneck in data-intensive computing, and network latency significantly decreases performance in data-parallel systems. The network bottleneck is caused by limited network bandwidth, since network transfer is much slower than local disk access. Therefore, good data locality can reduce network traffic and increase performance in data-intensive HPC systems. However, Hadoop's scheduler has a data-locality defect in resource assignment. In this paper, we present a locality-aware scheduling algorithm (LaSA) for the Hadoop-MapReduce scheduler. First, we propose a mathematical model of the weight of data interference in the Hadoop scheduler. Second, we present the LaSA algorithm, which uses the weight of data interference to provide data-locality-aware resource assignment in the Hadoop scheduler. Finally, we build an experimental environment with 3 clusters and 35 VMs to verify LaSA's performance.
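The general shape of such locality-aware assignment can be sketched as follows; note that the weight formula below (remote blocks a placement would fetch) is an assumption for illustration, not the paper's actual mathematical model of data interference:

```python
# Locality-aware placement sketch in the spirit of LaSA
# (assumed weight function, not the paper's exact model).
def interference_weight(task_blocks, node_blocks):
    """Number of the task's input blocks the node would fetch remotely."""
    return len(set(task_blocks) - set(node_blocks))

def lasa_assign(task_blocks, cluster):
    """cluster: node name -> list of locally stored block ids.
    Pick the node minimizing interference weight (ties break by name)."""
    return min(sorted(cluster),
               key=lambda n: interference_weight(task_blocks, cluster[n]))
```

A node holding all of a task's blocks gets weight 0 and is always preferred, which is exactly the locality property the default Hadoop scheduler fails to guarantee according to the abstract.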