ISBN: 9781467311601 (Print)
Advances in sensor and computer technology are revolutionizing the way that remote sensing data with hundreds or even thousands of channels for the same area on the earth's surface are collected, managed, and analyzed. In this paper, the classical Spectral Angle Mapper (SAM) algorithm, which is well suited to parallel and distributed computing, is implemented on Graphics Processing Units (GPUs) and on a distributed cluster, respectively, to accelerate the computations. A quantitative performance comparison between the Compute Unified Device Architecture (CUDA) and Matlab platforms is given by analyzing the results of the different parallel architectures' implementations of the same SAM algorithm.
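The core of SAM is the spectral angle between a pixel spectrum x and a reference spectrum r, θ = arccos((x·r)/(‖x‖‖r‖)); each pixel's angle is independent of every other pixel's, which is why the algorithm maps so naturally onto GPUs and clusters. A minimal NumPy sketch of the per-pixel computation (an illustration only, not the paper's CUDA or Matlab code; function and variable names are ours):

```python
import numpy as np

def spectral_angle(pixels, reference):
    """Spectral angle (radians) between each pixel spectrum and a reference.

    pixels: (N, B) array of N pixel spectra with B bands
    reference: (B,) reference spectrum
    """
    dots = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    cos = np.clip(dots / norms, -1.0, 1.0)  # guard against rounding outside [-1, 1]
    return np.arccos(cos)

# Toy data: the first pixel is parallel to the reference, so its angle is ~0
pixels = np.array([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
ref = np.array([2.0, 4.0, 6.0])
angles = spectral_angle(pixels, ref)
```

Because every pixel's angle is computed from its own row only, the loop body can be assigned to one GPU thread per pixel or one cluster node per image tile without any communication.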
ISBN: 9781467309745 (Print)
This paper examines the integration of the NSF/TCPP Core Curriculum Recommendations in a liberal arts undergraduate setting. We examine how parallel and distributed computing concepts can be incorporated across the breadth of the undergraduate curriculum. As a model of such an integration, changes are proposed to Data Structures and to Design and Analysis of Algorithms. These changes were implemented in Design and Analysis of Algorithms, and the results were compared to previous iterations of that course taught by the same instructor. The student feedback received shows that the introduction of these topics made the course more engaging and conveyed an adequate introduction to this material.
As the foundation of cloud computing, server consolidation allows multiple computing infrastructures to run as virtual machines on a single physical node. It improves the utilization of most kinds of resources, but memo...
Summary form only given. The keynote speech was not made available for publication as part of the conference proceedings. Only a professional biography of the speaker is presented.
Summary form only given. The 2012 International Workshop on High Performance Data Intensive Computing (HPDIC2012) is a forum for professionals involved in data intensive computing and high performance computing. The goal of this workshop is to bridge the gap between theory and practice in the field of high performance data intensive computing and to bring together researchers and practitioners from academia and industry working on high performance data intensive computing technologies. We believe that high performance data intensive computing will benefit from close interaction between researchers and industry practitioners, so that research can inform current deployments and deployment challenges can inform new research. In support of this, HPDIC2012 provides a forum for academics and industry practitioners to share their ideas and experiences; discuss challenges and recent advances; introduce developments and tools; identify open issues; present applications and enhancements for data intensive computing systems; report state-of-the-art and in-progress research; leverage each other's perspectives; and identify new and emerging trends in this important area.
The petabyte scale of Big Data generated in bioinformatics requires the introduction of advanced computational techniques to enable efficient knowledge discovery from data. Many data analysis tools in bioinformatics have been developed, but few have been adapted to take advantage of high performance computing (HPC) resources. For some of these tools, an attractive option is to employ a map/reduce strategy. On the other hand, cloud computing can be an important platform for running such tools in parallel because it provides on-demand, elastic computational resources. This paper presents a software suite for Microsoft Azure which supports legacy software (without modifications of the algorithm). We demonstrate the feasibility of the approach by benchmarking a typical bioinformatics tool, namely dotplot.
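Dotplot lends itself to the map/reduce strategy the abstract mentions because the match matrix between two sequences splits into independent row strips: each strip is a "map" task, and the "reduce" step merely concatenates the strips in order. A toy plain-Python sketch of this decomposition (the paper's suite targets Microsoft Azure and unmodified legacy binaries; the function names here are illustrative):

```python
def dotplot_strip(seq_a, seq_b, row_start, row_end):
    """'Map' task: compute one horizontal strip of the match matrix."""
    return [[1 if seq_a[i] == seq_b[j] else 0 for j in range(len(seq_b))]
            for i in range(row_start, row_end)]

def dotplot(seq_a, seq_b, n_tasks=2):
    """Split the rows into independent strips, then concatenate ('reduce')."""
    bounds = [round(k * len(seq_a) / n_tasks) for k in range(n_tasks + 1)]
    strips = [dotplot_strip(seq_a, seq_b, bounds[k], bounds[k + 1])
              for k in range(n_tasks)]  # each strip could run on its own worker
    return [row for strip in strips for row in strip]

matrix = dotplot("AGCA", "AGC")
```

Since the strips share no state, the same decomposition works whether the tasks run as threads, cluster jobs, or on-demand cloud instances.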
Sensing Instrument as a Service (SIaaS) is a new cloud service that allows users to use data acquisition instruments shared through cloud infrastructure. It offers a common interface for managing physical sensing instruments and makes it possible to take full advantage of cloud computing technology for storing and processing acquired data. With SIaaS, different research groups can share a physical instrument in a controlled way, even if they are geographically distributed, and users can access it from any internet-connected device without installing any software. By integrating several of our own frameworks, we have created a new cloud service that transforms physical resources into a ubiquitous service.
This paper reports our experiences in reimplementing an entry-level graduate course in high-performance parallel computing aimed at physical scientists and engineers. These experiences have directly informed a significant redesign of a junior/senior undergraduate course, Introduction to High-Performance Computing (CS 4225 at Georgia Tech), which we are implementing for the current Spring 2012 semester. Based on feedback from the graduate version, the redesign of the undergraduate course emphasizes peer instruction and hands-on activities during the traditional lecture periods, as well as significant time for end-to-end projects. This paper summarizes our anecdotal findings from the graduate version's exit surveys and briefly outlines our plans for the undergraduate course.
With the rise of Service computing, applications are increasingly built as temporal compositions of autonomous services, in which services are combined dynamically to satisfy constantly arriving user requests. These applications run on top of web-based, large, unreliable, and heterogeneous platforms, where there is a high demand for autonomic behaviours such as self-optimisation, self-adaptation, or self-healing. Chemistry-inspired computing envisions a computation as a succession of implicitly parallel, distributed, and autonomous reactions, each reaction consuming molecules of data to produce new ones, until a state of inertia is reached in which no more reactions are possible. This vision of computation makes it a promising candidate for injecting autonomic behaviours into service computing. However, while the benefits of such a model are manifold, its deployment over large-scale platforms remains a widely open issue. In this paper, after identifying the main obstacles to such a deployment, we propose a peer-to-peer framework able to execute chemical specifications at large scale. It combines distributed hash tables with algorithms for the atomic capture of molecules, and proposes an efficient method for inertia detection, a critical problem, particularly in a large-scale environment. The sustainability of the framework is established through a complete complexity analysis.
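The chemical model the abstract describes can be illustrated with a tiny sequential interpreter: a solution is a multiset of molecules, a reaction consumes molecules that satisfy a condition and produces new ones, and execution stops at inertia, when no molecules can react. The sketch below shows only these single-machine semantics, under our own naming; the paper's contribution is executing them at large scale over a DHT with atomic capture of molecules:

```python
def run_to_inertia(molecules, condition, action):
    """Apply a two-molecule reaction until no pair satisfies the condition."""
    sol = list(molecules)
    while True:
        # find any reacting pair (a real runtime captures pairs atomically, in parallel)
        pair = next(((i, j) for i in range(len(sol)) for j in range(len(sol))
                     if i != j and condition(sol[i], sol[j])), None)
        if pair is None:
            return sol  # inertia reached: no more reactions are possible
        i, j = pair
        a, b = sol[i], sol[j]
        for idx in sorted((i, j), reverse=True):
            del sol[idx]  # consume the two reactants
        sol.append(action(a, b))  # produce the new molecule

# Example program: the "max" reaction leaves only the largest molecule
result = run_to_inertia([3, 1, 4, 1, 5], lambda a, b: a >= b, max)
```

At inertia the solution holds the program's result (here, the maximum of the initial multiset); detecting that no pair anywhere in a distributed solution can still react is precisely the inertia-detection problem the paper addresses.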