ISBN (print): 9789604741342
The proceedings contain 37 papers. The topics discussed include: stable protocols for medium access control in wireless networks; new approaches to parallel calculus in groups of firms; mathematical theory of information technology; AI, granular computing, and automata with structured memory; using cloud computing for e-learning systems; a quality model for m-learning applications; robustness of information systems and technologies; using some web content mining techniques for Arabic text classification; a modified C-Means clustering algorithm; a new implementation of the unsupervised ID3 algorithm (NIU-ID3) using Visual ***; distributed algorithms for power-saving optimization in sensor networks; a secure automatic ticketing system; and secure distribution of confidential information via self-destructing data.
Massively Multiplayer Online Games (MMOGs) have gradually become one of the most popular Internet applications. The traditional client-server architecture is widely used in MMOG deployment, but its scalability and maintenan...
ISBN (print): 9783642019692
Large-scale computational science simulations are a dominant component of the workload on modern supercomputers. Efficient use of high-end resources for these large computations is of considerable scientific and economic importance. However, conventional job schedulers limit flexibility in that they are 'static', i.e., the number of processors allocated to an application cannot be changed at runtime. In earlier work, we described ReSHAPE, a system that eliminates this drawback by supporting dynamic resizability in distributed-memory parallel applications. The goal of this paper is to present a case study highlighting the steps involved in adapting a production scientific simulation code to take advantage of ReSHAPE. LAMMPS, a widely used molecular dynamics code, is the test case. Minor extensions to LAMMPS allow it to be resized using ReSHAPE, and experimental results show that resizing significantly improves overall system utilization as well as the performance of an individual LAMMPS job.
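The core of dynamic resizability is re-blocking an application's data when the processor count changes at an iteration boundary. The following is a minimal sketch of that idea, not the actual ReSHAPE API; the function names and the 1-D block layout are our own illustration.

```python
# Sketch of the resize-point idea: at an iteration boundary the application
# learns a new processor count and recomputes its block decomposition.
# Hypothetical names; not ReSHAPE's interface.

def block_sizes(n_items: int, n_procs: int) -> list:
    """Contiguous block decomposition; block sizes differ by at most one."""
    base, extra = divmod(n_items, n_procs)
    return [base + (1 if rank < extra else 0) for rank in range(n_procs)]

def resize_layout(n_items: int, old_procs: int, new_procs: int):
    """Describe the redistribution a resize from old_procs to new_procs
    would require: the per-rank block sizes before and after."""
    return block_sizes(n_items, old_procs), block_sizes(n_items, new_procs)

if __name__ == "__main__":
    old, new = resize_layout(1000, 8, 12)
    assert sum(old) == sum(new) == 1000   # no elements lost in the resize
    print(old, new)
```

In a real MPI code the new sizes would drive point-to-point transfers of the moved elements; the sketch only computes the target layout.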
ISBN (print): 9783642038686
As distributed systems increase in both popularity and scale, it becomes increasingly important to understand, as well as to systematically identify, performance anomalies and potential opportunities for optimization. However, large-scale distributed systems are often complex and non-deterministic due to hardware and software heterogeneity and configurable runtime options that may boost or diminish performance. It is therefore important to be able to disseminate and present the information gleaned from a local system under a common evaluation methodology, so that such efforts can be valuable in one environment and provide general guidelines for other environments. Evaluation methodologies can conveniently be encapsulated inside a common analysis framework that serves as an outer layer upon which appropriate experimental design and relevant workloads (benchmarking and profiling applications) can be supported. In this paper we present ExPerT, an Extensible Performance Toolkit. ExPerT defines a flexible framework in which a set of benchmarking, tracing, and profiling applications can be correlated together in a unified interface. The framework consists primarily of two parts: an extensible module for profiling and benchmarking support, and a unified data discovery tool for information gathering and parsing. We include a case study of disk I/O performance in virtualized distributed environments which demonstrates the flexibility of our framework for selecting a benchmark suite, creating an experimental design, and performing analysis.
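The "extensible module plus unified records" structure described above can be sketched as a small plug-in registry; the names (`register`, `run_all`) and the record fields are our invention, not ExPerT's actual interfaces.

```python
# Hedged sketch of a plug-in benchmark registry: benchmarks register
# themselves, and a runner emits results in one unified record format.

import time

BENCHMARKS = {}

def register(name):
    """Decorator that adds a benchmark function to the registry."""
    def wrap(fn):
        BENCHMARKS[name] = fn
        return fn
    return wrap

def run_all():
    """Run every registered benchmark; return unified result records."""
    records = []
    for name, fn in BENCHMARKS.items():
        start = time.perf_counter()
        value = fn()
        records.append({"benchmark": name,
                        "metric": value,
                        "seconds": time.perf_counter() - start})
    return records

@register("sum-1k")
def bench_sum():
    return sum(range(1000))

if __name__ == "__main__":
    for rec in run_all():
        print(rec)
```

Because every benchmark yields the same record shape, downstream analysis tools can correlate results without knowing which plug-in produced them.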
ISBN (print): 9783642030949
Practical implementations of an atomically consistent read/write memory service are important building blocks for higher-level applications. This is especially true when data accessibility and survivability are provided by a distributed platform consisting of networked nodes, where both nodes and connections are subject to failure. This work presents an experimental evaluation of the practicality of an atomic memory service implementation, called RAMBO, which is the first to support multiple-reader, multiple-writer access to the atomic data with an integrated reconfiguration protocol to replace the underlying set of replicas without any interruption of the ongoing operations. Theoretical guarantees of this service are well understood; however, only rudimentary analytical performance studies along with limited LAN testing were performed on the implementation of RAMBO, neither representing any realistic deployment setting. In order to assess the true practicality of the RAMBO service, we devised a series of experiments tested on PlanetLab, a planetary-scale research WAN. Our experiments show that RAMBO's performance is reasonable under the tested scenarios, even under the somewhat extreme conditions of PlanetLab. This demonstrates the feasibility of developing dependable reconfigurable sharable data services with provable consistency guarantees on unreliable distributed systems.
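The quorum machinery underlying such atomic memory services can be illustrated with a single-process toy: each operation contacts a majority of replicas, and any two majorities intersect, so a read always observes the latest completed write. This is a sketch of the classic two-phase majority scheme (closer to the ABD protocol than to RAMBO itself, which adds reconfiguration); the replica representation is invented for illustration.

```python
# Toy majority-quorum read/write over in-memory "replicas" (dicts).
# Tags are (sequence, writer_id) pairs ordered lexicographically.

import random

def make_replicas(n):
    return [{"tag": (0, 0), "value": None} for _ in range(n)]

def majority(replicas):
    k = len(replicas) // 2 + 1
    return random.sample(replicas, k)     # any majority subset suffices

def write(replicas, writer_id, value):
    # Phase 1: query a majority for the highest sequence number seen.
    seq = max(r["tag"][0] for r in majority(replicas))
    tag = (seq + 1, writer_id)
    # Phase 2: propagate (tag, value) to a majority.
    for r in majority(replicas):
        if tag > r["tag"]:
            r["tag"], r["value"] = tag, value

def read(replicas):
    # Phase 1: find the value with the highest tag at some majority.
    best = max(majority(replicas), key=lambda r: r["tag"])
    tag, value = best["tag"], best["value"]
    # Phase 2: write it back so later reads cannot return older values.
    for r in majority(replicas):
        if tag > r["tag"]:
            r["tag"], r["value"] = tag, value
    return value
```

Because the sampled majorities always intersect, sequential writes and reads behave atomically here even though each operation only touches a random majority.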
ISBN (print): 9781605585987
Peer-to-peer has emerged as a better way of building applications on the Internet that require high scalability and availability. Peer-to-peer systems are usually organized into structured overlay networks, which provide key-based routing capabilities to eliminate the flooding found in unstructured ones. Many overlay network protocols have been proposed to organize peers into various topologies with emphasis on different networking properties. However, applications are often stuck with a specific peer-to-peer overlay network implementation, because different overlay implementations usually provide very different interfaces and messaging mechanisms. In this paper, we present a framework for constructing peer-to-peer overlay networks in Java. First, networking is abstracted by interfaces that use URIs to uniformly address peers on different underlying or overlay networks. Then, asynchronous and synchronous messaging support is built upon these interfaces. Finally, overlay networking interfaces are sketched to handle specific issues in overlay networks. We have constructed several overlay networks in this framework, and built peer-to-peer applications which are independent of the overlay implementations. Copyright 2009 ACM.
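The URI-based addressing idea is easy to sketch: a single address form carries the overlay scheme, the transport endpoint, and an optional overlay key. The sketch below is in Python rather than the authors' Java, and the `PeerAddress` class and scheme names are illustrative, not the framework's actual interfaces.

```python
# Sketch of uniform URI-based peer addressing: one parser serves both
# underlying networks (e.g. "tcp://...") and overlays (e.g. "chord://...").

from urllib.parse import urlsplit

class PeerAddress:
    """Uniform address for a peer on an underlying or overlay network."""

    def __init__(self, uri: str):
        parts = urlsplit(uri)
        self.overlay = parts.scheme                 # e.g. "chord", "tcp"
        self.host = parts.hostname                  # transport endpoint
        self.port = parts.port
        self.key = parts.path.lstrip("/") or None   # overlay key, if any

addr = PeerAddress("chord://node1.example.org:9000/videos/intro")
print(addr.overlay, addr.host, addr.port, addr.key)
```

An application written against such an address type can switch overlays by changing only the scheme, which is the kind of implementation independence the paper targets.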
ISBN (print): 9783642030949
Multi-wavelength data cross-matching among multiple catalogs is a basic and unavoidable step in making distributed digital archives accessible and interoperable. As current catalogs often contain millions or billions of objects, cross-matching is a typical data-intensive computation problem. In this paper, a highly efficient parallel approach to astronomical cross-matching is introduced. We present our partitioning and parallelization approach, then address the problems introduced by task partitioning and give the corresponding solutions, including the sky-splitting function we selected, HEALPix, which plays a key role in both the task partitioning and the database indexing, and a quick bit-operation algorithm we advanced to resolve the block-edge problem. Our experiments show that the function has a marked performance advantage over the previous functions and is fully applicable to large-scale cross-matching.
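The partitioning-plus-edge-handling pattern can be illustrated without HEALPix itself: below, the sky is split into flat declination zones (a deliberate simplification; HEALPix pixels are equal-area and two-dimensional), and each object is matched against its own zone plus the adjacent ones, which is one simple way to avoid losing pairs at partition edges. The match radius and the small-angle distance approximation are our assumptions.

```python
# Simplified zone-partitioned cross-match (stand-in for a HEALPix-based
# partition). Coordinates are (ra, dec) in degrees.

from collections import defaultdict
from math import cos, radians, hypot

RADIUS = 1.0 / 3600.0        # 1 arcsec match radius, in degrees (assumed)
ZONE_H = 2 * RADIUS          # zone height >= 2*RADIUS, so +/-1 zone suffices

def zone(dec):
    return int((dec + 90.0) / ZONE_H)

def build_index(catalog):
    index = defaultdict(list)
    for ra, dec in catalog:
        index[zone(dec)].append((ra, dec))
    return index

def crossmatch(cat_a, cat_b):
    """All (a, b) pairs closer than RADIUS (small-angle approximation)."""
    index = build_index(cat_b)
    pairs = []
    for ra, dec in cat_a:
        z = zone(dec)
        for zz in (z - 1, z, z + 1):        # neighbour zones fix the edges
            for rb, db in index.get(zz, []):
                dra = (ra - rb) * cos(radians(dec))
                if hypot(dra, dec - db) <= RADIUS:
                    pairs.append(((ra, dec), (rb, db)))
    return pairs
```

The edge handling here is the brute-force neighbour scan; the paper's bit-operation algorithm serves the same purpose for HEALPix pixel boundaries.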
Proactive fault tolerance (FT) in high-performance computing is a concept that prevents compute node failures from impacting running parallel applications by preemptively migrating application parts away from nodes that are about to fail. This paper provides a foundation for proactive FT by defining its architecture and classifying implementation options. The paper further relates prior work to the presented architecture and classification, and discusses the challenges ahead for the needed supporting technologies.
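The decision step at the heart of proactive FT (predict, then migrate before failure) can be sketched as a tiny planner; the risk threshold, the health model, and the pairing policy below are invented for illustration and are not from the paper's classification.

```python
# Toy proactive-FT planner: nodes whose predicted failure risk crosses a
# threshold are paired with the healthiest spare nodes as migration targets.

def plan_migrations(health, threshold=0.7):
    """health: {node: predicted failure risk in [0, 1]}.
    Returns {failing_node: target_node}; if there are more failing nodes
    than spares, the unmatched ones are left unplanned."""
    failing = sorted(n for n, r in health.items() if r >= threshold)
    spares = sorted((r, n) for n, r in health.items() if r < threshold)
    plan = {}
    for node, (_, target) in zip(failing, spares):
        plan[node] = target
    return plan

print(plan_migrations({"n1": 0.9, "n2": 0.1, "n3": 0.8, "n4": 0.2}))
```

A real system would drive this from hardware health monitoring (e.g. temperature or ECC error trends) and hand the plan to a process-level or VM-level migration mechanism, which is exactly the design space the paper classifies.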
Large-scale industrial systems nowadays involve hundreds of developers working on hundreds of models representing parts of the whole system specification. Unfortunately, little tool support is provided for managing this ...
With the help of massively parallelized computing techniques, a comprehensive, large-scale numerical simulation of CO2 geologic storage that predicts not only CO2 migration but also its impact on regional groundwater flow was performed. As a case study, a hypothetical industrial-scale injection of CO2 at Tokyo Bay, surrounded by the most industrialized area in Japan, was considered. In the simulation, CO2 is injected into a storage aquifer at a depth of about 1 km under Tokyo Bay from 10 wells at a total rate of 10 million tons/year for 100 years. A regional hydrogeological model with an area of about 60 km x 70 km around Tokyo Bay was discretized into approximately 10 million gridblocks. To solve the high-resolution model efficiently, we used the parallelized multiphase flow simulator TOUGH2-MP/ECO2N on one of the highest-performance supercomputers in Japan, the Earth Simulator (5120 CPUs). The results suggest that even if containment of the CO2 plume is ensured, pressure buildup on the order of tens of meters can occur in shallow confined layers over extensive regions, including urban inland areas. (C) 2009 Elsevier Ltd. All rights reserved.