ISBN: (Print) 9780889866379
A computational Grid, as an emerging framework for providing globally distributed computing resources to our desktops, has been established as a new programming environment. In particular, new Grid research into utilizing a large number of busy computers is indispensable for the further development of the Grid. In this paper, we concentrate on the Grid scheduling problem. To cope with unstable availability, low reliability, and mixed operation policies, which are still serious problems in metascheduling, we propose ASF, an Agent-based Scheduling Framework whose idea is the dual of conventional metascheduling. ASF is composed of a single discreet metascheduler and a collection of autonomous agents attached to each computing resource manager. Each agent autonomously finds jobs to be processed, instead of being assigned jobs by an omnipotent metascheduler. We implement a prototype of ASF and demonstrate its effectiveness: our experiment shows that the total elapsed time of job processing is reduced by 11%.
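The pull-style idea the abstract describes can be illustrated with a minimal sketch in which worker agents fetch jobs from a passive queue instead of receiving assignments from a central scheduler. All names here are illustrative assumptions, not ASF's actual API:

```python
import queue
import threading

# A passive "metascheduler": it only publishes jobs, it assigns nothing.
jobs = queue.Queue()
for j in range(8):
    jobs.put(f"job-{j}")

done = []
lock = threading.Lock()

def agent(name: str):
    # Each agent autonomously pulls the next job whenever its resource
    # is free, rather than waiting for a push from an omniscient scheduler.
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return  # no work left for this agent
        with lock:
            done.append((name, job))

workers = [threading.Thread(target=agent, args=(f"agent-{i}",)) for i in range(3)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(len(done))  # 8: every job processed by whichever agent was available
```

Because agents only take work when they are actually free, an unstable or busy resource simply pulls less often, which is the robustness property the paper targets.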
ISBN: (Print) 9780889866379
Recent expectations regarding a new generation of the Web strongly depend on the success of Semantic Web technology. The Resource Description Framework (RDF)(1) is the basis for explicit, machine-readable representations of the semantics of various Web resources, and it provides a framework for interoperability of future Semantic Web-based applications. However, it has been pointed out that RDF is not suitable for describing highly dynamic and proactive resources (e.g., industrial devices, processes, etc.); an appropriate extension of the existing RDF is therefore necessary. This paper presents the Proactivity Layer of the Smart Resource in the Semantic Web, together with the Resource Agent Behaviour definition. Process performance strategies and coordination methods for such proactive, goal-driven resources are considered.
ISBN: (Print) 9780889866379
In this paper we make use of the LoPC model, which is inspired by both the LogP/LogGP and BSP models but additionally accounts for contention for message-processing resources in parallel programming models, to derive a general estimate of execution cost. We carry out this cost-estimate analysis for three dominant programming models: message passing, shared memory, and distributed shared memory. We analyze a typical application, GUPS, written in each of these programming models, which has irregular receiver-initiated synchronous communication. The LoPC estimate for this application is shown to be accurate when compared against measured runtimes of actual computations on an SGI O2000 multiprocessor machine.
ISBN: (Print) 9780889866379
Current synchronization engines are mainly designed to reconcile data repositories between multiple clients and a central server on a star-like topology. A different approach is needed to achieve synchronization on peer-to-peer topologies, where any node can be both client and server and updates may happen independently. Version vectors are one solution to the problem: they ensure global convergence of the datasets and provide straightforward conflict detection, while letting applications control the conflict-resolution semantics in their specific domain. In this paper, an implementation of a synchronization engine for contact data on mobile devices using version vectors is presented. The engine is capable of optimistically synchronizing databases among many nodes in a peer-to-peer fashion.
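As a concrete illustration of how version vectors yield straightforward conflict detection, here is a minimal comparison routine. The function and constant names are ours, not the engine's API:

```python
DOMINATES, DOMINATED, EQUAL, CONFLICT = "dominates", "dominated", "equal", "conflict"

def compare_vv(a: dict, b: dict) -> str:
    """Compare two version vectors mapping node-id -> update counter."""
    nodes = set(a) | set(b)  # a missing entry counts as 0
    a_ge = all(a.get(n, 0) >= b.get(n, 0) for n in nodes)
    b_ge = all(b.get(n, 0) >= a.get(n, 0) for n in nodes)
    if a_ge and b_ge:
        return EQUAL
    if a_ge:
        return DOMINATES   # a has seen every update b has: keep a
    if b_ge:
        return DOMINATED   # b is strictly newer: take b
    return CONFLICT        # concurrent updates: hand off to the application

# Two peers updated the same contact record independently:
print(compare_vv({"phoneA": 2, "phoneB": 1}, {"phoneA": 1, "phoneB": 2}))  # conflict
```

Only the CONFLICT case needs application-specific resolution, which is exactly the division of labor the abstract describes.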
ISBN: (Print) 9780889866379
Data mining across different companies, organizations, online shops, and the like is necessary in order to discover valuable shared patterns, associations, trends, or dependencies in their shared data. Privacy, however, is a concern: in many situations data mining must be conducted without any party's privacy being violated. In response to this requirement, this paper proposes an effective distributed privacy-preserving data mining approach called CRDM (Collusion-Resistant Data Mining). CRDM is characterized by its ability to resist collusion: if M sites participate in the mining, privacy cannot be violated unless at least M - 1 sites collude. Results of both analytical and experimental performance studies demonstrate the effectiveness of CRDM.
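The abstract does not specify CRDM's protocol. One standard construction with the same collusion threshold is additive secret sharing: each site splits its private count into M random shares, so a global sum can be computed while any M - 1 shares together still look uniformly random. A minimal sketch under that assumption (names are illustrative):

```python
import random

MODULUS = 2**31 - 1  # all arithmetic is done modulo a fixed prime

def make_shares(value: int, m: int) -> list:
    """Split a private count into m additive shares.
    Any m-1 shares reveal nothing; all m are needed to reconstruct."""
    shares = [random.randrange(MODULUS) for _ in range(m - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

# Three sites each hold a private itemset support count.
counts = [17, 5, 42]
m = len(counts)

# Each site sends one share to every participant; each participant then
# publishes only the sum of the shares it received.
all_shares = [make_shares(c, m) for c in counts]
partial_sums = [sum(site[j] for site in all_shares) % MODULUS for j in range(m)]

total = sum(partial_sums) % MODULUS
print(total)  # 64: the global count, with no individual count revealed
```

Recovering any single site's count from the published values would require all the other M - 1 sites to pool their shares, matching the collusion bound stated above.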
ISBN: (Print) 9780889866379
This paper presents numerical evaluations of parallel double Divide and Conquer (dDC) for singular value decomposition. Double Divide and Conquer was recently proposed for eigenvalue decomposition and singular value decomposition. It first computes the eigen/singular values by a compact version of Divide and Conquer; the corresponding eigen/singular vectors are then computed by twisted factorization. The speed and accuracy of double Divide and Conquer are as good as or better than those of standard algorithms such as QR and the original Divide and Conquer. In addition, double Divide and Conquer is expected to have great parallelism, because each step is theoretically parallel and heavy communication is not required. This paper numerically evaluates a parallel MPI implementation of dDC on some large-scale problems, using a distributed-memory architecture and a massively parallel supercomputer, focusing especially on parallelism. It shows high scalability, and super-linear speed-up is observed in some cases.
ISBN: (Print) 9780889866379
The matrix-vector product is one of the most important computational components of Krylov methods. This kernel is an irregular problem, which has led to the development of several compressed storage formats. We design a data structure for distributed matrices to compute the matrix-vector product efficiently on distributed-memory parallel computers using MPI. We conduct numerical experiments on several different sparse matrices and show the parallel performance of our sparse matrix-vector product routines.
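The abstract does not name the storage format it builds on; compressed sparse row (CSR) is the most common compressed format for this kernel. A minimal sequential sketch of the CSR matrix-vector product (illustrative, not the paper's code):

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """Compute y = A @ x for a matrix A stored in CSR form:
    values  - nonzero entries, row by row
    col_idx - column index of each nonzero
    row_ptr - row_ptr[i]:row_ptr[i+1] spans row i's nonzeros"""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# The 3x3 matrix [[4,0,1],[0,2,0],[3,0,5]] in CSR form:
values  = [4.0, 1.0, 2.0, 3.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [5.0, 2.0, 8.0]
```

In a distributed-memory setting, each MPI rank would own a contiguous block of rows and gather the remote entries of x it needs before running such a local kernel, which is the communication pattern the paper's data structure is designed to make efficient.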
ISBN: (Print) 9780889866379
Domus is an architecture for Distributed Hash Tables (DHTs) tailored to a shared-all cluster environment. Domus DHTs build on a (dynamic) set of cluster nodes; each node may perform routing and/or storage tasks, for one or more DHTs, as a function of the node's base (static) resources and its (dynamic) state. Domus DHTs also benefit from a rich set of user-level attributes and operations. pDomus is a prototype of Domus that creates an environment in which to evaluate the architecture's concepts and features. In this paper, we present a set of experiments conducted to obtain figures of merit on the scalability of a specific DHT operation, with several lookup methods and storage technologies. The evaluation also includes a comparison with a database and a P2P-oriented DHT platform. The results are promising and motivate further work.
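The abstract does not describe Domus's addressing scheme or its lookup methods. As a generic illustration of the core DHT problem, mapping keys onto a dynamic set of nodes, here is a toy consistent-hashing sketch (all names are ours):

```python
import bisect
import hashlib

def stable_hash(key: str) -> int:
    # A stable hash, so every client computes the same ring layout.
    return int(hashlib.sha1(key.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hash ring: a key is owned by the first node
    clockwise from the key's hash position."""
    def __init__(self, nodes):
        self.points = sorted((stable_hash(n), n) for n in nodes)

    def lookup(self, key: str) -> str:
        hashes = [h for h, _ in self.points]
        i = bisect.bisect_right(hashes, stable_hash(key)) % len(self.points)
        return self.points[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
owner = ring.lookup("alice")  # deterministic: same owner from any client
```

The property that makes such schemes attractive for a dynamic cluster is that adding or removing a node only remaps the keys in one arc of the ring, rather than reshuffling the whole table.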
ISBN: (Print) 9780889866379
Despite the well-known advantages of distributed processing for intensive computations such as simulation, frameworks often fail to exploit them. A distributed simulation is harder to develop than a sequential one, because it is necessary to interface and map activities to processors and to handle the ensuing communication and synchronization problems. Very often the designer has to explicitly specify extra information about distribution before the framework can make an effort to exploit parallelism. This paper presents Automated Distributed Simulation (ADS), which allows the designer to ignore distribution concerns while still benefiting from the advantages. ADS relies on the actor formalism and is realized as an open-source implementation for the Ptolemy II simulation framework. Experiments compare different topologies, granularities, and numbers of blocks, achieving linear speedups for practical cases. We also implement pipelining techniques so that iterative models with purely sequential topologies can benefit from ADS.
ISBN: (Print) 9780889867048
This paper presents improvements to the parallel-FIMI method for static load balancing of mining all frequent itemsets on a distributed-memory (DM) parallel machine. The method probabilistically partitions the space of all frequent itemsets into partitions of approximately the same size. The improvements consist of parallelizing the approximate partitioning of the search space and of dynamically reordering items during construction of prefix-based equivalence classes. The new versions of the method achieve nearly linear speedups on up to 10 processors.