ISBN (print): 9781450355599
For over a decade now we have been witnessing the success of massive parallel computation (MPC) frameworks, such as MapReduce, Hadoop, Dryad, or Spark. One of the reasons for their success is the fact that these frameworks are able to accurately capture the nature of large-scale computation. In particular, compared to the classic distributed algorithms or PRAM models, these frameworks allow for much more local computation. The fundamental question that arises in this context is: can we leverage this additional power to obtain even faster parallel algorithms? A prominent example here is the maximum matching problem, one of the most classic graph problems. It is well known that in the PRAM model one can compute a 2-approximate maximum matching in O(log n) rounds. However, the exact complexity of this problem in the MPC framework is still far from understood. Lattanzi et al. (SPAA 2011) showed that if each machine has n^(1+Ω(1)) memory, this problem can also be solved 2-approximately in a constant number of rounds. These techniques, as well as the approaches developed in the follow-up work, seem to get stuck in a fundamental way at roughly O(log n) rounds once we enter the (at most) near-linear memory regime. It is thus entirely possible that in this regime, which captures in particular the case of sparse graph computations, the best MPC round complexity matches what one can already get in the PRAM model, without the need to take advantage of the extra local computation power. In this paper, we finally refute that possibility. That is, we break the above O(log n) round complexity bound even in the case of slightly sublinear memory per machine. In fact, our improvement here is almost exponential: we are able to deliver a (2+ε)-approximate maximum matching, for any fixed constant ε > 0, in O((log log n)^2) rounds. To establish our result we need to deviate from the previous work in two important ways that are crucial for exploiting the power of t...
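The PRAM baseline the abstract refers to rests on the fact that any maximal matching is a 2-approximation of the maximum matching. As a point of reference only (this is not the paper's MPC algorithm), here is a minimal sequential greedy sketch of that guarantee; the class and method names are hypothetical.

```java
import java.util.*;

/**
 * Sequential greedy maximal matching. Any maximal matching is a
 * 2-approximation of the maximum matching, which is the classic baseline
 * the abstract refers to. This is NOT the paper's MPC algorithm; it is a
 * minimal illustration of the approximation guarantee, with hypothetical names.
 */
public class GreedyMatching {
    public record Edge(int u, int v) {}

    public static List<Edge> greedyMaximalMatching(List<Edge> edges) {
        Set<Integer> matched = new HashSet<>();
        List<Edge> matching = new ArrayList<>();
        for (Edge e : edges) {
            // Take an edge only if both endpoints are still free.
            if (!matched.contains(e.u()) && !matched.contains(e.v())) {
                matching.add(e);
                matched.add(e.u());
                matched.add(e.v());
            }
        }
        return matching; // maximal, hence at least half the size of a maximum matching
    }

    public static void main(String[] args) {
        List<Edge> edges = List.of(new Edge(1, 2), new Edge(2, 3), new Edge(3, 4));
        System.out.println(greedyMaximalMatching(edges)); // picks (1,2) and (3,4)
    }
}
```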
ISBN (print): 9781424437511
An approach to the management of non-functional concerns in massively parallel and/or distributed architectures that marries parallel programming patterns with autonomic computing is presented. The necessity and suitability of the adoption of autonomic techniques are evidenced. Issues arising in the implementation of autonomic managers taking care of multiple concerns and of coordination among hierarchies of such autonomic managers are discussed. Experimental results are presented that demonstrate the feasibility of the approach.
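As a rough illustration of the kind of autonomic manager the abstract describes (a manager observing a parallel pattern and adapting it to meet a non-functional goal), the following is a minimal sketch of a monitor-analyse-plan-execute loop; the Farm interface, thresholds, and class names are assumptions, not the authors' API.

```java
/**
 * Minimal sketch of an autonomic manager driving a single non-functional
 * concern (here: throughput of a hypothetical task farm). Names such as
 * Farm and ThroughputManager are assumptions; the sketch only illustrates
 * the monitor-analyse-plan-execute loop the abstract describes.
 */
public class ThroughputManager implements Runnable {
    interface Farm {                      // hypothetical managed pattern
        double measuredThroughput();      // monitor
        void addWorker();                 // execute: scale up
        void removeWorker();              // execute: scale down
    }

    private final Farm farm;
    private final double targetThroughput;

    ThroughputManager(Farm farm, double targetThroughput) {
        this.farm = farm;
        this.targetThroughput = targetThroughput;
    }

    @Override
    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            double observed = farm.measuredThroughput();   // monitor
            if (observed < 0.9 * targetThroughput) {       // analyse
                farm.addWorker();                          // plan + execute
            } else if (observed > 1.1 * targetThroughput) {
                farm.removeWorker();
            }
            try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
        }
    }
}
```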
Sorting has been a topic of immense research value since the inception of Computer Science. Hybrid computing on multicore architectures involves computing simultaneously on a tightly coupled heterogeneous collection o...
We have developed a combined network and service management and diagnostics solution for our in-house developed remote patient monitoring system. The developed system has been included into the ALPHA eHealth/remote patient...
ISBN (print): 9781424444106
This paper describes how to improve the separation between domain-specific code and parallel code in skeletal systems. Traditionally, the code used to exploit parallelism is tangled with domain-specific code, which leads to problems such as poor maintainability, lower flexibility, and weak scalability. In this paper we introduce the design of the YaSkel framework, a support tool for writing parallel programs. We argue that the design of the YaSkel framework allows more freedom to change the parallelization strategy than traditional skeleton frameworks. To change the parallelization strategy we rely on DI (Dependency Injection) to inject a reference to a specific skeleton in later development stages. We also show that AOP (Aspect-Oriented Programming) can be used to minimize the impact of applying skeleton-based approaches to legacy code.
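A minimal sketch of the dependency-injection idea, using hypothetical names (Skeleton, ParallelMap, ImageFilter) rather than YaSkel's real API: domain code depends only on an abstract skeleton interface, and the concrete parallelization strategy is injected at wiring time, so it can be swapped without touching domain code.

```java
import java.util.List;
import java.util.function.Function;

/**
 * Sketch of injecting a parallelization strategy via DI.
 * All names here are hypothetical, not YaSkel's actual API.
 */
public class DiSkeletonExample {
    interface Skeleton<I, O> {
        List<O> apply(List<I> input, Function<I, O> worker);
    }

    /** Trivial strategy: runs the worker sequentially. */
    static class SequentialMap<I, O> implements Skeleton<I, O> {
        public List<O> apply(List<I> input, Function<I, O> worker) {
            return input.stream().map(worker).toList();
        }
    }

    /** Parallel strategy; can be swapped in without touching domain code. */
    static class ParallelMap<I, O> implements Skeleton<I, O> {
        public List<O> apply(List<I> input, Function<I, O> worker) {
            return input.parallelStream().map(worker).toList();
        }
    }

    /** Domain code: only knows the abstract Skeleton, injected via the constructor. */
    static class ImageFilter {
        private final Skeleton<int[], int[]> skeleton;
        ImageFilter(Skeleton<int[], int[]> skeleton) { this.skeleton = skeleton; }

        List<int[]> invertAll(List<int[]> images) {
            return skeleton.apply(images, img -> {
                int[] out = new int[img.length];
                for (int i = 0; i < img.length; i++) out[i] = 255 - img[i];
                return out;
            });
        }
    }

    public static void main(String[] args) {
        // The parallelization strategy is chosen at wiring time, not in domain code.
        ImageFilter filter = new ImageFilter(new ParallelMap<>());
        System.out.println(filter.invertAll(List.of(new int[]{0, 128, 255})).get(0)[2]); // 0
    }
}
```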
ISBN (print): 9781424436880
In modern grids, authentication is usually implemented via an X.509 PKI (public key infrastructure). Proxy certificates are employed to facilitate interaction with the grid, especially for purposes of delegation and single sign-on. We propose modifications to the grid security infrastructure that allow reporting of proxy usage information to a database, giving end users an opportunity to review by whom and for what purpose their credentials were used. By means of a standardized protocol for certificate revocation, they can then revoke affected proxies and stop abuse.
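A minimal sketch of the usage-reporting idea, with hypothetical names (ProxyUsageRecord, UsageStore) standing in for the proposed database and omitting the X.509 and revocation-protocol machinery: every use of a proxy is recorded together with the relying party and purpose, so the credential owner can review it later.

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of proxy-usage auditing: each proxy use is reported to a store that
 * the credential owner can later query. Names are hypothetical; the paper
 * integrates this with the grid security infrastructure, which is not shown.
 */
public class ProxyUsageAudit {
    record ProxyUsageRecord(String proxySubject, String relyingParty,
                            String purpose, Instant when) {}

    /** Stand-in for the reporting database described in the abstract. */
    static class UsageStore {
        private final List<ProxyUsageRecord> records = new ArrayList<>();
        synchronized void report(ProxyUsageRecord r) { records.add(r); }
        synchronized List<ProxyUsageRecord> forSubject(String subject) {
            return records.stream().filter(r -> r.proxySubject().equals(subject)).toList();
        }
    }

    public static void main(String[] args) {
        UsageStore store = new UsageStore();
        store.report(new ProxyUsageRecord("/DC=org/CN=alice/CN=proxy",
                "gridftp.example.org", "file transfer", Instant.now()));
        // The owner reviews where the proxy was used and may then revoke it.
        store.forSubject("/DC=org/CN=alice/CN=proxy").forEach(System.out::println);
    }
}
```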
Collective operations on distributed data sets foster a high-level data-parallel programming style that eases many aspects of parallel programming significantly. In this paper we describe how higher-order collective operations on distributed object sets can be introduced in a structured way by means of reusable topology classes and C++ templates.
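The paper realizes such collectives with reusable topology classes and C++ templates; as a language-neutral analogy only, the sketch below expresses a typed, higher-order collective (a combined map and reduce over a partitioned object set) using Java generics and hypothetical names (DistributedSet, mapReduce).

```java
import java.util.List;
import java.util.function.BinaryOperator;
import java.util.function.Function;

/**
 * Illustration of a higher-order collective operation over a distributed
 * object set: the caller supplies the per-element function and the reduction
 * operator, and the collective applies them across all partitions. This only
 * mirrors the idea; the names and structure are not the paper's library.
 */
public class CollectiveSketch {
    static class DistributedSet<T> {
        private final List<List<T>> partitions; // one list per (simulated) node
        DistributedSet(List<List<T>> partitions) { this.partitions = partitions; }

        /** Higher-order collective: map each element, then reduce across all partitions. */
        <R> R mapReduce(Function<T, R> map, BinaryOperator<R> reduce, R identity) {
            R acc = identity;
            for (List<T> part : partitions) {   // in a real system: per-node, then combined
                for (T x : part) acc = reduce.apply(acc, map.apply(x));
            }
            return acc;
        }
    }

    public static void main(String[] args) {
        DistributedSet<Integer> set =
                new DistributedSet<>(List.of(List.of(1, 2), List.of(3, 4, 5)));
        int sumOfSquares = set.mapReduce(x -> x * x, Integer::sum, 0);
        System.out.println(sumOfSquares); // 55
    }
}
```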
This workshop provides a forum for researchers and engineers to present their results in grid and distributed computing. Special areas of interest are grid middleware, grid applications, grid benchmarking, data distribution and replication on the grid, fault tolerance and efficiency of grid applications, and novel models of grid computing, including cloud and mobile computing. Topics include:
• Applications: Theory and practice of grid computing. Solution of large problems on grids.
• Benchmarking: Evaluating performance of grid hardware and middleware. Grid benchmarking.
• Infrastructure: Implementation and evaluation of grid middleware.
• Data Grids: Use of grids for analysis of large data sets.
• Management and Scheduling: Account management, resource monitoring.
• Scheduling: Resource allocation, application scheduling and containerization.
• Networking: QoS for grid applications. Bandwidth management.
• Partitioning and Load Balancing: Mapping data and applications onto computational grids.
• Programming Models: Methods for remote execution and intertask communication, including message passing and RPC.
The Message Passing Interface (MPI) is a popular communication library that supports the SIMD model of parallel computing. Process networks (PN), where processes communicate through first-in-first-out channels, support the MIMD model of parallel computing. Java implementations of both MPI and PN are available. We show that the communication primitives available in MPI, such as send/receive, scatter/gather, and broadcast, can be emulated in PN. We present benchmark results comparing the performance of MPIJava and our own PN library.
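As an illustration of the emulation idea (not the benchmarked PN library itself), the sketch below emulates an MPI-style broadcast over per-receiver FIFO channels, with BlockingQueue standing in for a PN channel.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Emulating an MPI-style broadcast on top of process-network channels:
 * each receiver has its own FIFO channel from the root, and broadcast is
 * simply one send per outgoing channel. This is an illustration only.
 */
public class BroadcastOverChannels {
    public static void main(String[] args) throws InterruptedException {
        int receivers = 3;
        // One FIFO channel from the root to each receiver.
        List<BlockingQueue<Integer>> channels = new ArrayList<>();
        for (int i = 0; i < receivers; i++) channels.add(new ArrayBlockingQueue<>(16));

        // Receiver processes: block on their incoming channel.
        for (int i = 0; i < receivers; i++) {
            final int rank = i + 1;
            final BlockingQueue<Integer> in = channels.get(i);
            new Thread(() -> {
                try {
                    System.out.println("rank " + rank + " received " + in.take());
                } catch (InterruptedException ignored) { }
            }).start();
        }

        // Root process: "broadcast" is one put per outgoing channel.
        int value = 42;
        for (BlockingQueue<Integer> ch : channels) ch.put(value);
    }
}
```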