Fractal video compression is a relatively new video compression method. Its attraction lies in its high compression ratio and simple decompression algorithm, but its computational complexity is high, so parallel algorithms on high-performance machines are one way out. In this study we partition the matching search, which accounts for the majority of the work in fractal video compression, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting, on a local area network of loosely coupled PCs. Experimental results show that the parallel algorithm achieves a high speedup in these distributed environments. (c) 2005 Elsevier Inc. All rights reserved.
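The paper's distributed environments use DCOM and .NET Remoting on a LAN; as a language-neutral illustration of the same task partitioning, the sketch below splits the matching search over local worker processes with Python's multiprocessing. The block extraction and the plain MSE error metric are simplifying assumptions, omitting the contractive scaling and intensity transforms of a full fractal codec.

```python
# Sketch of partitioning the fractal matching search into independent tasks.
# The paper distributes these tasks over a LAN via DCOM / .NET Remoting;
# here local multiprocessing workers stand in for the remote PCs.
import numpy as np
from multiprocessing import Pool

def best_match(args):
    """Find, for one range block, the domain block minimizing MSE."""
    range_block, domain_blocks = args
    errors = [np.mean((range_block - d) ** 2) for d in domain_blocks]
    best = int(np.argmin(errors))
    return best, errors[best]

def parallel_matching_search(range_blocks, domain_blocks, workers=4):
    # Each task is one range block's search over the whole domain pool;
    # tasks are independent, so they map cleanly onto distributed workers.
    tasks = [(r, domain_blocks) for r in range_blocks]
    with Pool(workers) as pool:
        return pool.map(best_match, tasks)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ranges = [rng.random((8, 8)) for _ in range(16)]
    domains = [rng.random((8, 8)) for _ in range(64)]
    print(parallel_matching_search(ranges, domains)[:3])
```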
Membrane systems are parallel distributed computing models that are used in a wide variety of areas. Using a sequential machine to simulate membrane systems forfeits the parallelism that is the main advantage of membrane computing. In this paper, an innovative classification algorithm based on a weighted network is introduced, and two new algorithms are proposed for simulating membrane system models on a Graphics Processing Unit (GPU). Communication and synchronization between threads and thread blocks on a GPU are time-consuming. In previous studies, dependent objects were assigned to different threads, which increases the need for inter-thread communication and therefore degrades performance; dependent membranes were likewise assigned to different thread blocks, requiring inter-block communication and again degrading performance. Compared with a sequential CPU implementation that classifies dependent objects, the speedup of the proposed algorithm, for example with 512 objects per membrane, was 82x, while the previous approach (Algorithm 1) achieved 8.2x. For a membrane system with high dependency among membranes, the speedup of the second proposed algorithm (Algorithm 3) was 12x, while the previous approach (Algorithm 1) and the first proposed algorithm (Algorithm 2), which assign each membrane to one thread block, achieved 1.8x. (C) 2014 Elsevier B.V. All rights reserved.
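The reported gains come from keeping dependent objects (and dependent membranes) together so that no inter-thread or inter-block communication is needed. The sketch below illustrates just that grouping step in plain Python: a union-find pass over an assumed dependency list yields groups that could each be handed to a single thread or thread block.

```python
# Sketch of the assignment idea: objects that depend on each other are
# placed in the same group so one thread (or thread block) can process a
# group without cross-thread communication. The dependency graph and the
# worker model are assumptions for illustration.
from collections import defaultdict

def dependency_groups(n_objects, depends):
    """Connected components of the dependency graph (union-find)."""
    parent = list(range(n_objects))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b in depends:
        parent[find(a)] = find(b)
    groups = defaultdict(list)
    for obj in range(n_objects):
        groups[find(obj)].append(obj)
    return list(groups.values())

# Eight objects, three dependency chains -> three groups, one per thread.
print(dependency_groups(8, [(0, 1), (1, 2), (3, 4), (5, 6), (6, 7)]))
```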
This paper analyzes and compares different incentive mechanisms for a master to motivate the collaboration of smartphone users on both data acquisition and distributed computing applications. To collect massive sensitive data from users, we propose a reward-based collaboration mechanism, where the master announces a total reward to be shared among collaborators, and the collaboration is successful if there are enough users willing to collaborate. We show that if the master knows the users' collaboration costs, he can choose to involve only the users with the lowest costs. Without knowing users' private information, however, he needs to offer a larger total reward to attract enough collaborators. Users will benefit from knowing their costs before the data acquisition. Perhaps surprisingly, the master may benefit as the variance of users' cost distribution increases. To utilize smartphones' computation resources to solve complex computing problems, we study how the master can design an optimal contract by specifying different task-reward combinations for different user types. Under complete information, we show that the master involves a user type as long as the master's preference characteristic outweighs that type's unit cost. All collaborators achieve a zero payoff in this case. If the master does not know users' private cost information, however, he conservatively targets a smaller group of users with small costs, and has to give most of the benefits to the collaborators.
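As a toy illustration of the complete-information case, the sketch below picks the k lowest-cost users and computes the smallest total reward that keeps all of them willing, under the added assumption that the announced reward is split equally among collaborators; this is a hypothetical simplification, not the paper's exact mechanism.

```python
# Toy sketch: the master needs k collaborators, involves the k lowest-cost
# users, and (assuming an equal split of the announced reward) the smallest
# workable total reward is k times the k-th smallest cost, so the most
# expensive chosen user is still weakly willing. The equal split is an
# illustrative assumption.
def min_total_reward(costs, k):
    chosen = sorted(costs)[:k]
    return chosen, k * chosen[-1]

costs = [2, 15, 7, 4, 11]
chosen, reward = min_total_reward(costs, k=3)
print(chosen, reward)   # [2, 4, 7] 21 -> equal share of 7 covers each cost
```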
Parallel computing is now an essential paradigm for high performance scientific computing. Most existing hardware and software solutions are expensive or difficult to use. We developed Playdoh, a Python library for distributing computations across the free computing units available in a small network of multicore computers. Playdoh supports independent and loosely coupled parallel problems such as global optimisations, Monte Carlo simulations and numerical integration of partial differential equations. It is designed to be lightweight and easy to use and should be of interest to scientists wanting to turn their lab computers into a small cluster at no cost. (C) 2011 Elsevier B.V. All rights reserved.
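Playdoh's own API is not reproduced here; as a minimal stand-in, the sketch below uses only the standard library to show the class of independent, loosely coupled problems the library targets, with a Monte Carlo estimate split into independent chunks.

```python
# Playdoh targets independent/loosely coupled jobs spread over idle CPUs on
# a few lab machines. This stand-in uses only the standard library to show
# the same pattern (a Monte Carlo pi estimate split into independent
# chunks); it does not reproduce Playdoh's actual API.
import random
from multiprocessing import Pool

def mc_chunk(n):
    """Count random points falling inside the unit quarter-circle."""
    hits = 0
    for _ in range(n):
        x, y = random.random(), random.random()
        hits += x * x + y * y <= 1.0
    return hits

if __name__ == "__main__":
    chunks = [200_000] * 8           # eight independent tasks
    with Pool() as pool:
        hits = sum(pool.map(mc_chunk, chunks))
    print(4.0 * hits / sum(chunks))  # ~3.14
```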
Even though demand response (DR) participation has substantial benefits to the market as a whole, current DR programs suffer from a collection of market, regulatory, infrastructure and technology problems, such as lack of scalability, lack of privacy, imprecision, and nonacceptance by customers. This paper describes how a fundamentally different DR approach, based on service priority tiers for appliances and on stochastic distributed computing, can overcome these problems and be integrated with energy markets. Our approach takes advantage of inexpensive communications technology to estimate the state of major electrical appliances in homes and small businesses and have those appliances respond to power grid state signals within a few seconds. Organizing appliances into service priority tiers allows retail customer power demand to be de-commoditized, making these DR resources a potent force for improving the efficiency of energy markets. This paper describes the proposed methodology, examines how it can be integrated into energy markets, and presents results from mathematical analysis and from a simulation of 100 000 devices.
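A minimal sketch of how tiered, stochastic appliance response could look: each appliance holds a priority tier and, on receiving a grid stress signal, sheds load with a probability that rises with stress and falls with tier. The four-tier scale and the linear probability rule are illustrative assumptions, not the paper's estimator or signal design.

```python
# Sketch of tier-based stochastic demand response: each appliance carries a
# service priority tier, and on receiving a grid stress signal in [0, 1] it
# sheds load with a probability that grows with stress and falls with tier.
# The probability rule and tier scale are illustrative assumptions.
import random

def should_shed(tier, stress, n_tiers=4):
    """Tier 0 = lowest priority (sheds first), tier n_tiers-1 = critical."""
    p_shed = stress * (1.0 - tier / (n_tiers - 1))
    return random.random() < p_shed

random.seed(1)
fleet = [random.randrange(4) for _ in range(100_000)]   # 100 000 devices
stress = 0.6
shed = sum(should_shed(t, stress) for t in fleet)
print(f"{shed} of {len(fleet)} appliances shed load at stress {stress}")
```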
Placement delivery arrays for distributed computing (Comp-PDAs) have recently been proposed as a framework to construct universal computing schemes for MapReduce-like systems. In this work, we extend this concept to systems with straggling nodes, i.e., to systems where a subset of the nodes cannot accomplish the assigned map computations in due time. Unlike most previous works that focused on computing linear functions, our results are universal and apply for arbitrary map and reduce functions. Our contributions are as follows. Firstly, we show how to construct a universal coded computing scheme for MapReduce-like systems with straggling nodes from any given Comp-PDA. We also characterize the storage and communication loads of the resulting scheme in terms of the Comp-PDA parameters. Then, we prove an information-theoretic converse bound on the storage-communication (SC) tradeoff achieved by universal computing schemes with straggling nodes. We show that the information-theoretic bound matches the performance achieved by the coded computing schemes with straggling nodes corresponding to the Maddah-Ali and Niesen (MAN) PDAs, i.e., to the Comp-PDAs describing Maddah-Ali and Niesen's coded caching scheme. Interestingly, the MAN-PDAs are optimal for any number of straggling nodes. This implies that the map phase of optimal coded computing schemes does not need to be adapted to the number of stragglers in the system. We show that the points that lie exactly on the fundamental SC tradeoff cannot be achieved with Comp-PDAs that require a smaller number of files than the MAN-PDAs. This is, however, possible for some of the points that lie close to the SC tradeoff. For these latter points, the decrease in the required number of files can be exponential in the number of nodes of the system. We also model the total execution time, and numerically show that the active set size should be chosen to balance the duration of the map phase against the durations of the shuffle and reduce phases.
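For orientation, the sketch below shows the Maddah-Ali and Niesen style placement that underlies the MAN-PDAs in the no-straggler baseline: with K nodes and map redundancy r, file batches are indexed by the r-subsets of nodes, and (1/r)(1 - r/K) is the classic achievable communication load of this placement from the coded distributed computing literature. The straggler-robust schemes studied in the paper build on this structure.

```python
# Sketch of the MAN-style placement: with K nodes and map redundancy r,
# files are grouped into batches indexed by the r-subsets of nodes, and
# every node in a subset maps that batch. comm_load gives the classic
# achievable no-straggler communication load of this placement.
from itertools import combinations

def man_placement(K, r):
    """Map each file batch to one r-subset of the K nodes."""
    return {subset: f"batch_{i}"
            for i, subset in enumerate(combinations(range(K), r))}

def comm_load(K, r):
    return (1.0 / r) * (1.0 - r / K)

K = 4
for r in range(1, K + 1):
    n_batches = len(man_placement(K, r))
    print(r, n_batches, round(comm_load(K, r), 3))
```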
In distributed computing such as grid computing, online users submit their tasks anytime and anywhere to dynamic resources, so the task arrival and execution processes are stochastic. Adapting to the resulting uncertainties, while keeping scheduling overhead and response time low, is the main concern in dynamic scheduling. Based on decision theory, scheduling is formulated as a Markov decision process (MDP). To address this problem, a machine learning approach is used to learn task arrival and execution patterns online. The proposed algorithm automatically acquires such knowledge without any a priori modeling and proactively allocates tasks in light of forthcoming tasks and their execution dynamics. Compared with four classic algorithms, Min-Min, Min-Max, Sufferage, and ECT, the proposed algorithm has much lower scheduling overhead. Experiments in both synthetic and practical environments reveal that the proposed algorithm outperforms the other algorithms in terms of average response time, and the smaller variance of the average response time further validates its robustness. (C) 2014 Elsevier Inc. All rights reserved.
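As a rough illustration of such proactive, learning-based dispatching, the sketch below tracks per-machine service times with an exponential moving average and sends each arriving task to the machine with the earliest predicted completion. The EMA rule is an illustrative stand-in, not the paper's MDP solution.

```python
# Sketch of proactive dispatching with online learning: per-machine service
# times are tracked with an exponential moving average, and each arriving
# task goes to the machine with the earliest predicted completion time.
class LearningScheduler:
    def __init__(self, n_machines, alpha=0.2):
        self.alpha = alpha
        self.avg_service = [1.0] * n_machines   # learned mean task time
        self.backlog = [0.0] * n_machines       # predicted queued work

    def dispatch(self):
        """Pick the machine with the earliest predicted completion."""
        m = min(range(len(self.backlog)),
                key=lambda i: self.backlog[i] + self.avg_service[i])
        self.backlog[m] += self.avg_service[m]
        return m

    def observe(self, machine, measured_time):
        """Update the learned service time when a task finishes."""
        a = self.alpha
        self.avg_service[machine] = (
            (1 - a) * self.avg_service[machine] + a * measured_time)
        self.backlog[machine] = max(0.0, self.backlog[machine] - measured_time)

sched = LearningScheduler(3)
print([sched.dispatch() for _ in range(5)])   # spreads work: [0, 1, 2, 0, 1]
```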
An automated service demand-supply control system can improve a large-scale grid infrastructure comprising a federation of distributed utility data centers.
Statistical models for spatio-temporal data are increasingly used in environmetrics, climate change, epidemiology, remote sensing and dynamical risk mapping. Due to the complexity of the relationships among the involved variables and the dimensionality of the parameter set to be estimated, techniques for model definition and estimation which can be worked out stepwise are welcome. In this context, hierarchical models are a suitable solution since they make it possible to define the joint dynamics and the full likelihood starting from simpler conditional submodels. Moreover, for a large class of hierarchical models, the maximum likelihood estimation procedure can be simplified using the Expectation-Maximization (EM) algorithm. In this paper, we define the EM algorithm for a rather general three-stage spatio-temporal hierarchical model, which also includes spatio-temporal covariates. In particular, we show that most of the parameters are updated in closed form, which guarantees the stability of the algorithm, unlike classical Newton-Raphson-type optimization techniques for maximizing the full likelihood function. Moreover, we illustrate how the EM algorithm can be combined with a spatio-temporal parametric bootstrap for evaluating parameter accuracy through standard errors and non-Gaussian confidence intervals. To this end, a new software library in the form of a standard R package has been developed. Realistic simulations in a distributed computing environment also allow us to discuss the algorithm's properties and performance in terms of convergence iterations and computing times. (C) 2009 Elsevier Ltd. All rights reserved.
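For reference, the generic EM iteration underlying such procedures, in standard notation (the paper's model-specific closed-form updates are not reproduced here):

```latex
% Generic EM iteration: y = observed data, z = latent variables,
% L_c = complete-data likelihood, \theta = parameter vector.
\text{E-step:}\quad Q\bigl(\theta \mid \theta^{(t)}\bigr)
  = \mathbb{E}_{z \mid y,\, \theta^{(t)}}\!\bigl[\log L_c(\theta;\, y, z)\bigr]
\qquad
\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta}\; Q\bigl(\theta \mid \theta^{(t)}\bigr)
```

The stability advantage cited above comes from the M-step being available in closed form for most parameters, so no Newton-Raphson search over the full likelihood is required.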
A flexibility-based distributed computing strategy (DCS) for structural health monitoring (SHM) has recently been proposed which is suitable for implementation on a network of densely distributed smart sensors. This approach uses a hierarchical strategy in which adjacent smart sensors are grouped together to form sensor communities. A flexibility-based damage detection method is employed to evaluate the condition of the local elements within the communities using only locally measured information. The damage detection results in these communities are then communicated to the surrounding communities and sent back to a central station, so structural health monitoring can be done without relying on central data acquisition and processing. The main purpose of this paper is to experimentally verify this flexibility-based DCS approach using wired sensors; such verification is essential prior to implementation on a smart sensor platform. The damage locating vector method that forms the foundation of the DCS approach is briefly reviewed, followed by an overview of the DCS approach. This flexibility-based approach is then experimentally verified on a 5.6 m long three-dimensional truss structure. To simulate damage in the structure, original truss members are replaced by ones with a reduced cross section. Both single and multiple damage scenarios are studied. Experimental results show that the DCS approach can successfully detect damage in local elements using only locally measured information.
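At the core of the approach is the damage locating vector (DLV) method: vectors spanning the null space of the change in flexibility between the undamaged and damaged states induce near-zero stress in the damaged elements when applied as static loads. A minimal numerical sketch, with an assumed tolerance for the null space:

```python
# Sketch of the damage-locating-vector (DLV) idea: vectors in the numerical
# null space of the flexibility change dF = F_damaged - F_undamaged, found
# from its small singular values, are candidate DLVs; applied as static
# loads they induce near-zero stress in damaged elements. The tolerance is
# an illustrative assumption.
import numpy as np

def damage_locating_vectors(F_undamaged, F_damaged, tol=1e-6):
    dF = F_damaged - F_undamaged
    U, s, Vt = np.linalg.svd(dF)
    null_mask = s < tol * s.max()        # near-zero singular values
    return Vt[null_mask].T               # columns are candidate DLVs

# Toy flexibility matrices: a rank-1 change leaves a 2-D null space.
rng = np.random.default_rng(0)
F0 = np.eye(3)
v = rng.random(3)
F1 = F0 + 0.1 * np.outer(v, v)
print(damage_locating_vectors(F0, F1).shape)   # (3, 2)
```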