Cloud computing providers now offer their unused resources for lease in the spot market, which has been considered the first step towards a full-fledged market economy for computational resources. Spot instances are virtual machines (VMs) available at lower prices than their standard on-demand counterparts. These VMs run for as long as the current price is lower than the maximum bid price users are willing to pay per hour. Spot instances have been increasingly used for executing compute-intensive applications. Despite this apparent economic advantage, the intermittent nature of biddable resources means that application execution times may be prolonged, or applications may not finish at all. This paper proposes a resource allocation strategy that addresses the problem of running compute-intensive jobs on a pool of intermittent virtual machines, while also aiming to run applications in a fast and economical way. To mitigate potential unavailability periods, a multifaceted fault-aware resource provisioning policy is proposed. Our solution employs price and runtime estimation mechanisms, as well as three fault-tolerance techniques, namely checkpointing, task duplication and migration. We evaluate our strategies using trace-driven simulations, which take as input real price variation traces, as well as an application trace from the Parallel Workload Archive. Our results demonstrate the effectiveness of executing applications on spot instances, respecting QoS constraints, despite occasional failures.
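The bidding-plus-checkpointing idea can be sketched as a toy simulation. The price trace, bid, and checkpoint interval below are illustrative stand-ins, and the policy is a minimal sketch rather than the paper's multifaceted provisioning strategy:

```python
import random

def run_job(prices, bid, work_needed, checkpoint_every=None):
    """Simulate a job on a spot instance: each hour the VM runs only if the
    spot price is at or below the bid. On an out-of-bid revocation, progress
    since the last checkpoint is lost. Returns (hours_elapsed, cost), or None
    if the trace ends before the job finishes. All parameters here are
    illustrative, not the paper's actual policy."""
    done = 0.0    # completed work (hours)
    saved = 0.0   # work preserved by checkpoints
    cost = 0.0
    for hour, price in enumerate(prices, start=1):
        if price <= bid:
            done += 1.0
            cost += price
            if checkpoint_every and done - saved >= checkpoint_every:
                saved = done  # persist progress
        else:
            done = saved      # revocation: roll back to last checkpoint
        if done >= work_needed:
            return hour, cost
    return None

random.seed(0)
trace = [round(random.uniform(0.05, 0.20), 3) for _ in range(500)]
print(run_job(trace, bid=0.15, work_needed=24, checkpoint_every=4))
print(run_job(trace, bid=0.15, work_needed=24, checkpoint_every=None))
```

Without checkpointing, each out-of-bid period discards all accumulated work, so long jobs rarely finish on volatile traces; periodic checkpoints bound the loss to one interval.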
Many applications in federated Grids have quality-of-service (QoS) constraints such as deadlines. Admission control mechanisms assure the QoS constraints of applications by limiting the number of user requests accepted by a resource provider. However, in order to maximize their profit, resource owners are interested in accepting as many requests as possible. In these circumstances, the question that arises is: what is the effective number of requests that a resource provider can accept so that the number of accepted external requests is maximized and, at the same time, QoS violations are minimized? In this paper, we answer this question in the context of a virtualized federated Grid environment, where each Grid serves requests from external users along with its local users, and requests of local users have preemptive priority over external requests. We apply an analytical queueing model to address this question. Additionally, we derive a preemption-aware admission control policy based on the proposed model. Simulation results under realistic working conditions indicate that the proposed policy improves the number of completed external requests (by up to 25%). In terms of QoS violations, the 95% confidence interval of the average difference with other policies is (14.79%, 18.56%).
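As a rough illustration of model-based admission control (not the paper's analytical model), the sketch below uses the classical Erlang-B loss formula and assumes that, because local requests preempt external ones, the external class absorbs essentially all of the combined loss. The server count, loads, and 5% QoS threshold are hypothetical:

```python
def erlang_b(servers: int, offered_load: float) -> float:
    """Erlang-B blocking probability via the numerically stable recurrence
    B(0,a) = 1, B(m,a) = a*B(m-1,a) / (m + a*B(m-1,a))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

def admit_external(n_vms, local_load, external_load, max_violation=0.05):
    """Toy admission test: accept the external stream only if the estimated
    loss seen by external requests stays below a QoS threshold. Local requests
    preempt external ones, so externals effectively get the capacity left over
    by local traffic (a rough approximation, not the paper's derivation)."""
    loss_combined = erlang_b(n_vms, local_load + external_load)
    loss_local = erlang_b(n_vms, local_load)
    # Approximate: all combined loss is borne by the preemptable external class
    lost_external = (loss_combined * (local_load + external_load)
                     - loss_local * local_load)
    violation = lost_external / external_load if external_load > 0 else 0.0
    return violation <= max_violation, violation

print(admit_external(n_vms=20, local_load=8.0, external_load=4.0))
print(admit_external(n_vms=20, local_load=8.0, external_load=12.0))
```

A light external stream passes the test, while a heavy one is rejected because its estimated violation rate exceeds the threshold.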
In order to improve the understanding of and work with data, visualizing information is without a doubt the best method. Data visualization as a term unites the established field of scientific visualization a...
Preventing and controlling epidemics in human or computer networks is a pressing problem. This paper presents an implementation of an algorithm that simulates epidemic spreading in networks using CUDA (Compute Unified Device Architecture) technology. The spreading of the epidemic over the network is modeled by a discrete SIR (Susceptible - Infected - Recovered) model. This implementation enables selection of the starting node and monitoring of the epidemic spread in each cycle. In comparison to a common CPU implementation, the CUDA implementation achieves about 10× faster execution time, which is of high significance for running tests on larger networks. The implementation was tested on real social networks consisting of more than 5 million nodes. Hence, we believe this implementation can have practical value in the analysis of epidemic spreading over large networks.
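The per-cycle SIR update is naturally data-parallel: every node's next state depends only on the previous cycle's states. A minimal NumPy sketch of the same synchronous update follows (the paper's implementation is a CUDA kernel; the random graph and rates here are illustrative):

```python
import numpy as np

def sir_step(adj, state, beta, gamma, rng):
    """One synchronous cycle of a discrete SIR epidemic on a network.
    state: 0 = susceptible, 1 = infected, 2 = recovered. Every node is
    updated from the previous cycle's state, mirroring the per-node
    parallelism a CUDA kernel exploits (illustrative sketch)."""
    infected = (state == 1)
    # Number of infected neighbours of each node:
    pressure = adj @ infected.astype(float)
    # A susceptible node escapes infection with prob (1-beta) per infected neighbour:
    p_infect = 1.0 - (1.0 - beta) ** pressure
    new_inf = (state == 0) & (rng.random(state.size) < p_infect)
    new_rec = infected & (rng.random(state.size) < gamma)
    nxt = state.copy()
    nxt[new_inf] = 1
    nxt[new_rec] = 2
    return nxt

rng = np.random.default_rng(1)
n = 200
adj = rng.random((n, n)) < 0.03
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(float)   # symmetric graph, no self-loops
state = np.zeros(n, dtype=int)
state[0] = 1                        # selectable starting node, as in the paper
for cycle in range(50):
    state = sir_step(adj, state, beta=0.2, gamma=0.1, rng=rng)
print(np.bincount(state, minlength=3))  # S, I, R compartment counts
```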
In this paper, a neural network PCA (NNPCA) method that integrates neural networks (NN) and principal component analysis (PCA) is used to detect faults in a wastewater treatment plant. The neural networks are used to build a non-linear, dynamic model of the process under normal operating conditions. PCA is used to generate monitoring charts based on the residuals, calculated as the difference between the process measurements and the output of the networks. This evaluates the current performance of the process and detects faults. The technique has been applied to a simulated benchmark of a biological wastewater treatment process, a highly non-linear process. The simulation results clearly show the advantages of NNPCA monitoring in comparison with classical PCA monitoring.
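The PCA-on-residuals monitoring step can be sketched as follows. The "residuals" below are fabricated stand-ins for (measurement - NN output), and the control limit is set empirically rather than by the classical chi-squared approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in residuals: in the paper these are (measurement - NN prediction);
# here we fabricate correlated normal-operation residuals for illustration.
mix = rng.normal(size=(4, 4))
normal = rng.normal(size=(500, 4)) @ mix * 0.1

# PCA on normal-operation residuals: retain the high-variance components;
# the squared prediction error (SPE, or Q statistic) measured in the
# discarded subspace is the monitored quantity.
mu = normal.mean(axis=0)
X = normal - mu
_, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2             # retained principal components
P = Vt[:k].T      # loadings (4 x k)

def spe(x):
    """Squared prediction error of one sample against the PCA model."""
    r = x - mu
    recon = (r @ P) @ P.T
    return float(np.sum((r - recon) ** 2))

# Control limit set empirically as the 99th percentile on normal data.
limit = np.percentile([spe(x) for x in normal], 99)

# Injected fault along the least-retained direction (illustrative):
faulty = normal[1] + 2 * Vt[-1]
print(spe(normal[1]) <= limit, spe(faulty) > limit)
```

A sample consistent with normal operation stays under the limit, while the injected fault pushes the SPE above it and is flagged.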
ISBN (print): 9781627480031
Most existing hashing methods adopt projection functions to map the original data into several dimensions of real values, and then each projected dimension is quantized into one bit (zero or one) by thresholding. Typically, the variances of different projected dimensions differ for existing projection functions such as principal component analysis (PCA). Using the same number of bits for different projected dimensions is unreasonable, because larger-variance dimensions carry more information. Although this viewpoint has been widely accepted by many researchers, it has not been verified by either theory or experiment, because no methods have been proposed to find a projection with equal variances for all dimensions. In this paper, we propose a novel method, called isotropic hashing (IsoHash), to learn projection functions which produce projected dimensions with isotropic (equal) variances. Experimental results on real data sets show that IsoHash can outperform its counterpart with different variances for different dimensions, which verifies the viewpoint that projections with isotropic variances are better than those with anisotropic variances.
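To make the notion of isotropic variances concrete, the sketch below constructs an orthogonal rotation that exactly equalizes the diagonal of a covariance matrix, using the constructive Schur-Horn argument applied pair by pair with Givens rotations. This illustrates the objective only; the paper derives different algorithms (Lift-and-Projection and gradient flow) to learn the projection:

```python
import numpy as np

def equalize_diagonal(C, tol=1e-10):
    """Find orthogonal R so that R @ C @ R.T has all diagonal entries equal
    to trace(C)/d. Repeatedly rotate the plane of an above-target and a
    below-target diagonal entry, choosing the angle (by bisection) so one
    of them lands exactly on the target."""
    d = C.shape[0]
    t = np.trace(C) / d
    R = np.eye(d)
    for _ in range(d - 1):
        diag = np.diag(C)
        if np.all(np.abs(diag - t) < tol):
            break
        i = int(np.argmax(diag))
        j = int(np.argmin(diag))
        a, b, dd = C[i, i], C[i, j], C[j, j]
        # New C[j,j] after rotating plane (i,j) by theta, minus the target:
        f = lambda th: (np.sin(th) ** 2 * a + np.sin(2 * th) * b
                        + np.cos(th) ** 2 * dd - t)
        lo, hi = 0.0, np.pi / 2      # f(lo) <= 0 <= f(hi)
        for _ in range(200):         # bisection for the rotation angle
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
        G = np.eye(d)
        c, s = np.cos(lo), np.sin(lo)
        G[i, i] = c; G[j, j] = c; G[i, j] = -s; G[j, i] = s
        C = G @ C @ G.T
        R = G @ R
    return R, C

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4)) * np.array([3.0, 2.0, 1.0, 0.5])  # anisotropic
C = np.cov(X.T)
R, C_iso = equalize_diagonal(C)
print(np.diag(C).round(3), "->", np.diag(C_iso).round(3))
```

After the rotation, every projected dimension has the same variance, so assigning one bit per dimension no longer favors high-variance directions.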
The simplest multiplierless decimation filter is the cascaded-integrator-comb (CIC) filter. However, the magnitude response of this filter has a high passband droop, which is not tolerable in many applications. The d...
ISBN (print): 9781467320658
Establishing performance metrics is a key part of a systematic design process. In particular, specifying metrics useful for quantifying performance in the ongoing efforts towards biomolecular circuit design is an important problem. Here we address this issue for the design of a fast biomolecular step response that is uniform across different cells and widely different environmental conditions, using a combination of simple mathematical models and single-cell time-lapse microscopy measurements. We evaluate two metrics, the difference of the step response from an ideal step and the relative difference between multiple realizations of the step response, each of which provides a single number to measure performance. We use a model of protein production-degradation to show that these performance metrics correlate with the response features of speed and noise. Finally, we work through an experimental methodology to estimate these metrics for step responses acquired from inducible protein expression circuits in E. coli. These metrics will be useful for evaluating biomolecular step responses, as well as for setting similar performance measures for other design goals.
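Under the production-degradation model, both metrics reduce to simple numerical computations. The sketch below defines illustrative versions of the two metrics (the exact normalizations in the paper may differ), and the cell-to-cell rate variability is fabricated:

```python
import numpy as np

def step_response(beta, gamma, t):
    """Production-degradation model dx/dt = beta - gamma*x, x(0) = 0:
    x(t) = (beta/gamma) * (1 - exp(-gamma*t))."""
    return (beta / gamma) * (1.0 - np.exp(-gamma * t))

def metric_ideal(x, t):
    """Normalized L2 difference between the response and an ideal step that
    sits at the steady-state value from t = 0 (smaller = faster response)."""
    xf = x[-1]
    dt = t[1] - t[0]
    return float(np.sqrt(np.sum((x - xf) ** 2) * dt) / (xf * np.sqrt(t[-1])))

def metric_uniformity(responses):
    """Mean relative difference between realizations and their average
    (smaller = more uniform across cells/conditions)."""
    mean = responses.mean(axis=0)
    return float(np.mean(np.abs(responses - mean) / (np.abs(mean) + 1e-12)))

t = np.linspace(0, 10, 501)
slow = step_response(beta=1.0, gamma=0.5, t=t)
fast = step_response(beta=4.0, gamma=2.0, t=t)   # same steady state, 4x faster
print(metric_ideal(slow, t), ">", metric_ideal(fast, t))

rng = np.random.default_rng(0)
cells = np.array([step_response(1.0 * g, 0.5 * g, t)  # hypothetical cell-to-cell
                  for g in rng.normal(1.0, 0.1, size=20)])  # rate variability
print(metric_uniformity(cells))
```

The faster circuit scores lower on the ideal-step metric, matching the claim that the metrics correlate with response speed; the uniformity metric shrinks as cell-to-cell variability shrinks.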
Gestural interfaces have lately become extremely popular due to the arrival on the market of low-cost acquisition devices such as the iPhone, Wii, and Kinect. Such devices allow practitioners to design, experiment, a...
Analysis of sound field distribution is a data-intensive and memory-intensive application. To speed up calculation, an alternative solution is to implement the analysis algorithms on an FPGA. This paper presents the related issues for an FPGA-based sound field analysis system from the point of view of hardware implementation. Compared with other algorithms, the OCTA-FDTD algorithm consumes 49 FPGA slices, and the system updates 536.2 million elements per second. Regarding system architecture, a system based on the parallel architecture benefits from fast computation, since the sound pressures of all elements are obtained and updated in a single clock cycle; however, it consumes more hardware resources, so only a small sound space can be simulated on a single FPGA chip. In contrast, a system based on the time-sharing architecture extends the simulated sound area at the expense of computation speed, since the sound pressures are calculated element by element.
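The element-wise update at the heart of such a system can be illustrated with a standard 2-D acoustic FDTD scheme (a stand-in for OCTA-FDTD, whose stencil differs). Every grid element is updated from its neighbours each cycle, which is exactly the per-element work a parallel FPGA architecture performs in one clock cycle:

```python
import numpy as np

# Standard 2-D acoustic FDTD on a staggered grid: pressure p and particle
# velocities vx, vy, with a rigid outer boundary. Grid size, medium
# parameters, and the impulse source are illustrative.
n = 64
c, rho, dx = 343.0, 1.2, 0.01
dt = dx / (c * np.sqrt(2.0)) * 0.99        # CFL-stable time step

p = np.zeros((n, n))
vx = np.zeros((n, n + 1))
vy = np.zeros((n + 1, n))
p[n // 2, n // 2] = 1.0                    # initial pressure impulse

for _ in range(100):
    # Velocity update from the pressure gradient:
    vx[:, 1:-1] -= dt / (rho * dx) * (p[:, 1:] - p[:, :-1])
    vy[1:-1, :] -= dt / (rho * dx) * (p[1:, :] - p[:-1, :])
    # Pressure update from the velocity divergence:
    p -= rho * c ** 2 * dt / dx * (vx[:, 1:] - vx[:, :-1]
                                   + vy[1:, :] - vy[:-1, :])

print(float(np.abs(p).max()))
```

In software these array updates run cycle by cycle; the parallel FPGA architecture in the paper performs the equivalent of one whole-array update per clock, while the time-sharing architecture visits elements one at a time to cover larger grids.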