This paper addresses algorithms for dynamically varying (scaling) CPU speed and voltage in order to save energy. Such scaling is useful and effective when it is immaterial when a task completes, as long as it meets some deadline. We show how to modify any scaling algorithm to keep performance the same but minimize expected energy consumption. We refer to our approach as PACE (Processor Acceleration to Conserve Energy) since the resulting schedule increases speed as the task progresses. Since PACE depends on the probability distribution of the task's work requirement, we present methods for estimating this distribution and evaluate these methods on a variety of real workloads. We also show how to approximate the optimal schedule with one that changes speed a limited number of times. Using PACE causes very little additional overhead, and yields substantial reductions in CPU energy consumption. Simulations using real workloads show it reduces the CPU energy consumption of previously published algorithms by up to 49.5%, with an average of 20.6%, without any effect on performance.
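As a concrete illustration of the kind of schedule PACE produces, consider the common convention that CPU power grows as the cube of speed, so energy per cycle grows as its square. Under that assumption, minimizing expected energy subject to a deadline yields a speed proportional to (1 - F(w))^(-1/3), where F is the cumulative distribution of the task's work requirement w. The Python sketch below builds such a schedule from an empirical work distribution; the function name, discretization, and cubic-power assumption are illustrative choices, not details from the paper.

    import numpy as np

    def pace_style_schedule(work_samples, deadline, grid=200):
        """Sketch: speed schedule proportional to (1 - F(w))**(-1/3),
        scaled so even a worst-case task finishes by the deadline.
        Assumes power ~ speed**3, i.e. energy per cycle ~ speed**2."""
        samples = np.asarray(work_samples, dtype=float)
        w_max = samples.max()
        w = np.linspace(0.0, w_max, grid, endpoint=False)
        # Empirical survival function of the task's work requirement.
        surv = np.array([(samples > x).mean() for x in w])
        surv = np.clip(surv, 1.0 / len(samples), 1.0)
        shape = surv ** (-1.0 / 3.0)          # increasing in w
        dw = w_max / grid
        # Executing dw cycles at speed k*shape takes dw / (k*shape);
        # choose k so the worst-case total time equals the deadline.
        k = (dw / shape).sum() / deadline
        return w, k * shape

Because the survival probability falls as the task progresses, the schedule starts slow and accelerates: the late cycles are only ever executed by the rare long tasks, so running them fast costs little expected energy.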
Several recent studies have reported the phenomenon of "software aging", in which the state of a software system degrades with time. This may eventually lead to performance degradation of the software, to crash/hang failure, or to both. "Software rejuvenation" is a proactive technique aimed at preventing unexpected or unplanned outages due to aging. The basic idea is to stop the running software, clean its internal state, and restart it. In this paper, we discuss software rejuvenation as applied to cluster systems; this is both an innovative and an efficient way to improve cluster system availability and productivity. Using Stochastic Reward Nets (SRNs), we model and analyze cluster systems that employ software rejuvenation. For our proposed time-based rejuvenation policy, we determine the optimal rejuvenation interval based on system availability and cost. We also introduce a new rejuvenation policy based on prediction and show that it can dramatically increase system availability and reduce downtime cost. These models are very general and can capture a multitude of cluster system characteristics, failure behaviors, and performability measures, which we are just beginning to explore. We then briefly describe an implementation of a software rejuvenation system that performs periodic and predictive rejuvenation, and show some empirical data from systems that exhibit aging.
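The time-based policy picks a rejuvenation interval that trades planned restarts against aging failures. A small sketch of that trade-off under assumed numbers (a Weibull time-to-failure with increasing hazard is a standard way to model aging, not a detail from the paper; the costs and the Monte Carlo search are likewise illustrative):

    import numpy as np

    def downtime_fraction(T, shape=2.0, scale=100.0,
                          d_fail=4.0, d_rejuv=0.25, n=200_000):
        """Sketch: Monte Carlo long-run downtime fraction when the system
        is rejuvenated at age T unless an aging failure strikes first.
        Weibull failure times with shape > 1 model an increasing hazard."""
        rng = np.random.default_rng(1)
        fail_at = scale * rng.weibull(shape, n)           # time to aging failure
        uptime = np.minimum(fail_at, T)
        repair = np.where(fail_at <= T, d_fail, d_rejuv)  # outage vs. planned restart
        return repair.sum() / (uptime.sum() + repair.sum())

    # Crude search for the interval that maximizes availability.
    candidates = np.linspace(5.0, 300.0, 60)
    best_T = min(candidates, key=downtime_fraction)

Rejuvenating too often wastes uptime on restarts; too rarely, expensive unplanned outages dominate, which is why an interior optimum exists.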
In this paper we study the dynamic multi-path routing problem. We focus on an operating regime where traffic flows arrive at and depart from the network in a bursty fashion, and where the delays involved in link state advertisement may lead to "synchronization" effects that adversely impact the performance of dynamic single-path routing schemes. We start by analyzing a simple network of parallel links, where the goal is to minimize the average increase in network congestion on the time scale of link state advertisements. We consider an asymptotic regime leading to an optimization problem that permits closed-form analysis of the number of links over which dynamic multi-path routing should be conducted. Based on our analytical result we examine three types of dynamic routing schemes and identify a policy that exhibits robust performance: routing traffic to the set of links whose loads are within a constant factor of the least-loaded link. We then propose a similar policy for mesh networks and show by simulation some of its desirable properties. The main results suggest that our proposal would provide significant performance improvement for high-speed networks carrying bursty traffic flows.
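A sketch of that selection rule for the parallel-link setting (the factor value and the uniform tie-breaking below are illustrative parameters, not values from the paper):

    import random

    def candidate_links(loads, factor=1.5):
        """Links whose advertised load is within `factor` of the least
        loaded; spreading new flows over this set avoids the herd effect
        of always picking the single least-loaded link from stale state."""
        least = min(loads)
        return [i for i, load in enumerate(loads) if load <= factor * least]

    def route_flow(loads, factor=1.5):
        return random.choice(candidate_links(loads, factor))

    # e.g. route_flow([0.9, 0.4, 0.5, 2.1]) picks link 1 or 2 at random.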
Immersive environments are those that surround users in an artificial world. These environments consist of a composition of various types of immersidata: unique data types that are combined to render a virtual experience. Acquisition, for storage and future querying, of information describing sessions in these environments is challenging because of the real-time demands and sizeable amounts of data to be managed. In this paper, we summarize a comparison of techniques for achieving the efficient acquisition of one type of immersidata, the haptic data type, which describes the movement, rotation, and force associated with user-directed objects in an immersive environment. In addition to describing a general process for real-time sampling and recording of this type of data, we propose three distinct sampling strategies: fixed, grouped, and adaptive. We conducted several experiments with a real haptic device and found that there are tradeoffs between the accuracy, efficiency, and complexity of implementation for each of the proposed techniques. While it is possible to use any of these approaches for real-time haptic data acquisition, we found that an adaptive sampling strategy provided the most efficiency without significant loss in accuracy. As immersive environments become more complex and contain more haptic sensors, techniques such as adaptive sampling can be useful for improving scalability of real-time data acquisition.
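The abstract does not spell the three strategies out. One plausible reading of adaptive sampling, sketched below under assumed names and thresholds, is to record a haptic sample only when the device state has drifted from the last stored sample by more than a tolerance, so quiescent periods generate almost no storage traffic:

    import math

    def adaptive_record(stream, tol=0.01):
        """Sketch: emit a haptic sample only when it has drifted more
        than `tol` (Euclidean distance over the concatenated position/
        rotation/force vector) from the last recorded sample.
        `stream` yields (timestamp, vector) pairs at a fixed device rate."""
        last = None
        for t, vec in stream:
            if last is None or math.dist(vec, last) > tol:
                yield t, vec
                last = vec

The tolerance is the accuracy/efficiency knob: a larger `tol` drops more samples but increases reconstruction error when the session is replayed.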
WWW workload generators are used to evaluate web server performance, and thus have a large impact on what performance optimizations are applied to servers. However, current benchmarks ignore a crucial component: how these servers perform in the environment in which they are intended to be used, namely the wide-area Internet. This paper shows how WAN conditions can affect WWW server performance. We examine these effects using an experimental testbed which emulates WAN characteristics in a live setting, by introducing factors such as delay and packet loss in a controlled and reproducible fashion. We study how these factors interact with the host TCP implementation and what influence they have on web server performance. We demonstrate that when more realistic wide-area conditions are introduced, servers exhibit very different performance properties and scaling behaviors, which are not exposed by existing benchmarks running on LANs. We show that observed throughputs can give misleading information about server performance, and thus find that maximum throughput, or capacity, is a more useful metric. We find that packet losses can reduce server capacity by as much as 50 percent and increase response time as seen by the client. We show that using TCP SACK can reduce client response time, without reducing server capacity.
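One way to see why LAN-only benchmarks mislead: per-connection TCP throughput is roughly bounded by the Mathis et al. approximation, c * MSS / (RTT * sqrt(p)), so even modest WAN loss rates cap what each client can draw and shift where the server saturates. A quick illustration (the numbers are arbitrary, and this coarse model is not the paper's methodology):

    from math import sqrt

    def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.08, loss=0.01, c=1.22):
        """Mathis et al. steady-state bound: B ~ c * MSS / (RTT * sqrt(p))."""
        return c * mss_bytes * 8 / (rtt_s * sqrt(loss))

    for p in (0.0001, 0.001, 0.01, 0.05):
        print(f"loss={p:.2%}  ->  {tcp_throughput_bps(loss=p) / 1e6:6.2f} Mbit/s")

On a lossless LAN with sub-millisecond RTT this bound is effectively infinite, which is exactly why a server's LAN throughput says little about its wide-area capacity.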
ISBN (print): 9781581133349
We propose a user model that explains the shape of the distribution of file sizes in local file systems and in the World Wide Web. We examine evidence from 562 file systems, 38 web clients and 6 web servers, and find that the model is a good description of these systems. These results cast doubt on the widespread view that the distribution of file sizes is long-tailed and that long-tailed distributions are the cause of self-similarity in the Internet.
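The abstract does not describe the model's mechanics, so the sketch below illustrates only the distributional question at stake, with arbitrary parameters throughout: it contrasts a lognormal file-size sample, whose tail decay keeps steepening, with a Pareto sample, which is genuinely long-tailed, by fitting the slope of the empirical complementary CDF on log-log axes.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    lognormal_sizes = rng.lognormal(mean=9.0, sigma=2.0, size=n)  # tail decays
    pareto_sizes = (rng.pareto(1.2, n) + 1.0) * 1024.0            # long-tailed

    def tail_slope(xs, top=0.01):
        """Slope of log P(X > x) vs. log x over the top `top` fraction:
        roughly constant (about -a) for a Pareto tail, but steepening
        without bound for a lognormal one."""
        xs = np.sort(xs)
        k = int(len(xs) * top)
        x = np.log(xs[-k:])
        y = np.log(np.arange(k, 0, -1) / len(xs))  # empirical CCDF
        return np.polyfit(x, y, 1)[0]

    print("lognormal tail slope:", tail_slope(lognormal_sizes))
    print("pareto tail slope:   ", tail_slope(pareto_sizes))

The practical point of the distinction: only a truly long-tailed (power-law) distribution of transfer sizes produces the self-similar traffic behavior the abstract calls into question.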