Web caching has been the solution of choice to web latency problems. The efficiency of a Web cache is strongly affected by the replacement algorithm used to decide which objects to evict once the cache is saturated. Numerous web cache replacement algorithms have appeared in the literature. Despite their diversity, a large number of them belong to a class known as stack-based algorithms. These algorithms are evaluated mainly via trace-driven simulation. The very few analytical models reported in the literature were targeted at one particular replacement algorithm, namely least recently used (LRU) or least frequently used (LFU), and provide a formula for evaluating the Hit Ratio only. The main contribution of this paper is an analytical model for the performance evaluation of any stack-based web cache replacement algorithm. The model provides formulae for predicting the object Hit Ratio, the byte Hit Ratio, and the Delay Saving Ratio. The model is validated against extensive discrete-event trace-driven simulations of three popular stack-based algorithms, LRU, LFU, and SIZE, using NLANR and DEC traces. Results show that the analytical model achieves very good accuracy: the mean error deviation between analytical and simulation results is at most 6% for LRU, 6% for LFU, and 10% for SIZE. Copyright (C) 2009 John Wiley & Sons, Ltd.
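The three metrics the model predicts can be made concrete with a small trace-driven sketch. The code below is illustrative only (the function name and toy trace format are my own, not the paper's analytical model): it replays a request trace through an LRU cache and reports the object Hit Ratio, byte Hit Ratio, and Delay Saving Ratio.

```python
from collections import OrderedDict

def simulate_lru(trace, capacity):
    """Replay (object_id, size, fetch_delay) requests through an LRU cache
    and report the three metrics discussed in the abstract."""
    cache = OrderedDict()             # object_id -> (size, delay), LRU order
    used = 0
    hits = hit_bytes = hit_delay = 0
    total = total_bytes = total_delay = 0
    for obj, size, delay in trace:
        total += 1
        total_bytes += size
        total_delay += delay
        if obj in cache:
            hits += 1
            hit_bytes += size
            hit_delay += delay
            cache.move_to_end(obj)    # refresh recency on a hit
        elif size <= capacity:
            while used + size > capacity:
                _, (s, _) = cache.popitem(last=False)  # evict LRU victim
                used -= s
            cache[obj] = (size, delay)
            used += size
    return (hits / total,             # object Hit Ratio
            hit_bytes / total_bytes,  # byte Hit Ratio
            hit_delay / total_delay)  # Delay Saving Ratio
```

The same replay loop, with a different eviction rule, would measure any stack-based policy; only the victim-selection line changes.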
ISBN (Print): 9781424429271
Processor speed is much faster than memory speed; cache memory is used to bridge this gap. This paper proposes a preeminent pair of replacement algorithms for the Level 1 cache (L1) and Level 2 cache (L2), respectively, for the Matrix Multiplication (MM) application. The access patterns of L1 and L2 differ: when the CPU does not find the desired data in L1, it goes to L2. Thus a replacement algorithm that works efficiently for L1 may not be efficient for L2. Using the reference string of MM, the paper analyzes the behavior of various existing replacement algorithms at L1 and L2, respectively. The replacement algorithms taken into consideration are: Least Recently Used (LRU), Least Frequently Used (LFU), and First In First Out (FIFO). The paper also proposes new replacement algorithms for L1 (NEW ALGO1) and L2 (NEW ALGO2), respectively, for the same application. Analysis shows that applying these algorithms at L1 and L2 considerably reduces miss rates.
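The kind of miss-rate comparison the paper performs can be reproduced in miniature. The sketch below is illustrative (the MM reference string is not given in the abstract, so a classic textbook string is used instead, and LFU is omitted for brevity): it counts misses for FIFO and LRU on the same reference string with a fixed number of frames.

```python
from collections import OrderedDict, deque

def misses_fifo(refs, frames):
    """Count misses under First In First Out replacement."""
    cache, order, miss = set(), deque(), 0
    for r in refs:
        if r not in cache:
            miss += 1
            if len(cache) == frames:
                cache.discard(order.popleft())  # evict oldest arrival
            cache.add(r)
            order.append(r)
    return miss

def misses_lru(refs, frames):
    """Count misses under Least Recently Used replacement."""
    cache, miss = OrderedDict(), 0
    for r in refs:
        if r in cache:
            cache.move_to_end(r)                # refresh recency on a hit
        else:
            miss += 1
            if len(cache) == frames:
                cache.popitem(last=False)       # evict least recently used
            cache[r] = None
    return miss
```

On the string `[1,2,3,4,1,2,5,1,2,3,4,5]` with 3 frames, the two policies already diverge, which is the point the paper makes about L1 and L2 needing different algorithms.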
ISBN (Print): 9781509036691
Cache replacement policies play a significant and contributory role in determining the effectiveness of cache memory. They have also become one of the major key features of efficient memory management from the technological aspect. Hence, given today's critical computing systems, it is essential to attain fast processing of executable instructions under any adverse situation. In the current scenario, state-of-the-art processors, such as Intel multi-core processors for application-specific integrated circuits, usually employ cache replacement policies such as Least Recently Used (LRU), Pseudo-LRU (PLRU), Round Robin, etc. However, few existing research works to date address the explicit performance issues associated with conventional cache replacement algorithms. Therefore, the proposed study carries out a performance evaluation to explore the design space of conventional cache replacement policies under the SPEC CPU2000 benchmark suite. It initiates and configures the experimental SimpleScalar toolbox prototype on a wide range of cache sizes. The experimental outcomes obtained from the benchmark suite show that PLRU outperforms conventional LRU concerning computational complexity and a variety of cache block organizations.
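The LRU-versus-PLRU trade-off in computational complexity is easiest to see in code. Below is a minimal tree-based Pseudo-LRU sketch for one 4-way set, where three tree bits replace the full recency ordering true LRU needs; this is a generic illustration of the technique, not the policy of any specific processor, and all names are my own.

```python
class PLRUSet:
    """Tree-based Pseudo-LRU for a single 4-way set (illustrative sketch)."""

    def __init__(self):
        self.lines = [None] * 4
        # bits[0]: root (0 = victim in ways 0/1, 1 = ways 2/3)
        # bits[1]: selects within ways 0/1; bits[2]: within ways 2/3
        self.bits = [0, 0, 0]

    def _touch(self, way):
        # Point every bit on the path away from the just-accessed way.
        self.bits[0] = 1 if way < 2 else 0
        if way < 2:
            self.bits[1] = 1 if way == 0 else 0
        else:
            self.bits[2] = 1 if way == 2 else 0

    def _victim(self):
        # Follow the bits (0 = left, 1 = right) to the pseudo-LRU way.
        if self.bits[0] == 0:
            return self.bits[1]
        return 2 + self.bits[2]

    def access(self, tag):
        """Return True on a hit, False on a miss (with fill/eviction)."""
        if tag in self.lines:
            way = self.lines.index(tag)
            hit = True
        else:
            way = self.lines.index(None) if None in self.lines else self._victim()
            self.lines[way] = tag
            hit = False
        self._touch(way)
        return hit
```

True LRU for a 4-way set needs the full ordering of 4 lines (log2(4!) ≈ 5 bits plus update logic); the tree scheme gets close with 3 single-bit updates per access, which is why hardware favors it.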
ISBN (Print): 9789728865771
Web-Proxy servers are used to reduce bandwidth consumption and users' perceived latency while navigating the WWW, by caching the objects most frequently accessed by users. Since they were introduced, most evaluation studies related to Web-Proxy caches have focused on the performance of replacement algorithms using simulation techniques, but few have ensured the representativeness of the study by considering real traces and cache sizes. This paper describes a methodology that permits fair performance comparison studies of replacement algorithms; that is, the system reaches the steady state and the results are provided with narrow confidence intervals. An experimental evaluation study applying this methodology is also presented. The study uses a trace-driven simulation framework and real traces containing more than one hundred million user requests, and compares three replacement algorithms implemented in actual Web-Proxy caches.
The wide performance gap between processors and disks ensures that effective page replacement remains an important consideration in modern systems. This paper presents early eviction LRU (EELRU), an adaptive replacement algorithm. EELRU uses aggregate recency information to recognize the reference behavior of a workload and to adjust its speed of adaptation. An on-line cost/benefit analysis guides replacement decisions. This analysis is based on the LRU stack model (LRUSM) of program behavior. Essentially, EELRU is an on-line approximation of an optimal algorithm for the LRUSM. We prove that EELRU offers strong theoretical guarantees of performance relative to the LRU replacement algorithm: EELRU can never be more than a factor of 3 worse than LRU, while in a common best case it can be better than LRU by a large factor (proportional to the number of pages in memory). The goal of EELRU is to provide a simple replacement algorithm that adapts to reference patterns at all scales. Thus, EELRU should perform well for a wider range of programs and memory sizes than other algorithms. Practical experiments validate this claim. For a large number of programs and wide ranges of memory sizes, we show that EELRU outperforms LRU, typically reducing misses by 10-30%, and occasionally by much more, sometimes by a factor of 2-10. It rarely performs worse than LRU, and then only by a small amount. (C) 2002 Elsevier Science B.V. All rights reserved.
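The "aggregate recency information" EELRU relies on is the distribution of LRU stack distances. A minimal sketch of computing them from a trace (illustrative only, not the EELRU algorithm itself; the O(n^2) scan is fine for a demo):

```python
def stack_distances(trace):
    """For each reference, its LRU stack distance: the position of the block
    in the recency-ordered stack (0 = most recent), or None on first touch.
    A histogram of these values is the recency profile an EELRU-style
    policy inspects."""
    stack, dists = [], []
    for block in trace:
        if block in stack:
            d = stack.index(block)
            stack.remove(block)
        else:
            d = None                # first reference: infinite distance
        stack.insert(0, block)      # block becomes most recent
        dists.append(d)
    return dists
```

Under the LRU stack model, a cache of size k hits exactly the references with distance < k, so the histogram directly predicts miss rates for every cache size at once.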
Although the LRU replacement algorithm has been widely used in buffer cache management, it is well-known for its inability to cope with access patterns with weak locality. Previously proposed algorithms to improve LRU greatly increase complexity and/or cannot provide consistently improved performance. Some of the algorithms only address LRU problems on certain specific and predefined cases. Motivated by the limitations of existing algorithms, we propose a general and efficient replacement algorithm, called Low Inter-reference Recency Set (LIRS). LIRS effectively addresses the limitations of LRU by using recency to evaluate Inter-Reference Recency (IRR) of accessed blocks for making a replacement decision. This is in contrast to what LRU does: directly using recency to predict the next reference time. Meanwhile, LIRS mostly retains the simple assumption adopted by LRU for predicting future block access behaviors. Conducting simulations with a variety of traces of different access patterns and with a wide range of cache sizes, we show that LIRS significantly outperforms LRU and outperforms other existing replacement algorithms in most cases. Furthermore, we show that the additional cost for implementing LIRS is trivial in comparison with that of LRU. We also show that the LIRS algorithm can be extended into a family of replacement algorithms, in which LRU is a special member.
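The IRR signal LIRS uses can be computed directly from a trace. The sketch below is my own illustrative code, not the LIRS implementation: it reports, for each access, the number of distinct other blocks referenced since the previous access to the same block.

```python
def irrs(trace):
    """Inter-Reference Recency of each access: the count of distinct other
    blocks touched between this access and the previous access to the same
    block; None for a first access."""
    last, out = {}, []
    for i, block in enumerate(trace):
        if block in last:
            out.append(len(set(trace[last[block] + 1:i])))
        else:
            out.append(None)
        last[block] = i
    return out
```

LIRS's insight is visible here: blocks with consistently low IRR (the "LIR set") are worth keeping even if their plain recency is momentarily poor, which is exactly where LRU goes wrong on weak-locality patterns.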
Instruction cache replacement policies and organizations are analyzed both theoretically and experimentally. Theoretical analyses are based on a new model for cache references, the loop model. First the loop model is used to study replacement policies and cache organizations. It is concluded theoretically that random replacement is better than LRU and FIFO, and that under certain circumstances, a direct-mapped or set-associative cache may perform better than a fully associative cache organization. Experimental results using instruction trace data are then given and analyzed. The experimental results indicate that the loop model provides a good explanation for observed cache performance.
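The conclusion that random replacement beats LRU under the loop model is easy to demonstrate: on a loop one block larger than the cache, LRU evicts exactly the block needed next and misses on every reference, while random replacement does not. A small sketch (illustrative simulators, not the paper's analysis):

```python
import random
from collections import OrderedDict

def lru_miss_count(refs, frames):
    cache, miss = OrderedDict(), 0
    for r in refs:
        if r in cache:
            cache.move_to_end(r)
        else:
            miss += 1
            if len(cache) == frames:
                cache.popitem(last=False)      # evict least recently used
            cache[r] = None
    return miss

def random_miss_count(refs, frames, seed=0):
    rng = random.Random(seed)                  # fixed seed for repeatability
    cache, miss = set(), 0
    for r in refs:
        if r not in cache:
            miss += 1
            if len(cache) == frames:
                cache.discard(rng.choice(sorted(cache)))  # evict at random
            cache.add(r)
    return miss

loop = list(range(5)) * 200   # a loop one block larger than a 4-frame cache
```

LRU thrashes (100% misses) because the loop's oldest block is always the next one needed; random replacement keeps each block with positive probability, so it scores hits.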
In this paper, we propose an adaptive cache replacement scheme based on the estimating type of neural networks (NNs). The statistical prediction property of such NNs is used in our work to develop a neural-network-based replacement policy which can effectively identify and eliminate inactive cache lines. This provides larger free space for a cache to retain actively referenced lines. The proposed strategy may, therefore, yield better cache performance compared to conventional schemes. Simulation results for a wide spectrum of cache configurations indicate that the estimating-neural-network-based replacement scheme provides a significant performance advantage over existing policies.
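The core idea, scoring each line's likelihood of being inactive from simple features, can be sketched with a single linear unit standing in for the estimating NN. The feature choice and weights below are hand-picked for illustration, not trained as in the paper:

```python
def evict_candidate(lines, now, w_recency=1.0, w_freq=-0.5):
    """Pick the line with the highest 'inactivity' score.

    Each line is (tag, last_access_time, reference_count). A real
    estimating NN would learn the weights; here long idleness raises the
    score and frequent use lowers it, purely as a schematic stand-in."""
    def inactivity(line):
        tag, last_access, count = line
        return w_recency * (now - last_access) + w_freq * count
    return max(lines, key=inactivity)[0]
```

Swapping the linear scorer for a trained predictor gives the paper's scheme its adaptivity; the surrounding eviction machinery stays the same.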
In the emerging big data scenario, distributed file systems play a vital role in the storage and access of the large data generated in web-based information systems. Improving the performance of a distributed file system is a very important research issue in the current context. Client-side caching and prefetching techniques enhance the performance of the distributed file system, and an efficient replacement policy is required to improve the performance of the caching process. In this paper, we propose a novel client-side caching algorithm, namely the hierarchical collaborative global caching algorithm, and a cache replacement algorithm, namely the rank-based cache replacement algorithm. We use the support value computed for file blocks for prefetching, caching, and replacement purposes. We show through simulation experiments that the proposed algorithms perform better than the existing algorithms discussed in the literature.
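The ranking step of such a replacement policy can be sketched as follows. The abstract does not specify how support values are computed, so support is treated here as a number already attached to each block; only the rank-based eviction itself is illustrated:

```python
def rank_based_evict(cache):
    """Evict the cached block with the lowest support value.

    `cache` maps block_id -> support value. In the paper the support is
    derived from file-block access patterns; here it is just stored."""
    victim = min(cache, key=cache.get)
    del cache[victim]
    return victim
```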
As the web expands its overwhelming presence in our daily lives, the pressure to improve the performance of web servers increases. An essential optimization technique that enables Internet-scale web servers to service clients more efficiently and with lower resource demands consists in caching requested web objects on intermediate cache servers. At the core of the cache server operation is the replacement algorithm, which is in charge of selecting, according to a cache replacement policy, the cached pages that should be removed in order to make space for new pages. Traditional replacement policies used in practice take advantage of temporal reference locality by removing the least recently/frequently requested pages from the cache. In this paper we propose a new solution that adds a spatial dimension to the cache replacement process. Our solution is motivated by the observation that users typically browse the Web by successively following the links on the web pages they visit. Our system, called SACS, measures the distance between objects in terms of the number of links necessary to navigate from one object to another. Then, when replacement takes place, objects that are distant from the most recently accessed pages are candidates for removal; the closer an object is to a recently accessed page, the less likely it is to be evicted. We have implemented a cache server using SACS and evaluated our solution against other cache replacement strategies. In this paper we present the details of the design and implementation of SACS and discuss the evaluation results obtained.
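SACS's link-distance metric and eviction rule can be sketched with a breadth-first search over the hyperlink graph. This is illustrative code under my own assumptions (adjacency-list graph, function names mine), not the SACS implementation:

```python
from collections import deque

def link_distance(graph, src, dst):
    """Fewest links needed to navigate from src to dst (BFS); None if
    dst is unreachable. `graph` maps page -> list of linked pages."""
    if src == dst:
        return 0
    seen, q = {src}, deque([(src, 0)])
    while q:
        node, d = q.popleft()
        for nxt in graph.get(node, ()):
            if nxt == dst:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                q.append((nxt, d + 1))
    return None

def sacs_victim(cached, recent, graph):
    """Evict the cached object farthest (in links) from every recently
    accessed page; unreachable objects count as infinitely far."""
    def dist(obj):
        ds = [link_distance(graph, r, obj) for r in recent]
        ds = [d for d in ds if d is not None]
        return min(ds) if ds else float("inf")
    return max(cached, key=dist)
```

The spatial intuition is visible in the victim choice: a page many clicks away from anything the user touched recently is the least likely next request.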