ISBN (print): 9781509036691
Cache replacement policies play a significant role in determining the effectiveness of cache memory. They have also become a key feature of efficient memory management from a technological standpoint. Given today's critical computing systems, it is essential to process executable instructions quickly even under adverse conditions. In the current scenario, state-of-the-art processors, such as Intel multi-core processors and application-specific integrated circuits, usually employ cache replacement policies such as Least Recently Used (LRU), Pseudo-LRU (PLRU), Round Robin, etc. However, few existing studies to date address the explicit performance issues associated with these conventional cache replacement algorithms. The proposed study therefore carries out a performance evaluation to explore the design space of conventional cache replacement policies under the SPEC CPU2000 benchmark suite. It configures the SimpleScalar simulation toolset over a wide range of cache sizes. The experimental outcomes obtained from the benchmark suite show that PLRU outperforms conventional LRU in terms of computational complexity across a variety of cache block organizations.
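The abstract credits PLRU's advantage to its lower computational complexity but does not reproduce the mechanism. As a point of reference, here is a minimal sketch of the classic tree-based PLRU victim selection the term usually refers to; the class and method names are illustrative, not taken from the paper:

```python
class TreePLRU:
    """Tree-based Pseudo-LRU state for one cache set.

    Associativity must be a power of two; W ways need only W - 1 bits,
    which is the source of PLRU's simplicity relative to true LRU.
    """

    def __init__(self, ways):
        assert ways & (ways - 1) == 0, "associativity must be a power of two"
        self.ways = ways
        # Complete binary tree of decision bits; node i's children are
        # 2*i+1 and 2*i+2.  bit == 0 -> the pseudo-LRU side is the left
        # subtree, bit == 1 -> the right subtree.
        self.bits = [0] * (ways - 1)

    def touch(self, way):
        """On an access, flip the bits on the path to point away from `way`."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if way < mid:                # accessed the left half ...
                self.bits[node] = 1      # ... so the pseudo-LRU side is right
                node, hi = 2 * node + 1, mid
            else:
                self.bits[node] = 0
                node, lo = 2 * node + 2, mid

    def victim(self):
        """Follow the decision bits to a pseudo-least-recently-used way."""
        node, lo, hi = 0, 0, self.ways
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if self.bits[node] == 0:
                node, hi = 2 * node + 1, mid
            else:
                node, lo = 2 * node + 2, mid
        return lo

plru = TreePLRU(4)
for w in (0, 1, 2, 3):
    plru.touch(w)
print(plru.victim())   # -> 0: after one full round PLRU agrees with true LRU
```

True LRU must maintain a complete recency ordering of all W ways in every set, whereas the tree needs a single bit per internal node; that difference is what makes PLRU the cheaper policy to implement.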
In the emerging big data scenario, distributed file systems play a vital role in the storage and access of the large volumes of data generated by web-based information systems. Improving the performance of a distributed file system is therefore an important research issue, and client-side caching and prefetching techniques are key ways to do so. An efficient replacement policy is required to improve the performance of the caching process. In this paper, we propose a novel client-side caching algorithm, the hierarchical collaborative global caching algorithm, together with a rank-based cache replacement algorithm. A support value computed for each file block drives the prefetching, caching, and replacement decisions. Through simulation experiments, we show that the proposed algorithms perform better than the existing algorithms discussed in the literature.
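The abstract does not define how the support value is computed, so the following is only an illustration of the rank-based eviction step it describes: the cached block whose support is lowest is the replacement victim. The `support` mapping and the block identifiers are hypothetical stand-ins:

```python
def evict_lowest_support(cache, support):
    """Rank-based eviction sketch: return the cached block id whose
    (hypothetical) support value is smallest."""
    return min(cache, key=lambda blk: support[blk])

# Hypothetical block support values; 'b' ranks lowest, so it is evicted.
cache = {'a', 'b', 'c'}
support = {'a': 0.80, 'b': 0.10, 'c': 0.55}
print(evict_lowest_support(cache, support))   # -> 'b'
```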
This paper deals with the Corpus of early written Latvian and explains the methodology for normalising historical spellings found in texts from the 16th-18th cc. It describes the types of replacements which will make ...
To boost input-output performance, operating systems employ a kernel-managed caching space called the buffer cache or page cache. Given the limited size of a buffer cache, an effective cache manager is required to decide which blocks should be evicted from the cache. Previous cache managers use historical information to make replacement decisions. However, existing approaches are unable to maximize performance since they rely on limited historical information. Motivated by the limitations of existing solutions, this paper proposes a novel manager called the Pattern-assisted Adaptive Recency Caching (PARC) manager. PARC simultaneously uses the historical information of recency, frequency, and access patterns to estimate the locality strengths of blocks and, upon a cache miss, evicts the block with the least strength. Specifically, PARC exploits the reference regularities exhibited in past input-output behaviors to actively and rapidly adapt the recency and frequency information of blocks so as to precisely distinguish blocks with long- and short-term utility. Through comprehensive simulations on a variety of traces of different access patterns, we show that PARC is robust since, except for random workloads where the performance of each cache manager is similar, PARC always outperforms existing solutions.
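PARC's actual estimator is not given in the abstract; the sketch below only illustrates the general idea of scoring blocks by a combined locality strength and evicting the weakest on a miss. The weighting scheme and the `Block` fields are assumptions, not PARC's formula, and the pattern-assisted adaptation step is omitted:

```python
import time

class Block:
    def __init__(self, blk_id):
        self.blk_id = blk_id
        self.last_access = time.monotonic()   # recency information
        self.accesses = 1                     # frequency information

def strength(block, now, recency_weight=0.5):
    """Hypothetical locality-strength score mixing recency and frequency;
    PARC additionally adapts these inputs using detected access patterns."""
    recency = 1.0 / (1.0 + now - block.last_access)
    return recency_weight * recency + (1.0 - recency_weight) * block.accesses

def evict(cache):
    """On a miss with a full cache, evict the block with the least strength."""
    now = time.monotonic()
    victim = min(cache.values(), key=lambda b: strength(b, now))
    del cache[victim.blk_id]
    return victim.blk_id
```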
ISBN (print): 9781450366281
Due to rapid technological development, embedded systems now play an effective part in controlling and managing a variety of hardware and software systems, and the problems they are designed to solve have made them more complex than before. The majority of embedded systems are currently intended to interact closely with their environment and to process data in real time in order to fulfil the requirements they were built for. This demands high-speed data processing, so multicore processors are usually employed in embedded system design. To work reliably as processing units, such systems must complete each task within the expected Worst-Case Execution Time (WCET). A considerable amount of research has addressed cache memory organization for multicore processor units in real-time embedded systems. To this end, this paper presents a study of cache management techniques in real-time embedded systems.
A web proxy cache reduces response time by storing copies of pages between the client and server sides. If a requested page is cached in the proxy, there is no need to access the server. Because of the limited size and high cost of cache relative to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. Conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), and randomized policies may discard important pages just before they are used, and they are hard to optimize because evicting a page intelligently requires additional decision information. Hence, most researchers propose integrating intelligent classifiers with replacement algorithms to improve replacement performance. This research proposes using automated wrapper feature selection methods to choose the subset of features that is most relevant and most influences classifier prediction accuracy. The results show that the wrapper feature selection methods examined, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded with Naive Bayes (NB), and Particle Swarm Optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43%, and 0.22% over using NB with all features, using BFS, and using IWSS embedded with NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91%, and 0.04% over using the J48 classifier with all features, using IWSS embedded with NB, and using BFS, respectively. IWSS embedded with NB speeds up the NB and J48 classifiers much more than BFS and PSO do, although it reduces the computation time of NB by only 0.1383 and that of J48 by 2.998.
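The abstract compares wrapper methods without showing the selection loop itself. As a generic illustration of the wrapper idea, evaluating candidate feature subsets with the target classifier rather than with a filter statistic, here is a minimal greedy forward-selection sketch using scikit-learn's Gaussian Naive Bayes; it is not the BFS, IWSS, or PSO search used in the paper, and the data is assumed to be a NumPy feature matrix:

```python
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

def wrapper_forward_select(X, y, max_features):
    """Greedy forward wrapper selection: repeatedly add the feature that
    most improves the classifier's cross-validated accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    best_score = 0.0
    while remaining and len(selected) < max_features:
        def cv_score(f):
            cols = selected + [f]
            return cross_val_score(GaussianNB(), X[:, cols], y, cv=5).mean()
        candidate = max(remaining, key=cv_score)
        score = cv_score(candidate)
        if score <= best_score:        # stop when no feature helps any more
            break
        best_score = score
        selected.append(candidate)
        remaining.remove(candidate)
    return selected
```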
ISBN (print): 9781509013388
Embedded systems are designed for a variety of applications that require different types of cache design for good performance. Processors employ cache memories to reduce the average time for fetching instructions and data, so the onus is on the cache in an embedded system to be reconfigurable according to the user's applications. This work discusses the design of a Reconfigurable Embedded Data (RED) Cache with the ability to switch between different cache replacement algorithms. The three replacement algorithms used are random, least recently used (LRU), and most recently used (MRU).
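The abstract does not include the hardware design, but the reconfiguration idea has a straightforward software analogue: victim selection as a pluggable strategy that can be swapped at run time. A minimal sketch, with illustrative names:

```python
import random

# Victim-selection policies for one cache set; `order` lists the ways from
# least recently used (front) to most recently used (back).
POLICIES = {
    "random": lambda order: random.choice(order),
    "lru":    lambda order: order[0],     # evict the least recently used way
    "mru":    lambda order: order[-1],    # evict the most recently used way
}

class ReconfigurableSet:
    def __init__(self, ways, policy="lru"):
        self.order = list(range(ways))    # recency order, LRU way first
        self.policy = POLICIES[policy]

    def reconfigure(self, policy):
        """Switch the replacement algorithm without flushing the set."""
        self.policy = POLICIES[policy]

    def touch(self, way):
        self.order.remove(way)
        self.order.append(way)            # accessed way becomes MRU

    def victim(self):
        return self.policy(self.order)
```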
ISBN (print): 9781509051427
Bias Temperature Instability (BTI) and Hot Carrier Injection (HCI) are two of the main effects that increase a transistor's threshold voltage and thereby cause performance degradation. These two wearout mechanisms affect all transistors, but are especially acute in the SRAM cells of first-level (L1) caches, which are frequently accessed and are critical for microprocessor performance. This work studies cache lifetimes under the combined effect of BTI and HCI for different cache configurations, including variations in cache size, associativity, cache line size, and replacement algorithm. The effect of process variations is also considered. We analyze the reliability (failure probability) and performance (hit rate) of the L1 cache within a LEON3 microprocessor while the LEON3 runs a set of benchmarks, and we provide essential insights into performance-reliability tradeoffs for cache designers.
As the web expands its overwhelming presence in our daily lives, the pressure to improve the performance of web servers increases. An essential optimization technique that enables Internet-scale web servers to service clients more efficiently and with lower resource demands consists in caching requested web objects on intermediate cache servers. At the core of the cache server's operation is the replacement algorithm, which is in charge of selecting, according to a cache replacement policy, the cached pages that should be removed in order to make space for new pages. Traditional replacement policies used in practice take advantage of temporal locality of reference by removing the least recently or least frequently requested pages from the cache. In this paper we propose a new solution that adds a spatial dimension to the cache replacement process. Our solution is motivated by the observation that users typically browse the web by successively following the links on the pages they visit. Our system, called SACS, measures the distance between objects as the number of links necessary to navigate from one object to another. When replacement takes place, objects that are distant from the most recently accessed pages are candidates for removal; the closer an object is to a recently accessed page, the less likely it is to be evicted. We have implemented a cache server using SACS and evaluated our solution against other cache replacement strategies. In this paper we present the details of the design and implementation of SACS and discuss the evaluation results obtained.
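The abstract states the distance metric but not the implementation; the sketch below illustrates the stated idea under the assumption that the link structure is available as an adjacency mapping. Cached objects are scored by their breadth-first link distance from the recently accessed pages, and the farthest object is evicted (all names are illustrative):

```python
from collections import deque

def link_distances(links, recent):
    """BFS over the page link graph: the number of links needed to reach
    each page starting from any recently accessed page."""
    dist = {page: 0 for page in recent}
    queue = deque(recent)
    while queue:
        page = queue.popleft()
        for nxt in links.get(page, ()):
            if nxt not in dist:
                dist[nxt] = dist[page] + 1
                queue.append(nxt)
    return dist

def pick_victim(cache, links, recent):
    """Evict the cached object farthest (in links) from recent pages;
    objects not reachable at all are treated as infinitely distant."""
    dist = link_distances(links, recent)
    return max(cache, key=lambda obj: dist.get(obj, float("inf")))
```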
Consider a two-level storage system operating with the least recently used (LRU) or the first-in, first-out (FIFO) replacement strategy. Accesses to the main storage are described by the independent reference model (IRM). Using the FKG inequality, we prove that the miss ratio for LRU is smaller than or equal to the miss ratio for FIFO.
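Stated formally (the notation here is added for clarity; the abstract fixes no symbols): under the IRM each request is for page $i$ with probability $p_i$, independently of all earlier requests, and the result proved via the FKG inequality is

```latex
\[
  M_{\mathrm{LRU}}(m) \;\le\; M_{\mathrm{FIFO}}(m)
  \qquad \text{for every cache size } m,
\]
```

where $M_{\mathrm{LRU}}(m)$ and $M_{\mathrm{FIFO}}(m)$ denote the steady-state miss ratios of the two policies with a cache holding $m$ pages.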