In this study, we propose a caching method called RUE for dynamic large-scale data streams. We define a data model to facilitate hot data identification and management. At the heart of the RUE model is the hot degree, which takes into account two factors, data resource utilization efficiency and reuse distance, to quantitatively reflect data popularity in a dynamic data stream. Based on a data block's hot degree, RUE classifies data into four types, each of which is assigned an associated cache residence time. Guided by the RUE model, we develop the HM algorithm to identify and manage hot data in a dynamic data stream. The HM algorithm is implemented with four stacks, namely, a new stack, a short stack, a long stack, and a temp stack. Moreover, an eviction algorithm and a migration algorithm are integrated into HM to facilitate block replacement and migration. To evaluate the performance of the HM algorithm, we quantitatively compare RUE with three state-of-the-art algorithms, namely, LRU, LIRS, and ARC, under various replacement policies, operations, and workloads. Experimental results show that RUE outperforms these three existing algorithms in terms of both read and write hit rates. Furthermore, we show that with the four stacks in place, the computing overhead of HM is negligible.
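The abstract does not give RUE's exact formulas or the four-stack operations, so the following Python sketch only illustrates the general idea under stated assumptions: a hot degree that equally weights an assumed utilization-efficiency score and an inverse reuse distance, plus assumed thresholds and residence times for the four data types.

```python
from dataclasses import dataclass

# A minimal sketch of the hot-degree idea described in the abstract.
# The weighting, the thresholds, and the residence times below are
# illustrative assumptions, not the paper's actual RUE definitions.

@dataclass
class BlockStats:
    last_access: int = -1          # logical time of the previous access
    reuse_distance: int = 10**9    # accesses between the last two references
    useful_bytes: int = 0          # bytes of the block actually consumed
    block_size: int = 4096

def hot_degree(s: BlockStats) -> float:
    """Combine utilization efficiency and reuse distance (assumed 50/50 weighting)."""
    utilization = s.useful_bytes / s.block_size      # in [0, 1]
    recency = 1.0 / (1.0 + s.reuse_distance)         # shorter reuse distance -> hotter
    return 0.5 * utilization + 0.5 * recency

# Four assumed hot-data types with associated cache residence times (in accesses).
RESIDENCE = [("cold", 0), ("warm", 128), ("hot", 1024), ("very_hot", 8192)]

def classify(degree: float) -> tuple:
    """Map a hot degree to one of four types; thresholds are assumptions."""
    if degree < 0.25:
        return RESIDENCE[0]
    if degree < 0.5:
        return RESIDENCE[1]
    if degree < 0.75:
        return RESIDENCE[2]
    return RESIDENCE[3]

def on_access(stats: dict, block: int, now: int, bytes_used: int) -> tuple:
    """Update per-block statistics on an access and return its (type, residence_time)."""
    s = stats.setdefault(block, BlockStats())
    if s.last_access >= 0:
        s.reuse_distance = now - s.last_access
    s.last_access = now
    s.useful_bytes = bytes_used
    return classify(hot_degree(s))

if __name__ == "__main__":
    stats = {}
    trace = [(1, 4096), (2, 512), (1, 4096), (3, 1024), (1, 2048)]
    for t, (blk, used) in enumerate(trace):
        print(blk, on_access(stats, blk, t, used))
```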
ISBN (Print): 9781665415132
The exponential growth of big data requires more and more storage space. Host-aware SMR (HA-SMR) drives are the most promising of the three types of SMR drives, which can effectively improve the areal density of hard disk drives. However, when handling non-sequential writes under write-intensive workloads, a host-aware SMR drive often suffers from performance degradation. In this paper, we propose a hybrid storage system called STAR, which consists of an SSD cache and an HA-SMR drive. In STAR, we propose a new data layout management scheme for the HA-SMR drive, which divides the whole space of the HA-SMR drive into data zones and a log section. The log section is used to buffer data blocks for the data zones. We design a zone translation layer (ZTL) at the host side for the HA-SMR drive to translate non-sequential writes into sequential ones, and we call it a ZTL HA-SMR drive. In addition, we also use an SSD cache to absorb non-sequential writes for the ZTL HA-SMR drive and design a cache replacement algorithm called LCF, which can reduce data migration in the ZTL HA-SMR drive. Our experimental results show that STAR can effectively reduce non-sequential writes to the HA-SMR drive and improve its performance as well.
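STAR's ZTL and the LCF replacement algorithm are not specified in detail in the abstract. The sketch below only illustrates the host-side zone-translation idea under assumptions: writes that continue a zone's sequential write pointer go straight to the data zone, while non-sequential writes are appended to the log section and recorded in a mapping table; the log capacity and cleaning trigger are invented for illustration.

```python
# Illustrative sketch of a host-side zone translation layer (ZTL) that turns
# non-sequential writes into sequential appends to a log section. The sizes,
# the mapping structure, and the cleaning behavior are assumptions, not the
# paper's actual STAR design.

LOG_CAPACITY = 1024   # assumed number of block slots in the log section

class SimpleZTL:
    def __init__(self):
        self.zone_write_ptr = {}   # zone id -> next sequential offset in that zone
        self.log = []              # append-only log section
        self.log_map = {}          # (zone, offset) -> slot in the log section

    def write(self, zone: int, offset: int) -> str:
        expected = self.zone_write_ptr.get(zone, 0)
        if offset == expected:
            # Sequential write: allowed to go directly to the data zone.
            self.zone_write_ptr[zone] = expected + 1
            return f"zone {zone}: sequential write at offset {offset}"
        # Non-sequential write: append to the log section and remember where it went.
        if len(self.log) >= LOG_CAPACITY:
            self.clean()
        slot = len(self.log)
        self.log.append((zone, offset))
        self.log_map[(zone, offset)] = slot
        return f"zone {zone}: offset {offset} redirected to log slot {slot}"

    def clean(self):
        # Placeholder: a real system would migrate logged blocks back to their
        # data zones sequentially; the paper's migration details are not shown.
        self.log.clear()
        self.log_map.clear()

if __name__ == "__main__":
    ztl = SimpleZTL()
    for zone, off in [(0, 0), (0, 1), (0, 42), (1, 0)]:
        print(ztl.write(zone, off))
```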
Driven by the trends of emerging technologies in on-chip memories, and with the increasing size of last-level caches (LLCs), spin-transfer torque magnetic random access memories (STT-MRAMs) are the most promising alternative among the non-volatile memories (NVMs) to replace SRAMs. Despite their high density, scalability, and near-zero leakage power, the reliability of STT-MRAM LLCs is threatened by a high error rate due to their stochastic switching behavior. The error rate is highly influenced by the cache management, which necessitates redesigning the cache replacement policy based on the error behavior. In this article, we propose an error-aware cache replacement policy, namely, the conditional replacement policy (CRP), to improve the reliability of STT-MRAM caches by decreasing the rate of both read disturbance and write failure. This is achieved by nominating an appropriate data block in the cache to be replaced with an incoming data block, considering the minimum error rate. Moreover, the performance and latency of both write and read operations are considered. The simulation results show that, compared with the state-of-the-art replacement policy in STT-MRAM caches, CRP reduces the total error rate by 68%. In addition, the proposed policy enhances performance by 2% and saves energy consumption by 19%, on average. Meanwhile, the total number of writes is decreased by 41% compared with the previous scheme.
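The abstract states only that CRP nominates the victim with the minimum error rate while also weighing performance; the conditions themselves are not reproduced here. The sketch below shows one way such an error-aware victim selection could look, with an assumed pluggable error model and assumed weights, purely as an illustration of the selection step.

```python
# A minimal sketch of error-aware victim selection in the spirit of the
# abstract: among the ways of a set, nominate the block whose replacement is
# expected to add the least error, while penalizing eviction of recently used
# blocks. The weights and the error model are assumptions, not CRP's actual
# conditions.

from typing import Callable

def select_victim(
    ways: list,                                   # per-way metadata: {"tag", "last_use"}
    incoming: bytes,                              # data about to be written
    error_rate: Callable[[dict, bytes], float],   # assumed per-way error model
    now: int,
    w_err: float = 0.8,
    w_perf: float = 0.2,
) -> int:
    """Return the index of the way to replace (lowest combined cost)."""
    def cost(i: int) -> float:
        way = ways[i]
        recency = 1.0 / (1.0 + (now - way["last_use"]))  # recently used -> costly to evict
        return w_err * error_rate(way, incoming) + w_perf * recency
    return min(range(len(ways)), key=cost)

if __name__ == "__main__":
    # Toy error model: pretend odd-tagged lines sit in error-prone cells.
    toy_model = lambda way, data: 0.9 if way["tag"] % 2 else 0.1
    ways = [{"tag": 4, "last_use": 90}, {"tag": 7, "last_use": 10}, {"tag": 2, "last_use": 95}]
    print("replace way", select_victim(ways, b"\xff\x00", toy_model, now=100))
```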
ISBN (Print): 9781467329644; 9781467329637
A video sharing system for mobile devices is an important mobile service that lets users share videos with others. Scalability is critical to such a system. Cache technology, which employs cache servers to offload the original server, has been proposed to attack the scalability problem. Traditional replacement algorithms for cache technology are not suitable for the video sharing system. In this paper, we propose a cache replacement algorithm called S-LRU for the system. In S-LRU, we consider the size differences among videos. Experimental results demonstrate that S-LRU outperforms the other algorithms.
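The abstract says only that S-LRU takes video size differences into account. One plausible size-aware LRU variant, sketched below as an assumption rather than the paper's exact algorithm, charges cache capacity in bytes and evicts from the least-recently-used end until the incoming video fits.

```python
from collections import OrderedDict

# A size-aware LRU sketch: the cache is charged by video size (bytes) rather
# than object count, and videos are evicted from the LRU end until the
# incoming video fits. How S-LRU actually uses size is not detailed in the
# abstract, so this variant is only an illustrative assumption.

class SizeAwareLRU:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.videos = OrderedDict()       # video id -> size, ordered by recency

    def get(self, vid: str) -> bool:
        if vid not in self.videos:
            return False
        self.videos.move_to_end(vid)      # mark as most recently used
        return True

    def put(self, vid: str, size: int) -> None:
        if size > self.capacity:
            return                        # too large to cache at all
        if vid in self.videos:
            self.used -= self.videos.pop(vid)
        while self.used + size > self.capacity:
            _, evicted_size = self.videos.popitem(last=False)  # evict LRU video
            self.used -= evicted_size
        self.videos[vid] = size
        self.used += size

if __name__ == "__main__":
    cache = SizeAwareLRU(capacity_bytes=100)
    cache.put("a", 60); cache.put("b", 30)
    cache.get("a")            # touch "a" so "b" becomes the LRU victim
    cache.put("c", 30)        # evicts "b"
    print(list(cache.videos))  # ['a', 'c']
```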
Information-centric networking (ICN) has gained attention from network research communities due to its capability of efficient content dissemination. The in-network caching function in ICN plays an important role in realizing this design goal. However, most in-network caching research has focused on where to cache rather than how to cache: the former is known as content deployment in the network, and the latter is known as cache replacement in an ICN router. Although cache replacement has previously been researched intensively in the context of web caching and content delivery networks, the conventional approaches cannot be directly applied to ICN due to the fine granularity of chunks in ICN, which changes the access patterns. In this paper, we argue that ICN requires a novel cache replacement algorithm to fulfill the requirements set for a high-performance ICN router. We then propose a cache replacement algorithm that satisfies these requirements, named Compact CLOCK with Adaptive Replacement (Compact CAR). The evaluation results show that the cache memory required to achieve a desired performance can be reduced by 90% (to one-tenth) compared to conventional approaches such as FIFO and CLOCK.
Current research on cache design reveals that Spin Torque Transfer Magnetic RAMs (STT-MRAMs) have become one of the most promising technologies in the field of memory chip design, gaining a lot of attention from researchers due to their dynamic direct-map and data access policies for reducing the average cost, i.e., optimising both time and energy. In this paper, the vulnerability of STT-MRAM caches is investigated to examine the effect of workloads as well as process variations in characterising the reliability of the STT-MRAM cache. The study analyses and evaluates an existing efficient cache replacement policy, namely Least Error Rate (LER), which utilises Hamming Distance (HD) computations to reduce the Write Error Rate (WER) of L2 STT-MRAM caches with acceptable overheads. The performance analysis of the algorithm confirms its effectiveness in reducing the WER and cost overheads as compared to conventional LRU techniques.
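The core intuition behind an LER-style policy, as summarised above, is that write error rate grows with the number of STT-MRAM cells that must switch, so the victim whose stored contents differ least from the incoming data is preferred. The sketch below shows only that Hamming-distance selection step; the set organisation and tie-breaking are assumptions, not the paper's exact algorithm.

```python
# Among the ways of a set, evict the block whose contents differ from the
# incoming data in the fewest bit positions (minimum Hamming distance), so
# that the fewest cells have to switch on the subsequent write.

def hamming_distance(a: bytes, b: bytes) -> int:
    """Count differing bits between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

def ler_victim(set_contents: list, incoming: bytes) -> int:
    """Index of the way whose replacement needs the fewest cell switches."""
    return min(range(len(set_contents)),
               key=lambda i: hamming_distance(set_contents[i], incoming))

if __name__ == "__main__":
    ways = [bytes([0x00] * 8), bytes([0xFF] * 8), bytes(range(8))]
    incoming = bytes([0x01] * 8)
    print("evict way", ler_victim(ways, incoming))   # way 0: only 8 bits differ
```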
Access conflicts among different threads or processes of parallel applications can degrade system performance on multi-core systems with a shared cache. The replacement algorithm for the L2 shared cache can be used to address this problem efficiently and effectively. Although the LRU cache replacement algorithm better reflects the locality of a program and is widely used, it is not optimal for reducing the shared cache miss ratio and MPKI (misses per thousand instructions), and it cannot predict whether the data will be used again. In this paper, based on considerations of time prediction, the disadvantages of LRU, and the conflict between parallel applications and the shared cache, we propose the LRU2-MRU collaborative cache replacement algorithm to solve these problems. We use 10 benchmark programs to show that the LRU2-MRU collaborative cache algorithm can reduce the miss ratio of the L2 shared cache by 4.61%, and its MPKI is on average 4.54% lower than that of LRU.
In software-defined networking, the flow tables of OpenFlow switches are implemented with ternary content addressable memory (TCAM). Although TCAM can process input packets at high speed, it is a scarce and expensive resource, providing only a few thousand rule entries on a network switch. Rule caching is a technique to solve the TCAM capacity problem. However, the rule dependency problem is a challenging issue for wildcard rule caching, where packets can mismatch rules. In this paper, we use a cover-set approach to solve the rule dependency problem and cache important rules in TCAM. We also propose a rule cache replacement algorithm that considers temporal and spatial traffic localities. Simulation results show that our algorithms achieve a better cache hit ratio than previous works.
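The abstract does not define its cover-set construction, so the sketch below only illustrates the underlying dependency problem under a common, assumed formulation: if a wildcard rule is cached without its higher-priority overlapping rules, a packet meant for one of those rules can wrongly hit the cached rule, so the dependents must be identified and covered when the rule is installed.

```python
# Simplified rule model: a match is a dict of header field -> exact value or
# wildcard, and higher priority wins. The overlap test and example rules are
# assumptions for illustration, not the paper's exact cover-set algorithm.

WILDCARD = "*"

def overlaps(m1: dict, m2: dict) -> bool:
    """Two matches overlap unless some field pins them to different exact values."""
    for field in set(m1) | set(m2):
        v1, v2 = m1.get(field, WILDCARD), m2.get(field, WILDCARD)
        if v1 != WILDCARD and v2 != WILDCARD and v1 != v2:
            return False
    return True

def dependents(rules: list, target_idx: int) -> list:
    """Indices of higher-priority rules that overlap the target rule.

    These are the rules that must be represented in TCAM (e.g., via cover-set
    entries) whenever the target rule is cached.
    """
    prio, match = rules[target_idx]
    return [i for i, (p, m) in enumerate(rules)
            if p > prio and overlaps(m, match)]

if __name__ == "__main__":
    rules = [
        (30, {"dst": "10.0.0.1"}),                     # 0: highest priority
        (20, {"dst": "10.0.0.1", "port": "80"}),       # 1
        (10, {"dst": WILDCARD}),                       # 2: catch-all wildcard rule
    ]
    # Caching the wildcard rule (index 2) requires covering rules 0 and 1 as well.
    print("cover-set dependencies of rule 2:", dependents(rules, 2))
```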
Information-centric networking (ICN) has received increasing attention from all over the world. The novel aspects of ICN (e.g., the combination of caching, multicasting, and aggregating requests) are based on names that act as addresses for content. Communication by name has the potential to cope with growing and increasingly complex Internet technologies, for example, the Internet of Things, cloud computing, and a smart society. To realize ICN, router hardware must implement an innovative cache replacement algorithm that offers performance far superior to simple policy-based algorithms while still operating with feasible computational and memory overhead. However, most previous studies on cache replacement policies in ICN have proposed policies that are too blunt to achieve significant performance improvement, such as first-in first-out (FIFO) and random policies, or policies that are impractical in a resource-restricted environment, such as least recently used (LRU). Thus, we propose CLOCK-Pro Using Switching Hash-tables (CUSH) as a suitable policy for network caching. CUSH can identify and keep popular content worth caching in a network environment. CUSH also employs CLOCK and hash tables, which are low-overhead data structures, to satisfy the cost requirement. We numerically evaluate our proposed approach, showing that it can achieve cache hits on traffic traces for which simple conventional algorithms yield hardly any hits.
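CUSH itself (CLOCK-Pro-style hot/cold handling plus switching hash tables) is not specified in the abstract, so the sketch below shows only the low-overhead building block it starts from: a basic CLOCK (second-chance) cache with a hash table (a Python dict) for O(1) name lookup. The chunk-name trace is invented for the usage example.

```python
# Basic CLOCK cache with a dict index: each slot carries a reference bit; on a
# hit the bit is set, and on a miss the hand sweeps the circular buffer,
# clearing bits until it finds an unreferenced victim.

class ClockCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.slots = [None] * capacity     # circular buffer of content names
        self.ref = [False] * capacity      # reference bits
        self.index = {}                    # content name -> slot (hash-table lookup)
        self.hand = 0

    def access(self, name: str) -> bool:
        """Return True on a cache hit; otherwise insert the content."""
        slot = self.index.get(name)
        if slot is not None:
            self.ref[slot] = True          # hit: give the entry a second chance
            return True
        # Miss: advance the hand, clearing reference bits until a victim is found.
        while self.slots[self.hand] is not None and self.ref[self.hand]:
            self.ref[self.hand] = False
            self.hand = (self.hand + 1) % self.capacity
        victim = self.slots[self.hand]
        if victim is not None:
            del self.index[victim]
        self.slots[self.hand] = name
        self.ref[self.hand] = False
        self.index[name] = self.hand
        self.hand = (self.hand + 1) % self.capacity
        return False

if __name__ == "__main__":
    cache = ClockCache(capacity=3)
    trace = ["a", "b", "c", "a", "d", "a", "b"]
    hits = sum(cache.access(chunk) for chunk in trace)
    print(f"{hits} hits out of {len(trace)} requests")
```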