The transmission optimization of VR video streaming, which spans content prediction optimization and caching strategy optimization, can improve the quality of the user experience. Existing work focuses either on content prediction or on caching strategy. However, in an end-edge-cloud system, prediction and caching should be considered together. In this paper, we jointly optimize the four stages of prediction, caching, computing, and transmission in a mobile edge caching system, aiming to maximize the user's quality of experience (QoE). For the caching strategy, we design a caching algorithm, VIE, that operates without knowledge of future request content and efficiently improves the content hit rate as well as the durations of the prediction, computing, and transmission stages. The VIE caching algorithm is shown to outperform other algorithms in terms of delay. We optimize the four stages under arbitrary resource allocation and obtain the optimal results. Finally, the proposed caching algorithm is validated in a real-world scenario against several other caching algorithms; simulation results show that the user's QoE is improved under the proposed caching algorithm.
Cooperative caching among nodes is a hot topic in Content Centric Networking (CCN). However, cooperative caching mechanisms are typically performed over an arbitrary graph topology, leading to complex cooperative operations. For this reason, hierarchical CCN has received widespread attention, as the explicit affiliation between nodes allows simple cooperative operation. In this study, the authors propose a heuristic cooperative caching algorithm for maximising the average provider earned profit under a two-level CCN topology. The algorithm divides the cache space of control nodes into two fractions for caching contents downloaded from different sources: one fraction caches duplicated contents and the other caches unique contents. The optimal value of the split factor can be obtained by maximising the earned profit. Furthermore, they also propose a replacement policy to support the proposed caching algorithm. Finally, simulation results show that the proposed caching algorithm outperforms some traditional caching strategies.
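The split-factor idea above can be sketched as a toy partitioned cache. The class name, the LRU eviction within each fraction, and the `alpha` parameter are illustrative assumptions, not the paper's exact mechanism:

```python
from collections import OrderedDict

class SplitCache:
    """Toy two-fraction cache: one partition for 'duplicated' contents
    (also cached elsewhere) and one for 'unique' contents, sized by a
    split factor alpha in [0, 1]. Each partition evicts LRU-style."""

    def __init__(self, capacity, alpha):
        self.cap_dup = int(capacity * alpha)      # slots for duplicated contents
        self.cap_uniq = capacity - self.cap_dup   # slots for unique contents
        self.dup = OrderedDict()
        self.uniq = OrderedDict()

    def insert(self, name, duplicated):
        part, cap = (self.dup, self.cap_dup) if duplicated else (self.uniq, self.cap_uniq)
        if cap == 0:
            return
        if name in part:
            part.move_to_end(name)
            return
        if len(part) >= cap:
            part.popitem(last=False)              # evict least recently used
        part[name] = True

    def hit(self, name):
        for part in (self.dup, self.uniq):
            if name in part:
                part.move_to_end(name)
                return True
        return False
```

In the paper, the split factor would be tuned to maximise the provider's earned profit; here it is simply a constructor argument.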
ISBN:
(print) 9781728196565
Cloud computing has grown rapidly in recent years. However, as there are a limited number of cloud data centers across the world, accessing a distant cloud causes long response latency. To solve this issue, edge computing has been proposed as one of the most promising approaches: it brings computing resources geographically closer to end users, reducing access latency. Since the development of edge computing is still in its infancy, there is no mature architecture framework; a three-tiered system architecture is one of the many frameworks proposed. In this paper, a new caching algorithm based on data mining techniques is proposed for the three-tiered edge computing architecture. Simulation and experimental results show that, equipped with the new caching algorithm, the three-tiered edge computing architecture can improve the performance of a web application.
This paper presents a caching algorithm that offers better reconstructed data quality to the requesters than a probabilistic caching scheme while maintaining comparable network performance. It decides whether an incoming data packet must be cached based on the dynamic caching probability, which is adjusted according to the priorities of content carried by the data packet, the uncertainty of content popularities, and the records of cache events in the router. The adaptation of caching probability depends on the priorities of content, the multiplication factor adaptation, and the addition factor adaptation. The multiplication factor adaptation is computed from an instantaneous cache-hit ratio, whereas the addition factor adaptation relies on a multiplication factor, popularities of requested contents, a cache-hit ratio, and a cache-miss ratio. We evaluate the performance of the caching algorithm by comparing it with previous caching schemes in network simulation. The simulation results indicate that our proposed caching algorithm surpasses previous schemes in terms of data quality and is comparable in terms of network performance.
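A rough sketch of such a dynamically adapted caching probability follows. The exact update rule and the `beta` and `gamma` constants below are illustrative assumptions, not the paper's formulas; only the general shape (a multiplicative term driven by the instantaneous hit ratio, an additive term driven by popularity and the hit/miss balance) follows the description above:

```python
import random

def update_caching_probability(p, hit_ratio, miss_ratio, popularity,
                               beta=0.5, gamma=0.05):
    """Toy adaptation of a per-router caching probability: a
    multiplicative factor from the instantaneous cache-hit ratio plus
    an additive factor from content popularity and the hit/miss gap.
    beta and gamma are illustrative tuning constants."""
    mult = 1.0 + beta * (hit_ratio - 0.5) * 2      # > 1 when hits dominate
    add = gamma * popularity * (miss_ratio - hit_ratio)
    p_new = p * mult + add
    return min(1.0, max(0.0, p_new))               # clamp to [0, 1]

def should_cache(p, rng=random.random):
    """Cache the incoming data packet with probability p."""
    return rng() < p
```

Each router would recompute `p` as its cache-event records change, then apply `should_cache` to every incoming data packet.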
ISBN:
(print) 9781450353656
Existing online social network (OSN) services use caching systems with the least recently used (LRU) algorithm as an eviction policy to improve service performance. However, they do not consider the characteristics of users' usage patterns in OSN services, nor the fact that users and cloud servers are geographically distributed over a large area. As a result, relatively unnecessary data occupies limited memory space, and the degradation of cache efficiency cannot be prevented. We introduce a social-aware caching algorithm to improve the performance of OSN services in a multi-cloud environment. Our approach considers the locations of the user and the cloud server and allocates memory space differently to each user according to the user's frequency of service usage. To validate our approach, we implemented an OSN service that manages user data in the same way as Twitter, a representative OSN service, and experimented with actual users' locations and times of use collected from Twitter. Our findings indicate that this approach can improve the cache hit ratio by an average of more than 24% and reduce the execution delay by an average of more than 1095 ms.
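The per-user memory allocation described above can be sketched as a simple proportional split. The weighting by usage frequency alone is a simplifying assumption; the paper's scheme also factors in user and cloud-server locations:

```python
def allocate_cache_quota(users, total_slots):
    """Toy social-aware allocation: split a cache's slot budget across
    users in proportion to each user's service-usage frequency.
    users maps a user name to a dict with a 'freq' field."""
    total_freq = sum(u["freq"] for u in users.values())
    return {name: int(total_slots * u["freq"] / total_freq)
            for name, u in users.items()}
```

A heavy user thus receives a larger share of the cache than an occasional one, which is the intuition behind the reported hit-ratio gains.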
The growing demand for high-capacity content such as 3D and 360-degree videos highlights the need for efficient data delivery in B5G/6G networks. Multi-access edge computing (MEC) has emerged as a promising solution, but its limited memory capacity makes cache replacement strategies essential. To address this, we propose an adaptive recency masking caching (ARMC) algorithm for 360-degree video streaming in MEC environments. The proposed caching mechanism efficiently replaces cached data by combining two techniques, recency masking and Frequency-Filtered Least Recently Used, choosing between them based on the importance of the cached data and the available cache capacity. In addition, we introduce the concept of an observation window to improve cache performance by reflecting the recency of data request patterns. We conducted experiments using a field of view (FoV) dataset recorded from real users watching videos via head-mounted displays. Given the characteristics of 360-degree videos, we assumed that the MEC cache would hold high-quality tiles matching the user's FoV at each moment. Through experiments, we confirmed that the proposed method achieves a higher cache hit rate than existing cache replacement techniques. In particular, ARMC improved the hit rate by up to 29% compared to the Least Frequently Used algorithm under constrained cache conditions of 6% or less of the total data. Higher cache hit rates contribute to reducing transmission latency and lowering bandwidth consumption. Meanwhile, the optimal size of the observation window, a key variable in the proposed technique, varies with the cache size and user viewing patterns. To address this issue, we proposed ARMC-RL, a variant of ARMC with reinforcement learning (RL) assistance, designed to dynamically estimate the optimal observation window size for the given environment. Based on the experiments, ARMC-RL, depending on the learning model, a
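The Frequency-Filtered LRU component can be illustrated roughly as follows; the window and threshold mechanics here are assumptions, and the recency-masking half of ARMC is omitted:

```python
from collections import OrderedDict, deque

class FrequencyFilteredLRU:
    """Toy Frequency-Filtered LRU: evicts the least recently used item,
    but only among items whose request count inside a sliding
    observation window falls below a frequency threshold; if every
    cached item clears the threshold, plain LRU applies. The window
    size and threshold are illustrative parameters."""

    def __init__(self, capacity, window=32, threshold=2):
        self.capacity = capacity
        self.window = deque(maxlen=window)   # recent request history
        self.threshold = threshold
        self.items = OrderedDict()           # key -> value, in LRU order

    def _freq(self, key):
        return sum(1 for k in self.window if k == key)

    def get(self, key):
        self.window.append(key)
        if key in self.items:
            self.items.move_to_end(key)
            return self.items[key]
        return None

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
            self.items[key] = value
            return
        if len(self.items) >= self.capacity:
            # prefer evicting a low-frequency item, oldest first;
            # fall back to plain LRU if none is below the threshold
            victim = next((k for k in self.items if self._freq(k) < self.threshold),
                          next(iter(self.items)))
            del self.items[victim]
        self.items[key] = value
```

The `window` deque plays the role of the observation window above: shrinking or growing it changes how quickly the frequency filter forgets old request patterns, which is exactly the parameter ARMC-RL tries to tune.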
The irregular distribution of the non-zero elements of large-scale sparse matrices leads to low data-access efficiency on the unique architecture of the Sunway many-core processor, which poses great challenges to the efficient implementation of sparse matrix-vector multiplication (SpMV) on the SW26010P many-core processor. To address this problem, we study SpMV optimization strategies on the SW26010P. Firstly, we design a memorized data storage transformation strategy to transform matrices from the CSR storage format into BCSR (Block Compressed Sparse Row) storage. Secondly, we introduce dynamic task scheduling to balance the load across the slave cores. Thirdly, we refine the LDM memory design and optimize the slave-core dual-cache strategy to further improve performance. Finally, we selected a large number of representative sparse matrices from the Matrix Market for testing. The results show that the scheme markedly speeds up the processing of sparse matrices of various sizes, with a master-slave speedup ratio of up to 38 times. The optimization methods used in this paper carry over to other complex applications on the SW26010P many-core processor.
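The CSR-to-BCSR transformation at the heart of the first step can be sketched in a few lines. This is a minimal reference version, not the paper's memorized strategy, and it assumes the row count is a multiple of the block size (real implementations pad):

```python
def csr_to_bcsr(indptr, indices, data, bs):
    """Toy CSR -> BCSR conversion. Returns block-row pointers,
    block-column indices, and dense bs x bs blocks stored row-major
    (zero-filled where the matrix has no entry)."""
    n_rows = len(indptr) - 1
    b_indptr, b_indices, b_data = [0], [], []
    for br in range(n_rows // bs):
        blocks = {}                              # block-col -> bs*bs values
        for r in range(br * bs, (br + 1) * bs):
            for k in range(indptr[r], indptr[r + 1]):
                bc = indices[k] // bs
                blk = blocks.setdefault(bc, [0.0] * (bs * bs))
                blk[(r - br * bs) * bs + indices[k] % bs] = data[k]
        for bc in sorted(blocks):                # keep block columns ordered
            b_indices.append(bc)
            b_data.append(blocks[bc])
        b_indptr.append(len(b_indices))
    return b_indptr, b_indices, b_data
```

Grouping non-zeros into fixed-size dense blocks is what lets each slave core fetch contiguous chunks into its LDM instead of issuing irregular single-element loads.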
ISBN:
(digital) 9789819708345
ISBN:
(print) 9789819708338;9789819708345
For mixed HDD and SSD storage scenarios, Ceph Cache Tier provides a tiered caching feature that separates fast and slow storage pools to manage data objects more efficiently. However, due to the limited total capacity of the cache pool, only some data objects can be stored. Performance improves significantly when clients' accesses concentrate on hot objects in the cache pool; if a client accesses the cache pool without hitting data, redundant IO operations occur, which increases client access latency and reduces throughput. To improve the hit rate of the Ceph Cache Tier cache pool, this paper proposes a temperature density-based cache replacement algorithm (TDC). The algorithm calculates the temperature density of the space consumed by each object and evicts the objects with the lowest temperature density, i.e., the objects that contribute least to the hit rate. The algorithm comprises object temperature calculation, temperature density calculation, and the cache replacement policy. We evaluate the TDC algorithm by replaying IO workloads from a real trace dataset and demonstrate its efficiency in improving the cache hit rate. Finally, we applied the TDC algorithm to a Ceph distributed storage system and verified the performance of the Cache Tier based on the TDC algorithm.
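The eviction step reduces to a ratio comparison, which can be sketched as follows. How "temperature" is computed from access history is the paper's contribution and is not modeled here; the dict layout is an illustrative assumption:

```python
def evict_lowest_temperature_density(objects):
    """Toy TDC-style eviction: temperature density = temperature / size,
    so a large, cold object is evicted before a small, hot one.
    objects maps an object name to {'temp': float, 'size': float}."""
    victim = min(objects,
                 key=lambda name: objects[name]["temp"] / objects[name]["size"])
    del objects[victim]
    return victim
```

Dividing by size is the key difference from plain temperature-based eviction: a moderately warm object that occupies a large share of the cache pool can still be the first to go.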
An information-centric network (ICN) emphasizes content retrieval without concern for the location of the content's actual producer. This novel networking paradigm makes content retrieval faster and less expensive by shifting data provisioning to the content holder rather than the content owner; caching is the feature of ICN that makes content serving possible from any intermediate device. Efficient caching is one of the primary requirements for effective deployment of ICN. In this paper, a caching approach with balanced content distribution among network devices is proposed. The contents to be cached are selected according to popularity computed using Zipf's law, and the dynamic change in content popularity is also considered when making caching decisions. To balance cached content across the network, every router keeps track of its neighbors' cache status. Three parameters, the proportionate distance of the router from the client (p(d)), the router congestion (r(c)), and the cache status (c(s)), are considered when selecting a router for caching contents. The new caching approach is evaluated in a simulated environment using ndnSIM-2.0 against three state-of-the-art approaches: Leave Copy Everywhere (LCE), a centrality measures-based algorithm (CMBA), and probability-based caching (probCache). The proposed caching method shows better performance than the other three protocols.
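The two quantitative ingredients above, Zipf popularity and a router score over p(d), r(c), and c(s), can be sketched briefly. The Zipf formula is standard; the linear combination and its weights are illustrative assumptions, not the paper's scoring function:

```python
def zipf_popularity(n_contents, alpha=0.8):
    """Zipf's-law popularity for contents ranked 1..n: p(k) is
    proportional to 1 / k**alpha. alpha is an illustrative skew
    parameter; larger alpha concentrates requests on top ranks."""
    weights = [1.0 / (k ** alpha) for k in range(1, n_contents + 1)]
    total = sum(weights)
    return [w / total for w in weights]

def router_score(p_d, r_c, c_s, w=(0.4, 0.3, 0.3)):
    """Toy weighted score over the three parameters named above, each
    normalized to [0, 1]: proportionate distance from the client (p_d),
    router congestion (r_c), and cache status / occupancy (c_s).
    Lower congestion and a less full cache score higher here."""
    return w[0] * p_d + w[1] * (1.0 - r_c) + w[2] * (1.0 - c_s)
```

A caching router would then be chosen as the neighbor maximizing `router_score`, while `zipf_popularity` decides which contents are worth caching at all.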
ISBN:
(print) 9781467347143
A novel proxy caching algorithm, P(2)CASM (Proxy caching algorithm Based on Popularity for Streaming Media), based on segment popularity for streaming media was proposed. It implements a proxy caching admission and replacement algorithm driven by segment popularity, following the principle that the amount of data cached on the proxy server for each streaming media object is proportional to its popularity. The size of the caching window is updated periodically according to the average access time of the clients, and P2P replicas can be cached on a P2P proxy. Simulation results showed that the algorithm adapts better than the A(2)LS (Adaptive and Lazy Segmentation) algorithm to variations in the proxy server cache: it caches a larger average number of streaming media objects and achieves a lower delayed-request ratio, while the byte-hit ratio of P(2)CASM is close to or exceeds that of A(2)LS under the same proxy cache space.