ISBN (Print): 9780769551142
In recent years, energy saving has become an important issue, especially for mobile systems. Previous studies have used prefetching and caching to create long disk idle intervals that allow disks to stay in low-power states. In this paper, we enhance previous work by proposing a new disk state-aware task scheduler, called DATS, to further maximize disk idle intervals. DATS considers both the disk power state and application characteristics. First, DATS differentiates between CPU-bound and I/O-bound processes. For I/O-bound processes, DATS further distinguishes random I/Os from sequential or loop I/Os. Based on the classification results, DATS schedules processes according to the current disk state so as to maximize the length of disk idle periods. The experimental results show that, compared to the default Linux scheduler, DATS successfully increases the length of disk idle intervals and reduces the number of lengthy disk spin-up operations. Moreover, by reducing the number of lengthy spin-up operations, DATS not only reduces disk energy consumption but also reduces the tasks' average turnaround times.
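The policy the abstract describes can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the function names, the CPU-ratio threshold, and the adjacency test for sequential I/O are all assumptions made for the sketch.

```python
# Illustrative sketch of a disk state-aware scheduling policy in the spirit
# of DATS. All names and thresholds are assumptions, not the paper's code.

def classify_task(cpu_ratio, io_offsets):
    """Classify a task as 'cpu', 'sequential-io', or 'random-io'.

    cpu_ratio  -- fraction of recent time spent on the CPU (assumed 0.8 cutoff)
    io_offsets -- recent disk offsets of the task's I/O requests
    """
    if cpu_ratio > 0.8 or not io_offsets:
        return "cpu"
    # Sequential (or loop) I/O: consecutive offsets are mostly adjacent.
    gaps = [abs(b - a) for a, b in zip(io_offsets, io_offsets[1:])]
    sequential = (not gaps) or sum(1 for g in gaps if g <= 1) >= len(gaps) / 2
    return "sequential-io" if sequential else "random-io"

def pick_next(tasks, disk_spun_down):
    """Prefer CPU-bound tasks while the disk is in a low-power state, so the
    idle interval is not cut short; otherwise batch the I/O-bound tasks.

    tasks -- list of (name, kind) pairs, kind as returned by classify_task.
    """
    order = (["cpu", "sequential-io", "random-io"] if disk_spun_down
             else ["sequential-io", "random-io", "cpu"])
    for kind in order:
        for name, task_kind in tasks:
            if task_kind == kind:
                return name
    return None
```

The key point the sketch captures is that the run queue is reordered by disk state: while the disk is spun down, CPU-bound work is drained first so the idle interval grows, and I/O-bound work is batched for when the disk is active.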
ISBN (Print): 9781538673089
The drastic increase in mobile video traffic has resulted in communication delay and slow download speeds. In 5G wireless networks, caching or prefetching video at mobile edge servers can exploit the high-speed local link to alleviate the load on the core network. However, the trade-off between data prefetching and the cache replacement strategy remains a challenge in mobile edge computing networks, especially when users are mobile and the cache size is constrained. To address this, we design a mobility-aware utility function based on the user's moving probability and the popularity of video clips. We then formulate the optimization problem as an integer linear program (ILP) with a set of knapsack constraints and develop an efficient dynamic programming algorithm. Compared with the baseline algorithm, simulation results show that our approach not only raises the hit rate by 24.93% but also controls the cost effectively. Meanwhile, video distortion reduction is increased by 21.26% on average.
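A knapsack-constrained placement of this kind can be solved with the standard 0/1-knapsack dynamic program, which the following sketch illustrates. The utility model (popularity weighted by moving probability) follows the abstract's description, but the concrete function names and the single-server, integer-size setting are simplifying assumptions.

```python
def mobility_aware_utility(popularity, move_prob):
    """Utility of caching a clip: its popularity weighted by the probability
    that the user moves into this edge server's coverage (assumed model)."""
    return popularity * move_prob

def select_clips(sizes, utilities, capacity):
    """0/1 knapsack DP: choose clips maximizing total utility under the
    cache-size constraint. Returns (best total utility, chosen clip indices).

    sizes     -- integer size of each clip
    utilities -- utility of each clip (e.g. from mobility_aware_utility)
    capacity  -- edge server cache size
    """
    n = len(sizes)
    # dp[i][c] = best utility using the first i clips with capacity c.
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]                      # skip clip i-1
            if sizes[i - 1] <= c:                        # or cache it
                cand = dp[i - 1][c - sizes[i - 1]] + utilities[i - 1]
                if cand > dp[i][c]:
                    dp[i][c] = cand
    # Backtrack to recover the chosen clip set.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes[i - 1]
    return dp[n][capacity], sorted(chosen)
```

For example, with clip sizes [3, 4, 2], utilities [5.0, 6.0, 3.0], and a cache of 5 units, the DP caches clips 0 and 2 for a total utility of 8.0 rather than the single higher-utility clip 1.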
The Hadoop Distributed File System (HDFS) is a popular cloud storage platform, benefiting from its scalable, reliable, and low-cost storage. However, because it is mainly designed for batch processing of large files, small files cannot be handled efficiently by HDFS. In this paper, we propose a mechanism to store small files in HDFS. In our approach, the file size is checked before uploading to HDFS. If the file size is less than the block size, all correlated small files are merged into a single file and an index is built for each small file. Furthermore, prefetching and caching mechanisms are used to improve the reading efficiency of small files. Meanwhile, new small files can be appended to the existing merged file. Compared with the original HDFS, experimental results show that the storage efficiency of small files is improved.
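The merge-index-prefetch idea can be illustrated with a toy in-memory model, where a byte buffer stands in for an HDFS block. The class and its interface are assumptions for the sketch, not the paper's system; a real implementation would sit on top of the HDFS client API.

```python
import io

class MergedFileStore:
    """Toy sketch of the merge-and-index scheme: correlated small files are
    appended into one large buffer (standing in for an HDFS block), and an
    index maps each file name to its (offset, length) in the merged file."""

    def __init__(self):
        self.blob = io.BytesIO()
        self.index = {}          # name -> (offset, length)
        self.cache = {}          # prefetch/cache layer for reads

    def append(self, name, data):
        # New small files are appended to the existing merged file,
        # as in the paper's appending operation.
        offset = self.blob.seek(0, io.SEEK_END)
        self.blob.write(data)
        self.index[name] = (offset, len(data))

    def read(self, name, prefetch=()):
        """Read one small file; optionally prefetch correlated files into
        the cache so later reads avoid another block access."""
        if name in self.cache:
            return self.cache[name]
        data = self._fetch(name)
        for other in prefetch:
            if other in self.index and other not in self.cache:
                self.cache[other] = self._fetch(other)
        return data

    def _fetch(self, name):
        offset, length = self.index[name]
        self.blob.seek(offset)
        return self.blob.read(length)
```

The index keeps per-file metadata out of the NameNode's way (one merged file instead of thousands of entries), while the prefetch list exploits the correlation between small files that the merge step established.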
Storage subsystems have become one of the most important components in modern computer systems and have been expanded to include all three levels of the memory hierarchy, namely the cache, the secondary, and the tertiary storage. This paper presents a study of data block prefetching and caching over the two upper storage levels in a hierarchical storage model, by proposing techniques for data amortization from the tertiary to the secondary and from the secondary to the cache level. Each level reserves a specific area for data prefetching, and an evolutionary algorithm is proposed for identifying the data blocks to be prefetched at each of the two upper storage levels. An analytic model is proposed in which the cache, the secondary, and the tertiary storage are appropriately parameterized in order to analyse the expected performance improvement due to prefetching. The data object prefetching approach is evaluated under a workload of requests spanning all storage levels and shows significant performance improvement in request service times, as well as in cache and secondary-storage hit ratios. (C) 2000 Elsevier Science Inc. All rights reserved.
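An evolutionary selection of blocks to prefetch, in the spirit of the abstract, can be sketched as a small genetic algorithm: each individual is a bit vector over candidate blocks, fitness is the total access probability of the chosen blocks, and individuals that overflow the reserved prefetch area are penalized. Every parameter here (population size, selection scheme, operators) is an illustrative assumption, not the paper's algorithm.

```python
import random

def evolve_prefetch_set(access_prob, area_size, generations=60, pop=20, seed=0):
    """Toy genetic algorithm for choosing which blocks to prefetch.

    access_prob -- estimated access probability of each candidate block
    area_size   -- number of blocks the reserved prefetch area can hold
    Returns (chosen block indices, total access probability covered).
    """
    rng = random.Random(seed)
    n = len(access_prob)

    def fitness(bits):
        # Individuals exceeding the prefetch area are infeasible.
        if sum(bits) > area_size:
            return 0.0
        return sum(p for b, p in zip(bits, access_prob) if b)

    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]           # truncation selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)              # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n)                   # point mutation
            child[i] ^= 1
            children.append(child)
        population = parents + children
    best = max(population, key=fitness)
    return [i for i, b in enumerate(best) if b], fitness(best)
```

In the hierarchical setting the abstract describes, one such search would run per level, with `area_size` set to the prefetch area reserved at that level and `access_prob` estimated from the request stream arriving from the level above.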