ISBN: (Print) 9781424455379
Multi-dimensional data analysis and online analytical processing (OLAP) are standard querying techniques applied to today's data warehouses and data mining. Moreover, these queries often face pressing response-time requirements. A common and powerful query optimization technique is to pre-compute some of the data and keep it in a cache rather than compute it from the data warehouse each time. In this paper, we present a cache optimization mechanism that combines a replacement algorithm with a prefetching strategy. In our empirical evaluation, we found that query latency was reduced effectively, especially for queries covering a large number of dimensions.
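The abstract does not spell out the replacement or prefetching policies, so the sketch below is only illustrative: it assumes an LRU replacement policy and a naive "prefetch the coarser roll-up" heuristic, and all names (AggregateCache, compute_fn, prefetch_fn) are hypothetical.

from collections import OrderedDict

class AggregateCache:
    """Illustrative OLAP result cache: LRU replacement plus naive prefetching.
    The replacement policy and prefetch heuristic are assumptions, not the
    paper's actual mechanism."""

    def __init__(self, capacity, compute_fn, prefetch_fn):
        self.capacity = capacity                # max number of cached aggregates
        self.compute = compute_fn               # computes an aggregate from the warehouse
        self.prefetch_candidates = prefetch_fn  # suggests related keys to pre-compute
        self.store = OrderedDict()              # key -> pre-computed aggregate (LRU order)

    def _put(self, key, value):
        self.store[key] = value
        self.store.move_to_end(key)
        while len(self.store) > self.capacity:
            self.store.popitem(last=False)      # evict the least recently used entry

    def get(self, key):
        if key in self.store:                   # cache hit: serve pre-computed data
            self.store.move_to_end(key)
            return self.store[key]
        value = self.compute(key)               # cache miss: fall back to the warehouse
        self._put(key, value)
        for k in self.prefetch_candidates(key): # eagerly pre-compute related aggregates
            if k not in self.store:
                self._put(k, self.compute(k))
        return value

# Hypothetical usage: keys are tuples of grouping dimensions.
warehouse = lambda dims: sum(hash(d) % 100 for d in dims)       # stand-in for a real warehouse query
neighbours = lambda dims: [dims[:-1]] if len(dims) > 1 else []  # prefetch the coarser roll-up
cache = AggregateCache(capacity=128, compute_fn=warehouse, prefetch_fn=neighbours)
print(cache.get(("year", "region", "product")))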
ISBN: (Print) 9780769537665
Commodity media have been used as front-end caches to improve the performance of storage servers during periods of high load in distributed file systems. Compared with traditional cache media such as SDRAM, the speed of a commodity cache medium is much lower, which undermines traditional cache management algorithms designed around a high-speed cache. In this paper, we propose a novel cache management algorithm, C-Aware, which takes into account the effect of the speed characteristics of the cache medium and the data source on overall system performance. By tracking the history of response times for accesses to the cache and to the source, C-Aware adaptively decides whether to cache data according to the current running environment, and achieves good performance whether or not the server is busy. Our experiments show that C-Aware achieves nearly 80% improvement over traditional methods when the cache size is half of the total test data set and the server is not busy, and it still delivers comparable performance under high server-side workload.
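The sketch below illustrates the cost-aware admission idea the abstract describes: track recent response times for the cache medium and the source, and admit data into the slow cache medium only while it looks cheaper than going back to the source. The exponentially weighted averaging and the exact admission rule are assumptions, not the published C-Aware policy, and all names here are hypothetical.

import time

class CostAwareCache:
    """Sketch of a cost-aware admission policy in the spirit of C-Aware:
    admit data into the (slow) cache medium only while its observed latency
    stays below the source's. Averaging scheme and admission rule are
    assumptions; the paper's exact policy is not given in the abstract."""

    def __init__(self, read_cache, read_source, write_cache, alpha=0.2):
        self.read_cache = read_cache      # read a block from the commodity cache medium
        self.read_source = read_source    # read a block from the storage server
        self.write_cache = write_cache    # install a block into the cache medium
        self.alpha = alpha                # smoothing factor for the moving averages
        self.cache_lat = None             # EWMA of cache-medium response time
        self.source_lat = None            # EWMA of source response time
        self.cached = set()

    def _update(self, current, sample):
        return sample if current is None else (1 - self.alpha) * current + self.alpha * sample

    def get(self, key):
        if key in self.cached:
            t0 = time.perf_counter()
            value = self.read_cache(key)
            self.cache_lat = self._update(self.cache_lat, time.perf_counter() - t0)
            return value
        t0 = time.perf_counter()
        value = self.read_source(key)
        self.source_lat = self._update(self.source_lat, time.perf_counter() - t0)
        # Admit into the cache only when the cache medium currently looks cheaper
        # than going back to the (possibly overloaded) source.
        if self.cache_lat is None or self.source_lat is None or self.cache_lat < self.source_lat:
            self.write_cache(key, value)
            self.cached.add(key)
        return value

# Hypothetical usage with in-memory stand-ins for the cache medium and the source.
source_data = {i: i * i for i in range(10)}
cache_store = {}
c = CostAwareCache(read_cache=cache_store.get,
                   read_source=lambda k: source_data[k],
                   write_cache=cache_store.__setitem__)
print(c.get(3), c.get(3))   # first read goes to the source, second hits the cache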
ISBN: (Print) 078036290X; 0780362918
An algorithm is presented for a Web server, intended to substantially improve the page delivery service based on statistically collected data on the frequency of hyperlink demand. The Web page delivery service is modelled as a Markov process in which the most valuable demands are automatically ordered in a three-dimensional page space based on a second-order Markov process model.
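A second-order Markov model over page requests can be sketched as follows: count how often each page follows a given pair of preceding pages, then rank candidate next pages by that count. This is a generic illustration of the technique named in the abstract, not the paper's ordering scheme, and all identifiers are hypothetical.

from collections import defaultdict

class SecondOrderMarkovPredictor:
    """Illustrative second-order Markov model of page requests: the next page
    is ranked by how often it followed the same pair of preceding pages."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))  # (p2, p1) -> {next_page: count}

    def observe(self, session):
        """Update transition counts from one session's ordered page requests."""
        for p2, p1, nxt in zip(session, session[1:], session[2:]):
            self.counts[(p2, p1)][nxt] += 1

    def rank_next(self, p2, p1, top_k=3):
        """Return the most likely next pages given the last two requests."""
        followers = self.counts.get((p2, p1), {})
        return sorted(followers, key=followers.get, reverse=True)[:top_k]

# Hypothetical usage with toy sessions of page identifiers.
model = SecondOrderMarkovPredictor()
model.observe(["/", "/news", "/news/42", "/contact"])
model.observe(["/", "/news", "/news/42", "/news/43"])
print(model.rank_next("/", "/news"))   # -> ['/news/42']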
Cache memory hierarchies are used to buffer those portions of main memory used most frequently by the CPU. Because cache memory is very costly, good design techniques must achieve high utilization (hit ratio) and ease of implementation with small cache sizes. The memory replacement policy is important in maintaining a high hit ratio. Most replacement policies in use are easily implemented when main memory has fixed page locations. A new cache algorithm using a variable page configuration is explained in terms of program behaviour.
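For context, the hit ratio mentioned above can be measured by replaying an address trace against a simulated cache. The sketch below uses a plain fixed-size LRU policy as a baseline, since the abstract does not describe the variable page configuration scheme itself; the function name and toy trace are hypothetical.

from collections import OrderedDict

def lru_hit_ratio(trace, cache_lines):
    """Hit ratio of a plain LRU replacement policy over an address trace.
    This is only a baseline for the hit-ratio metric, not the paper's
    variable page configuration scheme."""
    cache = OrderedDict()
    hits = 0
    for addr in trace:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > cache_lines:
                cache.popitem(last=False)  # evict the least recently used line
    return hits / len(trace)

# Toy trace with a small working set: a tiny cache already captures most reuse.
trace = [0, 1, 2, 0, 1, 3, 0, 1, 2, 4] * 10
print(lru_hit_ratio(trace, cache_lines=4))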