ISBN (print): 9781665420082
As systems perform progressively more complex tasks, the demand for energy efficiency in computational systems is constantly increasing. Cache memory plays a fundamental role in this issue. Through dynamic cache reconfiguration techniques, it is possible to obtain an optimal cache configuration that minimizes energy losses; achieving this goal depends on a precise selection of cache parameters. In this work, a machine learning-based approach is evaluated that predicts the optimal cache configuration for different applications from their dynamic instructions and a variety of cache parameters. Experiments show that good classification results can already be obtained using a smaller set of application instructions. The model achieves 96.19% accuracy using the complete set of RISC-V instructions and 96.33% accuracy using only the memory instructions, a more concise set that directly affects the cache power model, while also reducing model complexity.
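The classifier described above maps an application's instruction mix to an optimal cache configuration. A minimal sketch of that idea, using a 1-nearest-neighbour classifier over invented load/store-fraction features (the training data, labels, and feature choice here are illustrative assumptions, not the paper's actual model or dataset):

```python
import math

# Hypothetical training set: (load fraction, store fraction) of an
# application's dynamic instructions -> label of its best cache config.
TRAINING = [
    ((0.30, 0.15), "large-cache"),
    ((0.28, 0.12), "large-cache"),
    ((0.05, 0.02), "small-cache"),
    ((0.08, 0.03), "small-cache"),
]

def predict(features):
    """Return the label of the nearest training point (Euclidean distance)."""
    nearest = min(TRAINING, key=lambda item: math.dist(item[0], features))
    return nearest[1]
```

A memory-intensive profile such as `predict((0.29, 0.14))` lands near the "large-cache" examples, while `predict((0.06, 0.02))` lands near "small-cache". Using only memory-instruction features, as the abstract suggests, keeps the feature vector short and the model simple.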
Dynamic cache reconfiguration has been widely explored for energy optimization and performance improvement in single-core systems. Cache partitioning techniques have been introduced for the shared cache in multicore systems to alleviate inter-core interference. While these techniques focus only on performance and energy, they ignore vulnerability due to soft errors. In this article, we present a static-profiling-based algorithm to enable vulnerability-aware energy optimization for real-time multicore systems. Our approach can efficiently search the space of cache configurations and partitioning schemes for energy optimization while satisfying task deadlines and vulnerability constraints. A machine learning technique is employed to minimize the static profiling time without sacrificing the accuracy of results. Our experimental results demonstrate that our approach achieves 19.2% average energy savings compared with the base configuration, while drastically reducing vulnerability (49.3% on average) compared to state-of-the-art techniques. Furthermore, the machine learning technique enabled more than 10x speedup in static profiling time with a negligible prediction error of 3%.
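The selection step of such a constrained search can be sketched as follows: among statically profiled candidates, keep only those whose worst-case execution time meets the deadline and whose vulnerability stays under the bound, then pick the lowest-energy survivor. This is an illustrative simplification, not the paper's algorithm; the candidate fields and thresholds are assumptions:

```python
def select(candidates, deadline, vuln_limit):
    """Pick the lowest-energy feasible (configuration, partitioning) candidate.

    candidates: list of dicts with keys "energy", "wcet", "vulnerability"
    (all obtained from static profiling in this sketch).
    Returns None when no candidate satisfies both constraints.
    """
    feasible = [c for c in candidates
                if c["wcet"] <= deadline and c["vulnerability"] <= vuln_limit]
    if not feasible:
        return None
    return min(feasible, key=lambda c: c["energy"])
```

For example, a candidate that saves the most energy but misses its deadline is discarded before the energy comparison ever happens, which is what distinguishes this from plain energy minimization.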
With each CMOS technology generation, leakage energy consumption has been dramatically increasing, and hence managing the leakage power consumption of large last-level caches (LLCs) has become a critical issue in modern processor design. In this paper, we present EnCache, a novel software-based technique that uses dynamic profiling-based cache reconfiguration for saving cache leakage energy. EnCache uses a simple hardware component called a profiling cache, which dynamically predicts the energy efficiency of an application for 32 possible cache configurations. Using these estimates, system software reconfigures the cache to the most energy-efficient configuration. Because EnCache uses dynamic cache reconfiguration, it does not require offline profiling or per-application parameter tuning. Furthermore, EnCache optimizes directly for the overall memory subsystem (LLC and main memory) energy efficiency instead of the LLC energy efficiency alone. Experiments performed with an x86-64 simulator and workloads from the SPEC2006 suite confirm that EnCache provides larger energy savings than a conventional energy-saving scheme. For single-core and dual-core system configurations, the average savings in memory subsystem energy over a shared baseline configuration are 30.0% and 27.3%, respectively.
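The key point above is that the reconfiguration decision optimizes whole-memory-subsystem energy, not LLC energy alone: a smaller cache leaks less but sends more misses to DRAM. A minimal sketch of that trade-off, with all energy constants invented for illustration (they are not from the paper):

```python
# Assumed per-interval energy model: LLC dynamic + LLC leakage + DRAM
# accesses caused by misses. Constants are illustrative placeholders.
DRAM_ACCESS_NJ = 20.0

def subsystem_energy(cfg, accesses, miss_rate, cycles):
    """Estimate memory-subsystem energy (nJ) for one profiling interval."""
    llc_dynamic = accesses * cfg["hit_energy_nj"]
    llc_leakage = cycles * cfg["leakage_nj_per_cycle"]
    dram = accesses * miss_rate * DRAM_ACCESS_NJ
    return llc_dynamic + llc_leakage + dram

def pick_config(configs, profile):
    """Return the configuration with the lowest estimated subsystem energy,
    using per-configuration miss rates predicted by a profiling structure."""
    return min(configs, key=lambda cfg: subsystem_energy(
        cfg, profile["accesses"], profile["miss_rate"][cfg["name"]],
        profile["cycles"]))
```

With a leaky 2MB configuration and a frugal 512KB one, the smaller cache can win overall even though it misses more often, because the leakage saved outweighs the extra DRAM energy; the opposite holds for miss-sensitive workloads.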
ISBN (print): 9781467379427
A victim cache is a small memory block, usually placed after the first-level cache, that allows recovery of recently evicted cache blocks. This small component can significantly improve the hit rate of the cache hierarchy. However, its effectiveness varies greatly with the application. Thus, tuning the victim cache together with its upper cache level to fit the running application can yield important energy savings. In this paper, we propose a cache tuning heuristic that explores the configuration of a level-1 data cache in combination with the tuning of a victim cache. Experimental results show that this approach yields a significant improvement in energy efficiency compared with both a fixed cache hierarchy and a state-of-the-art tuning heuristic.
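The mechanism described above can be sketched as a direct-mapped L1 backed by a tiny fully-associative LRU victim cache: on an L1 miss, the evicted line goes to the victim cache, and a later hit there swaps the line back. The sizes and swap policy below are illustrative assumptions, not the paper's design:

```python
from collections import OrderedDict

class L1WithVictim:
    """Direct-mapped L1 plus a small fully-associative victim cache (LRU).
    Addresses are block numbers; 4 L1 sets and 2 victim entries are
    illustrative sizes."""

    def __init__(self, l1_sets=4, victim_entries=2):
        self.l1_sets = l1_sets
        self.l1 = [None] * l1_sets          # one block per set (direct-mapped)
        self.victim = OrderedDict()          # insertion order tracks LRU
        self.victim_entries = victim_entries
        self.hits = self.misses = 0

    def access(self, block):
        idx = block % self.l1_sets
        if self.l1[idx] == block:            # L1 hit
            self.hits += 1
            return "L1"
        if block in self.victim:             # victim hit: swap with L1 line
            self.hits += 1
            self.victim.pop(block)
            evicted, self.l1[idx] = self.l1[idx], block
            if evicted is not None:
                self._victim_insert(evicted)
            return "victim"
        self.misses += 1                     # full miss: fill L1 from memory
        evicted, self.l1[idx] = self.l1[idx], block
        if evicted is not None:
            self._victim_insert(evicted)
        return "miss"

    def _victim_insert(self, block):
        if self.victim_entries == 0:
            return
        if len(self.victim) >= self.victim_entries:
            self.victim.popitem(last=False)  # evict least-recently inserted
        self.victim[block] = True

```

For a ping-pong pattern of two blocks that map to the same L1 set (e.g. blocks 0 and 4 with 4 sets), the victim cache converts what would be all conflict misses into hits after the first two accesses, which is exactly the behavior whose energy/benefit trade-off the paper's heuristic tunes per application.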