Details
ISBN (print): 9781450363211
In High Performance Computing, it is often useful to fine-tune an application by recompiling specific computationally intensive code fragments to leverage runtime knowledge. Traditional compilers rarely provide such capabilities, while solutions such as libVC allow C/C++ code to employ dynamic compilation. We evaluate the impact of introducing Just-in-Time (JIT) compilation in a framework supporting partial dynamic (re-)compilation of functions to provide continuous optimization in high-performance environments. We show that JIT solutions can deliver comparable code quality with smaller compilation overhead with respect to the libVC alternatives. We further demonstrate the strength of our approach against a state-of-the-art interpreter-based dynamic evaluation solution.
Much of the software in everyday operation is not making optimal use of the hardware on which it actually runs. Among the reasons for this discrepancy are hardware/software mismatches, modularization overheads introduced by software engineering considerations, and the inability of systems to adapt to users' behaviors. A solution to these problems is to delay code generation until load time. This is the earliest point at which a piece of software can be fine-tuned to the actual capabilities of the hardware on which it is about to be executed, and also the earliest point at which modularization overheads can be overcome by global optimization. A still better match between software and hardware can be achieved by replacing the already executing software at regular intervals with new versions constructed on-the-fly by a background code re-optimizer. This not only enables the use of live profiling data to guide optimization decisions, but also facilitates adaptation to changing usage patterns and the late addition of dynamic link libraries. This paper presents a system that provides code generation at load time and continuous program optimization at run time. First, the architecture of the system is presented. Then, two optimization techniques are discussed that were developed specifically in the context of continuous optimization. The first of these optimizations continually adjusts the storage layouts of dynamic data structures to maximize data cache locality, while the second performs profile-driven instruction re-scheduling to increase instruction-level parallelism. These two optimizations have very different cost/benefit ratios, presented in a series of benchmarks. The paper concludes with an outlook on future research directions and an enumeration of some remaining research problems. The empirical results presented in this paper make a case in favor of continuous optimization, but indicate that it needs to be applied judiciously. In many situations, the costs of dy…