Power modeling is an essential building block for computer systems in support of energy optimization, energy profiling, and energy-aware application development. We introduce VESTA, a novel approach to modeling the power consumption of applications with one key insight: language runtime events are often correlated with a sustained level of power consumption. Compared with the established approach of power modeling based on hardware performance counters (HPCs), VESTA has the benefit of requiring only application-scoped information and enabling a higher level of explainability, while achieving comparable or even higher precision. Through experiments on 37 real-world applications on the Java Virtual Machine (JVM), we find that the power model built by VESTA predicts energy consumption with a mean absolute percentage error of 1.56%, while the monitoring of language runtime events incurs only small performance and energy overheads.
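To make the key insight concrete, the following is a minimal sketch of a linear power model of the kind the abstract describes: each runtime event state is assumed to map to a sustained power level, and energy is predicted by integrating those levels over the time spent in each state. The event names and coefficients are purely illustrative assumptions, not VESTA's actual model.

```java
import java.util.Map;

/**
 * Minimal sketch (not VESTA's implementation): a linear power model in which
 * each language-runtime event state is assumed to correspond to a sustained
 * power level. Event names and coefficients are hypothetical.
 */
public class RuntimeEventPowerModel {

    /** Fitted power coefficient (watts) per runtime event state. */
    private final Map<String, Double> wattsPerState;

    public RuntimeEventPowerModel(Map<String, Double> wattsPerState) {
        this.wattsPerState = wattsPerState;
    }

    /** Predict energy (joules) from the time spent in each event state. */
    public double predictEnergyJoules(Map<String, Double> secondsPerState) {
        double joules = 0.0;
        for (Map.Entry<String, Double> e : secondsPerState.entrySet()) {
            double watts = wattsPerState.getOrDefault(e.getKey(), 0.0);
            joules += watts * e.getValue(); // power (W) * time (s) = energy (J)
        }
        return joules;
    }

    public static void main(String[] args) {
        // Hypothetical fitted coefficients, for illustration only.
        RuntimeEventPowerModel model = new RuntimeEventPowerModel(Map.of(
                "GC_PAUSE", 32.0,
                "JIT_COMPILATION", 28.0,
                "APPLICATION_THREADS", 45.0));

        double joules = model.predictEnergyJoules(Map.of(
                "GC_PAUSE", 1.2,
                "JIT_COMPILATION", 0.8,
                "APPLICATION_THREADS", 10.5));

        System.out.printf("Predicted energy: %.1f J%n", joules);
    }
}
```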
ISBN (Print): 9781450349482
Real-time 3D space understanding is becoming prevalent across a wide range of applications and hardware platforms. To meet the desired Quality of Service (QoS), computer vision applications tend to be heavily parallelized and exploit any available hardware accelerators. Current approaches to achieving real-time computer vision revolve around programming languages typically associated with High Performance Computing along with binding extensions for OpenCL or CUDA execution. Such implementations, although high performing, lack portability across the wide range of diverse hardware resources and accelerators. In this paper, we showcase how a complex computer vision application can be implemented within a managed runtime system. We discuss the complexities of achieving high-performing and portable execution across embedded and desktop configurations. Furthermore, we demonstrate that it is possible to achieve the QoS target of over 30 frames per second (FPS) by exploiting FPGA and GPGPU acceleration transparently through the managed runtime system.
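As an illustration of the portability argument, here is a hypothetical sketch of a per-pixel vision kernel written as plain, loop-based Java. Expressing kernels in this style keeps them portable, and a managed-runtime accelerator framework of the kind described above could compile such loop nests transparently for GPGPU or FPGA execution. The kernel and all names are illustrative assumptions, not the paper's code.

```java
/**
 * Minimal sketch: a portable per-pixel kernel (greyscale conversion) in plain
 * Java. The loop nest is a natural candidate for transparent parallel offload
 * by a managed-runtime acceleration framework. Illustrative only.
 */
public class GreyscaleKernel {

    /** Convert an RGB image (packed 0xRRGGBB ints) into a greyscale buffer. */
    public static void rgbToGrey(int[] rgb, float[] grey, int width, int height) {
        for (int y = 0; y < height; y++) {          // candidate for parallel offload
            for (int x = 0; x < width; x++) {
                int pixel = rgb[y * width + x];
                int r = (pixel >> 16) & 0xFF;
                int g = (pixel >> 8) & 0xFF;
                int b = pixel & 0xFF;
                grey[y * width + x] = 0.299f * r + 0.587f * g + 0.114f * b;
            }
        }
    }

    public static void main(String[] args) {
        int width = 320, height = 240;
        int[] rgb = new int[width * height];
        float[] grey = new float[width * height];
        rgbToGrey(rgb, grey, width, height);
        System.out.println("Processed " + (width * height) + " pixels");
    }
}
```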
ISBN (Print): 0769519954
Our everyday lives are becoming increasingly filled with mobile devices of varying capabilities. The common practice of creating multiple versions of the same application to cope with diverse device resource capabilities increases software development and maintenance costs. In this paper we discuss an offloading method to mask out the memory constraints on devices running a typical Java virtual machine. The method allows the garbage collector to selectively offload part of the object heap onto a nearby wired server. In comparison with traditional virtual memory techniques, the garbage collector can make wiser offloading choices using information about object access patterns at a finer granularity. Our experiments show that our prototype introduces modest overhead in the JVM while allowing applications to execute on devices without enough physical memory. In addition, when running with the Linux virtual memory system under intense memory constraints, the prototype achieves an average improvement of 24% in run-time performance and 53% in energy savings.
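The sketch below illustrates, under stated assumptions, the kind of offloading decision the abstract describes: during a collection, objects the collector has observed to be cold are pushed to a nearby server rather than kept in the device heap. The RemoteStore interface, the access-time bookkeeping, and the threshold are all hypothetical, not the paper's implementation.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of a GC-driven offloading decision. Objects not accessed
 * recently are offloaded to a nearby wired server; hot objects stay local.
 * Names, thresholds, and the RemoteStore interface are hypothetical.
 */
public class OffloadingCollector {

    /** Per-object bookkeeping the collector is assumed to maintain. */
    static final class HeapObject {
        final int id;
        final long lastAccessMillis;

        HeapObject(int id, long lastAccessMillis) {
            this.id = id;
            this.lastAccessMillis = lastAccessMillis;
        }
    }

    /** Hypothetical transport to the server holding offloaded objects. */
    interface RemoteStore {
        void push(HeapObject obj);
    }

    private static final long COLD_AFTER_MILLIS = 5_000; // illustrative threshold

    /** Returns the objects retained locally; cold objects are pushed remotely. */
    static List<HeapObject> collect(List<HeapObject> live, RemoteStore remote, long now) {
        List<HeapObject> retained = new ArrayList<>();
        for (HeapObject obj : live) {
            if (now - obj.lastAccessMillis > COLD_AFTER_MILLIS) {
                remote.push(obj);   // offload: free device memory for hot data
            } else {
                retained.add(obj);  // keep recently accessed objects on-device
            }
        }
        return retained;
    }

    public static void main(String[] args) {
        long now = System.currentTimeMillis();
        List<HeapObject> live = List.of(
                new HeapObject(1, now - 10_000),  // cold -> offloaded
                new HeapObject(2, now - 100));    // hot  -> retained
        List<HeapObject> kept = collect(live, obj ->
                System.out.println("Offloading object " + obj.id), now);
        System.out.println("Objects kept locally: " + kept.size());
    }
}
```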
We introduce the concept of compilation space as a new pivot for the comprehensive validation of just-in-time (JIT) compilers in modern language virtual machines (LVMs). The compilation space of a program encompasses a wide range of equivalent JIT-compilation choices, which can be cross-validated to ensure the correctness of the program's JIT compilations. To thoroughly explore the compilation space in a lightweight and LVM-agnostic manner, we strategically mutate test programs with JIT-relevant but semantics-preserving code constructs, aiming to provoke diverse JIT compilation optimizations. We primarily implement this approach in Artemis, a tool for validating Java virtual machines (JVMs). Within three months, Artemis discovered 85 bugs in three widely used production JVMs — HotSpot, OpenJ9, and the Android Runtime — of which 53 have already been confirmed or fixed and many have been classified as critical. It is noteworthy that all reported bugs concern JIT compilers, highlighting the effectiveness and practicality of our technique. Building on the promising results with JVMs, we experimentally applied our technique to a state-of-the-art JavaScript engine (JSE) fuzzer called Fuzzilli, aiming to augment it to find mis-compilation bugs without significantly sacrificing its ability to detect crashes. Our experiments demonstrate that our enhanced version of Fuzzilli, named Apollo, achieves comparable code coverage with considerably fewer generated programs while detecting a similar number of crashes. Additionally, Apollo uncovered four mis-compilations in JavaScriptCore and SpiderMonkey within seven days. Following Artemis' and Apollo's success, we expect that the generality and practicality of our approach will make it broadly applicable for understanding and validating the JIT compilers of other LVMs.
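The example below is an illustrative sketch of one JIT-relevant but semantics-preserving mutation of the kind the abstract describes, not Artemis' actual mutators: a hot loop whose result is forced to zero leaves the program's observable behaviour unchanged while steering the JVM toward different JIT-compilation choices, which can then be cross-validated against an unmutated run.

```java
/**
 * Illustrative sketch (not Artemis itself): a semantics-preserving,
 * JIT-relevant mutation. The inserted hot loop's result is masked to zero,
 * so run() still returns the same value, but the method now becomes a
 * candidate for on-stack-replacement compilation, exercising a different
 * point of the compilation space.
 */
public class MutatedTest {

    static int run(int x) {
        // --- inserted, semantics-preserving construct ------------------
        int sink = 0;
        for (int i = 0; i < 100_000; i++) {   // hot loop provokes JIT/OSR
            sink += i;
        }
        sink &= 0;                            // always 0: no effect on the result
        // ----------------------------------------------------------------
        return x * 2 + sink;                  // original computation preserved
    }

    public static void main(String[] args) {
        // Cross-validation idea: the mutated and unmutated programs, run under
        // different JIT-compilation choices, must print the same value.
        System.out.println(run(21));
    }
}
```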