The healthcare domain has undergone a huge transformation thanks to the availability of new technologies. In particular, health monitoring systems have entered everyday life without interfering with the daily routine....
Many serious games are most effective when played regularly; however, little is known about how individual game elements support player adherence over time. This work draws on evidence from existing frameworks and game ...
ISBN (print): 9798400713965
In recent years, Neural Networks (NNs) have become one of the most prevalent topics in computer science, both in research and in industry. NNs are used for data analysis, natural language processing, autonomous driving and more. As such, NNs also see increasing application and use in High-Performance Computing (HPC). At the same time, energy efficiency has become an increasingly critical topic. NNs use large amounts of energy for operation, which in turn results in large amounts of CO2 emissions. This work presents a comprehensive evaluation of current NN inference software and hardware configurations within HPC environments, with a focus on both performance metrics and energy consumption. NN quantization and accelerators such as FPGAs allow for increased inference efficiency, both in terms of throughput and energy. Therefore, this work focuses on FINN, an efficient NN inference framework for FPGAs, highlighting its current lack of support for HPC systems. We provide an in-depth analysis of FINN in order to implement extensions that optimize end-to-end execution for use in HPC environments. We thoroughly evaluate the performance and energy efficiency gains of the newly implemented optimizations and compare them against existing NN accelerators for HPC. With our extensions of FINN, we were able to achieve a 1847× higher throughput, while also decreasing the latency on average by 0.9978× and the EDP by 0.9979× on an Alveo U55C FPGA. Dataflow-based NN inference accelerators on an FPGA should be used if the performance and energy footprint of the inference process are crucial and the batch sizes are small to medium. For extremely large batch sizes and a very limited network-to-accelerator turnaround time (less than a few days), using GPUs is still the way to go. Our results show that with the newly developed driver, we outperform a high-end Nvidia A100 GPU by up to 7.81× in throughput, while having a 0.87× lower latency and 0.88× lower energy de
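For readers unfamiliar with how quantization trades numerical precision for throughput and energy, the sketch below shows plain uniform affine quantization in NumPy. It is an illustrative toy under assumed names and a 4-bit setting, not the FINN API.

```python
# Minimal sketch of uniform affine quantization, the arithmetic that lets
# frameworks such as FINN trade precision for throughput and energy.
# NOT the FINN API; function names and parameters are illustrative.
import numpy as np

def quantize(x, num_bits=8):
    """Map float values to unsigned integers with a per-tensor scale/zero-point."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()) / (qmax - qmin)
    zero_point = int(round(qmin - x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original float values."""
    return scale * (q.astype(np.float32) - zero_point)

weights = np.random.randn(64, 64).astype(np.float32)
q, s, z = quantize(weights, num_bits=4)   # low-bit weights, as commonly used with FINN
print("max abs error:", np.abs(weights - dequantize(q, s, z)).max())
```

FINN builds on this idea by compiling such reduced-precision networks into dataflow accelerators on the FPGA fabric.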
This paper addresses the problem of computing valuations of the Dieudonné determinants of matrices over discrete valuation skew fields (DVSFs). This problem is an extension of computing degrees of determinants of...
ISBN (print): 9781450383325
Time series data are ubiquitous. Rapid advances in diverse sensing technologies, ranging from remote sensors to wearables and social sensing, are generating a rapid growth in the size and complexity of time series archives. This has resulted in a fundamental shift away from parsimonious, infrequent measurement to nearly continuous monitoring and recording. This demands the development of new tools and solutions. The goals of this workshop are to: (1) highlight the significant challenges that underpin learning and mining from time series data (e.g., irregular sampling, spatiotemporal structure, and uncertainty quantification), (2) discuss recent algorithmic, theoretical, statistical, or systems-based developments for tackling these problems, and (3) synergize the research activities and discuss both new and open problems in time series analysis and mining.
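As a small, purely illustrative example of the first challenge named above (irregular sampling), the snippet below resamples irregularly timestamped readings onto a regular grid with pandas; the data and the 5-minute grid are assumptions.

```python
# Illustration of irregular sampling: readings arrive at uneven timestamps,
# and a common (but lossy) workaround is resampling onto a regular grid.
import pandas as pd

series = pd.Series(
    [21.0, 21.4, 22.1, 21.8],
    index=pd.to_datetime([
        "2021-06-01 00:00:03",
        "2021-06-01 00:04:41",
        "2021-06-01 00:05:09",
        "2021-06-01 00:12:57",
    ]),
)

# Resample onto a 5-minute grid and interpolate gaps; this hides the
# irregularity but can bias downstream learning, which is part of the challenge.
regular = series.resample("5min").mean().interpolate(method="time")
print(regular)
```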
ISBN (print): 9781450391290
Autonomous systems (ASes) exist in two dimensions on the Internet: the administrative and the operational one. Regional Internet Registries (RIRs) rule the former, while BGP rules the latter. In this work, we reconstruct the lives of the ASes on both dimensions, performing a joint analysis that covers 17 years of data. For the administrative dimension, we leverage delegation files published by RIRs to report the daily status of Internet resources they allocate. For the operational dimension, we characterize the temporal activity of ASNs in the Internet control plane using BGP data collected by the RouteViews and RIPE RIS projects. We present a methodology to extract insights about AS life cycles, including dealing with pitfalls affecting authoritative public datasets. We then perform a joint analysis to establish the relationship (or lack thereof) between these two dimensions for all allocated ASNs and all ASNs visible in BGP. We characterize the usual behaviors, specific differences between RIRs and historical resources, as well as measure the discrepancies between the two "parallel" lives. We find discrepancies and misalignment that reveal useful insights, and we highlight through examples the potential of this new lens to help pinpoint malicious BGP activity and various types of misconfigurations. This study illuminates a largely unexplored aspect of the Internet global routing system and provides methods and data to support broader studies that relate to security, policy, and network management.
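For the administrative dimension, RIR delegation ("delegated stats") files are pipe-separated text, one resource block per line. The sketch below shows one way such a file could be parsed to recover per-ASN allocation dates and statuses; the file name and field handling are illustrative assumptions, not the authors' pipeline.

```python
# Sketch: extract the administrative status of ASNs from an RIR delegation file
# (record format: registry|cc|type|start|value|date|status[|opaque-id]).
import csv
from collections import defaultdict

def asn_allocations(path):
    """Return {asn: (date, status)} for every ASN record in a delegation file."""
    allocations = {}
    with open(path, newline="") as fh:
        for row in csv.reader(fh, delimiter="|"):
            # Skip version headers, summary lines and non-ASN resources.
            if len(row) < 7 or row[2] != "asn":
                continue
            _registry, _cc, _type, start, count, date, status = row[:7]
            for offset in range(int(count)):      # a record may cover a block of ASNs
                allocations[int(start) + offset] = (date, status)
    return allocations

# Example: count ASNs per administrative status in one snapshot (file name assumed).
allocs = asn_allocations("delegated-ripencc-extended-latest")
by_status = defaultdict(int)
for _, status in allocs.values():
    by_status[status] += 1
print(dict(by_status))
```

Joining such per-day administrative snapshots with ASN visibility extracted from RouteViews/RIPE RIS BGP dumps is what enables the paper's comparison of the two dimensions.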
ISBN (print): 9781665433266
Simulations of Social Dynamics play an important role in public policy analysis and design as they help to estimate the effects that alternative policies can have on society. We argue for a tight integration of a novel generation of social dynamics meta-simulation tools into a policy development framework. Such tools provide a common, agent-based modeling and concurrent execution environment for creating and running such simulations. Furthermore, they enable reasoning about simulation outcomes by examining and ranking alternative policy objectives based on a set of criteria supplied by the designer. We describe our implementation of such an idea in Politika, our Elixir-based web framework for interactive, meta-simulation-based policy design, and provide appropriate examples as part of the PolicyCLOUD EU research project.
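Politika itself is Elixir-based; the sketch below only illustrates the underlying idea in Python: run an agent-based simulation per candidate policy, then rank the outcomes by designer-supplied weighted criteria. All names, parameters and numbers are hypothetical.

```python
# Toy illustration (not Politika's API): simulate each candidate policy with a
# simple agent-based model, then rank policies by weighted designer criteria.
import random

def simulate(policy, n_agents=1000, steps=50, seed=0):
    """Toy social-dynamics run: agents adopt a behaviour at a policy-dependent rate."""
    rng = random.Random(seed)
    adopted = 0
    for _ in range(steps):
        for _ in range(n_agents - adopted):
            if rng.random() < policy["incentive"]:
                adopted += 1
    return {"adoption": adopted / n_agents, "cost": policy["incentive"] * n_agents}

def rank(policies, criteria):
    """Score each policy's outcome by weighted criteria (positive weight = maximise)."""
    outcomes = {name: simulate(p) for name, p in policies.items()}
    def score(outcome):
        return sum(weight * outcome[key] for key, weight in criteria.items())
    return sorted(outcomes.items(), key=lambda kv: score(kv[1]), reverse=True)

policies = {"low_subsidy": {"incentive": 0.01}, "high_subsidy": {"incentive": 0.05}}
criteria = {"adoption": 1.0, "cost": -0.001}   # maximise adoption, penalise cost
print(rank(policies, criteria))
```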
ISBN (print): 9781450390873
Continuously monitoring a wide variety of performance and fault metrics has become a crucial part of operating large-scale datacenter networks. In this work, we ask whether we can reduce the costs to monitor - in terms of collection, storage and analysis - by judiciously controlling how much and which measurements we collect. By positing that we can treat almost all measured signals as sampled time-series, we show that we can use signal processing techniques such as the Nyquist-Shannon theorem to avoid wasteful data collection. We show that large savings appear possible by analyzing tens of popular measurement systems from a production datacenter network. We also discuss some challenges that must be solved when applying these techniques in practice.
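The sketch below illustrates the kind of reasoning the abstract describes on a made-up monitoring signal: estimate the highest significant frequency with an FFT and derive a Nyquist-safe polling interval. The synthetic signal and the 1% magnitude threshold are assumptions, not the authors' method.

```python
# Illustrative sketch: treat a metric as a sampled time series, find its highest
# significant frequency, and reduce the collection rate per Nyquist-Shannon.
import numpy as np

fs = 1.0                                  # current collection rate: 1 sample/second
t = np.arange(0, 3600, 1 / fs)
signal = 50 + 10 * np.sin(2 * np.pi * t / 300) + np.random.normal(0, 0.5, t.size)

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)

# Keep frequencies carrying at least 1% of the peak spectral magnitude (assumed threshold).
significant = freqs[spectrum >= 0.01 * spectrum.max()]
f_max = significant.max() if significant.size else fs / 2

nyquist_rate = 2 * f_max                  # minimum rate that avoids aliasing
print(f"highest significant frequency: {f_max:.4f} Hz")
print(f"could poll every {1 / nyquist_rate:.0f} s instead of every {1 / fs:.0f} s")
```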
ISBN (print): 9798350355543
As high-performance computing systems continue to advance, the gap between computing performance and I/O capabilities is widening. This bottleneck limits the storage capabilities of increasingly large-scale simulations, which generate data at never-before-seen granularities while only being able to store a small subset of the raw data. Recently, strategies for data-driven sampling have been proposed to intelligently sample the data in a way that achieves high data reduction rates while preserving important regions or features with high fidelity. However, a thorough analysis of how such intelligent samples can be used for data reconstruction is lacking. We propose a data-driven machine learning approach based on training neural networks to reconstruct full-scale datasets based on a simulation's sampled output. Compared to current state-of-the-art reconstruction approaches such as Delaunay triangulation-based linear interpolation, we demonstrate that our machine learning-based reconstruction has several advantages, including reconstruction quality, time-to-reconstruct, and knowledge transfer to unseen timesteps and grid resolutions. We propose and evaluate strategies that balance the sampling rates with model training (pretraining and fine-tuning) and data reconstruction time to demonstrate how such machine learning approaches can be tailored for both speed and quality for the reconstruction of grid-based datasets.
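The sketch below contrasts the two reconstruction routes mentioned above on a synthetic 2-D field: Delaunay-based linear interpolation (via scipy.interpolate.griddata) versus a small coordinate-to-value neural network. The field, the 5% sampling rate and the network size are illustrative assumptions, not the paper's setup.

```python
# Illustrative comparison: reconstruct a full grid from a sparse sample using
# (a) Delaunay-based linear interpolation and (b) a coordinate-to-value MLP.
import numpy as np
from scipy.interpolate import griddata
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
gx, gy = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
field = np.sin(4 * np.pi * gx) * np.cos(4 * np.pi * gy)      # "full-scale" ground truth

# Simulate a 5% sample of the raw simulation output.
idx = rng.choice(field.size, size=field.size // 20, replace=False)
pts = np.column_stack([gx.ravel()[idx], gy.ravel()[idx]])
vals = field.ravel()[idx]

# (a) Linear interpolation over the Delaunay triangulation of the sample points.
linear = griddata(pts, vals, (gx, gy), method="linear", fill_value=0.0)

# (b) Train an MLP mapping grid coordinates to field values, then query the full grid.
mlp = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=2000, random_state=0)
mlp.fit(pts, vals)
learned = mlp.predict(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)

for name, rec in [("delaunay", linear), ("mlp", learned)]:
    print(name, "RMSE:", float(np.sqrt(np.mean((rec - field) ** 2))))
```

Unlike the triangulation, the trained model can also be queried at resolutions or timesteps it was not fit on, which is the knowledge-transfer property the abstract highlights.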
Human-Machine Pair Inspection (HMPI) is a novel code inspection technology proposed in our previous work, in which the machine intelligently guides the programmer in carrying out inspections of the program...