Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available by leveraging information from annotated data in a source domain. Most deep UDA ap...
ISBN (digital): 9781728157757
ISBN (print): 9781728157764
Graphics processing units are viable solutions for high-performance safety-critical applications, such as self-driving cars. In this application domain, functional safety and reliability are major concerns, so the adoption of fault-tolerance techniques is mandatory to detect or correct faults: these devices must work properly even when faults are present. GPUs are designed and implemented with cutting-edge technologies, which makes them sensitive to faults caused by radiation interference, such as single event upsets. These effects can lead the system to a failure, which is unacceptable in safety-critical applications. Therefore, effective detection and mitigation strategies must be adopted to harden GPU operation. In this paper, we analyze transient effects in the pipeline registers of a GPU architecture. We run four applications on three GPU configurations, considering the source of each fault, its effect on the GPU, and the use of software-based hardening techniques. The evaluation was performed on a general-purpose soft-core GPU based on the NVIDIA G80 architecture. The results can guide designers in building more resilient GPU architectures.
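Software-based hardening techniques of the kind this abstract evaluates commonly rely on redundant execution with voting. As a minimal illustrative sketch (not the paper's actual implementation), software triple modular redundancy can be expressed as:

```python
def tmr_vote(compute, *args):
    """Run a computation three times and majority-vote the results.

    A minimal sketch of software-based triple modular redundancy (TMR):
    a transient fault (e.g. a single event upset) that corrupts one
    execution is outvoted by the two fault-free copies.
    """
    results = [compute(*args) for _ in range(3)]
    # Majority vote: return any value that appears at least twice.
    for candidate in results:
        if results.count(candidate) >= 2:
            return candidate
    # All three copies disagree: the fault is detected but not correctable.
    raise RuntimeError("TMR voter: no majority, uncorrectable fault detected")

# A fault-free computation trivially reaches consensus.
print(tmr_vote(lambda x: x * x, 7))  # → 49
```

Real GPU hardening schemes additionally protect control flow and memory accesses; this sketch only shows the voting principle on a pure computation.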
Driven by increasingly aggressive CMOS technology scaling, sub-wavelength lithography introduces ever more evident variability into the technology parameters of the semiconductor fabrication process. That variability results in otherwise identical designs displaying very different performance, power consumption, and lifespan once fabricated. Hence, process variability may lead to execution uncertainties, impacting the expected quality of service and energy efficiency of the running software. Since such uncertainties are intolerable in application domains such as automotive and avionic infotainment systems, it has become necessary to customize runtime engines with measures for variability awareness in task allocation decisions. The purpose of compensating for process variability is to avoid performance degradation and energy inefficiency. Customization takes place automatically by exporting the variability-impacted platform characteristics, such as the per-core manufactured clock frequency, so that the runtime library can perform variability-aware workload sharing across the target cores of the hardware platform. In this way, we can achieve noticeable optimization, not only in system performance and energy consumption, but also in productivity during systems development, testing, integration, and marketing. This paper presents a holistic approach, starting from a system model of the target multicore platform, through building and integrating the runtime library, to the optimization results achieved through the proposed runtime customization paradigm.
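One simple variability-aware workload-sharing policy consistent with the idea described here is to split work in proportion to each core's manufactured clock frequency. The following sketch is illustrative (the function name and the frequency values are assumptions, not the paper's runtime library):

```python
def share_workload(n_items, core_freqs_mhz):
    """Split n_items work items across cores in proportion to each
    core's manufactured clock frequency: a variability-aware policy
    where faster-binned cores receive proportionally more work.
    """
    total = sum(core_freqs_mhz)
    shares = [n_items * f // total for f in core_freqs_mhz]
    # Hand the integer-rounding remainder to the fastest cores first.
    remainder = n_items - sum(shares)
    order = sorted(range(len(shares)), key=lambda j: -core_freqs_mhz[j])
    for j in order[:remainder]:
        shares[j] += 1
    return shares

# Four nominally identical cores whose fabricated frequencies diverge
# due to process variability receive unequal shares of 1000 items.
print(share_workload(1000, [1800, 1500, 2000, 1700]))  # → [258, 214, 286, 242]
```

A production runtime would also weigh per-core energy characteristics and task dependencies; frequency-proportional sharing is only the baseline compensation step.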
This paper presents a methodology for managing saturation of the propulsion system while providing flexibility to the vehicle control unit. To reach this objective, an n-copter propeller system is described and a general dispatching law without constraints is derived. Furthermore, an optimal propeller problem is proposed and a possible solution using a recursive least squares method is presented.
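The abstract names recursive least squares as the solution method. A generic RLS update, applied here to a toy parameter-identification target (the function name, forgetting factor, and example system are illustrative assumptions, not the paper's formulation), can be sketched as:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.99):
    """One recursive least squares step: adjust the parameter estimate
    theta so that x @ theta tracks y, with forgetting factor lam.
    """
    x = x.reshape(-1, 1)
    K = P @ x / (lam + x.T @ P @ x)                    # gain vector
    theta = theta + K.flatten() * (y - x.flatten() @ theta)
    P = (P - K @ x.T @ P) / lam                        # covariance update
    return theta, P

# Identify a static 2-parameter map from streaming samples.
rng = np.random.default_rng(0)
true_theta = np.array([2.0, -1.0])
theta, P = np.zeros(2), np.eye(2) * 1e3
for _ in range(200):
    x = rng.standard_normal(2)
    theta, P = rls_update(theta, P, x, x @ true_theta)
print(np.round(theta, 3))   # converges toward [ 2. -1.]
```

In a propeller-dispatch setting, x would collect the commanded thrusts and y the measured response; the recursive form avoids refitting from scratch at every control step.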
This study describes three different data mining techniques for detecting abnormal lighting energy consumption using hourly recorded energy consumption and peak demand (maximum power) data. Two outlier detection methods are applied to each class and cluster to detect abnormal consumption in the same data set. In each class and cluster with anomalous consumption, the amount of deviation from normal is quantified using modified standard scores. The study will help building energy management systems reduce operating cost and time by removing the need to detect faults manually or to diagnose false warnings. In addition, it will be useful for developing a fault detection and diagnosis model for whole-building energy consumption.
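The modified standard scores mentioned here are commonly computed as Iglewicz–Hoaglin modified z-scores, based on the median and the median absolute deviation. A minimal sketch, assuming that variant and illustrative readings (not the study's data or thresholds):

```python
import statistics

def modified_z_scores(values):
    """Modified standard scores (Iglewicz–Hoaglin): robust z-scores
    built from the median and the median absolute deviation (MAD),
    typically flagging outliers at |score| > 3.5.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return [0.0 for _ in values]   # no spread: nothing stands out
    return [0.6745 * (v - med) / mad for v in values]

# Hourly lighting energy readings (kWh) with one abnormal spike.
readings = [4.1, 4.3, 4.0, 4.2, 4.4, 9.8, 4.1, 4.3]
scores = modified_z_scores(readings)
abnormal = [r for r, s in zip(readings, scores) if abs(s) > 3.5]
print(abnormal)   # → [9.8]
```

Because the median and MAD are insensitive to the outliers themselves, this score stays stable even when several hours in a cluster are anomalous, unlike a plain mean-based z-score.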
To promote the responsible development and use of data-driven technologies, such as machine learning and artificial intelligence, principles of trustworthiness, accountability, and fairness should be followed. The quality of the dataset on which these applications rely is crucial to achieving compliance with the required ethical principles. Quantitative approaches to measuring data quality are abundant in the literature and among practitioners; however, they are not sufficient to cover all the principles and ethical challenges. In this paper, we show that complementing data quality with measurable dimensions of data documentation and of data balance helps to cover a wider range of ethical challenges connected to the use of datasets in algorithms. A synthetic report of the applied metrics (the Extended Data Brief) and a set of Risk Labels for the Ethical Challenges provide a practical overview of the potential ethical harms due to data composition. We believe that the proposed data labelling scheme will enable practitioners to improve the overall quality of datasets and to build more responsible data-driven software systems.
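One way a measurable "data balance" dimension could be operationalized is as the normalized Shannon entropy of the label distribution. This sketch is an assumption for illustration; the paper's own metrics may be defined differently:

```python
import math
from collections import Counter

def balance_score(labels):
    """Normalized Shannon entropy of the label distribution: 1.0 for a
    perfectly balanced dataset, approaching 0.0 as one class dominates.
    One possible measurable 'data balance' dimension.
    """
    counts = Counter(labels).values()
    n = sum(counts)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts)
    max_entropy = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return entropy / max_entropy

print(balance_score(["a", "b", "a", "b"]))                  # → 1.0
print(round(balance_score(["a"] * 98 + ["b"] * 2), 3))      # → 0.141
```

A score like this could feed directly into the kind of risk labelling the paper proposes, e.g. attaching a "severe class imbalance" label when the score falls below a chosen threshold.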