Artificial Intelligence has profoundly impacted system and software engineering by facilitating the automation of complicated tasks, reducing errors, and accelerating the development process. AI-driven tools help enha...
Authors:
Grout, Ian; Mullin, Lenore
University of Limerick, Faculty of Science and Engineering, Department of Electronic and Computer Engineering, Limerick, Ireland
University at Albany, SUNY College of Engineering and Applied Sciences, Department of Computer Science, Albany, NY, United States
In this paper, the Mathematics of Arrays (MoA) approach to solving complex multi-dimensional array computations using array indexing is considered with reference to its implementation in code targeting embedded systems...
ISBN:
(Print) 9798400705052
The increasing focus on explainability within ethical AI, mandated by frameworks such as GDPR, highlights the critical need for robust explanation mechanisms in recommender systems (RS). A fundamental aspect of advancing such methods involves developing reproducible and quantifiable evaluation metrics. Traditional evaluation approaches involving human subjects are inherently non-reproducible, costly, subjective, and context-dependent. Furthermore, the complexity of AI models often transcends human comprehension capabilities, rendering it challenging for evaluators to ascertain the accuracy of explanations. Consequently, there is an urgent need for objective and scalable metrics that can accurately assess explanation methods in RS. Drawing inspiration from established practices in computer vision, this research introduces a counterfactual methodology to evaluate the accuracy of explanations in RS. Although counterfactual methods are well recognized in other fields, they remain relatively unexplored within the domain of recommender systems. This study aims to establish quantifiable metrics that objectively evaluate the correctness of local explanations, enabling a simple and reproducible approach to assessing explanations for recommender systems.
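The counterfactual idea in this abstract can be sketched concretely: if an explanation names the history items that caused a recommendation, removing those items should change the recommendation. The toy recommender, the `explain` callback, and the co-occurrence table below are all illustrative assumptions, not the paper's actual method or data.

```python
# Hypothetical sketch of a counterfactual fidelity metric for
# recommender-system explanations. All names and data are illustrative.

def recommend(history):
    """Toy recommender: suggest the candidate item with the highest
    co-occurrence score against the user's history (hard-coded table)."""
    co = {
        "A": {"X": 3, "Y": 1},
        "B": {"X": 2, "Y": 2},
        "C": {"Y": 4},
    }
    scores = {}
    for item in history:
        for cand, w in co.get(item, {}).items():
            scores[cand] = scores.get(cand, 0) + w
    return max(scores, key=scores.get) if scores else None

def counterfactual_fidelity(histories, explain):
    """Fraction of users whose recommendation changes once the items named
    by the explanation are removed. A high value suggests the explanation
    identified inputs that actually drive the recommendation."""
    changed = 0
    for h in histories:
        rec = recommend(h)
        counterfactual = [i for i in h if i not in explain(h, rec)]
        if recommend(counterfactual) != rec:
            changed += 1
    return changed / len(histories)

# A trivial explanation claiming every history item caused the recommendation.
full_explanation = lambda history, rec: set(history)

users = [["A", "B"], ["C"], ["A", "C"]]
score = counterfactual_fidelity(users, full_explanation)
```

Because the metric is computed entirely from model queries, it is reproducible and needs no human raters, which is the property the abstract argues for.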
In biomedicine, computer software is applied to medical images acquired by MRI scanning for content analysis. Preprocessing, removal of noise, processing images in order to detect objects and analysis of the...
Medical errors contribute significantly to morbidity and mortality, emphasizing the critical role of Clinical Guidelines (GLs) in patient care. Automating GL application can enhance GL adherence, improve patient outco...
Cloud-native technologies have been widely adopted, enabling organizations to build scalable, observable, resilient, and secure software systems within cloud environments. However, the rapid evolution of these technol...
ISBN:
(Print) 9783031505201; 9783031505218
The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the hosting DNN-controlled systems. Reachability analysis of these systems faces the same challenge. Existing approaches rely on over-approximating DNNs using simpler polynomial models. However, they suffer from low efficiency and large overestimation, and are restricted to specific types of DNNs. This paper presents a novel abstraction-based approach to bypass the crux of over-approximating DNNs in reachability analysis. Specifically, we extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training. The inserted abstraction layer ensures that the values represented by an interval are indistinguishable to the network for both training and decision-making. Leveraging this, we devise the first black-box reachability analysis approach for DNN-controlled systems, where trained DNNs are only queried as black-box oracles for the actions on abstract states. Our approach is sound, tight, efficient, and agnostic to any DNN type and size. The experimental results on a wide range of benchmarks show that the DNNs trained by using our approach exhibit comparable performance, while the reachability analysis of the corresponding systems becomes more amenable, with significant tightness and efficiency improvements over the state-of-the-art white-box approaches.
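The key mechanism described here, an abstraction layer that makes all values inside one interval indistinguishable to the network, can be illustrated with a minimal sketch. The grid-based interval abstraction, the interval width `d`, and the stand-in `policy` function below are assumptions for illustration; the paper's actual layer and training procedure are not reproduced.

```python
# Illustrative sketch of an interval abstraction layer: each real-valued
# state component is snapped to the midpoint of its interval of width d,
# so every concrete state inside the same box yields an identical input
# to the (black-box) policy. Names and the grid scheme are assumptions.
import math

def abstract_state(state, d=0.5):
    """Map each component x to the midpoint of [k*d, (k+1)*d) containing x."""
    return tuple(math.floor(x / d) * d + d / 2 for x in state)

def policy(abstract):
    """Stand-in for a trained DNN queried only as a black-box oracle
    on abstract states (here: a trivial threshold rule)."""
    return "left" if sum(abstract) < 0 else "right"

# Two distinct concrete states falling in the same interval box
# must receive the same action, which is what makes the black-box
# reachability analysis on abstract states sound.
s1, s2 = (0.10, -0.30), (0.20, -0.40)
a1 = policy(abstract_state(s1))
a2 = policy(abstract_state(s2))
```

Because the reachability analysis only needs to enumerate abstract boxes and query the oracle once per box, it never inspects the network internals, which is why the approach is agnostic to DNN type and size.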
An integrated monitoring system for power dispatching and control in the power market is proposed. The logical structure, software structure and configuration structure are analyzed. The software and hardware monitoring of p...
Network systems function based on rules that are intrinsically dynamic, subject to temporal circumstances set by outside events like host utilisation, bandwidth measurements, intrusion detection, or time-specific even...
This study addresses the issue of double-frequency fluctuations caused by cascade converters in high-voltage DC transmission systems by designing a system that uses a 4.3 kV, 50 Hz three-phase AC input and ultimately ou...