As deep learning has been successfully deployed in diverse applications, there is an ever-increasing need to explain its decisions. Case-based reasoning has proved effective for this purpose in many areas. Prototype-based explanation is a method that explains a model's prediction through the distances between an input and learned prototypes, thereby performing case-based reasoning. However, existing methods are unreliable because these distances are not always consistent with human perception. In this study, we construct a latent space, which we call an explanation space, using distributional embedding and latent space regularization. The explanation space ensures that images that are similar in terms of human-interpretable features share similar latent representations, so that the distance-based explanation remains consistent with human perception. The explanation space also provides an additional explanation by transition, allowing the user to understand which factors affect the distance. Through extensive experiments, including a human evaluation, we show that the explanation space provides a more human-understandable explanation.
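As a rough illustration of the distance-based prototype reasoning this abstract describes (not the paper's actual architecture), the following sketch explains a prediction by comparing a latent embedding against learned prototypes; the toy encoder, the prototype count, and the exponential similarity are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of distance-based prototype explanation (illustrative only;
# the encoder, prototype count, and distance choice are assumptions, not the
# paper's exact design).

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder: a single nonlinear map standing in for a deep network."""
    return np.tanh(x @ W)

n_features, latent_dim, n_prototypes = 16, 8, 4
W = rng.normal(size=(n_features, latent_dim))
prototypes = rng.normal(size=(n_prototypes, latent_dim))  # learned in practice

x = rng.normal(size=(1, n_features))   # one input example
z = encode(x, W)                       # its latent representation

# Distance of the input to every prototype in the latent space; the
# prediction is explained by pointing at the nearest prototype(s).
dists = np.linalg.norm(z - prototypes, axis=1)
similarity = np.exp(-dists)            # closer prototype -> higher score

print("distances:", np.round(dists, 3))
print("most similar prototype:", int(similarity.argmax()))
```

The paper's point is that such an explanation is only trustworthy if small latent distances correspond to images humans also judge as similar, which is what the proposed explanation space is regularized to guarantee.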
The prediction interval (PI) is a common way to represent predictive uncertainty in regression with deep neural networks. This paper proposes an extension of the prediction interval to a union of disjoint intervals. Because previous PI methods assume a single-interval PI (one lower and one upper bound), they suffer from degraded uncertainty estimates when the conditional density function is multi-modal. This paper demonstrates the need to account for multi-modality in uncertainty estimation for regression. To address the issue, we propose a novel method that generates a union of disjoint PIs. In experiments on UCI benchmarks, the proposed method improves over current state-of-the-art uncertainty quantification methods, reducing the average PI width by over 27%. Qualitative experiments show that multi-modality often exists in real-world datasets and that our method produces higher-quality PIs than existing PI methods.
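A small sketch of why a union of disjoint intervals can be much tighter than a single interval under a bimodal predictive distribution; the bimodal samples and the histogram-based density estimate are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Bimodal predictive samples: a single 90% PI must span the empty region
# between the two modes, while a union of disjoint intervals need not.
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-3, 0.5, 5000), rng.normal(3, 0.5, 5000)])

# Single-interval 90% PI from central quantiles.
lo, hi = np.quantile(samples, [0.05, 0.95])
print(f"single PI: [{lo:.2f}, {hi:.2f}], width {hi - lo:.2f}")

# Union-of-intervals PI: keep the highest-density histogram bins until they
# cover 90% of the mass, then merge adjacent kept bins into intervals.
counts, edges = np.histogram(samples, bins=200)
keep = np.zeros_like(counts, dtype=bool)
mass = 0.0
for i in np.argsort(counts)[::-1]:
    keep[i] = True
    mass += counts[i] / counts.sum()
    if mass >= 0.9:
        break

intervals, start = [], None
for i, k in enumerate(keep):
    if k and start is None:
        start = edges[i]
    if not k and start is not None:
        intervals.append((start, edges[i]))
        start = None
if start is not None:
    intervals.append((start, edges[-1]))

total = sum(b - a for a, b in intervals)
print("disjoint PI:", [(round(a, 2), round(b, 2)) for a, b in intervals])
print(f"total width {total:.2f}")
```

Running this, the single interval covers the low-density gap between the modes, while the disjoint union achieves the same coverage with roughly half the total width, which is the effect the abstract's 27% width reduction reflects.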
Purpose: Mobile C-arm systems are the standard imaging devices in spine surgery. In addition to 2D imaging, they allow for 3D scans while preserving unrestricted patient access. For viewing, the acquired volumes are adjusted so that their anatomical standard planes align with the axes of the viewing modality. This difficult and time-consuming step is currently performed manually by the leading surgeon. This work automates the step to improve the usability of C-arm systems; the method must account for a spinal region consisting of multiple vertebrae, with the standard planes of all vertebrae being of interest to the surgeon. Methods: An object detection algorithm based on the you only look once version 3 (YOLOv3) architecture, adapted to 3D inputs, is compared with a segmentation-based approach employing a 3D U-Net. Both algorithms are trained on a dataset of 440 volumes and tested on 218 spinal volumes. Results: While the detection-based algorithm is slightly inferior in detection (91% versus 97% accuracy), localization (1.26 mm versus 0.74 mm error) and alignment accuracy (5.00 deg versus 4.73 deg error), it outperforms the segmentation-based approach in speed (5 s versus 38 s). Conclusions: Both algorithms show similarly good results. However, the speed advantage of the detection-based algorithm, with a run time of 5 s, makes it more suitable for use in an intra-operative scenario.
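For concreteness, a minimal sketch of the two evaluation metrics quoted above: localization error in millimetres between vertebra centers, and alignment error in degrees between standard-plane normals. The vector values are made up for illustration; this is not the papers' evaluation code.

```python
import numpy as np

# Localization error: Euclidean distance between predicted and ground-truth
# vertebra centers, in mm (hypothetical values).
pred_center = np.array([10.2, 35.1, 80.4])
true_center = np.array([10.9, 34.6, 81.0])
loc_error = np.linalg.norm(pred_center - true_center)

# Alignment error: angle between predicted and ground-truth standard-plane
# normals, in degrees (hypothetical values).
pred_normal = np.array([0.05, 0.10, 0.99])
true_normal = np.array([0.00, 0.02, 1.00])
cosang = np.dot(pred_normal, true_normal) / (
    np.linalg.norm(pred_normal) * np.linalg.norm(true_normal))
align_error = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"localization error: {loc_error:.2f} mm")
print(f"alignment error: {align_error:.2f} deg")
```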
Understanding neural networks is challenging due to their high-dimensional, interacting components. Inspired by human cognition, which processes complex sensory data by chunking it into recurring entities, we propose ...
In addition to the impressive predictive power of machine learning (ML) models, explanation methods have more recently emerged that enable an interpretation of complex nonlinear learning models such as deep neural networks. Gaining a better understanding is especially important, e.g., for safety-critical ML applications or medical diagnostics. Although such explainable artificial intelligence (XAI) techniques have reached significant popularity for classifiers, little attention has thus far been devoted to XAI for regression models (XAIR). In this review, we clarify the fundamental conceptual differences between XAI for regression and for classification tasks, establish novel theoretical insights and analysis for XAIR, demonstrate XAIR on genuine practical regression problems, and finally discuss the challenges remaining for the field.
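As a minimal sketch of one of the simplest attribution techniques such a review covers, the snippet below computes gradient-times-input relevances for a regression network; the tiny random model and the zero reference baseline are illustrative assumptions, not the review's recommended method.

```python
import torch

# Gradient x input attribution for a regression model: unlike classification,
# there is no class score to explain, so relevance is computed relative to a
# reference output (here, the prediction at a zero baseline).
torch.manual_seed(0)
model = torch.nn.Sequential(
    torch.nn.Linear(5, 16), torch.nn.Tanh(), torch.nn.Linear(16, 1))

x = torch.randn(1, 5, requires_grad=True)
y = model(x).squeeze()       # scalar regression output
y.backward()                 # dy/dx for every input feature

baseline = torch.zeros_like(x)
relevance = x.grad * (x - baseline)   # gradient x (input - reference)

print("prediction:", y.item())
print("per-feature relevance:", relevance.detach().numpy().round(3))
```

The choice of reference point is exactly one of the conceptual differences between XAI for regression and for classification that the review analyzes.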