ISBN:
(Print) 9783030438234; 9783030438227
With the increasing number of deep learning applications, there is a growing demand for explanations. Visual explanations provide information about which parts of an image are relevant for a classifier's decision. However, highlighting of image parts (e.g., an eye) cannot capture the relevance of a specific feature value for a class (e.g., that the eye is wide open). Furthermore, highlighting cannot convey whether the classification depends on the mere presence of parts or on a specific spatial relation between them. Consequently, we present an approach that is capable of explaining a classifier's decision in terms of logic rules obtained by the inductive logic programming system Aleph. The examples and the background knowledge needed for Aleph are based on the explanation generation method LIME. We demonstrate our approach with images of a blocksworld domain. First, we show that our approach is capable of identifying a single relation as an important explanatory construct. Afterwards, we present the more complex relational concept of towers. Finally, we show how the generated relational rules can be explicitly related to the input image, resulting in richer explanations.
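The abstract describes deriving Aleph's examples and background knowledge from LIME output. A minimal sketch of what such a bridge could look like, assuming hypothetical predicate names (relevant/2, left_of/2, above/2) and a placeholder relevance threshold; the authors' actual vocabulary and pipeline may differ:

```python
# Hypothetical sketch: turn LIME's per-superpixel relevance weights into
# ground facts that an ILP system such as Aleph could consume. The predicate
# names and the 0.1 threshold are illustrative assumptions.

def segments_to_aleph_facts(img_id, segments, threshold=0.1):
    """segments: {seg_id: {"weight": float, "cx": float, "cy": float}},
    as might be extracted from a lime_image explanation."""
    facts = []
    relevant = {s: d for s, d in segments.items() if d["weight"] >= threshold}
    for s in sorted(relevant):
        facts.append(f"relevant({img_id}, seg{s}).")
    ids = sorted(relevant)
    for a in ids:                       # spatial relations between relevant parts
        for b in ids:
            if a == b:
                continue
            if relevant[a]["cx"] < relevant[b]["cx"]:
                facts.append(f"left_of(seg{a}, seg{b}).")
            if relevant[a]["cy"] < relevant[b]["cy"]:
                facts.append(f"above(seg{a}, seg{b}).")
    return facts

# Example: two relevant blocks, one stacked on the other.
facts = segments_to_aleph_facts("img1", {
    0: {"weight": 0.8, "cx": 10, "cy": 5},
    1: {"weight": 0.6, "cx": 10, "cy": 20},
})
print("\n".join(facts))   # relevant/2 facts plus above(seg0, seg1).
```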
ISBN:
(Print) 9783030582845; 9783030582852
Explainable AI has emerged to be a key component for black-box machine learning approaches in domains with a high demand for reliability or transparency. Examples are medical assistant systems and applications concerned with the General Data Protection Regulation of the European Union, which features transparency as a cornerstone. Such demands require the ability to audit the rationale behind a classifier's decision. While visualizations are the de facto standard of explanations, they fall short in terms of expressiveness in many ways: they cannot distinguish between different attribute manifestations of visual features (e.g. eye open vs. closed), and they cannot accurately describe the influence of the absence of, and relations between, features. An alternative would be more expressive symbolic surrogate models. However, these require symbolic inputs, which are not readily available in most computer vision tasks. In this paper we investigate how to overcome this: we use inherent features learned by the network to build a global, expressive, verbal explanation of the rationale of a feed-forward convolutional deep neural network (DNN). The semantics of the features are mined by a concept analysis approach trained on a set of human-understandable visual concepts. The explanation is found by an inductive logic programming (ILP) method and presented as first-order rules. We show that our explanation is faithful to the original black-box model (the code for our experiments is available at https://***/mc-lovin-mlem/concept-embeddings-and-ilp/tree/ki2020).
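The concept-analysis step the abstract mentions is commonly realised as a linear probe on intermediate activations. A hedged sketch of that idea, with random stand-in data and a placeholder concept label; the paper's actual concept-embedding method may differ substantially:

```python
# Sketch of a concept probe: a linear model trained on intermediate DNN
# activations to detect a human-understandable concept (e.g. "eye open").
# Its binary outputs are the kind of symbolic input an ILP learner needs.
# Activations, labels, and predicate names are placeholder assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts = rng.normal(size=(200, 512))        # stand-in for layer activations
labels = (acts[:, 0] > 0).astype(int)     # stand-in concept annotations

probe = LogisticRegression(max_iter=1000).fit(acts, labels)

def symbolize(activation_batch):
    """Map raw activations to ground atoms usable as ILP background facts."""
    present = probe.predict(activation_batch)
    return [f"has_concept(img{i}, eye_open)." for i, p in enumerate(present) if p]

print(symbolize(acts[:5]))
```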
ISBN:
(Print) 9783030391973; 9783030391966
We present a fast and scalable algorithm to induce non-monotonic logic programs from statistical learning models. We reduce the problem of search for best clauses to instances of the High-Utility Itemset Mining (HUIM) problem. In the HUIM problem, feature values and their importance are treated as transactions and utilities respectively. We make use of TreeExplainer, a fast and scalable implementation of the Explainable AI tool SHAP, to extract locally important features and their weights from ensemble tree models. Our experiments with UCI standard benchmarks suggest a significant improvement in terms of classification evaluation metrics and training time compared to ALEPH, a state-of-the-art inductive logic programming (ILP) system.
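The reduction described here treats locally important feature values as transaction items and their SHAP weights as utilities. A minimal sketch of that extraction step, using shap's real TreeExplainer API but a placeholder dataset and an assumed median-split discretization:

```python
# Hedged sketch: local SHAP weights from a tree ensemble become
# (item, utility) pairs so clause search can be cast as high-utility
# itemset mining. Dataset and discretization are illustrative only.
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X[:10])
if isinstance(sv, list):          # older shap: list of per-class arrays
    pos = sv[1]
elif sv.ndim == 3:                # newer shap: (n, features, classes)
    pos = sv[:, :, 1]
else:
    pos = sv

# One "transaction" per example: items are discretized feature values,
# utilities are absolute SHAP weights.
transactions = []
for i, row in enumerate(pos):
    items = [(f"f{j}={'hi' if X[i, j] > np.median(X[:, j]) else 'lo'}", abs(w))
             for j, w in enumerate(row) if abs(w) > 1e-3]
    transactions.append(items)
print(transactions[0][:3])
```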
ISBN:
(Digital) 9783030492106
ISBN:
(Print) 9783030492090; 9783030492106
Machine Learning (ML) approaches can achieve impressive results, but many lack transparency or have difficulties handling data of high structural complexity. The class of ML known as inductive logic programming (ILP) draws on the expressivity and rigour of subsets of first-order logic to represent both data and models. When description logics (DL) are used, the approach can be applied directly to knowledge represented as ontologies. ILP output is a prime candidate for explainable artificial intelligence; the cost is computational complexity. We have recently demonstrated how a critical component of ILP learners in DL, namely cover set testing, can be sped up through the use of concurrent processing. Here we describe the first prototype of an ILP learner in DL that benefits from this use of concurrency. The result is a fast, scalable tool that can be applied directly to large ontologies.
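Cover set testing scores each candidate hypothesis against every example, which is embarrassingly parallel. A minimal sketch of the concurrency idea (not the authors' prototype), with a trivial subset test standing in for the DL instance check:

```python
# Sketch: candidate hypotheses are farmed out to a process pool for
# cover-set testing. covers() is a placeholder for the DL instance-of
# check against an ontology that the paper actually speeds up.
from multiprocessing import Pool

def covers(hypothesis, example):
    # Placeholder: trivial subset test on feature sets.
    return hypothesis <= example

def cover_count(args):
    hypothesis, examples = args
    return sum(covers(hypothesis, e) for e in examples)

if __name__ == "__main__":
    examples = [frozenset({"a", "b"}), frozenset({"a"}), frozenset({"b", "c"})]
    candidates = [frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})]
    with Pool() as pool:
        scores = pool.map(cover_count, [(h, examples) for h in candidates])
    print(dict(zip(candidates, scores)))
```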
We present NrSample, a framework for program synthesis in inductive logic programming. NrSample uses propositional logic constraints to exclude undesirable candidates from the search. This is achieved by representing constraints as propositional formulae and solving the associated constraint satisfaction problem. We present a variety of such constraints: pruning, input-output, functional (arithmetic), and variable splitting. NrSample is also capable of detecting search space exhaustion, leading to further speedups in clause induction and optimality. We benchmark NrSample against enumeration search (Aleph's default) and Progol's A* search in the context of program synthesis. The results show that, on large program synthesis problems, NrSample induces between 1 and 1358 times faster than enumeration (236 times faster on average), always with similar or better accuracy. Compared to Progol A*, NrSample is 18 times faster on average with similar or better accuracy except for two problems: one in which Progol A* substantially sacrificed accuracy to induce faster, and one in which Progol A* was a clear winner. Functional constraints provide a speedup of up to 53 times (21 times on average) with similar or better accuracy. We also benchmark using a few concept learning (non-program synthesis) problems. The results indicate that without strong constraints, the overhead of solving constraints is not compensated for.
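NrSample's core move is to encode which candidate clauses are admissible as propositional constraints and let a SAT solver propose the next candidate. An illustrative sketch using the python-sat library (a real package; the variable numbering and example constraints are assumptions, not NrSample's actual encoding):

```python
# Sketch: propositional variables 1..3 stand for three candidate body
# literals; recorded constraints exclude undesirable combinations, and a
# SAT model yields the next unexcluded candidate. UNSAT means the search
# space is exhausted, which is how optimality can be detected.
from pysat.solvers import Glucose3   # pip install python-sat

solver = Glucose3()
solver.add_clause([1, 2, 3])         # at least one literal in the body
solver.add_clause([-1, -2])          # pruning constraint: literals 1 and 2 clash
solver.add_clause([-3])              # literal 3 already failed an input-output test

if solver.solve():
    model = solver.get_model()
    chosen = [v for v in model if v > 0]
    print("next candidate body literals:", chosen)
else:
    print("search space exhausted")
solver.delete()
```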
Professor Koichi Furukawa, an eminent computer scientist and former Editor-in-Chief of the New Generation Computing journal, passed away on January 31, 2017. His passing was a surprise, and we were all shocked and saddened by the news. To remember the deceased, this article reviews the great career and contributions of Professor Koichi Furukawa, focusing on his research activities on the foundation and application of logic programming. Professor Furukawa had both a deep understanding of and a broad impact on logic programming, and he was always gentle but persistent in articulating its value across a broad spectrum of computer science and artificial intelligence research. This article introduces his research along with its insightful and unique philosophical framework.
Our interest in this paper is in the construction of symbolic explanations for predictions made by a deep neural network. We will focus attention on deep relational machines (DRMs: a term introduced in Lodhi (2013)). A DRM is a deep network in which the input layer consists of Boolean-valued functions (features) that are defined in terms of relations provided as domain, or background, knowledge. Our DRMs differ from those in Lodhi (2013), which uses an inductive logic programming (ILP) engine to first select features (we use random selections from a space of features that satisfies some approximate constraints on logical relevance and non-redundancy). But why do the DRMs predict what they do? One way of answering this was provided in recent work by Ribeiro et al. (2016), by constructing readable proxies for a black-box predictor. The proxies are intended only to model the predictions of the black-box in local regions of the instance-space. But readability alone may not be enough: to be understandable, the local models must use relevant concepts in a meaningful manner. We investigate the use of a Bayes-like approach to identify logical proxies for local predictions of a DRM. As a preliminary step, we show that DRMs with our randomised propositionalization method achieve predictive performance that is comparable to the best reports in the ILP literature. Our principal results on logical explanations show: (a) models in first-order logic can approximate the DRM's prediction closely in a small local region; and (b) expert-provided relevance information can play the role of a prior to distinguish between logical explanations that perform equivalently on prediction alone.
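A hedged sketch of the randomised propositionalisation step the abstract describes: Boolean features are drawn at random from background relations and kept only if they pass simple relevance (true for some example) and non-redundancy (not constant) checks. The toy relations and examples below are placeholders, not the paper's datasets or its exact constraint definitions:

```python
# Sketch: random Boolean features over ground background facts form the
# input layer of a DRM. Feature form (relation + tail pattern) is an
# illustrative simplification of relational feature construction.
import random

background = {   # ground facts keyed by relation; first argument = example id
    "bond":      {("m1", "a", "b"), ("m2", "a", "c"), ("m1", "b", "c")},
    "atom_type": {("m1", "a", "carbon"), ("m2", "a", "nitrogen")},
}

def random_feature(rng):
    """Boolean feature: 'example has a fact with this relation and tail'."""
    rel = rng.choice(sorted(background))
    tail = rng.choice(sorted({f[1:] for f in background[rel]}))
    return rel, tail

def holds(feature, example):
    rel, tail = feature
    return (example, *tail) in background[rel]

rng = random.Random(0)
examples = ["m1", "m2"]
features = []
while len(features) < 4:
    f = random_feature(rng)
    col = [holds(f, e) for e in examples]
    # keep if relevant (true somewhere), non-redundant (not constant), new
    if any(col) and not all(col) and f not in features:
        features.append(f)

# Boolean input layer for the DRM:
print([[int(holds(f, e)) for f in features] for e in examples])
```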
ISBN:
(Print) 9781450383073
This paper presents an extended abstract for the PhD topic Inducing Rules about Distributed Robotic Systems for Fault Detection & Diagnosis. The research focuses on developing novel methods for fault detection and diagnosis using explainable machine learning. The main field of application is distributed robotic systems. With current developments in distributed robot technology, the problem of detecting and diagnosing faults becomes more complex.
Handling relational data streams has become a crucial task, given the availability of pervasive sensors and Internet-produced content, such as social networks and knowledge graphs. In a relational environment, this is a particularly challenging task, since one cannot assure that the streams of examples are independent along the iterations. Thus, most relational learning systems are still designed to learn only from closed batches of data. Furthermore, when a previously acquired model exists, these systems would either discard it or assume it to be correct. In this work, we propose an online relational learning algorithm that can handle continuous, open-ended streams of relational examples as they arrive. We employ techniques of theory revision to take advantage of the previously acquired model as a starting point, by finding where it should be modified to cope with the new examples, and automatically updating it. We rely on the Hoeffding bound from statistical theory to decide whether the model must, in fact, be updated in accordance with the new examples. The proposed algorithm is built upon the ProPPR statistical relational language, aiming to accommodate the uncertainty inherent in real data. Experimental results on social network and entity co-reference datasets show the potential of the proposed approach compared to other relational learners.
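The revision decision rests on the standard Hoeffding bound: with n observations of a quantity ranging over an interval of size R, the true mean lies within epsilon = sqrt(R^2 ln(1/delta) / (2n)) of the observed mean with probability 1 - delta. A minimal sketch of that test, with placeholder accuracy figures:

```python
# Sketch of a Hoeffding-bound revision test: revise the current theory only
# when a candidate revision's observed advantage exceeds epsilon.
import math

def hoeffding_epsilon(R, delta, n):
    return math.sqrt((R ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_revise(acc_revised, acc_current, n, R=1.0, delta=1e-4):
    """True iff the revision's advantage is statistically supported."""
    return (acc_revised - acc_current) > hoeffding_epsilon(R, delta, n)

print(should_revise(0.83, 0.78, n=500))    # False: eps ~ 0.096 > 0.05 gain
print(should_revise(0.83, 0.78, n=5000))   # True:  eps ~ 0.030 < 0.05 gain
```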
Recently, world-class human players have been outperformed in a number of complex two-person games (Go, Chess, Checkers) by Deep Reinforcement Learning systems. However, the data efficiency of the learning systems is unclear, given that they appear to require far more training games to achieve such performance than any human player might experience in a lifetime. In addition, the resulting learned strategies are not in a form which can be communicated to human players. This contrasts with earlier research in Behavioural Cloning, in which single-agent skills were machine-learned in a symbolic language, facilitating their being taught to human beings. In this paper, we consider Machine Discovery of human-comprehensible strategies for simple two-person games (Noughts-and-Crosses and Hexapawn). One advantage of considering simple games is that there is a tractable approach to calculating minimax regret. We use these games to compare Cumulative Minimax Regret for variants of both standard and deep reinforcement learning against two variants of a new Meta-Interpretive Learning system called MIGO. In our experiments, the tested variants of both standard and deep reinforcement learning have consistently worse performance (higher cumulative minimax regret) than both variants of MIGO on Noughts-and-Crosses and Hexapawn. In addition, MIGO's learned rules are relatively easy to comprehend, and are demonstrated to achieve significant transfer learning in both directions between Noughts-and-Crosses and Hexapawn.
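Minimax regret is tractable here because Noughts-and-Crosses is small enough to solve exactly: each move's regret is the game-theoretic value forfeited relative to the best move, and cumulative regret sums this over a learner's moves. A compact sketch, with an illustrative board encoding that is an assumption of this example, not the paper's:

```python
# Sketch: exact minimax for Noughts-and-Crosses (board = 9-char string,
# 'x'/'o'/'.') and per-move regret relative to the optimal move. Summing
# move_regret over a game trace gives cumulative minimax regret.
from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def value(b, player):
    """Minimax value from x's viewpoint: +1 x wins, -1 o wins, 0 draw."""
    w = winner(b)
    if w:
        return 1 if w == "x" else -1
    if "." not in b:
        return 0
    vals = [value(b[:i] + player + b[i+1:], "o" if player == "x" else "x")
            for i in range(9) if b[i] == "."]
    return max(vals) if player == "x" else min(vals)

def move_regret(b, player, move):
    after = b[:move] + player + b[move+1:]
    nxt = "o" if player == "x" else "x"
    best, got = value(b, player), value(after, nxt)
    return (best - got) if player == "x" else (got - best)

b = "xx.oo...."                    # x to move; the winning move is cell 2
print(value(b, "x"))               # 1: x has a forced win
print(move_regret(b, "x", 5))      # 1: playing cell 5 forfeits the win (draw at best)
```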