Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, uniting the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the inductive logic programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations, correct classification decisions, and also correct the explanations themselves. Thereby, expert knowledge can be taken into account in the form of constraints for model adaptation.
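To make the idea of expert corrections acting as constraints concrete, here is a minimal Python sketch (not the authors' Aleph extension; the clause and literal names are invented) of how a corrected explanation could prune candidate clauses during model adaptation:

```python
# Illustrative sketch only: an expert's correction of an explanation becomes a
# constraint that filters the learner's candidate clauses.

from dataclasses import dataclass

@dataclass(frozen=True)
class Clause:
    head: str
    body: frozenset  # body literals, e.g. {"irregular_border(X)"}

def violates(clause, forbidden_literals, required_literals):
    """Reject a clause that uses a literal the expert marked as irrelevant,
    or that misses a literal the expert marked as necessary."""
    return bool(clause.body & forbidden_literals) or not required_literals <= clause.body

def adapt_model(candidate_clauses, forbidden_literals, required_literals):
    """Keep only clauses consistent with the expert's corrected explanation."""
    return [c for c in candidate_clauses
            if not violates(c, forbidden_literals, required_literals)]

candidates = [
    Clause("tumour(X)", frozenset({"irregular_border(X)", "artifact(X)"})),
    Clause("tumour(X)", frozenset({"irregular_border(X)", "high_density(X)"})),
]
# Expert feedback: "artifact" must not be part of the explanation.
print(adapt_model(candidates, forbidden_literals={"artifact(X)"},
                  required_literals=frozenset()))
```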
There have been significant efforts to understand, describe, and predict the social commerce intention of users in the areas of social commerce and web data management. Building on recent developments in knowledge graphs and inductive logic programming in artificial intelligence, this paper proposes a knowledge-graph-based social commerce intention analysis method. In particular, a knowledge base is constructed to represent the social commerce environment by integrating information on social relationships, social commerce factors, and domain background knowledge. Knowledge graphs are used to represent and visualize the entities and relationships relevant to social commerce, while inductive logic programming techniques are used to discover implicit information that helps interpret the information behaviors and intentions of users. Evaluation tests confirmed the effectiveness of the proposed method, as well as the feasibility of using knowledge graphs and knowledge-based data mining techniques in the social commerce environment.
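As a rough illustration of how rules over a knowledge graph can be evaluated, the following Python sketch (with invented triples and predicate names, not the paper's knowledge base) scores one candidate ILP-style rule by its confidence over a tiny triple store:

```python
# Sketch under assumed data: knowledge base of social-commerce triples and a
# confidence check for the rule
#   intends_to_buy(U, P) :- trusts(U, F), recommends(F, P).

triples = {
    ("alice", "trusts", "bob"),
    ("bob", "recommends", "camera"),
    ("alice", "intends_to_buy", "camera"),
    ("carol", "trusts", "bob"),
}

def rule_confidence(triples):
    covered, correct = 0, 0
    for (u, r1, f) in triples:
        if r1 != "trusts":
            continue
        for (f2, r2, p) in triples:
            if r2 == "recommends" and f2 == f:
                covered += 1
                correct += (u, "intends_to_buy", p) in triples
    return correct / covered if covered else 0.0

print(rule_confidence(triples))  # 0.5: alice buys the camera, carol does not
```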
ISBN (digital): 9783030492106
ISBN (print): 9783030492090; 9783030492106
Propositionalization is the process of summarizing relational data into a tabular (attribute-value) format. The resulting table can next be used by any propositional learner. This approach makes it possible to apply a wide variety of learning methods to relational data. However, the transformation from relational to propositional format is generally not lossless: different relational structures may be mapped onto the same feature vector. At the same time, features may be introduced that are not needed for the learning task at hand. In general, it is hard to define a feature space that contains all and only those features that are needed for the learning task. This paper presents LazyBum, a system that can be considered a lazy version of the recently proposed OneBM method for propositionalization. LazyBum interleaves OneBM's feature construction method with a decision tree learner. This learner both uses and guides the propositionalization process. It indicates when and where to look for new features. This approach is similar to what has elsewhere been called dynamic propositionalization. In an experimental comparison with the original OneBM and with two other recently proposed propositionalization methods (nFOIL and MODL, which respectively perform dynamic and static propositionalization), LazyBum achieves a comparable accuracy with a lower execution time on most of the datasets.
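The following Python sketch illustrates the general idea of propositionalization on assumed toy tables: a one-to-many relation is summarized into aggregate columns that a propositional decision tree can consume. LazyBum itself would construct such columns lazily, only where the tree learner asks for them, rather than up front as in OneBM:

```python
# Static propositionalization sketch on invented customer/order tables.

import pandas as pd
from sklearn.tree import DecisionTreeClassifier

customers = pd.DataFrame({"cust_id": [1, 2, 3], "churned": [0, 1, 0]})
orders = pd.DataFrame({"cust_id": [1, 1, 2, 3, 3, 3],
                       "amount": [10, 30, 5, 20, 25, 40]})

# Summarize the one-to-many relation into attribute-value features.
features = orders.groupby("cust_id")["amount"].agg(["count", "mean", "max"])
table = customers.join(features, on="cust_id").fillna(0)

# Any propositional learner can now use the flattened table.
clf = DecisionTreeClassifier(max_depth=2).fit(
    table[["count", "mean", "max"]], table["churned"])
print(clf.predict(table[["count", "mean", "max"]]))
```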
ISBN (print): 9781467315098
A new genetic inductive logic programming (GILP) algorithm named PT-NFF-GILP (Phase Transition and New Fitness Function based Genetic Inductive Logic Programming) is proposed in this paper. Based on the phase transition of the covering test, PT-NFF-GILP randomly generates its initial population in the phase transition region instead of the whole space of candidate clauses. Moreover, a new fitness function is defined, which considers not only the number of examples covered by a rule but also the ratio of covered examples to the training examples. The new fitness function measures the quality of first-order rules more precisely and enhances the search performance of the algorithm. Experiments on ten learning problems show that: 1) the new method of generating the initial population effectively reduces the number of iterations and enhances the predictive accuracy of the GILP algorithm; 2) the new fitness function measures the quality of first-order rules more precisely and avoids generating over-specific hypotheses; 3) PT-NFF-GILP outperforms the algorithms it is compared with, such as G-NET, kFOIL, and nFOIL.
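The abstract does not give the exact formula, but a fitness function in the described spirit could combine a coverage term with the ratio of covered examples to the training set, for instance as below (the penalty on covered negatives and the weighting are added assumptions):

```python
# Hypothetical fitness function combining the two ingredients described above.

def fitness(covered_pos, covered_neg, n_train, alpha=1.0, beta=1.0):
    coverage = covered_pos - covered_neg           # how well the rule separates
    ratio = (covered_pos + covered_neg) / n_train  # how general the rule is
    return alpha * coverage + beta * ratio

# An over-specific rule covering 2 positives scores lower than a more general
# rule covering 20 positives and 3 negatives on a 100-example training set.
print(fitness(2, 0, 100), fitness(20, 3, 100))
```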
We focus on the problem of inducing logic programs that explain models learned by the support vector machine (SVM) algorithm. Top-down sequential covering inductive logic programming (ILP) algorithms (e.g., FOIL) apply hill-climbing search using heuristics from information theory, and a major issue with this class of algorithms is getting stuck in local optima. In our new approach, the data-dependent hill-climbing search is replaced with a model-dependent search: a globally optimal SVM model is trained first, the algorithm then treats the support vectors as the most influential data points in the model, and it induces a clause that covers each support vector together with the points most similar to it. Instead of defining a fixed hypothesis search space, our algorithm uses SHAP, an example-specific interpreter from explainable AI, to determine a relevant set of features. This yields an algorithm that captures the SVM model's underlying logic and outperforms other ILP algorithms in terms of the number of induced clauses and classification evaluation metrics.
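A minimal sketch of this pipeline using scikit-learn and the shap package on a public dataset; the clause construction at the end is a simplified stand-in for the paper's ILP step, not the authors' implementation:

```python
# Train an SVM, explain one support vector with SHAP, and build a clause-like
# description over the most relevant features.

import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
svm = SVC(kernel="rbf").fit(X, y)

# Explain the decision function around a single support vector with SHAP.
sv = X[svm.support_[0]]
explainer = shap.KernelExplainer(svm.decision_function, shap.sample(X, 50))
phi = explainer.shap_values(sv.reshape(1, -1), nsamples=100)[0]

# Keep only the most relevant features for this example.
top = np.argsort(np.abs(phi))[-3:]

# Clause as a conjunction of interval tests on those features, wide enough to
# also cover the points nearest to the support vector.
neighbours = X[np.argsort(np.linalg.norm(X - sv, axis=1))[:10]]
clause = {f"f{j}": (neighbours[:, j].min(), neighbours[:, j].max()) for j in top}
print(clause)
```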
ISBN (print): 9783030438234; 9783030438227
From a set of technical drawings, we learn a parser program to interpret the tabular data contained in such drawings. This enables automatic reasoning and learning on top of a database of technical drawings, for example to help designers find or complete designs more easily.
ISBN (print): 9783030316358; 9783030316341
Learning production rules from continuous data streams, e.g. surgical videos, is a challenging problem. To learn such rules, we present a novel framework consisting of deep learning models and an inductive logic programming (ILP) system for learning the surgical workflow entities needed in subsequent surgical tasks, e.g. "what kind of instruments will be needed in the next step?" As a prototypical scenario, we analyzed the Robot-Assisted Partial Nephrectomy (RAPN) workflow. To verify our framework, consistent and complete rules were first learnt from the video annotations; these rules can classify the RAPN surgical workflow and its temporal sequence at high granularity, e.g. at the level of steps. Having found that the RAPN workflow is hierarchical, we used a combination of learned predicates representing the workflow hierarchy to predict information about the next step, followed by a classification of step sequences with deep learning models. The predicted rules for the RAPN workflow were verified by an expert urologist and conform to the standard RAPN workflow.
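As a toy illustration (step and instrument names are hypothetical, not taken from the paper), learned temporal-sequence and instrument predicates could be queried like this once a deep model has classified the current step from video:

```python
# Answering "what instruments are needed in the next step?" from learned
# workflow predicates; all names below are invented for illustration.

next_step = {                      # learned temporal-sequence rules
    "dissect_hilum": "clamp_artery",
    "clamp_artery": "excise_tumour",
}
instruments_for = {                # learned step -> instrument predicates
    "clamp_artery": {"bulldog_clamp"},
    "excise_tumour": {"scissors", "suction"},
}

def predict_next_instruments(current_step):
    step = next_step.get(current_step)
    return instruments_for.get(step, set())

print(predict_next_instruments("dissect_hilum"))  # {'bulldog_clamp'}
```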
ISBN (print): 9783030456900; 9783030456917
The present work surveys research that successfully integrates a number of complementary fields in Artificial Intelligence. Starting from integrations in Reinforcement Learning, namely Deep Reinforcement Learning and Relational Reinforcement Learning, we then present Neural-Symbolic Learning and Reasoning as it is applied to Deep Reinforcement Learning. Finally, we present integrations within Deep Reinforcement Learning, such as Relational Deep Reinforcement Learning. We propose that this line of work is breaking through barriers in Reinforcement Learning and bringing us closer to Artificial General Intelligence, and we share views on the current challenges on the way towards this goal.
ISBN (print): 9781728189567
The increasing importance of resource-efficient production entails that manufacturing companies have to create a more dynamic production environment, with flexible manufacturing machines and processes. To fully utilize this potential of dynamic manufacturing through automatic production planning, formal skill descriptions of the machines are essential. However, generating those skill descriptions manually is labor-intensive and requires extensive domain knowledge. In this contribution, an ontology-based, semi-automatic skill description system that utilizes production logs and industrial ontologies through inductive logic programming is introduced, and the benefits and drawbacks of the proposed solution are evaluated.
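A hedged sketch of the underlying idea, with an invented log format: pre- and postconditions of a machine skill are induced as the regularities shared by all log entries for that skill, the kind of pattern an ILP learner could then generalize against an industrial ontology:

```python
# Inducing a simple skill description (precondition / effect) from production
# log snapshots; the log schema and skill names are assumptions.

logs = [
    {"skill": "drill", "before": {"clamped": True, "hole": False}, "after": {"hole": True}},
    {"skill": "drill", "before": {"clamped": True, "hole": False}, "after": {"hole": True}},
]

def induce_skill(entries):
    # Keep only the state facts that hold in every observed execution.
    pre = set.intersection(*[set(e["before"].items()) for e in entries])
    post = set.intersection(*[set(e["after"].items()) for e in entries])
    return {"precondition": dict(pre), "effect": dict(post)}

print(induce_skill([e for e in logs if e["skill"] == "drill"]))
```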