ISBN (Print): 9798350348798; 9798350348804
The integration of inductive logic programming (ILP) and Bottlenose Dolphin Optimization (BDO) in this research addresses a pressing issue in today's information-saturated landscape: the proliferation of fake news. In an era where misleading information can spread rapidly, traditional methods often fall short in effectively identifying deceptive content. To combat this challenge, our approach harnesses the synergies of ILP and BDO. ILP plays a crucial role in constructing logical rules that capture intricate relationships within news data. In doing so, it delves deep into the content, seeking out patterns and inconsistencies that may not be obvious at first glance. BDO, in turn, takes inspiration from the social behavior of bottlenose dolphins to optimize the process of generating these rules. Just as dolphins collaborate and communicate to solve complex problems, BDO helps refine the logical rules for better accuracy. Ultimately, this research underscores the potential of bio-inspired optimization, such as BDO, combined with the precision of inductive logic programming to strengthen the integrity of information dissemination platforms. In an age where the veracity of information is paramount, this approach offers a promising solution to combat the spread of fake news and promote the dissemination of authentic, reliable information.
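To make the rule-refinement idea concrete, here is a minimal sketch that optimizes candidate rule bodies (sets of article features) with a simplified pod-style update in which candidates adopt predicates from the current best candidate. The feature names, fitness function, and update rule are illustrative assumptions, not the paper's BDO formulation.

```python
import random

# Toy labelled articles: each is a set of boolean features plus a label.
# The feature names are illustrative assumptions, not the paper's predicates.
ARTICLES = [
    ({"clickbait_title", "anonymous_source"}, "fake"),
    ({"clickbait_title", "cites_study"}, "fake"),
    ({"named_source", "cites_study"}, "real"),
    ({"named_source", "quotes_expert"}, "real"),
]
FEATURES = sorted({f for feats, _ in ARTICLES for f in feats})

def fitness(rule):
    """Accuracy of the rule 'fake if all predicates in the body hold'."""
    correct = 0
    for feats, label in ARTICLES:
        predicted = "fake" if rule and rule <= feats else "real"
        correct += predicted == label
    return correct / len(ARTICLES)

def pod_search(pod_size=6, iterations=20, seed=0):
    rng = random.Random(seed)
    # Each pod member is a candidate rule body: a set of feature predicates.
    pod = [set(rng.sample(FEATURES, rng.randint(1, 2))) for _ in range(pod_size)]
    for _ in range(iterations):
        leader = max(pod, key=fitness)
        for i, rule in enumerate(pod):
            if rule is leader:
                continue
            new_rule = set(rule)
            # Move toward the leader by adopting one of its predicates ...
            if leader - new_rule:
                new_rule.add(rng.choice(sorted(leader - new_rule)))
            # ... and occasionally explore by dropping a predicate.
            if len(new_rule) > 1 and rng.random() < 0.3:
                new_rule.discard(rng.choice(sorted(new_rule)))
            if fitness(new_rule) >= fitness(rule):
                pod[i] = new_rule
    return max(pod, key=fitness)

best = pod_search()
print("learned rule body:", best, "accuracy:", fitness(best))
```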
Inductive logic programming (ILP) is a form of logic-based machine learning. The goal is to induce a hypothesis (a logic program) that generalises given training examples and background knowledge. As ILP turns 30, we review the last decade of research. We focus on (i) new meta-level search methods, (ii) techniques for learning recursive programs, (iii) new approaches for predicate invention, and (iv) the use of different technologies. We conclude by discussing current limitations of ILP and directions for future research.
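The ILP setting described above (background knowledge, positive and negative examples, an induced hypothesis) can be sketched with a tiny brute-force learner; the "daughter" relations below are a standard textbook-style example, and the enumeration strategy is purely illustrative rather than how any real ILP system searches.

```python
from itertools import combinations

# Background knowledge (ground facts) in the spirit of the classic
# "daughter" example; the relations and names are illustrative.
female = {"mary", "ann"}
parent = {("tom", "mary"), ("tom", "ann"), ("mary", "bob")}

# Training examples for the target relation daughter(X, Y).
positive = {("mary", "tom"), ("ann", "tom")}
negative = {("tom", "mary"), ("bob", "mary"), ("ann", "bob")}

# Candidate body literals, each evaluated on a pair (x, y).
LITERALS = {
    "female(X)": lambda x, y: x in female,
    "female(Y)": lambda x, y: y in female,
    "parent(X,Y)": lambda x, y: (x, y) in parent,
    "parent(Y,X)": lambda x, y: (y, x) in parent,
}

def covers(body, example):
    x, y = example
    return all(LITERALS[lit](x, y) for lit in body)

def induce():
    """Return the smallest body covering all positives and no negatives."""
    names = list(LITERALS)
    for size in range(1, len(names) + 1):
        for body in combinations(names, size):
            if (all(covers(body, e) for e in positive)
                    and not any(covers(body, e) for e in negative)):
                return body
    return None

print("daughter(X,Y) :-", ", ".join(induce()))
```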
ISBN (Print): 9798350307092
Graph clustering is a popular method to understand networks and make them accessible for downstream machine learning tasks. Especially for large networks, the results of graph clustering are difficult to explain, since visualizations can be chaotic and the metric used for clustering can be non-interpretable. We propose a method that uses inductive logic programming to provide a post-hoc, local explanation for the results of an arbitrary graph clustering algorithm. Our interactive approach is based on three steps: (1) formalizing graph data as input for the inductive logic programming system Popper, (2) qualifying entities from the graph as instances for the variables in the resulting formal logic statements, and (3) re-integrating user feedback on the provided, comprehensible statements into Popper. Compared to other benchmark graph clustering explanation approaches, the proposed method shows superior interpretability of the learned rules and clauses, as well as superior interactivity, on six benchmark datasets.
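A minimal sketch of step (1), turning a clustered graph into Popper-style input, is shown below; the toy graph, the target predicate in_cluster/1, and the bk.pl/exs.pl/bias.pl layout with head_pred/body_pred declarations follow Popper's usual conventions but should be treated as assumptions rather than the paper's exact encoding.

```python
# Step (1): turn a clustered graph into Popper-style input files.
# The file layout (bk.pl, exs.pl, bias.pl) and the head_pred/body_pred
# declarations are assumptions here, not the paper's exact encoding.
edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e")]
clusters = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1}
target_cluster = 0  # explain membership in this cluster

bk = "\n".join(f"edge({u},{v})." for u, v in edges)

exs = "\n".join(
    f"pos(in_cluster({n}))." if c == target_cluster else f"neg(in_cluster({n}))."
    for n, c in clusters.items()
)

bias = "\n".join([
    "head_pred(in_cluster,1).",
    "body_pred(edge,2).",
])

for name, text in [("bk.pl", bk), ("exs.pl", exs), ("bias.pl", bias)]:
    print(f"--- {name} ---\n{text}\n")
```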
Many inductive logic programming (ILP) methods are incapable of learning programs from probabilistic background knowledge, for example, coming from sensory data or neural networks with probabilities. We propose Propper, which handles flawed and probabilistic background knowledge by extending ILP with a combination of neurosymbolic inference, a continuous criterion for hypothesis selection (binary cross-entropy) and a relaxation of the hypothesis constrainer (NoisyCombo). For relational patterns in noisy images, Propper can learn programs from as few as 8 examples. It outperforms binary ILP and statistical models such as a graph neural network.
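The continuous selection criterion can be sketched as follows: candidate hypotheses are scored by binary cross-entropy against probabilistic coverage values (for example, propagated from a neural predicate) instead of hard covered/uncovered counts. The hypotheses and probabilities below are invented for illustration and are not Propper's implementation.

```python
import math

def bce(labels, probs, eps=1e-9):
    """Mean binary cross-entropy between 0/1 labels and coverage probabilities."""
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for y, p in zip(labels, probs)
    ) / len(labels)

labels = [1, 1, 1, 0, 0]  # positive / negative examples
hypotheses = {
    # probability that each hypothesis covers each example,
    # e.g. propagated from a neural predicate (values invented here)
    "h1": [0.9, 0.8, 0.7, 0.2, 0.1],
    "h2": [0.6, 0.9, 0.4, 0.5, 0.3],
}

scores = {h: round(bce(labels, p), 3) for h, p in hypotheses.items()}
best = min(scores, key=scores.get)
print(scores, "-> select", best)
```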
ISBN (Print): 9798400704635
This paper explores a hybrid approach to the multimodal co-construction of explanations for robot faults, integrating inductive logic programming (ILP) and Large Language Models (LLMs). As AI and robotics continue to permeate various aspects of daily life, the ability of these systems to explain their actions and failures is crucial for fostering user trust and ensuring safe interactions. We propose a framework that combines the interpretability of ILP, which generates logical rules from data, with the linguistic capabilities of LLMs, which provide natural language explanations. This approach enables the generation of coherent, contextually appropriate explanations that can be tailored to the needs of users.
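A minimal sketch of the division of labour is given below: an ILP-learned diagnostic rule and the observed facts are embedded in a prompt, and the natural-language wording is delegated to an LLM. The rule, fact names, and prompt text are illustrative assumptions, and the actual LLM call is left abstract.

```python
# An ILP-learned rule is verbalised by an LLM. The rule, facts, and prompt
# wording are illustrative assumptions; the LLM call itself is left abstract.
learned_rule = "fault(gripper) :- grasp_attempted(Obj), slip_detected(Obj), force(Obj, low)."
observed_facts = [
    "grasp_attempted(mug).",
    "slip_detected(mug).",
    "force(mug, low).",
]

def build_explanation_prompt(rule, facts):
    return (
        "You are helping a robot explain a fault to a non-expert user.\n"
        f"Learned diagnostic rule: {rule}\n"
        "Observed facts:\n" + "\n".join(f"- {f}" for f in facts) + "\n"
        "Explain in one or two plain-English sentences why the fault occurred."
    )

prompt = build_explanation_prompt(learned_rule, observed_facts)
print(prompt)
# The prompt would then be sent to an LLM of choice; the symbolic rule keeps
# the generated explanation grounded in what the ILP component actually learned.
```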
Efficient omission of symmetric solution candidates is essential for combinatorial problem-solving. Most of the existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings might be problematic, since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver. To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the inductive logic programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
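The contrast between ground, instance-specific SBCs and a lifted first-order constraint can be sketched on a toy graph-colouring encoding; the constraint scheme (node i may only use the first i colours) and the auxiliary node_index/colour_index predicates are illustrative assumptions, not the constraints the framework actually learns.

```python
# Toy graph-colouring encoding: ground, instance-specific SBCs versus one
# lifted first-order constraint. The constraint scheme and the auxiliary
# index predicates are illustrative assumptions, not learned output.
def propositional_sbcs(nodes, colours):
    """Ground constraints: node i may only use one of the first i+1 colours."""
    sbcs = []
    for i, node in enumerate(nodes):
        for j, colour in enumerate(colours):
            if j > i:
                sbcs.append(f":- colour({node},{colour}).")
    return sbcs

# The ground constraints must be recomputed for every new instance ...
for c in propositional_sbcs(["n1", "n2"], ["red", "green", "blue"]):
    print(c)

# ... whereas the lifted, interpretable constraint transfers across instances:
print(":- colour(N,C), node_index(N,I), colour_index(C,J), J > I.")
```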
ISBN (Print): 9798400701078
This paper presents an interactive system for exploring and editing logic-based machine learning models specialised for the relational reasoning problem domain. Prior work has highlighted the value of visual interfaces for enabling effective user interaction during model training. However, these existing systems require two-dimensional tabular data and are not well-suited to relational machine learning tasks. Logic-based methods, such as those developed in the field of inductive logic programming, can address this; they retain relational information by operating directly on raw relational data while remaining inherently interpretable and editable to allow for human intervention. However, such systems require logical expertise to operate effectively and do not enable visual exploration. We aim to address this; taking design inspiration from equivalent interfaces for propositional learning, we present a visual interface that enhances the usability of inductive logic programming systems for domain experts without a background in computational logic.
ISBN (Digital): 9783031492990
ISBN (Print): 9783031492983; 9783031492990
Learning settings are crucial for most inductive logic programming (ILP) systems to learn efficiently. Hypothesis spaces can be huge, and ILP systems may take a long time to output solutions or even fail to terminate within time limits. Therefore, users must choose suitable learning settings for each ILP task to bring out the best performance of the system. However, most users struggle to set appropriate settings for a task they see for the first time. In this paper, we propose a method to make an ILP system more adaptable to tasks with weak learning biases. In particular, we attempt to learn efficient strategies for an ILP system using reinforcement learning (RL). We use Popper, a state-of-the-art ILP system that implements the concept of learning from failures (LFF). We introduce RL-Popper, which divides the hypothesis space into subspaces more finely than Popper. RL is used to learn an efficient search order over the divided spaces. We provide the details of RL-Popper and show some empirical results.
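The idea of learning a search order over hypothesis subspaces can be sketched with a simple epsilon-greedy bandit; the subspace definitions, simulated search outcomes, and reward shaping below are illustrative assumptions, not RL-Popper's actual state, action, or reward design.

```python
import random

# Hypothesis subspaces a learner could search, ordered roughly by size.
# The subspaces, simulated outcomes, and reward shaping are illustrative.
SUBSPACES = ["size<=3, no recursion", "size<=5, no recursion",
             "size<=5, recursion", "size<=7, recursion"]
SOLVE_PROB = {"size<=3, no recursion": 0.1, "size<=5, no recursion": 0.4,
              "size<=5, recursion": 0.6, "size<=7, recursion": 0.7}

def search_subspace(name, rng):
    """Stand-in for running the ILP search in one subspace (simulated)."""
    solved = rng.random() < SOLVE_PROB[name]
    cost = 1 + SUBSPACES.index(name)  # larger subspaces cost more to search
    return solved, cost

def epsilon_greedy(episodes=200, epsilon=0.2, seed=1):
    rng = random.Random(seed)
    value = {s: 0.0 for s in SUBSPACES}  # estimated reward per subspace
    counts = {s: 0 for s in SUBSPACES}
    for _ in range(episodes):
        s = (rng.choice(SUBSPACES) if rng.random() < epsilon
             else max(SUBSPACES, key=lambda k: value[k]))
        solved, cost = search_subspace(s, rng)
        reward = (1.0 if solved else 0.0) - 0.05 * cost
        counts[s] += 1
        value[s] += (reward - value[s]) / counts[s]  # incremental mean
    return sorted(SUBSPACES, key=lambda k: value[k], reverse=True)

print("learned search order:", epsilon_greedy())
```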
ISBN (Print): 9783031291258; 9783031291265
Inductive logic programming (ILP) is an inductive reasoning method based on first-order predicate logic. The technology is widely used for data mining with symbolic artificial intelligence. ILP searches for a suitable hypothesis that covers the positive examples and excludes the negative examples. For practical problems, this search incurs a high execution cost because many given examples must be interpreted. In this paper, we propose a new hypothesis search method using particle swarm optimization (PSO). PSO is a meta-heuristic algorithm based on the behavior of particles. In our approach, each particle repeatedly moves from one hypothesis to another within the hypothesis space, and hypotheses are refined based on the value returned by a predefined evaluation function. Since PSO searches only part of the hypothesis space, it speeds up the execution of ILP. To demonstrate the effectiveness of our method, we implemented it on Progol, one of the ILP systems [6], and conducted numerical experiments. The results showed that our method reduced the hypothesis search time compared to conventional Progol.
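A minimal sketch of PSO over a hypothesis space is given below: each particle is a bit vector selecting candidate body literals, scored by positive coverage minus negative coverage, with a sigmoid-based binary PSO update. The literals, toy data, and scoring are illustrative assumptions, not the paper's Progol-based implementation.

```python
import math
import random

# Each particle is a bit vector selecting body literals for a hypothesis,
# scored by positive coverage minus negative coverage. The literals, toy
# data, and scoring are illustrative assumptions.
rng = random.Random(0)

LITERALS = ["has_feathers(X)", "lays_eggs(X)", "flies(X)", "has_fur(X)"]
POS = [{0, 1, 2}, {0, 1}]  # indices of literals true for positive examples
NEG = [{1, 2}, {3}]        # indices of literals true for negative examples

def evaluate(bits):
    chosen = {i for i, b in enumerate(bits) if b}
    if not chosen:
        return -len(POS)
    covered_pos = sum(chosen <= ex for ex in POS)
    covered_neg = sum(chosen <= ex for ex in NEG)
    return covered_pos - covered_neg

def binary_pso(n_particles=8, iters=30):
    dim = len(LITERALS)
    pos = [[rng.randint(0, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    best = [p[:] for p in pos]            # personal best positions
    gbest = max(pos, key=evaluate)[:]     # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (best[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                prob = 1 / (1 + math.exp(-vel[i][d]))  # sigmoid -> bit flip
                pos[i][d] = 1 if rng.random() < prob else 0
            if evaluate(pos[i]) > evaluate(best[i]):
                best[i] = pos[i][:]
            if evaluate(pos[i]) > evaluate(gbest):
                gbest = pos[i][:]
    return [LITERALS[d] for d in range(dim) if gbest[d]]

print("best hypothesis body:", binary_pso())
```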
In recent research, human-understandable explanations of machine learning models have received a lot of attention. Often, explanations are given in the form of model simplifications or visualizations. However, as shown in cognitive science as well as in early AI research, concept understanding can also be improved by contrasting a given instance of a concept with a similar counterexample. Contrasting a given instance with a structurally similar example which does not belong to the concept highlights which characteristics are necessary for concept membership. Such near misses were proposed by Winston (Learning structural descriptions from examples, 1970) as efficient guidance for learning in relational domains. We introduce an explanation generation algorithm for relational concepts learned with inductive logic programming (GeNME). The algorithm identifies near-miss examples from a given set of instances and ranks these examples by their degree of closeness to a specific positive instance. A modified rule which covers the near miss but not the original instance is given as an explanation. We illustrate GeNME on the well-known family domain consisting of kinship relations, the visual relational Winston arches domain, and a real-world domain dealing with file management. We also present a psychological experiment comparing human preferences for rule-based, example-based, and near-miss explanations in the family and arches domains.
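The near-miss idea can be sketched in the Winston arches domain: starting from a positive instance, single structural modifications are generated, those no longer covered by the concept are kept as near misses, and they are ranked by how little was changed. The relations and the arch rule below are illustrative assumptions, not GeNME's actual algorithm.

```python
# Near misses in the Winston arches domain: apply single structural edits to a
# positive instance, keep those the concept no longer covers, and rank them by
# how little was changed. The relations and the arch rule are illustrative.
def is_arch(scene):
    return ("supports(left,top)" in scene
            and "supports(right,top)" in scene
            and "touches(left,right)" not in scene)

arch = frozenset({"supports(left,top)", "supports(right,top)"})
vocabulary = {"supports(left,top)", "supports(right,top)", "touches(left,right)"}

def single_edits(scene, vocab):
    """All scenes differing from `scene` by adding or removing one fact."""
    for fact in vocab:
        yield (scene - {fact}) if fact in scene else (scene | {fact})

near_misses = [s for s in single_edits(arch, vocabulary) if not is_arch(s)]
for s in sorted(near_misses, key=lambda s: len(s ^ arch)):  # rank by closeness
    print("near miss:", sorted(s), "| differs by:", sorted(s ^ arch))
```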