Learning first-order logic programs from relational facts yields intuitive insights into the data. Inductive logic programming (ILP) models are effective at learning first-order logic programs from observed relational data. Symbolic ILP models support rule learning in a data-efficient manner; however, they are not robust when learning from noisy data. Neuro-symbolic ILP models use neural networks to learn logic programs in a differentiable manner, which improves the robustness of ILP models. However, most neuro-symbolic methods need a strong language bias to learn logic programs, which reduces the usability and flexibility of ILP models and restricts the formats of the learned logic programs. In addition, most neuro-symbolic ILP methods cannot learn logic programs effectively from both small datasets and large datasets such as knowledge graphs. In this paper, we introduce a novel differentiable ILP model called the differentiable first-order rule learner (DFORL), which scales to learning rules from both smaller and larger datasets. Moreover, DFORL requires only the number of variables in the learned logic programs as input; hence, it is easy to use and does not need a strong language bias. We demonstrate that DFORL performs well on several standard ILP datasets, knowledge graphs, and probabilistic relational facts, and that it outperforms several well-known differentiable ILP models. The experimental results indicate that DFORL is a precise, robust, scalable, and computationally cheap differentiable ILP model.
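Since the abstract includes no code, the following is a minimal illustrative sketch (in PyTorch, and emphatically not DFORL's actual architecture) of the core idea behind differentiable rule learning: the discrete choice of body predicates in a rule template is relaxed into softmax weights over known relations, so a first-order rule can be recovered by gradient descent. The toy family-tree facts, the chain template head(X,Y) <- B1(X,Z) /\ B2(Z,Y), and all names here are assumptions made for illustration only.

```python
# Hedged sketch of differentiable rule learning (toy example, not DFORL):
# learn which body predicates fill the template head(X,Y) <- B1(X,Z) /\ B2(Z,Y)
# by relaxing the predicate choice to softmax weights over candidate relations.
import torch

torch.manual_seed(0)
n = 5  # entities 0..4 (hypothetical family tree)

# Ground relational facts as adjacency matrices.
parent = torch.zeros(n, n)
parent[0, 1] = parent[1, 2] = parent[3, 4] = 1.0   # parent(0,1), parent(1,2), parent(3,4)
married = torch.zeros(n, n)
married[0, 3] = married[3, 0] = 1.0

relations = torch.stack([parent, married])          # candidate body predicates
target = (parent @ parent).clamp(max=1.0)           # grandparent = parent o parent

# One trainable logit vector per body slot (B1 and B2).
logits = torch.zeros(2, len(relations), requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(300):
    w = torch.softmax(logits, dim=1)                # soft predicate choice per slot
    b1 = (w[0, :, None, None] * relations).sum(0)   # soft adjacency for B1
    b2 = (w[1, :, None, None] * relations).sum(0)   # soft adjacency for B2
    pred = torch.clamp(b1 @ b2, max=1.0)            # soft existential over Z
    loss = torch.nn.functional.binary_cross_entropy(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()

print(torch.softmax(logits, dim=1))  # both slots should concentrate on `parent`
```

After training, both softmax distributions should concentrate on parent, which corresponds to extracting the symbolic rule grandparent(X,Y) :- parent(X,Z), parent(Z,Y). Real differentiable ILP systems differ in how they score candidate rules and handle variables, but the relax-then-optimize pattern is the same.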
The combination of learning and reasoning is an essential and challenging topic in neuro-symbolic research. Differentiable inductive logic programming is a technique for learning a symbolic knowledge representation from complete, mislabeled, or incomplete observed facts using neural networks. In this paper, we propose a novel differentiable inductive logic programming system called differentiable learning from interpretation transition (D-LFIT), which learns logic programs through the proposed embeddings of logic programs, neural networks, optimization algorithms, and an adapted algebraic method for computing logic program semantics. The proposed model has several desirable characteristics, including a small number of parameters, the ability to generate logic programs in a curriculum-learning setting, and linear time complexity for extracting logic programs from the trained neural networks. The well-known bottom clause propositionalization algorithm is incorporated when the proposed system learns from relational datasets. We compare our model with NN-LFIT, which extracts propositional logic rules from trained networks; the highly accurate rule learner RIPPER; the purely symbolic LFIT system LF1T; and CILP++, which integrates neural networks with a propositionalization method to handle first-order logic knowledge. From the experimental results, we conclude that D-LFIT achieves accuracy comparable to the baselines when given complete, incomplete, and mislabeled data. Our experiments indicate that D-LFIT not only learns symbolic logic programs quickly and precisely but also performs robustly when processing mislabeled and incomplete datasets.
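To make the "adapted algebraic method to compute the logic program semantics" more concrete, here is a toy sketch (my own illustration in the style of Sakama and Inoue's linear-algebraic encoding, not D-LFIT's actual embedding): a propositional program is encoded as a matrix, one application of the immediate-consequence operator T_P becomes a thresholded matrix-vector product, and iterating it produces the interpretation transitions that LFIT-style systems take as training data. The three-rule program and the 1/|body| weighting are assumptions for illustration.

```python
# Hedged sketch (not D-LFIT): the immediate-consequence operator T_P
# computed algebraically as a thresholded matrix-vector product.
import numpy as np

# Program P over atoms [p, q, r]:   p <- q.   q <- p, r.   r <- r.
atoms = ["p", "q", "r"]
# M[i, j] = weight of atom j in the body of the rule for head atom i.
# A conjunctive body with k atoms gets weight 1/k per atom, so the head
# fires (row sum >= 1) exactly when every body atom is true.
M = np.array([
    [0.0, 1.0, 0.0],   # p <- q
    [0.5, 0.0, 0.5],   # q <- p, r
    [0.0, 0.0, 1.0],   # r <- r
])

def t_p(state: np.ndarray) -> np.ndarray:
    """One step of T_P: head i is true iff its body is fully satisfied."""
    return (M @ state >= 1.0 - 1e-9).astype(float)

# Iterating T_P yields the interpretation transitions (I, T_P(I))
# that learning-from-interpretation-transition systems learn from.
state = np.array([0.0, 1.0, 1.0])   # interpretation {q, r}
for _ in range(4):
    nxt = t_p(state)
    print({a for a, v in zip(atoms, state) if v}, "->",
          {a for a, v in zip(atoms, nxt) if v})
    state = nxt
```

Running this prints the trajectory {q, r} -> {p, r} -> {q, r} -> ..., a two-step cycle of the program's dynamics; a differentiable LFIT learner works in the opposite direction, recovering a matrix like M (and hence the rules) from such observed transitions.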