Handling relational data streams has become a crucial task, given the availability of pervasive sensors and Internet-produced content, such as social networks and knowledge graphs. In a relational environment, this is a particularly challenging task, since one cannot assume that the examples in the stream are independent across iterations. Thus, most relational learning systems are still designed to learn only from closed batches of data. Furthermore, when a previously acquired model exists, these systems either discard it or assume it to be correct. In this work, we propose an online relational learning algorithm that can handle continuous, open-ended streams of relational examples as they arrive. We employ theory revision techniques to take advantage of the previously acquired model as a starting point, finding where it should be modified to cope with the new examples, and automatically updating it. We rely on the Hoeffding bound to decide whether the model must, in fact, be updated in light of the new examples. The proposed algorithm is built upon the ProPPR statistical relational language, aiming to account for the uncertainty inherent to real data. Experimental results on social network and entity co-reference datasets show the potential of the proposed approach compared to other relational learners.
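The abstract does not give the exact update criterion, but a minimal sketch of a Hoeffding-bound check of the kind described (the function names and default parameters below are illustrative assumptions, not taken from the paper) could look as follows:

```python
import math

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    # With probability at least 1 - delta, the true mean of a variable with
    # range `value_range` lies within epsilon of the mean of n samples.
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_update(score_candidate: float, score_current: float,
                  n_examples: int, value_range: float = 1.0,
                  delta: float = 0.05) -> bool:
    # Revise the model only when the observed improvement of the candidate
    # revision over the current model exceeds the bound, i.e. the gain is
    # unlikely to be an artifact of the finite sample of streamed examples.
    epsilon = hoeffding_bound(value_range, delta, n_examples)
    return (score_candidate - score_current) > epsilon

# With 200 buffered examples a 0.04 gain is not yet significant,
# but with 2000 examples it is.
print(should_update(0.78, 0.74, n_examples=200))   # False
print(should_update(0.78, 0.74, n_examples=2000))  # True
```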
Theory revision from examples is the process of repairing incorrect theories and/or improving incomplete theories from a set of examples. This process usually results in more accurate and comprehensible theories than purely inductive learning. However, so far, progress on the use of theory revision techniques has been limited by the large search space they yield. In this article, we argue that it is possible to reduce the search space of a theory revision system by introducing stochastic local search (SLS). More precisely, we introduce a number of stochastic local search components at the key steps of the revision process, and implement them in a state-of-the-art revision system that uses the most specific clause to constrain the search space. We show that with these SLS techniques the revision system can be executed in feasible time, while still improving the initial theory and, in a number of cases, even reaching better accuracies than the deterministic revision process. Moreover, in some cases the revision process can be faster and still achieve better accuracies than an ILP system learning from an empty initial hypothesis or assuming an initial theory to be correct.
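As a rough illustration of the kind of stochastic local search component described, the sketch below replaces an exhaustive greedy choice among candidate revisions with a randomized one; the function names, the single `p_random` parameter, and the hill-climbing loop are assumptions made for illustration, not the paper's actual operators.

```python
import random
from typing import Any, Callable, List

def sls_choose(revisions: List[Any], score: Callable[[Any], float],
               p_random: float, rng: random.Random) -> Any:
    # Stochastic step: with probability p_random pick a random revision
    # (escaping local optima and avoiding scoring every candidate);
    # otherwise pick the best-scoring revision greedily.
    if rng.random() < p_random:
        return rng.choice(revisions)
    return max(revisions, key=score)

def revise(theory: Any, propose: Callable[[Any], List[Any]],
           score: Callable[[Any], float], max_steps: int = 50,
           p_random: float = 0.3, seed: int = 0) -> Any:
    # Local-search revision loop: repeatedly propose candidate revisions
    # (e.g. add/delete a literal or clause) and keep the best theory seen.
    rng = random.Random(seed)
    best, best_score, current = theory, score(theory), theory
    for _ in range(max_steps):
        candidates = propose(current)
        if not candidates:
            break
        current = sls_choose(candidates, score, p_random, rng)
        current_score = score(current)
        if current_score > best_score:
            best, best_score = current, current_score
    return best
```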
ISBN (e-book): 9783030974541
ISBN (print): 9783030974541; 9783030974534
In this paper, we present two online structure learning algorithms for NeuralLog: NeuralLog+OSLR and NeuralLog+OMIL. NeuralLog is a system that compiles first-order logic programs into neural networks. Both learning algorithms are based on the Online Structure Learner by Revision (OSLR). NeuralLog+OSLR is a port of OSLR that uses NeuralLog as its inference engine, while NeuralLog+OMIL uses the underlying mechanism from OSLR but with a revision operator based on Meta-Interpretive Learning. We compared both systems with OSLR and RDN-Boost on link prediction in three different datasets: Cora, UMLS and UWCSE. Our experiments showed that NeuralLog+OMIL outperforms both compared systems on three of the four target relations from the Cora dataset and on the UMLS dataset, while both NeuralLog+OSLR and NeuralLog+OMIL outperform OSLR and RDN-Boost on UWCSE, assuming a good initial theory is provided.
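The paper itself defines the actual revision operators; the sketch below only illustrates the general shape of an OSLR-style online loop, in which arriving examples are buffered, evaluated against the current theory, and used to trigger a revision when performance degrades. All names (`OnlineLearner`, `evaluate`, `revise`) are placeholders, not the NeuralLog or OSLR API.

```python
from collections import deque
from typing import Any, Callable, Deque, Iterable, List

class OnlineLearner:
    # Placeholder skeleton for an OSLR-style online structure learner:
    # examples arrive one at a time, are kept in a bounded buffer, and a
    # revision operator is applied when the current theory no longer
    # explains the buffered examples well enough.
    def __init__(self, theory: Any,
                 evaluate: Callable[[Any, List[Any]], float],
                 revise: Callable[[Any, List[Any]], Any],
                 buffer_size: int = 500, threshold: float = 0.9):
        self.theory = theory
        self.evaluate = evaluate      # e.g. accuracy of the theory on a batch
        self.revise = revise          # revision operator (OSLR- or MIL-style)
        self.buffer: Deque[Any] = deque(maxlen=buffer_size)
        self.threshold = threshold

    def process(self, stream: Iterable[Any]) -> Any:
        for example in stream:
            self.buffer.append(example)
            batch = list(self.buffer)
            if self.evaluate(self.theory, batch) < self.threshold:
                # Performance dropped below the threshold, so repair the
                # theory using the buffered examples as revision evidence.
                self.theory = self.revise(self.theory, batch)
        return self.theory
```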