ISBN (print): 9783319516769; 9783319516752
We present a novel approach that uses an iterative deepening algorithm to perform probabilistic logic inference for ProbLog, a probabilistic extension of Prolog. The most widely used inference method for ProbLog is exact inference combined with tabling. Tabled exact inference first collects a set of SLG derivations which contain the probabilistic structure of the ProbLog program, including its cycles. In a second step, inference requires handling these cycles in order to create a non-cyclic Boolean representation of the probabilistic information. Finally, the Boolean representation is compiled to a data structure on which inference can be performed in linear time. Previous work has illustrated that there are two limiting factors in ProbLog's exact inference: the first is the target compilation language and the second is the handling of cycles. In this paper, we address the second factor by presenting an iterative deepening algorithm which handles cycles and produces solutions to problems that ProbLog previously could not solve. Our experimental results show that our iterative deepening approach obtains approximate bounded values in almost all cases, and in most cases we are able to obtain the exact result at the same or one lower scaling factor.
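The "linear-time inference on a compiled data structure" mentioned above can be illustrated with the computation a BDD performs: Shannon expansion over the variables of the acyclic Boolean representation. The sketch below is a minimal, hypothetical illustration (the formula, probabilities and variable names are invented, not taken from the paper); without memoization it is exponential, whereas a BDD shares repeated subproblems to make it linear in the diagram size.

```python
def shannon_prob(formula, probs, order):
    """Probability that `formula` (a callable over a dict of independent
    Boolean variables) is true, via Shannon expansion:
    P(f) = p_x * P(f | x=1) + (1 - p_x) * P(f | x=0)."""
    def go(i, assignment):
        if i == len(order):
            return 1.0 if formula(assignment) else 0.0
        x = order[i]
        hi = go(i + 1, {**assignment, x: True})   # branch where x is true
        lo = go(i + 1, {**assignment, x: False})  # branch where x is false
        return probs[x] * hi + (1.0 - probs[x]) * lo
    return go(0, {})

# Toy acyclic Boolean representation of a query: q = a or (b and c).
probs = {"a": 0.2, "b": 0.5, "c": 0.4}
f = lambda w: w["a"] or (w["b"] and w["c"])
print(round(shannon_prob(f, probs, ["a", "b", "c"]), 4))  # 0.36
```

Here P(b and c) = 0.2, so P(q) = 0.2 + 0.8 * 0.2 = 0.36; a BDD built over the same formula yields the same value by the same expansion, just with shared subgraphs.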
ISBN (print): 9783319675824; 9783319675817
We present an algorithm that learns acyclic propositional probabilistic logic programs from complete data by adapting techniques from Bayesian network learning. Specifically, we focus on score-based learning and on exact maximum likelihood computations. Our main contribution is to show that by restricting every rule body to contain at most two literals, most of the needed optimization steps can be solved exactly. We describe experiments indicating that our techniques produce accurate models from data with a reduced number of parameters.
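For a single propositional rule learned from complete data, the exact maximum-likelihood parameter is just a relative frequency: the fraction of interpretations satisfying the body in which the head also holds. A minimal sketch, with invented atoms and data (not the paper's experiments):

```python
# Toy complete interpretations over atoms h, a, b.
data = [
    {"h": True,  "a": True,  "b": True},
    {"h": True,  "a": True,  "b": False},
    {"h": False, "a": True,  "b": False},
    {"h": False, "a": False, "b": True},
]

def ml_rule_probability(data, head, body):
    """Maximum-likelihood probability of the rule `head :- body` from
    complete data: #(body and head) / #(body)."""
    covered = [row for row in data if all(row[lit] for lit in body)]
    if not covered:
        return 0.0
    return sum(row[head] for row in covered) / len(covered)

# Rule h :- a: the body holds in 3 rows, h in 2 of them.
print(round(ml_rule_probability(data, "h", ["a"]), 4))  # 0.6667
```

With at most two literals per body, each such count remains cheap, which is one intuition behind the tractability restriction the abstract describes.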
The distribution semantics integrates logic programming and probability theory using a possible-worlds approach. Its intuitiveness and simplicity have made it the most widely used semantics for probabilistic logic programming, with successful applications in many domains. When the program has function symbols, the semantics was previously defined only for special cases: either the program has to be definite or the queries must have a finite number of finite explanations. In this paper we show that it is possible to define the semantics for all programs. We also show that this definition coincides with that of Sato and Kameya on positive programs. Moreover, we highlight possible approaches to inference, both exact and approximate. (C) 2016 Elsevier Inc. All rights reserved.
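For function-free programs, the possible-worlds idea behind the distribution semantics can be sketched directly: each world selects a subset of the probabilistic facts, its probability is the product of the individual choices, and the probability of a query sums the worlds that entail it. The facts and query below are invented for illustration:

```python
from itertools import product

def query_probability(prob_facts, entails):
    """Distribution semantics over a finite set of probabilistic facts:
    P(query) = sum of the probabilities of the worlds entailing it."""
    names = list(prob_facts)
    total = 0.0
    for choice in product([True, False], repeat=len(names)):
        world = dict(zip(names, choice))
        weight = 1.0
        for n in names:  # product of the independent fact choices
            weight *= prob_facts[n] if world[n] else 1.0 - prob_facts[n]
        if entails(world):
            total += weight
    return total

# Toy program: 0.6::edge(a,b). 0.7::edge(b,c). path(a,c) needs both edges.
facts = {"edge(a,b)": 0.6, "edge(b,c)": 0.7}
path = lambda w: w["edge(a,b)"] and w["edge(b,c)"]
print(round(query_probability(facts, path), 4))  # 0.42
```

With function symbols the set of worlds becomes infinite, which is exactly why the extension the paper proposes requires a measure-theoretic definition rather than this finite sum.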
With the recent ubiquity of data and the profusion of available knowledge, there is now a need to reason from multiple sources of often incomplete and uncertain knowledge. Our goal was to provide a way to combine declarative knowledge bases, represented as logic programming modules under the answer set semantics, as well as the individual results already inferred from them, without having to recalculate the results for their composition and without having to explicitly know the original logic programming encodings that produced those results. This posed many challenges, such as how to deal with fundamental problems of modular frameworks for logic programming, namely how to define a general compositional semantics that allows us to compose unrestricted modules. Building upon existing logic programming approaches, we devised a framework capable of composing generic logic programming modules while preserving the crucial property of compositionality, which informally means that the combinations of models of the individual modules are the models of the union of those modules. We are also still able to reason in the presence of knowledge containing incoherencies, which informally characterises a logic program that has no answer set due to cyclic dependencies of an atom on its default negation. In this thesis we also discuss how the same approach can be extended to deal with probabilistic knowledge in a modular and compositional way. We depart from the modular logic programming approach of Oikarinen & Janhunen (2008) and Janhunen et al. (2009), which achieved a restricted form of compositionality of answer set programming modules. We aim at generalising this framework of modular logic programming, and start by lifting the restrictive conditions that were originally imposed, using alternative ways of combining these (so-called by us) Generalised Modular Logic Programs. We then deal with conflicts arising in generalised modular logic programs…
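The compositionality property described above can be sketched concretely: a model of the union of two modules is the union of one model from each module, provided the two agree on the atoms they share. The modules, atoms and the "agree on shared atoms" compatibility test below are a simplified, hypothetical illustration, not the thesis's actual composition operator:

```python
def compose(models1, models2, shared):
    """Join every pair of models (sets of true atoms) that agree on the
    shared atoms; the joins are the candidate models of the module union."""
    out = set()
    for m1 in models1:
        for m2 in models2:
            if all((a in m1) == (a in m2) for a in shared):
                out.add(frozenset(m1 | m2))
    return out

# Module 1: {a. b :- a.}          -> single answer set {a, b}
# Module 2: {c :- b.} (input: b)  -> answer sets {} and {b, c}
m1 = [frozenset({"a", "b"})]
m2 = [frozenset(), frozenset({"b", "c"})]
print(compose(m1, m2, shared={"b"}))  # the single compatible join: {a, b, c}
```

The empty model of module 2 is rejected because it disagrees with module 1 on the shared atom b, so only the join {a, b, c} survives, matching the answer set of the combined program.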
We present an approximate query answering algorithm for the probabilistic logic programming language CP-logic. It complements existing sampling algorithms by applying the rules from body to head instead of in the other direction. We present an implementation in OpenCL which is able to exploit the multicore architecture of modern CPUs to compute a large number of samples in parallel, and demonstrate that it is competitive with existing inference algorithms. (C) 2015 Elsevier Inc. All rights reserved.
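Body-to-head sampling means: sample an outcome for every probabilistic fact, then chain the deterministic rules forward to a fixpoint and check whether the query was derived. The sequential sketch below illustrates only that sampling direction (the program, atom names and sample count are invented; the paper's contribution is the parallel OpenCL version of this loop):

```python
import random

def sample_query(prob_facts, rules, query, n=20000, seed=0):
    """Monte Carlo estimate of P(query): sample every probabilistic fact,
    then apply the definite rules forward until a fixpoint is reached."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        true_atoms = {f for f, p in prob_facts.items() if rng.random() < p}
        changed = True
        while changed:  # forward chaining, body to head
            changed = False
            for head, body in rules:
                if head not in true_atoms and all(b in true_atoms for b in body):
                    true_atoms.add(head)
                    changed = True
        hits += query in true_atoms
    return hits / n

# Toy program: 0.5::a. 0.5::b.  q :- a.  q :- b.   (true P(q) = 0.75)
est = sample_query({"a": 0.5, "b": 0.5}, [("q", ["a"]), ("q", ["b"])], "q")
print(round(est, 2))  # ≈ 0.75
```

Each sample is independent, which is what makes the per-sample work embarrassingly parallel on a multicore device.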
Probabilistic logic programs are logic programs in which some of the facts are annotated with probabilities. This paper investigates how classical inference and learning tasks known from the graphical model community can be tackled for probabilistic logic programs. Several such tasks, such as computing the marginals given evidence and learning from (partial) interpretations, have not really been addressed for probabilistic logic programs before. The first contribution of this paper is a suite of efficient algorithms for various inference tasks. It is based on the conversion of the program, the queries and the evidence to a weighted Boolean formula. This allows us to reduce inference tasks to well-studied tasks such as weighted model counting, which can be solved using state-of-the-art methods known from the graphical model and knowledge compilation literature. The second contribution is an algorithm for parameter estimation in the learning-from-interpretations setting. The algorithm employs expectation-maximization and is built on top of the developed inference algorithms. The proposed approach is experimentally evaluated. The results show that the inference algorithms improve upon the state of the art in probabilistic logic programming, and that it is indeed possible to learn the parameters of a probabilistic logic program from interpretations.
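The reduction to weighted model counting (WMC) can be shown on a toy program. A derived atom q defined by q :- a and q :- b becomes the constraint q <-> (a or b) in CNF; probabilistic facts get their probabilities as literal weights and derived atoms get weight 1 for both polarities, so P(q) = WMC(formula ∧ q) / WMC(formula). The brute-force counter and the example program below are illustrative only (real systems use knowledge compilation, not enumeration):

```python
from itertools import product

def wmc(clauses, weights):
    """Weighted model count of a CNF. Each clause is a list of
    (variable, sign) literals; weights maps var -> (w_true, w_false)."""
    names = list(weights)
    total = 0.0
    for vals in product([True, False], repeat=len(names)):
        world = dict(zip(names, vals))
        if all(any(world[v] == sign for v, sign in c) for c in clauses):
            w = 1.0
            for n in names:
                w *= weights[n][0] if world[n] else weights[n][1]
            total += w
    return total

# Program: 0.3::a. 0.4::b.  q :- a.  q :- b.   Encoded as q <-> (a or b):
cnf = [[("q", False), ("a", True), ("b", True)],  # q -> a or b
       [("a", False), ("q", True)],               # a -> q
       [("b", False), ("q", True)]]               # b -> q
weights = {"a": (0.3, 0.7), "b": (0.4, 0.6), "q": (1.0, 1.0)}

p_q = wmc(cnf + [[("q", True)]], weights) / wmc(cnf, weights)
print(round(p_q, 4))  # 0.58
```

Conditioning on evidence works the same way: add the evidence literals as unit clauses to both numerator and denominator, which is exactly the MARG task the abstract mentions.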
Representing uncertain information is crucial for modeling real-world domains. In this paper we present a technique for the integration of probabilistic information in Description Logics (DLs) that is based on the distribution semantics for probabilistic logic programs. In the resulting approach, which we call DISPONTE, the axioms of a probabilistic knowledge base (KB) can be annotated with a real number between 0 and 1. A probabilistic knowledge base then defines a probability distribution over regular KBs, called worlds, and the probability of a given query can be obtained from the joint distribution of the worlds and the query by marginalization. We present the algorithm BUNDLE for computing the probability of queries from DISPONTE KBs. The algorithm exploits an underlying DL reasoner, such as Pellet, that is able to return explanations for queries. The explanations are encoded in a Binary Decision Diagram, from which the probability of the query is computed. Experiments with BUNDLE show that it can handle probabilistic KBs of realistic size.
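The last step of this pipeline can be sketched without a BDD: each explanation is a set of annotated axioms, the query holds in a world iff some explanation has all of its axioms included, and the axioms are included independently. The axiom names and probabilities below are invented; a BDD computes the same disjunction-of-conjunctions probability compactly instead of by enumeration:

```python
from itertools import product

def prob_of_explanations(explanations, axiom_probs):
    """P(at least one explanation has all its axioms present), with each
    axiom included independently with its annotated probability."""
    axioms = sorted({a for e in explanations for a in e})
    total = 0.0
    for vals in product([True, False], repeat=len(axioms)):
        world = dict(zip(axioms, vals))
        if any(all(world[a] for a in e) for e in explanations):
            w = 1.0
            for a in axioms:
                w *= axiom_probs[a] if world[a] else 1.0 - axiom_probs[a]
            total += w
    return total

# Two hypothetical explanations for a query: {ax1} and {ax2, ax3}.
probs = {"ax1": 0.3, "ax2": 0.5, "ax3": 0.6}
print(round(prob_of_explanations([{"ax1"}, {"ax2", "ax3"}], probs), 4))  # 0.51
```

By inclusion-exclusion this is 0.3 + (0.5 * 0.6) - 0.3 * (0.5 * 0.6) = 0.51; explanations can overlap in axioms, which is why naive summing of explanation probabilities would overcount and a BDD (or this enumeration) is needed.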
ISBN (digital): 9783319237084
ISBN (print): 9783319237084; 9783319237077
Probabilistic logic languages, such as ProbLog and CP-logic, are probabilistic generalizations of logic programming that allow one to model probability distributions over complex, structured domains. Their key probabilistic constructs are probabilistic facts and annotated disjunctions, which represent binary and multi-valued random variables, respectively. ProbLog supports annotated disjunctions by translating them into probabilistic facts and rules. This encoding is tailored towards the task of computing the marginal probability of a query given evidence (MARG), but is not correct for the task of finding the most probable explanation (MPE), which has important applications in, e.g., diagnostics and scheduling. In this work, we propose a new encoding of annotated disjunctions which allows both correct MARG and correct MPE inference. We explore, from both theoretical and experimental perspectives, the trade-off between the encoding suitable only for MARG inference and the newly proposed (general) approach.
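The standard translation of an annotated disjunction into probabilistic facts is sequential: fact i only fires if facts 1..i-1 did not, so its probability must be renormalized by the remaining mass. A minimal sketch of that renormalization (the three-headed disjunction 0.2::h1; 0.3::h2; 0.5::h3 is an invented example):

```python
def ad_to_facts(probs):
    """Sequential encoding of an annotated disjunction p1::h1; ...; pn::hn:
    fact i gets probability p_i / (1 - p_1 - ... - p_{i-1})."""
    facts, remaining = [], 1.0
    for p in probs:
        facts.append(p / remaining)
        remaining -= p
    return facts

def head_marginals(fact_probs):
    """Marginal of each head under the sequential encoding: head i holds
    when facts 1..i-1 failed and fact i succeeded."""
    marginals, none_yet = [], 1.0
    for q in fact_probs:
        marginals.append(none_yet * q)
        none_yet *= 1.0 - q
    return marginals

ad = [0.2, 0.3, 0.5]
facts = ad_to_facts(ad)
print([round(f, 4) for f in facts])                   # [0.2, 0.375, 1.0]
print([round(m, 4) for m in head_marginals(facts)])   # [0.2, 0.3, 0.5]
```

The marginals reproduce the original annotations, which is why the encoding is sound for MARG; the MPE problem the paper identifies arises because the most probable assignment to these auxiliary facts need not map to the most probable head choice.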
Benjamin Franklin once said that "the only things certain in life are death and taxes". What he probably meant was that in everyday life probabilities play a crucial role in our decision making. Almost 300 years later, the problem of learning new things from a knowledge base has become even more important. To model probabilistic reasoning in knowledge bases more precisely, researchers have applied statistical methods within logic programming systems, so that learning that something is true or false comes with a certain probability. However, since Russell's proposal of his famous "who shaves the barber" paradox, it has become clear that the semantics of the languages we use to describe the world we live in is multi-valued. In this paper, we introduce a statistical approach to logic programs by combining the Well-Founded Semantics (which captures the core of Russell's paradox) with probabilistic inference. The result is a probabilistic logic programming framework where uncertainty in inference can be described using both a third logic value and statistical information from probabilities.
Probabilistic logic programming (PLP) allows one to represent domains containing many entities connected by uncertain relations and has many applications, in particular in machine learning. PITA is a PLP algorithm for computing the probability of queries which exploits tabling, answer subsumption and Binary Decision Diagrams (BDDs). PITA does not impose any restrictions on programs. Other algorithms, such as PRISM, reduce computation time by imposing restrictions on the program, namely that subgoals are independent and that clause bodies are mutually exclusive. Another assumption that simplifies inference is that clause bodies are independent. In this paper, we present the algorithms PITA(IND,IND) and PITA(OPT). PITA(IND,IND) assumes that subgoals and clause bodies are independent. PITA(OPT) instead first checks whether these assumptions hold for subprograms and subgoals: if they do, PITA(OPT) uses a simplified calculation; otherwise it resorts to BDDs. Experiments on a number of benchmark datasets show that PITA(IND,IND) is the fastest on datasets respecting the assumptions, while PITA(OPT) is a good option when nothing is known about a dataset.
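The "simplified calculation" under the independence assumptions replaces BDD operations with two closed-form combinations: conjunction of independent subgoals multiplies probabilities, and disjunction of independent clause bodies is a noisy-or. A minimal sketch (the example probabilities are invented):

```python
def and_independent(p1, p2):
    """Conjunction of two independent subgoals: P(A and B) = P(A) * P(B)."""
    return p1 * p2

def or_independent(p1, p2):
    """Disjunction of two independent clause bodies (noisy-or):
    P(A or B) = 1 - (1 - P(A)) * (1 - P(B))."""
    return 1.0 - (1.0 - p1) * (1.0 - p2)

# A query with two independent clauses whose bodies succeed with
# probabilities 0.3 and 0.4:
print(round(or_independent(0.3, 0.4), 4))   # 0.58
# A body of two independent subgoals at 0.5 and 0.6:
print(round(and_independent(0.5, 0.6), 4))  # 0.3
```

When the assumptions fail, e.g. when two bodies share a probabilistic fact, these formulas over- or under-count the overlap, which is precisely the case where PITA(OPT) falls back to BDDs.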