Probabilistic logic programming (PLP) provides a powerful tool for reasoning with uncertain relational models. However, learning probabilistic logic programs is expensive due to the high cost of inference. Among the proposals to overcome this problem, one of the most promising is lifted inference. In this paper we consider PLP models that are amenable to lifted inference and present an algorithm for performing parameter and structure learning of these models from positive and negative examples. We discuss parameter learning with EM and LBFGS and structure learning with LIFTCOVER, an algorithm similar to SLIPCOVER. The results of the comparison of LIFTCOVER with SLIPCOVER on 12 datasets show that it can achieve solutions of similar or better quality in a fraction of the time.
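As a rough sketch of what "parameter learning with LBFGS" for a liftable model can look like, the snippet below fits clause probabilities by maximising a lifted noisy-OR likelihood, where a positive example covered by m_i groundings of clause i has probability 1 - prod_i (1 - p_i)^{m_i}. The toy counts and the use of scipy's L-BFGS-B are illustrative assumptions, not the LIFTCOVER implementation.

```python
# Sketch (assumed setup): lifted noisy-OR likelihood with LBFGS-style fitting.
import numpy as np
from scipy.optimize import minimize

# counts[e][i] = number of groundings of clause i whose body holds for example e
pos_counts = np.array([[2, 0], [1, 1]])   # toy positive examples
neg_counts = np.array([[0, 1], [0, 0]])   # toy negative examples

def neg_log_likelihood(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    prob = lambda m: 1.0 - np.prod((1.0 - p) ** m, axis=1)   # lifted noisy-OR
    return -(np.sum(np.log(prob(pos_counts)))
             + np.sum(np.log(1.0 - prob(neg_counts))))

res = minimize(neg_log_likelihood, x0=[0.5, 0.5],
               method="L-BFGS-B", bounds=[(1e-6, 1 - 1e-6)] * 2)
print("learned clause probabilities:", res.x)
```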
ISBN (electronic): 9783319487588
ISBN (print): 9783319487588; 9783319487571
This System Description paper describes the software framework PrASP ("probabilistic Answer Set programming"). PrASP is both an uncertainty reasoning and machine learning tool and a probabilistic logic programming language based on Answer Set Programming (ASP). Besides serving as a research platform for non-monotonic (inductive) probabilistic logic programming, the framework mainly targets applications in uncertainty stream reasoning. PrASP programs can consist of ASP (AnsProlog) as well as first-order logic formulas (with stable model semantics), annotated with conditional or unconditional probabilities or probability intervals. A number of alternative inference algorithms allow the system to be attuned to different task characteristics (e.g., whether or not independence assumptions can be made).
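To illustrate the general idea of probability-annotated formulas over answer sets (not PrASP's actual inference algorithms), the following sketch finds a distribution over a handful of toy answer sets that matches given probability annotations by least squares; the models and annotations are invented for the example.

```python
# Illustrative sketch: a distribution over candidate answer sets that is
# consistent with probability annotations on atoms (toy data, least squares).
import numpy as np

models = [frozenset({"rain", "wet"}), frozenset({"wet"}), frozenset()]
constraints = [({"rain"}, 0.3),   # P(rain) = 0.3
               ({"wet"},  0.7)]   # P(wet)  = 0.7

# Rows = annotation constraints plus normalisation, columns = models.
A = np.array([[1.0 if atoms <= m else 0.0 for m in models] for atoms, _ in constraints]
             + [[1.0] * len(models)])
b = np.array([p for _, p in constraints] + [1.0])

weights, *_ = np.linalg.lstsq(A, b, rcond=None)
for m, w in zip(models, weights):
    print(sorted(m), round(float(w), 3))
```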
ISBN (print): 9783319458564; 9783319458557
In this paper we present a novel framework and full implementation of probabilistic spatial reasoning within a logic programming context. The crux of our approach is extending probabilistic logic programming (based on the distribution semantics) to support reasoning over spatial variables via constraint logic programming. Spatial reasoning is formulated as a numerical optimisation problem, and we implement our approach within ProbLog 1. We demonstrate a range of powerful features beyond what is currently provided by existing probabilistic and spatial reasoning tools.
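The following is a minimal sketch of the "spatial reasoning as numerical optimisation" idea, under assumed toy geometry rather than the paper's constraint formulation: whether a point can lie inside two circles at once is decided by minimising a constraint-violation measure.

```python
# Sketch (assumed setup): spatial consistency checking via numerical optimisation.
import numpy as np
from scipy.optimize import minimize

circles = [((0.0, 0.0), 1.0), ((1.2, 0.0), 1.0)]   # (centre, radius)

def violation(xy):
    # Total amount by which the point lies outside each circle (0 if inside all).
    return sum(max(0.0, np.hypot(xy[0] - cx, xy[1] - cy) - r)
               for (cx, cy), r in circles)

res = minimize(violation, x0=np.array([5.0, 5.0]), method="Nelder-Mead")
print("consistent" if res.fun < 1e-6 else "inconsistent", res.x)
```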
The past few years have seen a surge of interest in the field of probabilistic logic learning and statistical relational learning. In this endeavor, many probabilistic logics have been developed. ProbLog is a recent probabilistic extension of Prolog motivated by the mining of large biological networks. In ProbLog, facts can be labeled with probabilities. These facts are treated as mutually independent random variables that indicate whether they belong to a randomly sampled program. Different kinds of queries can be posed to ProbLog programs. We introduce algorithms that allow the efficient execution of these queries, discuss their implementation on top of the YAP-Prolog system, and evaluate their performance in the context of large networks of biological entities.
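A minimal pure-Python sketch of the distribution semantics described in this abstract: probabilistic facts are independent random variables, and the success probability of a query is the total mass of the sampled programs (worlds) in which it succeeds. The toy graph and query are assumptions; real ProbLog inference avoids enumerating worlds.

```python
# Sketch: success probability of path(a,c) under the distribution semantics,
# computed by brute-force enumeration of possible worlds (toy graph).
from itertools import product

prob_facts = {("edge", "a", "b"): 0.8,
              ("edge", "b", "c"): 0.7,
              ("edge", "a", "c"): 0.1}

def reachable(edges, src, dst):
    # Query: path(src, dst) via transitive closure over the chosen edges.
    seen, frontier = {src}, [src]
    while frontier:
        x = frontier.pop()
        for (_, u, v) in edges:
            if u == x and v not in seen:
                seen.add(v); frontier.append(v)
    return dst in seen

facts = list(prob_facts)
prob = 0.0
for world in product([True, False], repeat=len(facts)):   # enumerate worlds
    weight = 1.0
    for f, included in zip(facts, world):
        weight *= prob_facts[f] if included else 1.0 - prob_facts[f]
    if reachable([f for f, inc in zip(facts, world) if inc], "a", "c"):
        prob += weight
print("P(path(a,c)) =", round(prob, 4))                    # expected: 0.604
```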
ISBN (electronic): 9783319999609
ISBN (print): 9783319999609; 9783319999593
This paper proposes multi-model optimization through SAT witness or answer set sampling, with common probabilistic reasoning tasks as primary use cases (including deduction-style probabilistic inference and hypothesis weight learning). Our approach enhances a state-of-the-art SAT/ASP solving algorithm with gradient descent as the branching-literal decision heuristic and, optionally, a cost-backtracking mechanism. Sampling models with these methods minimizes a task-specific, user-provided multi-model cost function while adhering to given logical background knowledge (either a Boolean formula in CNF or a normal logic program under stable model semantics). Features of the framework include its relative simplicity and high degree of expressiveness, since arbitrary differentiable cost functions and background knowledge can be provided.
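As a very rough sketch of the multi-model cost idea (a deliberate simplification, not the solver integration described above), the snippet below uses gradient descent to adjust sampling weights over a fixed set of candidate models so that the frequency of an atom matches a target probability; the models, atom, and target are invented for the example.

```python
# Sketch: gradient descent on sampling weights to minimise a multi-model cost,
# here the squared gap between the frequency of atom "h" and a target of 0.6.
import numpy as np

models = [frozenset({"h", "b"}), frozenset({"b"}), frozenset()]   # toy models
target = 0.6
w = np.full(len(models), 1.0 / len(models))                       # initial weights

def cost(w):
    freq_h = sum(wi for m, wi in zip(models, w) if "h" in m)
    return (freq_h - target) ** 2

for _ in range(500):
    f = sum(wi for m, wi in zip(models, w) if "h" in m)
    grad = np.array([2 * (f - target) * (1.0 if "h" in m else 0.0) for m in models])
    w = np.clip(w - 0.1 * grad, 0.0, None)
    w /= w.sum()                                                  # stay on the simplex
print("weights:", np.round(w, 3), "cost:", round(cost(w), 6))
```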
ISBN (print): 9783030004613; 9783030004606
We propose an extension of Poole's independent choice logic based on a relaxation of the underlying independence assumptions. A credal semantics involving multiple joint probability mass functions over the possible worlds is adopted. This represents a conservative approach to probabilistic logic programming, achieved by considering all the mass functions consistent with the probabilistic facts. It allows us to model tasks for which independence among some probabilistic choices cannot be assumed and a specific dependence model cannot be assessed. Preliminary tests on an object ranking application show that, despite the loose underlying assumptions, informative inferences can be extracted.
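A small worked illustration of the credal idea (an assumed toy setting, not the paper's semantics in full): once independence between two probabilistic facts is dropped, their marginals only bound the joint query, and the bounds can be obtained by linear programming over all joint mass functions on the possible worlds.

```python
# Sketch: lower/upper probability of "a and b" given only the marginals
# P(a)=0.3 and P(b)=0.6, optimising over all consistent joint distributions.
import numpy as np
from scipy.optimize import linprog

# world order: (a,b), (a,~b), (~a,b), (~a,~b)
A_eq = np.array([[1, 1, 0, 0],    # P(a)  = 0.3
                 [1, 0, 1, 0],    # P(b)  = 0.6
                 [1, 1, 1, 1]])   # total = 1
b_eq = np.array([0.3, 0.6, 1.0])
c = np.array([1.0, 0.0, 0.0, 0.0])          # objective: P(a, b)

lower = linprog(c,  A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4).fun
upper = -linprog(-c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 4).fun
print(f"P(a,b) in [{lower:.2f}, {upper:.2f}]")   # Frechet bounds: [0.00, 0.30]
```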
To learn a probabilistic logic program is to find a set of probabilistic rules that best fits some data, in order to explain how attributes relate to one another and to predict the occurrence of new instantiations of these attributes. In this work, we focus on acyclic programs, because in this case the meaning of the program is quite transparent and easy to grasp. We propose that the learning process for a probabilistic acyclic logic program should be guided by a scoring function imported from the literature on Bayesian network learning. We suggest novel techniques that lead to orders-of-magnitude improvements over the current state of the art represented by the ProbLog package. In addition, we present novel techniques for learning the structure of acyclic probabilistic logic programs.
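For concreteness, here is a minimal example of a scoring function of the kind the abstract imports from Bayesian network learning (BIC), applied to a node and a candidate parent set over toy binary data; the data and variable names are assumptions, not the paper's benchmarks.

```python
# Sketch: BIC score of a child variable given a candidate parent set,
# computed from complete binary data (toy example).
import numpy as np
from collections import Counter
from math import log

data = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 1],
                 [1, 1, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0]])   # columns: A, B, C

def bic(child, parents, data):
    n = len(data)
    joint = Counter(tuple(row[parents]) + (row[child],) for row in data)
    parent_counts = Counter(tuple(row[parents]) for row in data)
    loglik = sum(cnt * log(cnt / parent_counts[key[:-1]]) for key, cnt in joint.items())
    n_params = (2 ** len(parents)) * (2 - 1)   # binary child: one free parameter per parent configuration
    return loglik - 0.5 * log(n) * n_params

print("score C | {A}   :", round(bic(2, [0], data), 3))
print("score C | {A,B} :", round(bic(2, [0, 1], data), 3))
```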
cplint is a suite of programs for reasoning and learning with probabilistic logic programming languages that follow the distribution semantics. In this paper we describe how we have extended cplint to perform causal reasoning. In particular, we consider Pearl's do-calculus for models in which all the variables are measured. The two cplint modules for inference, PITA and MCINTYRE, have been extended to compute the effect of actions/interventions on these models. We also ran experiments comparing exact and approximate inference with conditional and causal queries, showing that causal inference is often cheaper than conditional inference.
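To show the kind of causal query involved (a toy fully-measured model, not cplint code), the snippet below contrasts the interventional query P(y | do(x)), computed with the back-door adjustment P(y | do(x)) = sum_z P(y | x, z) P(z), against the ordinary conditional P(y | x) in a model Z -> X, Z -> Y, X -> Y.

```python
# Sketch: conditional vs interventional query in a fully measured toy model.
p_z = {0: 0.6, 1: 0.4}
p_x_given_z = {0: 0.2, 1: 0.9}                    # P(X=1 | Z=z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,         # P(Y=1 | X=x, Z=z)
                (1, 0): 0.4, (1, 1): 0.8}

x = 1
# Back-door adjustment: z is not influenced by the intervention on x.
do = sum(p_y_given_xz[(x, z)] * p_z[z] for z in p_z)

# Ordinary conditioning: observing x changes our belief about z.
joint_xz = {z: (p_x_given_z[z] if x == 1 else 1 - p_x_given_z[z]) * p_z[z] for z in p_z}
cond = sum(p_y_given_xz[(x, z)] * joint_xz[z] for z in p_z) / sum(joint_xz.values())

print(f"P(y | do(x=1)) = {do:.3f}   P(y | x=1) = {cond:.3f}")
```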
We present a formalism for combining logic programming and its flavour of nondeterminism with probabilistic reasoning. In particular, we focus on representing prior knowledge for Bayesian inference. Distributional logic programming (Dlp) is considered in the context of a class of generative probabilistic languages. A characterisation based on probabilistic paths, which can play a central role in clausal probabilistic reasoning, is presented. We illustrate how this characterisation can be used to clarify derived distributions with regard to mixing the logical and probabilistic constituents of generative languages. We use this operational characterisation to define a class of programs that exhibit probabilistic determinism. We show how Dlp can be used to define generative priors over statistical model spaces. For example, a single program can generate all possible Bayesian networks having N nodes while at the same time defining a prior that penalises networks with large families. Two classes of statistical models are considered: Bayesian networks and classification and regression trees. Finally, we discuss (1) a Metropolis-Hastings algorithm that can take advantage of the defined priors and the probabilistic choice points in the prior programs, and (2) its application to real-world machine learning tasks.
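As a rough sketch of a generative prior over Bayesian network structures that penalises large families (written in Python rather than Dlp, with an assumed inclusion probability gamma): each candidate parent is included independently with probability gamma, so for a node at a fixed position in the order a family of size k has prior probability proportional to (gamma / (1 - gamma))**k, which shrinks as k grows when gamma < 0.5.

```python
# Sketch: sample a DAG over n_nodes with a prior that discourages large families.
import random

def sample_dag(n_nodes, gamma=0.2, rng=random.Random(0)):
    order = list(range(n_nodes))
    rng.shuffle(order)                               # pick a topological order
    parents = {v: [] for v in order}
    for i, v in enumerate(order):
        for u in order[:i]:                          # only earlier nodes: stays acyclic
            if rng.random() < gamma:                 # small gamma => small families
                parents[v].append(u)
    return parents

for _ in range(3):
    print(sample_dag(4))
```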