The goal of learning from positive and unlabeled (PU) examples is to learn a classifier that predicts the posterior class probability. The challenge is that the available labels in the data are determined by (1) the true class, and (2) the labeling mechanism that selects which positive examples get labeled, where certain examples often have a higher probability of being selected than others. Incorrectly assuming an unbiased labeling mechanism leads to learning a biased classifier, yet this is what most existing methods do. A handful of methods make more realistic assumptions, but they are either so general that it is impossible to distinguish between the effects of the true classification and of the labeling mechanism, too restrictive to correctly model the real situation, or reliant on knowledge that is typically unavailable. This paper studies how to formulate and integrate more realistic assumptions for learning better classifiers by exploiting the strengths of probabilistic logic programming (PLP). Concretely, (1) we propose PU ProbLog, a PLP-based general method that allows one to (partially) model the labeling mechanism. (2) We show that our method generalizes existing methods, in the sense that it can model the same assumptions. (3) Thanks to the use of PLP, our method also supports PU learning in relational domains. (4) Our empirical analysis shows that partially modeling the labeling bias improves the learned classifiers.
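The bias described in this abstract can be made concrete with a toy calculation. The sketch below (not the paper's PU ProbLog method; all numbers and the two-valued feature are invented) contrasts the classic SCAR correction, which divides the observed label frequency by a single constant, with a correction that uses the true instance-dependent labeling propensity:

```python
# Toy illustration: why assuming an unbiased (SCAR) labeling mechanism
# biases the posterior when labeling is actually selective (SAR).
# Two feature values x in {0, 1}; all probabilities below are made up.

p_x = {0: 0.5, 1: 0.5}              # P(x)
p_y1_given_x = {0: 0.4, 1: 0.8}     # true posterior P(y=1 | x)
e = {0: 0.9, 1: 0.1}                # propensity P(s=1 | y=1, x): biased!

# Observable label frequency: P(s=1 | x) = P(y=1 | x) * e(x)
p_s1_given_x = {x: p_y1_given_x[x] * e[x] for x in (0, 1)}

# Overall label frequency c = P(s=1 | y=1), the constant SCAR methods use
c = sum(p_x[x] * p_y1_given_x[x] * e[x] for x in (0, 1)) / \
    sum(p_x[x] * p_y1_given_x[x] for x in (0, 1))

# SCAR-style correction P(s=1|x)/c: wrong under instance-dependent bias
scar_estimate = {x: p_s1_given_x[x] / c for x in (0, 1)}
# Correcting with the true propensity e(x) recovers the posterior
sar_estimate = {x: p_s1_given_x[x] / e[x] for x in (0, 1)}

print(scar_estimate)   # deviates from p_y1_given_x
print(sar_estimate)    # matches p_y1_given_x
```

This is why (partially) modeling the labeling mechanism, as the paper proposes, matters: the division by a single constant `c` is only valid when the propensity does not depend on the example.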
A ProbLog program is a logic program with facts that only hold with a specified probability. In this contribution, we extend the ProbLog language with the ability to answer "What if" queries. Intuitively, a ProbLog program defines a distribution by solving a system of equations in terms of mutually independent predefined Boolean random variables. In the theory of causality, Judea Pearl proposes counterfactual reasoning for such systems of equations. Based on Pearl's calculus, we provide a procedure for processing these counterfactual queries on ProbLog programs, together with a proof of correctness and a full implementation. Using the latter, we provide insights into the influence of different parameters on the scalability of inference. Finally, we also show that our approach is consistent with CP-logic, that is, with the causal semantics for logic programs with annotated disjunctions.
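Pearl's counterfactual procedure has three steps: abduction (condition the exogenous variables on the evidence), action (apply the intervention), and prediction (re-evaluate the equations). A minimal enumeration sketch under an invented sprinkler/rain program (this is not the paper's implementation, which works on ProbLog programs directly):

```python
from itertools import product

# Exogenous Boolean facts, as in a ProbLog program: 0.6::u1. 0.3::u2.
P_U = {"u1": 0.6, "u2": 0.3}

def model(u1, u2, do_sprinkler=None):
    """Structural equations: sprinkler = u1, rain = u2, wet = sprinkler or rain.
    do_sprinkler, if given, overrides the sprinkler equation (intervention)."""
    sprinkler = u1 if do_sprinkler is None else do_sprinkler
    rain = u2
    return sprinkler or rain          # wet

# Step 1 (abduction): posterior weight of each world given evidence wet=True
worlds = {}
for u1, u2 in product([True, False], repeat=2):
    w = (P_U["u1"] if u1 else 1 - P_U["u1"]) * \
        (P_U["u2"] if u2 else 1 - P_U["u2"])
    if model(u1, u2):                 # evidence holds in this world
        worlds[(u1, u2)] = w
evidence_prob = sum(worlds.values())

# Steps 2+3 (action, prediction): intervene do(sprinkler=False), re-evaluate
cf = sum(w for (u1, u2), w in worlds.items()
         if model(u1, u2, do_sprinkler=False)) / evidence_prob
print(round(cf, 4))   # P(it would still be wet | it was wet, sprinkler off)
```

Here the counterfactual probability is 0.3/0.72, i.e. the mass of evidence-compatible worlds in which rain alone suffices.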
In probabilistic abductive logic programming we are given a probabilistic logic program, a set of abducible facts, and a set of constraints. Inference in probabilistic abductive logic programs aims to find a subset of the abducible facts that is compatible with the constraints and that maximizes the joint probability of the query and the constraints. In this paper, we extend the PITA reasoner with an algorithm to perform abduction on probabilistic abductive logic programs exploiting Binary Decision Diagrams. Tests on several synthetic datasets show the effectiveness of our approach. (C) 2021 Elsevier Inc. All rights reserved.
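The task can be illustrated by naive enumeration over subsets of abducibles (the paper's PITA extension uses Binary Decision Diagrams for efficiency; the tiny program, probabilities and constraint below are invented):

```python
from itertools import combinations

# Abducible facts with probabilities: 0.4::a. 0.7::b.
abducibles = {"a": 0.4, "b": 0.7}

def query(chosen):                 # program: q :- a.  q :- b.
    return "a" in chosen or "b" in chosen

def constraint(chosen):            # integrity constraint: not (a and b)
    return not ("a" in chosen and "b" in chosen)

# Search for the constraint-compatible subset of abducibles that entails
# the query and maximizes the probability of the corresponding choice.
best, best_p = None, 0.0
names = list(abducibles)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        chosen = set(subset)
        if not (query(chosen) and constraint(chosen)):
            continue
        p = 1.0
        for f, pf in abducibles.items():
            p *= pf if f in chosen else 1 - pf
        if p > best_p:
            best, best_p = chosen, p
print(best, round(best_p, 2))   # {'b'} 0.42
```

The subset {a, b} is ruled out by the constraint, so the best explanation keeps only `b`, with probability 0.6 * 0.7 = 0.42.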
Representing uncertain information is crucial for modeling real-world domains. This has been fully recognized both in the field of logic programming and of Description Logics (DLs), with the introduction of probabilistic logic languages and various probabilistic extensions of DLs respectively. Several works have considered the distribution semantics as the underlying semantics of probabilistic logic programming (PLP) languages and probabilistic DLs (PDLs), and have then targeted the problem of reasoning and learning in them. This paper is a survey of inference, parameter and structure learning algorithms for PLP languages and PDLs based on the distribution semantics. A few of these algorithms are also available as web applications.
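The distribution semantics at the core of the surveyed languages can be stated in a few lines: a query's probability is the total mass of the worlds (truth assignments to the probabilistic facts) in which it is entailed. A minimal sketch with an invented two-fact alarm program:

```python
from itertools import product

# Probabilistic facts: 0.4::burglary. 0.2::earthquake.
facts = {"burglary": 0.4, "earthquake": 0.2}

def entails_alarm(world):          # rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

# Sum the probability of every world that entails the query.
p_alarm = 0.0
for values in product([True, False], repeat=len(facts)):
    world = dict(zip(facts, values))
    w = 1.0
    for f, pf in facts.items():
        w *= pf if world[f] else 1 - pf
    if entails_alarm(world):
        p_alarm += w
print(round(p_alarm, 4))   # 1 - 0.6 * 0.8 = 0.52
```

Enumeration is exponential in the number of facts, which is why the surveyed inference algorithms rely on knowledge compilation and approximation instead.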
ISBN:
(Print) 9783031157073; 9783031157066
Probabilistic logic programs under the distribution semantics (PLPDS) do not allow statistical probabilistic statements of the form "90% of birds fly", which Halpern defined as "Type 1" statements. In this paper, we add this kind of statement to PLPDS and introduce the PASTA ("Probabilistic Answer set programming for STAtistical probabilities") language. We translate programs in our new formalism into probabilistic answer set programs under the credal semantics. This approach differs from previous proposals, such as the one based on "probabilistic conditionals": instead of choosing a single model by making the maximum entropy assumption, we take all models into consideration and assign probability intervals to queries. In this way we refrain from making assumptions and obtain a more neutral framework. We also propose an inference algorithm and compare it with an existing solver for probabilistic answer set programs on a number of programs of increasing size, showing that our solution is faster and can deal with larger instances.
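The interval semantics can be illustrated by brute force. In the toy reconstruction below (the statement "at least 40% of birds fly" over two uncertain birds is invented; PASTA's actual translation targets answer set solvers and its answer sets are subject to minimality, which this sketch does not model in general), the lower bound of a query sums the worlds where it holds in every answer set, the upper bound those where it holds in at least one:

```python
from itertools import product

birds = {"a": 0.5, "b": 0.5}       # 0.5::bird(a). 0.5::bird(b).
RATIO = 0.4                        # "at least 40% of birds fly"

def answer_sets(present):
    """All fly/not-fly assignments over the present birds that satisfy
    the statistical constraint; each plays the role of one answer set."""
    sets = []
    for flies in product([True, False], repeat=len(present)):
        flying = {b for b, f in zip(present, flies) if f}
        if not present or len(flying) >= RATIO * len(present):
            sets.append(flying)
    return sets

lower = upper = 0.0                # bounds on P(fly(a))
for values in product([True, False], repeat=len(birds)):
    present = [b for b, v in zip(birds, values) if v]
    w = 1.0
    for b, pb in birds.items():
        w *= pb if b in present else 1 - pb
    sets = answer_sets(present)
    if sets and all("a" in s for s in sets):
        lower += w                 # query true in every answer set
    if any("a" in s for s in sets):
        upper += w                 # query true in some answer set
print(lower, upper)                # probability interval for fly(a)
```

With both birds present, any single flyer already meets the 40% threshold, so `fly(a)` holds in some but not all answer sets of that world, producing a genuine interval [0.25, 0.5] rather than a point probability.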
Authors:
Kurokochi, Rento; Ozaki, Tomonobu
Nihon Univ, Grad Sch Integrated Basic Sci, Setagaya Ward, 3-25-40 Sakurajosui, Tokyo 1568550, Japan
Nihon Univ, Dept Informat Sci, Setagaya Ward, 3-25-40 Sakurajosui, Tokyo 1568550, Japan
ISBN:
(Print) 9781665475327
The werewolf game is a multiplayer game of incomplete information, and it has recently been widely recognized as a promising new standard problem in Artificial Intelligence. In this paper, we evaluate the validity and effectiveness of common knowledge, i.e. common tendencies felt by a large number of players, in werewolf games. Specifically, we manually extract such tendencies as rules from the werewolf BBS data, and verify their effects on the role and team estimation tasks by using a probabilistic logic programming model that contains both the game rules and the extracted tendencies. As a result, we succeeded in extracting some common tendencies for team estimation, and confirmed that the extracted knowledge has positive effects for some purposes, such as the prediction of werewolves.
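How an extracted tendency can sharpen a role estimate reduces, in the simplest case, to a Bayesian update. The sketch below is purely hypothetical: the roles, the tendency "werewolves vote against the claimed seer more often", and all numbers are invented, whereas the paper encodes such rules in a probabilistic logic program over the full game:

```python
# Prior belief about a player's role (invented numbers).
prior = {"villager": 0.7, "werewolf": 0.3}
# Extracted tendency: P(votes against the claimed seer | role).
p_vote_given_role = {"villager": 0.2, "werewolf": 0.6}

# Bayes' rule after observing the vote.
evidence = sum(prior[r] * p_vote_given_role[r] for r in prior)
posterior = {r: prior[r] * p_vote_given_role[r] / evidence for r in prior}
print(posterior)   # the werewolf probability rises after the observed vote
```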
Probabilistic logic programming is a major part of statistical relational artificial intelligence, where approaches from logic and probability are brought together to reason about and learn from relational domains in a setting of uncertainty. However, the behaviour of statistical relational representations across variable domain sizes is complex, and scaling inference and learning to large domains remains a significant challenge. In recent years, connections have emerged between domain size dependence, lifted inference and learning from sampled subpopulations. The asymptotic behaviour of statistical relational representations has come under scrutiny, and projectivity was investigated as the strongest form of domain size dependence, in which query marginals are completely independent of the domain size. In this contribution we show that every probabilistic logic program under the distribution semantics is asymptotically equivalent to an acyclic probabilistic logic program consisting only of determinate clauses over probabilistic facts. We conclude that every probabilistic logic program inducing a projective family of distributions is in fact everywhere equivalent to a program from this fragment, and we investigate the consequences for the projective families of distributions expressible by probabilistic logic programs.
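Projectivity can be checked numerically on a small example. The program below is constructed for illustration (it is not from the paper): a determinate clause `q(X) :- p(X), r(X)` over independent probabilistic facts, whose query marginal is computed by full world enumeration and is the same for every domain size:

```python
from itertools import product

P_P, P_R = 0.3, 0.5      # 0.3::p(X).  0.5::r(X).  q(X) :- p(X), r(X).

def marginal_q_a(n):
    """P(q(a)) by brute-force enumeration over a domain of size n."""
    domain = ["a"] + [f"d{i}" for i in range(n - 1)]
    facts = [("p", d) for d in domain] + [("r", d) for d in domain]
    total = 0.0
    for values in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, values))
        w = 1.0
        for (pred, _d), v in world.items():
            p = P_P if pred == "p" else P_R
            w *= p if v else 1 - p
        if world[("p", "a")] and world[("r", "a")]:   # q(a) holds
            total += w
    return total

print([round(marginal_q_a(n), 6) for n in range(1, 5)])  # constant in n
```

The marginal stays at 0.3 * 0.5 = 0.15 regardless of how many extra domain elements are added, which is exactly the independence from domain size that defines a projective family.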
ISBN:
(Print) 9783030611453; 9783030611460
Smart contracts are computer programs that run in a distributed network, the blockchain. These contracts are used to regulate the interaction among parties in a fully decentralized way, without the need for a trusted authority, and, once deployed, are immutable. The immutability property requires that the programs be deeply analyzed and tested, in order to ensure that they behave as expected and to avoid bugs and errors. In this paper, we present a method to translate smart contracts into probabilistic logic programs that can be used to analyse the expected values of several smart contract utility parameters and to get a quantitative idea of how smart contract variables change over time. Finally, we apply this method to study three real smart contracts deployed on the Ethereum blockchain.
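The kind of quantity such an analysis produces can be sketched with a hypothetical toy contract (this is not one of the paper's Ethereum case studies): in each round one participant deposits 1 unit with some probability, and the expected balance after each round is computed exactly by enumerating the possible histories, as a probabilistic-program analysis would:

```python
from itertools import product

P_DEPOSIT, ROUNDS = 0.8, 4   # invented deposit probability and horizon

expected = []
for t in range(1, ROUNDS + 1):
    ev = 0.0
    # Enumerate every deposit/no-deposit history of length t.
    for deposits in product([True, False], repeat=t):
        w = 1.0
        for d in deposits:
            w *= P_DEPOSIT if d else 1 - P_DEPOSIT
        ev += w * sum(deposits)          # balance = number of deposits
    expected.append(round(ev, 6))
print(expected)   # expected balance grows by 0.8 per round
```

Replacing the single deposit rule with the translated clauses of a real contract yields the same kind of time-indexed expected values for its state variables.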
We introduce DeepProbLog, a neural probabilistic logic programming language that incorporates deep learning by means of neural predicates. We show how existing inference and learning techniques of the underlying probabilistic logic programming language ProbLog can be adapted for the new language. We theoretically and experimentally demonstrate that DeepProbLog supports (i) both symbolic and subsymbolic representations and inference, (ii) program induction, (iii) probabilistic (logic) programming, and (iv) (deep) learning from examples. To the best of our knowledge, this work is the first to propose a framework where general-purpose neural networks and expressive probabilistic-logical modeling and reasoning are integrated in a way that exploits the full expressiveness and strengths of both worlds and can be trained end-to-end based on examples. (C) 2021 Elsevier B.V. All rights reserved.
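The core idea of a neural predicate can be shown in miniature, in the spirit of DeepProbLog's well-known MNIST-addition example: the network contributes a distribution over outcomes, and logical inference sums over the joint outcomes that satisfy the query. The two distributions below are stand-ins for softmax outputs of a trained network, not real model outputs:

```python
from itertools import product

# "Neural predicates": distributions over digit values 0..2 for two images.
digit1 = [0.1, 0.7, 0.2]    # P(first image shows 0, 1, 2)
digit2 = [0.3, 0.5, 0.2]    # P(second image shows 0, 1, 2)

def p_sum(target):
    """Probability of addition(Img1, Img2, target): sum the joint
    probability of every digit pair whose sum equals the target."""
    return sum(digit1[a] * digit2[b]
               for a, b in product(range(3), repeat=2) if a + b == target)

print(round(p_sum(2), 4))   # 0.1*0.2 + 0.7*0.5 + 0.2*0.3 = 0.43
```

In DeepProbLog this probability is differentiable in the network outputs, which is what allows the logic program and the networks to be trained end-to-end from examples of sums alone.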
Probabilistic logic programming is increasingly important in artificial intelligence and related fields as a formalism to reason about uncertainty. It generalises logic programming with the possibility of annotating clauses with probabilities. This paper proposes a coalgebraic semantics for probabilistic logic programming. Programs are modelled as coalgebras for a certain functor F, and two semantics are given in terms of cofree coalgebras. First, the cofree F-coalgebra yields a semantics in terms of derivation trees. Second, by embedding F into another type G, as the cofree G-coalgebra we obtain a 'possible worlds' interpretation of programs, from which one may recover the usual distribution semantics of probabilistic logic programming. Furthermore, we show that a similar approach can be used to provide a coalgebraic semantics for weighted logic programming.