ISBN (digital): 9783319237084
ISBN (print): 9783319237084; 9783319237077
A key feature of ProPPR, a recent probabilistic logic language inspired by stochastic logic programs (SLPs), is its use of personalized PageRank for efficient inference. We adopt this view of probabilistic inference as a random walk over a graph constructed from a labeled logic program to investigate the relationship between these two languages, showing that the differences in semantics rule out direct, generally applicable translations between them.
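The random-walk view of inference can be pictured with a minimal personalized PageRank computed by power iteration over a small graph (a generic sketch on an invented toy graph, not ProPPR's actual proof-graph construction):

```python
# Personalized PageRank by power iteration (illustrative sketch, not
# ProPPR's implementation).  graph: node -> list of successor nodes.
def personalized_pagerank(graph, seed, alpha=0.15, iters=100):
    p = {v: 0.0 for v in graph}
    p[seed] = 1.0
    for _ in range(iters):
        nxt = {v: alpha if v == seed else 0.0 for v in graph}
        for v, succs in graph.items():
            if not succs:                      # dangling node: mass restarts
                nxt[seed] += (1 - alpha) * p[v]
                continue
            share = (1 - alpha) * p[v] / len(succs)
            for w in succs:
                nxt[w] += share
        p = nxt
    return p

# Invented toy "proof graph": query node q with two derivation steps
scores = personalized_pagerank({"q": ["a", "b"], "a": ["b"], "b": []}, "q")
```

The restart probability `alpha` concentrates probability mass near the seed (query) node, which is what makes this style of inference local and efficient.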
ISBN (digital): 9783319237084
ISBN (print): 9783319237084; 9783319237077
This paper introduces a new open-source implementation of a nonmonotonic learning method called XHAIL and shows how it can be used for abductive and inductive inference on metabolic networks that are many times larger than could be handled by the preceding prototype. We summarise several implementation improvements that increase its efficiency, and we introduce an extended form of language bias that further increases its usability. We investigate the system's scalability in a case study involving real data previously collected by a Robot Scientist and show how it led to the discovery of an error in a whole-organism model of yeast metabolism.
We introduce a probabilistic language and an efficient inference algorithm based on distributional clauses for static and dynamic inference in hybrid relational domains. Static inference is based on sampling, where the samples represent (partial) worlds (with discrete and continuous variables). Furthermore, we use backward reasoning to determine which facts should be included in the partial worlds. For filtering in dynamic models we combine the static inference algorithm with particle filters and guarantee that the previous partial samples can be safely forgotten, a condition that does not hold in most logical filtering frameworks. Experiments show that the proposed framework can outperform classic sampling methods for static and dynamic inference and that it is promising for robotics and vision applications. In addition, it provides the correct results in domains in which most probabilistic programming languages fail.
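The filtering step can be illustrated with a plain bootstrap particle filter on a toy one-dimensional random-walk model (an illustrative sketch only; the distributional-clause machinery and backward reasoning over partial worlds are not modelled here, and all names and parameters are invented):

```python
import math
import random

# Bootstrap particle filter for a toy 1D random-walk model (generic
# sketch; logical backward reasoning over partial worlds is not modelled).
def particle_filter(observations, n=2000, obs_noise=1.0, trans_noise=0.5):
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # predict: propagate each particle through the transition model
        particles = [x + random.gauss(0.0, trans_noise) for x in particles]
        # weight: Gaussian likelihood of the observation y
        weights = [math.exp(-((y - x) ** 2) / (2 * obs_noise ** 2))
                   for x in particles]
        # resample; the previous particle set can then be forgotten
        particles = random.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates

random.seed(0)
estimates = particle_filter([2.0] * 10)   # repeated observation at 2.0
```

The resampling step is what licenses forgetting the previous samples, the property the abstract highlights for its own (logical) filtering setting.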
ISBN (print): 9783540784685
Mode declarations are a successful form of language bias in explanatory ILP. But, while they are heavily used in Horn systems, they have yet to be similarly exploited in more expressive clausal settings. This paper presents a mode-directed ILP procedure for full clausal logic. It employs a first-order inference engine to abductively and inductively explain a set of examples with respect to a background theory. Each stage of hypothesis formation is guided by mode declarations using a generalisation of efficient Horn clause techniques for inverting entailment. Our approach exploits language bias more effectively than previous non-Horn ILP methods and avoids the need for interactive user assistance.
ISBN (digital): 9783319237084
ISBN (print): 9783319237084; 9783319237077
In a previous work we proposed a framework for learning normal logic programs from transitions of interpretations. Given a set of pairs of interpretations (I, J) such that J = T_P(I), where T_P is the immediate consequence operator, we infer the program P. Here we propose a new learning approach that produces output of higher quality. This new approach relies on specialization in place of generalization: it generates hypotheses by specialization from the most general clauses until no negative transition is covered. Unlike previous approaches, the output of this method does not depend on variable/transition ordering. The new method guarantees that the learned rules are minimal, that is, the body of each rule constitutes a prime implicant to infer the head.
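The learning setting can be made concrete with a small propositional version of the immediate consequence operator T_P (a sketch of the input transition pairs only, not of the specialization algorithm; the toy program is invented):

```python
# Immediate consequence operator T_P for a propositional normal logic
# program.  A rule is (head, positive_body, negative_body); an
# interpretation is a set of atoms.  Sketch of the learning input
# (transition pairs), not of the specialization algorithm.
def t_p(program, interpretation):
    return {head for head, pos, neg in program
            if pos <= interpretation and not (neg & interpretation)}

# Invented toy program:  p <- q.    q <- not p.
program = [("p", {"q"}, set()), ("q", set(), {"p"})]
interpretations = [set(), {"p"}, {"q"}, {"p", "q"}]
transitions = [(I, t_p(program, I)) for I in interpretations]
```

A learner in this setting receives the `transitions` pairs and must recover (a program equivalent to) `program`.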
ISBN (digital): 9783319237084
ISBN (print): 9783319237084; 9783319237077
Probabilistic logic languages, such as ProbLog and CP-logic, are probabilistic generalizations of logic programming that allow one to model probability distributions over complex, structured domains. Their key probabilistic constructs are probabilistic facts and annotated disjunctions, which represent binary and multi-valued random variables, respectively. ProbLog allows the use of annotated disjunctions by translating them into probabilistic facts and rules. This encoding is tailored towards the task of computing the marginal probability of a query given evidence (MARG), but is not correct for the task of finding the most probable explanation (MPE), which has important applications, e.g., in diagnostics and scheduling. In this work, we propose a new encoding of annotated disjunctions which allows both correct MARG and MPE. We explore, from both theoretical and experimental perspectives, the trade-off between the encoding suitable only for MARG inference and the newly proposed (general) approach.
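The marginal-preserving translation of an annotated disjunction into independent probabilistic facts can be sketched numerically (a simplified illustration of the sequential-switch encoding, not ProbLog's internal implementation):

```python
# Sequential encoding of an annotated disjunction p1::h1 ; ... ; pn::hn
# into independent switches: head i is selected iff switch i succeeds and
# every earlier switch fails.  Purely numeric sketch of why marginals are
# preserved; not ProbLog's internal implementation.
def encode(probs):
    switches, remaining = [], 1.0
    for p in probs:
        switches.append(p / remaining if remaining > 0 else 0.0)
        remaining -= p
    return switches

probs = [0.2, 0.3, 0.5]
switches = encode(probs)
# marginal of head i under the encoding equals the annotated probability
for i, s in enumerate(switches):
    marginal = s
    for earlier in switches[:i]:
        marginal *= 1 - earlier
    assert abs(marginal - probs[i]) < 1e-9
```

Informally, the trouble for MPE is that each head choice corresponds to a full truth assignment of the switches, so maximizing over the encoded worlds need not match maximizing over the original program's explanations, which is the mismatch the paper addresses.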
ISBN (print): 3540784683
We revisit an application developed originally using abductive inductive logic programming (ILP) for modeling inhibition in metabolic networks. The example data was derived from studies of the effects of toxins on rats using Nuclear Magnetic Resonance (NMR) time-trace analysis of their biofluids, together with background knowledge representing a subset of the Kyoto Encyclopedia of Genes and Genomes (KEGG). We now apply two Probabilistic ILP (PILP) approaches, abductive Stochastic Logic Programs (SLPs) and PRogramming In Statistical Modeling (PRISM), to the application. Both approaches support abductive learning and probability predictions. Abductive SLPs are a PILP framework that provides possible-worlds semantics to SLPs through abduction. Instead of learning logic models from non-probabilistic examples as done in ILP, the PILP approach applied in this paper is based on a general technique for introducing probability labels within a standard scientific experimental setting involving control and treated data. Our results demonstrate that the PILP approach provides a way of learning probabilistic logic models from probabilistic examples, and that the PILP models learned from probabilistic examples lead to a significant decrease in error, accompanied by improved insight, compared with the PILP models learned from non-probabilistic examples.
ISBN (print): 9783540899815
This paper introduces a new inductive logic programming (ILP) framework called Top Directed Hypothesis Derivation (TDHD). In this framework each hypothesised clause must be derivable from a given logic program called the top theory (T). The top theory can be viewed as a declarative bias which defines the hypothesis space. This replaces the metalogical mode statements which are used in many ILP systems. Firstly, we present a theoretical framework for TDHD and show that standard SLD derivation can be used to efficiently derive hypotheses from T. Secondly, we present a prototype implementation of TDHD within a new ILP system called TopLog. Thirdly, we show that the accuracy and efficiency of TopLog, on several benchmark datasets, is competitive with a state-of-the-art ILP system like Aleph.
Although inductive logic programming can have other usages, such as program synthesis, it is normally used as a logic-based supervised machine learning algorithm. The usual setting for its application is, given: 1) a s...
ISBN (digital): 9783319237084
ISBN (print): 9783319237084; 9783319237077
Heuristics such as Occam's Razor have played a significant role in reducing the search for solutions of a learning task, by giving preference to the most compressed hypotheses. For some application domains, however, these heuristics may become too weak and lead to solutions that are irrelevant or inapplicable. This is particularly the case when hypotheses ought to conform, within the scope of a given language bias, to precise domain-dependent structures. In this paper we introduce a notion of inductive learning through constraint-driven bias that addresses this problem. Specifically, we propose a notion of learning task in which the hypothesis space, induced by its mode declarations, is further constrained by domain-specific denials, and acceptable hypotheses are (brave inductive) solutions that conform with the given domain-specific constraints. We provide an implementation of this new learning task by extending the ASPAL learning approach and leveraging its meta-level representation of the hypothesis space to compute acceptable hypotheses. We demonstrate the usefulness of this new notion of learning by applying it to two classes of problems: automated revision of software system goal models and learning of stratified normal programs.
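The idea of pruning a mode-induced hypothesis space with domain-specific denials can be sketched propositionally (all names and constraints are invented; this is not ASPAL's syntax or meta-level encoding):

```python
from itertools import combinations

# Constraint-driven bias, propositionally: enumerate candidate hypotheses
# (sets of body literals) from a small mode-induced space and keep only
# those violating no domain-specific denial.  All names and constraints
# are invented; this is not ASPAL's syntax or meta-level encoding.
literals = ["a", "b", "c", "d"]
denials = [{"a", "b"}, {"c", "d"}]      # :- a, b.     :- c, d.

def acceptable(hypothesis):
    return not any(denial <= hypothesis for denial in denials)

space = [set(c) for r in range(3) for c in combinations(literals, r)]
solutions = [h for h in space if acceptable(h)]
```

Compression-based heuristics would rank all eleven candidates; the denials instead rule out the two structurally inadmissible ones outright.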