ISBN (Print): 9783319999593
The proceedings contain 10 papers. The special focus in this conference is on Inductive Logic Programming. The topics include: Large-Scale Assessment of Deep Relational Machines; How Much Can Experimental Cost Be Reduced in Active Learning of Agent Strategies?; Diagnostics of Trains with Semantic Diagnostics Rules; The Game of Bridge: A Challenge for ILP; Sampling-Based SAT/ASP Multi-model Optimization as a Framework for Probabilistic Inference; Explaining Black-Box Classifiers with ILP – Empowering LIME with Aleph to Approximate Non-linear Decisions with Relational Rules; Learning Dynamics with Synchronous, Asynchronous and General Semantics; Was the Year 2000 a Leap Year? Step-Wise Narrowing Theories with Metagol.
ISBN (Print): 9783642212949
The proceedings contain 28 papers. The topics discussed include: probabilistic relational learning and inductive logic programming at a global scale; practical probabilistic programming; learning multi-class theories in ILP; a numerical refinement operator based on multi-instance learning; not far away from home: a relational distance-based approach to understanding images of houses; approximate inference for logic programs with annotated disjunctions; approximate Bayesian computation for the parameters of PRISM programs; probabilistic rule learning; interactive discriminative mining of chemical fragments; MMRF for proteome annotation applied to human protein disease prediction; multivariate prediction for learning on the semantic web; hypothesizing about causal networks with positive and negative effects by meta-level abduction; BET: an inductive logic programming workbench; and seeing the world through homomorphism: an experimental study on reducibility of examples.
ISBN (Print): 9783319780894
The proceedings contain 12 papers. The special focus in this conference is on Inductive Logic Programming. The topics include: Pruning hypothesis spaces using learned domain theories; An investigation into the role of domain-knowledge on the use of embeddings; Positive and unlabeled relational classification through label frequency estimation; On applying probabilistic logic programming to breast cancer data; Logical vision: one-shot meta-interpretive learning from real images; Demystifying relational latent representations; Parallel online learning of event definitions; Relational restricted Boltzmann machines: a probabilistic logic learning approach; Parallel inductive logic programming system for superlinear speedup; Inductive learning from state transitions over continuous domains.
ISBN (Print): 3540784683
The proceedings contain 28 papers. The topics discussed include: learning with kernels and logical representations; beyond prediction: directions for probabilistic and relational learning; learning probabilistic logic models from probabilistic examples; learning directed probabilistic logical models using ordering-search; learning to assign degrees of belief in relational domains; bias/variance analysis for relational domains; induction of optimal semantic semi-distances for clausal knowledge bases; clustering relational data based on randomized propositionalization; structural statistical software testing with active learning in a graph; learning declarative bias; learning relational options for inductive transfer in relational reinforcement learning; a phase transition-based perspective on multiple instance kernels; and combining clauses with various precisions and recalls to produce accurate probabilistic estimates.
This paper proposes a method for efficiently enumerating all solutions of a given ILP problem. Inductive logic programming (ILP) is a machine learning technique that assumes that all data, background knowledge, and hy...
ISBN (Digital): 9783319999609
ISBN (Print): 9783319999609; 9783319999593
This paper proposes multi-model optimization through SAT witness or answer set sampling, with common probabilistic reasoning tasks as primary use cases (including deduction-style probabilistic inference and hypothesis weight learning). Our approach enhances a state-of-the-art SAT/ASP solving algorithm with gradient descent as the branching-literal decision approach, and optionally a cost backtracking mechanism. Sampling models with these methods minimizes a task-specific, user-provided multi-model cost function while adhering to given logical background knowledge (either a Boolean formula in CNF or a normal logic program under stable model semantics). Features of the framework include its relative simplicity and high degree of expressiveness, since arbitrary differentiable cost functions and background knowledge can be provided.
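The core idea, roughly, is that models (SAT witnesses or answer sets) of the background knowledge are sampled so that a cost defined over the whole sample is driven down. The toy Python sketch below illustrates only that idea, not the paper's solver: it brute-forces the models of a tiny CNF and greedily grows a sample that minimizes a hypothetical squared-error cost on the marginal frequency of one atom. All identifiers (satisfies, multi_model_cost, the 0.7 target) are illustrative assumptions.

```python
from itertools import product

# Toy background knowledge in CNF over variables a, b: (a or b) and (not a or not b).
VARS = ["a", "b"]
cnf = [[("a", True), ("b", True)], [("a", False), ("b", False)]]

def satisfies(assignment, clauses):
    """True if the assignment (dict: variable -> bool) satisfies every clause."""
    return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

# Enumerate all models of the background knowledge (feasible only at toy scale;
# the paper instead samples models with an enhanced SAT/ASP solver).
models = [dict(zip(VARS, bits))
          for bits in product([False, True], repeat=len(VARS))
          if satisfies(dict(zip(VARS, bits)), cnf)]

def multi_model_cost(sample, target=0.7):
    """Hypothetical cost over a whole sample of models: squared error between the
    frequency of atom 'a' in the sample and a target marginal probability."""
    freq = sum(m["a"] for m in sample) / len(sample)
    return (freq - target) ** 2

# Greedy sampling loop: repeatedly add the model that most reduces the cost.
sample = [models[0]]
for _ in range(99):
    best = min(models, key=lambda m: multi_model_cost(sample + [m]))
    sample.append(best)

print("final cost:", multi_model_cost(sample))
print("P(a) over the sample:", sum(m["a"] for m in sample) / len(sample))
```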
ISBN (Digital): 9783319780900
ISBN (Print): 9783319780900; 9783319780894
In this study, we improve our parallel inductive logic programming (ILP) system to enable superlinear speedup. This improvement redesigns several features of our ILP learning system and parallel mechanism. The redesigned ILP learning system searches for and gathers all rules that have the same evaluation. The redesigned parallel mechanism adds a communication protocol for sharing the evaluation of the identified rules, thereby realizing superlinear speedup.
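Although the paper's system is distributed, the gist of the redesigned mechanism can be pictured with a small sketch: workers explore disjoint parts of the rule space, publish each rule's evaluation through a shared record, and every rule that ties the best evaluation is gathered rather than discarded. In the hedged Python toy below, threads stand in for the parallel workers and the evaluation function is a dummy placeholder; none of the identifiers come from the paper.

```python
import threading

best = {"score": float("-inf"), "rules": []}   # shared "best evaluation so far"
lock = threading.Lock()

def evaluate(rule):
    """Placeholder evaluation; a real ILP system would score coverage of examples."""
    return hash(rule) % 10

def worker(rules):
    for rule in rules:
        score = evaluate(rule)
        with lock:                              # the "communication protocol": publish evaluations
            if score > best["score"]:
                best["score"], best["rules"] = score, [rule]
            elif score == best["score"]:
                best["rules"].append(rule)      # gather every rule with the same evaluation

# Partition a dummy rule space between two workers and run them.
rule_space = [f"rule_{i}" for i in range(100)]
threads = [threading.Thread(target=worker, args=(rule_space[i::2],)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("best evaluation:", best["score"])
print("rules sharing it:", best["rules"])
```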
ISBN (Digital): 9783319999609
ISBN (Print): 9783319999609; 9783319999593
Meta-interpretive learning (MIL) is a form of inductive logic programming. MIL uses second-order Horn clauses, called metarules, as a form of declarative bias. Metarules define the structures of learnable programs and thus the hypothesis space. Deciding which metarules to use is a trade-off between efficiency and expressivity: the hypothesis space grows with more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. A recent paper used Progol's entailment reduction algorithm to identify irreducible, or minimal, sets of metarules. In some cases, as few as two metarules were shown to be sufficient to entail all hypotheses in an infinite language. Moreover, it was shown that, compared to non-minimal sets, learning with minimal sets of metarules improves predictive accuracies and lowers learning times. In this paper, we show that entailment reduction can be too strong and can remove metarules necessary to make a hypothesis more specific. We describe a new reduction technique based on derivations. Specifically, we introduce the derivation reduction problem: the problem of finding a finite subset of a Horn theory from which the whole theory can be derived using SLD-resolution. We describe a derivation reduction algorithm which we use to reduce sets of metarules. We also theoretically study whether certain sets of metarules can be derivationally reduced to minimal finite subsets. Our experiments compare learning with entailment-reduced and derivation-reduced sets of metarules. In general, using derivation-reduced sets of metarules outperforms using entailment-reduced sets, both in terms of predictive accuracies and learning times.
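As a rough illustration of the reduction loop, the propositional Python toy below treats a clause as redundant when it can be rebuilt by unfolding the remaining clauses, a simple stand-in for the paper's SLD-resolution check over second-order metarules. The clause encoding, the depth bound, and the greedy removal order are all illustrative assumptions rather than the authors' algorithm.

```python
def derivable(head, body, theory, depth=3):
    """Can the clause `head :- body` be rebuilt by unfolding clauses of `theory`?"""
    if depth == 0:
        return False
    for h, b in theory:
        if h != head:
            continue
        # every body atom of the candidate theory clause must already occur in the
        # target body, or itself be derivable from that body at a smaller depth
        if all(atom in body or derivable(atom, body, theory, depth - 1) for atom in b):
            return True
    return False

def derivation_reduce(theory):
    """Greedily drop clauses that the remaining clauses can still derive."""
    reduced = list(theory)
    for clause in list(theory):
        rest = [c for c in reduced if c != clause]
        if derivable(clause[0], clause[1], rest):
            reduced = rest
    return reduced

# Toy theory: "r :- p, q" is redundant because it is derivable from
# "r :- s" together with "s :- p, q".
theory = [("r", frozenset({"s"})),
          ("s", frozenset({"p", "q"})),
          ("r", frozenset({"p", "q"}))]
print(derivation_reduce(theory))   # keeps only the first two clauses
```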
ILP learners are commonly implemented to consider sequentially each training example for each of the hypotheses tested. Computing the cover set of a hypothesis in this way is costly, and introduces a major bottleneck ...