ISBN: (print) 3540201440
This work aims at improving the scalability of memory usage in inductive logic programming (ILP) systems. In this context, we propose two efficient data structures: the Trie, used to represent lists and clauses; and the RL-Tree, a novel data structure used to represent clause coverage. We evaluate their performance in the April system using well-known datasets. Initial results show a substantial reduction in memory usage without incurring extra execution-time overheads. Our proposal is applicable in any ILP system.
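The memory savings described above come from prefix sharing: when many clauses or lists begin with the same sequence of symbols, a trie stores that common prefix once. The following is a minimal sketch of the idea (not the April implementation; the flattening of terms into token lists is an assumption for illustration):

```python
class TrieNode:
    __slots__ = ("children", "terminal")

    def __init__(self):
        self.children = {}   # token -> TrieNode
        self.terminal = False  # marks the end of a stored sequence


class Trie:
    """Stores clauses/lists flattened into token sequences.
    Shared prefixes are stored once, which is the source of the
    memory reduction compared to keeping each sequence separately."""

    def __init__(self):
        self.root = TrieNode()
        self.node_count = 1  # count nodes to make the sharing visible

    def insert(self, tokens):
        node = self.root
        for tok in tokens:
            if tok not in node.children:
                node.children[tok] = TrieNode()
                self.node_count += 1
            node = node.children[tok]
        node.terminal = True

    def contains(self, tokens):
        node = self.root
        for tok in tokens:
            node = node.children.get(tok)
            if node is None:
                return False
        return node.terminal


# Two clause bodies sharing the literal prefix parent(X, Y):
t = Trie()
t.insert(["parent", "X", "Y", "male", "X"])     # e.g. a father/2 rule body
t.insert(["parent", "X", "Y", "female", "X"])   # e.g. a mother/2 rule body
# 10 tokens were inserted, but only 7 non-root nodes exist:
# the 3-token prefix ["parent", "X", "Y"] is shared.
print(t.node_count)  # 8 (7 token nodes + root)
```

A flat representation would store all 10 tokens; the trie stores 7 nodes here, and the gap widens as more clauses share prefixes, which is why tries pay off on the large clause sets ILP systems generate.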
We study several complexity parameters for first-order formulas and their suitability for first-order learning models. We show that the standard notion of size is not captured by the sets of parameters used in the literature, and thus these cannot give a complete characterization in terms of learnability with polynomial resources. We then identify an alternative notion of size and a simple set of parameters that are useful for first-order Horn expressions. These parameters are the number of clauses in the expression, the maximum number of distinct terms in a clause, and the maximum number of literals in a clause. Matching lower bounds derived using the Vapnik-Chervonenkis dimension complete the picture, showing that these parameters are indeed crucial.
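The three parameters named in the abstract are easy to compute once a representation is fixed. Below is a hedged sketch: the encoding of literals as `(predicate, args)` pairs and of compound terms as tuples is an assumption made here for illustration, and whether "distinct terms" includes subterms follows one common convention:

```python
def subterms(term):
    """Yield a term and all of its subterms.
    Convention (assumed): variables/constants are strings;
    a compound term f(t1, ..., tn) is the tuple ("f", t1, ..., tn)."""
    yield term
    if isinstance(term, tuple):
        for arg in term[1:]:
            yield from subterms(arg)


def clause_params(clause):
    """Return (distinct terms, literals) for one clause,
    where a clause is a list of (predicate, args) literals."""
    terms = set()
    for _pred, args in clause:
        for a in args:
            terms |= set(subterms(a))
    return len(terms), len(clause)


def expression_params(clauses):
    """The three parameters from the abstract: number of clauses,
    max distinct terms per clause, max literals per clause."""
    per_clause = [clause_params(c) for c in clauses]
    return (len(clauses),
            max(t for t, _ in per_clause),
            max(l for _, l in per_clause))


# One clause, p(X, f(X)) ∧ q(X) → r(f(X)), written as its literal set:
clause = [("p", ("X", ("f", "X"))),
          ("q", ("X",)),
          ("r", (("f", "X"),))]
print(expression_params([clause]))  # (1, 2, 3): the terms are X and f(X)
```

Note that the term count stays at 2 however often X and f(X) recur, which is what lets these parameters diverge from the standard notion of size.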
This paper proposes a logic-based machine learning approach called Acuity, which is designed to facilitate user-guided elucidation of novel phenomena from evidence sparsely distributed across large volumes of linked re...
ISBN: (print) 3540201440
The learning system Progol5 and the underlying inference method of Bottom Generalisation are firmly established within inductive logic programming (ILP). But despite their success, it is known that Bottom Generalisation, and therefore Progol5, are restricted to finding hypotheses that lie within the semantics of Plotkin's relative subsumption. This paper exposes a previously unknown incompleteness of Progol5 with respect to Bottom Generalisation and proposes a new approach, called Hybrid Abductive Inductive Learning, that integrates the ILP principles of Progol5 with Abductive Logic Programming (ALP). A proof procedure is proposed, called HAIL, that not only overcomes this newly discovered incompleteness, but further generalises Progol5 by computing multiple clauses in response to a single seed example and deriving hypotheses outside Plotkin's relative subsumption. A semantics is presented, called Kernel Generalisation, which extends that of Bottom Generalisation and includes the hypotheses constructed by HAIL.
Learning-theoretic aspects of mathematics and logic have been studied by many authors. They study how mathematical and logical objects are algorithmically "learned" (inferred) from finite data. Although these works study mathematical objects, their objective is learning itself. In this paper, a mathematics whose foundation itself is learning-theoretic will be introduced. It is called Limit-Computable Mathematics. It was originally introduced as a means for "Proof Animation", which is expected to make interactive formal proof development easier. Although the original objective was not learning-theoretic at all, learning theory is indispensable for our research. It suggests that logic and learning theory are related in a still unknown but deep new way. (c) 2005 Published by Elsevier B.V.
This paper reviews experiments with an approach to discovery through a robot's experimentation in its environment. In addition to discovering laws that enable predictions, we are particularly interested in the mecha...
ISBN: (print) 9781577358800
The ability to efficiently solve hard combinatorial optimization problems is a key prerequisite to various applications of declarative programming paradigms. Symmetries in solution candidates pose a significant challenge to modern optimization algorithms, since the enumeration of such candidates might substantially reduce their optimization performance. This paper proposes a novel approach using inductive logic programming (ILP) to lift symmetry-breaking constraints for optimization problems modeled in Answer Set Programming (ASP). Given an ASP encoding with optimization statements and a set of small representative instances, our method augments ground ASP programs with auxiliary normal rules enabling the identification of symmetries using existing tools, like SBASS. Then, the obtained symmetries are lifted to first-order constraints with ILP. We prove the correctness of our method and evaluate it on real-world optimization problems from the domain of automated configuration. Our experiments show significant improvements in optimization performance due to the learned first-order constraints.
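To make the underlying notion concrete: a symmetry is a permutation of constants that maps the set of solutions onto itself, and symmetry breaking keeps only one representative (e.g. the lexicographically least) per orbit. The sketch below illustrates this on ground atoms encoded as Python tuples; it is an assumption-laden toy, not the SBASS/ASP pipeline from the paper:

```python
def rename(atom, sigma):
    """Apply a constant permutation sigma to a ground atom (pred, args)."""
    pred, args = atom
    return (pred, tuple(sigma.get(a, a) for a in args))


def is_symmetry(solutions, sigma):
    """sigma is a symmetry iff it maps the solution set onto itself.
    Each solution is a frozenset of ground atoms."""
    mapped = {frozenset(rename(a, sigma) for a in s) for s in solutions}
    return mapped == set(solutions)


def break_symmetries(solutions, sigmas):
    """Keep only solutions that are lexicographically least among their
    images under the given permutations (a one-step lex-leader check)."""
    def key(s):
        return tuple(sorted(s))
    return [s for s in solutions
            if all(key(s) <= key(frozenset(rename(a, sig) for a in s))
                   for sig in sigmas)]


# Two interchangeable machines m1/m2 yield two symmetric job assignments:
sols = {
    frozenset({("assign", ("j1", "m1")), ("assign", ("j2", "m2"))}),
    frozenset({("assign", ("j1", "m2")), ("assign", ("j2", "m1"))}),
}
swap = {"m1": "m2", "m2": "m1"}
print(is_symmetry(sols, swap))              # True: swapping m1/m2 is a symmetry
print(len(break_symmetries(sols, [swap])))  # 1: only one representative survives
```

The paper's contribution is to learn such pruning conditions as *first-order* ASP constraints from small instances, so they generalize to large ones instead of being enumerated per ground atom as here.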
Many domains in the field of inductive logic programming (ILP) involve highly unbalanced data. A common way to measure performance in these domains is to use precision and recall instead of simply using accuracy. The goal of our research is to find new approaches within ILP particularly suited for large, highly skewed domains. We propose Gleaner, a randomized search method that collects good clauses from a broad spectrum of points along the recall dimension in recall-precision curves and employs an "at least L of these K clauses" thresholding method to combine sets of selected clauses. Our research focuses on Multi-Slot Information Extraction (IE), a task that typically involves many more negative examples than positive examples. We formulate this problem as a relational domain, using two large testbeds involving the extraction of important relations from the abstracts of biomedical journal articles. We compare Gleaner to ensembles of standard theories learned by Aleph, finding that Gleaner produces comparable test-set results in a fraction of the training time.
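The "at least L of these K clauses" combination is simple to state in code: an example is predicted positive when at least L of the K selected clauses cover it, and sweeping L trades precision against recall. A minimal sketch, using integer "examples" and boolean predicates as stand-ins for ILP clause-coverage tests (the toy clauses and data are invented for illustration):

```python
def l_of_k_predict(clauses, example, L):
    """Positive iff at least L of the K selected clauses cover the example."""
    return sum(1 for c in clauses if c(example)) >= L


def recall_precision(clauses, L, positives, negatives):
    """Recall and precision of the L-of-K classifier on labeled examples."""
    tp = sum(l_of_k_predict(clauses, e, L) for e in positives)
    fp = sum(l_of_k_predict(clauses, e, L) for e in negatives)
    recall = tp / len(positives) if positives else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision


# K = 3 toy "clauses"; each is a coverage predicate on an example.
clauses = [lambda x: x % 2 == 0,   # covers even examples
           lambda x: x > 3,
           lambda x: x < 9]
positives, negatives = [4, 6, 8], [1, 3, 10]

for L in (1, 2, 3):
    print(L, recall_precision(clauses, L, positives, negatives))
# Raising L tightens the classifier: here precision climbs from 0.5
# (L=1) to 1.0 (L=3) while recall stays at 1.0 on this toy data.
```

Gleaner keeps such clause sets for many recall bins, so at evaluation time one can pick the (set, L) pair that hits a desired recall level and report the precision achieved there.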
ISBN: (print) 3540482814
Sized types provide a type-based mechanism to enforce termination of recursive definitions in typed λ-calculi. Previous work has provided strong indications that type-based termination provides an appropriate foundation for proof assistants based on type theory; however, most work to date has been confined to non-dependent type systems. In this article, we introduce a variant of the Calculus of Inductive Constructions with sized types and study its metatheoretical properties: subject reduction, normalization, and thus consistency and decidability of type checking and of size inference. A prototype implementation has been developed alongside case studies.