Character recognition systems can contribute tremendously to the advancement of the automation process and can improve the interaction between man and machine in many applications, including office automation, check verification and a large variety of banking, business and data entry applications. The main theme of this paper is the automatic recognition of hand-printed Arabic characters using machine learning. Conventional methods have relied on hand-constructed dictionaries, which are tedious to construct and difficult to make tolerant to variation in writing styles. The advantages of machine learning are that it can generalize over the large degree of variation between writing styles and that recognition rules can be constructed by example. The system was tested on a sample of handwritten characters from several individuals whose writing ranged from acceptable to poor in quality, and the average correct recognition rate obtained using cross-validation was 89.65%.
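To make the evaluation protocol mentioned above concrete, the sketch below runs a generic k-fold cross-validated classifier in Python; the feature matrix, labels and choice of learner are hypothetical placeholders, not the paper's actual features or recognition method.

    # Minimal sketch of cross-validated evaluation for a character classifier.
    # X (shape features extracted from character images) and y (character labels)
    # are placeholders; the paper's features and learner differ.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.random((280, 16))            # 280 sample characters, 16 features each
    y = np.repeat(np.arange(28), 10)     # 28 character classes, 10 samples each

    clf = DecisionTreeClassifier(random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"average recognition rate: {scores.mean():.2%}")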
This paper discusses the role that background knowledge can play in building flexible multistrategy learning systems. We contend that a variety of learning strategies can be embodied in the background knowledge provided to a general purpose learning algorithm. To be effective, the general purpose algorithm must have a mechanism for learning new concept descriptions that can refer to knowledge provided by the user or learned during some other task. The method of knowledge representation is a central problem in designing such a system since it should be possible to specify background knowledge in such a way that the learner can apply its knowledge to new information.
We propose the identification of feedback mechanisms in biological systems by learning logical rules in R. Thomas' Kinetic logic (Thomas and D'Ari in Biological feedback. CRC Press, 1990). The principal advantages claimed for Kinetic logic are that it captures an important class of regulatory networks at an appropriate level of precision, and that the representation is close to that used routinely by biologists, with a well-understood relationship to a differential description. In this paper we present a formalisation of Kinetic logic as a labelled transition system and provide a provably correct implementation in a modified form of the Event Calculus. The behaviour of a system is then a logical consequence of the core axioms of a (modified) Event Calculus C, the axioms K implementing Kinetic logic and the axioms H describing the system. This formulation allows us to specify system identification in the manner adopted in inductive logic programming (ILP), namely, given C, K, system behaviour S and possibly some additional domain knowledge B, find H such that B ∧ C ∧ K ∧ H ⊨ S. Identifying a suitable Kinetic logic hypothesis requires the simultaneous identification of definite clauses for: (a) logical definitions relating the occurrence of events to values of fluents; (b) delays in changes of the values of fluents arising from the occurrence of events; and possibly (c) exceptions to changes in fluent values, arising from asynchronous behaviour inherent to the system. We use a standard ILP engine for (a), and special-purpose abduction procedures for (b) and (c). We demonstrate this combination of induction and abduction on several canonical feedback patterns described by Thomas, and use it to identify the regulatory mechanism in two well-known biological problems (immune response and phage infection).
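As a concrete, heavily simplified illustration of the delayed switching behaviour that Kinetic logic hypotheses describe, the Python sketch below simulates a two-variable Boolean feedback loop with per-variable delays; the network (X activates Y, Y inhibits X), the delays and the update scheme are illustrative assumptions, not the paper's formalisation or its Event Calculus implementation.

    # Minimal sketch of a delayed Boolean feedback loop in the spirit of Kinetic logic.
    # The network, the delays and the update scheme are assumptions for illustration.
    def step(state, schedule, t, delays):
        targets = {"X": 0 if state["Y"] else 1,   # Y inhibits X
                   "Y": 1 if state["X"] else 0}   # X activates Y
        for f, target in targets.items():
            if target != state[f] and f not in schedule:
                schedule[f] = t + delays[f]       # change takes effect after a delay
            elif target == state[f] and f in schedule:
                del schedule[f]                   # order rescinded before taking effect
        for f, due in list(schedule.items()):
            if due <= t:
                state[f] = 1 - state[f]
                del schedule[f]
        return state, schedule

    state, schedule = {"X": 1, "Y": 0}, {}
    for t in range(12):
        print(t, state["X"], state["Y"])          # the trace oscillates, as in a negative feedback loop
        state, schedule = step(state, schedule, t, {"X": 2, "Y": 1})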
This paper introduces a novel logical framework for concept-learning called brave induction. Brave induction uses brave inference for induction and is useful for learning from incomplete information. Brave induction is weaker than explanatory induction, which is normally used in inductive logic programming, and is stronger than learning from satisfiability, a general setting of concept-learning in clausal logic. We first investigate formal properties of brave induction, then develop an algorithm for computing hypotheses in full clausal theories. Next we extend the framework to induction in nonmonotonic logic programs. We analyze the computational complexity of decision problems for induction on propositional theories. Further, we provide examples of problem solving by brave induction in systems biology, requirements engineering, and multiagent negotiation.
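Roughly, in notation commonly used in this line of work (B a background theory, H a hypothesis, O the observations), the three settings compared above can be glossed as follows; the paper's own definitions are more precise, and in particular brave induction is stated in terms of minimal models.

    Explanatory induction:        B ∧ H ⊨ O   (O holds in every model of B ∧ H)
    Brave induction:              O holds in some (minimal) model of B ∧ H
    Learning from satisfiability: B ∧ H ∧ O is satisfiable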
Our interest in this paper is in the construction of symbolic explanations for predictions made by a deep neural network. We will focus attention on deep relational machines (DRMs: a term introduced in Lodhi (2013)). A DRM is a deep network in which the input layer consists of Boolean-valued functions (features) that are defined in terms of relations provided as domain, or background, knowledge. Our DRMs differ from those in Lodhi (2013), which uses an inductive logic programming (ILP) engine to first select features (we use random selections from a space of features that satisfies some approximate constraints on logical relevance and non-redundancy). But why do the DRMs predict what they do? One way of answering this was provided in recent work by Ribeiro et al. (2016), by constructing readable proxies for a black-box predictor. The proxies are intended only to model the predictions of the black-box in local regions of the instance-space. But readability alone may not be enough: to be understandable, the local models must use relevant concepts in a meaningful manner. We investigate the use of a Bayes-like approach to identify logical proxies for local predictions of a DRM. As a preliminary step, we show that DRMs with our randomised propositionalization method achieve predictive performance that is comparable to the best reports in the ILP literature. Our principal results on logical explanations show: (a) Models in first-order logic can approximate the DRM's prediction closely in a small local region; and (b) Expert-provided relevance information can play the role of a prior to distinguish between logical explanations that perform equivalently on prediction alone.
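The following sketch illustrates the DRM-style pipeline described above: instances are propositionalised into Boolean feature vectors, where each feature is the truth value of a condition defined over background relations, and a feed-forward network is trained on those vectors. The relations, the three features and the labels are invented for illustration; they are not the features, data or network used in the paper.

    # Minimal sketch of a DRM-style pipeline: Boolean features defined over
    # background relations propositionalise each instance, and a feed-forward
    # network is trained on the resulting 0/1 vectors. All data are hypothetical.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # Hypothetical background knowledge: bonds and aromatic atoms per molecule.
    background = {
        "m1": {"bonds": {("a", "b"), ("b", "c")}, "aromatic": {"a", "b"}},
        "m2": {"bonds": {("a", "b")},             "aromatic": set()},
        "m3": {"bonds": {("a", "b"), ("a", "c")}, "aromatic": {"c"}},
        "m4": {"bonds": set(),                    "aromatic": set()},
    }

    # Boolean features standing in for randomly drawn relational conditions.
    features = [
        lambda m: len(m["aromatic"]) > 0,                                  # has an aromatic atom
        lambda m: len(m["bonds"]) >= 2,                                    # has at least two bonds
        lambda m: any(a in m["aromatic"] for b in m["bonds"] for a in b),  # aromatic atom in a bond
    ]

    X = np.array([[int(f(background[mol])) for f in features] for mol in sorted(background)])
    y = np.array([1, 0, 1, 0])   # hypothetical class labels for m1..m4

    drm = MLPClassifier(hidden_layer_sizes=(8, 8), max_iter=2000, random_state=0)
    drm.fit(X, y)
    print(drm.predict(X))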
The discovery of interesting patterns in relational databases is an important data mining task. This paper is concerned with the development of a search algorithm for first-order hypothesis spaces that adopts an important pruning technique (termed subset pruning here) from association rule mining in a first-order setting. The basic search algorithm is extended by so-called requires and excludes constraints, which allow prior knowledge about the data to be declared, such as mutual exclusion or generalization relationships among attributes, so that it can be exploited for further structuring and restricting the search space. Furthermore, it is illustrated how taxonomies and numerical attributes are processed in the search algorithm. Several task settings using different interestingness criteria and search modes with corresponding pruning criteria are described. Three settings serve as test beds for the evaluation of the proposed approach. The experimental evaluation shows that the impact of subset pruning is significant, since it reduces the number of hypothesis evaluations in many cases by about 50%. The impact of generalization relationships is shown to be smaller in our experimental set-up.
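For readers unfamiliar with the borrowed technique, the sketch below shows the subset-pruning idea in its original association-rule (Apriori-style) setting: a candidate of size k is discarded if any of its (k-1)-element subsets is already known to be infrequent. The paper lifts this idea to first-order hypothesis spaces; the itemset data here are hypothetical.

    # Minimal sketch of Apriori-style subset pruning on itemsets (propositional intuition
    # only; the paper applies the idea to first-order clauses). Data are hypothetical.
    from itertools import combinations

    transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
    min_support = 3
    items = sorted(set().union(*transactions))

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent = {frozenset([i]) for i in items if support(frozenset([i])) >= min_support}
    level = set(frequent)
    k = 2
    while level:
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # subset pruning: discard candidates with an infrequent (k-1)-subset
        candidates = {c for c in candidates
                      if all(frozenset(s) in level for s in combinations(c, k - 1))}
        level = {c for c in candidates if support(c) >= min_support}
        frequent |= level
        k += 1

    print(sorted(tuple(sorted(f)) for f in frequent))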
This paper is a study of the problem of relevance in inductive concept learning. It gives definitions of irrelevant literals and irrelevant examples and presents efficient algorithms that enable their elimination. The proposed approach is directly applicable in propositional learning and in relation learning tasks that can be solved using a LINUS transformation approach. A simple inductive logic programming (ILP) problem is used to illustrate the approach to irrelevant literal and example elimination. Results of utility studies show the usefulness of literal reduction applied in LINUS and in the search of refinement graphs. (C) 1999 Elsevier Science Inc. All rights reserved.
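One common way of making literal irrelevance operational (which may differ in detail from the definitions used in the paper) is coverage subsumption: a literal can be eliminated if some other literal is true on at least the positive examples it is true on, and on no additional negative examples. A small propositional sketch, with invented data:

    # Minimal sketch of literal elimination by coverage subsumption.
    # The irrelevance criterion and the data are illustrative assumptions.
    positives = [{"p": 1, "q": 1, "r": 0}, {"p": 1, "q": 0, "r": 1}]
    negatives = [{"p": 0, "q": 1, "r": 1}, {"p": 0, "q": 0, "r": 0}]
    literals = ["p", "q", "r"]

    def covers(lit, example):
        return example[lit] == 1

    def irrelevant(lit):
        pos_l = {i for i, e in enumerate(positives) if covers(lit, e)}
        neg_l = {i for i, e in enumerate(negatives) if covers(lit, e)}
        for other in literals:
            if other == lit:
                continue
            pos_o = {i for i, e in enumerate(positives) if covers(other, e)}
            neg_o = {i for i, e in enumerate(negatives) if covers(other, e)}
            if pos_l <= pos_o and neg_o <= neg_l:
                return True   # 'other' does at least as well on positives and negatives
        return False

    print([lit for lit in literals if irrelevant(lit)])   # -> ['q', 'r']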
Logic-based machine learning aims to learn general, interpretable knowledge in a data-efficient manner. However, labelled data must be specified in a structured logical form. To address this limitation, we propose a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FFNSL), that integrates a logic-based machine learning system capable of learning from noisy examples, with neural networks, in order to learn interpretable knowledge from labelled unstructured data. We demonstrate the generality of FFNSL on four neural-symbolic classification problems, where different pre-trained neural network models and logic-based machine learning systems are integrated to learn interpretable knowledge from sequences of images. We evaluate the robustness of our framework by using images subject to distributional shifts, for which the pre-trained neural networks may predict incorrectly and with high confidence. We analyse the impact that these shifts have on the accuracy of the learned knowledge and run-time performance, comparing FFNSL to tree-based and pure neural approaches. Our experimental results show that FFNSL outperforms the baselines by learning more accurate and interpretable knowledge with fewer examples.
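The dataflow described above can be sketched as follows: a pre-trained network maps each raw input to a symbolic label, the labelled sequence is packaged as a (possibly noisy) example, and the examples are passed to a logic-based learner. The classifier, label names and the placeholder symbolic_learner function below are hypothetical stand-ins, not FFNSL's actual components.

    # Minimal sketch of the neural-to-symbolic dataflow; all components are placeholders.
    from typing import List, Tuple

    def neural_classifier(image) -> str:
        # placeholder for a pre-trained network; returns a symbolic label
        return "digit_7" if sum(image) > 10 else "digit_1"

    def to_example(labels: List[str], outcome: str) -> str:
        # package the predicted labels and the known outcome as one symbolic example
        return f"#example({', '.join(labels)}; {outcome})"

    def symbolic_learner(examples: List[str]) -> str:
        # placeholder for a noise-tolerant logic-based learning system
        return f"learned rules from {len(examples)} examples"

    raw_sequences: List[Tuple[List[List[int]], str]] = [
        ([[1, 2, 3], [9, 9, 1]], "valid"),
        ([[0, 1, 0], [0, 2, 1]], "invalid"),
    ]
    examples = [to_example([neural_classifier(img) for img in seq], outcome)
                for seq, outcome in raw_sequences]
    print(examples)
    print(symbolic_learner(examples))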
Machine Learning from examples may be used, within Artificial Intelligence, as a way to acquire general knowledge or in association with a concrete problem-solving system. Inductive learning methods are typically used to acquire general knowledge from examples. Lazy methods are those in which the experience is accessed, selected and used in a problem-centered way. In this paper we review important approaches to inductive learning methods, such as propositional and relational learners, with an emphasis on inductive logic programming-based methods, as well as lazy methods such as instance-based and case-based reasoning. (C) 1998 Elsevier Science B.V.
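As a minimal illustration of the lazy, instance-based style mentioned above, the sketch below builds no general model in advance: stored cases are retrieved and combined only when a query arrives. The cases and distance function are invented for illustration.

    # Minimal sketch of a lazy, instance-based learner (nearest-neighbour style).
    from collections import Counter

    cases = [((1.0, 2.0), "A"), ((1.5, 1.8), "A"), ((5.0, 8.0), "B"), ((6.0, 9.0), "B")]

    def classify(query, k=3):
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        nearest = sorted(cases, key=lambda c: dist(c[0], query))[:k]   # retrieve only at query time
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    print(classify((1.2, 2.1)))   # -> "A"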
Author: Taylor, K. (CSIRO Math & Informat Sci, Canberra ACT 2601, Australia)
Absorption is one of the so-called inverse resolution operators of inductive logic programming. The paper studies the properties of absorption that make it suitable for incremental generalization of definite clauses using background knowledge represented by a definite program. The soundness and completeness of the operator are established according to Buntine's model of generalization called generalized subsumption. The completeness argument proceeds by viewing absorption as the inversion of SLD-resolution. In addition, some simplifying techniques are introduced for reducing the non-determinism inherent in usual presentations of absorption. The effect of these simplifications on completeness is discussed. (C) 1999 Elsevier Science Inc. All rights reserved.
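In its simplest (propositional-style) presentation, absorption can be written as the rule below: the part of a clause's body that matches the body of a background clause is replaced by that background clause's head, so that resolving the result with the background clause gives back the original clause. The definite-clause version studied in the paper additionally handles substitutions, which is where generalized subsumption and the completeness questions arise; the schematic here omits those details.

    Background clause:  A ← L1, …, Ln
    Given clause C:     H ← L1, …, Ln, D1, …, Dm
    Absorption yields:  H ← A, D1, …, Dm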