We use inductive logic programming (ILP) to learn classifiers for generic object recognition from point clouds, as generated by 3D cameras such as the Kinect. Each point cloud is segmented into planar surfaces. Each subset of planes that represents an object is labelled, and predicates describing those planes and their relationships are used for learning. Our claim is that a relational description of classes of 3D objects can be built for robust object categorisation in real robotic applications. To test the hypothesis, labelled sets of planes from 3D point clouds gathered during the RoboCup Rescue Robot competition are used as positive and negative examples for an ILP system. The robustness of the results is evaluated by 10-fold cross validation. In addition, common household objects that have curved surfaces are used for evaluation and comparison against a well-known non-relational classifier. The results show that ILP can be successfully applied to recognise objects encountered by a robot, especially in an urban search and rescue environment.
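As a rough illustration of the relational encoding described above, the following sketch turns segmented planes into Prolog-style ground facts (plane attributes plus pairwise relations) of the kind an ILP system could learn from. The predicate names, attributes and thresholds are assumptions made for illustration, not the vocabulary used in the paper.

```python
import math
from itertools import combinations

# Hypothetical minimal representation of a segmented planar surface.
class Plane:
    def __init__(self, pid, normal, area):
        self.pid, self.normal, self.area = pid, normal, area

def angle(n1, n2):
    """Angle in degrees between two unit normals."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def planes_to_facts(planes, label, obj_id):
    """Emit Prolog-style ground facts describing an object's planes."""
    facts = [f"{label}({obj_id})."]
    for p in planes:
        facts.append(f"part_of(plane{p.pid}, {obj_id}).")
        facts.append(f"area(plane{p.pid}, {p.area:.2f}).")
    for p, q in combinations(planes, 2):
        a = angle(p.normal, q.normal)
        if abs(a - 90) < 10:
            facts.append(f"perpendicular(plane{p.pid}, plane{q.pid}).")
        elif a < 10:
            facts.append(f"parallel(plane{p.pid}, plane{q.pid}).")
    return facts

# Example: two roughly perpendicular planes labelled as a (hypothetical) 'step' object.
planes = [Plane(1, (0, 0, 1), 0.20), Plane(2, (1, 0, 0), 0.15)]
print("\n".join(planes_to_facts(planes, "step", "obj1")))
```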
A novel relational learning approach that tightly integrates the naive Bayes learning scheme with the inductive logic programming rule-learner FOIL is presented. In contrast to previous combinations that have employed naive Bayes only for post-processing the rule sets, the presented approach employs the naive Bayes criterion to guide its search directly. The proposed technique is implemented in the NFOIL and TFOIL systems, which employ standard naive Bayes and tree-augmented naive Bayes models, respectively. We show that these integrated approaches to probabilistic model and rule learning outperform post-processing approaches. They also yield significantly more accurate models than simple rule learning and are competitive with more sophisticated ILP systems.
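The following sketch makes the "naive Bayes criterion guides the search" idea concrete under stated assumptions: candidate clauses are treated as boolean coverage features of a naive Bayes model, and the clause whose addition maximises conditional log-likelihood on the training examples is kept. The feature encoding and Laplace smoothing shown here are illustrative; the actual scoring and search details of NFOIL and TFOIL differ.

```python
import math

def cll(features, labels):
    """Conditional log-likelihood of the labels under a naive Bayes model
    over boolean clause-coverage features (Laplace-smoothed)."""
    n = len(labels)
    classes = sorted(set(labels))
    prior = {c: labels.count(c) / n for c in classes}
    # P(feature_j = 1 | class c) with Laplace smoothing
    like = {c: [(sum(f[j] for f, y in zip(features, labels) if y == c) + 1.0)
                / (labels.count(c) + 2.0)
                for j in range(len(features[0]))]
            for c in classes}
    total = 0.0
    for f, y in zip(features, labels):
        scores = {}
        for c in classes:
            s = math.log(prior[c])
            for j, v in enumerate(f):
                p = like[c][j]
                s += math.log(p if v else 1.0 - p)
            scores[c] = s
        log_norm = math.log(sum(math.exp(s) for s in scores.values()))
        total += scores[y] - log_norm
    return total

def best_clause(candidates, coverage, current_features, labels):
    """Greedy step: pick the candidate clause whose coverage vector, added as
    an extra naive Bayes feature, maximises conditional log-likelihood."""
    def extended(clause):
        return [row + [coverage[clause][i]] for i, row in enumerate(current_features)]
    return max(candidates, key=lambda clause: cll(extended(clause), labels))

# Example: choose between two candidate clauses on four labelled examples.
labels = ["pos", "pos", "neg", "neg"]
coverage = {"clause_a": [1, 1, 0, 0], "clause_b": [1, 0, 1, 0]}
print(best_clause(["clause_a", "clause_b"], coverage, [[], [], [], []], labels))
```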
Professor Koichi Furukawa, an eminent computer scientist and former Editor-in-Chief of the New Generation Computing journal, passed away on January 31, 2017. His passing was a surprise, and we were all shocked and saddened by the news. In his memory, this article reviews the great career and contributions of Professor Koichi Furukawa, focusing on his research activities on the foundation and application of logic programming. Professor Furukawa had both a deep understanding of and a broad impact on logic programming, and he was always gentle but persistent in articulating its value across a broad spectrum of computer science and artificial intelligence research. This article introduces his research along with its insightful and unique philosophical framework.
In this paper we propose a use-case-driven iterative design methodology for normative frameworks, also called virtual institutions, which are used to govern open systems. Our computational model represents the normative framework as a logic program under the answer set semantics. By means of an inductive logic programming approach, implemented using answer set programming (ASP), it is possible to synthesise new rules and revise existing ones. The learning mechanism is guided by the designer, who describes the desired properties of the framework through use cases, comprising (i) event traces that capture possible scenarios, and (ii) a state that describes the desired outcome. The learning process then proposes additional rules, or changes to current rules, to satisfy the constraints expressed in the use cases. Thus, the contribution of this paper is a process for the elaboration and revision of a normative framework by means of a semi-automatic and iterative process driven from specifications of (un)desirable behaviour. The process integrates a novel and general methodology for theory revision based on ASP.
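As a hedged sketch of the use-case structure the methodology relies on, the code below represents a use case as an event trace plus the designer's expected final state, and flags the use cases that the current (toy) rules fail to reproduce; those failing cases are what a revision step would then try to satisfy. The Python data structures and the rule evaluator are illustrative stand-ins for the paper's ASP encoding.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    trace: list      # ordered events, e.g. ["borrow(alice, b1)", "return(alice, b1)"]
    expected: set    # fluents the designer expects to hold in the final state

def apply_rules(rules, trace):
    """Replay a trace under toy rules; each rule is (event_prefix, initiates, terminates),
    where initiates/terminates map an event to sets of fluents."""
    state = set()
    for event in trace:
        for prefix, initiates, terminates in rules:
            if event.startswith(prefix):
                state |= set(initiates(event))
                state -= set(terminates(event))
    return state

def failing_cases(rules, use_cases):
    """Use cases the current rules do not satisfy; these become the examples
    that the inductive revision step is asked to cover."""
    return [uc for uc in use_cases if apply_rules(rules, uc.trace) != uc.expected]

# Hypothetical example: 'return' events are not yet handled, so the second use case fails.
rules = [("borrow", lambda e: {"on_loan" + e[len("borrow"):]}, lambda e: set())]
cases = [UseCase(["borrow(alice, b1)"], {"on_loan(alice, b1)"}),
         UseCase(["borrow(alice, b1)", "return(alice, b1)"], set())]
print([uc.trace for uc in failing_cases(rules, cases)])
```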
During the 1980s Michie defined Machine Learning in terms of two orthogonal axes of performance: predictive accuracy and comprehensibility of generated hypotheses. Since predictive accuracy was readily measurable and comprehensibility not so, later definitions in the 1990s, such as Mitchell's, tended to use a one-dimensional approach to Machine Learning based solely on predictive accuracy, ultimately favouring statistical over symbolic Machine Learning approaches. In this paper we provide a definition of comprehensibility of hypotheses which can be estimated using human participant trials. We present two sets of experiments testing human comprehensibility of logic programs. In the first experiment we test human comprehensibility with and without predicate invention. Results indicate comprehensibility is affected not only by the complexity of the presented program but also by the existence of anonymous predicate symbols. In the second experiment we directly test whether any state-of-the-art ILP systems are ultra-strong learners in Michie's sense, and select the Metagol system.
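Purely as an illustration of how such an estimate might be computed from trial data, the snippet below scores a program's comprehensibility as the proportion of participants who answered questions about it correctly. This operationalisation is an assumption for the sketch; the paper's formal definition may weigh additional factors such as response time.

```python
# One plausible way to estimate comprehensibility from human participant trials:
# the proportion of correct answers about a displayed program. Illustrative only;
# not necessarily the paper's exact definition.
def comprehensibility(responses):
    """responses: list of (participant_id, correct: bool) pairs for one program."""
    if not responses:
        return 0.0
    return sum(1 for _, ok in responses if ok) / len(responses)

trials = [("p1", True), ("p2", True), ("p3", False)]
print(f"estimated comprehensibility: {comprehensibility(trials):.2f}")  # 0.67
```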
The traditional approach for estimating the performance of numerical methods is to combine an operation count with an asymptotic error analysis. This analytic approach gives a general feel of the comparative efficiency of methods, but it rarely leads to very precise results. It is now recognized that accurate performance evaluation can be made only with actual measurements on working software. Given that such an approach requires an enormous amount of performance data related to actual measurements, the development of novel approaches and systems that intelligently and efficiently analyze these data is of great importance to scientists and engineers. This paper presents new intelligent knowledge acquisition approaches and an integrated prototype system, which enables the automatic and systematic analysis of performance data. The system analyzes the performance data, which is usually stored in a database, with statistical and inductive learning techniques and generates knowledge which can be incorporated into a knowledge base incrementally. We demonstrate the use of the system in the context of a case study, covering the analysis of numerical algorithms for the pricing of American vanilla options in a Black and Scholes modeling framework. We also present a qualitative and quantitative comparison of two techniques used for the automated knowledge acquisition phase. Although the system is presented with a particular pricing library in mind, the analysis and evaluation methodology can be used to study algorithms available from other libraries, as long as these libraries can provide the necessary performance data.
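To illustrate the kind of automated knowledge acquisition the abstract describes, the sketch below feeds performance records (problem parameters labelled with the method that performed best) to an off-the-shelf decision-tree learner standing in for the paper's techniques, and prints the induced tree as human-readable rules. The feature names and records are invented for the example.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Invented performance records; columns: strike/spot ratio, volatility, time to expiry.
X = [[0.9, 0.2, 0.5], [1.1, 0.2, 0.5], [0.9, 0.4, 1.0],
     [1.1, 0.4, 1.0], [1.0, 0.3, 0.25], [1.0, 0.1, 2.0]]
# Label: which pricing method was fastest within the accuracy tolerance (hypothetical).
y = ["binomial", "finite_diff", "binomial",
     "finite_diff", "binomial", "finite_diff"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["moneyness", "volatility", "expiry"]))
```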
The multiscalar architecture advocates a distributed processor organization and task-level speculation to exploit high degrees of instruction level parallelism (ILP) in sequential programs without impeding improvements in clock speeds. The main goal of this paper is to understand the key implications of the architectural features of distributed processor organization and task-level speculation for compiler task selection from the point of view of performance. We identify the fundamental performance issues to be: control flow speculation, data communication, data dependence speculation, load imbalance, and task overhead. We show that these issues are intimately related to a few key characteristics of tasks: task size, intertask control flow, and intertask data dependence. We describe compiler heuristics to select tasks with favorable characteristics. We report experimental results to show that the heuristics are successful in boosting overall performance by establishing larger ILP windows. We also present a breakdown of execution times to show that register wait, load imbalance, control flow squash, and conventional pipeline losses are significant for almost all the SPEC95 benchmarks. (C) 1999 Academic Press.
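As a hedged illustration of how such heuristics might trade these characteristics off, the sketch below scores candidate tasks on size, the number of intertask control-flow exits, and cross-task data dependences, and selects the lowest-cost candidate. The weights and the scoring function are assumptions for the example, not the heuristics used in the paper.

```python
def task_score(size, exit_points, cross_task_deps,
               target_size=30, w_ctrl=2.0, w_data=1.0):
    """Lower is better: penalise deviation from a target task size, many exit
    points (control-flow misspeculation risk) and cross-task register/memory
    dependences (communication and squash risk). Weights are illustrative."""
    return (abs(size - target_size)
            + w_ctrl * exit_points
            + w_data * cross_task_deps)

def select_task(candidates):
    """Pick the candidate with the best (lowest) heuristic score.
    Each candidate is a dict with 'size', 'exits' and 'deps' fields."""
    return min(candidates, key=lambda c: task_score(c["size"], c["exits"], c["deps"]))

print(select_task([{"size": 28, "exits": 2, "deps": 1},
                   {"size": 60, "exits": 1, "deps": 4}]))
```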
The finite element method (FEM) is one of the most successful numerical methods and is used extensively by engineers to analyse stresses and deformations in physical structures. These structures must be represented as a finite element mesh. Defining an appropriate geometric mesh model that ensures low approximation errors and avoids unnecessary computational overheads is a very difficult and time-consuming task. It is the major bottleneck in the FEM analysis process. The inductive logic programming system GOLEM has been employed to construct rules for deciding on the appropriate mesh resolution. Five cylindrical mesh models have been used as a source of training examples. The evaluation of the resulting knowledge base shows that conditions in the domain are well represented by the rules, which specify the required number of finite elements on the edges of the structures to be analysed using FEM. A comparison between the results obtained by this knowledge base and conventional mesh generation techniques confirms that the application of inductive logic programming is an effective approach to solving the problem of mesh design.
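The snippet below illustrates, under invented conditions, the general shape of such a rule base: edge properties (support, loading, adjacency) determine how many finite elements the edge should receive. The conditions and element counts are placeholders, not the rules GOLEM actually learned.

```python
# Illustrative only: a toy rule base mapping edge properties to mesh resolution.
def elements_on_edge(edge):
    """edge: dict with boolean keys such as 'fixed', 'loaded', 'short', 'neighbour_loaded'."""
    if edge.get("short") and edge.get("fixed"):
        return 1            # short, fully supported edges need little refinement
    if edge.get("loaded") or edge.get("neighbour_loaded"):
        return 8            # refine where loads (or adjacent loads) act
    return 4                # default resolution elsewhere

mesh_plan = {e["name"]: elements_on_edge(e) for e in [
    {"name": "e1", "short": True, "fixed": True},
    {"name": "e2", "loaded": True},
    {"name": "e3"},
]}
print(mesh_plan)   # {'e1': 1, 'e2': 8, 'e3': 4}
```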
In real-life domains, learning systems often have to deal with various kinds of imperfections in data such as noise, incompleteness and inexactness. This problem seriously affects the knowledge discovery process, specifically in the case of traditional Machine Learning approaches that exploit simple or constrained knowledge representations and are based on single inference mechanisms. Indeed, this limits their capability of discovering fundamental knowledge in those situations. In order to broaden the investigation and the applicability of machine learning schemes in such situations, it is necessary to move on to more expressive representations which require more complex inference mechanisms. However, the applicability of such new and complex inference mechanisms, such as abductive reasoning, strongly relies on deep background knowledge about the specific application domain. This work aims at automatically discovering the meta-knowledge needed by the abductive inference strategy to complete the incoming information, in order to handle cases of missing knowledge.
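A small propositional sketch of the abductive completion step is given below: starting from background rules and declared abducibles, it searches for minimal sets of abducible facts which, once added, entail the observation without violating integrity constraints. The rules and predicate names are invented for illustration; the meta-knowledge the paper targets (which predicates are abducible, which constraints apply) is exactly what its method tries to discover automatically.

```python
from itertools import chain, combinations

def closure(facts, rules):
    """Forward-chain definite rules (head, [body...]) to a fixpoint."""
    derived, changed = set(facts), True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def abduce(rules, facts, abducibles, observation, constraints=()):
    """Return minimal abducible subsets explaining the observation."""
    subsets = chain.from_iterable(combinations(abducibles, r)
                                  for r in range(len(abducibles) + 1))
    explanations = []
    for s in subsets:                      # enumerated smallest-first
        d = closure(set(facts) | set(s), rules)
        if observation in d and not any(set(c) <= d for c in constraints):
            if not any(set(e) <= set(s) for e in explanations):
                explanations.append(s)     # keep only minimal explanations
    return explanations

# Hypothetical example: abduce that the bird is healthy to explain that it flies.
rules = [("flies(t)", ["bird(t)", "healthy(t)"])]
print(abduce(rules, facts=["bird(t)"], abducibles=["healthy(t)", "injured(t)"],
             observation="flies(t)", constraints=[("healthy(t)", "injured(t)")]))
```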
The paper studies the learnability of Horn expressions within the framework of learning from entailment, where the goal is to exactly identify some pre-fixed and unknown expression by making queries to membership and equivalence oracles. It is shown that a class that includes both range restricted Horn expressions (where terms in the conclusion also appear in the condition of a Horn clause) and constrained Horn expressions (where terms in the condition also appear in the conclusion of a Horn clause) is learnable. This extends previous results by showing that a larger class is learnable with better complexity bounds. A further improvement in the number of queries is obtained when considering the class of Horn expressions with inequalities on all syntactically distinct terms. (C) 2002 Elsevier Science (USA).
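The protocol the paper works in can be summarised by the following sketch of an exact-identification loop with equivalence and membership oracles; the refinement step is left abstract here and is a placeholder for the paper's actual algorithm on Horn expressions.

```python
def exact_learn(equivalence, membership, refine, hypothesis, max_rounds=100):
    """Exact identification via queries.
    equivalence(h) -> None if h matches the target, otherwise a counterexample.
    membership(x)  -> bool, whether x is entailed by the target.
    refine(h, x, membership) -> new hypothesis accounting for counterexample x."""
    for _ in range(max_rounds):
        counterexample = equivalence(hypothesis)
        if counterexample is None:
            return hypothesis            # target exactly identified
        hypothesis = refine(hypothesis, counterexample, membership)
    raise RuntimeError("query budget exceeded")
```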