This paper proposes a novel approach to cardiac arrhythmia recognition from electrocardiograms (ECGs). ECGs record the electrical activity of the heart and are used to diagnose many heart disorders. The numerical ECG is first temporally abstracted into a series of time-stamped events. Temporal abstraction makes use of artificial neural networks to extract interesting waves and their features from the input signals. A temporal reasoner called a chronicle recogniser processes such series in order to discover temporal patterns, called chronicles, which can be related to cardiac arrhythmias. Generally, it is difficult to elicit an accurate set of chronicles from a doctor. Thus, we propose to learn automatically, from symbolic ECG examples, the chronicles discriminating the arrhythmias belonging to some specific subset. Since temporal relationships are of major importance, inductive logic programming (ILP) is the tool of choice, as it enables first-order relational learning. The approach has been evaluated on real ECGs taken from the MIT-BIH database. The performance of the different modules as well as the efficiency of the whole system is presented. The results are good and demonstrate that integrating numerical techniques for low-level perception with symbolic techniques for high-level classification is very valuable. (C) 2003 Elsevier B.V. All rights reserved.
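The chronicle-recognition step this abstract describes — matching a series of time-stamped symbolic events against temporal patterns — can be sketched roughly as follows. The event labels, delay bounds, and the simple pairwise-delay matching scheme are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of chronicle recognition: a chronicle is a set of
# pairwise delay constraints between named waves. All labels and bounds
# below are invented for illustration.

def matches_chronicle(events, chronicle):
    """events: list of (label, time); chronicle: list of
    (label_a, label_b, min_delay, max_delay) constraints."""
    times = {}
    for label, t in events:
        times.setdefault(label, []).append(t)
    for (a, b, lo, hi) in chronicle:
        if a not in times or b not in times:
            return False
        # require at least one pair (ta, tb) satisfying the delay bounds
        if not any(lo <= tb - ta <= hi
                   for ta in times[a] for tb in times[b]):
            return False
    return True

# A toy pattern: a QRS wave follows a P wave within 120-200 ms, and the
# next QRS (labelled QRS2 here) arrives within 600 ms of the first.
chronicle = [("P", "QRS", 120, 200), ("QRS", "QRS2", 0, 600)]
events = [("P", 0), ("QRS", 160), ("QRS2", 700)]
```

A real recogniser would additionally handle streaming input and partial matches; this sketch only shows the temporal-constraint check at the core of the idea.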
Efficient omission of symmetric solution candidates is essential for combinatorial problem-solving. Most existing approaches are instance-specific and focus on the automatic computation of Symmetry Breaking Constraints (SBCs) for each given problem instance. However, the application of such approaches to large-scale instances or advanced problem encodings can be problematic, since the computed SBCs are propositional and, therefore, can neither be meaningfully interpreted nor transferred to other instances. As a result, a time-consuming recomputation of SBCs must be done before every invocation of a solver. To overcome these limitations, we introduce a new model-oriented approach for Answer Set Programming that lifts the SBCs of small problem instances into a set of interpretable first-order constraints using the inductive logic programming paradigm. Experiments demonstrate the ability of our framework to learn general constraints from instance-specific SBCs for a collection of combinatorial problems. The obtained results indicate that our approach significantly outperforms a state-of-the-art instance-specific method as well as the direct application of a solver.
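The contrast the abstract draws — propositional, per-instance SBCs versus lifted first-order constraints — can be illustrated with a classic value-symmetry breaker for graph colouring. The constraint below (value precedence: colour k may only appear after colour k-1 has been used) is a standard example chosen for illustration, not the paper's ASP encoding.

```python
# Illustrative first-order-style symmetry breaker: unlike a propositional
# SBC fixed to one instance, this predicate applies to assignments of any
# length, which is the point of lifting SBCs to first-order constraints.

def respects_value_precedence(assignment):
    """True iff colours are introduced in canonical order 0, 1, 2, ...
    (breaks the symmetry of permuting colour names)."""
    first_use = {}
    for pos, colour in enumerate(assignment):
        first_use.setdefault(colour, pos)
    used = sorted(first_use, key=first_use.get)
    return used == list(range(len(used)))
```

The same checker rejects symmetric relabellings for a 4-node instance and a 400-node instance alike, whereas a propositional SBC would have to be recomputed for each.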
Relational Tri-training (R-Tri-training for short), as a relational semi-supervised learning system, can effectively exploit unlabeled examples to improve generalization ability. However, R-Tri-training may also suffer from a problem common in traditional semi-supervised learning: performance is often unstable because unlabeled examples are frequently mislabeled, and these errors accumulate during the iterative learning process. In this paper, a new relational Tri-training system named ADE-R-Tri-training (R-Tri-training with Adaptive Data Editing) is proposed. Not only does it employ a specific data-editing technique to identify and correct examples possibly mislabeled throughout the co-labeling iterations, but it also takes an adaptive strategy to decide whether to trigger the editing operation according to different cases. The adaptive strategy consists of five pre-conditional theorems, all of which ensure the iterative reduction of classification error under PAC (Probably Approximately Correct) learning theory. Experiments on well-known benchmarks show that ADE-R-Tri-training can enhance the performance of the learned hypothesis more effectively than R-Tri-training. (C) 2012 Elsevier B.V. All rights reserved.
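The co-labeling-with-editing idea at the heart of this abstract can be sketched in a toy, non-relational form: two auxiliary classifiers label unlabeled data for the third, and the editing step discards examples they disagree on as likely labeling errors. The classifiers and the agreement-based filter below are simplified stand-ins, not the paper's PAC-grounded procedure.

```python
# Toy sketch of co-labeling plus data editing (illustrative only; the
# paper's system is relational and uses an adaptive trigger strategy).

def co_label_with_editing(unlabeled, clf_a, clf_b):
    """Return (x, label) pairs both auxiliary classifiers agree on;
    disagreement is treated as a probable mislabel and edited out."""
    edited = []
    for x in unlabeled:
        ya, yb = clf_a(x), clf_b(x)
        if ya == yb:            # keep only confidently co-labeled examples
            edited.append((x, ya))
    return edited

# Toy threshold classifiers standing in for learned relational hypotheses.
clf_a = lambda x: int(x > 0.5)
clf_b = lambda x: int(x > 0.6)
newly_labeled = co_label_with_editing([0.2, 0.55, 0.9], clf_a, clf_b)
```

Here 0.55 falls between the two thresholds, so the classifiers disagree and the example is edited out rather than fed back into training.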
Inductive logic programming (ILP) is one of the most popular approaches in machine learning. ILP can be used to automate the construction of first-order definite clause theories from examples and background knowledge. Although ILP has been successfully applied in various domains, it has the following weaknesses: (1) weak capabilities in numerical data processing, (2) zero noise tolerance, and (3) unsatisfactory learning time when relations have a large number of arguments. This paper proposes a phenotypic genetic algorithm (PGA) to overcome these weaknesses. To strengthen numerical data processing capabilities, a multiple-level encoding structure is used that can represent three different types of relationships between two numerical values. To tolerate noise, PGA's goal of finding perfect rules is changed to finding top-k rules, which allows noise in the induction process. Finally, to shorten learning time, we incorporate a semantic-roles constraint into PGA, reducing the search space and preventing the discovery of infeasible rules. Stock trading data from Yahoo! Finance Online is used in our experiments. The results indicate that the PGA algorithm can find interesting trading rules in real data. (c) 2008 Elsevier Ltd. All rights reserved.
Ontologies are key concepts in the semantic web and play an important role, comprising the largest and most prominent part of the infrastructure in this area of web research. With the fast growth of the semantic web and the variety of its applications, ontology mapping (ontology alignment) has become a crucial issue in computer science. Several approaches to ontology alignment have been introduced in recent years, but developing more accurate and efficient algorithms and finding new effective techniques for this problem remains an interesting research area, since real-world applications, with their more complicated concepts, need more efficient algorithms. In this paper, we present a new ontology mapping method based on learning with inductive logic programming (ILP) and show how ILP can be used to solve the ontology mapping problem. In this approach, an ontology described in OWL format is translated into first-order logic. Then, using inductive-logic-based learning, the hidden rules and relationships between concepts are discovered and presented. Since inductive logic is highly flexible in solving problems such as discovering relationships between concepts and links, it can also be applied effectively to the ontology alignment problem. Our experimental results show that this technique yields more accurate results than other matching algorithms and systems, achieving F-measures of 95.6% and 91% on two well-known reference datasets, Anatomy and Library, respectively. (C) 2019 Elsevier Ltd. All rights reserved.
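The "OWL to first-order logic" translation step the abstract mentions amounts to rewriting ontology axioms as logical atoms that an ILP system can consume as background knowledge. The triple format and predicate rendering below are assumptions for illustration, not the paper's actual encoding.

```python
# Minimal sketch of translating ontology triples into first-order atoms
# usable as ILP background knowledge (format is an assumption).

def triples_to_facts(triples):
    """Map (subject, predicate, object) triples to first-order atoms,
    e.g. ('Heart', 'subClassOf', 'Organ') -> 'subClassOf(heart, organ)'."""
    return [f"{p}({s.lower()}, {o.lower()})" for s, p, o in triples]

facts = triples_to_facts([("Heart", "subClassOf", "Organ"),
                          ("Organ", "partOf", "Body")])
```

Given such facts from two ontologies, plus known correspondences as positive examples, an ILP learner can induce rules that predict further mappings between concepts.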
We describe a new approach to the application of stochastic search in inductive logic programming (ILP). Unlike traditional approaches, we do not focus directly on evolving logical concepts; instead, our refinement-based approach uses the stochastic optimization process to iteratively adapt the initial working concept. The use of context-sensitive concept refinements (adaptations) helps the search operations produce mostly syntactically correct concepts. It also enables using available background knowledge both to efficiently restrict the search space and to direct the search. Thereby, the search is more flexible and less problem-specific, and the framework can easily be used with any stochastic search algorithm within the ILP domain. Experimental results on several data sets verify the usefulness of this approach.
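The core idea — applying stochastic moves only through syntax-preserving refinements of a working concept — can be sketched as a simple randomized hill climb. The refinement operators, scoring function, and acceptance rule below are toy stand-ins chosen for illustration, not the paper's algorithm.

```python
# Illustrative refinement-based stochastic search: moves are drawn from a
# fixed set of refinements, so every intermediate concept stays well-formed.

import random

def stochastic_refine(clause, refinements, score, steps=100, seed=0):
    """Repeatedly apply a random refinement; keep the result whenever the
    score does not get worse (a toy acceptance rule)."""
    rng = random.Random(seed)
    best = clause
    for _ in range(steps):
        candidate = rng.choice(refinements)(best)
        if score(candidate) >= score(best):
            best = candidate
    return best

# Toy domain: a "clause" is a set of body literals; refinements add one.
refs = [lambda c: c | {"red(X)"}, lambda c: c | {"big(X)"}]
target = {"red(X)", "big(X)"}
score = lambda c: len(c & target) - len(c - target)
result = stochastic_refine(set(), refs, score)
```

Because every candidate is produced by a refinement operator rather than free mutation, the search never wastes evaluations on syntactically broken concepts, which is the advantage the abstract emphasises.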
We present NrSample, a framework for program synthesis in inductive logic programming. NrSample uses propositional logic constraints to exclude undesirable candidates from the search. This is achieved by representing constraints as propositional formulae and solving the associated constraint satisfaction problem. We present a variety of such constraints: pruning, input-output, functional (arithmetic), and variable splitting. NrSample is also capable of detecting search space exhaustion, leading to further speedups in clause induction and optimality. We benchmark NrSample against enumeration search (Aleph's default) and Progol's A* search in the context of program synthesis. The results show that, on large program synthesis problems, NrSample induces between 1 and 1358 times faster than enumeration (236 times faster on average), always with similar or better accuracy. Compared to Progol A*, NrSample is 18 times faster on average with similar or better accuracy except for two problems: one in which Progol A* substantially sacrificed accuracy to induce faster, and one in which Progol A* was a clear winner. Functional constraints provide a speedup of up to 53 times (21 times on average) with similar or better accuracy. We also benchmark using a few concept learning (non-program synthesis) problems. The results indicate that without strong constraints, the overhead of solving constraints is not compensated for.
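The pruning constraints NrSample uses can be pictured in a simplified, non-SAT form: once a combination of body literals is known to fail, any candidate containing it is excluded before evaluation. The subset-based representation below is an illustrative simplification of the propositional-formula encoding the abstract describes.

```python
# Simplified sketch of constraint-based candidate pruning (not NrSample's
# actual SAT encoding): pruned combinations are forbidden literal subsets.

from itertools import combinations

def candidates(literals, pruned):
    """Enumerate literal subsets as candidate clause bodies, skipping any
    candidate that contains a previously pruned (failed) combination."""
    for r in range(1, len(literals) + 1):
        for body in combinations(literals, r):
            s = set(body)
            if not any(p <= s for p in pruned):
                yield s

lits = ["p(X)", "q(X)", "r(X)"]
pruned = [{"p(X)", "q(X)"}]       # learned: p and q together always fail
survivors = list(candidates(lits, pruned))
```

Of the seven non-empty subsets of three literals, the two containing both pruned literals are skipped, leaving five candidates; in the real system such constraints accumulate during search and are solved as a constraint satisfaction problem instead of checked by enumeration.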
Character recognition systems can contribute tremendously to the advancement of the automation process and can improve the interaction between man and machine in many applications, including office automation, cheque verification and a large variety of banking, business and data entry applications. The main theme of this paper is the automatic recognition of hand-printed Arabic characters using machine learning. Conventional methods have relied on hand-constructed dictionaries which are tedious to construct and difficult to make tolerant to variation in writing styles. The advantages of machine learning are that it can generalize over the large degree of variation between writing styles and recognition rules can be constructed by example. The system was tested on a sample of handwritten characters from several individuals whose writing ranged from acceptable to poor in quality, and the average correct recognition rate obtained using cross-validation was 86.65%. (C) 2003 Elsevier B.V. All rights reserved.
Inductive logic programming (ILP) combines rule-based and statistical artificial intelligence methods by learning a hypothesis, comprising a set of rules, given background knowledge and constraints on the search space. We focus on extending the XHAIL algorithm for ILP, which is based on Answer Set Programming, and we evaluate our extensions on the natural language processing task of sentence chunking. With respect to processing natural language, ILP can cater for the constant change in how we use language on a daily basis. At the same time, ILP does not require huge amounts of training examples, as other statistical methods do, and produces interpretable results, that is, a set of rules which can be analysed and tweaked if necessary. As contributions, we extend XHAIL with (i) a pruning mechanism within the hypothesis generalisation algorithm which enables learning from larger datasets, (ii) better usage of modern solver technology using recently developed optimisation methods, and (iii) a time budget that permits the usage of suboptimal results. We evaluate these improvements on the task of sentence chunking using three datasets from a recent SemEval competition. Results show that our improvements allow learning on bigger datasets with results of similar quality to state-of-the-art systems on the same task. Moreover, we compare the hypotheses obtained on these datasets to gain insights into the structure of each dataset. (c) 2017 Elsevier Ltd. All rights reserved.
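Contribution (iii), the time budget, is an anytime pattern: keep the best hypothesis found so far and return it, possibly suboptimal, once the budget expires. The loop below is a generic sketch of that pattern with a toy scoring function, not XHAIL's solver interaction.

```python
# Sketch of the time-budget (anytime) idea: accept the best, possibly
# suboptimal, candidate found before the deadline. Toy stand-in only.

import time

def best_within_budget(candidates, score, budget_s):
    """Scan candidates, tracking the best score seen, and stop once the
    time budget is used up."""
    deadline = time.monotonic() + budget_s
    best, best_score = None, float("-inf")
    for c in candidates:
        if time.monotonic() > deadline:
            break                 # budget exhausted: return best so far
        s = score(c)
        if s > best_score:
            best, best_score = c, s
    return best, best_score

result, value = best_within_budget(range(10), lambda x: -abs(x - 7), 1.0)
```

In the real system the solver itself reports progressively better models, and the budget decides when to stop waiting for a provably optimal one.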