ISBN: (Print) 9783319999067; 9783319999050
Given an ontology that does not meet required properties, such as consistency or the (non-)entailment of certain axioms, ontology debugging aims at identifying a set of axioms, called a diagnosis, that must be suitably modified or deleted in order to resolve the ontology's faults. As there are, in general, many competing diagnoses, and each choice of diagnosis leads to a repaired ontology with different semantics, test-driven ontology debugging (TOD) aims at narrowing down the space of diagnoses until a single (highly probable) one is left. To this end, TOD techniques automatically generate a sequence of queries to an interacting oracle (a domain expert) about (non-)entailments of the correct ontology. Diagnoses that are not consistent with the answers are discarded. To minimize debugging cost (oracle effort), various heuristics for selecting the best next query have been proposed. We report preliminary results of extensive ongoing experiments with a set of such heuristics on real-world debugging cases. In particular, we try to answer questions such as "Is some heuristic always superior to all others?", "On which factors does the (relative) performance of a particular heuristic depend?", and "Under which circumstances should I use which heuristic?".
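The abstract above describes the TOD interaction loop only informally. The following minimal Python sketch illustrates that loop under simplifying assumptions: diagnoses are modeled as sets of axiom identifiers, each diagnosis is assumed to classify every query as either entailed or not entailed by the corresponding repaired ontology, and a simple split-in-half criterion stands in for the query selection heuristics mentioned in the abstract. All identifiers and data are hypothetical and not taken from the paper.

```python
from itertools import count

# Toy sketch of the test-driven ontology debugging (TOD) loop described above:
# repeatedly ask an oracle about candidate entailments and discard every
# diagnosis that is inconsistent with the answers, until one diagnosis remains.

# A diagnosis is a frozenset of (identifiers of) possibly faulty axioms.
diagnoses = {
    frozenset({"ax1", "ax2"}),
    frozenset({"ax1", "ax3"}),
    frozenset({"ax4"}),
}

# predicts[q][d] == True means: the ontology repaired according to diagnosis d
# would entail query q; False means it would not.
predicts = {
    "q1": {d: ("ax1" in d) for d in diagnoses},
    "q2": {d: ("ax4" in d) for d in diagnoses},
    "q3": {d: ("ax2" in d) for d in diagnoses},
}

def split_in_half_score(query, candidates):
    """Heuristic score: prefer queries that split the remaining diagnoses into
    two (nearly) equal parts, so either answer eliminates roughly half."""
    yes = sum(1 for d in candidates if predicts[query][d])
    no = len(candidates) - yes
    return abs(yes - no)

def debug(oracle, candidates, queries):
    candidates = set(candidates)
    rounds = count(1)
    while len(candidates) > 1 and queries:
        # Select the most informative next query according to the heuristic.
        q = min(queries, key=lambda q: split_in_half_score(q, candidates))
        queries = [x for x in queries if x != q]
        answer = oracle(q)  # True: "the intended ontology entails q"
        # Discard diagnoses whose repair disagrees with the oracle's answer.
        candidates = {d for d in candidates if predicts[q][d] == answer}
        print(f"round {next(rounds)}: asked {q}, {len(candidates)} diagnoses left")
    return candidates

# Example oracle: the intended ontology entails q1 but neither q2 nor q3.
remaining = debug(lambda q: q == "q1", diagnoses, ["q1", "q2", "q3"])
print("remaining diagnoses:", remaining)
```

Either oracle answer eliminates every diagnosis whose repaired ontology disagrees with it, which is exactly the pruning step the abstract refers to; the selection heuristic only influences how quickly the candidate set shrinks.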
When ontologies reach a certain size and complexity, faults such as inconsistencies, unsatisfiable classes, or wrong entailments are hardly avoidable. Locating the incorrect axioms that cause these faults is a hard and time-consuming task. Addressing this issue, several techniques for semi-automatic fault localization in ontologies have been proposed and extensively studied. One class of these approaches involves a human expert who answers system-generated queries about the intended (correct) ontology in order to narrow down the possible fault locations. To suggest questions that are as informative as possible, existing methods draw on various algorithmic optimizations as well as heuristics. However, these computations are often based on certain assumptions about the interacting expert. In this work, we demonstrate that these assumptions might not always be adequate and discuss the consequences of their violation. In particular, we characterize a range of expert types with different query answering behavior and show that existing approaches are far from achieving optimal efficiency for all of them. In addition, we find that the cost metric adopted by state-of-the-art techniques might not always be realistic and that a change of metric has a decisive impact on the best choice of query answering strategy. As a remedy, we suggest a new, and simpler, type of expert question that leads to stable fault localization performance for all analyzed expert types and effort metrics, and has numerous further advantages over existing techniques. Moreover, we present an algorithm which computes and optimizes this new query type in worst-case polynomial time and which is fully compatible with existing concepts (e.g., query selection heuristics) and infrastructure (e.g., debugging user interfaces) in the field. Comprehensive experiments on faulty real-world ontologies attest that the new querying method is substantially and statistically significantly superior to existing techniques.
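A central point of this abstract is that the cost metric matters: counting only the number of queries can favor a different strategy than counting the individual statements an expert actually has to assess. The toy Python comparison below illustrates this effect; the session format, the numbers, and both metric names are assumptions made purely for illustration and do not come from the paper.

```python
# Illustrative comparison of two ways to measure the expert's effort in an
# interactive debugging session. Each session entry is a tuple:
# (sentences shown in the query, sentences the expert had to assess to answer).
session_strategy_a = [(4, 4), (3, 3), (2, 2)]   # few, but large, queries
session_strategy_b = [(1, 1)] * 7               # many single-sentence queries

def cost_number_of_queries(session):
    """Classic metric: each query costs 1, regardless of its size."""
    return len(session)

def cost_assessed_sentences(session):
    """Alternative metric: effort = total sentences the expert had to judge."""
    return sum(assessed for _, assessed in session)

for name, session in [("A (few large queries)", session_strategy_a),
                      ("B (many small queries)", session_strategy_b)]:
    print(name,
          "| #queries:", cost_number_of_queries(session),
          "| #assessed sentences:", cost_assessed_sentences(session))

# Under the query-count metric, strategy A (3 queries) beats B (7 queries);
# under the sentence-count metric, B (7 sentences) beats A (9 sentences).
```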
ISBN: (Print) 9783030229993; 9783030229986
When ontologies reach a certain size and complexity, faults such as inconsistencies or wrong entailments are hardly avoidable. Locating the axioms responsible for these faults is a hard and time-consuming task. Addressing this issue, several techniques for semi-automatic fault localization in ontologies have been proposed. Often, these approaches involve a human expert who answers system-generated questions about the intended (correct) ontology in order to narrow down the possible fault locations. To suggest as few and as informative questions as possible, existing methods draw on various algorithmic optimizations as well as heuristics. However, these computations are often based on certain assumptions about the interacting user and the metric to be optimized. In this work, we critically discuss these optimization criteria and suppositions about the user. As a result, we suggest an alternative, arguably more realistic metric to measure the expert's effort and show that existing approaches do not achieve optimal efficiency in terms of this metric. Moreover, we find that significant differences in user interaction cost arise if the assumptions made by existing works do not hold. As a remedy, we suggest a new notion of expert question that does not rely on any assumptions about the user's way of answering. Experiments on faulty real-world ontologies show that the new querying method minimizes the necessary expert consultations in the majority of cases and reduces the computation time for the best next question by at least 80% in all scenarios.
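The abstract refers to computing and optimizing the "best next question". One criterion commonly discussed in this line of work is an entropy-based score over the fault probabilities of the remaining diagnoses; the sketch below shows such a score in simplified form. It is not the new query type proposed in the paper, and all probabilities and names are illustrative assumptions.

```python
import math

# Simplified entropy-based scoring of candidate queries: prefer the query whose
# positive/negative probability mass is closest to 50/50, so either answer
# eliminates roughly half of the probability mass of the remaining diagnoses.

# Estimated probability that each candidate diagnosis is the actual one.
diag_probs = {"D1": 0.5, "D2": 0.3, "D3": 0.2}

# predicts[q][D] == True means: the ontology repaired according to D entails q.
predicts = {
    "q1": {"D1": True,  "D2": True,  "D3": False},
    "q2": {"D1": True,  "D2": False, "D3": False},
}

def expected_information_gain(query):
    """Score a query by how evenly it splits the probability mass of the
    remaining diagnoses (a simplified entropy criterion; 1.0 is optimal)."""
    p_yes = sum(p for d, p in diag_probs.items() if predicts[query][d])
    p_no = 1.0 - p_yes
    entropy = 0.0
    for p in (p_yes, p_no):
        if p > 0:
            entropy -= p * math.log2(p)
    return entropy

best = max(predicts, key=expected_information_gain)
for q in predicts:
    print(q, "score:", round(expected_information_gain(q), 3))
print("best next query:", best)
```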