This article considers the problem of reasoning on massive amounts of (possibly distributed) data. Existing proposals show some limitations: (i) the quantity of data that can be handled at the same time is limited, because reasoning is generally carried out in main memory; (ii) the interaction with external (and independent) Database Management Systems is not trivial and, in several cases, not allowed at all; and (iii) the efficiency of present implementations is still not sufficient for their use in complex reasoning tasks involving massive amounts of data. This article provides a contribution in this setting; it presents a new system, called DLVDB, which aims to solve these problems. Moreover, it reports the results of a thorough experimental analysis carried out to compare our system with several state-of-the-art systems (both logic-based and database systems) on some classical deductive problems; the other tested systems are LDL++, XSB, Smodels, and three top-level commercial Database Management Systems. DLVDB significantly outperforms even the commercial database systems on recursive queries.
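The article's benchmarks are not reproduced here; as a minimal sketch of the kind of recursive (deductive) query at issue, assuming Datalog/Prolog-style syntax and made-up facts, consider the classical transitive-closure program:

    % sample facts (hypothetical): parent(Child, Parent)
    parent(ann, bob).
    parent(bob, carl).
    parent(carl, dora).

    % ancestor/2 is the transitive closure of parent/2
    ancestor(X, Y) :- parent(X, Y).
    ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).

    % query: all ancestors of ann (expected: bob, carl, dora)
    ?- ancestor(ann, A).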
We introduce an extended tableau calculus for answer set programming (ASP). The proof system is based on the ASP tableaux defined in the work by Gebser and Schaub (Tableau calculi for answer set programming. In Proceedings of the 22nd International Conference on Logic Programming (ICLP 2006), S. Etalle and M. Truszczynski, Eds. Lecture Notes in Computer Science, vol. 4079. Springer, 11-25), with an added extension rule. We investigate the power of Extended ASP Tableaux both theoretically and empirically. We study the relationship of Extended ASP Tableaux with the Extended Resolution proof system defined by Tseitin for sets of clauses, and separate Extended ASP Tableaux from ASP Tableaux by giving a polynomial-length proof for a family {Π_n} of normal logic programs for which ASP Tableaux has exponential-length minimal proofs with respect to n. Additionally, Extended ASP Tableaux yields interesting insight into the effect of program simplification on the lengths of proofs in ASP. Closely related to Extended ASP Tableaux, we empirically investigate the effect of redundant rules on the efficiency of ASP solving.
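As a rough sketch of what the added extension rule permits (following the general Tseitin-style idea of introducing a fresh atom defined over existing literals; the concrete rules below are invented for illustration):

    % original two-rule program
    a :- not b.
    b :- not a.

    % extension step: x is a fresh atom occurring nowhere else, defined by a
    % rule over existing literals; the answer sets projected onto {a, b}
    % remain {a} and {b}
    x :- a, not b.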
Tracers provide users with useful information about program executions. In this article, we propose a "tracer driver". From a single tracer, it provides a powerful front-end enabling multiple dynamic analysis tools to be easily implemented, while limiting the overhead of trace generation. The relevant execution events are specified by flexible event patterns, and a large variety of trace data can be given either systematically or "on demand". The proposed tracer driver has been designed in the context of constraint logic programming (CLP); experiments have been made within GNU-Prolog. Execution views provided by existing tools have been easily emulated with a negligible overhead. Experimental measures show that the flexibility and power of the described architecture lead to good performance. The tracer driver overhead is inversely proportional to the average time between two traced events. Whereas the principles of the tracer driver are independent of the traced programming language, it is best suited for high-level languages, such as CLP, where each traced execution event encompasses numerous low-level execution steps. Furthermore, CLP is especially hard to debug, and current environments do not provide all the useful dynamic analysis tools; they can significantly benefit from our tracer driver, which enables dynamic analyses to be integrated at a very low cost.
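The article's actual event-pattern language and GNU-Prolog hooks are not shown here; the following Prolog fragment is only a hypothetical sketch of the driver idea, forwarding to each analysis tool just the trace events that match its declared pattern (all predicate names are invented):

    % hypothetical patterns: one tool wants 'wake' events, another wants failures
    event_pattern(wakeup_monitor, event(wake, _Constraint)).
    event_pattern(failure_view,   event(fail, _Goal)).

    % driver step: forward an incoming (ground) trace event to every tool whose
    % pattern it matches; non-matching events never leave the driver
    dispatch_event(Event) :-
        forall(event_pattern(Tool, Event),
               forward_event(Tool, Event)).

    % stand-in front-end: a real tool would update its execution view here
    forward_event(Tool, Event) :-
        format("tool ~w receives ~w~n", [Tool, Event]).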
Refactoring is an established technique from the object-oriented (OO) programming community to restructure code: it aims at improving software readability, maintainability, and extensibility. Although refactoring is not tied to the OO-paradigm in particular, its ideas have not been applied to logic programming until now. This paper applies the ideas of refactoring to Prolog programs. A catalogue is presented listing refactorings classified according to scope. Some of the refactorings have been adapted from the OO-paradigm, while others have been specifically designed for Prolog. The discrepancy between intended and operational semantics in Prolog is also addressed by some of the refactorings. In addition, ViPReSS, a semi-automatic refactoring browser, is discussed and the experience with applying ViPReSS to a large Prolog legacy system is reported. The main conclusion is that refactoring is both a viable technique in Prolog and a rather desirable one.
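As a small, generic illustration of a predicate-level refactoring (an "extract predicate" step, not necessarily one from the paper's catalogue; all predicate names are invented), duplicated goal sequences can be factored out without changing the program's behaviour:

    % before: the same validation goals are duplicated in two clauses
    process(Order) :- nonvar(Order), Order = order(Id, Items), Items \== [], ship(Id, Items).
    audit(Order)   :- nonvar(Order), Order = order(Id, Items), Items \== [], log(Id, Items).

    % after: the shared goals are extracted into valid_order/3
    valid_order(Order, Id, Items) :- nonvar(Order), Order = order(Id, Items), Items \== [].

    process(Order) :- valid_order(Order, Id, Items), ship(Id, Items).
    audit(Order)   :- valid_order(Order, Id, Items), log(Id, Items).

    % ship/2 and log/2 are assumed to be defined elsewhere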
Disjunctive logic programming (DLP) is a very expressive formalism. It allows for expressing every property of finite structures that is decidable in the complexity class Σ^P_2 (= NP^NP). Despite this high expressiveness, there are some simple properties, often arising in real-world applications, which cannot be encoded in a simple and natural manner. Especially properties that require the use of arithmetic operators (like sum, times, or count) on a set or multiset of elements satisfying some conditions cannot be naturally expressed in classic DLP. To overcome this deficiency, we extend DLP by aggregate functions in a conservative way. In particular, we avoid the introduction of constructs with disputed semantics by requiring aggregates to be stratified. We formally define the semantics of the extended language (called DLP^A), and illustrate how it can be profitably used for representing knowledge. Furthermore, we analyze the computational complexity of DLP^A, showing that the addition of aggregates does not bring a higher cost in that respect. Finally, we provide an implementation of DLP^A in DLV, a state-of-the-art DLP system, and report on experiments which confirm the usefulness of the proposed extension also for the efficiency of computation.
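For a flavour of the extension (written in roughly DLV-style syntax; the predicate names and numeric bounds are invented, and the exact aggregate syntax of the system may differ slightly), a stratified aggregate lets a budget condition over a guessed team be stated directly:

    % guess, for each employee, whether she joins the team
    in(E) v out(E) :- emp(E, _Skill, _Salary).

    % reject teams whose total salary exceeds the budget
    :- #sum{S, E : in(E), emp(E, _Skill, S)} > 100000.

    % require at least two team members
    :- #count{E : in(E)} < 2.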
Recently, there has been a lot of interest in the integration of Description Logics (DL) and rules on the Semantic Web. We define guarded hybrid knowledge bases (or g-hybrid knowledge bases) as knowledge bases that consist of a Description Logic knowledge base and a guarded logic program, similar to the DL+log knowledge bases from Rosati (In Proceedings of the 10th International Conference on Principles of Knowledge Representation and Reasoning, AAAI Press, Menlo Park, CA, 2006, pp. 68-78). g-Hybrid knowledge bases enable an integration of Description Logics and logic programming where, unlike in other approaches, variables in the rules of a guarded program do not need to appear in positive non-DL atoms of the body, i.e., DL atoms can act as guards as well. Decidability of satisfiability checking of g-hybrid knowledge bases is shown for the particular DL DLRO-{<=}, which is close to OWL DL, by a reduction to guarded programs under the open answer set semantics. Moreover, we show 2-EXPTIME-completeness for satisfiability checking of such g-hybrid knowledge bases.
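To give an informal feel for the guard condition (the concept and predicate names below are invented, and the concrete syntax of g-hybrid rules is not reproduced): a variable may occur positively only in a DL atom of the body and still be considered guarded, as in

    % Person is a concept of the DL knowledge base; discount/1 and employed/1
    % are program predicates. The DL atom Person(X) guards X, so X need not
    % appear in any positive non-DL body atom.
    discount(X) :- Person(X), not employed(X).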
Requirements about the quality of clinical guidelines can be represented by schemata borrowed from the theory of abductive diagnosis, using temporal logic to model the time-oriented aspects expressed in a guideline. Previously, we have shown that these requirements can be verified using interactive theorem proving techniques. In this paper, we investigate how this approach can be mapped to the facilities of a resolution-based theorem prover, OTTER, and a complementary program that searches for finite models of first-order statements, MACE-2. It is shown that the reasoning required for checking the quality of a guideline can be mapped to such fully automated theorem-proving facilities. The medical quality of an actual guideline concerning type 2 diabetes mellitus is investigated in this way.
We are researching the interaction between the rule and the ontology layers of the Semantic Web, by comparing two options: 1) using OWL and its rule extension SWRL to develop an integrated ontology/rule language, and 2) layering rules on top of an ontology with RuleML and OWL. Toward this end, we are developing the SWORIER system, which enables efficient automated reasoning on ontologies and rules, by translating all of them into Prolog and adding a set of general rules that properly capture the semantics of OWL. We have also enabled the user to make dynamic changes on the fly, at run time. This work addresses several of the concerns expressed in previous work, such as negation, complementary classes, disjunctive heads, and cardinality, and it discusses alternative approaches for dealing with inconsistencies in the knowledge base. In addition, for efficiency, we implemented techniques called extensionalization, avoiding reanalysis, and code minimization.
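As a flavour of the translation idea (a generic sketch of encoding ontology entailment as Prolog clauses; the predicate names and facts are invented and are not SWORIER's actual rule set):

    % direct subclass links and typing facts from a toy ontology
    subclass_of(grad_student, student).
    subclass_of(student, person).
    has_type(alice, grad_student).

    % class membership: direct type, or inherited along subclass links
    instance_of(X, C)     :- has_type(X, C).
    instance_of(X, Super) :- subclass_of(Sub, Super), instance_of(X, Sub).

    % ?- instance_of(alice, person).   succeeds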
ISBN (print): 9783540899815
Twenty years of stable model semantics (SMS) and almost ten years of Answer Set Programming (ASP) are a good reason for a moment of reflection on these important concepts. This position paper gives a personal account of their history and of aspects of ASP, and emphasizes the role of theory and practice in this area.
In this paper, a Gaifman-Shapiro-style module architecture is tailored to the case of SMODELS programs under the stable model semantics. The composition of SMODELS program modules is suitably limited by module conditions which ensure the compatibility of the module system with stable models. Hence the semantics of an entire SMODELS program depends directly on the stable models assigned to its modules. This result is formalized as a module theorem which truly strengthens V. Lifschitz and H. Turner's splitting-set theorem (June 1994, Splitting a logic program. In Logic Programming: Proceedings of the Eleventh International Conference on Logic Programming, Santa Margherita Ligure, Italy, P. Van Hentenryck, Ed. MIT Press, 23-37) for the class of SMODELS programs. To streamline generalizations in the future, the module theorem is first proved for normal programs and then extended to cover SMODELS programs using a translation from the latter class of programs to the former. Moreover, the respective notion of module-level equivalence, namely modular equivalence, is shown to be a proper congruence relation: it is preserved under substitutions of modules that are modularly equivalent. Principles for program decomposition are also addressed. The strongly connected components of the respective dependency graph can be exploited in order to extract a module structure when there is no explicit a priori knowledge about the modules of a program. The paper includes a practical demonstration of tools that have been developed for automated (de)composition of SMODELS programs.
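For a rough feel of the module conditions (a generic illustration in normal-program syntax, not an example taken from the paper), two modules may be joined when their output atoms are disjoint and no positive dependency cycle crosses the module boundary; purely negative inter-module dependencies are allowed:

    % module P1: input {b}, output {a}
    a :- not b.

    % module P2: input {a}, output {b}
    b :- not a.

    % The join of P1 and P2 satisfies the conditions (the mutual dependency is
    % negative only). Its stable models, {a} and {b}, are obtained by combining
    % compatible stable models of the individual modules, as the module theorem states.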