We show how automated support can be provided for identifying refactoring opportunities, i.e., when an application's design should be refactored and which refactoring(s) in particular should be applied. Such support is achieved by using the technique of logic meta-programming to detect so-called bad smells and by defining a framework that uses this information to propose adequate refactorings. We report on some initial but promising experiments performed using the proposed techniques.
As artificial intelligence techniques are maturing and being deployed in large applications, the problem of specifying control and reasoning strategies is regaining attention. Complex AI systems tend to comprise a suite of modules, each of which is capable of solving a different aspect of the overall problem, and each of which may incorporate a different reasoning paradigm. The orchestration of such heterogeneous problem solvers can be divided into two subproblems: (1) When and how are the various reasoning modes invoked? and (2) How is information passed between the various reasoning modes? In this paper, we explore some solutions to this problem. In particular, we describe a logic programming system that is based on three ideas: equivalence of declarative and operational semantics, declarative specification of control information, and smoothness of interaction with non-logic-based programs. Meta-level predicates are used to specify control information declaratively, compensating for the absence of the procedural constructs that usually facilitate the formulation of efficient programs. Knowledge that has been derived in the course of the current inference process can at any time be passed to non-logic-based program modules. Traditional SLD inference engines maintain only the linear path to the current state in the SLD search tree: formulae that have been proved on this path are implicitly represented in a stack of recursive calls to the inference engine, and formulae that have been proved on previous, unsuccessful paths are lost altogether. In our system, previously proved formulae are maintained explicitly and can therefore be passed to other reasoning modules. As an application example, we show how this inference system acts as the knowledge representation and reasoning framework of PRET, a program that automates system identification. (C) 2003 Elsevier Inc. All rights reserved.
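The key idea above, keeping proved formulae explicit so that results from failed branches survive backtracking, can be illustrated with a small propositional SLD-style interpreter. This is a minimal sketch under assumed names and rule encoding, not PRET's or the paper's actual implementation:

```python
# Illustrative sketch: an SLD-style interpreter that records every
# formula it proves in an explicit store, so results derived on
# branches that ultimately fail remain available to other modules.
# (A conventional engine would keep them only on the call stack.)

def solve(goal, rules, proved):
    """Depth-first SLD resolution over propositional rules.
    rules: dict mapping a head atom to a list of bodies (atom lists).
    proved: a set, mutated to record every subgoal ever established,
    including those proved on paths that later fail."""
    if goal in proved:
        return True
    for body in rules.get(goal, []):
        if all(solve(sub, rules, proved) for sub in body):
            proved.add(goal)            # kept explicitly, never popped
            return True
    return False

rules = {
    "p": [["q", "r"]],      # p :- q, r.
    "q": [[]],              # q.  (a fact)
    "r": [["missing"]],     # r :- missing.  (fails: no clause for missing)
}
proved = set()
ok = solve("p", rules, proved)
# The top-level goal p fails, but q, proved on the failed path,
# remains in the explicit store and could be passed on.
```

The design choice mirrored here is that the store of proved formulae is threaded through the search rather than living on the recursion stack, so a non-logic-based module can inspect it at any time.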
Sharing, an abstract domain developed by D. Jacobs and A. Langen for the analysis of logic programs, derives useful aliasing information. It is well known that a commonly used core of techniques, such as the integration of Sharing with freeness and linearity information, can significantly improve the precision of the analysis. However, a number of other proposals for refined domain combinations have been circulating for years. One feature common to these proposals is that they do not seem to have undergone a thorough experimental evaluation, even with respect to the expected precision gains. In this paper we experimentally evaluate: helping Sharing with the definitely ground variables found using Pos, the domain of positive Boolean formulas; the incorporation of explicit structural information; a full implementation of the reduced product of Sharing and Pos; the issue of reordering the bindings in the computation of the abstract mgu; an original proposal for the addition of a new mode recording the set of variables that are deemed to be ground or free; a refined way of using linearity to improve the analysis; and the recovery of hidden information in the combination of Sharing with freeness information. Finally, we discuss the issue of whether tracking compoundness allows the computation of more sharing information.
We propose a path-based framework for deriving and simplifying source-tracking information for first-order term unification in the empty theory. Such a framework is useful for diagnosing unification-based systems, including debugging of type errors in programs and the generation of success and failure proofs in logic programming. The objects of source-tracking are deductions in the logic of term unification. The semantics of deductions are paths over a unification graph whose labels form the suffix language of a semi-Dyck set. Based on this idea of unification paths, two algorithms for generating proofs are presented: the first uses context-free labeled shortest-path algorithms to generate optimal (shortest) proofs in time O(n^3) for a fixed signature, where n is the number of vertices of the unification graph. The second algorithm integrates easily with standard unification algorithms, entailing an overhead of only a constant factor, but generates non-optimal proofs. These non-optimal proofs may be further simplified by group rewrite rules. (c) 2005 Elsevier Inc. All rights reserved.
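For context, the operation whose deductions the framework explains, first-order term unification in the empty theory, can be sketched as a standard Robinson-style algorithm. This is a generic illustration, not the paper's path-based machinery; the term encoding (uppercase strings for variables, tuples for compound terms) is an assumption:

```python
# Minimal first-order term unification with occurs-check.
# Variables: strings starting with an uppercase letter.
# Constants: lowercase strings. Compound terms: (functor, arg1, ...).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to the current representative term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def bind(v, t, subst):
    if occurs(v, t, subst):
        return None                     # occurs-check failure
    s = dict(subst)
    s[v] = t
    return s

def unify(t1, t2, subst=None):
    """Return a most general unifier of t1 and t2, or None on failure."""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        return bind(t1, t2, subst)
    if is_var(t2):
        return bind(t2, t1, subst)
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                         # functor or arity clash

# Unifying f(X, g(Y)) with f(a, g(X)):
mgu = unify(("f", "X", ("g", "Y")), ("f", "a", ("g", "X")))
```

A source-tracking layer in the paper's sense would additionally record, for each binding, the deduction path through the unification graph that justified it.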
Computer Aided Design systems provide tools for building and manipulating models of solid objects. Some also provide access to programming languages so that parametrised designs can be expressed. There is a sharp distinction, therefore, between building models, a concrete graphical editing activity, and programming, an abstract, textual, algorithm-construction activity. The recently proposed Language for Structured Design (LSD) was motivated by a desire to combine the design and programming activities in one language. LSD achieves this by extending a visual logic programming language to incorporate the notions of solids and operations on solids. Here we investigate another aspect of the LSD approach, namely, that by using visual logic programming as the engine to drive the parametrised assembly of objects, we also gain the powerful symbolic problem-solving capability that is the forte of logic programming languages. This allows the designer/programmer to work at a higher level, giving declarative specifications of a design in order to obtain the design descriptions. Hence LSD integrates problem solving, design synthesis, and prototype assembly in a single homogeneous programming/design environment. We demonstrate this specification-to-final-assembly capability using the master-keying problem for designing systems of locks and keys.
We explore the major issues involved in the automatic exploitation of parallelism from the execution models of logic-based non-monotonic reasoning systems. We describe orthogonal techniques to parallelize the computation of models of non-monotonic logic theories, and demonstrate the effectiveness of the proposed techniques in a prototypical implementation. (c) 2005 Elsevier B.V. All rights reserved.
In this paper we define a model theory and give a semantic proof of cut-elimination for ICTT, an intuitionistic formulation of Church's theory of types defined by Miller et al. and the basis for the lambda Prolog programming language. Our approach, extending techniques of Takahashi and Andrews and tableaux machinery of Fitting, Smullyan, Nerode and Shore, is to prove a completeness theorem for the cut-free fragment and show semantically that cut is a derived rule. This allows us to generalize a result of Takahashi and Schütte on extending partial truth valuations in impredicative systems. We extend Andrews' notion of Hintikka sets to intuitionistic higher-order logic in a way that also defines tableau-provability for intuitionistic type theory. In addition to giving a completeness theorem without using cut we then show, using cut, how to establish completeness of more conventional term models. These models give a declarative semantics for the logic underlying the lambda Prolog programming language.
The conventional approach for implementing the knowledge base of a planning agent on an intelligent embedded system is solely of a software nature. It requires the existence of a compiler that transforms the initial declarative logic program, specifying the knowledge base, into its equivalent procedural one, to be programmed into the embedded system's microprocessor. This practice increases the complexity of the final implementation (the declarative-to-sequential transformation adds a great amount of software code for simulating the declarative execution) and reduces the overall system's performance (logic derivations require the use of a stack and a great number of jump instructions for their evaluation). Specialized hardware implementations designed to resolve these problems are capable of supporting only logic programs, which limits their use in applications where logic programs need to be intertwined with traditional procedural ones. In this paper, we exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of logic derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation increases the performance of logic derivations for the control inference process (experimental analysis yields an approximately tenfold, i.e., 1000%, increase in performance) and reduces the complexity of the final implemented code through the introduction of an extended C language called C
The task of generating minimal models of a knowledge base is at the computational heart of diagnosis systems like truth maintenance systems, and of nonmonotonic systems like autoepistemic logic, default logic, and disjunctive logic programs. Unfortunately, it is NP-hard. In this paper we present a hierarchy of classes of knowledge bases, Psi(1), Psi(2), ..., with the following properties: first, Psi(1) is the class of all Horn knowledge bases; second, if a knowledge base T is in Psi(k), then T has at most k minimal models, and all of them may be found in time O(lk^2), where l is the length of the knowledge base; third, for an arbitrary knowledge base T, we can find the minimum k such that T belongs to Psi(k) in time polynomial in the size of T; and, last, where K is the class of all knowledge bases, the union of Psi(i) over all i >= 1 is exactly K, that is, every knowledge base belongs to some class in the hierarchy. The algorithm is incremental, that is, it is capable of generating one model at a time. (c) 2005 Elsevier B.V. All rights reserved.
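The Psi(1) base case can be illustrated concretely: a Horn knowledge base has a unique minimal model, computable by forward chaining. Below is a minimal sketch assuming a (body, head) clause encoding of my own choosing, not the paper's notation; the naive fixpoint loop is quadratic in the worst case, whereas a truly linear-time variant would use Dowling-Gallier-style counters:

```python
# Forward chaining to the unique minimal model of a Horn knowledge base
# (the Psi(1) case). Each clause is a pair (body, head): body is a
# frozenset of atoms, head a single atom; facts have empty bodies.

def minimal_model(horn_clauses):
    """Return the unique minimal model of a Horn KB as a set of atoms."""
    model = set()
    changed = True
    while changed:                      # naive fixpoint iteration
        changed = False
        for body, head in horn_clauses:
            if head not in model and body <= model:
                model.add(head)         # all premises hold: derive head
                changed = True
    return model

kb = [
    (frozenset(), "a"),                 # fact: a.
    (frozenset({"a"}), "b"),            # rule: b :- a.
    (frozenset({"b", "c"}), "d"),       # rule: d :- b, c. (c is never derived)
]
m = minimal_model(kb)                   # {"a", "b"}: d stays out, as c fails
```

For non-Horn bases the hierarchy's higher classes come into play, since a disjunctive clause can split the computation into several candidate minimal models.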
We present cTI, the first system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis and checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, for instance by means of user annotations. Moreover, the analysis must be redone every time the class of queries of interest is updated. Termination inference, in contrast, requires neither user annotations nor recomputation. In this approach, terminating classes for all predicates are inferred at once. We describe the architecture of cTI and report an extensive experimental evaluation of the system covering many classical examples from the logic programming termination literature and several Prolog programs of respectable size and complexity.