In this paper, a Gaifman-Shapiro-style module architecture is tailored to the case of SMODELS programs under the stable model semantics. The composition of SMODELS program modules is suitably limited by module conditions which ensure the compatibility of the module system with stable models. Hence the semantics of an entire SMODELS program depends directly on the stable models assigned to its modules. This result is formalized as a module theorem which truly strengthens V. Lifschitz and H. Turner's splitting-set theorem (Splitting a logic program. In Logic Programming: Proceedings of the Eleventh International Conference on Logic Programming, Santa Margherita Ligure, Italy, June 1994, P. Van Hentenryck, Ed., MIT Press, 23-37) for the class of SMODELS programs. To streamline generalizations in the future, the module theorem is first proved for normal programs and then extended to cover SMODELS programs using a translation from the latter class of programs to the former. Moreover, the respective notion of module-level equivalence, namely modular equivalence, is shown to be a proper congruence relation: it is preserved under substitutions of modules that are modularly equivalent. Principles for program decomposition are also addressed. The strongly connected components of the respective dependency graph can be exploited in order to extract its module structure when there is no explicit a priori knowledge about the modules of a program. The paper includes a practical demonstration of tools that have been developed for automated (de)composition of SMODELS programs.
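The SCC-based extraction of a module structure can be pictured with a short sketch (this is an illustration only, not the authors' tool, and the four-rule program is an invented toy): atoms of a normal program are grouped by the strongly connected components of the dependency graph, and each component is a candidate module.

```python
# Sketch: grouping the atoms of a normal program by strongly connected
# components of its dependency graph; each component is a candidate module.
# The four rules below are an invented toy program, not from the paper.
from collections import defaultdict

rules = [
    ("a", ["b"]),        # a :- b.
    ("b", ["a"]),        # b :- a.
    ("c", ["a", "d"]),   # c :- a, d.
    ("d", []),           # d.
]

graph = defaultdict(set)   # edge head -> body atom ("head depends on body")
atoms = set()
for head, body in rules:
    atoms.add(head)
    atoms.update(body)
    graph[head].update(body)

def sccs(nodes, edges):
    """Tarjan's algorithm (recursive; fine for small programs)."""
    index, low, on_stack = {}, {}, set()
    stack, comps, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in edges[v]:
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            comps.append(comp)

    for v in sorted(nodes):
        if v not in index:
            visit(v)
    return comps

for i, component in enumerate(sccs(atoms, graph), 1):
    print(f"candidate module {i}: {sorted(component)}")
```

On the toy program this prints the mutually recursive pair {a, b} as one candidate module and {c}, {d} as singleton modules.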
The refinement calculus for logic programs is a framework for deriving logic programs from specifications. It is based on a wide-spectrum language that can express both specifications and code, and a refinement relation that models the notion of correct implementation. In this paper we extend and generalise earlier work on contextual refinement. Contextual refinement simplifies the refinement process by abstractly capturing the context of a subcomponent of a program, which typically includes information about the values of the free variables. This paper also extends and generalises module refinement. A module is a collection of procedures that operate on a common data type; module refinement between a specification module A and an implementation module C allows calls to the procedures of A to be systematically replaced with calls to the corresponding procedures of C. Based on the conditions for module refinement, we present a method for calculating an implementation module from a specification module. Both contextual and module refinement within the refinement calculus have been generalised from earlier work and the results are presented in a unified framework.
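A minimal sketch of the module-refinement idea, under the assumption that a "specification module" is represented by postconditions and an "implementation module" by executable procedures (the names SpecModule and ImplModule and the sorting example are illustrative, not from the paper): a call to a specification procedure may be replaced by a call to the corresponding implementation procedure when every result of the implementation satisfies the specification.

```python
# Sketch: module refinement intuition with a toy specification/implementation.
class SpecModule:
    @staticmethod
    def sort_spec(xs, ys):
        """Specification: ys is a sorted permutation of xs."""
        return sorted(xs) == list(ys)

class ImplModule:
    @staticmethod
    def sort(xs):
        return sorted(xs)          # concrete code standing in for the specification

# Calls to the specification module can be replaced by calls to the
# implementation module provided every result satisfies the specification.
for sample in ([3, 1, 2], [], [5, 5, 1]):
    assert SpecModule.sort_spec(sample, ImplModule.sort(sample))
print("implementation refines specification on the sampled inputs")
```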
The Semantic Web drives toward the use of the Web for interacting with logically interconnected data. Through knowledge models such as the Resource Description Framework (RDF), the Semantic Web provides a unifying representation of richly structured data. Adding logic to the Web implies the use of rules to make inferences, choose courses of action, and answer questions. This logic must be powerful enough to describe complex properties of objects but not so powerful that agents can be tricked by being asked to consider a paradox. The Web has several characteristics that can lead to problems when existing logics are used, in particular the inconsistencies that inevitably arise due to the openness of the Web, where anyone can assert anything. N3Logic is a logic that allows rules to be expressed in a Web environment. It extends RDF with syntax for nested graphs and quantified variables, with predicates for implication and for accessing resources on the Web, and with functions, including cryptographic, string, and math functions. The main goal of N3Logic is to be a minimal extension to the RDF data model such that the same language can be used for logic and data. In this paper, we describe N3Logic and illustrate through examples why it is an appropriate logic for the Web.
Logic programming under the answer-set semantics nowadays deals with numerous different notions of program equivalence. This is due to the fact that equivalence for substitution (known as strong equivalence) and ordinary equivalence are different concepts. The former holds, given programs P and Q, iff P can be faithfully replaced by Q within any context R, while the latter holds iff P and Q provide the same output, that is, they have the same answer sets. Notions in between strong and ordinary equivalence have been introduced as theoretical tools to compare incomplete programs and are defined by either restricting the syntactic structure of the considered context programs R or by bounding the set A of atoms allowed to occur in R (relativized equivalence). For the latter approach, different sets A yield properly different equivalence notions, in general. For the former approach, however, it turned out that any "reasonable" syntactic restriction to R coincides with either ordinary, strong, or uniform equivalence (for uniform equivalence, the context ranges over arbitrary sets of facts, rather than program rules). In this paper, we propose a parameterization for equivalence notions which takes care of both such kinds of restrictions simultaneously by bounding, on the one hand, the atoms which are allowed to occur in the rule heads of the context and, on the other hand, the atoms which are allowed to occur in the rule bodies of the context. We introduce a general semantical characterization which includes known ones, such as SE-models (for strong equivalence) or UE-models (for uniform equivalence), as special cases. Moreover, we provide complexity bounds for the problem in question and sketch a possible implementation method making use of dedicated systems for checking ordinary equivalence.
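To make the SE-model characterization concrete, here is a small brute-force sketch (an illustration only, not the implementation method sketched in the paper): it enumerates the SE-models of tiny propositional normal programs and compares the two sets, equality witnessing strong equivalence. The example programs and their encoding are invented.

```python
# Sketch: brute-force SE-models for tiny propositional normal programs.
# A rule is (head, positive body, negative body); the encoding is illustrative.
from itertools import chain, combinations

def subsets(s):
    s = sorted(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def reduct(program, y):
    """Gelfond-Lifschitz reduct of the program w.r.t. interpretation y."""
    return [(h, pos) for (h, pos, neg) in program if not (set(neg) & y)]

def satisfies(interp, reduced):
    """Classical satisfaction of a negation-free program."""
    return all(not set(pos) <= interp or h in interp for (h, pos) in reduced)

def se_models(program, atoms):
    """All pairs (X, Y) with X <= Y, Y |= P and X |= P^Y."""
    result = set()
    for y in map(set, subsets(atoms)):
        red = reduct(program, y)
        if not satisfies(y, red):
            continue
        for x in map(set, subsets(y)):
            if satisfies(x, red):
                result.add((frozenset(x), frozenset(y)))
    return result

# Q adds a rule that can never contribute (it blocks itself), so the two
# programs have the same SE-models, i.e. they are strongly equivalent.
P = [("p", [], ["q"])]
Q = [("p", [], ["q"]), ("q", [], ["p", "q"])]
print("strongly equivalent:", se_models(P, {"p", "q"}) == se_models(Q, {"p", "q"}))
```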
We present in this paper a first-order axiomatization of an extended theory T of finite or infinite trees, built on a signature containing an infinite set of function symbols and a relation finite(t), which makes it possible to distinguish between finite and infinite trees. We show that T has at least one model and prove its completeness by giving not only a decision procedure, but a full first-order constraint solver that gives clear and explicit solutions for any first-order constraint satisfaction problem in T. The solver is given in the form of 16 rewriting rules that transform any first-order constraint phi into an equivalent disjunction phi' of simple formulas such that phi' is either the formula true, or the formula false, or a formula having at least one free variable, being equivalent neither to true nor to false, and in which the solutions of the free variables are expressed in a clear and explicit way. The correctness of our rules implies the completeness of T. We also describe an implementation of our algorithm in CHR (Constraint Handling Rules) and compare its performance with that of an implementation in C++ and that of a recent decision procedure for decomposable theories.
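The finite(t) relation can be illustrated independently of the paper's 16-rule solver: representing possibly cyclic term graphs (rational trees), a tree is finite exactly when no node can reach itself. The Node class and the examples below are illustrative assumptions, not the paper's data structures or algorithm.

```python
# Sketch: deciding finite(t) on possibly cyclic term graphs (rational trees).
class Node:
    def __init__(self, symbol, children=None):
        self.symbol = symbol
        self.children = list(children) if children else []

def is_finite(root):
    """A term graph denotes a finite tree iff no node can reach itself."""
    IN_PROGRESS, DONE = 1, 2
    state = {}

    def visit(node):
        mark = state.get(id(node))
        if mark == IN_PROGRESS:       # back edge: the unfolded tree is infinite
            return False
        if mark == DONE:              # already known to unfold finitely
            return True
        state[id(node)] = IN_PROGRESS
        for child in node.children:
            if not visit(child):
                return False
        state[id(node)] = DONE
        return True

    return visit(root)

# t1 = f(a, b) is finite; t2 is the rational tree t2 = f(t2), i.e. f(f(f(...))).
t1 = Node("f", [Node("a"), Node("b")])
t2 = Node("f")
t2.children.append(t2)
print(is_finite(t1), is_finite(t2))   # expected: True False
```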
The formal modelling of programming languages has always been a challenging activity due to the gap occurring between formal definition and actual implementation. On the other hand, the Maude rewriting language has already proven to be a suitable tool to bridge the gap between theory and practice when implementing the operational semantics of programming languages. In particular, Maude has been exploited to model languages belonging to different paradigms and levels of abstraction, leading to specifications that represent de facto executable prototypes of such languages. In this paper we focus on A&A ReSpecT, a coordination language based on the agents and artifacts (A&A) meta-model, and exploit Maude to generate an execution machine for A&A ReSpecT programs, acting as an implementation of its operational semantics.
ISBN (print): 9783540859574
Constraint programming often involves global constraints, for which various custom filtering algorithms have been published. This work presents a semi-automatic generation of CHR solvers for the subset of global constraints definable by specific automata. The generation is based on a constraint logic program modelling an automaton and an improved version of the Prim-Miner algorithm. The solvers only need to be generated once and achieve arc-consistency for over 40 global constraints.
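As a flavour of automaton-definable global constraints (the CHR generation itself is not reproduced here), the sketch below encodes the classic GLOBAL_CONTIGUITY constraint as a three-state automaton and merely checks ground 0/1 assignments against it; the state names and the choice of constraint are illustrative.

```python
# Sketch: a global constraint defined by a finite automaton, here
# GLOBAL_CONTIGUITY over 0/1 variables (all 1s form one contiguous block).
# This toy only checks ground assignments; it does no propagation.
STATES = {"before", "inside", "after"}          # all states accepting
START = "before"
DELTA = {
    ("before", 0): "before", ("before", 1): "inside",
    ("inside", 1): "inside", ("inside", 0): "after",
    ("after", 0): "after",                       # ("after", 1) undefined: reject
}

def accepts(values):
    state = START
    for v in values:
        state = DELTA.get((state, v))
        if state is None:
            return False
    return state in STATES

print(accepts([0, 1, 1, 0]))   # True: one contiguous block of 1s
print(accepts([1, 0, 1]))      # False: two separate blocks
```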
Compared with the history of computing hardware, the history of software is in a relatively undeveloped state. In particular, the history of programming languages still consists for the most part of technical accounts presenting a rather Whiggish perspective on developments. Given the importance of software in the contemporary world, however, it is important to develop a more sophisticated understanding of the medium in which it is expressed. This thesis considers some aspects of this history with the aim of examining the influence of formal logic on the evolution of notations for expressing computer programs. It is argued that this was not a natural or inevitable application of theory to practice, as is sometimes suggested, but a complex and contingent process with a rich history of its own. Two introductory chapters discuss the work on computability carried out by logicians in the mid-1930s, and the controversial topic of the role of logic in the invention of the computer. The body of the thesis proceeds chronologically, considering machine codes, the introduction of higher-level notations, structured programming and software engineering, and the early object-oriented languages. The picture that emerges is that formal logic was deliberately employed by programming language designers to provide a model for a theoretical understanding of programming languages and the process of program development. This led to a flourishing research programme in the 1960s and 1970s, many of whose results are of lasting significance and value. The thesis concludes by examining the early history of object-oriented languages, arguing that this episode shows the emergence of limits to the applicability of the logical research programme.
Difference constraints of the form x - y ≤ d are well studied, with efficient algorithms for satisfaction and implication, because of their connection to shortest paths. Finite domain propagation algorithms however d...
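The shortest-path connection mentioned in this entry can be sketched directly: a system of constraints x - y <= d is satisfiable iff the corresponding constraint graph (edge y -> x with weight d) has no negative cycle, which Bellman-Ford relaxation detects. The helper name and the sample systems below are illustrative, not from the paper.

```python
# Sketch: satisfiability of difference constraints x - y <= d via Bellman-Ford
# relaxation on the constraint graph (edge y -> x with weight d).  A negative
# cycle means the system is unsatisfiable.
def difference_constraints_sat(constraints):
    """constraints: iterable of (x, y, d) meaning x - y <= d."""
    nodes = {v for (x, y, _) in constraints for v in (x, y)}
    dist = {v: 0 for v in nodes}     # implicit source with 0-weight edges
    edges = [(y, x, d) for (x, y, d) in constraints]
    for _ in range(len(nodes)):      # |V| relaxation passes
        for (u, v, w) in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Any further improvement witnesses a negative cycle, hence unsatisfiable.
    if any(dist[u] + w < dist[v] for (u, v, w) in edges):
        return None
    return dist                      # x := dist[x] satisfies every constraint

print(difference_constraints_sat([("x", "y", 3), ("y", "z", -2), ("z", "x", 0)]))
print(difference_constraints_sat([("x", "y", 1), ("y", "x", -2)]))   # None
```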
ISBN (print): 3540787380
The proceedings contain 28 papers. The topics discussed include: regular expression subtyping for XML query and update languages; a theory of hygienic macros; a hybrid denotational semantics for hybrid systems; practical programming with higher-order encodings and dependent types; typing safe deallocation; iterative specialization of Horn clauses; ranking abstractions; non-disjunctive numerical domain for array predicate abstraction; cover algorithms and their combination; linear declassification; open bisimulation for the concurrent constraint Pi-calculus; inferring channel buffer via linear programming; verification of higher-order computation: a game-semantic approach; verification of equivalent-results methods; semi-persistent data structures; a realizability model for impredicative Hoare type theory; oracle semantics for concurrent separation logic; and certificate translation in abstract interpretation.