We present a new principle for the development of database query languages: that the primitive operations should be organized around types. Viewing a relational database as consisting of sets of records, this principle dictates that we should investigate separately operations for records and operations for sets. There are two immediate advantages of this approach, which is partly inspired by basic ideas from category theory. First, it provides a language for structures in which record and set types may be freely combined: nested relations or complex objects. Second, the fundamental operations for sets are closely related to those for other "collection types" such as bags or lists, and this suggests how database languages may be uniformly extended to these new types. The most general operation on sets, structural recursion, is one in which not all programs are well-defined. In looking for restricted forms of this operation that always give rise to well-defined operations, we find a number of close connections with existing database languages, notably those developed for complex objects. Moreover, even though the general paradigm of structural recursion is shown to be no more expressive than one of the existing languages for complex objects, it possesses certain properties of uniformity that make it a better candidate for an efficient, practical language. Thus, rather than developing query languages by extending, for example, relational calculus, we advocate a very powerful paradigm in which a number of well-known languages arise as natural sublanguages.
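To make the paradigm concrete, here is a minimal Haskell sketch (the names sri, memberOf, and asList are hypothetical, and lists stand in for presentations of sets; this is an illustration, not the paper's formalism). Structural recursion over the insert presentation of a collection is foldr on lists, and a program is well-defined on sets only when its combining function cannot observe insertion order or duplicates:

```haskell
-- Structural recursion over the insert presentation of a collection:
-- sri ins e maps the empty collection to e and (insert x s) to
-- ins x (sri ins e s). On lists this is exactly foldr.
sri :: (a -> b -> b) -> b -> [a] -> b
sri = foldr

-- Well-defined on sets: (||) is commutative and idempotent, so the
-- answer does not depend on how the set is presented.
memberOf :: Eq a => a -> [a] -> Bool
memberOf y = sri (\x acc -> x == y || acc) False

-- NOT well-defined on sets: (:) records order and duplicates, so two
-- presentations of the same set {1,2} give different results:
--   asList [1,2]   == [1,2]
--   asList [2,1,1] == [2,1,1]
asList :: [Int] -> [Int]
asList = sri (:) []
```

One way to read the abstract's restricted forms is as syntactic disciplines that admit only combining functions of the first kind.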
Type inference is the compile-time process of reconstructing missing type information in a program based on the usage of its variables. ML and Haskell are two languages where this aspect of compilation has enjoyed some popularity, allowing type information to be omitted while static type checking is still performed. Type inference may be expected to find application in the prototyping and scripting languages that are becoming increasingly popular. A difficulty with type inference is that the diagnostics produced by the type checker in response to type errors can be confusing and counter-intuitive. A modification of the unification algorithm used in Hindley-Milner type inference is presented, which records the specific reasoning that led to a program variable receiving a particular type, so that this reasoning can be reported when explaining a type error. This approach is close to the intuitive process used in practice for debugging type errors. The algorithm is practical and has been implemented in the Standard ML of New Jersey compiler. The modified unification algorithm also appears useful in other domains, including logic program debuggers and semantics-based programming environments.
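As a sketch of the idea (a minimal Haskell illustration, not the Standard ML of New Jersey implementation; Ty, Subst, and unify are hypothetical names, and the recorded "reason" is a plain string where the paper keeps richer derivation information), a unifier can thread through every variable binding the reason that forced it:

```haskell
import qualified Data.Map as M

-- Simple types for a Hindley-Milner-style language.
data Ty = TVar String | TCon String | TArr Ty Ty
  deriving (Eq, Show)

-- Each binding carries the reason that forced it, available later for
-- explaining a type error to the programmer.
type Subst = M.Map String (Ty, String)

apply :: Subst -> Ty -> Ty
apply s t@(TVar v) = maybe t (apply s . fst) (M.lookup v s)
apply _ t@(TCon _) = t
apply s (TArr a b) = TArr (apply s a) (apply s b)

occurs :: String -> Ty -> Bool
occurs v (TVar w)   = v == w
occurs _ (TCon _)   = False
occurs v (TArr a b) = occurs v a || occurs v b

-- Unify two types under substitution s, labelling every new variable
-- binding with the reason ("why") the constraint arose.
unify :: String -> Ty -> Ty -> Subst -> Either String Subst
unify why l r s = go (apply s l) (apply s r)
  where
    go (TVar v) t
      | t == TVar v = Right s
      | occurs v t  = Left ("occurs check failed: " ++ why)
      | otherwise   = Right (M.insert v (t, why) s)
    go t u@(TVar _) = go u t
    go (TCon c) (TCon d) | c == d = Right s
    go (TArr a1 b1) (TArr a2 b2) = unify why a1 a2 s >>= unify why b1 b2
    go t u = Left ("cannot unify " ++ show t ++ " and " ++ show u ++ ": " ++ why)
```

A driver would supply a reason derived from the program point, for example unify "argument of f at line 12" t1 t2 M.empty, so a later mismatch can cite the usage that fixed each variable's type.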
From a theoretical point of view, lazy evaluation corresponds to the call-by-name evaluation method, which substitutes arguments for parameters before evaluating them and never evaluates under a lambda. From an implementation perspective, lazy evaluation is often equated with the call-by-need method, which is similar to call-by-name except that arguments are shared: when an argument's value is required, it is evaluated, and the result is stored and used for every other reference to it. The theoretical version of lazy evaluation, call-by-name, is easily formalized with the reduction rules of the lambda calculus. However, it has proven rather difficult to formalize the rules of lazy evaluation with sharing, or call-by-need, in such a way that the formalization both captures sharing and is useful for reasoning. Many optimizations are based on analyses of program behavior that depend on whether arguments are shared, so a model of lazy evaluation that captures sharing is needed to develop correct analyses. In this paper, an operational semantics of PCF is presented which captures the sharing inherent in the call-by-need implementation of lazy evaluation in a form that is suitable for reasoning. The semantics uses explicit substitutions to implement sharing. The link between the theoretical and implementation versions of lazy evaluation is made by showing the correctness of the call-by-need semantics with respect to a standard call-by-name semantics for PCF.
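The operational difference is easy to observe in a lazy language. A minimal sketch (the names costly, byNeed, and byName are hypothetical; Debug.Trace is used only to make evaluation visible, and the program should be compiled without optimization, or with -fno-cse, so GHC does not merge the duplicated expressions):

```haskell
import Debug.Trace (trace)

-- A "costly" computation that announces each evaluation.
costly :: () -> Int
costly () = trace "evaluating" (2 + 3)

-- Call-by-need: the let-bound argument is a single shared thunk, so
-- "evaluating" is printed once even though x is used twice.
byNeed :: Int
byNeed = let x = costly () in x + x

-- Simulating call-by-name: the expression is duplicated at each use
-- site, so it is evaluated (and traced) twice.
byName :: Int
byName = costly () + costly ()

main :: IO ()
main = do
  print byNeed   -- "evaluating" appears once
  print byName   -- "evaluating" appears twice more
```

byNeed matches the call-by-need reading in which the first forcing of the thunk updates it in place; byName matches the call-by-name reading in which the argument is substituted and re-evaluated at each use.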
We investigate the modularity behavior of termination and confluence properties of (join) conditional term rewriting systems. We give counterexamples showing that the properties weak termination, weak innermost termination, and (strong) innermost termination are not, in general, preserved under signature extensions. Then we develop sufficient conditions for the preservation of these properties under signature extensions and, more generally, for their modularity in the disjoint-union case. This leads to new criteria for the modularity of termination and completeness that generalize known results for unconditional systems. Finally, combining our analysis with recent related results on the preservation of semi-completeness, we show how to cover the (non-disjoint) case of combined conditional rewrite systems with shared constructors as well.
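For reference, a join conditional rewrite rule has the form l → r ⇐ s1 ↓ t1, …, sn ↓ tn, where a condition s ↓ t holds when s and t rewrite to a common term. A minimal Haskell encoding (Term, Rule, and maxRule are hypothetical names; the example rule is ours, not one of the paper's counterexamples):

```haskell
-- First-order terms: variables and function applications.
data Term = Var String | Fun String [Term]
  deriving (Eq, Show)

-- l -> r <= [(s1,t1), ..., (sn,tn)], each pair read as the joinability
-- condition s_i ↓ t_i.
data Rule = Rule { lhs :: Term, rhs :: Term, conds :: [(Term, Term)] }

-- Example: max(x, y) -> y when leq(x, y) joins with true.
maxRule :: Rule
maxRule = Rule
  { lhs   = Fun "max" [Var "x", Var "y"]
  , rhs   = Var "y"
  , conds = [(Fun "leq" [Var "x", Var "y"], Fun "true" [])]
  }
```

A signature extension, in these terms, adds new function symbols without adding rules; the paper's counterexamples show that even this can destroy the weak and innermost termination properties.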
We view the process of constructing a program as the stepwise transformation of a relation into simpler relations. In this paper, we focus on a particular transformation: that which decomposes the specification of an iterative program into the specification of the initialization segment and the specification of the while loop. We investigate in some detail the mathematics of this decomposition.
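A minimal sketch of the shape of such a decomposition in Haskell (while and factorial are hypothetical names; the paper works with relational specifications rather than code, so this only illustrates the result): the specification is split into an initialization segment that establishes the loop state and a while loop that refines it.

```haskell
-- A generic while loop: repeat step while the guard holds.
while :: (s -> Bool) -> (s -> s) -> s -> s
while guard step s
  | guard s   = while guard step (step s)
  | otherwise = s

-- Factorial, decomposed into an initialization segment that sets up
-- the loop state (i, acc) and a while loop that transforms it.
factorial :: Integer -> Integer
factorial n = snd (while (\(i, _) -> i < n) body initState)
  where
    initState     = (0, 1)                  -- initialization segment
    body (i, acc) = (i + 1, acc * (i + 1))  -- loop body
```

The paper's concern is the mathematics of deriving, from the specification of factorial as a relation, separate specifications that initState and body must each satisfy.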
Results are presented of an analysis of several defect models using data collected from two large commercial projects. Traditional models typically use either program metrics (i.e., measurements from software products) or testing time, or combinations of these, as independent variables. The limitations of such models have been well documented. The models considered here use the number of defects detected in the earlier phases of the development process as the independent variable. This number can be used to predict the number of defects to be detected later, even in modified software products. A strong correlation between the number of earlier defects and that of later ones was found. Using this relationship, a mathematical model was derived that may be used to estimate the number of defects remaining in software. This defect model may also be used to guide software developers in evaluating the effectiveness of the software development and testing processes.
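The abstract does not give the derived model's exact form, but the reported strong correlation suggests something as simple as a linear fit of later defects against earlier ones. A hedged sketch (fitLine and predictLater are hypothetical names; the linear form is an illustrative assumption, not the paper's derived model):

```haskell
-- Ordinary least squares for  later ≈ a + b * earlier.
fitLine :: [Double] -> [Double] -> (Double, Double)
fitLine earlier later = (a, b)
  where
    n  = fromIntegral (length earlier)
    mx = sum earlier / n
    my = sum later / n
    b  = sum [(x - mx) * (y - my) | (x, y) <- zip earlier later]
       / sum [(x - mx) ^ (2 :: Int) | x <- earlier]
    a  = my - b * mx

-- Predicted later-phase defects, given defects found in earlier phases.
predictLater :: (Double, Double) -> Double -> Double
predictLater (a, b) earlierCount = a + b * earlierCount
```

Fitting on historical (earlier, later) defect counts per module and applying predictLater to a new module's early-phase count gives the kind of remaining-defect estimate the abstract describes.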
In order to model and verify systems of concurrent processes (such as those involved in communication protocols), finite-state machines and Petri nets can be used as local and global models, respectively. The problem of composing a set of communicating finite-state machines into a single global Petri net is considered in this letter, with special attention to the case of more than two processes.
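One common construction of this kind (a hedged sketch; the letter's exact construction is not reproduced here, and FSM, Act, Place, and compose are hypothetical names) turns each local state and each message channel into a place, with a send transition depositing a token in the channel place and a receive transition consuming one:

```haskell
-- A communicating finite-state machine: states are Ints, and each
-- transition either sends to or receives from a named channel.
data Act = Send String | Recv String

data FSM = FSM { initial :: Int, trans :: [(Int, Act, Int)] }

-- Petri net places: a local state of machine i, or a message channel.
data Place = StateOf Int Int | Chan String
  deriving (Eq, Show)

-- A net transition as (places consumed, places produced).
type Transition = ([Place], [Place])

-- Compose the machines into one global net: sending produces a token
-- on the channel place; receiving consumes one.
compose :: [FSM] -> [Transition]
compose ms = concat
  [ [ net i a s s' | (s, a, s') <- trans m ]
  | (i, m) <- zip [0 ..] ms ]
  where
    net i (Send c) s s' = ([StateOf i s], [StateOf i s', Chan c])
    net i (Recv c) s s' = ([StateOf i s, Chan c], [StateOf i s'])

-- Initial marking: one token on each machine's initial-state place.
initialMarking :: [FSM] -> [Place]
initialMarking ms = [StateOf i (initial m) | (i, m) <- zip [0 ..] ms]
```

Because the channel places are shared by every sender and receiver, the construction extends directly beyond two processes, which is the case the letter emphasizes.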
This special issue, 'Best of Empirical Studies of Programmers 7', drawn from the Seventh Workshop on Empirical Studies of Programmers held in 1997, was initiated to bring a group of high-quality papers representing a range of issues in the cognition of programming to a wide audience. The issue contains four papers. They treat a wide range of themes in the cognition of programming and thus give some sense of the scope of the field.
In shared-memory parallel programs that use explicit synchronization, race conditions result when accesses to shared memory are not properly synchronized. Race conditions are often considered to be manifestations of bugs, since their presence can cause the program to behave unexpectedly. Unfortunately, there has been little agreement in the literature as to precisely what constitutes a race condition. Two different notions have been implicitly considered: one pertaining to programs intended to be deterministic (which we call general races) and the other to nondeterministic programs containing critical sections (which we call data races). However, the differences between general races and data races have not yet been recognized. This paper examines these differences by characterizing races using a formal model and exploring their properties. We show that two variations of each type of race exist: feasible general races and data races capture the intuitive notions desired for debugging, while apparent races capture less accurate notions implicitly assumed by most dynamic race detection methods. We also show that locating feasible races is an NP-hard problem, implying that only the apparent races, which are approximations to feasible races, can be detected in practice. The complexity of dynamically locating apparent races depends on the type of synchronization used by the program. Apparent races can be exhaustively located efficiently only for weak types of synchronization that are incapable of implementing mutual exclusion. This result has important implications since we argue that debugging general races requires exhaustive race detection and is inherently harder than debugging data races (which requires only partial race detection). Programs containing data races can therefore be efficiently debugged by locating certain easily identifiable races. In contrast, programs containing general races require more complex debugging techniques.
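For concreteness, a race in this sense is easy to produce. A minimal Haskell sketch (bump and the iteration counts are hypothetical; compile with -threaded so the two threads can actually interleave): two threads perform an unsynchronized read-modify-write on a shared counter, so increments can be lost.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (replicateM_)
import Data.IORef (newIORef, readIORef, writeIORef)

-- Two threads increment a shared counter without synchronization.
-- The read and the write below can interleave with the other thread's,
-- so updates may be lost and the final total is nondeterministic.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  d1 <- newEmptyMVar
  d2 <- newEmptyMVar
  let bump = do
        n <- readIORef counter        -- read ...
        writeIORef counter (n + 1)    -- ... then write: not atomic
  _ <- forkIO (replicateM_ 100000 bump >> putMVar d1 ())
  _ <- forkIO (replicateM_ 100000 bump >> putMVar d2 ())
  takeMVar d1
  takeMVar d2
  readIORef counter >>= print        -- often prints less than 200000
```

Protecting bump with an MVar (or using atomicModifyIORef') removes the race; dynamic detectors of the kind the paper analyzes aim to flag the unprotected version as an apparent race.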
Basic principles of address programming, namely addressness and program control, are considered and analyzed. Their further development within the framework of composition and composition-nominative programming is demonstrated.