ISBN: (print) 9781581134506
Conservative garbage collectors can automatically reclaim unused memory in the absence of precise pointer location information. If a location can possibly contain a pointer, it is treated by the collector as though it contained a pointer. Although it is commonly assumed that this can lead to unbounded space use due to misidentified pointers, such extreme space use is rarely observed in practice, and then generally only if the number of misidentified pointers is itself unbounded. We show that if the program manipulates only data structures satisfying a simple GC-robustness criterion, then a bounded number of misidentified pointers can at most result in increasing space usage by a constant factor. We argue that nearly all common data structures are already GC-robust, and it is typically easy to identify and replace those that are not. Thus it becomes feasible to prove space bounds on programs collected by mildly conservative garbage collectors, such as the one in [2]. The worst-case space overhead introduced by such mild conservatism is comparable to the worst-case fragmentation overhead inherent in any non-moving storage allocator. The same GC-robustness criterion also ensures the absence of temporary space leaks of the kind discussed in [13] for generational garbage collectors.
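The conservative treatment the abstract describes can be illustrated with a toy mark phase (a sketch for illustration only, not the collector of [2]; the function name, heap representation, and example addresses are all invented here). Any root word whose value happens to equal an object address is treated as a pointer, so a misidentified integer can retain an otherwise-dead object:

```python
def conservative_mark(heap, root_words):
    """heap maps object addresses to the list of words stored in each object.
    Any word equal to a live object's address is conservatively treated as
    a pointer, whether or not it really is one."""
    marked = set()
    work = [w for w in root_words if w in heap]
    while work:
        addr = work.pop()
        if addr in marked:
            continue
        marked.add(addr)
        # Words inside objects are scanned the same way; false pointers
        # here are what can cause the retention the abstract discusses.
        work.extend(w for w in heap[addr] if w in heap)
    return marked

heap = {0x100: [0x200], 0x200: [7], 0x300: [9]}
# The root word 0x300 is really an integer, but it matches an address,
# so the dead object at 0x300 is retained.
live = conservative_mark(heap, [0x100, 0x300])
```

With precise roots, only 0x100 and 0x200 would survive; the misidentified word adds a bounded amount of retained space, which is the situation the GC-robustness criterion bounds.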
ISBN: (print) 9781581134872
We present a minimal extension of the Hindley/Milner system to allow for overloading of identifiers. Our approach relies on a combination of the HM(X) type system framework with Constraint Handling Rules (CHRs). CHRs are a declarative language for writing incremental constraint solvers, and they allow us to precisely describe the relationships among overloaded identifiers. Under certain sufficient conditions on the CHRs, type inference is decidable and the semantic meaning of programs is unambiguous. Our approach allows us to combine open-world and closed-world overloading. We also show how to deal with overlapping definitions.
ISBN: (print) 0769517277
Programs use rules to dictate or constrain specific decisions or actions. These rules have typically been tested, revised, and updated continuously; therefore, they represent a substantial and valuable business or intellectual asset. These valuable rules too often are not reused because the legacy program code is the only valid source for these rules, and extraction of the rules from the legacy code is thought to be too difficult. This problem is further exacerbated when a re-engineering project potentially involves rule recovery from multiple programs in multiple languages. This paper reviews the uses of mathematically based or mathematically formal approaches to business rule recovery and extraction. A simple framework for two different rule extraction approaches for an arbitrary program language is presented. These approaches are based on the mathematical assertions that programs are composed from language structures, and that extractable business rules can be functionally defined in terms of specific language structures and elements. The definition of an extractable rule function that specifies extractable rules in terms of language elements and structures is introduced. A simple C language example of rule extraction using each approach is presented, and the requirements, advantages, and limitations of each approach are examined. Directions for additional research are presented.
ISBN: (print) 158113469X
When using Aspect-Oriented Programming in the development of software components, a developer must understand the program units actually changed by weaving, how they behave, and possibly correct the aspects used. Support for rapid AOP prototyping and debugging is therefore crucial in such situations. Rapid prototyping is difficult with current aspect weaving tools because they do not support dynamic changes. This paper describes PROSE (PROgrammable extenSions of sErvices), a platform based on Java which addresses dynamic AOP. Aspects are expressed in the same source language as the application (Java), and PROSE allows aspects to be woven, unwoven, or replaced at run-time.
Considering the most common cases in the test suite, a dependence test method for loop parallelization is designed based on the Banerjee-GCD and Banerjee-Bound methods. It is found that not all dependence directions contribute equally to loop parallelization: a small portion of the whole set of dependence directions covers most loop inter-iteration dependences, so only these directions are tested, reducing the time complexity of the algorithm. The information obtained in the GCD and Bound tests can be used to extend the original algorithm. The main idea is that the GCD test finds a set of possible dependence vectors, and the Bound test then checks each component of these vectors and eliminates dependence directions that do not in fact exist. Furthermore, information can be exchanged between the GCD and Bound tests to gain efficiency over the original methods. The original GCD and Bound test methods can also be extended to support non-linear equations. By integrating these two dependence test methods into a single algorithm and making maximum use of both, a fast and effective dependence test method for program parallelization can be provided.
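The Banerjee-GCD component mentioned above can be illustrated for the simplest case, a single loop with subscripts A[a*i + b] (written) and A[c*i + d] (read). This is a textbook sketch under those assumptions, not the paper's combined algorithm; the function name is invented here:

```python
from math import gcd

def gcd_test(a, b, c, d):
    """Classical GCD dependence test.
    A dependence requires an integer solution of a*i1 + b = c*i2 + d,
    i.e. a*i1 - c*i2 = d - b, which exists only if gcd(a, c)
    divides d - b.  Returns True when a dependence is possible."""
    return (d - b) % gcd(a, c) == 0

# A[2*i] vs A[2*i + 1]: even vs odd indices, provably independent.
independent = not gcd_test(2, 0, 2, 1)
```

The paper's refinement is to feed the solution set found here into a Bound (direction-vector) test, pruning directions the GCD equation already rules out.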
We present a simple, practical algorithm for higher-order matching in the context of automatic program transformation. Our algorithm finds more matches than the standard second order matching algorithm of Huet and Lang, but it has an equally simple specification, and it is better suited to the transformation of programs in modern programming languages such as Haskell or ML. The algorithm has been implemented as part of the MAG system for transforming functional programs. (C) 2001 Elsevier Science B.V. All rights reserved.
The lambda-sigma-calculus is a concrete lambda-calculus of explicit substitutions, designed for reasoning about implementations of lambda-calculi. Higher-order abstract syntax is an approach to metaprogramming that explicitly captures the variable-binding aspects of programming language constructs. A new calculus of explicit substitutions for higher-order abstract syntax is introduced, allowing a high-level description of variable binding in object languages while also providing substitutions as explicit programmer-manipulable data objects. The new calculus is termed the lambda-sigma-beta0-calculus, since it makes essential use of an extension of beta0-unification (described in another paper). Termination and confluence are verified for the lambda-sigma-beta0-calculus similarly to the lambda-sigma-calculus, and an efficient implementation is given in terms of first-order renaming substitutions. The verification of confluence makes use of a verified adaptation of Nipkow's higher-order critical pairs lemma to the forms of rewrite rules required for the statement of the lambda-sigma-beta0-calculus. (C) 2001 Academic Press.
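The lambda-sigma-beta0-calculus itself is not reproduced here, but the substitution machinery that such calculi make explicit can be loosely illustrated with ordinary de Bruijn-indexed shifting and substitution (standard textbook definitions, not the paper's calculus; all names are invented for this sketch):

```python
from dataclasses import dataclass

@dataclass
class Var:
    idx: int          # de Bruijn index

@dataclass
class Lam:
    body: object

@dataclass
class App:
    fn: object
    arg: object

def shift(t, d, cutoff=0):
    """Shift free indices >= cutoff by d."""
    if isinstance(t, Var):
        return Var(t.idx + d) if t.idx >= cutoff else t
    if isinstance(t, Lam):
        return Lam(shift(t.body, d, cutoff + 1))
    return App(shift(t.fn, d, cutoff), shift(t.arg, d, cutoff))

def subst(t, j, s):
    """Replace index j in t by s, adjusting s under binders."""
    if isinstance(t, Var):
        return s if t.idx == j else t
    if isinstance(t, Lam):
        return Lam(subst(t.body, j + 1, shift(s, 1)))
    return App(subst(t.fn, j, s), subst(t.arg, j, s))

def beta(redex):
    """One beta step: (lambda. b) a  ->  b[0 := a]."""
    assert isinstance(redex, App) and isinstance(redex.fn, Lam)
    return shift(subst(redex.fn.body, 0, shift(redex.arg, 1)), -1)
```

An explicit-substitution calculus instead reifies the `[0 := a]` closure as a term former with its own rewrite rules, which is what makes substitutions programmer-manipulable data objects as the abstract describes.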
ISBN: (print) 076951281X
In this paper we give a topological proof of the following result: there exist 2^aleph_0 lambda theories of the untyped lambda calculus without a model in any semantics based on Scott's view of models as partially ordered sets and of functions as monotonic functions. As a consequence of this result, we positively solve the conjecture, stated by Bastonero-Gouy [6, 7] and by Berline [10], that the strongly stable semantics is incomplete.
ISBN: (print) 076951281X
We develop a uniform type theory that integrates intensionality, extensionality, and proof irrelevance as judgmental concepts. Any object may be treated intensionally (subject only to alpha-conversion), extensionally (subject also to beta-eta-conversion), or as irrelevant (equal to any other object at the same type), depending on where it occurs. Modal restrictions developed in prior work for simple types are generalized and employed to guarantee consistency between these views of objects. Potential applications are in logical frameworks, functional programming, and the foundations of first-order modal logics. Our type theory contrasts with previous approaches that a priori distinguish propositions (whose proofs are all identified; only their existence is important) from specifications (whose implementations are subject to some definitional equalities).
ISBN: (print) 0780366573
This paper describes the initial results of a new form of evolutionary system specifically designed for time series modeling. The system combines a grammatically-based Genetic Programming system with various optimisation techniques: the evolutionary system constructs the structure of equations, and the optimisation techniques essentially fill in the details. Three forms of optimisation are described: optimisation of the constants in an equation; optimisation of both the constants and variables in an equation; and the use of a hill-climbing mutation to further tune the evolved and optimised equations. Preliminary results indicate that this combination of techniques produces significant improvements in convergence on the training data, and produces equivalent generalisation on unseen data, for a given number of population member evaluations.
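The division of labour described above, with evolution fixing an equation's structure and local search tuning its constants, can be sketched with a simple hill climber (an illustrative sketch only; the paper's optimisers, and all names below, are assumptions of this example):

```python
import random

def tune_constants(structure, consts, data, steps=2000, step_size=0.1, seed=0):
    """Hill-climb the constant vector of a fixed, already-evolved structure.
    structure(x, consts) -> prediction; data is a list of (x, y) pairs."""
    rng = random.Random(seed)

    def sse(cs):  # sum of squared errors on the training data
        return sum((structure(x, cs) - y) ** 2 for x, y in data)

    best, best_err = list(consts), sse(consts)
    for _ in range(steps):
        cand = [c + rng.gauss(0, step_size) for c in best]
        err = sse(cand)
        if err < best_err:  # accept only improving moves
            best, best_err = cand, err
    return best, best_err

# Suppose evolution produced the structure c0*x + c1; fit its constants
# to data drawn from y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(4)]
consts, err = tune_constants(lambda x, c: c[0] * x + c[1], [0.0, 0.0], data)
```

In the evolved-plus-optimised setting, this tuning step would run inside the fitness evaluation of each candidate structure, which is what makes convergence comparisons per population-member evaluation meaningful.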