ISBN (print): 9781595935755
An invariance assertion for a program location l is a statement that always holds at l during execution of the program. Program invariance analyses infer invariance assertions that can be useful when trying to prove safety properties. We use the term variance assertion to mean a statement that holds between any state at l and any previous state that was also at l. This paper is concerned with the development of analyses for variance assertions and their application to proving termination and liveness properties. We describe a method of constructing program variance analyses from invariance analyses. If we change the underlying invariance analysis, we get a different variance analysis. We describe several applications of the method, including variance analyses using linear arithmetic and shape analysis. Using experimental results we demonstrate that these variance analyses give rise to a new breed of termination provers which are competitive with and sometimes better than today's state-of-the-art termination provers.
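As an illustration (the countdown program and its assertions below are a constructed example, not one taken from the paper), the comments in this small OCaml loop contrast an invariance assertion at a location l with a variance assertion relating the current state at l to earlier visits of l:

```ocaml
(* Illustrative sketch: location l is the top of the loop body. *)
let countdown n =
  let x = ref n in
  while !x > 0 do
    (* Invariance assertion at l: !x >= 1 (it holds in every state at l). *)
    (* Variance assertion at l: !x is strictly smaller than the value it had
       at any previous visit to l.  Together with the lower bound from the
       invariant, this relation is well-founded, so the loop terminates. *)
    x := !x - 1
  done
```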
ISBN (print): 9781595935755
This paper proposes a lightweight fusion method for general recursive function definitions. Compared with existing proposals, our method has several significant practical features: it works for general recursive functions on general algebraic data types; it does not produce extra runtime overhead (except for possible code size increase due to the success of fusion); and it is readily incorporated in standard inlining optimization. This is achieved by extending the ordinary inlining process with a new fusion law that transforms a term of the form f ∘ (fix g. λx. E) to a new fixed point term fix h. λx. E′ by promoting the function f through the fixed point operator. This is a sound syntactic transformation rule that is not sensitive to the types of f and g. This property makes our method applicable to a wide range of functions, including those with multiple parameters in both curried and uncurried forms. Although this method does not guarantee any form of completeness, it fuses typical examples discussed in the literature and others that involve accumulating parameters, either in foldl-like specific forms or in general recursive forms, without any additional machinery. To substantiate our claim, we have implemented our method in a compiler. Although it is preliminary, it demonstrates the practical feasibility of this method.
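A minimal sketch of the kind of deforestation this fusion law enables (the names sum, double_all, and sum_doubled are illustrative, not taken from the paper): promoting the consumer through the producer's fixed point removes the intermediate list.

```ocaml
(* Producer: a general recursive function, i.e. a fixed point. *)
let rec double_all = function
  | [] -> []
  | x :: xs -> (2 * x) :: double_all xs

(* Consumer composed with the producer: builds an intermediate list. *)
let rec sum = function
  | [] -> 0
  | x :: xs -> x + sum xs

let sum_doubled_naive xs = sum (double_all xs)

(* Promoting sum through the fixed point of double_all yields a single
   recursive definition with no intermediate list. *)
let rec sum_doubled = function
  | [] -> 0
  | x :: xs -> 2 * x + sum_doubled xs
```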
ISBN (print): 9781595935755
Concurrent ML (CML) is a statically-typed higher-order concurrent language that is embedded in Standard ML. Its most notable feature is its support for first-class synchronous operations. This mechanism allows programmers to encapsulate complicated communication and synchronization protocols as first-class abstractions, which encourages a modular style of programming where the underlying channels used to communicate with a given thread are hidden behind data and type abstraction. While CML has been in active use for well over a decade, little attention has been paid to optimizing CML programs. In this paper, we present a new program analysis for statically-typed higher-order concurrent languages that enables the compile-time specialization of communication operations. This specialization is particularly important in a multiprocessor or multicore setting, where the synchronization overhead for general-purpose operations is high. Preliminary results from a prototype that we have built demonstrate that specialized channel operations are much faster than the general-purpose operations. Our analysis technique is modular (i.e., it analyzes and optimizes a single unit of abstraction at a time), which plays to the modular style of many CML programs. The analysis consists of three steps: the first is a type-sensitive control-flow analysis that uses the program's type abstractions to compute more precise results. The second is the construction of an extended control-flow graph using the results of the CFA. The last step is an iterative analysis over the graph that approximates the usage patterns of known channels. Our analysis is designed to detect special patterns of use, such as one-shot channels, fan-in channels, and fan-out channels. We have proven the safety of our analysis and state those results.
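The usage patterns the analysis targets can be sketched with OCaml's CML-inspired Event and Thread libraries (this sketch is ours, not the paper's CML code, and must be compiled against the threads library): the server's request channel is a fan-in channel, while each per-request reply channel is a one-shot channel that a specializing compiler could implement more cheaply than a general-purpose channel.

```ocaml
(* Hypothetical illustration of channel-usage patterns. *)
let serve req_ch =
  while true do
    let x, reply_ch = Event.sync (Event.receive req_ch) in
    (* reply_ch is used for exactly one send: a one-shot channel. *)
    Event.sync (Event.send reply_ch (x * x))
  done

let rpc req_ch x =
  let reply_ch = Event.new_channel () in   (* fresh reply channel per call *)
  Event.sync (Event.send req_ch (x, reply_ch));
  Event.sync (Event.receive reply_ch)

let () =
  let req_ch = Event.new_channel () in     (* fan-in: many clients, one server *)
  ignore (Thread.create serve req_ch);
  Printf.printf "%d %d\n" (rpc req_ch 3) (rpc req_ch 4)
```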
ISBN (print): 9781595935755
A memory leak in a garbage-collected program occurs when the program inadvertently maintains references to objects that it no longer needs. Memory leaks cause systematic heap growth, degrading performance and resulting in program crashes after perhaps days or weeks of execution. Prior approaches for detecting memory leaks rely on heap differencing or detailed object statistics, which store state proportional to the number of objects in the heap. These overheads preclude their use on the same processor for deployed long-running applications. This paper introduces a dynamic heap-summarization technique based on type that accurately identifies leaks, is space efficient (adding less than 1% to the heap), and is time efficient (adding 2.3% on average to total execution time). We implement this approach in Cork, which utilizes dynamic type information and garbage collection to summarize the live objects in a type points-from graph (TPFG) whose nodes (types) and edges (references between types) are annotated with volume. Cork compares TPFGs across multiple collections, identifies growing data structures, and computes a type slice for the user. Cork is accurate: it identifies systematic heap growth with no false positives in 4 of 15 benchmarks we tested. Cork's slice report enabled us (non-experts) to quickly eliminate growing data structures in SPECjbb2000 and Eclipse, something their developers had not previously done. Cork is accurate, scalable, and efficient enough to consider using online.
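The data structure at the heart of the technique can be sketched as follows (a simplified model under our own assumptions; the module and function names are illustrative, not Cork's implementation): a type points-from graph keeps per-type and per-edge volumes, and comparing two snapshots taken at successive collections exposes the growing types.

```ocaml
module TPFG = struct
  type t = {
    nodes : (string, int) Hashtbl.t;            (* type name -> total volume *)
    edges : (string * string, int) Hashtbl.t;   (* (referrer, referent) -> volume *)
  }

  let create () = { nodes = Hashtbl.create 64; edges = Hashtbl.create 64 }

  (* Record one live object of type [ty] and size [size], referenced from
     objects of the types listed in [referenced_from]. *)
  let add_object g ~ty ~size ~referenced_from =
    let bump tbl key v =
      let old = try Hashtbl.find tbl key with Not_found -> 0 in
      Hashtbl.replace tbl key (old + v)
    in
    bump g.nodes ty size;
    List.iter (fun src -> bump g.edges (src, ty) size) referenced_from

  (* Types whose volume grew between two consecutive collections. *)
  let growing ~prev ~curr =
    Hashtbl.fold
      (fun ty vol acc ->
         let old = try Hashtbl.find prev.nodes ty with Not_found -> 0 in
         if vol > old then (ty, vol - old) :: acc else acc)
      curr.nodes []
end
```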
The proceedings contain 34 papers. The topics discussed include: formal certification of a compiler back-end or: programming a compiler with a proof assistant; decidability and proof systems for language-based noninterference relations; on flow-sensitive security types; a logic for information flow in object-oriented programs; polymorphic regular tree types and patterns; verifying properties of well-founded linked lists; small bisimulations for reasoning about higher-order imperative programs; a fixpoint calculus for local and global program flows; the next mainstream programming language: a game developer's perspective; n-synchronous Kahn networks: a relaxed model of synchrony for real-time systems; compiler-directed channel allocation for saving power in on-chip networks; and a polymorphic modal type system for lisp-like multi-staged languages.
ISBN (print): 1595930272
The proceedings contain 35 papers. The topics discussed include: the next 700 data description languages; a hierarchical model of data locality; formal certification of a compiler back-end or: programming a compiler with a proof assistant; decidability and proof systems for language-based noninterference relations; verifying properties of well-founded linked lists; small bisimulations for reasoning about higher-order imperative programs; the next mainstream programming language; adventures in time and space; compiler-directed channel allocation for saving power in on-chip networks; modular set-based analysis from contracts; stratified type inference for generalized algebraic data types; why dependent types matter; a virtual class calculus; frame rules from answer types for code pointers; protecting representation with effect encapsulation; and a verifiable SSA program representation for aggressive compiler optimization.
ISBN (print): 9781595930279
We describe a new program-analysis framework, based on CPS and procedure-string abstractions, that can handle critical analyses which the k-CFA framework cannot. We present the main theorems concerning correctness, show an application analysis, and describe a running implementation.
This article presents a polymorphic modal type system and its principal type inference algorithm that conservatively extend ML by all of Lisp's staging constructs (the quasi-quotation system). The combination is m...
ISBN (print): 9781595930279
We present optimization techniques for high-level equational programs that are generalizations of affine control loops (ACLs). Significant parts of the SpecFP and PerfectClub benchmarks are ACLs. They often contain reductions: associative and commutative operators applied to a collection of values. They also often exhibit reuse: intermediate values computed or used at different index points being identical. We develop various techniques to automatically exploit reuse to simplify the computational complexity of evaluating reductions. Finally, we present an algorithm for the optimal application of such simplifications, resulting in an equivalent specification with minimum complexity.
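An illustrative instance of the kind of simplification described (our own example, not one from the paper): the family of reductions Y(i), the sum of X(0) through X(i), costs O(n^2) if each Y(i) is evaluated from scratch, but exploiting the reuse between adjacent prefixes gives the equivalent O(n) recurrence Y(i) = Y(i-1) + X(i).

```ocaml
(* Naive evaluation: each prefix sum is recomputed from scratch, O(n^2). *)
let prefix_sums_naive x =
  Array.init (Array.length x) (fun i ->
    let s = ref 0 in
    for j = 0 to i do s := !s + x.(j) done;
    !s)

(* Exploiting reuse: each prefix sum extends the previous one, O(n). *)
let prefix_sums x =
  let n = Array.length x in
  let y = Array.make n 0 in
  for i = 0 to n - 1 do
    y.(i) <- (if i = 0 then 0 else y.(i - 1)) + x.(i)
  done;
  y
```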