ISBN (Print): 9781450383912
Implementations of many data structures use several correlated fields to improve their performance; however, inconsistencies between these fields can be a source of serious program errors. To address this problem, we propose a new technique for automatically refining data structures from integrity constraints. In particular, consider a data structure D with fields F and methods M, as well as a new set of auxiliary fields F' that should be added to D. Given this input and an integrity constraint Phi relating F and F', our method automatically generates a refinement of D that satisfies the provided integrity constraint. Our method is based on a modular instantiation of the CEGIS paradigm and uses a novel inductive synthesizer that augments top-down search with three key ideas. First, it computes necessary preconditions of partial programs to dramatically prune its search space. Second, it augments the grammar with promising new productions by leveraging the computed preconditions. Third, it guides top-down search using a probabilistic context-free grammar obtained by statically analyzing the integrity checking function and the original code base. We evaluated our method on 25 data structures from popular Java projects and show that it can successfully refine 23 of them. We also compare our method against two state-of-the-art synthesis tools and perform an ablation study to justify our design choices. Our evaluation shows that (1) our method is successful at refining many data structure implementations in the wild, (2) it advances the state of the art in synthesis, and (3) our proposed ideas are crucial for making this technique practical.
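As a rough illustration of the refinement problem described above (not of the paper's CEGIS-based synthesizer), the following Python sketch shows a hypothetical Bag class whose auxiliary count field must be kept consistent with the original items field under the integrity constraint count == len(items); the class, field, and method names are invented for this example.

# Hypothetical illustration of data structure refinement: the original class
# stores only `items`; the auxiliary field `count` is added for O(1) size
# queries, and each mutator is refined to preserve the integrity constraint
#   Phi: self.count == len(self.items)

class Bag:
    def __init__(self):
        self.items = []      # original field F
        self.count = 0       # auxiliary field F'

    def check_integrity(self):
        # The integrity constraint Phi relating F and F'.
        return self.count == len(self.items)

    def add(self, x):
        self.items.append(x)
        self.count += 1      # refined mutator keeps Phi invariant

    def remove(self, x):
        if x in self.items:
            self.items.remove(x)
            self.count -= 1  # refined mutator keeps Phi invariant


bag = Bag()
bag.add("a")
bag.add("b")
bag.remove("a")
assert bag.check_integrity()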
ISBN (Print): 9781450384681
The proceedings contain 8 papers. The topics discussed include: static analysis for dummies: experiencing LiSA; security and quality: two sides of the same coin?; program analysis for reversible languages; PerfLens: a data-driven performance bug detection and fix platform; Weldr: fusing binaries for simplified analysis; multi-language static code analysis on the LARA framework; serialization-aware call graph construction; and scalable string analysis: an experience report.
ISBN (Print): 9781450356985
Programming concurrent, distributed systems is hard, especially when these systems mutate shared, persistent state replicated at geographic scale. To enable high availability and scalability, a new class of weakly consistent data stores has become popular. However, some data needs strong consistency. To manipulate both weakly and strongly consistent data in a single transaction, we introduce a new abstraction: mixed-consistency transactions, embodied in a new embedded language, MixT. Programmers explicitly associate consistency models with remote storage sites; each atomic, isolated transaction can access a mixture of data with different consistency models. Compile-time information-flow checking, applied to consistency models, ensures that these models are mixed safely and enables the compiler to automatically partition transactions. New run-time mechanisms ensure that consistency models can also be mixed safely, even when the data used by a transaction resides on separate, mutually unaware stores. Performance measurements show that despite their stronger guarantees, mixed-consistency transactions retain much of the speed of weak consistency, significantly outperforming traditional serializable transactions.
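MixT's own surface syntax is not reproduced in this abstract; the Python sketch below only illustrates, under invented names, the information-flow rule the abstract describes: values read from a weakly consistent store must not flow into writes to a strongly consistent one.

# Hypothetical sketch of the information-flow rule behind mixed-consistency
# transactions: data read at a weak consistency level must not flow into
# writes at a strong consistency level. Names and API are illustrative only.

STRONG, WEAK = "strong", "weak"

class Store:
    def __init__(self, level):
        self.level = level
        self.data = {}

class Tainted:
    """A value together with the weakest consistency level it depends on."""
    def __init__(self, value, level):
        self.value = value
        self.level = level

def read(store, key):
    return Tainted(store.data.get(key), store.level)

def write(store, key, tainted):
    # Reject flows from weak reads into strong writes.
    if store.level == STRONG and tainted.level == WEAK:
        raise TypeError("weak data may not flow into a strong store")
    store.data[key] = tainted.value

inventory = Store(STRONG)            # e.g., account balances
analytics = Store(WEAK)              # e.g., view counters
analytics.data["page_hits"] = 17     # pre-populate the weak store

hits = read(analytics, "page_hits")
write(analytics, "page_hits_copy", hits)          # OK: weak -> weak
# write(inventory, "stock", hits)                 # would raise: weak -> strong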
ISBN (Print): 9781450307086
The proceedings contain 7 papers. The topics discussed include: AARTS: low overhead online adaptive auto-tuning; contentiousness vs. sensitivity: improving contention aware runtime systems on multicore architectures; probabilistic auto-tuning for architectures with complex constraints; efficiently exploring compiler optimization sequences with pair-wise pruning; adapt or become extinct!; loaf: a framework and infrastructure for creating online adaptive solutions; and using a codelet program execution model for exascale machines.
Self-adapting systems are becoming widespread in emerging fields such as autonomic, mobile and ubiquitous computing. Context-oriented programming (COP) is a promising language-level solution for the implementation of ...
ISBN (Print): 9781450306638
The proceedings contain 55 papers. The topics discussed include: commutative set: a language extension for implicit parallel programming; the Tao of parallelism in algorithms; data representation synthesis; synthesizing geometry constructions; synthesis of loop-free programs; generalized just-in-time trace compilation using a parallel task farm in a dynamic binary translator; brainy: effective selection of data structures; an SSA-based algorithm for optimal speculative code motion under an execution profile; caisson: a hardware description language for secure information flow; steno: automatic optimization of declarative queries; languages as libraries; automatic CPU-GPU communication management and optimization; automatic compilation of MATLAB programs for synergistic execution on heterogeneous processors; partial-coherence abstractions for relaxed memory models; and probabilistic, modular and scalable inference of typestate specifications.
ISBN (Print): 9781450306638
Several recent projects have shown the feasibility of verifying low-level systems software. Verifications based on automated theorem-proving have omitted reasoning about first-class code pointers, which is critical for tasks like certifying implementations of threads and processes. Conversely, verifications that deal with first-class code pointers have featured long, complex, manual proofs. In this paper, we introduce the Bedrock framework, which supports mostly-automated proofs about programs with the full range of features needed to implement, e.g., language runtime systems. The heart of our approach is in mostly-automated discharge of verification conditions inspired by separation logic. Our take on separation logic is computational, in the sense that function specifications are usually written in terms of reference implementations in a purely functional language. Logical quantifiers are the most challenging feature for most automated verifiers; by relying on functional programs (written in the expressive language of the Coq proof assistant), we are able to avoid quantifiers almost entirely. This leads to some dramatic improvements compared to both past work in classical verification, against which we compare using implementations of data structures like binary search trees and hash tables, and past work in verified programming with code pointers, against which we compare using examples like function memoization and a cooperative threading library.
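Bedrock is a Coq framework, so the following Python sketch merely conveys the idea of "specifications as purely functional reference implementations": a hypothetical imperative binary search tree is checked, operation by operation, against a functional model; none of the names correspond to Bedrock's actual API.

# Hypothetical sketch: specify an imperative binary search tree insert by
# comparing its observable contents against a purely functional reference
# implementation (a sorted tuple), in the spirit of "specifications as
# reference implementations".

def ref_insert(model, x):
    """Purely functional reference: a sorted tuple of distinct keys."""
    return tuple(sorted(set(model) | {x}))

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

class BST:
    def __init__(self):
        self.root = None

    def insert(self, x):
        def go(node):
            if node is None:
                return Node(x)
            if x < node.key:
                node.left = go(node.left)
            elif x > node.key:
                node.right = go(node.right)
            return node
        self.root = go(self.root)

    def contents(self):
        def walk(node):
            return () if node is None else walk(node.left) + (node.key,) + walk(node.right)
        return walk(self.root)

# Check the imperative implementation against the functional specification.
tree, model = BST(), ()
for k in [5, 3, 8, 3]:
    tree.insert(k)
    model = ref_insert(model, k)
    assert tree.contents() == model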
ISBN (Print): 9781450306638
Sequential programming models express a total program order, of which a partial order must be respected. This inhibits parallelizing tools from extracting scalable performance. Programmer-written semantic commutativity assertions provide a natural way of relaxing this partial order, thereby exposing parallelism implicitly in a program. Existing implicit parallel programming models based on semantic commutativity either require additional programming extensions, or have limited expressiveness. This paper presents a generalized semantic commutativity based programming extension, called Commutative Set (COMMSET), and associated compiler technology that enables multiple forms of parallelism. COMMSET expressions are syntactically succinct and enable the programmer to specify commutativity relations between groups of arbitrary structured code blocks. Using only this construct, serializing constraints that inhibit parallelization can be relaxed, independent of any particular parallelization strategy or concurrency control mechanism. COMMSET enables well-performing parallelizations in cases where they were inapplicable or non-performing before. By extending eight sequential programs with only 8 annotations per program on average, COMMSET and the associated compiler technology produced a geomean speedup of 5.7x on eight cores compared to 1.5x for the best non-COMMSET parallelization.
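COMMSET is a C/C++ language extension backed by a parallelizing compiler; the Python sketch below only mimics the underlying idea with invented names: the programmer groups operations into a commutative set, asserting that any execution order is acceptable, which is what licenses reordering or parallel execution.

# Hypothetical sketch of semantic commutativity assertions: the programmer
# registers operations in a "commutative set", asserting that any execution
# order yields an acceptable result, so a scheduler may reorder them freely.
# This mimics the spirit of COMMSET; it is not its actual syntax.

import random

class CommutativeSet:
    def __init__(self):
        self.ops = []

    def member(self, fn):
        """Register an operation as commuting with the other members."""
        self.ops.append(fn)
        return fn

    def run(self):
        # Because the members are declared commutative, executing them in
        # any order (or in parallel) is semantically acceptable.
        order = self.ops[:]
        random.shuffle(order)
        for op in order:
            op()

histogram = {}
updates = CommutativeSet()

for word in ["map", "reduce", "map"]:
    @updates.member
    def bump(word=word):
        histogram[word] = histogram.get(word, 0) + 1

updates.run()
assert histogram == {"map": 2, "reduce": 1}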
ISBN (Print): 9781450306638
Information flow is an important security property that must be incorporated from the ground up, including at hardware design time, to provide a formal basis for a system's root of trust. We incorporate insights and techniques from designing information-flow secure programming languages to provide a new perspective on designing secure hardware. We describe a new hardware description language, Caisson, that combines domain-specific abstractions common to hardware design with insights from type-based techniques used in secure programming languages. The proper combination of these elements allows for an expressive, provably-secure HDL that operates at a familiar level of abstraction to the target audience of the language, hardware architects. We have implemented a compiler for Caisson that translates designs into Verilog and then synthesizes the designs using existing tools. As an example of Caisson's usefulness we have addressed an open problem in secure hardware by creating the first-ever provably information-flow secure processor with micro-architectural features including pipelining and cache. We synthesize the secure processor and empirically compare it in terms of chip area, power consumption, and clock frequency with both a standard (insecure) commercial processor and also a processor augmented at the gate level to dynamically track information flow. Our processor is competitive with the insecure processor and significantly better than dynamic tracking.
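Caisson itself is a hardware description language compiled to Verilog; the following Python sketch only illustrates the label-based information-flow rule such a type system enforces, using invented signal names and a two-level LOW/HIGH lattice.

# Hypothetical sketch of static, label-based information-flow checking in the
# spirit of a secure HDL: every wire/register carries a security label, and
# assignments from a higher label into a lower one are rejected.

LOW, HIGH = 0, 1  # two-point lattice: LOW <= HIGH

class Signal:
    def __init__(self, name, label):
        self.name, self.label = name, label

def connect(dst, *srcs):
    """Allow dst := f(srcs) only if every source label flows to dst's label."""
    for src in srcs:
        if src.label > dst.label:
            raise TypeError(f"illegal flow: {src.name} (HIGH) -> {dst.name} (LOW)")
    print(f"{dst.name} <- {', '.join(s.name for s in srcs)}")

secret_key = Signal("secret_key", HIGH)
cache_line = Signal("cache_line", HIGH)
public_bus = Signal("public_bus", LOW)
debug_led  = Signal("debug_led", LOW)

connect(cache_line, secret_key)      # OK: HIGH -> HIGH
connect(debug_led, public_bus)       # OK: LOW  -> LOW
try:
    connect(public_bus, secret_key)  # rejected: HIGH -> LOW
except TypeError as e:
    print("rejected:", e)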
ISBN (Print): 9781450306638
Every day, millions of computer end-users need to perform tasks over large, tabular data, yet lack the programming knowledge to do such tasks automatically. In this work, we present an automatic technique that takes from a user an example of how the user needs to transform a table of data, and provides to the user a program that implements the transformation described by the example. In particular, we present a language of programs TableProg that can describe transformations that real users require. We then present an algorithm ProgFromEx that takes an example input and output table, and infers a program in TableProg that implements the transformation described by the example. When the program is applied to the example input, it reproduces the example output. When the program is applied to another, potentially larger, table with a "similar" layout as the example input table, then the program produces a corresponding table with a layout that is similar to the example output table. A user can apply ProgFromEx interactively, providing multiple small examples to obtain a program that implements the transformation that the user desires. Moreover, ProgFromEx can help identify "noisy" examples that contain errors. To evaluate the practicality of TableProg and ProgFromEx, we implemented ProgFromEx as a module for the Microsoft Excel spreadsheet program. We applied the module to automatically implement over 50 table transformations specified by end-users through examples on online Excel help forums. In seconds, ProgFromEx found programs that satisfied the examples and could be applied to larger input tables. This experience demonstrates that TableProg and ProgFromEx can significantly automate the tasks over tabular data that users need to perform.
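TableProg and ProgFromEx cover a far richer space of transformations than can be shown here; the Python sketch below illustrates the general programming-by-example workflow on a toy scale, with invented helper names: infer a column-reordering "program" from one example input/output table, then apply it to a larger table with a similar layout.

# Hypothetical toy version of programming-by-example for tables: infer a
# column-reordering program from one example input/output pair, then apply it
# to a larger table with a similar layout.

def infer_program(example_in, example_out):
    """Return, for each output column, the index of the matching input column."""
    header_in = example_in[0]
    return [header_in.index(col) for col in example_out[0]]

def apply_program(program, table):
    return [[row[i] for i in program] for row in table]

example_in = [["name", "city", "age"],
              ["Ann",  "Oslo", "34"]]
example_out = [["age", "name"],
               ["34",  "Ann"]]

prog = infer_program(example_in, example_out)

larger = [["name", "city",   "age"],
          ["Ann",  "Oslo",   "34"],
          ["Bob",  "Lisbon", "29"],
          ["Eve",  "Kyoto",  "41"]]

assert apply_program(prog, example_in) == example_out
print(apply_program(prog, larger))
# [['age', 'name'], ['34', 'Ann'], ['29', 'Bob'], ['41', 'Eve']]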