ISBN (print): 9783662494974
The proceedings contain 30 papers. The special focus in this conference is on Programming Languages and Systems. The topics include: a software engineering perspective; regular programming for quantitative properties of data streams; formalizing single-assignment program verification; practical optional types for Clojure; a timed process algebra for wireless networks with an application in routing; computing with semirings and weak rig groupoids; modular termination verification for non-blocking concurrency; call-by-push-value from a linear logic point of view; visible type application; automatically splitting a two-stage lambda calculus; probabilistic NetKAT; coordinated concurrent programming in Syndicate; an application of computable distributions to the semantics of probabilistic programming languages; weakest precondition reasoning for expected run-times of probabilistic programs; a lazy approach to adaptive accuracy refinement for numerical computations; binder boilerplate tied up; on the relative expressiveness of higher-order session processes; a classical realizability model for a semantical value restriction; probabilistic functions and cryptographic oracles in higher-order logic; extensible and efficient automation through reflective tactics; an algorithm inspired by constraint solvers to infer inductive invariants in numeric programs; functional big-step semantics; classical by-need; refactoring by reverse macro expansion; type error diagnosis for embedded DSLs by two-stage specialized type rules; modular verification of message-passing programs; a higher-order abstract syntax approach to verified transformations on functional programs; and the expressive power of monotonic parallel composition.
ISBN (print): 9783662494981; 9783662494974
Call-by-need calculi are complex to design and reason with. When adding control effects, the very notion of canonicity is irremediably lost, the resulting calculi being necessarily ad hoc. This calls for a design of call-by-need guided by logical rather than operational considerations. Ariola et al. proposed such an extension of call-by-need with control, making use of Curien and Herbelin's duality-of-computation framework. In this paper, classical by-need is developed as an alternative extension of call-by-need with control, better suited to a programming-oriented reader. The approach is proof-theoretic, relying on linear head reduction (LHR), an evaluation strategy coming from linear logic, and on the lambda-mu-calculus, a classical extension of the lambda-calculus. More precisely, the paper makes three main contributions: first, LHR is reformulated by introducing closure contexts and extended to the lambda-mu-calculus; second, a call-by-need calculus is derived from LHR and compared with standard call-by-need calculi, namely those of Ariola-Felleisen and Chang-Felleisen; third, this derivation is lifted from the lambda-calculus to the lambda-mu-calculus, that is, to classical logic, yielding a classical by-need calculus: a lazy lambda-mu-calculus. The result is compared with the call-by-need calculus with control of Ariola et al.
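To fix intuitions about the evaluation strategy being reconstructed here, the following is a minimal sketch of call-by-need for the plain lambda-calculus, using mutable thunks for sharing. It is our own illustration under stated assumptions, not the calculus of the paper: the Term representation and memoisation strategy are invented, and the control (mu) side is omitted entirely.

```haskell
import Data.IORef
import qualified Data.Map as Map

data Term  = Var String | Lam String Term | App Term Term

data Value = Closure String Term Env
type Env   = Map.Map String (IORef Thunk)
data Thunk = Delayed Term Env | Forced Value

eval :: Env -> Term -> IO Value
eval env (Var x) = case Map.lookup x env of
  Nothing  -> error ("unbound variable: " ++ x)
  Just ref -> do
    t <- readIORef ref
    case t of
      Forced v       -> return v        -- already evaluated: reuse the result
      Delayed body e -> do
        v <- eval e body                -- evaluate at most once
        writeIORef ref (Forced v)       -- memoise: the "need" in by-need
        return v
eval env (Lam x b) = return (Closure x b env)
eval env (App f a) = do
  Closure x b cenv <- eval env f
  ref <- newIORef (Delayed a env)       -- the argument is shared, not copied
  eval (Map.insert x ref cenv) b
```

The Delayed/Forced split is exactly what separates call-by-need from call-by-name: each argument is evaluated at most once, and its value is reused at every later occurrence.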
ISBN (print): 9783319403694
The proceedings contain 13 papers. The special focus in this conference is on Coalgebraic Methods in Computer Science. The topics include: fixed points of functors; compositional coinduction with sized types; Lawvere categories as composed PROPs; transitivity and difunctionality of bisimulations; affine monads and side-effect-freeness; duality of equations and coequations via contravariant adjunctions; category-theoretic semantics for theorem proving in logic programming; product rules and distributive laws; on the logic of generalised metric spaces; a complete logic for behavioural equivalence in coalgebras of finitary set functors; coalgebraic completeness-via-canonicity; relational lattices via duality; and on local characterization of global timed bisimulation for abstract continuous-time systems.
Equational theories that contain axioms expressing associativity and commutativity (AC) of certain operators are ubiquitous. Theorem proving methods in such theories rely on well-founded orders that are compatible with the AC axioms. In this paper, we consider various definitions of AC-compatible Knuth-Bendix orders. The orders of Steinbach and of Korovin and Voronkov are revisited. The former is enhanced to a more powerful version, and we modify the latter to amend its lack of monotonicity on non-ground terms. We further present new complexity results. An extension reflecting the recent proposal of subterm coefficients in standard Knuth-Bendix orders is also given. The various orders are compared on problems in termination and completion.
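As a point of reference for the orders being compared, here is a small sketch of the standard Knuth-Bendix order restricted to ground terms; the symbols, weights, and precedence are illustrative inventions, and neither the admissibility conditions nor the AC-compatible refinements that are the paper's actual subject are modelled.

```haskell
import Data.Maybe (fromMaybe)

data Term = Fun String [Term] deriving (Eq, Show)

-- Weight of a term: the sum of the weights of all symbol occurrences.
weight :: Term -> Int
weight (Fun f ts) = w f + sum (map weight ts)
  where w g = fromMaybe 1 (lookup g [("f", 2), ("g", 1), ("a", 1)])

-- A total precedence on function symbols (illustrative).
prec :: String -> Int
prec g = fromMaybe 0 (lookup g [("f", 2), ("g", 1), ("a", 0)])

-- kbo s t is True iff s >_KBO t. Ground case only, so the usual
-- variable-count side condition is vacuous and omitted.
kbo :: Term -> Term -> Bool
kbo s@(Fun f ss) t@(Fun g ts)
  | weight s > weight t = True
  | weight s < weight t = False
  | prec f > prec g     = True          -- equal weight: compare head symbols
  | prec f < prec g     = False
  | otherwise           = lexGt ss ts   -- same head: lexicographic on arguments
  where
    lexGt (x:xs) (y:ys)
      | x == y    = lexGt xs ys
      | otherwise = kbo x y
    lexGt _ _     = False

-- Example: kbo (Fun "f" [Fun "a" []]) (Fun "a" [])  ==>  True  (weight 3 > 1)
```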
Analysis of massive codebases ("big code") presents an opportunity for drawing insights about programming practice and enabling code reuse. One of the main challenges in analyzing big code is finding a representation that captures sufficient semantic information, can be constructed efficiently, and is amenable to meaningful comparison operations. We present a formal framework for representing code in large codebases. In our framework, the semantic descriptor for each code snippet is a partial temporal specification that captures the sequences of method invocations on an API. The main idea is to represent partial temporal specifications as symbolic automata: automata where transitions may be labeled by variables, and a variable can be substituted by a letter, a word, or a regular language. Using symbolic automata, we construct an abstract domain for static analysis of big code, capturing both the partialness and the precision of a specification. We show interesting relationships between the lattice operations of this domain and common operators for manipulating partial temporal specifications, such as building a more informative specification by consolidating two partial specifications, and comparing partial temporal specifications.
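To make the representation concrete, here is a sketch of a symbolic automaton with variable-labeled transitions and a substitution operation that replaces a variable by a word; all names and the exact data layout are our assumptions, and substitution by a regular language (also mentioned above) is not modelled.

```haskell
type State = Int
data Label = Letter String   -- a concrete API event
           | Hole   String   -- a variable, to be substituted later
  deriving (Eq, Ord, Show)

data SymAut = SymAut
  { states  :: [State]
  , initial :: State
  , final   :: [State]
  , trans   :: [(State, Label, State)]
  } deriving Show

-- Substitute variable x by a non-empty word: each transition p --x--> q
-- becomes a chain of letter transitions through fresh intermediate states.
substitute :: String -> [String] -> SymAut -> SymAut
substitute x word a =
    a { states = states a ++ newStates, trans = kept ++ concat chains }
  where
    holes     = [t | t@(_, Hole y, _) <- trans a, y == x]
    kept      = [t | t@(_, l, _)      <- trans a, l /= Hole x]
    n         = length word - 1            -- fresh states per expanded hole
    base      = maximum (0 : states a) + 1
    mids i    = [base + i * n .. base + i * n + n - 1]
    chains    = zipWith chain [0 ..] holes
    chain i (p, _, q) =
      let pts = p : mids i ++ [q]
      in zip3 (init pts) (map Letter word) (tail pts)
    newStates = concatMap mids [0 .. length holes - 1]
```

Substituting a hole by a single letter leaves the state set unchanged; longer words splice in fresh states.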
Learning to reason about and understand the world's knowledge is a fundamental problem in Artificial Intelligence (AI). Traditional symbolic AI methods were popular in the 1980s, when first-order logic rules were mostly handwritten and reasoning algorithms were built on top of them. In the 1990s, more and more researchers became interested in statistical methods that deal with the uncertainty of the data using probabilistic models. While it has long been hypothesized that both the symbolic and statistical approaches are necessary for building intelligent systems, in practice, bridging the two in a combined framework often brings intractability: most probabilistic first-order logics are simply not efficient enough for real-world-sized tasks. For example, Markov logics [83] integrate first-order logics with Markov random field theory, but when mapping the entities in a knowledge base (KB) to the propositional theory (i.e., grounding), the size of the network depends on the number of facts in the KB, i.e., O(n^k), where k is the arity of the predicate and n is the number of KB constants. In this thesis, we design a new probabilistic logic programming paradigm to address various scalability issues in probabilistic logics. We propose a group of scalable methods for inference, learning, and inducing the structure of probabilistic logics. More specifically, we propose a scalable probabilistic logic called ProPPR [105] to combine the best of the symbolic and statistical worlds. ProPPR can be viewed as a probabilistic version of Prolog, and we associate a feature vector with each clause to learn weights from data. The learned weights are used to control search during inference. ProPPR's inference scheme is distinctive: instead of performing potentially intractable global inference, ProPPR uses a provably correct approximate personalized PageRank to conduct local grounding, whose inference time is independent of the size of the KB. To test ProPPR for large, real-world relational learning ...
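The local-grounding step rests on approximate personalized PageRank, which the standard "push" procedure computes while touching only nodes near the seed; that locality is why the cost can be independent of KB size. Below is a generic sketch (the graph type, parameter names, and the unweighted single-seed setting are our assumptions; ProPPR's actual grounding works over weighted proof graphs).

```haskell
import qualified Data.Map.Strict as Map

type Node  = Int
type Graph = Map.Map Node [Node]              -- adjacency lists

-- Approximate personalized PageRank from a single seed, by repeated
-- "push" operations: returns node weights p once all residuals r are small.
apr :: Graph -> Node -> Double -> Double -> Map.Map Node Double
apr g seed alpha eps = go (Map.singleton seed 1.0) Map.empty
  where
    degree u = length (Map.findWithDefault [] u g)
    go r p =
      case [ u | (u, ru) <- Map.toList r
               , degree u > 0
               , ru / fromIntegral (degree u) >= eps ] of
        []      -> p                          -- all residuals small: done
        (u : _) ->
          let ru   = r Map.! u
              ns   = g Map.! u
              d    = fromIntegral (length ns)
              p'   = Map.insertWith (+) u (alpha * ru) p
              push = (1 - alpha) * ru / d     -- mass spread to neighbours
              r'   = foldr (\v m -> Map.insertWith (+) v push m)
                           (Map.insert u 0 r) ns
          in go r' p'
```

Each push retires an alpha fraction of the residual mass at one node, so the number of pushes, and hence the set of nodes ever visited, is bounded in terms of alpha and eps alone, independently of the total graph size.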
ISBN (print): 9781450335492
Input-output examples have emerged as a practical and user-friendly specification mechanism for program synthesis in many environments. While example-driven tools have demonstrated tangible impact that has inspired adoption in industry, their underlying semantics are less well-understood: what are "examples" and how do they relate to other kinds of specifications? This paper demonstrates that examples can, in general, be interpreted as refinement types. Seen in this light, program synthesis is the task of finding an inhabitant of such a type. This insight provides an immediate semantic interpretation for examples. Moreover, it enables us to exploit decades of research in type theory as well as its correspondence with intuitionistic logic, rather than design ad hoc theoretical frameworks for synthesis from scratch. We put this observation into practice by formalizing synthesis as proof search in a sequent calculus with intersection and union refinements that we prove to be sound with respect to a conventional type system. In addition, we show how to handle negative examples, which arise from user feedback or counterexample-guided loops. This theory serves as the basis for a prototype implementation that extends our core language to support ML-style algebraic data types and structurally inductive functions. Users can also specify synthesis goals using polymorphic refinements and import monomorphic libraries. The prototype serves as a vehicle for empirically evaluating a number of different strategies for resolving the nondeterminism of the sequent calculus (bottom-up theorem proving, term enumeration with refinement type checking, and combinations of both), the results of which classify, explain, and validate the design choices of existing synthesis systems. It also provides a platform for measuring the practical value of a specification language that combines "examples" with the more general expressiveness of refinements.
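As a toy rendering of the example-as-specification view, the following enumerates terms of a tiny invented expression language and checks them against input-output examples, i.e., the "term enumeration" strategy mentioned above; the language, evaluator, and depth bound are all our own constructions.

```haskell
data Expr = X | Zero | Succ Expr | Plus Expr Expr deriving Show

evalE :: Int -> Expr -> Int
evalE x X          = x
evalE _ Zero       = 0
evalE x (Succ e)   = evalE x e + 1
evalE x (Plus a b) = evalE x a + evalE x b

-- All candidate terms up to a given depth.
candidates :: Int -> [Expr]
candidates 0 = [X, Zero]
candidates n = candidates 0
            ++ [Succ e   | e <- candidates (n - 1)]
            ++ [Plus a b | a <- candidates (n - 1), b <- candidates (n - 1)]

-- A term satisfies the examples if it maps every input to its output.
synthesise :: [(Int, Int)] -> Maybe Expr
synthesise exs =
  case [ e | n <- [0 .. 3], e <- candidates n
           , all (\(i, o) -> evalE i e == o) exs ] of
    (e : _) -> Just e
    []      -> Nothing

-- synthesise [(0, 1), (2, 3)]  ==>  Just (Succ X)
```

In the paper's terms, the examples play the role of a refinement type and the returned term is an inhabitant; the sequent-calculus search replaces this brute-force enumeration with goal-directed proof search.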
ISBN (digital): 9783662496305
ISBN (print): 9783662496305; 9783662496299
This paper is concerned with Freeze LTL, a temporal logic on data words with registers. In a (multi-attributed) data word, each position carries a letter from a finite alphabet and assigns a data value to a fixed, finite set of attributes. The satisfiability problem of Freeze LTL is undecidable if more than one register is available or tuples of data values can be stored and compared arbitrarily. Starting from the decidable one-register fragment, we propose an extension that allows for specifying a dependency relation on attributes. This restricts, in a flexible way, how collections of attribute values can be stored and compared. This conceptual dimension is orthogonal to the number of registers or the available temporal operators. The extension is strict. Admitting arbitrary dependency relations, satisfiability becomes undecidable. Tree-like relations, however, induce a family of decidable fragments escalating the ordinal-indexed hierarchy of fast-growing complexity classes, a recently introduced framework for non-primitive-recursive complexities. This results in completeness for the class F_ε0. We employ nested counter systems and show how they relate to the hierarchy in terms of the nesting depth.
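As a standard illustration of the one-register fragment (our example, not drawn from the paper): the formula below freezes the current data value in register r and asserts that it never recurs, so as a whole it says that all data values in the word are pairwise distinct.

```latex
% \downarrow_r stores the current data value in register r;
% \uparrow_r tests the current value for equality with the stored one.
\mathbf{G}\,\bigl(\downarrow_r \mathbf{X}\,\mathbf{G}\,\neg\uparrow_r\bigr)
```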
There is a disconnect between the theory and the practice of lightweight physical unclonable function (PUF)-based protocols. At a theoretical level, there exist several PUF-based authentication protocols with unique features and novel efficiency claims, but most of these solutions lack real-world implementations with simple performance figures. On the other hand, practical protocol implementations are ad hoc designs fixed to a specific functionality, with limited area optimisation. This work aims to bring these two approaches to PUF protocols closer together. The authors' contribution is twofold. First, they provide a novel ASIP (application-specific instruction-set processor) that can efficiently execute PUF-based authentication protocols. The key novelty of the proposed ASIP is optimisation for area without degrading performance. Second, they demonstrate the capability of their ASIP by mapping three secure PUF-based authentication protocols onto it and benchmarking their execution time, memory footprint, communication overhead, and power/energy consumption. Their results demonstrate the advantage of the ASIP over dedicated architectures, as well as over general-purpose programming on an MSP430. The results further demonstrate various efficiency metrics that can be used to compare PUF-based protocol implementations.