We define a call-by-value variant of Gödel's system T with references, and equip it with a linear dependent type and effect system, called dlT, that can estimate the time complexity of programs as a function of the size of their inputs. We prove that the type system is intensionally sound, in the sense that it over-approximates the complexity of executing the programs on a variant of the CEK abstract machine. Moreover, we define a sound and complete type inference algorithm which critically exploits the subrecursive nature of dlT. Finally, we demonstrate the usefulness of dlT for analyzing the complexity of cryptographic reductions by providing an upper bound for the constructed adversary of the Goldreich-Levin theorem.
We introduce a quantum lambda calculus inspired by Lafont's Soft Linear Logic, capturing the polynomial quantum complexity classes EQP, BQP and ZQP. The calculus is based on the "classical control and quantum data" paradigm. This is the first example of a formal system capturing quantum complexity classes in the spirit of implicit computational complexity: it is machine-free, and no explicit bound (e.g., polynomials) appears in its syntax. (C) 2009 Elsevier B.V. All rights reserved.
ISBN (print): 9781450399012
Automatic complexity analysis has not reached mainstream adoption due to outstanding challenges, such as scalability and usability, and no formally verified analyzer exists. However, the need to evaluate resource usage is crucial: even a guaranteed correct program, whose memory usage exceeds available resources, is unreliable. The field of implicit computational complexity (ICC) offers a potential avenue to resolving some of these outstanding challenges by introducing unique, machine-independent, and flexible approaches to program analysis. But since ICC techniques are mostly theoretical, it is unclear how strongly these assumptions hold in practice. This project defines a plan with three directions, focused on practical analysis, compiler integration, and formal verification, to assess the suitability of ICC to address outstanding challenges in automatic complexity analysis.
We argue that there is a link between implicit computational complexity theory and reversible computation. We introduce inherently reversible programming languages which capture the complexity classes ETIME and P. Furthermore, we discuss and analyze higher-order versions of our reversible programming languages. (c) 2021 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://***/licenses/by/4.0/).
We show that context semantics can be fruitfully applied to the quantitative analysis of proof normalization in linear logic. In particular, context semantics lets us define the weight of a proof-net as a measure of its inherent complexity: it is both an upper bound on normalization time (modulo a polynomial overhead, independently of the reduction strategy) and a lower bound on the amount of resources needed to compute the normal form. Weights are then exploited in proving strong soundness theorems for various subsystems of linear logic, namely elementary linear logic, soft linear logic, and light linear logic.
Two restricted imperative programming languages are considered: One is a slight modification of a loop language studied intensively in the literature, the other is a stack programming language over an arbitrary but fixed alphabet, supporting a suitable loop concept over stacks. The paper presents a purely syntactical method for analysing the impact of nesting loops on the running time. This gives rise to a uniform measure mu on both loop and stack programs, that is, a function that assigns to each such program P a natural number mu(P) computable from the syntax of P. It is shown that stack programs of mu-measure n compute exactly those functions computed by a Turing machine whose running time lies in Grzegorczyk class E^{n+2}. In particular, stack programs of mu-measure 0 compute precisely the polynomial-time computable functions. Furthermore, it is shown that loop programs of mu-measure n compute exactly the functions in E^{n+2}. In particular, loop programs of mu-measure 0 compute precisely the linear-space computable functions. (C) 2003 Elsevier B.V. All rights reserved.
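As a rough illustration of the idea of a syntactic measure on loop programs, the sketch below (with hypothetical helper names, not the paper's formal definition) computes plain loop-nesting depth on a tiny loop-program AST. Note the real mu is more refined: it only counts nestings that can actually cause growth, which is why mu-measure 0 still admits loops.

```python
# A minimal, hedged sketch of a syntactic nesting measure on loop programs.
# The actual mu-measure from the paper counts only "dangerous" nestings;
# here we simply count maximal loop-nesting depth, for illustration.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Inc:            # x := x + 1
    var: str

@dataclass
class Loop:           # loop x do body end (run body x times; x fixed on entry)
    var: str
    body: List[object] = field(default_factory=list)

def nesting_depth(prog):
    """Maximal loop-nesting depth of a program (0 for straight-line code)."""
    depth = 0
    for stmt in prog:
        if isinstance(stmt, Loop):
            depth = max(depth, 1 + nesting_depth(stmt.body))
    return depth

flat = [Inc("y"), Loop("x", [Inc("y")])]        # one loop, no nesting
nested = [Loop("x", [Loop("y", [Inc("z")])])]   # two nested loops
print(nesting_depth(flat), nesting_depth(nested))
```

The point of the paper is that such purely syntactic counts, suitably refined, align exactly with the Grzegorczyk hierarchy of running times.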
A key problem in implicit complexity is to analyse the impact on program run times of nesting control structures, such as recursion in all finite types in functional languages or for-do statements in imperative languages. Three types of programs are studied. One type of program can only use ground type recursion. Another is concerned with imperative programs: ordinary loop programs and stack programs. Programs of the third type can use higher type recursion on notation as in functional programming languages. The present approach to analysing run time yields various characterisations of FPTIME and all Grzegorczyk classes at and above the FLINSPACE level. Central to this approach is the distinction between "top recursion" and "side recursion". Top recursions are the only form of recursion which can cause an increase in complexity. Counting the depth of nested top recursions leads to several versions of "the mu-measure". In this way, various recent solutions of key problems in implicit complexity are integrated into a uniform approach. (c) 2004 Elsevier B.V. All rights reserved.
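The intuition behind counting nested "top recursions" can be illustrated outside the paper's formal calculus (the sketch below is not its syntax, just an analogy): each layer of iteration whose accumulator flows back into itself adds one level of exponentiation.

```python
# Hedged illustration of why nesting growth-causing ("top") recursions
# increases complexity: doubling is harmless on its own, but iterating
# it n times, feeding the result back in, yields 2^n.

def double(n):
    # a single, non-nested step: linear growth n -> 2n
    return 2 * n

def exp2(n):
    # nesting: the accumulator produced by double feeds the next round,
    # so n iterations give 2^n -- one exponential per nested level
    acc = 1
    for _ in range(n):
        acc = double(acc)
    return acc

print(exp2(5))   # 32
```

A "side recursion", by contrast, would use the recursive result without letting it enlarge the data driving further recursion, which is why it cannot cause such blow-ups.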
A key problem in implicit computational complexity is to analyse the impact on program run times of nesting restricted control structures, such as for-do statements in imperative languages. This problem has two aspects. One is whether there are methods of extracting information from the syntax of such programs that give insight as to why some nesting of control structures may cause a blow up in complexity, e.g. from polynomial to (iterated) exponential time, while others do not. Bearing in mind that there are limitations to any such method, the other is whether a given method is “optimal” in the sense that it provides a full understanding of the mechanisms that cause and control the complexity of computations. This paper presents a graph theoretical analysis of control in stack programs, called “garland measure”. It is shown that (1) stack programs of garland measure n compute exactly those functions computed by a Turing machine whose running time (as a function of input size) lies in Grzegorczyk class E^{n+2}. In particular, stack programs of garland measure 0 compute precisely the polynomial-time computable functions. Furthermore, it is shown that the garland measure is “optimal” in the sense that no other measure on stack programs satisfying (1) can admit more algorithms at any level when restricting to “core programs” that comprise those stack manipulations which cause and control computational complexity.
The objective of this paper is to prove that the initial Pola setting, with both inductive and coinductive data, is sound for polynomial size (PSIZE). Explicitly this means all programs written in Pola have their output size bounded by a polynomial in their input size. The paper describes the polarized categorical semantics for Pola and establishes the result by providing categorical models for various fragments of Pola which have explicit size bound information built into the maps. To obtain PSIZE soundness for Pola with just inductive data, the semantics in sized sets suffices. Sized sets are sets equipped with a "size map" which associates to each element a size. Size is usually just a natural number but, more generally, the size could be an element of a size rig. A polarized category consists of an opponent and a player category joined by a module. Sized sets can be used to create a polarized category by letting the opponent and module maps be bounded by polynomials while the player category consists of maps bounded by a constant. This gives a Pola setting with inductive data and immediately establishes the PSIZE soundness of the initial Pola setting. The main technical difficulty of the paper is to provide a semantics which correctly models coinductive data as well. For this, "amortized" sets are introduced: these are sets in which a higher-order size function is associated to each element, which given a size returns a size. They are called amortized because one is concerned with the asymptotic behavior of these functions. While amortized sets have coinductive data, they are not affine closed: a final step, using equivalence relations, is required to obtain a model which includes this aspect of Pola structure. While PSIZE by itself is a very weak bound, it is a crucial step in establishing polynomial space (PSPACE) and polynomial time (PTIME) bounds for these settings. (C) 2013 Elsevier B.V. All rights reserved.
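The sized-sets idea described above can be sketched concretely (all names below are hypothetical illustrations, not the paper's categorical constructions): a sized set pairs a carrier with a size map, and a map between sized sets carries a polynomial bound that its output sizes must respect.

```python
# A minimal, hedged sketch of "sized sets": sets with a size map, and
# maps whose output size is bounded by a polynomial in the input size --
# the semantic ingredient behind PSIZE soundness for the inductive fragment.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SizedSet:
    size: Callable[[object], int]        # the "size map"

@dataclass
class BoundedMap:
    fn: Callable[[object], object]
    bound: Callable[[int], int]          # claimed polynomial size bound

def respects_bound(m, src, tgt, samples):
    """Spot-check |m(x)| <= bound(|x|) on sample elements (not a proof)."""
    return all(tgt.size(m.fn(x)) <= m.bound(src.size(x)) for x in samples)

lists = SizedSet(size=len)
# duplication: output size 2n, witnessed by the polynomial p(n) = 2n
dup = BoundedMap(fn=lambda xs: xs + xs, bound=lambda n: 2 * n)
print(respects_bound(dup, lists, lists, [[1, 2], [1, 2, 3]]))   # True
```

The paper's amortized sets generalize this by replacing the number-valued size with a size-to-size function, which is what makes coinductive data tractable.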
Several type systems have been proposed to statically control the time complexity of lambda-calculus programs and characterize complexity classes such as FPTIME or FEXPTIME. A first line of research stems from linear logic and restricted versions of its !-modality controlling duplication. An instance of this is light linear logic for polynomial time computation [5]. A second approach relies on the idea of tracking the size increase between input and output, which, together with a restricted recursion scheme, allows one to deduce time complexity bounds. This second approach is illustrated for instance by non-size-increasing types [8]. However, both approaches suffer from limitations. The first one, that of linear logic, has a limited intensional expressivity, that is to say some natural polynomial time programs are not typable. As to the second approach, it is essentially linear; more precisely, it does not allow for a non-linear use of functional arguments. In the present work we incorporate both approaches into a common type system, in order to overcome their respective constraints. The source language we consider is a lambda-calculus with datatypes and iteration, that is to say a variant of Gödel's system T. Our goal is to design a system for this language allowing both to handle non-linear functional arguments and to keep a good intensional expressivity. We illustrate our methodology by choosing the system of elementary linear logic (ELL) and combining it with a system of linear size types. We discuss the expressivity of this new type system, called sEAL, and prove that it gives a characterization of the complexity classes FPTIME and 2k-FEXPTIME, for k >= 0. (C) 2019 Elsevier B.V. All rights reserved.