A characterization of definability by positive first-order formulas in terms of Fraïssé-Ehrenfeucht-like games is developed. Using this characterization, an elementary, purely combinatorial proof of the failure of Lyndon's Lemma (that every monotone first-order property is expressible positively) for finite models is given. The proof implies that first-order logic is a bad candidate for the role of a uniform version of positive Boolean circuits of constant depth and polynomial size. Although Lyndon's Lemma fails for finite models, some similar characterization may still be established for finitely monotone properties, and we formulate a particular open problem in this direction.
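As a reminder of the definitions involved (our paraphrase, not taken from the abstract): a first-order formula is positive if it is built from atomic formulas using only conjunction, disjunction, and quantification, with no negation,

$$\varphi ::= R(\bar{x}) \;\mid\; x = y \;\mid\; \varphi \wedge \varphi \;\mid\; \varphi \vee \varphi \;\mid\; \exists x\, \varphi \;\mid\; \forall x\, \varphi,$$

and a sentence is monotone if enlarging the relations of a model that satisfies it can never falsify it: if A satisfies φ and A' has the same universe with every relation of A extended, then A' satisfies φ. Lyndon's Lemma, as quoted in the abstract, states that over arbitrary structures every monotone first-order sentence is equivalent to a positive one; the result above is that this equivalence breaks down once attention is restricted to finite models.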
Let L^ω_∞ω be the infinitary language obtained from the first-order language of graphs by closure under conjunctions and disjunctions of arbitrary sets of formulas, provided only finitely many distinct variables occur among the formulas. Let p(n) be the edge probability of the random graph on n vertices. Previous articles have shown that when p(n) is constant or p(n) = n^(-α) with α > 1, every sentence in L^ω_∞ω has a probability that converges as n gets large; however, convergence can fail when p(n) = n^(-α) with α ≤ 1. It is shown that if α is irrational, then every sentence in L^ω_∞ω has a probability that converges to 0 or to 1. It is also shown that if α = 1, then there are sentences in the deterministic transitive closure logic (and therefore in L^ω_∞ω) whose probability does not converge.
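A small simulation sketch of the kind of threshold behaviour discussed above (our illustration, not code from the article; the function names and parameters are invented). The sentence "the graph contains a triangle" uses only three variables, so it lies in the finite-variable fragment in question, and its probability on G(n, n^(-α)) tends to 1 for α < 1 and to 0 for α > 1:

```python
# Monte Carlo estimate of Pr[G(n, n^-alpha) contains a triangle]
# (a toy illustration of convergence to 0 or 1; not code from the article).
import random
from itertools import combinations

def has_triangle(n, alpha, rng):
    p = n ** (-alpha)
    adj = [set() for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    # a triangle is a pair of adjacent vertices with a common neighbour
    return any(adj[u] & adj[v] for u in range(n) for v in adj[u] if v > u)

def estimate(n, alpha, trials=100, seed=0):
    rng = random.Random(seed)
    return sum(has_triangle(n, alpha, rng) for _ in range(trials)) / trials

if __name__ == "__main__":
    for alpha in (0.7, 1.3):
        print(alpha, [estimate(n, alpha) for n in (50, 100, 200)])
```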
One formulation of the concept of logic programming is the notion of an Abstract Logic Programming Language, introduced in [8]. Central to that definition is uniform proof, which enforces the requirements of inference direction, including goal-directedness, and the duality of readings, declarative and procedural. We use this technology to investigate Disjunctive Logic Programming (DLP), an extension of traditional logic programming that permits disjunctive program clauses. This extension has been considered by some to be inappropriately identified with logic programming, because the indefinite reasoning introduced by disjunction violates the goal-oriented search directionality central to logic programming. We overcome this criticism by showing that the requirement of uniform provability can be realized in a logic more general than that of DLP, under a modest, sound modification of programs. We use this observation to derive inference rules that capture the essential proof structure of InH-Prolog, a known proof procedure for DLP.
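A standard toy example (ours, not taken from the paper) of why disjunctive clauses conflict with goal-directed search: from the program consisting of the single clause p(a) ∨ p(b), the existential goal is classically provable,

$$\{\,p(a) \vee p(b)\,\} \;\vdash\; \exists x\, p(x), \qquad \{\,p(a) \vee p(b)\,\} \;\nvdash\; p(a), \qquad \{\,p(a) \vee p(b)\,\} \;\nvdash\; p(b),$$

yet a uniform proof would have to reduce the existential goal to p(t) for some particular term t before consulting the program, and neither instance is provable on its own. This is the kind of indefinite reasoning that the modest, sound modification of programs mentioned above is designed to accommodate.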
The Verilog hardware description language (HDL) is widely used to model the structure and behaviour of digital systems ranging from simple hardware building blocks to complete systems. Its semantics is based on the scheduling of events and the propagation of changes. Different Verilog models of the same device are used during the design process, and it is important that these be 'equivalent'; formal methods for ensuring this could be commercially significant. Unfortunately, there is very little theory available to help. This self-contained tutorial paper explains the semantics of Verilog informally and poses a number of logical and semantic problems that are intended to provoke further research. Any theory developed to support Verilog is likely to be useful for the analysis of the similar (but more complex) language VHDL.
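A deliberately tiny sketch, in Python rather than Verilog, of the "scheduling of events and propagation of changes" idea mentioned above. It is not Verilog's actual stratified event-queue semantics, and all names are invented for illustration:

```python
# A toy event-driven simulator sketch (invented for illustration; Verilog's real
# scheduling semantics, with stratified event queues and nondeterministic process
# ordering, is considerably more involved).
class Signal:
    def __init__(self, name, value=0):
        self.name, self.value, self.watchers = name, value, []

class Simulator:
    def __init__(self):
        self.queue = []          # pending (signal, new_value) update events

    def schedule(self, sig, value):
        self.queue.append((sig, value))

    def run(self):
        while self.queue:
            sig, value = self.queue.pop(0)
            if value != sig.value:           # only real changes propagate
                sig.value = value
                for proc in sig.watchers:    # wake processes sensitive to sig
                    proc(self)

# example: a NOT gate modelled as a process sensitive to its input
sim = Simulator()
a, y = Signal("a"), Signal("y", 1)
a.watchers.append(lambda s: s.schedule(y, 0 if a.value else 1))
sim.schedule(a, 1)
sim.run()
print(y.name, y.value)   # y 0
```

Even in this simplified setting, the order in which simultaneous events are processed can affect the outcome, which hints at why 'equivalence' of different Verilog models of the same device is delicate.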
A major obstacle in applying finite-state model checking to the verification of large systems is the combinatorial explosion of the state space that arises when many loosely coupled parallel processes are considered. The problem, also known as the state-explosion problem, has been attacked from various sides. This paper presents a new approach based on partial model checking: parts of the concurrent system are gradually removed while the specification is transformed accordingly. When the intermediate specifications constructed in this manner can be kept small, the state-explosion problem is avoided. Experimental results with a prototype implemented in Standard ML show that for Milner's Scheduler - an often used benchmark - this approach improves on the published results using Binary Decision Diagrams and is comparable to results obtained using generalized Decision Diagrams. Specifications are expressed in a variant of the modal μ-calculus.
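For reference, the central identity of partial model checking as it is usually stated in the literature (our paraphrase, not a quotation from the paper): for a parallel composition and a specification φ there is a quotient operation / on formulas such that

$$P_1 \parallel P_2 \models \varphi \iff P_2 \models \varphi / P_1,$$

so components can be removed one at a time by transforming the μ-calculus specification instead of building the full product state space. The method pays off exactly when the successive quotiented formulas can be kept small, for instance by simplification, which is what the intermediate specifications mentioned above refer to.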
In modular verification the specification of a module consists of two parts. One part describes the guaranteed behavior of the module. The other part describes the assumed behavior of the environment with which the module is interacting. This is called the assume-guarantee paradigm. Even when the guaranteed behavior of the module is specified in a branching temporal logic, the assumption in the assume-guarantee pair concerns the interaction of the environment with the module along each computation, and is therefore often naturally expressed in linear temporal logic. In this paper we consider assume-guarantee specifications in which the assumption is given by an LTL formula and the guarantee is given by a CTL formula. Verifying modules with respect to such specifications is called the linear-branching model-checking problem. We apply automata-theoretic techniques to obtain a model-checking algorithm whose running time is linear in the size of the module and the size of the CTL guarantee, but doubly exponential in the size of the LTL assumption. We also show that the high complexity in the size of the LTL assumption is inherent, by proving that the problem is EXPSPACE-complete. The lower bound applies even if the branching temporal guarantee is restricted to ∀CTL, the universal fragment of CTL.
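Written as a single bound (our reading of the statement above; the exact shape of the exponent is an assumption, not something given in the abstract), the algorithm runs in time

$$O\!\bigl(|M| \cdot |\varphi| \cdot 2^{2^{O(|\psi|)}}\bigr),$$

where |M| is the size of the module, φ is the CTL guarantee and ψ is the LTL assumption; the EXPSPACE-completeness result indicates that the doubly exponential dependence on |ψ| cannot be avoided.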
We investigate properties of finite relational structures over the reals expressed by first-order sentences whose predicates are the relations of the structure plus arbitrary polynomial inequalities, and whose quantifiers can range over the whole set of reals. In constraint programming terminology, this corresponds to Boolean real polynomial constraint queries on finite structures. The fact that quantifiers range over all reals seems crucial; however, we observe that each sentence in the first-order theory of the reals can be evaluated by letting each quantifier range over only a finite set of real numbers, without changing its truth value. Inspired by this observation, we then show that when all polynomials used are linear, each query can be expressed uniformly on all finite structures by a sentence whose quantifiers range only over the finite domain of the structure. In other words, linear constraint programming on finite structures can be reduced to ordinary query evaluation, as usual in finite model theory and databases. Moreover, if only 'generic' queries are taken into consideration, we show that this can be reduced even further, by proving that such queries can be expressed by sentences using as polynomial inequalities only those of the simple form x < y.
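A toy illustration of both reductions (our own example, not one taken from the paper): over a finite unary relation S of real numbers, the sentence

$$\exists c\, \exists x\, \exists y\, \bigl( S(x) \wedge S(y) \wedge x < c \wedge c < y \bigr),$$

in which the quantifiers range over all reals, asks whether some real lies strictly between two elements of S. By density of the reals this holds exactly when S contains two distinct elements, so the same query is expressed by a sentence whose quantifiers range only over the finite domain of S and whose only inequalities have the simple form x < y:

$$\exists x\, \exists y\, \bigl( S(x) \wedge S(y) \wedge x < y \bigr).$$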
As processor performance continues to improve, more emphasis must be placed on the performance of the memory system. In this paper, a detailed characterization of data cache behavior for individual load instructions is given. We show that by selectively applying cache line allocation according to the characteristics of individual load instructions, overall performance can be improved for both the data cache and the memory system. This approach can improve some aspects of memory performance by as much as 60 percent on existing executables.
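A minimal sketch of the selective-allocation idea (our illustration; the paper's actual mechanism, heuristics and parameters may differ), in which loads identified by their program counter as having little reuse are prevented from allocating cache lines on a miss:

```python
# A toy direct-mapped data cache with a per-load allocation policy
# (invented for illustration; not the paper's simulator).
LINE = 32          # bytes per cache line
SETS = 256         # number of sets (direct-mapped)

class Cache:
    def __init__(self, no_allocate_pcs=frozenset()):
        self.tags = [None] * SETS
        self.no_allocate_pcs = no_allocate_pcs   # loads that should not allocate on a miss
        self.hits = self.misses = 0

    def access(self, pc, addr):
        line = addr // LINE
        idx, tag = line % SETS, line // SETS
        if self.tags[idx] == tag:
            self.hits += 1
        else:
            self.misses += 1
            # Allocate the line only if this load is not marked no-allocate;
            # streaming loads with little reuse are kept out to avoid pollution.
            if pc not in self.no_allocate_pcs:
                self.tags[idx] = tag

# usage sketch: a streaming load (pc=0x10) sweeping a large array would otherwise
# evict lines reused by another load (pc=0x20); marking it no-allocate protects them.
c = Cache(no_allocate_pcs={0x10})
for i in range(10000):
    c.access(0x10, i * LINE)        # streaming access, no reuse
    c.access(0x20, (i % 8) * LINE)  # small working set, heavy reuse
print(c.hits, c.misses)
```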
Current superscalar processors, both RISC and CISC, require substantial instruction fetch and decode bandwidth to keep multiple functional units utilized. While CISC instructions can sometimes provide reduced fetch bandwidth requirements, they are correspondingly more difficult to decode. A hardware assist, called a fill unit, can dynamically collect decoded micro-operations into a decoded instruction cache. Future code fetches to those locations can be satisfied out of this cache and thus bypass the decoding logic. This approach is investigated using the Intel x86 architecture, and a speedup of approximately a factor of two over a P6-like decoding structure is seen for the three SPEC benchmarks investigated. This design is accompanied by a microengine register allocation and renaming scheme that prevents the increased supply of micro-operations from placing excessive demands on the normal register renaming hardware.
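A minimal sketch of the fill-unit idea under obvious simplifications (names, sizes, and the eviction policy are ours, not the paper's): decoded micro-operations are collected into a cache keyed by fetch address, so that repeated fetches bypass the expensive CISC decoder:

```python
# A toy sketch of a fill unit feeding a decoded-instruction (micro-op) cache
# (structure and sizes invented for illustration; the paper's design differs in detail).
class DecodedCache:
    def __init__(self, capacity=1024):
        self.capacity = capacity
        self.lines = {}                   # fetch address -> list of decoded micro-ops

    def lookup(self, fetch_addr):
        return self.lines.get(fetch_addr)

    def fill(self, fetch_addr, uops):
        if len(self.lines) >= self.capacity:
            self.lines.pop(next(iter(self.lines)))   # crude FIFO eviction
        self.lines[fetch_addr] = uops

def decode_x86_block(fetch_addr):
    # stand-in for the expensive CISC decoder; returns decoded micro-ops
    return [f"uop@{fetch_addr}+{i}" for i in range(3)]

def fetch(cache, fetch_addr):
    uops = cache.lookup(fetch_addr)
    if uops is not None:
        return uops                       # hit: the decoding logic is bypassed entirely
    uops = decode_x86_block(fetch_addr)   # miss: decode, then let the fill unit
    cache.fill(fetch_addr, uops)          # collect the result for future fetches
    return uops

cache = DecodedCache()
fetch(cache, 0x400000)   # first fetch decodes and fills
fetch(cache, 0x400000)   # repeated fetch is served from the decoded cache
```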
Branch target buffers, or BTBs, are small caches for recently accessed program branching information. As with data caches, the set of intercepted addresses is divided into equivalence classes based on the low-order bits of an address. Unlike data caches, however, complete resolution of a single address from within an equivalence class is not required for correct program execution. Substantial savings are therefore possible by employing partial resolution, using fewer tag bits than are necessary to uniquely identify an address. We present our analysis of the relationship between the number of tag bits in a branch target buffer and prediction accuracy, based on dynamic simulations of the SPECINT92 benchmark suite. For a 512-entry BTB, our results indicate that, on average, only 2 tag bits are necessary to obtain 99.9% of the accuracy obtainable with a full tag. This suggests that existing microprocessors can achieve substantial area savings in BTB tag storage by employing partial resolution.
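A minimal sketch of a BTB with a configurable partial tag (our illustration, not the simulator used in the study), showing how two different branch addresses can alias under a 2-bit tag while program correctness is preserved, since a wrong predicted target is simply repaired when the branch resolves:

```python
# A toy BTB model with a configurable number of tag bits (invented for illustration;
# the paper's results come from detailed SPECINT92 trace simulations).
class BTB:
    def __init__(self, entries=512, tag_bits=2):
        self.index_bits = entries.bit_length() - 1    # 512 entries -> 9 index bits
        self.tag_bits = tag_bits
        self.table = [None] * entries                 # each entry: (partial tag, target)

    def _split(self, addr):
        index = addr & ((1 << self.index_bits) - 1)
        tag = (addr >> self.index_bits) & ((1 << self.tag_bits) - 1)
        return index, tag

    def predict(self, addr):
        index, tag = self._split(addr)
        entry = self.table[index]
        # With partial tags, two different branches can match the same entry; the
        # prediction may then be wrong, but correctness is unaffected because the
        # mispredicted target is repaired when the branch actually resolves.
        return entry[1] if entry and entry[0] == tag else None

    def update(self, addr, target):
        index, tag = self._split(addr)
        self.table[index] = (tag, target)

# usage sketch: a second branch with the same index and the same 2 low tag bits
# (but a different full address) aliases with the first and reuses its target.
btb = BTB(entries=512, tag_bits=2)
btb.update(0x40001000, 0x40002000)
print(hex(btb.predict(0x40001000)))   # learned target
print(hex(btb.predict(0x40001000 + (4 << 9))))   # aliased address, same prediction
```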