ISBN (print): 9783030510749; 9783030510732
Fixed-point arithmetic is a popular alternative to floating-point arithmetic on embedded systems. Existing work on the verification of fixed-point programs relies on custom formalizations of fixed-point arithmetic, which makes it hard to compare the described techniques or reuse the implementations. In this paper, we address this issue by proposing and formalizing an SMT theory of fixed-point arithmetic. We present an intuitive yet comprehensive syntax of the fixed-point theory, and provide formal semantics for it based on rational arithmetic. We also describe two decision procedures for this theory: one based on the theory of bit-vectors and the other on the theory of reals. We implement the two decision procedures, and evaluate our implementations using existing mature SMT solvers on a benchmark suite we created. Finally, we perform a case study of using the theory we propose to verify properties of quantized neural networks.
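As a rough sketch of the kind of semantics the abstract describes, the Python snippet below models a fixed-point value as a two's-complement bit pattern scaled by 2^-f and evaluates addition exactly in the rationals before rounding back; the Q4.4 format, the saturating overflow behaviour, and the rounding-mode names are illustrative assumptions, not the paper's actual theory.

```python
# A minimal sketch (not the paper's formalization) of rational semantics for a
# fixed-point sort: an (i+f)-bit two's-complement integer scaled by 2^-f.
from fractions import Fraction

def fxp_to_rational(bits: int, i: int, f: int) -> Fraction:
    """Interpret an (i+f)-bit two's-complement pattern as a rational."""
    w = i + f
    raw = bits & ((1 << w) - 1)
    if raw >= 1 << (w - 1):          # negative in two's complement
        raw -= 1 << w
    return Fraction(raw, 1 << f)

def rational_to_fxp(x: Fraction, i: int, f: int, rounding="rtz") -> int:
    """Round a rational onto the fixed-point grid, saturating at the format's
    range (one plausible overflow semantics; an assumption for this sketch)."""
    w = i + f
    scaled = x * (1 << f)
    q = int(scaled) if rounding == "rtz" else round(scaled)   # truncate or nearest
    lo, hi = -(1 << (w - 1)), (1 << (w - 1)) - 1
    q = max(lo, min(hi, q))                                   # saturate
    return q & ((1 << w) - 1)

def fxp_add(a: int, b: int, i: int, f: int) -> int:
    """Addition: exact in the rationals, then rounded back into the format."""
    return rational_to_fxp(fxp_to_rational(a, i, f) + fxp_to_rational(b, i, f), i, f)

# Example: 2.5 + 1.25 in a Q4.4 format (4 integer bits, 4 fractional bits).
q4_4 = lambda s: rational_to_fxp(Fraction(s), 4, 4)
print(fxp_to_rational(fxp_add(q4_4("2.5"), q4_4("1.25"), 4, 4), 4, 4))  # 15/4
```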
Probabilistic CMOS technology, or PCMOS, is an ultra-low-power solution particularly well suited to multimedia applications. Novel results are presented here showing that PCMOS computes with less error and less power than reduced-bit-width, or truncated, deterministic arithmetic, which suffers from the quantization noise injected by limited-precision, fixed-point processing. Simulation results, derived from the physical layout of a PCMOS fixed-point ripple-carry adder, show that energy savings of up to 2X can be achieved while surpassing the quantization performance of reduced-precision solutions.
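The sketch below is a purely behavioural illustration (not derived from the paper's physical layout or energy model) of the comparison being made: quantization error from truncating an exact sum versus random errors in the low-order bits of a full-width probabilistic adder; the widths and flip probability are made-up parameters.

```python
# Hedged behavioural comparison of a truncated deterministic adder (quantization
# noise) against a full-width adder with probabilistic low-order bits.
import random

WIDTH = 16          # full adder width (assumption for the sketch)
TRUNC = 12          # bits kept by the truncated deterministic adder
P_FLIP = 0.05       # per-bit error probability of the noisy low-order bits
NOISY_BITS = 4      # how many low-order bits are implemented probabilistically

def truncated_add(a, b):
    """Drop the low (WIDTH - TRUNC) bits of the exact sum (quantization noise)."""
    drop = WIDTH - TRUNC
    return ((a + b) >> drop) << drop

def pcmos_add(a, b):
    """Full-width sum whose low-order bits each flip with probability P_FLIP."""
    s = a + b
    for k in range(NOISY_BITS):
        if random.random() < P_FLIP:
            s ^= 1 << k
    return s

random.seed(0)
trials = [(random.getrandbits(WIDTH - 1), random.getrandbits(WIDTH - 1))
          for _ in range(10000)]
mse_trunc = sum((truncated_add(a, b) - (a + b)) ** 2 for a, b in trials) / len(trials)
mse_pcmos = sum((pcmos_add(a, b) - (a + b)) ** 2 for a, b in trials) / len(trials)
print(f"truncated adder MSE: {mse_trunc:.1f}   probabilistic adder MSE: {mse_pcmos:.1f}")
```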
ISBN (print): 9791188428090
Massive MIMO base stations are expensive to build due to the requirement for a large number of RF transceivers and high-resolution analog-to-digital converters. A way to reduce the implementation cost is to build the base stations with inexpensive hardware, which results in the received signals being coarsely quantized. To implement the data detection and decoding process in real time, fixed-point arithmetic with reduced precision is used. This article reports the minimum wordlength needed to maintain the Bit-Error Rate (BER) at acceptable levels. Specifically, the eigenvalue decomposition, which is the most computationally demanding portion of the receiver algorithm, can be calculated with wordlengths of 7 and 10 bits for the eigenvectors and eigenvalues, respectively.
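A rough way to reproduce this kind of wordlength study in software is sketched below; the uniform quantizer, the random channel, and the Gram-matrix reconstruction metric are assumptions for illustration, not the paper's receiver or simulation setup.

```python
# Quantize the eigendecomposition of a Gram matrix to the reported 7/10-bit
# wordlengths and measure the resulting reconstruction error (illustrative only).
import numpy as np

def quantize(x, wordlength, full_scale):
    """Uniform quantizer: wordlength bits over [-full_scale, full_scale)."""
    step = 2 * full_scale / (2 ** wordlength)
    return np.clip(np.round(x / step) * step, -full_scale, full_scale - step)

rng = np.random.default_rng(0)
H = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
G = H.conj().T @ H                      # 8x8 Hermitian Gram matrix
eigvals, eigvecs = np.linalg.eigh(G)

# 7 bits for eigenvectors, 10 bits for eigenvalues (the wordlengths the abstract
# reports as sufficient), quantizing real and imaginary parts separately.
V_q = quantize(eigvecs.real, 7, 1.0) + 1j * quantize(eigvecs.imag, 7, 1.0)
d_q = quantize(eigvals, 10, float(eigvals.max()) * 1.05)

G_q = V_q @ np.diag(d_q) @ V_q.conj().T
rel_err = np.linalg.norm(G - G_q) / np.linalg.norm(G)
print(f"relative reconstruction error with 7/10-bit wordlengths: {rel_err:.3e}")
```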
ISBN (print): 078039433X
This paper investigates the wordlength effect on an adaptive phased array antenna using the Constant Modulus Algorithm (CMA). The antenna is a flat four-beam compact phased array whose main beam can be switched into four directions by using four one-bit phase shifters. The beam with the maximum received power is used as the initial beam for the CMA. The wordlength investigation is presented in terms of signal constellations, SINR trajectories, and antenna patterns.
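The snippet below sketches the general experiment type only: a generic CMA beamformer whose weight vector is re-quantized to a candidate wordlength after each update. The array geometry, source model, and wordlengths are illustrative assumptions, not the paper's four-beam antenna.

```python
# Generic CMA beamformer with fixed-point weight storage (illustrative sketch).
import numpy as np

def quantize(x, bits, full_scale=2.0):
    """Quantize real and imaginary parts to a `bits`-bit two's-complement grid."""
    step = 2 * full_scale / (2 ** bits)
    q = lambda r: np.clip(np.round(r / step) * step, -full_scale, full_scale - step)
    return q(x.real) + 1j * q(x.imag)

def run_cma(X, bits, mu=0.01):
    """CMA weight adaptation with the weights re-quantized after every update."""
    w = np.zeros(X.shape[0], dtype=complex)
    w[0] = 1.0                                    # initial beam (assumption)
    for x in X.T:
        y = np.vdot(w, x)                         # y = w^H x
        w = quantize(w - mu * (abs(y) ** 2 - 1.0) * x * np.conj(y), bits)
    return np.mean([abs(abs(np.vdot(w, x)) - 1.0) for x in X.T])

rng = np.random.default_rng(1)
n_elem, n_snap = 4, 2000
theta = np.deg2rad(20.0)                          # source direction (assumption)
steer = np.exp(1j * np.pi * np.arange(n_elem) * np.sin(theta))
symbols = np.exp(1j * 2 * np.pi * rng.integers(0, 8, n_snap) / 8)   # unit-modulus 8-PSK
X = np.outer(steer, symbols) + 0.1 * (rng.standard_normal((n_elem, n_snap))
                                      + 1j * rng.standard_normal((n_elem, n_snap)))

for bits in (6, 10, 14):                          # candidate weight wordlengths
    print(f"{bits}-bit weights: mean constant-modulus error {run_cma(X, bits):.3f}")
```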
In this paper, the data-precision behavior of adaptive filters is studied when they are implemented in fixed point. The performance evaluation considers two parameters: the number of bits used for the integer and fractional parts in a two's-complement fixed-point format, and the preprocessing order of the speech data. To validate our results, we evaluated the relative mean square error between the results obtained with standard 32-bit floating point and with different fixed-point formats. The input data used for testing are speech sounds in Spanish for speech recognition purposes. The results allow us to optimally dimension a fixed-point arithmetic unit that can be quickly implemented by high-level synthesis tools. The results from this work can also be used to implement the pre-processing stage for begin-end word detection.
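A much simplified version of this evaluation is sketched below: the same LMS filter is run in floating point and in an assumed Qm.n two's-complement format, and the outputs are compared with a relative mean-square error. The synthetic input stands in for the Spanish speech data, which is not reproduced here.

```python
# Relative MSE between floating-point and fixed-point runs of an LMS filter.
import numpy as np

def to_q(x, int_bits, frac_bits):
    """Round onto a two's-complement Q(int_bits).(frac_bits) grid with saturation."""
    scale = 2 ** frac_bits
    lo, hi = -(2 ** (int_bits + frac_bits - 1)), 2 ** (int_bits + frac_bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def lms(x, d, taps=8, mu=0.05, quant=None):
    """LMS filter; if `quant` is given, weights and output are kept on its grid."""
    w = np.zeros(taps)
    y = np.zeros_like(x)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]
        y[n] = w @ u
        if quant:
            y[n] = quant(y[n])
        w = w + mu * (d[n] - y[n]) * u
        if quant:
            w = quant(w)
    return y

rng = np.random.default_rng(0)
x = rng.standard_normal(4000) * 0.3                      # stand-in for speech samples
d = np.convolve(x, [0.5, -0.3, 0.2], mode="same")        # unknown system to identify
y_ref = lms(x, d)                                        # floating-point reference
for m, f in [(2, 6), (2, 10), (2, 14)]:                  # example Qm.f formats
    y_fx = lms(x, d, quant=lambda v: to_q(v, m, f))
    rel_mse = np.mean((y_ref - y_fx) ** 2) / np.mean(y_ref ** 2)
    print(f"Q{m}.{f}: relative MSE = {rel_mse:.2e}")
```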
We consider the problem of estimating the numerical accuracy of programs with operations in fixed-point arithmetic and variables of arbitrary, mixed precision, and possibly non-deterministic value. By applying a set of parameterised rewrite rules, we transform the relevant fragments of the program under consideration into sequences of operations in integer arithmetic over vectors of bits, thereby reducing the question of whether the error enclosures in the initial program can ever exceed a given order of magnitude to simple reachability queries on the transformed program. We describe a possible verification flow and a prototype analyser that implements our technique. We present an experimental evaluation on a particularly complex industrial case study, including a preliminary comparison between bit-level and word-level decision procedures.
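The toy sketch below illustrates the general idea only (not the paper's rewrite rules or verification flow): a fixed-point value is carried as an integer mantissa plus a scale, together with an interval enclosing its accumulated rounding error, so asking whether the error can exceed a threshold becomes a simple check on the propagated bounds.

```python
# Fixed-point values rewritten to integer arithmetic with an error enclosure.
from dataclasses import dataclass

@dataclass
class Fx:
    mant: int        # integer mantissa
    frac: int        # fractional bits (value = mant * 2**-frac)
    err_lo: float    # lower bound on accumulated error
    err_hi: float    # upper bound on accumulated error

def align(a: Fx, frac: int) -> Fx:
    """Rewrite to another precision; truncation adds at most one ulp of error."""
    if frac >= a.frac:
        return Fx(a.mant << (frac - a.frac), frac, a.err_lo, a.err_hi)
    shift = a.frac - frac
    ulp = 2.0 ** -frac
    return Fx(a.mant >> shift, frac, a.err_lo - ulp, a.err_hi)  # >> rounds toward -inf

def add(a: Fx, b: Fx) -> Fx:
    """Mixed-precision addition rewritten as integer addition at the finer scale."""
    frac = max(a.frac, b.frac)
    a, b = align(a, frac), align(b, frac)
    return Fx(a.mant + b.mant, frac, a.err_lo + b.err_lo, a.err_hi + b.err_hi)

def may_exceed(x: Fx, eps: float) -> bool:
    """The reachability-style query: can the error enclosure leave [-eps, eps]?"""
    return x.err_lo < -eps or x.err_hi > eps

# Adding a fine-precision value (Q.12) to a coarse one (Q.4) and storing the result
# back in Q.4 introduces a truncation error that the enclosure captures.
x = Fx(mant=4097, frac=12, err_lo=0.0, err_hi=0.0)   # 1 + 2**-12
y = Fx(mant=20,   frac=4,  err_lo=0.0, err_hi=0.0)   # 1.25
z = align(add(x, y), 4)                              # result stored as Q.4
print(may_exceed(z, eps=1e-3))                       # True: up to one Q.4 ulp may be lost
```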
In safety-critical applications that rely on the solution of an optimization problem, the certification of the optimization algorithm is of vital importance. Certification and suboptimality results are available for a wide range of optimization algorithms. However, a typical underlying assumption is that the operations performed by the algorithm are exact, i.e., that there is no numerical error during the mathematical operations, which is hardly a valid assumption in a real hardware implementation. This is particularly true in the case of fixed-point hardware, where computational inaccuracies are not uncommon. This article presents a certification procedure for the proximal gradient method for box-constrained QP problems implemented in fixed-point arithmetic. The procedure provides a method to select the minimal fractional precision required to obtain a certain suboptimality bound, indicating the maximum number of iterations of the optimization method required to obtain it. The procedure makes use of formal verification methods to provide arbitrarily tight bounds on the suboptimality guarantee. We apply the proposed certification procedure to the implementation of a non-trivial model predictive controller on 32-bit fixed-point hardware.
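The object being certified can be sketched as below, with random problem data, an assumed Q-format, and none of the paper's formal-verification machinery: a projected gradient method (for a box constraint, the proximal operator reduces to a projection) whose iterates are stored in fixed point, compared against a double-precision run to expose the precision-induced suboptimality.

```python
# Projected gradient for a box-constrained QP with fixed-point iterates (toy sketch).
import numpy as np

def to_fixed(x, frac_bits):
    """Round to nearest onto a grid with 2**-frac_bits resolution."""
    s = 2.0 ** frac_bits
    return np.round(x * s) / s

def proj_grad(Q, c, lo, hi, iters, frac_bits=None):
    """Projected gradient descent; optionally quantize every stored iterate."""
    L = np.linalg.eigvalsh(Q).max()          # Lipschitz constant of the gradient
    x = np.clip(np.zeros_like(c), lo, hi)
    for _ in range(iters):
        x = np.clip(x - (Q @ x + c) / L, lo, hi)   # prox of the box = projection
        if frac_bits is not None:
            x = to_fixed(x, frac_bits)
    return x

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
Q = A @ A.T + n * np.eye(n)                  # positive definite Hessian
c = rng.standard_normal(n)
lo, hi = -np.ones(n), np.ones(n)

f = lambda x: 0.5 * x @ Q @ x + c @ x
x_ref = proj_grad(Q, c, lo, hi, iters=5000)               # "exact" arithmetic baseline
for frac_bits in (8, 12, 16):                             # candidate fractional precisions
    x_fx = proj_grad(Q, c, lo, hi, iters=5000, frac_bits=frac_bits)
    print(f"{frac_bits} fractional bits: suboptimality {f(x_fx) - f(x_ref):.2e}")
```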
The relation between roundoff noise and attenuation sensitivity in digital filters with fixed-point arithmetic given in a letter by Fettweis (A. Fettweis, IEEE Trans. Circuit Theory, vol. CT-20, pp. 174-175, March 1973) is found to be overly simplified. Some examples where the preceding relation does not hold are given, along with a tentative explanation.
Although double-precision floating-point arithmetic currently dominates high-performance computing, there is increasing interest in smaller and simpler arithmetic types. The main reasons are potential improvements in energy efficiency and memory footprint and bandwidth. However, simply switching to lower-precision types typically results in increased numerical errors. We investigate approaches to improving the accuracy of reduced-precision fixed-point arithmetic types, using examples in an important domain for numerical computation in neuroscience: the solution of ordinary differential equations (ODEs). The Izhikevich neuron model is used to demonstrate that rounding has an important role in producing accurate spike timings from explicit ODE solution algorithms. In particular, fixed-point arithmetic with stochastic rounding consistently results in smaller errors compared to single-precision floating-point and fixed-point arithmetic with round-to-nearest across a range of neuron behaviours and ODE solvers. A computationally much cheaper alternative is also investigated, inspired by the concept of dither, a widely understood mechanism for providing resolution below the least significant bit in digital signal processing. These results will have implications for the solution of ODEs in other subject areas, and should also be directly relevant to the huge range of practical problems that are represented by partial differential equations. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.
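A small illustration of the mechanism behind this result is sketched below, using a linear decay ODE rather than the Izhikevich model and an assumed 10-bit fractional format: when the Euler increment falls below half an ulp, round-to-nearest discards it entirely, while stochastic rounding preserves it in expectation.

```python
# Round-to-nearest vs stochastic rounding for explicit Euler on dx/dt = -x/tau.
import random

FRAC_BITS = 10
ULP = 2.0 ** -FRAC_BITS

def round_nearest(x):
    return round(x / ULP) * ULP

def round_stochastic(x):
    """Round up with probability equal to the distance (in ulps) to the lower grid point."""
    lo = (x // ULP) * ULP
    p_up = (x - lo) / ULP
    return lo + ULP if random.random() < p_up else lo

def decay(rounder, x0=1.0, dt=0.0002, steps=10000, tau=1.0):
    """Explicit Euler for dx/dt = -x/tau with the state held on the fixed-point grid."""
    x = x0
    for _ in range(steps):
        x = rounder(x - dt * x / tau)
    return x

random.seed(0)
exact = (1 - 0.0002) ** 10000                      # what exact-arithmetic Euler would give
print(f"exact Euler        : {exact:.4f}")         # ~0.135
print(f"round-to-nearest   : {decay(round_nearest):.4f}")     # stalls at 1.0
print(f"stochastic rounding: {decay(round_stochastic):.4f}")  # close to exact on average
```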