This paper shows how finite precision arithmetic effects can deleteriously manifest themselves in both the stochastic gradient and the recursive least squares adaptive lattice filters. Closed form expressions are derived for the steady-state variance of the accumulated arithmetic error in a single adaptive lattice coefficient using a floating-point stochastic arithmetic error analysis. Emphasis is placed on the computational form of the time-recursive adaptive lattice coefficients. The analytical results show that the performance of adaptive lattice filters using a direct updating computational form is less sensitive to finite precision effects than that of adaptive lattice filters using an indirect updating computational form. This is shown to be the result of an accumulated arithmetic error that is reduced in the directly updated filter coefficients by the division of an unnormalized residual power estimate. In addition, a method for reducing the self-generated noise is presented. Experimental results obtained on a 32-bit floating-point hardware implementation of the adaptive lattice filters and with computer simulations are included to verify the analytical results describing the effects of finite precision on adaptive lattice filters.
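A toy illustration of how arithmetic error accumulates in a time-recursive coefficient update: a generic exponentially weighted estimator, not the paper's lattice recursions, with `round_to_p` as a crude stand-in for reduced-precision hardware.

```python
import math

def round_to_p(x: float, p: int) -> float:
    """Round x to p significand bits -- a crude model of reduced precision."""
    if x == 0.0:
        return 0.0
    e = math.floor(math.log2(abs(x)))
    scale = 2.0 ** (p - 1 - e)
    return round(x * scale) / scale

def recursive_estimate(samples, p=None):
    """Time-recursive update c <- c + mu*(x - c), optionally at p-bit precision."""
    c, mu = 0.0, 0.01
    for x in samples:
        upd = mu * (x - c)
        if p is None:
            c += upd
        else:                        # round every intermediate result
            c = round_to_p(c + round_to_p(upd, p), p)
    return c

samples = [math.sin(0.1 * t) + 1.0 for t in range(5000)]
exact = recursive_estimate(samples)          # full double precision
coarse = recursive_estimate(samples, p=12)   # 12-bit significand
drift = abs(exact - coarse)                  # accumulated arithmetic error
```

The feedback term (1 - mu) damps old rounding noise, which is the qualitative reason the paper's directly updated coefficients fare better: their per-step error injection is smaller, not their damping.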
It has been suggested in the literature that ordinary finite-precision floating-point arithmetic is inadequate for geometric computation, and that researchers in numerical analysis may believe that the difficulties of error in geometric computation can be overcome by simple approaches. It is the purpose of this paper to show that these suggestions, based on an example showing failure of a certain algorithm for computing planar convex hulls, are misleading, and why this is so. It is first shown how the now-classical backward error analysis can be applied in the area of computational geometry. This analysis is relevant in the context of uncertain data, which may well be the practical context for computational-geometry algorithms such as, say, those for computing convex hulls. The exposition will illustrate the fact that the backward error analysis does not pretend to overcome the problem of finite precision: it merely provides a way to distinguish those algorithms that overcome the problem to whatever extent it is possible to do so. It is then shown that often the situation in computational geometry is exactly parallel to other areas, such as the numerical solution of linear equations, or the algebraic eigenvalue problem. Indeed, the example mentioned can be viewed simply as an example of the use of an unstable algorithm, for a problem for which computational geometry has already discovered provably stable algorithms. Finally, the paper discusses the implications of these analyses for applications in three-dimensional solid modeling. This is done by considering a problem defined in terms of a simple extension of the planar convex-hull algorithm, namely, the verification of the well-formedness of extruded objects. A brief discussion concerning more difficult problems in solid modeling is also included.
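The backward-error viewpoint can be illustrated on a single floating-point addition: the rounded sum is the exact sum of slightly perturbed inputs, with a perturbation on the order of the unit roundoff. A minimal sketch (pushing all the perturbation onto `a` is an arbitrary choice):

```python
from fractions import Fraction

a, b = 0.1, 0.3                      # binary doubles, not exact decimals
s = a + b                            # rounded floating-point sum

# Backward error view: s is the EXACT sum of slightly perturbed inputs.
# Put the whole perturbation on a: a_tilde = s - b exactly, so a_tilde + b == s.
a_exact, b_exact, s_exact = Fraction(a), Fraction(b), Fraction(s)
a_tilde = s_exact - b_exact
rel_backward_error = abs((a_tilde - a_exact) / a_exact)

# The relative perturbation is on the order of the unit roundoff (2**-53).
```

A backward-stable algorithm gives the exact answer to a nearby problem, which is exactly the right notion when the input coordinates are themselves uncertain.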
The effort of proving some mathematical theorems can be greatly reduced by relying on numerically-intensive computations with a certified arithmetic. This article presents a formalization of floating-point arithmetic that makes it possible to efficiently compute inside the proofs of the Coq system. This certified library is a multi-radix and multi-precision implementation free from underflow and overflow. It provides the basic arithmetic operators and a few elementary functions. (C) 2012 Elsevier Inc. All rights reserved.
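As a rough illustration of what a certified arithmetic buys, here is an interval enclosure built from ordinary hardware doubles with outward rounding. This is only the underlying idea, not the Coq library; `iadd` is a hypothetical helper.

```python
import math
from fractions import Fraction

def iadd(a, b):
    """Interval addition with outward rounding: encloses the exact sum."""
    lo = math.nextafter(a[0] + b[0], -math.inf)   # push lower bound down
    hi = math.nextafter(a[1] + b[1], math.inf)    # push upper bound up
    return (lo, hi)

x = (0.1, 0.1)                  # degenerate interval around the double 0.1
acc = (0.0, 0.0)
for _ in range(1000):
    acc = iadd(acc, x)

exact = 1000 * Fraction(0.1)    # exact sum of 1000 copies of the double 0.1
enclosed = Fraction(acc[0]) <= exact <= Fraction(acc[1])
width = acc[1] - acc[0]
```

A proof assistant can trust such a result only because the enclosure property, not the value itself, is what gets verified.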
Scientific and engineering applications rely on floating-point arithmetic to approximate real numbers. Due to the inherent rounding errors in floating-point numbers, error propagation during calculations can accumulate and lead to serious errors that may compromise the safety and reliability of the program. In theory, the most accurate method of error detection is to exhaustively search all possible floating-point inputs, but this is not feasible in practice due to the huge search space involved. Effectively and efficiently detecting maximum floating-point errors has been a challenge. To address this challenge, we design and implement an error detection tool for floating-point arithmetic expressions called HSED. It leverages modified mantissas under double precision floating-point types to simulate hierarchical searches from either half or single precision to double precision. Experimental results show that for 32 single-parameter arithmetic expressions in the FPBench benchmark test set, the error detection effects and performance of HSED are significantly better than the state-of-the-art error detection tools Herbie, S3FP and ATOMU. HSED outperforms Herbie, Herbie+, S3FP and ATOMU in 24, 19, 27 and 25 cases, respectively. The average time taken by Herbie, Herbie+, and S3FP is 1.82, 11.20, and 129.15 times longer than HSED, respectively.
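The mantissa trick can be sketched in a few lines: truncating a double's 52-bit mantissa makes it stand in for a lower-precision value. This shows only the basic bit manipulation, not HSED's search strategy; `truncate_mantissa` is a hypothetical helper name.

```python
import struct

def truncate_mantissa(x: float, keep_bits: int) -> float:
    """Zero all but the top keep_bits of a double's 52-bit mantissa."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    mask = ~((1 << (52 - keep_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
    return y

x = 0.7853981633974483                  # the double nearest pi/4
as_single = truncate_mantissa(x, 23)    # single precision keeps 23 bits
as_half = truncate_mantissa(x, 10)      # half precision keeps 10 bits
err_single = abs(x - as_single)
err_half = abs(x - as_half)
```

Searching at a coarse precision first and refining only promising regions is what makes the hierarchical approach cheaper than searching double-precision inputs directly.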
The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. (C) 2013 Elsevier Ireland Ltd. All rights reserved.
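The truncation half of the tradeoff the optimal step size balances is easy to see in code. This sketch does not reproduce the article's step-size formula; it only shows that, with exact arithmetic dominating, shrinking h shrinks Euler's global error until roundoff accumulation takes over.

```python
import math

def euler(f, y0, t0, t1, n):
    """Explicit Euler with n equal steps: y <- y + h * f(t, y)."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

f = lambda t, y: -y                     # y' = -y, exact solution exp(-t)
err_coarse = abs(euler(f, 1.0, 0.0, 1.0, 10) - math.exp(-1.0))
err_fine = abs(euler(f, 1.0, 0.0, 1.0, 1000) - math.exp(-1.0))
```

With very many steps the per-step rounding errors grow while the truncation error shrinks, which is why a minimum-total-error h exists at all.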
An approach to block floating-point arithmetic in recursive second-order direct form digital filters is proposed. Used in conjunction with residue (or error) feedback, the method gives improved scaling and roundoff noise properties compared to an earlier approach [2]. It is conjectured from extensive simulation studies that the proposed second-order structure is free from finite wordlength oscillations of all types.
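The shared-exponent idea behind block floating-point can be sketched as a plain quantizer: one exponent for the whole block, a short mantissa per sample. This is not the paper's second-order structure with residue feedback; `block_float_quantize` is a hypothetical helper.

```python
import math

def block_float_quantize(block, mantissa_bits):
    """Quantize a block to a shared exponent plus per-sample mantissas."""
    peak = max(abs(v) for v in block)
    if peak == 0.0:
        return list(block)
    shared_exp = math.floor(math.log2(peak)) + 1   # scales block into [-1, 1)
    scale = 2.0 ** (mantissa_bits - shared_exp)
    return [round(v * scale) / scale for v in block]

block = [0.611, -0.224, 0.053, 0.497]
q = block_float_quantize(block, 12)
max_err = max(abs(a - b) for a, b in zip(block, q))
```

Because the exponent is shared, the scaling adapts to the block's peak, which is the source of the improved scaling properties over pure fixed point.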
The problem of reducing the fragility of digital controllers and filters implemented using finite-precision, floating-point arithmetic is considered. Floating-point arithmetic parameter uncertainty is multiplicative, unlike parameter uncertainty resulting from fixed-point arithmetic. Based on first-order eigenvalue sensitivity analysis, an upper bound on the eigenvalue perturbations is derived. Consequently, open-loop and closed-loop eigenvalue sensitivity measures are proposed. These measures are dependent upon the filter/controller realization. Problems of obtaining the optimal realization with respect to both the open-loop and the closed-loop eigenvalue sensitivity measures are posed. The problem for the open-loop case is completely solved. Solutions for the closed-loop case are obtained using non-linear programming. The problems are illustrated with a numerical example.
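Why the realization matters can be seen on a toy second-order recursion with a nominal double pole: a multiplicative coefficient perturbation of one part in 2^12 moves the poles by far more than the coefficient change itself. This is a generic fragility demo, not the paper's sensitivity measures.

```python
import cmath

def poles(a1, a2):
    """Poles of the recursion y[n] = a1*y[n-1] + a2*y[n-2] + x[n]."""
    disc = cmath.sqrt(a1 * a1 + 4.0 * a2)
    return (a1 + disc) / 2.0, (a1 - disc) / 2.0

a1, a2 = 1.8, -0.81                # nominal double pole at z = 0.9
eps = 2.0 ** -12                   # multiplicative (floating-point-style) error
p_nom = poles(a1, a2)
p_pert = poles(a1 * (1 + eps), a2 * (1 + eps))
shift = max(abs(p - q) for p, q in zip(p_nom, p_pert))
```

A direct form with clustered poles is a worst case; a different realization of the same transfer function can make the same relative coefficient error nearly harmless, which is what the optimal-realization problems formalize.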
Assume we use a binary floating-point arithmetic and that RN is the round-to-nearest function. Also assume that c is a constant or a real function of one or more variables, and that we have at our disposal a correctly rounded implementation of c, say ĉ = RN(c). For evaluating x·c (resp. x/c or c/x), the natural way is to replace it by RN(x·ĉ) (resp. RN(x/ĉ) or RN(ĉ/x)), that is, to call function ĉ and to perform a floating-point multiplication or division. This can be generalized to the approximation of n/d by RN(n̂/d̂) and the approximation of n·d by RN(n̂·d̂), where n̂ = RN(n) and d̂ = RN(d), and n and d are functions for which we have at our disposal a correctly rounded implementation. We discuss tight error bounds in ulps of such approximations. From our results, one immediately obtains tight error bounds for calculations such as x·π, ln(2)/x, x/(y+z), (x+y)·z, x/sqrt(y), sqrt(x)/y, (x+y)·(z+t), (x+y)/(z+t), (x+y)/(z·t), etc. in floating-point arithmetic.
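A single data point of the kind of error the paper bounds, measured in ulps with exact rational arithmetic: the two rounding steps in RN(x · RN(π)) for one sample x. The decimal string for π is assumed accurate well beyond double precision.

```python
import math
from fractions import Fraction

# pi to 60 decimal digits -- far beyond double precision.
PI = Fraction("3.141592653589793238462643383279502884197169399375105820974944")

x = 1.5
c_hat = math.pi                    # RN(pi): the correctly rounded double
computed = x * c_hat               # RN(x * RN(pi)): two rounding steps
err_ulps = abs(Fraction(computed) - Fraction(x) * PI) / Fraction(math.ulp(computed))
```

Each rounding contributes at most half an ulp, but the first error is then scaled by the multiplication, which is why the tight combined bounds the paper derives are nontrivial.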
Floating-point arithmetic is a well-known and extremely efficient way of performing approximate computations over the real numbers. Although it requires some careful considerations, floating-point numbers are nowadays routinely used to prove mathematical theorems. Numerical computations have been applied in the context of formal proofs too, as illustrated by the CoqInterval library. But these computations do not benefit from the powerful floating-point units available in modern processors, since they are emulated inside the logic of the formal system. This paper experiments with the use of hardware floating-point numbers for numerically intensive proofs verified by the Coq proof assistant. This gives rise to various questions regarding the formalization, the implementation, the usability, and the level of trust. This approach has been applied to the CoqInterval and ValidSDP libraries, which demonstrates a speedup of at least one order of magnitude.
Leading zero anticipation (LZA) with error correction is a widely adopted technique in the implementation of high-speed IEEE-754-compliant floating-point units (FPUs), which are critical for area and power in multimedia-oriented systems-on-chips. We investigated a novel LZA algorithm allowing us to remove the error correction circuitry by reducing the error rate below a commonly accepted limit for image processing applications, which is not achieved by previous techniques. We embedded our technique into a complete FPU, obtaining both area savings and overall FPU latency reduction with respect to traditional designs.