When dealing with high-frequency time series, statistical procedures giving reliable estimates of unknown parameters and forecasts in real time are required. This is why recursive estimation methods are usually preferred to maximum-likelihood estimators. In the paper, a recursive estimation algorithm for the system parameter of dynamic linear models is proposed. A comparison with some other algorithms is given via Monte Carlo simulations. Consistency properties of the algorithms are also empirically verified. Copyright (C) 1999 John Wiley & Sons, Ltd.
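The abstract does not spell out the proposed algorithm itself, but the general flavour of recursive estimation can be conveyed with a standard recursive least-squares update, in which each new observation adjusts the current parameter estimate instead of refitting the whole series. The sketch below is a generic illustration in Python/NumPy under that assumption; the model, forgetting factor, and variable names are illustrative and not taken from the paper.

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step: fold a new regressor x and
    observation y into the running estimate theta and its matrix P.
    lam is an optional forgetting factor for slowly drifting parameters.
    (Generic textbook update, not the paper's specific algorithm.)"""
    x = x.reshape(-1, 1)
    k = P @ x / (lam + (x.T @ P @ x).item())      # gain vector
    err = y - (x.T @ theta).item()                # one-step prediction error
    theta = theta + k * err
    P = (P - k @ x.T @ P) / lam
    return theta, P

# illustrative use: track y_t = 2*x1_t - 1*x2_t + noise in real time
rng = np.random.default_rng(0)
true_theta = np.array([[2.0], [-1.0]])
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)      # vague initial estimate
for _ in range(500):
    x = rng.normal(size=2)
    y = (x @ true_theta).item() + 0.1 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print(theta.ravel())                              # close to [2, -1]
```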
In the last two decades several NC algorithms for solving basic linear algebraic problems have appeared in the literature. This interest was clearly motivated by the emergence of a parallel computing technology and by the wide applicability of matrix computations. The traditionally adopted computation model, however, ignores the arithmetic aspects of the applications, and no analysis is currently available demonstrating the concrete feasibility of many of the known fast methods. In this paper we give strong evidence to the contrary, on the sole basis of the issue of robustness, indicating that some theoretically brilliant solutions fail the severe test of the "Engineering of algorithms." We perform a comparative analysis of several well-known numerical matrix inversion algorithms under both fixed- and variable-precision models of arithmetic. We show that, for most methods investigated, a typical input leads to poor numerical performance, and that in the exact-arithmetic setting no benefit derives from conditions usually deemed favorable in standard scientific computing. Under these circumstances, the only algorithm admitting sufficiently accurate NC implementations is Newton's iterative method, and the word size required to guarantee worst-case correctness appears to be the critical complexity measure. Our analysis also accounts for the observed instability of the considered superfast methods when implemented with the same floating-point arithmetic that is perfectly adequate for the fixed-precision approach.
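Newton's iterative method mentioned above has a compact statement: starting from X_0 = A^T / (||A||_1 ||A||_inf), iterate X_{k+1} = X_k(2I - A X_k), so that the residual I - A X_k is roughly squared at every step. The sketch below is a plain sequential NumPy rendering of that recurrence, assuming a reasonably conditioned input; it is meant only to show the iteration the abstract refers to, not the NC implementation or the word-size analysis.

```python
import numpy as np

def newton_inverse(A, tol=1e-12, max_iter=100):
    """Newton's iteration X_{k+1} = X_k (2I - A X_k) for the inverse of A.
    The standard starting guess X_0 = A^T / (||A||_1 ||A||_inf) gives
    ||I - A X_0|| < 1, so the residual shrinks quadratically."""
    n = A.shape[0]
    I = np.eye(n)
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    for _ in range(max_iter):
        R = I - A @ X            # residual; drives the correction
        if np.linalg.norm(R) < tol:
            break
        X = X @ (I + R)          # algebraically equal to X (2I - A X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
print(np.round(newton_inverse(A) @ A, 6))   # approximately the identity
```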
Spatial regularity amidst a seemingly chaotic image is often meaningful. Many papers in computational geometry are concerned with detecting some type of regularity via exact solutions to problems in geometric pattern recognition. However, real-world applications often have data that is approximate, and may rely on calculations that are approximate. Thus, it is useful to develop solutions that have an error tolerance. A solution has recently been presented by Robins et al. [Inform. Process. Lett. 69 (1999) 189-195] to the problem of finding all maximal subsets of an input set in the Euclidean plane R^2 that are approximately equally spaced and approximately collinear. This is a problem that arises in computer vision, military applications, and other areas. The algorithm of Robins et al. differs in several important respects from the optimal algorithm given by Kahng and Robins [Pattern Recognition Lett. 12 (1991) 757-764] for the exact version of the problem. The algorithm of Robins et al. seems inherently sequential and runs in O(n^(5/2)) time, where n is the size of the input set. In this paper, we give parallel solutions to this problem. (C) 2001 Elsevier Science B.V. All rights reserved.
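The approximation criteria used by Robins et al. are not reproduced in the abstract, so the sketch below only illustrates the kind of predicate involved: given an ordered sequence of points, test whether they are approximately collinear (within a hypothetical tolerance eps of the line through the endpoints) and approximately equally spaced (consecutive gaps within a hypothetical tolerance delta of their mean). Enumerating all maximal subsets that pass such a test is the harder part that the cited algorithms address.

```python
import math

def approx_collinear_equispaced(pts, eps, delta):
    """Return True if the ordered point sequence pts is eps-approximately
    collinear and delta-approximately equally spaced.
    (Illustrative tolerances; not the definition used by Robins et al.)"""
    if len(pts) < 3:
        return True
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return False
    # perpendicular distance of every point from the line through the endpoints
    for (x, y) in pts:
        if abs(dy * (x - x0) - dx * (y - y0)) / length > eps:
            return False
    # consecutive gaps must all lie within delta of their mean
    gaps = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    mean_gap = sum(gaps) / len(gaps)
    return all(abs(g - mean_gap) <= delta for g in gaps)

print(approx_collinear_equispaced([(0, 0), (1.02, 0.01), (2.0, -0.02), (3.01, 0.0)],
                                  eps=0.05, delta=0.1))   # True
```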
Although lexicographic (lex) variants of greedy algorithms are often P-complete, NC-algorithms are known for the following lex-search problems: lexicographic depth-first search (lex-dfs) for dags [12], [17], and lexicographic breadth-first search (lex-bfs) for dags [12]. For the all-sources version of the problem for dense digraphs, the lex-dfs (lex-bfs, lex-tfs) in [12] is (to within a log factor) work-optimal with respect to the all-sources sequential solution that performs a dfs (bfs, tfs) from every vertex. By contrast, to solve the single-source lexicographic version on inputs of size n, all known NC-algorithms perform work that is at least a factor of n away from the work performed by their sequential counterparts. We present parallel algorithms that solve the single-source version of these lex-search problems in O(log^2 n) time using M(n) processors on the EREW PRAM. (M(n) denotes the number of processors required to multiply two n x n integer matrices in O(log n) time and has O(n^(2.376)) as the tightest currently known bound.) They all offer a polynomial improvement in work efficiency over the corresponding best previously known algorithms and close the gap between the requirements of the best known parallel algorithms for the lex and nonlex versions of the problems. Key to the efficiency of these algorithms is the novel idea of a lex-splitting tree and lex-conquer subgraphs of a dag G from source s. These structures provide a divide-and-conquer skeleton from which NC-algorithms for several lexicographic search problems emerge, in particular an algorithm that places in the class NC the lex-dfs for reducible flow graphs, an interesting class of graphs which arises naturally in connection with code optimization and data flow analysis [4], [19]. A notable aspect of these algorithms is that they solve the lex-search problem instance at hand by efficiently transforming solutions of appropriate instances of (nonlex) path problems. This renders them potentially capable ...
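For reference, the single-source lex-dfs that these parallel algorithms target has a very simple sequential definition: always descend into the smallest-numbered unvisited successor first. The sketch below is that sequential baseline (plain Python, with illustrative vertex labels), not the EREW PRAM algorithm built on lex-splitting trees.

```python
def lex_dfs(adj, source):
    """Sequential lexicographic depth-first search from `source`:
    at every step explore the smallest-numbered unvisited successor first.
    `adj` maps each vertex of a dag to a list of its successors."""
    visited, order = set(), []

    def visit(v):
        visited.add(v)
        order.append(v)
        for w in sorted(adj.get(v, [])):   # lexicographic tie-breaking
            if w not in visited:
                visit(w)

    visit(source)
    return order

# small dag: 0 -> {2, 1}, 1 -> {3}, 2 -> {3}
dag = {0: [2, 1], 1: [3], 2: [3]}
print(lex_dfs(dag, 0))   # [0, 1, 3, 2]
```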
We investigate the problem of finding the 2-D convex hull of a set of points on a coarse-grained parallel computer. Goodrich has devised a parallel sorting algorithm for n items on P processors which achieves an optimal number of communication phases for all ranges of P ≤ n. Ferreira et al. have recently introduced a deterministic convex hull algorithm with a constant number of communication phases for n and P satisfying n ≥ P^(1+ε). Here we present a new parallel 2-D convex hull algorithm with an optimal bound on the number of communication phases for all values of P ≤ n while maintaining optimal local computation time. (C) 2001 Elsevier Science B.V. All rights reserved.
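Coarse-grained convex hull algorithms of this kind typically have each processor compute the hull of its local points with a standard sequential routine and then merge the partial hulls in a small number of communication phases. The sketch below shows only the local step, Andrew's monotone-chain algorithm, as a generic stand-in; it is not the algorithm of Goodrich or of Ferreira et al.

```python
def cross(o, a, b):
    """Z-component of (a - o) x (b - o); positive for a left turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone-chain 2-D convex hull, O(n log n) sequentially."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, drop duplicates

print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```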
We describe an efficient parallel algorithm for hidden-surface removal for terrain maps. The algorithm runs in O(log^4 n) steps on the CREW PRAM model with a work bound of O((n + k) polylog(n)), where n and k are the input and output sizes, respectively. In order to achieve the work bound we use a number of techniques, among which our use of persistent data structures is somewhat novel in the context of parallel algorithms.
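The abstract gives no algorithmic details, but a much simpler 1-D analogue shows why terrain visibility lends itself to parallel techniques: along a single ray from the viewpoint, a terrain sample is hidden exactly when some nearer sample subtends a larger elevation angle, so visibility reduces to a prefix maximum, a classic parallel-scan primitive. The sketch below (illustrative heights and viewpoint) shows only that reduction; it is not the CREW PRAM algorithm for full 2-D terrain maps.

```python
def visible_along_ray(heights, eye_height):
    """1-D analogue of terrain hidden-surface removal: sample i (at
    distance i+1 from the viewer) is visible iff its elevation angle
    exceeds the maximum angle of all nearer samples.  The running
    maximum is a prefix-max, which parallelizes as a scan."""
    visible, best = [], float("-inf")
    for i, h in enumerate(heights):
        angle = (h - eye_height) / (i + 1)   # tangent of the elevation angle
        visible.append(angle > best)
        best = max(best, angle)
    return visible

print(visible_along_ray([1.0, 3.0, 2.0, 5.0, 4.0], eye_height=2.0))
# [True, True, False, True, False]
```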
In this paper, we describe the essential elements of a parallel algorithm for the FDTD method using the MPI (Message Passing Interface) library. To simplify and accelerate the algorithm, an MPI Cartesian 2D topology is used. The inter-process communications are optimized by the use of derived data types. A general approach is also explained for parallelizing the auxiliary tools, such as far-field computation, thin-wire treatment, etc. For PMLs (perfectly matched layers), we have used a new method that makes it unnecessary to split the field components. This considerably simplifies the computer programming, and is compatible with the parallel algorithm.
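The paper itself uses the C MPI library with derived data types; the sketch below is an mpi4py approximation of the same communication skeleton, assuming a single scalar field with one ghost layer per side (the names nx, ny, ez and the use of contiguous copies instead of derived data types are illustrative choices, not taken from the paper). It builds the 2-D Cartesian topology and performs the per-time-step halo exchange with the four neighbours.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
# balanced 2-D process grid, then the Cartesian communicator (non-periodic)
dims = MPI.Compute_dims(comm.Get_size(), 2)
cart = comm.Create_cart(dims, periods=[False, False], reorder=True)

nx, ny = 64, 64                       # local subdomain size (illustrative)
ez = np.zeros((nx + 2, ny + 2))       # field with one ghost row/column per side

def sendrecv_edge(send_edge, dest, source, out_shape):
    """Send one interior edge and receive the matching ghost edge (or None)."""
    recv = np.empty(out_shape)
    cart.Sendrecv(np.ascontiguousarray(send_edge), dest=dest,
                  recvbuf=recv, source=source)
    return recv if source != MPI.PROC_NULL else None

def exchange_halos(f):
    """Swap ghost layers with the four Cartesian neighbours every time step.
    Contiguous copies stand in for the derived data types used in the paper."""
    down, up = cart.Shift(0, 1)       # neighbours along the first grid axis
    left, right = cart.Shift(1, 1)    # neighbours along the second grid axis
    r = sendrecv_edge(f[-2, :], up, down, f[0, :].shape)
    if r is not None: f[0, :] = r     # fill bottom ghost row
    r = sendrecv_edge(f[1, :], down, up, f[-1, :].shape)
    if r is not None: f[-1, :] = r    # fill top ghost row
    r = sendrecv_edge(f[:, -2], right, left, f[:, 0].shape)
    if r is not None: f[:, 0] = r     # fill left ghost column
    r = sendrecv_edge(f[:, 1], left, right, f[:, -1].shape)
    if r is not None: f[:, -1] = r    # fill right ghost column

# inside the FDTD time loop one would update the local E/H fields, then call
exchange_halos(ez)
```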
We describe two highly scalable, parallel software volume-rendering algorithms-one renders unstructured grid volume data and the other renders isosurfaces.
The first boundary value problem for a singularly perturbed parabolic equation of convection-diffusion type on an interval is studied. For the approximation of the boundary value problem we use previously developed finite difference schemes that are ε-uniformly of high order of accuracy with respect to time, based on defect correction. New in this paper is the introduction of a partitioning of the domain for these ε-uniform schemes. We determine the conditions under which the difference schemes, applied independently on subdomains, may accelerate (ε-uniformly) the solution of the boundary value problem without losing the accuracy of the original schemes. Hence, the simultaneous solution on subdomains can in principle be used for parallelization of the computational method.
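The defect-correction idea the abstract relies on can be shown on a much simpler model: solve with a stable low-order operator, then repeatedly correct with the defect of a higher-order discretization. The sketch below does this for a steady 1-D convection-diffusion problem with a first-order upwind operator and a second-order central operator; it is a toy illustration of the principle only, not the paper's time-accurate, ε-uniform schemes on partitioned domains.

```python
import numpy as np

def defect_correction_1d(eps, n=200, sweeps=2):
    """Defect correction for -eps*u'' + u' = 1 on (0,1), u(0)=u(1)=0:
    solve with the stable first-order upwind operator A_low, then improve
    toward the second-order central operator A_high by correcting with
    the defect f - A_high u.  (1-D steady-state toy, not the paper's scheme.)"""
    h = 1.0 / (n + 1)
    e = np.ones(n)
    # tridiagonal operators on the interior nodes
    diff = (eps / h**2) * (np.diag(2 * e) - np.diag(e[:-1], 1) - np.diag(e[:-1], -1))
    upwind = (1 / h) * (np.diag(e) - np.diag(e[:-1], -1))        # backward difference
    central = (1 / (2 * h)) * (np.diag(e[:-1], 1) - np.diag(e[:-1], -1))
    A_low, A_high = diff + upwind, diff + central
    f = np.ones(n)
    u = np.linalg.solve(A_low, f)                    # first-order starting solution
    for _ in range(sweeps):
        u = u + np.linalg.solve(A_low, f - A_high @ u)   # one defect-correction step
    return u

u = defect_correction_1d(eps=1e-2)
print(u.max())    # approximate solution of the convection-diffusion model problem
```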
Improved modelling of ice sheets, by use of high resolution and with representation of inert physical processes, is constrained by long run-times even on the latest single-processor workstation. Parallel processing therefore has a role to play. This paper describes techniques for the parallel processing of ice sheet models and presents design approaches for both the Cray T3 series and other parallel architectures. An implementation of a fully coupled, thermodynamic, 3D ice sheet model is described for the Cray T3D and is shown to be scalable and efficient. (C) 2001 Elsevier Science Ltd. All rights reserved.