In this paper we focus on efficient implementations of the multivariate decomposition method (MDM) for approximating integrals of infinity-variate functions. Such infinity-variate integrals occur, for example, as expectations in uncertainty quantification. Starting with the anchored decomposition $f = \sum_{u \subset \mathbb{N}} f_u$, where the sum is over all finite subsets $u$ of $\mathbb{N}$ and each $f_u$ depends only on the variables $x_j$ with $j \in u$, our MDM algorithm approximates the integral of $f$ by first truncating the sum to some "active set" and then approximating the integrals of the remaining functions $f_u$ term by term using Smolyak or (randomized) quasi-Monte Carlo quadratures. The anchored decomposition allows us to compute $f_u$ explicitly by function evaluations of $f$. Given the specification of the active set and theoretically derived parameters of the quadrature rules, we exploit structures in both the formula for computing $f_u$ and the quadrature rules to develop computationally efficient strategies to implement the MDM in various scenarios. In particular, we avoid repeated function evaluations at the same point. We provide numerical results for a test function to demonstrate the effectiveness of the algorithm.
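For the anchored decomposition, each term can be recovered from point evaluations of $f$ via $f_u(x_u) = \sum_{v \subseteq u} (-1)^{|u| - |v|} f(x_v; a)$, where $(x_v; a)$ agrees with $x$ on the coordinates in $v$ and equals the anchor $a$ elsewhere. The following is a minimal sketch of this MDM skeleton, assuming a given active set and using plain Monte Carlo as a stand-in for the Smolyak or quasi-Monte Carlo rules of the paper; the function names, sample sizes and test integrand are illustrative, not the paper's.

```python
# Minimal MDM-style sketch: the active set, the anchor, the per-set sample
# size and the plain Monte Carlo nodes are illustrative placeholders; in the
# paper they come from a theoretical error/cost analysis and from Smolyak or
# (randomized) QMC rules.
from itertools import combinations
import numpy as np

def anchored_term(f, u, x_u, anchor=0.0):
    """f_u(x_u) = sum over v subset of u of (-1)**(|u|-|v|) * f(x with coords in v
    taken from x_u and all other coords set to the anchor).  f is assumed to
    accept a finite vector and to treat all further variables as anchored."""
    u = tuple(u)
    dim = (max(u) + 1) if u else 1
    total = 0.0
    for r in range(len(u) + 1):
        for v in combinations(u, r):
            x = np.full(dim, anchor)
            for j in v:
                x[j] = x_u[u.index(j)]
            total += (-1) ** (len(u) - len(v)) * f(x)
    return total

def mdm_integral(f, active_set, n_points=1024, anchor=0.0, seed=0):
    """Sum of term-by-term quadrature approximations of the integrals of f_u
    over the unit cube, for u in the given active set."""
    rng = np.random.default_rng(seed)
    estimate = 0.0
    for u in active_set:
        if not u:                                   # f_{emptyset} is the constant f at the anchor
            estimate += anchored_term(f, (), np.empty(0), anchor)
            continue
        nodes = rng.random((n_points, len(u)))      # stand-in for a QMC point set
        estimate += float(np.mean([anchored_term(f, u, x, anchor) for x in nodes]))
    return estimate

# usage: f(x) = prod_j (1 + x_j / 2**(j+2)) with anchor 0 and a small active set
f = lambda x: np.prod(1.0 + x / 2.0 ** (np.arange(len(x)) + 2))
print(mdm_integral(f, [(), (0,), (1,), (0, 1)]))
```

Evaluating each $f_u$ this way reuses $f$ at points that differ only in which coordinates are anchored, which is precisely the structure the paper exploits to avoid repeated function evaluations at the same point.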
We introduce the multivariate decomposition finite element method (MDFEM) for elliptic PDEs with lognormal diffusion coefficients, that is, when the diffusion coefficient has the form $a = \exp(Z)$ where $Z$ is a Gaussian random field defined by an infinite series expansion $Z(\boldsymbol{y}) = \sum_{j \ge 1} y_j \phi_j$ with $y_j \sim \mathcal{N}(0, 1)$ and a given sequence of functions $\{\phi_j\}_{j \ge 1}$. We use the MDFEM to approximate the expected value of a linear functional of the solution of the PDE, which is an infinite-dimensional integral over the parameter space. The proposed algorithm uses the multivariate decomposition method (MDM) to compute the infinite-dimensional integral by a decomposition into finite-dimensional integrals, which we resolve using quasi-Monte Carlo (QMC) methods, and for which we use the finite element method (FEM) to solve different instances of the PDE. We develop higher-order quasi-Monte Carlo rules for integration over the finite-dimensional Euclidean space with respect to the Gaussian distribution by use of a truncation strategy. By linear transformations of interlaced polynomial lattice rules from the unit cube to a multivariate box of the Euclidean space we achieve higher-order convergence rates for functions belonging to a class of anchored Gaussian Sobolev spaces while taking into account the truncation error. These cubature rules are then used in the MDFEM algorithm. Under appropriate conditions, the MDFEM achieves higher-order convergence rates in terms of error versus cost, i.e., to achieve an accuracy of $O(\epsilon)$ the computational cost is $O(\epsilon^{-1/\lambda - d'/\lambda}) = O(\epsilon^{-(p^* + d'/\tau)/(1 - p^*)})$, where $\epsilon^{-1/\lambda}$ and $\epsilon^{-d'/\lambda}$ are respectively the cost of the quasi-Monte Carlo cubature and the finite element approximations, with $d' = d(1 + \delta')$ for some $\delta' \ge 0$ and $d$ the physical dimension, and $0 < p^* \le (2 + d'/\tau)^{-1}$ is a parameter representing the sparsity of $\{\phi_j\}_{j \ge 1}$.
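The truncation strategy can be pictured as follows: a Gaussian integral over $\mathbb{R}^s$ is first cut down to a box $[-T, T]^s$, and the nodes of a QMC rule on the unit cube are mapped linearly onto that box while the standard normal density enters as a weight. The sketch below uses an ordinary randomly shifted rank-1 lattice rule with an illustrative generating vector and an illustrative box size $T$; the paper instead uses interlaced polynomial lattice rules and theoretically chosen truncation parameters.

```python
# Illustrative truncation sketch: QMC nodes on [0,1]^s are mapped linearly
# to the box [-T, T]^s and weighted by the standard normal density.  The
# generating vector, n and T below are placeholders, not values from the paper.
import numpy as np

def gaussian_box_quadrature(g, z, n=4093, T=4.0, seed=0):
    """Approximate E[g(Y)] for Y ~ N(0, I_s), s = len(z), by a randomly
    shifted rank-1 lattice rule transported to the box [-T, T]^s."""
    z = np.asarray(z)
    shift = np.random.default_rng(seed).random(len(z))
    t = np.mod(np.outer(np.arange(n), z) / n + shift, 1.0)   # cube points
    y = -T + 2.0 * T * t                                     # linear map to the box
    density = np.prod(np.exp(-0.5 * y**2) / np.sqrt(2.0 * np.pi), axis=1)
    return float(np.mean(g(y) * density) * (2.0 * T) ** len(z))

# usage: E[exp(sum_j y_j / 2**j)] = exp(0.5 * sum_j 4**(-j)) for s = 4
z = np.mod(76 ** np.arange(4), 4093)          # simple Korobov-type vector (illustrative)
print(gaussian_box_quadrature(lambda y: np.exp(y @ (0.5 ** np.arange(1, 5))), z))
```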
Constructing active sets is a key part of the multivariate decomposition method. An algorithm for constructing optimal or quasi-optimal active sets is proposed in the paper. Numerical experiments show that the new method can provide sets that are significantly smaller than the sets constructed by the previously existing method. The experiments also show that the superposition dimension can, perhaps surprisingly, be very small, at most 3, when the error demand is no smaller than $10^{-3}$ and the weights decay sufficiently fast. (c) 2017 Elsevier Inc. All rights reserved.
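As a point of reference for what an active set looks like, here is a minimal sketch, assuming product weights $\gamma_u = \prod_{j \in u} \gamma_j$ with a non-increasing weight sequence: keep every finite subset whose weight exceeds a cutoff. This is a simple threshold construction, not the (quasi-)optimal algorithm proposed in the paper; the weight sequence and cutoff below are illustrative.

```python
# Threshold construction of an active set for product weights
# gamma_u = prod_{j in u} gamma_j; because adding a variable can only
# decrease the weight, the recursion stops as soon as a subset falls
# below the cutoff.
def active_set_product_weights(gamma, cutoff):
    """gamma: non-increasing per-variable weights gamma_1 >= gamma_2 >= ...;
    returns every finite subset u (as a tuple of 0-based indices) with
    prod_{j in u} gamma[j] > cutoff, including the empty set."""
    sets = []

    def extend(u, weight, start):
        sets.append(tuple(u))
        for j in range(start, len(gamma)):
            w = weight * gamma[j]
            if w <= cutoff:
                break                  # gamma is non-increasing, so later j fail too
            u.append(j)
            extend(u, w, j + 1)
            u.pop()

    extend([], 1.0, 0)
    return sets

# usage: gamma_j = j**-3 (j = 1..20) with cutoff 1e-3
print(active_set_product_weights([j ** -3.0 for j in range(1, 21)], 1e-3))
```

With $\gamma_j = j^{-3}$ and cutoff $10^{-3}$, no retained subset has more than three elements, which mirrors the small superposition dimension observed in the paper's experiments.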
We study integration and $L_2$-approximation of functions of infinitely many variables in the following setting: the underlying function space is the countably infinite tensor product of univariate Hermite spaces and the probability measure is the corresponding product of the standard normal distribution. The maximal domain of the functions from this tensor product space is necessarily a proper subset of the sequence space $\mathbb{R}^{\mathbb{N}}$. We establish upper and lower bounds for the minimal worst case errors under general assumptions; these bounds do match for tensor products of well-studied Hermite spaces of functions with finite or with infinite smoothness. In the proofs we employ embedding results, and the upper bounds are attained constructively with the help of multivariate decomposition methods. (c) 2024 Elsevier Inc. All rights reserved.
Smolyak's method, also known as the sparse grid method, is a powerful tool to tackle multivariate tensor product problems solely with the help of efficient algorithms for the corresponding univariate problem. In this paper we study the randomized setting, i.e., we randomize Smolyak's method. We provide upper and lower error bounds for randomized Smolyak algorithms with explicitly given dependence on the number of variables and the number of information evaluations used. The error criteria we consider are the worst case root mean square error (the typical error criterion for randomized algorithms, sometimes referred to as the "randomized error") and the root mean square worst case error (sometimes referred to as the "worst-case error"). Randomized Smolyak algorithms can be used as building blocks for efficient methods such as multilevel algorithms, multivariate decomposition methods or dimension-wise quadrature methods to successfully tackle high-dimensional or even infinite-dimensional problems. As an example, we provide a very general and sharp result on the convergence rate of $N$th minimal errors of infinite-dimensional integration on weighted reproducing kernel Hilbert spaces. Moreover, we are able to characterize the spaces for which randomized algorithms for infinite-dimensional integration are superior to deterministic ones. We illustrate our findings for the special instance of weighted Korobov spaces. We indicate how these results can be extended, e.g., to spaces of functions whose smooth dependence on successive variables increases ("spaces of increasing smoothness") and to the problem of $L_2$-approximation (function recovery). (C) 2019 Elsevier Inc. All rights reserved.
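In its deterministic form, Smolyak's construction combines a family of univariate rules $U_1, U_2, \dots$ through the combination technique $A(q,d) = \sum_{q-d+1 \le |\boldsymbol{i}| \le q} (-1)^{q-|\boldsymbol{i}|} \binom{d-1}{q-|\boldsymbol{i}|}\, (U_{i_1} \otimes \cdots \otimes U_{i_d})$. The sketch below implements this skeleton for quadrature on $[0,1]^d$ with trapezoidal building blocks; these choices are illustrative only, and the randomized algorithms studied in the paper would replace the univariate rules by randomized ones (e.g., randomly shifted rules).

```python
# Deterministic Smolyak (sparse grid) quadrature via the combination
# technique; the trapezoidal univariate rules are illustrative building
# blocks, not the ones analyzed in the paper.
from itertools import product
from math import comb
import numpy as np

def trapezoid_rule(level):
    """Univariate trapezoidal rule with 2**level + 1 points on [0, 1]."""
    n = 2 ** level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_quadrature(f, d, q):
    """A(q, d) = sum over multi-indices i with q-d+1 <= |i| <= q of
    (-1)**(q-|i|) * C(d-1, q-|i|) * (U_{i_1} x ... x U_{i_d})."""
    total = 0.0
    for i in product(range(1, q + 1), repeat=d):
        k = sum(i)
        if k < q - d + 1 or k > q:
            continue
        coeff = (-1) ** (q - k) * comb(d - 1, q - k)
        nodes, weights = zip(*(trapezoid_rule(ij) for ij in i))
        for idx in product(*(range(len(x)) for x in nodes)):
            point = np.array([nodes[j][idx[j]] for j in range(d)])
            weight = np.prod([weights[j][idx[j]] for j in range(d)])
            total += coeff * weight * f(point)
    return total

# usage: integrate exp(x_1 + x_2 + x_3) over [0,1]^3; the exact value is (e - 1)^3
print(smolyak_quadrature(lambda x: np.exp(x.sum()), d=3, q=6))
```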
We study the numerical integration problem for functions with infinitely many variables. The function spaces of integrands we consider are weighted reproducing kernel Hilbert spaces with norms related to the ANOVA decomposition of the integrands. The weights model the relative importance of different groups of variables. We investigate randomized quadrature algorithms and measure their quality by estimating the randomized worst-case integration error. In particular, we provide lower error bounds for a very general class of randomized algorithms that includes non-linear and adaptive algorithms. Furthermore, we propose new randomized changing dimension algorithms (also called multivariate decomposition methods) and present favorable upper error bounds. For product weights and finite-intersection weights our lower and upper error bounds match and show that our changing dimension algorithms are optimal in the sense that they achieve convergence rates arbitrarily close to the best possible convergence rate. As more specific examples, we discuss unanchored Sobolev spaces of different degrees of smoothness and randomized changing dimension algorithms that use as building blocks interlaced scrambled polynomial lattice rules. Crown Copyright (C) 2014 Published by Elsevier Inc. All rights reserved.
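The interlaced scrambled polynomial lattice rules used as building blocks are a specific randomized QMC family; as a generic stand-in, the sketch below shows the common randomization pattern of repeating a QMC rule over several independent randomizations (here, random shifts of a rank-1 lattice with an illustrative generating vector), so that the spread of the independent estimates serves as a practical proxy for the root mean square error.

```python
# Generic randomized-QMC sketch (random shifts of a rank-1 lattice), not the
# interlaced scrambled polynomial lattice rules of the paper; the generating
# vector and sample sizes are illustrative placeholders.
import numpy as np

def shifted_lattice_estimates(f, z, n=4093, R=8, seed=1):
    """Return R independent estimates of int_{[0,1]^s} f(x) dx, s = len(z)."""
    rng = np.random.default_rng(seed)
    base = np.mod(np.outer(np.arange(n), np.asarray(z)) / n, 1.0)  # lattice points
    estimates = []
    for _ in range(R):
        pts = np.mod(base + rng.random(len(z)), 1.0)               # random shift
        estimates.append(np.mean([f(x) for x in pts]))
    return np.array(estimates)

# usage: integrand with exact integral 1; the standard error of the R shifted
# estimates is a practical proxy for the root mean square error
ests = shifted_lattice_estimates(lambda x: np.prod(1.0 + (x - 0.5)), z=[1, 1517, 2707])
print(ests.mean(), ests.std(ddof=1) / np.sqrt(len(ests)))
```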