This paper introduces a "kernel-independent" interpolative decomposition butterfly factorization (IDBF) as a data-sparse approximation for matrices that satisfy a complementary low-rank property. The IDBF ca...
详细信息
This paper introduces a "kernel-independent" interpolative decomposition butterfly factorization (IDBF) as a data-sparse approximation for matrices that satisfy a complementary low-rank property. The IDBF can be constructed in O(N logN) operations for an N x N matrix via hierarchical interpolative decompositions (IDs) if matrix entries can be sampled individually and each sample takes O(1) operations. The resulting factorization is a product of O(logN) sparse matrices, each with O(N) nonzero entries. Hence, it can be applied to a vector rapidly in O(N logN) operations. IDBF is a general framework for nearly optimal fast matrix-vector multiplication (matvec), which is useful in a wide range of applications, e.g., special function transformation, Fourier integral operators, and high-frequency wave computation. Numerical results are provided to demonstrate the effectiveness of the butterfly factorization and its construction algorithms.
This paper focuses on the fast evaluation of the matrix-vector multiplication (matvec) g = Kf for K in C^(N x N), which is the discretization of a multidimensional oscillatory integral transform g(x) = ∫ K(x, ξ) f(ξ) dξ with a kernel function K(x, ξ) = e^(2πi Φ(x, ξ)), where Φ(x, ξ) is a piecewise smooth phase function with x and ξ in R^d for d = 2 or 3. A new framework is introduced to compute Kf with O(N log N) time and memory complexity in the case that only indirect access to the phase function Φ is available. This framework consists of two main steps: 1) an O(N log N) algorithm for recovering the multidimensional phase function Φ from indirect access is proposed; 2) a multidimensional interpolative decomposition butterfly factorization (MIDBF) is designed to evaluate the matvec Kf with O(N log N) complexity once Φ is available. Numerical results are provided to demonstrate the effectiveness of the proposed framework.
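For concreteness, here is a minimal sketch of the object being accelerated (not of the proposed framework itself): assembling the dense kernel K(x, ξ) = e^(2πi Φ(x, ξ)) on an n x n grid in d = 2 and applying it directly, which costs O(N^2) work and storage; the MIDBF reduces this to O(N log N). The perturbed phase function used here is a hypothetical example, not one from the paper.

```python
import numpy as np

# N = n^2 points in [0,1)^2 for x; the same grid scaled by n for the frequencies xi
n = 16
N = n * n
grid = np.stack(np.meshgrid(np.arange(n), np.arange(n), indexing='ij'),
                axis=-1).reshape(N, 2) / n
xs = grid                     # spatial samples x_i
xis = grid * n                # frequency samples xi_j

def phase(x, xi):
    # hypothetical smooth, non-separable perturbation of the Fourier phase x . xi
    return x @ xi.T + 0.1 * np.sin(2 * np.pi * x[:, :1]) * np.cos(2 * np.pi * xi[:, 1:].T)

K = np.exp(2j * np.pi * phase(xs, xis))   # dense N x N kernel: O(N^2) storage
f = np.random.randn(N)
g = K @ f                                 # direct matvec: O(N^2) work
```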
This paper introduces the interpolative butterfly factorization for nearly optimal implementation of several transforms in harmonic analysis, when their explicit formulas satisfy certain analytic properties and the matrix representations of these transforms satisfy a complementary low-rank property. A preliminary interpolative butterfly factorization is constructed based on interpolative low-rank approximations of the complementary low-rank matrix. A novel sweeping matrix compression technique further compresses the preliminary interpolative butterfly factorization via a sequence of structure-preserving low-rank approximations. The sweeping procedure propagates the low-rank property among neighboring matrix factors to compress dense submatrices in the preliminary butterfly factorization to obtain an optimal one in the butterfly scheme. For an N x N matrix, it takes O(N log N) operations and memory to construct the factorization as a product of O(log N) sparse matrices, each with O(N) nonzero entries. Hence, it can be applied rapidly in O(N log N) operations. Numerical results are provided to demonstrate the effectiveness of this algorithm.
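The complementary low-rank property that these factorizations rely on can be checked numerically. The sketch below (an illustration, not part of any paper's algorithm) measures the numerical rank of row-block/column-block pairs of the DFT matrix whose sizes multiply to N; the tolerance 1e-10 is an arbitrary choice. The rank stays small and essentially level-independent, which is exactly what the butterfly scheme exploits.

```python
import numpy as np

# Complementary low-rank property of the DFT matrix: a row block of size N/2^l
# paired with a column block of size 2^l has small numerical rank at every level l.
N = 1024
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N)
tol = 1e-10
for l in range(0, 11, 2):
    rows = slice(0, N // 2**l)        # one row block at level l
    cols = slice(0, 2**l)             # one complementary column block
    s = np.linalg.svd(F[rows, cols], compute_uv=False)
    print(l, int(np.sum(s > tol * s[0])))   # numerical rank stays O(1)
```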
The paper introduces the butterfly factorization as a data-sparse approximation for matrices that satisfy a complementary low-rank property. The factorization can be constructed efficiently if either fast algorithms for applying the matrix and its adjoint are available or the entries of the matrix can be sampled individually. For an N x N matrix, the resulting factorization is a product of O(log N) sparse matrices, each with O(N) nonzero entries. Hence, it can be applied rapidly in O(N log N) operations. Numerical results are provided to demonstrate the effectiveness of the butterfly factorization and its construction algorithms.
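A familiar special case makes the "product of O(log N) sparse factors" structure concrete: the radix-2 FFT writes the DFT matrix as log2(N) sparse butterfly factors times a bit-reversal permutation, each factor with two nonzeros per row. The sketch below builds these factors explicitly and applies them to a vector; it illustrates the factored form and its O(N log N) application cost, not the paper's construction algorithm.

```python
import numpy as np
import scipy.sparse as sp

def butterfly_factors(N):
    """Sparse factors S_1, ..., S_L and permutation R with
    F_N = S_L @ ... @ S_1 @ R (unnormalized DFT matrix)."""
    L = int(np.log2(N))
    assert 2**L == N, "N must be a power of two"
    # bit-reversal permutation R: (R x)[i] = x[bitrev(i)]
    rev = [int(format(i, f'0{L}b')[::-1], 2) for i in range(N)]
    R = sp.csr_matrix((np.ones(N), (np.arange(N), rev)), shape=(N, N))
    factors = []
    for l in range(1, L + 1):
        m = 2**l                          # butterfly size at this stage
        half = m // 2
        D = sp.diags(np.exp(-2j * np.pi * np.arange(half) / m))
        I = sp.identity(half, format='csr')
        B = sp.bmat([[I, D], [I, -D]], format='csr')      # m x m butterfly block
        factors.append(sp.kron(sp.identity(N // m), B, format='csr'))
    return factors, R

N = 256
factors, R = butterfly_factors(N)
x = np.random.randn(N) + 1j * np.random.randn(N)
y = R @ x
for S in factors:                         # O(log N) factors, O(N) nonzeros each
    y = S @ y
print(np.allclose(y, np.fft.fft(x)))      # True: matches the dense DFT applied to x
```

The butterfly factorization generalizes this structure to any matrix with the complementary low-rank property, with the factors obtained from low-rank approximations rather than from an explicit formula.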
Randomized sampling has recently been proven a highly efficient technique for computing approximate factorizations of matrices that have low numerical rank. This paper describes an extension of such techniques to a wider class of matrices that are not themselves rank-deficient but have off-diagonal blocks that are; specifically, the class of so-called hierarchically semiseparable (HSS) matrices. HSS matrices arise frequently in numerical analysis and signal processing, particularly in the construction of fast methods for solving differential and integral equations numerically. The HSS structure admits algebraic operations (matrix-vector multiplications, matrix factorizations, matrix inversion, etc.) to be performed very rapidly, but only once the HSS representation of the matrix has been constructed. How to rapidly compute this representation in the first place is much less well understood. The present paper demonstrates that if an N x N matrix can be applied to a vector in O(N) time, and if individual entries of the matrix can be computed rapidly, then provided that an HSS representation of the matrix exists, it can be constructed in O(N k^2) operations, where k is an upper bound for the numerical rank of the off-diagonal blocks. The point is that when legacy codes (based on, e.g., the fast multipole method) can be used for the fast matrix-vector multiply, the proposed algorithm can be used to obtain the HSS representation of the matrix, and then well-established techniques for HSS matrices can be used to invert or factor the matrix.
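The core randomized-sampling step can be sketched for a single off-diagonal block: probe the block through a handful of random matrix-vector products and orthogonalize the results to capture its column space (a standard randomized range finder). The block, target rank, and oversampling parameter below are synthetic stand-ins; the paper combines such probes with individual entry evaluations to assemble the full HSS representation.

```python
import numpy as np

# Randomized range finding for one low-rank off-diagonal block, accessed only
# through matrix-vector products with random test vectors.
rng = np.random.default_rng(0)
n, k, p = 500, 15, 10                  # block size, target rank, oversampling
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))
A_offdiag = U @ V.T                    # stand-in for a rank-k off-diagonal block

Omega = rng.standard_normal((n, k + p))        # random test matrix
Y = A_offdiag @ Omega                          # k + p matvecs with the block
Q, _ = np.linalg.qr(Y)                         # orthonormal basis for its range
B = Q.T @ A_offdiag                            # small (k + p) x n factor
print(np.linalg.norm(A_offdiag - Q @ B) / np.linalg.norm(A_offdiag))  # ~ machine eps
```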