Krylov subspace methods for approximating a matrix function f(A) times a vector v are analyzed in this paper. For the Arnoldi approximation to e^{-τA}v, two reliable a posteriori error estimates are derived from the new bounds and the generalized error expansion we establish. One of them is similar to the residual norm of an approximate solution of a linear system, and the other is determined critically by the first term of the error expansion of the Arnoldi approximation to e^{-τA}v due to Saad. We prove that each of the two estimates reliably measures the true error norm, and the second one theoretically justifies an empirical claim by Saad. By introducing certain functions φ_k(z), defined recursively from the given function f(z) at certain nodes, we obtain the error expansion of the Krylov-like approximation for sufficiently smooth f(z), which generalizes Saad's result on the Arnoldi approximation to e^{-τA}v. Similarly, it is shown that the first term of the generalized error expansion can be used as a reliable a posteriori estimate for the Krylov-like approximation to some other matrix functions times v. Numerical examples are reported to demonstrate the effectiveness of the a posteriori error estimates for the Krylov-like approximations to e^{-τA}v, cos(A)v and sin(A)v.
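To make the second estimate concrete, here is a minimal Python sketch (my own illustration, not the paper's implementation, with no breakdown handling) of the Arnoldi approximation to e^{-τA}v together with the standard first-term estimate τ β h_{m+1,m} |e_m^T φ_1(-τH_m) e_1|, where φ_1(z) = (e^z - 1)/z and β = ||v||; the test matrix, the Krylov dimension m and the value of τ below are arbitrary:

    import numpy as np
    from scipy.linalg import expm

    def arnoldi(A, v, m):
        # m steps of Arnoldi:  A V_m = V_m H_m + h_{m+1,m} v_{m+1} e_m^T
        n = v.shape[0]
        V = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        beta = np.linalg.norm(v)
        V[:, 0] = v / beta
        for j in range(m):
            w = A @ V[:, j]
            for i in range(j + 1):               # modified Gram-Schmidt
                H[i, j] = V[:, i] @ w
                w = w - H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            V[:, j + 1] = w / H[j + 1, j]
        return V, H, beta

    def expv_with_estimate(A, v, tau, m):
        # Arnoldi approximation  beta * V_m exp(-tau H_m) e_1  and the
        # estimate  tau * beta * h_{m+1,m} * |e_m^T phi_1(-tau H_m) e_1|
        V, H, beta = arnoldi(A, v, m)
        M = -tau * H[:m, :m]
        approx = beta * V[:, :m] @ expm(M)[:, 0]
        # phi_1(M) e_1 via the augmented-matrix identity: the last column
        # of expm([[M, e_1], [0, 0]]) holds phi_1(M) e_1
        aug = np.zeros((m + 1, m + 1))
        aug[:m, :m] = M
        aug[0, m] = 1.0
        phi1_e1 = expm(aug)[:m, m]
        estimate = tau * beta * H[m, m - 1] * abs(phi1_e1[m - 1])
        return approx, estimate

    rng = np.random.default_rng(0)
    n, m, tau = 300, 30, 0.1
    A = 2.0 * np.eye(n) + rng.standard_normal((n, n)) / np.sqrt(n)
    v = rng.standard_normal(n)
    approx, est = expv_with_estimate(A, v, tau, m)
    print("true error:", np.linalg.norm(expm(-tau * A) @ v - approx), " estimate:", est)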
This paper discusses the systolic implementation of the computation of the exponential of a matrix by means of techniques involving “scaling and squaring” as applied to the Taylor series approximation. Further, it is shown that a number of other matrix functions, such as A^{-1}, A^{1/2}, A^{-1/2}, cos(A), sin(A), log(A), can be computed systolically using similar techniques.
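The contribution of the paper is the systolic mapping itself; for reference, a minimal serial Python sketch of the underlying scheme (a truncated Taylor series combined with scaling and squaring) could look as follows. The truncation order m = 12 and the test matrix are arbitrary choices, not taken from the paper:

    import numpy as np
    from scipy.linalg import expm   # reference only

    def expm_taylor_ss(A, m=12):
        # exp(A) = (exp(A / 2^s))^(2^s): scale A so that its 1-norm is at most 1,
        # approximate exp of the scaled matrix by a degree-m Taylor polynomial,
        # then undo the scaling by squaring the result s times.
        norm = np.linalg.norm(A, 1)
        s = int(np.ceil(np.log2(norm))) if norm > 1 else 0
        B = A / 2.0 ** s
        T = np.eye(A.shape[0])
        term = np.eye(A.shape[0])
        for k in range(1, m + 1):
            term = term @ B / k                  # B^k / k!
            T = T + term
        for _ in range(s):
            T = T @ T
        return T

    A = np.random.default_rng(1).standard_normal((6, 6))
    print(np.linalg.norm(expm_taylor_ss(A) - expm(A)))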
The purpose of this paper is to develop a new approach to certain approximation and factorization problems for matrix-valued functions. The approach is based on studying properties of maximizing vectors of Hankel operators with matrix-valued symbols and on the solution of the so-called recovery problem for unitary-valued matrix functions. In the case of scalar functions such problems were studied in detail in [PK]. It turns out, however, that the case of matrix functions is considerably more complicated than the scalar case.
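For orientation, the central objects can be stated in standard notation (textbook background, not results specific to this paper): for a bounded matrix-valued symbol Φ on the unit circle, the Hankel operator is

    H_Φ : H^2(C^n) → H^2_-(C^m),    H_Φ f = P_-(Φ f),

where P_- is the orthogonal projection onto H^2_-; by the Nehari–Page theorem its norm equals the distance to the bounded analytic matrix functions,

    ||H_Φ|| = dist_{L^∞}(Φ, H^∞),

and a maximizing vector of H_Φ is any f ≠ 0 with ||H_Φ f||_2 = ||H_Φ|| ||f||_2. The approximation and factorization problems of the paper are phrased in terms of such maximizing vectors when Φ is matrix-valued.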
A matrix power series may converge slowly or even diverge if some eigenvalues of the matrix are near the boundary of, or outside, the disk of convergence. In this case it is proposed to apply suitably chosen summability methods to accelerate or generate convergence; special attention is paid to Euler methods. The matrix logarithm appearing in connection with stationary Markov chains is considered as an example.
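As an illustration of the effect (a sketch of an Euler-type transformation applied term by term, not the algorithm of the paper), the Python code below compares the plain Mercator series for log(I+X) with its Euler-transformed version on a small test matrix whose dominant eigenvalue lies close to the boundary of the unit disk; the matrix is an arbitrary choice:

    import numpy as np
    from math import comb
    from scipy.linalg import logm

    def log_taylor(X, n_terms):
        # truncated Mercator series  log(I+X) = sum_{k>=1} (-1)^{k+1} X^k / k
        S = np.zeros_like(X)
        P = np.eye(X.shape[0])
        for k in range(1, n_terms + 1):
            P = P @ X
            S += (-1.0) ** (k + 1) * P / k
        return S

    def log_euler(X, n_terms):
        # Euler (binomial) transform of the same series:
        # log(I+X) = sum_{n>=0} 2^{-(n+1)} sum_{j=0}^{n} C(n,j) (-1)^j X^{j+1}/(j+1)
        powers = []
        P = np.eye(X.shape[0])
        for j in range(n_terms):
            P = P @ X
            powers.append(P / (j + 1))           # X^{j+1} / (j+1)
        S = np.zeros_like(X)
        for n in range(n_terms):
            inner = sum(comb(n, j) * (-1.0) ** j * powers[j] for j in range(n + 1))
            S += inner / 2.0 ** (n + 1)
        return S

    X = np.array([[0.95, 0.30],
                  [0.00, 0.50]])                 # eigenvalue 0.95, near the unit circle
    ref = logm(np.eye(2) + X)
    for m in (10, 20, 40):
        print(m, np.linalg.norm(log_taylor(X, m) - ref),
                 np.linalg.norm(log_euler(X, m) - ref))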
A new condition estimation procedure for general matrix functions is presented that accurately gauges sensitivity by measuring the effect of random perturbations at the point of evaluation. In this procedure the number of extra function evaluations used to evaluate the condition estimate determines the order of the estimate. That is, the probability that the estimate is off by a given factor is inversely proportional to the factor raised to the order of the method. The "transpose-free" nature of this new method allows it to be applied to a broad range of problems in which the function maps between spaces of different dimensions. This is in sharp contrast to the more common power-method condition estimation procedure, which is limited, in the usual case where the Fréchet derivative is known only implicitly, to maps between spaces of equal dimension. A group of examples illustrates the flexibility of the new estimation procedure in handling a variety of problems and types of sensitivity estimates, such as mixed and componentwise condition estimates.
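A stripped-down Python illustration of the basic idea (random directional finite differences; the sampling constants and the order-of-estimate machinery of the actual procedure are omitted, and the function name and test matrix are mine):

    import numpy as np
    from scipy.linalg import expm

    def condest_random(f, A, samples=3, delta=1e-6, seed=0):
        # Gauge the sensitivity of f at A by measuring the effect of a few
        # random unit-norm perturbations:  ||f(A + delta*E) - f(A)|| / delta
        # approximates the norm of the Frechet derivative of f at A in direction E.
        rng = np.random.default_rng(seed)
        fA = f(A)
        estimates = []
        for _ in range(samples):
            E = rng.standard_normal(A.shape)
            E /= np.linalg.norm(E, 'fro')
            dF = (f(A + delta * E) - fA) / delta
            estimates.append(np.linalg.norm(dF, 'fro'))
        return max(estimates)                    # crude absolute condition estimate

    A = np.random.default_rng(1).standard_normal((40, 40)) / 6.0
    print("sensitivity estimate for expm at A:", condest_random(expm, A))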
ISBN (print): 9798400700392
Structured matrices play a relevant role in symbolic and numerical computations. In the literature and in applications we encounter several types of structure, which are typically related to the properties of the problems they stem from: banded structure is often associated with locality of functions or operators; Toeplitz structure arises from shift-invariance properties; off-diagonal low-rank structure appears in inverses of banded matrices. A common trait of most matrix structures is the availability of fast algorithms that perform fundamental operations, such as matrix-vector or matrix-matrix multiplication, solution of linear systems, or eigenvalue computation. For problems of large size, such algorithms are extremely useful both from a symbolic and a numeric point of view, although of course in a numerical setting one needs to pay attention to possible stability issues. The tutorial will start with a brief overview of matrix structures and of their properties: we are especially interested here in rank structures and in certain forms of sparsity. We will then focus on selected topics concerning the analysis and computation of functions of structured matrices. Functions of matrices have a wide range of applications, for which we will give several examples, from the solution of differential equations to network analysis. Think for instance of the matrix exponential exp(A): it appears in the solution of a multidimensional Cauchy problem with coefficient matrix A, but it also has a combinatorial meaning when A is the adjacency matrix of a graph. If the quantity of interest is the action of a matrix function on a vector, moreover, one can often bypass the explicit construction of the matrix function itself and devise a more efficient approach. Most methods for the computation of matrix functions are ultimately based on polynomial or rational approximation, which are either applied in explicit form, or through iterative methods such as Lanczos or Arnoldi, often in co…
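For example, the action exp(A)v for a large sparse adjacency matrix can be computed without ever forming exp(A); a short Python sketch using SciPy's expm_multiply (the path graph below is just a placeholder example):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import expm_multiply

    # Adjacency matrix of a path graph with n nodes: banded and very sparse.
    n = 100_000
    ones = np.ones(n - 1)
    A = sp.diags([ones, ones], offsets=[-1, 1], format='csr')

    # Action of the matrix exponential on a vector, without forming exp(A):
    # with v = e_0, entry i of the result is sum_k (A^k)_{i0} / k!, a weighted
    # count of the walks from node 0 to node i.
    v = np.zeros(n)
    v[0] = 1.0
    w = expm_multiply(A, v)
    print(w[:5])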
ISBN (print): 9780986041921
Many companies use the MS Office programme at various levels of their internal hierarchy. This programme includes the MS Excel table processor, a tool for processing data in tabular form. However widely this data processing tool may be used, its utilization hardly ever goes beyond its built-in functions. Based on a survey I have conducted in companies I worked for as a competitiveness consultant, I have reached the conclusion that MS Excel is far from being used to its full potential. I have found that around 95 % of companies using MS Excel have never used Visual Basic for Applications (VBA), the internal programming language of the MS Office package. The main reason why such a high percentage of employees do not use the VBA programming language is simple: they have not been informed about its possibilities by their provider (employer). It is quite understandable that someone who has no experience with programming or algorithmization will be reluctant to create their own user functions. Paradoxically, however, I have found the same situation in companies which have their own IT departments responsible for software development. Therefore, one of the aims of this article is to show how VBA can be used in bulk data processing, and in particular to create matrix functions simulating database functionality. I have used some of the existing MS Excel internal functions to draw a comparison with my proposal.
The main goal of this dissertation is to develop efficient numerical methods to approximate matrix functions of the form f(A)v and matrix functionals of the form v^T f(A)^T g(A)v, where A is a large symmetric or nonsymmetric matrix, f and g are analytic in a large enough simply connected set in the complex plane, and v is a given vector. Additionally, we estimate the error caused by the approximation of matrix functions of the form f(A)v when A is a large matrix and computing f(A) is complicated and computationally expensive. First, we review the available approximation methods based on the Lanczos process for v^T f(A)v and f(A)v when A is a large symmetric matrix, f is analytic in a large enough simply connected set in the complex plane, and v is a vector. Afterward, we present new methods that give higher accuracy than available techniques. When A is a large matrix and we use the Lanczos process for the approximation, the majority of the computational effort comes from the evaluation of matrix-vector products with A. We will show that the suggested methods can obtain higher accuracy with the same number of matrix-vector products compared with existing methods. Next, we focus on estimating the error of the approximation of the matrix function f(A)v when A is a large symmetric matrix. Determining the accuracy of any approximation method can be critical. As part of this dissertation, we describe a new method based on the Lanczos algorithm to estimate the error of the approximation. Also, we present new approaches based on the Arnoldi process for approximating f(A)v and v^T f(A)^T g(A)v when A is a large nonsymmetric matrix. These new Arnoldi-based methods give a more accurate approximation than existing ones for essentially the same computational cost. We will show that in certain cases, the accuracy of the approximations of v^T f(A)^T g(A)v and f(A)v increases almost as much as when an additional Arnoldi step is performed.
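For context, the textbook Lanczos baselines that the dissertation builds on read f(A)v ≈ ||v|| V_m f(T_m) e_1 and v^T f(A)v ≈ ||v||^2 e_1^T f(T_m) e_1; a short Python sketch for symmetric A and f = exp (the function names and test matrix are mine, and full reorthogonalization is used for simplicity):

    import numpy as np
    from scipy.linalg import expm

    def lanczos(A, v, m):
        # m steps of Lanczos:  A V_m = V_m T_m + beta_m v_{m+1} e_m^T,
        # with T_m symmetric tridiagonal (full reorthogonalization for robustness).
        n = v.shape[0]
        beta0 = np.linalg.norm(v)
        V = np.zeros((n, m))
        alpha = np.zeros(m)
        beta = np.zeros(m - 1)
        V[:, 0] = v / beta0
        for j in range(m):
            w = A @ V[:, j]
            alpha[j] = V[:, j] @ w
            w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)   # orthogonalize against all v_i
            if j < m - 1:
                beta[j] = np.linalg.norm(w)
                V[:, j + 1] = w / beta[j]
        T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
        return V, T, beta0

    def lanczos_fun(A, v, m, f=expm):
        V, T, beta0 = lanczos(A, v, m)
        fT = f(T)
        return beta0 * V @ fT[:, 0], beta0 ** 2 * fT[0, 0]   # f(A)v, v^T f(A)v

    rng = np.random.default_rng(2)
    n, m = 400, 25
    B = rng.standard_normal((n, n))
    A = (B + B.T) / (2.0 * np.sqrt(n))           # symmetric test matrix
    v = rng.standard_normal(n)
    fAv, quad = lanczos_fun(A, v, m)
    exact = expm(A) @ v
    print(np.linalg.norm(exact - fAv), abs(v @ exact - quad))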
We introduce a method for calculating individual elements of matrix functions. Our technique makes use of a novel series expansion for the action of matrix functions on basis vectors that is memory efficient even for very large matrices. We showcase our approach by calculating the matrix elements of the exponential of a transverse-field Ising model and evaluating quantum transition amplitudes for large many-body Hamiltonians of sizes up to 2^{64} × 2^{64} on a single workstation. We also discuss the application of the method to matrix inverses. We relate and compare our method to the state of the art and demonstrate its advantages. We also discuss practical applications of our method.
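The series expansion itself is the paper's construction; as a generic point of comparison, a single element [f(A)]_{ij} = e_i^T f(A) e_j can already be obtained from matrix-vector products alone, for instance with a plain truncated Taylor series for f = exp, keeping only one working vector in memory. A Python sketch with an arbitrary sparse test matrix of moderate norm (so that the plain series converges):

    import numpy as np
    import scipy.sparse as sp

    def expA_element(A, i, j, n_terms=30):
        # [exp(A)]_{ij} = e_i^T exp(A) e_j, accumulated from the Taylor series
        # applied to the basis vector e_j; x holds (A^k e_j) / k! at step k.
        n = A.shape[0]
        x = np.zeros(n)
        x[j] = 1.0
        total = x[i]
        for k in range(1, n_terms + 1):
            x = A @ x / k
            total += x[i]
        return total

    # Sparse tridiagonal test matrix; only matrix-vector products are needed.
    n = 200_000
    d = 0.5 * np.ones(n - 1)
    A = sp.diags([d, -0.3 * np.ones(n), d], offsets=[-1, 0, 1], format='csr')
    print(expA_element(A, 5, 0))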