We extend the error bounds from Chen et al. (SIAM J. Matrix Anal. Appl. 43(2):787-811, 2022) for the Lanczos method for matrix function approximation to the block algorithm. Numerical experiments suggest that our bounds are fairly robust to changes in block size and have the potential for use as a practical stopping criterion. Further experiments work toward a better understanding of how certain hyperparameters should be chosen in order to maximize the quality of the error bounds, even in the previously studied block-size-one case.
Matrix functions of the form f(A)v, where A is a large symmetric matrix, f is a function, and v ≠ 0 is a vector, are commonly approximated by first applying a few, say n, steps of the symmetric Lanczos process to A with the initial vector v in order to determine an orthogonal section of A. The latter is represented by a (small) n x n tridiagonal matrix to which f is applied. This approach uses the first n Lanczos vectors provided by the Lanczos process. However, n steps of the Lanczos process yield n + 1 Lanczos vectors. This paper discusses how the (n+1)st Lanczos vector can be used to improve the quality of the computed approximation of f(A)v. The approximation of expressions of the form v^T f(A) v is also considered.
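For reference, the standard n-step Lanczos approximation of f(A)v that this paper improves upon can be sketched as follows. This is a minimal numpy sketch of the basic scheme only (not the enhanced variant using the (n+1)st vector); the full reorthogonalization step is an implementation choice added here for numerical robustness.

```python
import numpy as np

def lanczos_fAv(A, v, n, f):
    """Approximate f(A) v with n steps of the symmetric Lanczos process.

    Builds an orthonormal basis V of the Krylov subspace
    span{v, Av, ..., A^(n-1) v} and the n x n tridiagonal matrix
    T = V^T A V, then returns ||v|| * V f(T) e_1.
    """
    m = A.shape[0]
    V = np.zeros((m, n))
    alpha = np.zeros(n)
    beta = np.zeros(max(n - 1, 0))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(n):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        # full reorthogonalization for numerical robustness
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # f(T) e_1 via the eigendecomposition of the small tridiagonal matrix
    theta, S = np.linalg.eigh(T)
    return np.linalg.norm(v) * (V @ (S @ (f(theta) * S[0, :])))

# Example: approximate exp(A) v for a random symmetric A
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = (B + B.T) / 2
v = rng.standard_normal(50)
approx = lanczos_fAv(A, v, 30, np.exp)
```

Only the small tridiagonal matrix T is passed to f; the large matrix A enters only through matrix-vector products.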
An efficient numerical method is developed for evaluating phi(A), where A is a symmetric matrix and phi is the function defined by phi(x) = (e^x - 1)/x = 1 + x/2 + x^2/6 + .... This matrix function is useful in the so-called exponential integrators for differential equations. In particular, it is related to the exact solution of the ODE system dy/dt = Ay + b, where A and b are t-independent. Our method avoids the eigenvalue decomposition of the matrix A and requires about 10n^3/3 operations for a general symmetric n x n matrix. When the matrix is tridiagonal, the required number of operations is only O(n^2), and it can be further reduced to O(n) if only a column of the matrix function is needed. These efficient schemes for tridiagonal matrices are particularly useful when the Lanczos method is used to calculate the product of this matrix function (for a large symmetric matrix) with a given vector.
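As an illustration of what is being computed, here is a simple reference evaluation of phi(A) via the eigendecomposition; note this is exactly the decomposition that the paper's scheme avoids, and the singularity threshold is an arbitrary choice for the sketch.

```python
import numpy as np

def phi_sym(A):
    """Reference evaluation of phi(A) for symmetric A via eigendecomposition.

    phi(x) = (e^x - 1)/x, applied to the eigenvalues of A.  The paper's
    method avoids this eigendecomposition and costs about 10 n^3 / 3
    operations instead.
    """
    lam, U = np.linalg.eigh(A)
    with np.errstate(invalid="ignore", divide="ignore"):
        # handle the removable singularity phi(0) = 1 explicitly
        phi = np.where(np.abs(lam) > 1e-8, np.expm1(lam) / lam, 1.0 + lam / 2)
    return (U * phi) @ U.T

# Connection to the ODE dy/dt = A y + b with constant A and b:
# the exact solution from y(0) = y0 is y(t) = y0 + t * phi(t A) (A y0 + b).
```

Using np.expm1 rather than np.exp(x) - 1 avoids cancellation for small eigenvalues.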
Computing a function f(A) of an n-by-n matrix A is a frequently occurring problem in control theory and other applications. In this paper we introduce an effective approach for the determination of the matrix function f(A). We propose a new technique based on an extension of Newton's divided differences and Hermite interpolation, using the eigenvalues of the given matrix A. The new algorithm is tested on several problems to show the efficiency of the presented method. Finally, the application of this method in control theory is highlighted.
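The interpolation idea can be sketched as follows: interpolate f at the eigenvalues in Newton form, then evaluate the polynomial at the matrix argument. This sketch assumes distinct eigenvalues; repeated eigenvalues require the confluent (Hermite) form with derivatives of f, as in the paper.

```python
import numpy as np

def f_of_A_newton(A, f):
    """Evaluate f(A) via Newton divided-difference interpolation of f
    at the eigenvalues of A.

    A sketch assuming distinct eigenvalues; by the Cayley-Hamilton
    theorem the interpolating polynomial then reproduces f(A) exactly.
    """
    lam = np.linalg.eigvals(A)
    k = len(lam)
    # in-place divided-difference table: coef[j] = f[lam_0, ..., lam_j]
    coef = f(lam).astype(complex)
    for j in range(1, k):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (lam[j:] - lam[:-j])
    # Horner-style evaluation of the Newton form at the matrix argument
    n = A.shape[0]
    I = np.eye(n)
    F = coef[-1] * I.astype(complex)
    for j in range(k - 2, -1, -1):
        F = F @ (A - lam[j] * I) + coef[j] * I
    return np.real_if_close(F)
```

For large spectra the divided-difference table can be ill-conditioned, which is one reason specialized algorithms exist for particular functions.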
Marginal Fisher analysis (MFA) is a dimensionality reduction method based on a graph embedding framework. In contrast to traditional linear discriminant analysis (LDA), which requires the data to follow a Gaussian distribution, MFA is suitable for non-Gaussian data and has better pattern classification ability. However, MFA suffers from the small-sample-size (SSS) problem. This paper aims to solve the small-sample-size problem while increasing the classification performance of MFA. Based on a matrix function dimensionality reduction framework, the criterion of the MFA method is reconstructed using a polynomial matrix function transformation, and a new MFA method is proposed, named PMFA (polynomial marginal Fisher analysis). The major contributions of the proposed PMFA method are that it solves the small-sample-size problem of MFA and that it enlarges the distance between inter-class marginal sample points, so that it achieves better pattern classification performance. Experiments on public face datasets show that PMFA achieves better classification ability than MFA and its improved variants.
The inertia of a Hermitian matrix is defined to be the triplet consisting of the numbers of positive, negative, and zero eigenvalues of the matrix, counted with multiplicities. If we take the inertia and rank of a Hermitian matrix as objective functions, they are neither differentiable nor smooth. In this case, maximizing and minimizing the inertia and rank of a Hermitian matrix function can be regarded as a continuous-integer optimization problem. In this paper, we use purely algebraic operations on matrices and their generalized inverses to derive explicit formulas for calculating the global maximum and minimum ranks and inertias of the linear Hermitian matrix function A + BXB* subject to some rank and definiteness restrictions on the variable matrix X. Various direct consequences of the formulas in characterizing algebraic properties of A + BXB* are also presented. In particular, solutions to a group of constrained optimization problems on the rank and inertia of a partially specified block Hermitian matrix are given.
The need to compute the trace of a large matrix that is not explicitly known, such as the matrix exp(A), where A is a large symmetric matrix, arises in various applications, including network analysis. The global Lanczos method is a block method that can be applied to compute an approximation of the trace. When the block size is one, this method simplifies to the standard Lanczos method. It is known that for some matrix functions and matrices, the extended Lanczos method, which uses subspaces with both positive and negative powers of A, can give faster convergence than the standard Lanczos method, which uses subspaces with nonnegative powers of A only. This suggests that it may be beneficial to use an extended global Lanczos method instead of the (standard) global Lanczos method. This paper describes an extended global Lanczos method and discusses properties of the associated Gauss-Laurent quadrature rules. Computed examples that illustrate the performance of the extended global Lanczos method are presented.
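A baseline for this kind of computation is stochastic trace estimation with block-size-one Lanczos quadrature. The following sketch shows that baseline only (not the extended global method of the paper); the probe count, step count, and Rademacher probes are illustrative choices.

```python
import numpy as np

def lanczos_tridiag(A, v, n):
    """n steps of symmetric Lanczos on (A, v); returns T = V^T A V."""
    V = np.zeros((A.shape[0], n))
    alpha = np.zeros(n)
    beta = np.zeros(max(n - 1, 0))
    V[:, 0] = v / np.linalg.norm(v)
    for j in range(n):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= V[:, :j + 1] @ (V[:, :j + 1].T @ w)  # full reorthogonalization
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

def trace_exp_estimate(A, n_steps=20, n_probes=30, seed=0):
    """Hutchinson-type estimate of tr(exp(A)) via Lanczos quadrature:
    each quadratic form v^T exp(A) v is approximated by the Gauss rule
    ||v||^2 * e_1^T exp(T) e_1."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    total = 0.0
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=m)  # Rademacher probe
        theta, S = np.linalg.eigh(lanczos_tridiag(A, v, n_steps))
        total += (v @ v) * (S[0, :] ** 2 * np.exp(theta)).sum()
    return total / n_probes
```

Each probe costs n_steps matrix-vector products with A, so A never needs to be formed as exp(A) explicitly.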
A new class of auxiliary Lyapunov functions (functionals) is proposed for the analysis of stability of hybrid systems. Using the example of a two-component system, the technique of evaluating the sign-definiteness of matrix functions is illustrated.
The Fréchet derivative L_f of a matrix function f : C^(n x n) -> C^(n x n) is used in a variety of applications, and several algorithms are available for computing it. We define a condition number for the Fréchet derivative and derive upper and lower bounds for it that differ by at most a factor of 2. For a wide class of functions we derive an algorithm for estimating the 1-norm condition number that requires O(n^3) flops, given O(n^3)-flop algorithms for evaluating f and L_f; in practice it produces estimates correct to within a factor of 6n. Numerical experiments show the new algorithm to be much more reliable than a previous heuristic estimate of conditioning.
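For the matrix exponential, the Fréchet derivative itself (the object whose conditioning the paper studies, not the paper's estimator) can be computed with the standard 2x2 block identity, assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm

def frechet_expm(A, E):
    """Fréchet derivative L_exp(A, E) of the matrix exponential.

    Uses the standard block identity
      expm([[A, E], [0, A]]) = [[expm(A), L_exp(A, E)], [0, expm(A)]],
    so one 2n x 2n exponential yields the derivative in direction E.
    """
    n = A.shape[0]
    M = np.block([[A, E], [np.zeros((n, n)), A]])
    return expm(M)[:n, n:]
```

The block formula trades accuracy control for simplicity; SciPy also provides scipy.linalg.expm_frechet for this specific function.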
Many dimensionality reduction methods in the manifold learning field have the so-called small-sample-size (SSS) problem. Starting from solving the SSS problem, we first summarize the existing dimensionality reduction methods and construct a unified criterion function for them. Then, combining the unified criterion with matrix functions, we propose a general matrix function dimensionality reduction framework. This framework is configurable; that is, one can select suitable functions to construct such a matrix transformation framework, and a series of new dimensionality reduction methods can then be derived from it. In this article, we discuss how to choose suitable functions from two aspects: 1) solving the SSS problem and 2) improving pattern classification ability. As an extension, using the inverse hyperbolic tangent function and the linear function, we propose new dimensionality reduction methods based on this framework. Compared with the existing methods for solving the SSS problem, these new methods obtain better pattern classification ability and have lower computational complexity. Experimental results on handwritten digit and letter databases and two face databases show the superiority of the new methods.