We review results from the literature on the conjugate gradient algorithm for solving symmetric positive definite linear systems and the related Lanczos algorithm. We derive the conjugate gradient algorithm from the more general conjugate direction method, using projectors. We establish error bounds using exact arithmetic theory and also discuss what can happen when floating-point arithmetic is used. We present numerical experiments to illustrate this behavior.
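As a companion to the abstract above, here is a minimal sketch of the conjugate gradient iteration in Python/NumPy; the test matrix, tolerance, and iteration cap are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain CG for a symmetric positive definite A (the exact-arithmetic recurrences)."""
    n = len(b)
    max_iter = max_iter or n
    x = np.zeros(n)
    r = b - A @ x          # residual
    p = r.copy()           # first search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

# Illustrative SPD test problem (not from the paper)
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))
```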
In theory, the Lanczos algorithm generates an orthogonal basis of the corresponding Krylov subspace. However, in finite precision arithmetic the orthogonality and linear independence of the computed Lanczos vectors are usually lost quickly. In this paper we study a class of matrices and starting vectors having a special nonzero structure that guarantees exact computations of the Lanczos algorithm whenever floating-point arithmetic satisfying the IEEE 754 standard is used. Analogous results are formulated also for an implementation of the conjugate gradient method called cglanczos. This implementation then computes approximations that agree with their exact counterparts to a relative accuracy given by the machine precision and the condition number of the system matrix. The results are extended to the Arnoldi algorithm, the nonsymmetric Lanczos algorithm, the Golub-Kahan bidiagonalization, the block Lanczos algorithm, and their counterparts for solving linear systems.
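For reference, a bare-bones Lanczos iteration without reorthogonalization in Python/NumPy; the test matrix and the number of steps are assumptions for illustration, and the printed quantity makes the loss of orthogonality mentioned above directly visible.

```python
import numpy as np

def lanczos(A, v0, k):
    """k steps of the Lanczos algorithm; returns the basis V and tridiagonal entries (alpha, beta)."""
    n = len(v0)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w -= alpha[j] * V[:, j]
        if j > 0:
            w -= beta[j - 1] * V[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] == 0:      # invariant subspace found (exact breakdown)
                return V[:, :j + 1], alpha[:j + 1], beta[:j]
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta

# Illustrative symmetric matrix; watch ||V^T V - I|| grow as orthogonality is lost
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 200)); A = (A + A.T) / 2
V, alpha, beta = lanczos(A, rng.standard_normal(200), 60)
print(np.linalg.norm(V.T @ V - np.eye(V.shape[1])))
```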
This paper presents a new regularization for Extreme Learning Machines (ELMs). ELMs are Randomized Neural Networks (RNNs) that are known for their fast training speed and good accuracy. Nevertheless, the complexity of ELMs has to be selected, and regularization has to be performed in order to avoid underfitting or overfitting. Therefore, a novel regularization is proposed using a modified Lanczos algorithm: the Iterative Lanczos Extreme Learning Machine (Lan-ELM). As summarized in the experimental section, the computational time is on average divided by 4 and the normalized MSE is on average reduced by 11%. In addition, the proposed method can be intuitively parallelized, which makes it a very valuable tool for analyzing huge data sets in real time. (C) 2020 Elsevier B.V. All rights reserved.
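For orientation, a minimal baseline ELM in Python/NumPy showing the regularized output-weight solve that Lan-ELM targets; the architecture, activation, regularization parameter, and data below are illustrative assumptions, and the paper's Lanczos-based regularization itself is not reproduced here.

```python
import numpy as np

def elm_fit(X, y, n_hidden=100, reg=1e-2, rng=None):
    """Baseline ELM: random hidden layer, then a regularized linear solve for output weights."""
    rng = rng or np.random.default_rng(0)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                            # hidden-layer activations
    # Ridge-regularized least squares for the output weights; Lan-ELM replaces this
    # step with a Lanczos-based iterative regularization (not reproduced here).
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative regression data (not from the paper)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (500, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
W, b, beta = elm_fit(X, y, rng=rng)
print(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```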
We study the Lanczos algorithm where the initial vector is sampled uniformly from the unit sphere S^(n-1). Let A be an n × n Hermitian matrix. We show that when run for few iterations, the output of Lanczos on A is almost deterministic. More precisely, we show that for any ε ∈ (0, 1) there exists c > 0, depending only on ε and a certain global property of the spectrum of A (in particular, not depending on n), such that when Lanczos is run for at most c log n iterations, the output Jacobi coefficients deviate from their medians by t with probability at most exp(-n^ε t^2) for t < ‖A‖. We directly obtain a similar result for the Ritz values and vectors. Our techniques also yield asymptotic results: suppose one runs Lanczos on a sequence of Hermitian matrices A_n ∈ M_n(C) whose spectral distributions converge in Kolmogorov distance with rate O(n^(-ε)) to a density μ for some ε > 0. Then we show that for large enough n, and for k = O(√(log n)), the Jacobi coefficients output after k iterations concentrate around those of μ. The asymptotic setting is relevant since Lanczos is often used to approximate the spectral density of an infinite-dimensional operator by way of the Jacobi coefficients; our result provides some theoretical justification for this approach. In a different direction, we show that Lanczos fails with high probability to identify outliers of the spectrum when run for at most c' log n iterations, where again c' depends only on the same global property of the spectrum of A. Classical results imply that the bound c' log n is tight up to a constant factor.
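A small experiment in the spirit of the concentration result above: run a few Lanczos steps from several uniformly random starting vectors and look at the spread of the resulting Jacobi coefficients; the matrix, the number of steps, and the number of trials are illustrative assumptions.

```python
import numpy as np

def jacobi_coefficients(A, v0, k):
    """First k Lanczos (Jacobi) coefficients alpha_j, beta_j from starting vector v0."""
    v_prev, v = np.zeros(len(v0)), v0 / np.linalg.norm(v0)
    alphas, betas = [], []
    beta = 0.0
    for _ in range(k):
        w = A @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        betas.append(beta)
        v_prev, v = v, w / beta
    return np.array(alphas), np.array(betas)

rng = np.random.default_rng(2)
n, k, trials = 2000, 8, 20
A = np.diag(rng.uniform(0.0, 1.0, n))   # Hermitian test matrix with a fixed spectral density
runs = [jacobi_coefficients(A, rng.standard_normal(n), k) for _ in range(trials)]
alphas = np.array([a for a, _ in runs])
print(alphas.std(axis=0))               # spread of each alpha_j across random starts stays small
```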
This article discusses an extension of the singular vector (SV) method in the context of an initial perturbation generator for an ensemble prediction system (EPS). In general, multiple SVs targeted at different regions are computed in operational EPSs to extract growing modes that focus on different parts of the EPS domain. However, significant computational cost is associated with running all the procedures of the SV computation multiple times. In this study, the Lanczos algorithm used for SV computation was extended to allow simultaneous computation of multiple targeted SV sets. Algebraic calculations in the algorithm, such as orthonormalizations and the solution of the eigenvalue problem, are implemented separately for multiple "subdomains" covering different targeting areas, and SV sets are computed for the individual subdomains. The forward and backward linearized propagation, however, runs through the whole domain and is shared among the SV sets from all the subdomains. As these algebraic operations account for a relatively small part of the overall computation, the cost increment brought by the algorithm extension is also small relative to a single SV computation. For verification, consistency between SVs produced with the original and extended algorithms was examined. Both SV sets spanned the same subspaces with similar linear growth rates, except for those derived on subdomain boundaries, where SVs produced with the extended algorithm were truncated by the boundary. To avoid such truncation, it is necessary to set a subdomain large enough to cover the target area and its surrounding region. Some applications of this algorithm in operational situations are suggested, and an application of the subdomain approach to wave-number space is described.
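As a toy illustration of what a "targeted" SV is (a leading right singular vector of the linearized propagator composed with a projection onto the target region), assuming a stand-in propagator and two artificial subdomains; the shared-propagation extension described in the abstract is not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import svds

rng = np.random.default_rng(6)
n = 500
M = rng.standard_normal((n, n)) / np.sqrt(n)     # toy linear propagator (tangent-linear model stand-in)

# Two "subdomains", each a projection onto a subset of the state vector
P1 = np.zeros((n, n)); P1[:n // 2, :n // 2] = np.eye(n // 2)
P2 = np.zeros((n, n)); P2[n // 2:, n // 2:] = np.eye(n // 2)

# Targeted SVs: leading right singular vectors of P_i @ M (growth measured inside subdomain i)
for P in (P1, P2):
    _, s, _ = svds(P @ M, k=3)
    print(np.sort(s)[::-1])                      # largest targeted growth rates for this subdomain
```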
The time-ordered exponential of a time-dependent matrix A(t) is defined as the function of A(t) that solves the first-order system of coupled linear differential equations with non-constant coefficients encoded in A(t). The authors have recently proposed the first Lanczos-like algorithm capable of evaluating this function. This algorithm relies on inverses of time-dependent functions with respect to a non-commutative convolution-like product, denoted by *. Yet the existence of such inverses, which is crucial to avoid algorithmic breakdowns, still needed to be proved. Here we constructively prove that *-inverses exist for all non-identically null, smooth, separable functions of two variables. As a corollary, we partially solve the Green's function inverse problem, which, given a distribution G, asks for the differential operator whose fundamental solution is G. Our results are abundantly illustrated by examples.
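For orientation only, the convolution-like product in this line of work is typically of the following form, with the Dirac delta acting as identity; the notation below is an assumption for illustration, and the precise function class is as stated in the abstract.

```latex
% Assumed form of the *-product, its identity, and the *-inverse (illustrative notation)
(f \ast g)(t, s) = \int_{\mathbb{R}} f(t, \tau)\, g(\tau, s)\, \mathrm{d}\tau ,
\qquad
(f \ast f^{\ast -1})(t, s) = \delta(t - s) .
```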
Gauss quadrature can be naturally generalized to approximate quasi-definite linear functionals, where the interconnections with (formal) orthogonal polynomials, (complex) Jacobi matrices, and the Lanczos algorithm are analogous to those in the positive definite case. In this survey we review these relationships, giving references to the literature that presents them in several related contexts. In particular, the existence of the n-weight (complex) Gauss quadrature corresponds to successfully performing the first n steps of the Lanczos algorithm for generating biorthogonal bases of the two associated Krylov subspaces. The Jordan decomposition of the (complex) Jacobi matrix can be expressed explicitly in terms of the Gauss quadrature nodes and weights and the associated orthogonal polynomials. Since the output of the Lanczos algorithm can be made real whenever the input is real, the value of the Gauss quadrature is a real number whenever all relevant moments of the quasi-definite linear functional are real.
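In the classical positive definite case referred to above, the connection can be made concrete with the Golub-Welsch construction: the Gauss nodes are the eigenvalues of the Jacobi matrix and the weights come from the first components of its eigenvectors. A sketch in Python/NumPy, using the known Legendre recurrence coefficients as an illustrative example:

```python
import numpy as np

def gauss_from_jacobi(alpha, beta, mu0=1.0):
    """Golub-Welsch: Gauss nodes/weights from a real symmetric Jacobi matrix
    (the classical positive definite case referred to above)."""
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, Q = np.linalg.eigh(T)
    weights = mu0 * Q[0, :] ** 2       # squared first components of the eigenvectors
    return nodes, weights

# Example: Legendre weight on [-1, 1] (mu0 = 2); recurrence coefficients in closed form
n = 5
k = np.arange(1, n)
alpha = np.zeros(n)
beta = k / np.sqrt(4 * k ** 2 - 1)
nodes, weights = gauss_from_jacobi(alpha, beta, mu0=2.0)
print(nodes)                 # Gauss-Legendre nodes
print(weights.sum())         # weights sum to mu0 = 2
```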
An adaptation of the conventional Lanczos algorithm is proposed to solve the general symmetric eigenvalue problem K φ = λ K_G φ in the case when the geometric stiffness matrix K_G is not necessarily positive definite. The only requirement for the new algorithm to work is that the matrix K must be positive definite. Firstly, the algorithm is presented for the standard situation where no shifting is assumed. Secondly, the algorithm is extended to include shifting, since this procedure may be important for enhanced precision or acceleration of convergence. Neither version of the algorithm requires matrix inversion, but the version with shifting needs more resources in terms of memory allocation.
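For context, a dense illustration of the problem setting in Python/SciPy: with K positive definite, the pencil can be rewritten as K_G φ = μ K φ with μ = 1/λ so that a standard generalized symmetric solver applies; the toy matrices are assumptions, and this is not the paper's inversion-free Lanczos adaptation.

```python
import numpy as np
from scipy.linalg import eigh

# Illustrative problem K phi = lambda K_G phi with K SPD and K_G indefinite (toy data)
rng = np.random.default_rng(3)
n = 6
R = rng.standard_normal((n, n))
K = R @ R.T + n * np.eye(n)                 # positive definite "stiffness"
S = rng.standard_normal((n, n))
K_G = (S + S.T) / 2                         # symmetric but possibly indefinite

# Rewrite as K_G phi = mu K phi with mu = 1/lambda, so the SPD matrix K plays the
# role of the "mass" matrix required by the standard dense solver.
mu, Phi = eigh(K_G, K)
lam = 1.0 / mu[np.abs(mu) > 1e-12]          # recover lambda = 1/mu for the nonzero mu
print(np.sort(lam))
```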
This paper aims to accelerate the convergence rate of the general viscous dynamic relaxation method. For this purpose, a new automated procedure for estimating the critical damping factor is developed by employing a simple variant of the Lanczos algorithm that does not require any re-orthogonalization. All computational operations are performed by simple vector-matrix multiplications, without requiring any matrix factorization or inversion. Several numerical examples with geometrically nonlinear behavior are analyzed by the proposed algorithm. Results show that the suggested procedure can effectively decrease the total number of iterations to convergence compared with conventional dynamic relaxation algorithms. Copyright (C) 2017 John Wiley & Sons, Ltd.
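A hedged sketch of the kind of estimate involved: use a Lanczos-based eigensolver (ARPACK via SciPy here, which does reorthogonalize, unlike the paper's simple variant) to approximate the lowest eigenvalue of a mass-scaled stiffness matrix and form a critical damping factor; the matrix and the formula c = 2*sqrt(lambda_min) are assumptions for illustration, not the paper's procedure.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Illustrative mass-scaled stiffness matrix (toy data, not from the paper)
rng = np.random.default_rng(4)
B = rng.standard_normal((300, 300))
K = B @ B.T + 5 * np.eye(300)

# Estimate the lowest eigenvalue with a Lanczos-based solver, then form a critical
# damping factor; c = 2*sqrt(lambda_min) is one common dynamic-relaxation choice.
lam_min = eigsh(K, k=1, which='SA', return_eigenvectors=False)[0]
c_crit = 2.0 * np.sqrt(lam_min)
print(lam_min, c_crit)
```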
Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a thick-restart version of the Lanczos algorithm with deflation ("locking") and a new type of polynomial filter obtained from a least-squares technique. The resulting algorithm can be used in a "spectrum slicing" approach, whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting the eigenpairs located in different subintervals independently of one another.
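A bare-bones illustration of the filtering idea in Python/NumPy, using a Chebyshev-series approximation of the interval indicator and a Rayleigh-Ritz step rather than the paper's least-squares filter and thick-restart Lanczos; the matrix, interval, filter degree, and block size are illustrative assumptions.

```python
import numpy as np

def cheb_filter_apply(B, V, a, b, degree=80):
    """Apply a Chebyshev-series approximation of the indicator of [a, b] (inside [-1, 1])
    to the columns of V, using the three-term recurrence for T_k(B)V."""
    theta_a, theta_b = np.arccos(a), np.arccos(b)        # arccos is decreasing: theta_a >= theta_b
    k = np.arange(1, degree + 1)
    c = np.empty(degree + 1)
    c[0] = (theta_a - theta_b) / np.pi
    c[1:] = 2.0 * (np.sin(k * theta_a) - np.sin(k * theta_b)) / (np.pi * k)
    T_prev, T_curr = V, B @ V                             # T_0(B)V and T_1(B)V
    Y = c[0] * T_prev + c[1] * T_curr
    for j in range(2, degree + 1):
        T_prev, T_curr = T_curr, 2.0 * (B @ T_curr) - T_prev
        Y += c[j] * T_curr
    return Y

# Toy Hermitian matrix with known spectrum; target interval chosen for illustration
rng = np.random.default_rng(5)
evals = np.linspace(0.0, 10.0, 400)
Q, _ = np.linalg.qr(rng.standard_normal((400, 400)))
A = (Q * evals) @ Q.T
lo, hi = 4.0, 5.0                                         # interval of interest

# Map the spectrum of A to [-1, 1], filter a random block, and do Rayleigh-Ritz with A
lmin, lmax = evals.min(), evals.max()                     # in practice these are estimated
B = (2.0 * A - (lmax + lmin) * np.eye(400)) / (lmax - lmin)
a_m = (2.0 * lo - (lmax + lmin)) / (lmax - lmin)
b_m = (2.0 * hi - (lmax + lmin)) / (lmax - lmin)
V = cheb_filter_apply(B, rng.standard_normal((400, 60)), a_m, b_m)
Qv, _ = np.linalg.qr(V)
ritz = np.linalg.eigvalsh(Qv.T @ A @ Qv)
print(np.sort(ritz[(ritz > lo) & (ritz < hi)]))           # approximations to eigenvalues in [lo, hi]
```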