This paper first discusses the limitation that the intrinsic mode functions (IMFs) obtained by empirical mode decomposition (EMD) in the Hilbert-Huang transform (HHT) are not orthogonal. As an improvement to the HHT method, three orthogonalization techniques based on the Gram-Schmidt method (the forward, backward, and arbitrary-sequence orthogonalization algorithms) are then proposed to obtain completely orthogonal IMFs. Using the orthogonality index and the energy index, the effectiveness of the proposed algorithms is validated on a synthetic signal composed of three sinusoidal waves with different frequencies and on the El Centro (1940, N-S) earthquake accelerogram. Taking the El Centro (1940, N-S) earthquake accelerogram as an example, the question of whether the orthogonalized IMFs still satisfy the defining conditions of an IMF is discussed, and the backward and arbitrary-sequence orthogonalization algorithms are recommended. Three historic earthquake accelerograms are then analyzed with the recommended orthogonalization algorithms combined with Hilbert spectral analysis. The results show that the orthogonal Hilbert spectrum and the orthogonal Hilbert marginal spectrum provide a more faithful representation of earthquake accelerograms than the Hilbert spectrum and the Hilbert marginal spectrum, and that they can be used to quantitatively characterize the energy distribution of earthquake accelerograms over different frequency ranges.
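The construction underlying the three algorithms is ordinary Gram-Schmidt projection applied to the IMFs in different orders. A minimal sketch is given below, assuming the IMFs are stored one per row of a NumPy array; the function names, the classical-GS formulation, and the particular form of the orthogonality index (cross-term energy over total energy) are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def forward_gram_schmidt_imfs(imfs):
    """Classical Gram-Schmidt applied to IMFs in forward order.

    imfs : array of shape (n_imfs, n_samples), one IMF per row.
    The backward and arbitrary-sequence variants only change the order
    in which the rows are processed.
    """
    imfs = np.asarray(imfs, dtype=float)
    ortho = imfs.copy()
    for j in range(1, len(ortho)):
        for k in range(j):                      # project out all earlier components
            denom = np.dot(ortho[k], ortho[k])
            if denom > 0.0:
                ortho[j] -= (np.dot(imfs[j], ortho[k]) / denom) * ortho[k]
    return ortho

def orthogonality_index(components):
    """One common form of the index of orthogonality: the ratio of the
    cross-term energy to the total energy of the summed signal
    (values near zero indicate nearly orthogonal components)."""
    c = np.asarray(components, dtype=float)
    total = np.sum(np.sum(c, axis=0) ** 2)      # energy of the reconstructed signal
    self_energy = np.sum(c ** 2)                # sum of the individual component energies
    return (total - self_energy) / total
```

Because each IMF is projected onto the components already orthogonalized, the result depends on the processing order, which is why the forward, backward, and arbitrary-sequence variants are compared in the paper.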
In a previous paper we presented two variants of Kovarik's approximate orthogonalization algorithm for arbitrary symmetric matrices, one with and one without explicit matrix inversion. Here we propose another inverse-free version that has the advantage of a smaller bound on the convergence factor, while the computational cost per iteration is even lower than in the initial inverse-free variant. We then investigate the application of the new algorithm to the numerical solution of linear least-squares problems with a symmetric matrix. The basic idea is to modify the right-hand side of the equation during the transformation of the matrix. We prove that the sequence of vectors generated in this way converges to the minimal-norm solution of the problem. Numerical tests with the collocation discretization of a first-kind integral equation demonstrate mesh-independent behaviour and stability with respect to the numerical errors introduced by the use of numerical quadrature.
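The coupling of the matrix transformation with a simultaneous transformation of the right-hand side can be illustrated with a simple first-order Kovarik-type correction. The sketch below is not the paper's specific variant (and does not have its improved convergence-factor bound); the scaling step, the first-order correction I + H/2, and the read-out A_k^T b_k are assumptions chosen so that, for a nonzero symmetric matrix, the returned vector tends to the minimal-norm solution of A x = b.

```python
import numpy as np

def kovarik_least_squares_sketch(A, b, iters=50):
    """Illustrative first-order Kovarik-type iteration for a symmetric A:
    the matrix and the right-hand side are transformed in lockstep, and an
    approximate minimal-norm solution is read off as A_k^T b_k."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    s = np.linalg.norm(A, 2)          # spectral norm, used only for scaling
    Ak, bk = A / s, b / s             # scaling leaves the solution unchanged
    I = np.eye(A.shape[0])
    for _ in range(iters):
        H = I - Ak @ Ak.T             # deviation of the rows from orthonormality
        K = I + 0.5 * H               # first-order, inverse-free correction
        Ak = K @ Ak                   # transform the matrix ...
        bk = K @ bk                   # ... and the right-hand side together
    return Ak.T @ bk                  # tends to the minimal-norm solution of A x = b
```

A quick check of the sketch: for an invertible symmetric A every correction matrix K is a polynomial in A, and as the rows of A_k approach orthonormality one can verify that the product A_k^T b_k approaches A^{-1} b.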
In a previous paper the author presented an extension of an iterative approximate orthogonalization algorithm, due to Z. Kovarik, to arbitrary rectangular matrices. In this algorithm, as Kovarik himself observed, a symmetric positive definite matrix must be inverted at each iteration. The dimension of this matrix equals the number of rows of the initial one, so the inversion can be very expensive. In the present paper we describe an algorithm in which this matrix inversion step is replaced by an arbitrary odd-degree polynomial matrix expression. We prove that the new algorithm converges to the same matrix as the original Kovarik method. Numerical experiments described in the last section of the paper show that, even for small-degree polynomial expressions, the convergence properties of the new algorithm are comparable with those of the original one.
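The replacement of the inversion by a polynomial can be sketched by truncating the binomial series of the inverse square root: with H_k = I - A_k A_k^T, the update A_{k+1} = p(H_k) A_k is an odd-degree polynomial expression in A_k. The code below is an illustration under this choice of series and an initial scaling by the spectral norm; the paper's polynomial and stopping rule may differ.

```python
import numpy as np
from math import comb

def polynomial_kovarik_sketch(A, degree=3, iters=20):
    """Inverse-free approximate orthogonalization: the inverse (square-root)
    factor is replaced by the degree-`degree` truncation of the binomial
    series of (1 - h)^(-1/2), evaluated at H_k = I - A_k A_k^T.
    The overall update is then a polynomial of odd degree 2*degree + 1 in A_k."""
    A = np.asarray(A, dtype=float)
    Ak = A / np.linalg.norm(A, 2)                 # scale singular values into (0, 1]
    m = Ak.shape[0]
    # coefficients 1, 1/2, 3/8, 5/16, ... of the truncated series
    coeffs = [comb(2 * j, j) / 4.0 ** j for j in range(degree + 1)]
    for _ in range(iters):
        H = np.eye(m) - Ak @ Ak.T
        P = sum(c * np.linalg.matrix_power(H, j) for j, c in enumerate(coeffs))
        Ak = P @ Ak                               # rows move toward orthonormality
    return Ak

# For a full-row-rank A, the Gram matrix of the result approaches the identity:
# A = np.random.rand(4, 7); Q = polynomial_kovarik_sketch(A); print(Q @ Q.T)
```

Even the degree-1 truncation already drives the singular values toward one, which is consistent with the abstract's observation that small-degree polynomial expressions behave comparably to the original algorithm.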
The paper first briefly discusses the advantages and disadvantages of using orthonormal (as opposed to non-orthogonal) bases in segmented image coding, and shows that the optimal choice is application-dependent. Next, it introduces fast algorithms for computing orthonormal base functions on an arbitrarily shaped region. The algorithms are extensions of the 'natural' polynomial recursive orthogonalization (PRO) algorithm, introduced earlier by the author, and differ from it in that they allow new orthogonalization orders and new types of base functions (cosines and warped polynomials in addition to ordinary polynomials). The algorithms are typically 1.5 to 3 times faster than the corresponding Gram-Schmidt (GS) methods. Three of the new algorithms, called RECT, TOTDIAG and XY, are investigated in detail. The RECT and TOTDIAG algorithms are typically 15% to 30% slower than 'natural' PRO, but still 1.5 to 2.5 times faster than GS. Also, their computational advantage over GS increases with the number of computed base functions. A preliminary experiment shows that the combined use of the RECT or TOTDIAG base with the natural base in different areas of the image may lead to better approximation performance, albeit at the expense of extra computation. (C) 1997 Elsevier Science B.V.
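For reference, the GS baseline against which the PRO-type algorithms are compared can be sketched as a Gram-Schmidt orthonormalization of monomials restricted to the region's pixels. The sketch below is not the PRO recursion (whose speed advantage comes from a recursive update that is not reproduced here); the boolean-mask encoding of the region, the total-degree ordering of the monomials, and the coordinate normalization are illustrative assumptions.

```python
import numpy as np

def gs_region_basis(mask, n_funcs=10, max_degree=5):
    """Gram-Schmidt construction of orthonormal polynomial base functions on
    an arbitrarily shaped region given by a boolean mask (True inside)."""
    mask = np.asarray(mask, dtype=bool)
    ys, xs = np.nonzero(mask)                        # pixel coordinates inside the region
    x = (xs - xs.mean()) / max(np.ptp(xs), 1)        # normalized x coordinate
    y = (ys - ys.mean()) / max(np.ptp(ys), 1)        # normalized y coordinate
    # monomials x^p * y^q, ordered by total degree
    exponents = sorted(((p, q) for p in range(max_degree + 1)
                        for q in range(max_degree + 1)),
                       key=lambda e: (e[0] + e[1], e[0]))[:n_funcs]
    basis = []
    for p, q in exponents:
        f = (x ** p) * (y ** q)                      # monomial restricted to the region
        for g in basis:                              # remove components along earlier functions
            f = f - np.dot(f, g) * g
        norm = np.linalg.norm(f)
        if norm > 1e-10:                             # drop numerically dependent monomials
            basis.append(f / norm)
    return np.array(basis)                           # shape (n_kept, n_pixels_in_region)
```

Whatever the region shape, the returned functions satisfy basis @ basis.T ≈ I, which is the property the faster RECT, TOTDIAG and XY orderings are designed to deliver at lower computational cost.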