ISBN (print): 9781424456383
In this paper we examine the use of the generalized singular value decomposition (GSVD) for coordinated beamforming in MIMO systems. The GSVD facilitates joint decomposition of a class of matrices arising inherently in source-to-two-destination MIMO broadcast scenarios. It allows two channels of suitable dimensionality to be jointly diagonalized, i.e., reduced to non-interfering virtual broadcast channels, through jointly determined transmit precoding and receiver reconstruction matrices. Potential applications for GSVD-based beamforming can be found in MIMO broadcasting, as well as in MIMO relaying under amplify-and-forward, decode-and-forward, and code-and-forward relay processing schemes. Several of them are highlighted here. We also present simulation-based performance analysis results to justify the use of GSVD for coordinated beamforming.
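To make the joint-diagonalization mechanism concrete, here is a minimal NumPy sketch for the special case of two square, invertible channels H1 and H2 with Nt transmit antennas; the dimensions, variable names, and the SVD-based construction are illustrative assumptions, not the authors' implementation. A common precoder F and per-user combiners U1^H, U2^H reduce both links to diagonal (non-interfering) virtual channels.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = 4  # transmit antennas; both receivers assumed to have Nt antennas (toy setting)
H1 = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
H2 = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))

# For square invertible channels, a GSVD of (H1, H2) can be built from the
# ordinary SVD of H1 * inv(H2):  H1 = U1 C X^T,  H2 = U2 S X^T  with C, S diagonal.
U1, gamma, U2h = np.linalg.svd(H1 @ np.linalg.inv(H2))   # gamma: generalized singular values
C = np.diag(gamma / np.sqrt(1.0 + gamma**2))
S = np.diag(1.0 / np.sqrt(1.0 + gamma**2))

X_T = np.linalg.inv(C) @ U1.conj().T @ H1   # common right factor X^T
F = np.linalg.inv(X_T)                      # transmit precoder shared by both users

# Receiver-side reconstruction: user 1 applies U1^H, user 2 applies U2^H.
E1 = U1.conj().T @ H1 @ F                   # effective channel of user 1 -> C (diagonal)
E2 = U2h @ H2 @ F                           # effective channel of user 2 -> S (diagonal)
print(np.allclose(E1, C, atol=1e-8), np.allclose(E2, S, atol=1e-8))
```

In practice the channels need not be square; the general GSVD handles rectangular pairs of suitable dimensionality, which is the setting considered in the paper.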
For a variety of processes we can observe and record their characteristics, producing a sequence of measurement vectors or matrices (rectangular in general). Our goal is to extract model-dependent information from the available data. Such approaches are typical in technology (for a neat chemistry example, see [7, 9]) and in model analysis, such as parameter identification of linear stochastic dynamic systems. Since the stochastic nature of financial and economic data is evident, this data analysis technique can be extended to a number of new applications. If successful, an adaptive filter can be constructed (similar to the classic Kalman filter, for example). Guided by the formal model parameters, we can apply this filter to financial data such as stock quotes, to predict and to verify how close a mathematical model is to real-time data. Namely, given a set of measurements represented by matrices A^(i) ∈ M_{m,n}(ℝ), we have to estimate problem-dependent characteristic matrices in a joint decomposition of the form A^(i) = P B_i Q^T, with P, Q orthonormal matrices and B_i ∈ M_r(ℝ), r ≤ min{m, n}. Formulated this way, the problem is usually called a generalized singular value decomposition (GSVD) problem and can be solved numerically [1, 2]. These matrices provide basic information usable by a higher-level automated problem solver or for human interpretation.
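As an illustrative check of the pairwise building block of this formulation (the classical GSVD of two measurement matrices, not the paper's general multi-matrix setting), the sketch below plants a known joint structure A = P1 diag(c) W, B = P2 diag(s) W with a common right factor W, and recovers the generalized singular values from the pencil (A^T A, B^T B) with scipy.linalg.eigh. All data and parameter values here are hypothetical.

```python
import numpy as np
from scipy.linalg import eigh, qr

rng = np.random.default_rng(1)
m, n = 8, 3

# Plant a known joint structure: A = P1 diag(c) W, B = P2 diag(s) W, c^2 + s^2 = 1.
W = rng.standard_normal((n, n))                      # common (nonsingular) right factor
P1 = qr(rng.standard_normal((m, n)), mode='economic')[0]
P2 = qr(rng.standard_normal((m, n)), mode='economic')[0]
c = np.array([0.9, 0.6, 0.3])
s = np.sqrt(1.0 - c**2)
A = P1 @ np.diag(c) @ W
B = P2 @ np.diag(s) @ W

# Generalized singular values of the pair (A, B) are the square roots of the
# generalized eigenvalues of (A^T A, B^T B) when B has full column rank;
# here they must equal the planted ratios c/s.
lam = eigh(A.T @ A, B.T @ B, eigvals_only=True)
print(np.sort(np.sqrt(lam)), np.sort(c / s))
```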
Existing methods for super-resolution of three-dimensional images are mainly based either on simple learning algorithms requiring low computational power or on complex deep-learning neural networks requiring high computational power. However, these methods rely on prior knowledge of the images and require a large database of pairs of low-resolution images and the corresponding high-resolution images. To address this difficulty, this paper proposes a method based on joint generalized singular value decomposition and tensor decomposition for performing super-resolution. Here, prior knowledge of the pairs of low-resolution and corresponding high-resolution images is not required. First, an image is represented as a tensor. Compared with three-dimensional singular spectrum analysis, the spatial structure of the locally adjacent pixels of the image is retained. Second, both the generalized singular value decomposition and the Tucker decomposition are applied to the tensor to obtain two low-resolution tensors. It is worth noting that the correlation between these two low-resolution tensors is preserved. Also, both decompositions achieve exact reconstruction. Finally, the high-resolution image is reconstructed. Compared with the de-Hankelization step of three-dimensional singular spectrum analysis, the computational complexity of the reconstruction in the proposed method is much lower. Computer simulation results show that the proposed method achieves a higher peak signal-to-noise ratio than the existing methods.
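The paper's joint GSVD-plus-Tucker scheme is specific to its super-resolution pipeline; as background for the tensor step, the following sketch shows a plain truncated Tucker decomposition of a small image-like tensor via the higher-order SVD (one factor matrix per mode unfolding), followed by reconstruction from the core and factors. Shapes, ranks, and function names are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move the chosen axis to the front and flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_dot(T, M, mode):
    # Mode-n product: multiply the chosen axis of T by the matrix M.
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    # Truncated higher-order SVD: a factor matrix per mode, then project T onto them.
    factors = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
               for k, r in enumerate(ranks)]
    core = T
    for k, U in enumerate(factors):
        core = mode_dot(core, U.T, k)
    return core, factors

rng = np.random.default_rng(2)
T = rng.standard_normal((16, 16, 3))          # toy "image tensor" (H x W x channels)
core, factors = hosvd(T, ranks=(8, 8, 3))

# Reconstruct from the core and factors and measure the relative approximation error.
T_hat = core
for k, U in enumerate(factors):
    T_hat = mode_dot(T_hat, U, k)
print(np.linalg.norm(T - T_hat) / np.linalg.norm(T))
```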
We propose a new algorithm to find the generalized singular value decompositions of two matrices with the same number of columns. We discuss in detail the sensitivity of our algorithm to errors in the entries of the matrices and suggest a way to suppress this sensitivity.
Linear discriminant analysis (LDA) has been widely used for linear dimension reduction. However, LDA has limitations in that one of the scatter matrices is required to be nonsingular and the nonlinearly clustered structure is not easily captured. In order to overcome the problems caused by the singularity of the scatter matrices, a generalization of LDA based on the generalized singular value decomposition (GSVD) was recently developed. In this paper, we propose a nonlinear discriminant analysis based on the kernel method and the GSVD. The GSVD is applied to solve the generalized eigenvalue problem which is formulated in the feature space defined by a nonlinear mapping through kernel functions. Our GSVD-based kernel discriminant analysis is theoretically compared with other kernel-based nonlinear discriminant analysis algorithms. The experimental results show that our method is an effective nonlinear dimension reduction method.
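To show what "the generalized eigenvalue problem formulated in the feature space through kernel functions" looks like in code, here is a two-class kernel Fisher discriminant sketch built from the Gram matrix. It is a generic textbook construction on hypothetical data, with a small ridge term standing in for the singularity handling that the paper instead resolves with the GSVD; the kernel choice and parameters (gamma, mu) are assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian kernel matrix between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(3)
X = np.vstack([rng.standard_normal((30, 2)) + [2, 0],
               rng.standard_normal((30, 2)) - [2, 0]])
y = np.array([0] * 30 + [1] * 30)

K = rbf_kernel(X, X)
idx = [np.where(y == c)[0] for c in (0, 1)]
m = [K[:, i].mean(axis=1) for i in idx]                       # class means in kernel space
M = np.outer(m[0] - m[1], m[0] - m[1])                        # between-class matrix
N = sum(K[:, i] @ (np.eye(len(i)) - np.ones((len(i), len(i))) / len(i)) @ K[:, i].T
        for i in idx)                                         # within-class matrix

# Feature-space generalized eigenproblem M a = lambda N a. N is typically singular;
# the paper handles this with the GSVD, while this sketch just adds a small ridge.
mu = 1e-6
_, vecs = eigh(M, N + mu * np.eye(len(y)))
alpha = vecs[:, -1]                                           # leading discriminant direction
scores = K @ alpha                                            # 1-D projection of the training data
print(scores[y == 0].mean(), scores[y == 1].mean())
```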
A novel multilevel information cryptosystem based on generalized singular value decomposition (GSVD), optical interference, and devil's vortex Fresnel lens (DVFL) encoding is proposed. The cryptosystem exploits a set of four fused LL sub-bands from four RGB images, which are converted into a CMYK image and split into C, M, Y, and K channels. The GSVD operation is used to produce five matrices from the C and M channels and, independently, five matrices from the Y and K channels. For each set, a single-channel image formed by fusing the two unitary matrices of the pair is gyrator transformed. The transformed images of the sets are then combined into a complex image, which is inverse gyrator transformed. The system exploits optical interference between one phase-only mask (POM) realized as a DVFL and two analytically produced POMs. The parameters of the DVFL serve as highly sensitive decryption keys. Moreover, the individual keys and the two POMs are used as decryption keys to strengthen security against potential attacks. To avoid strict alignment of the three POMs in different arms during an experiment, the summation of the three POMs is displayed on a single spatial light modulator (SLM). Further, the gyrator transform does not require axial movement, so the proposed method avoids problems resulting from misalignment. The method can be implemented with a hybrid optoelectronic system. Numerical simulation results demonstrate the practicability and effectiveness of the proposed system.
We present two new algorithms for floating-point computation of the generalized singular values of a real pair (A, B) of full column rank matrices and for floating-point solution of the generalized eigenvalue problem Hx = λMx with symmetric, positive definite matrices H and M. The pair (A, B) is replaced with an equivalent pair (A', B'), and the generalized singular values are computed as the singular values of the explicitly computed matrix F = A'B'^{-1}. The singular values of F are computed using the Jacobi method. The relative accuracy of the computed singular value approximations does not depend on column scalings of A and B; that is, the accuracy is nearly the same for all pairs (AD_1, BD_2), with D_1, D_2 arbitrary diagonal, nonsingular matrices. Similarly, the pencil H - λM is replaced with an equivalent pencil H' - λM', and the eigenvalues of H - λM are computed as the squares of the singular values of G = L_H L_M^{-1}, where L_H, L_M are the Cholesky factors of H', M', respectively, and the matrix G is explicitly computed as the solution of a linear system of equations. For the computed approximation λ + δλ of any exact eigenvalue λ, the relative error |δλ|/λ is of order p(n)·ε·max{min_{Δ∈D} κ_2(ΔHΔ), min_{Δ∈D} κ_2(ΔMΔ)}, where p(n) is a modestly growing polynomial in the dimension of the problem, ε is the round-off unit of floating-point arithmetic, D denotes the set of diagonal nonsingular matrices, and κ_2(·) is the spectral condition number. Furthermore, the floating-point computation corresponds to an exact computation with H + δH, M + δM, where, for all i, j, |δH_ij|/√(H_ii H_jj) and |δM_ij|/√(M_ii M_jj) are of order ε times a modest function of n.
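The key identity behind the second algorithm is easy to check numerically: with upper-triangular Cholesky factors H = L_H^T L_H and M = L_M^T L_M, the eigenvalues of Hx = λMx are the squares of the singular values of G = L_H L_M^{-1}, and G can be obtained from a triangular solve rather than an explicit inverse. The sketch below verifies this on a random positive definite pair; the data are toy values, the scaling/pivoting refinements of the paper are omitted, and a dense SVD stands in for the Jacobi method.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular, eigh

rng = np.random.default_rng(4)
n = 5
A = rng.standard_normal((n, n)); H = A @ A.T + n * np.eye(n)   # symmetric positive definite
B = rng.standard_normal((n, n)); M = B @ B.T + n * np.eye(n)

# Upper-triangular Cholesky factors: H = L_H^T L_H, M = L_M^T L_M.
L_H = cholesky(H)
L_M = cholesky(M)

# G = L_H L_M^{-1}, computed as the solution of the triangular system L_M^T G^T = L_H^T.
G = solve_triangular(L_M, L_H.T, trans='T', lower=False).T

# Eigenvalues of H x = lambda M x are the squared singular values of G.
lam_from_svd = np.sort(np.linalg.svd(G, compute_uv=False) ** 2)
lam_reference = np.sort(eigh(H, M, eigvals_only=True))
print(np.allclose(lam_from_svd, lam_reference))
```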
Discriminant analysis has been used for decades to extract features that preserve class separability. It is commonly defined as an optimization problem involving covariance matrices that represent the scatter within and between clusters. The requirement that one of these matrices be nonsingular limits its application to data sets with certain relative dimensions. We examine a number of optimization criteria, and extend their applicability by using the generalized singular value decomposition to circumvent the nonsingularity requirement. The result is a generalization of discriminant analysis that can be applied even when the sample size is smaller than the dimension of the sample data. We use classification results from the reduced representation to compare the effectiveness of this approach with some alternatives, and conclude with a discussion of their relative merits.
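A compact sketch of one standard GSVD-based route (factoring the scatter matrices, stacking the factors, and working with the SVD of the stack) is given below; it follows the general recipe described here, but the variable names, tolerance, and column counts are illustrative assumptions rather than the paper's exact algorithm. Because the within-class scatter is never inverted, the construction also runs in the undersampled case where the sample size is smaller than the data dimension.

```python
import numpy as np

def lda_gsvd(X, y, tol=1e-10):
    """Discriminant directions via a GSVD-style construction.
    X: d x N data matrix (columns are samples), y: length-N labels."""
    d, N = X.shape
    classes = np.unique(y)
    c = X.mean(axis=1, keepdims=True)
    Hb, Hw = [], []
    for cls in classes:                      # factor the scatter matrices:
        Xi = X[:, y == cls]                  #   S_b = Hb Hb^T, S_w = Hw Hw^T
        ci = Xi.mean(axis=1, keepdims=True)
        Hb.append(np.sqrt(Xi.shape[1]) * (ci - c))
        Hw.append(Xi - ci)
    Hb, Hw = np.hstack(Hb), np.hstack(Hw)

    K = np.vstack([Hb.T, Hw.T])              # stacked factors, (k + N) x d
    P, s, Qt = np.linalg.svd(K, full_matrices=False)
    t = int(np.sum(s > tol * s[0]))          # numerical rank of the stack
    P1 = P[:Hb.shape[1], :t]                 # rows belonging to Hb^T
    _, _, Wt = np.linalg.svd(P1)             # orders directions by generalized singular value
    Xmat = Qt[:t].T @ np.diag(1.0 / s[:t]) @ Wt.T
    q = len(classes) - 1                     # at most k-1 discriminant directions
    return Xmat[:, :q]

# Undersampled toy problem: dimension 50, only 30 samples, 3 classes.
rng = np.random.default_rng(5)
means = rng.standard_normal((3, 50)) * 3
X = np.hstack([(means[c] + rng.standard_normal((10, 50))).T for c in range(3)])
y = np.repeat([0, 1, 2], 10)
G = lda_gsvd(X, y)
print(G.shape)      # (50, 2): a 2-D discriminant subspace despite a singular within-class scatter
```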
A powerful method for solving planar eigenvalue problems is the method of particular solutions (MPS), which is also well known under the name "point matching method." The implementation of this method usually depends on the solution of one of three types of linear algebra problems: singular value decomposition, generalized eigenvalue decomposition, or generalized singular value decomposition. We compare and give geometric interpretations of these different variants of the MPS. It turns out that the most stable and accurate of them is based on the generalized singular value decomposition. We present results to this effect and demonstrate the behavior of the generalized singular value decomposition in the presence of a highly ill-conditioned basis of particular solutions.
In this paper, we discuss the sensitivity of multiple nonzero finite generalized singular values and the corresponding generalized singular matrix set of a real matrix pair that depends analytically on several parameters. From our results, the partial derivatives of multiple nonzero singular values and their left and right singular vector matrices are obtained.
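As a quick numerical counterpart to such sensitivity results, the sketch below differentiates the nonzero generalized singular values of a parameter-dependent pair by central finite differences. The matrices are toy data, the values are assumed simple (distinct) so that their ordering is stable across the perturbation, and the analytic derivative formulas for the multiple-singular-value case treated in the paper are not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def gen_singular_values(A, B):
    # Generalized singular values of (A, B) as square roots of the generalized
    # eigenvalues of (A^T A, B^T B); requires B of full column rank.
    return np.sqrt(eigh(A.T @ A, B.T @ B, eigvals_only=True))

rng = np.random.default_rng(6)
A0, A1 = rng.standard_normal((6, 3)), rng.standard_normal((6, 3))
B0, B1 = rng.standard_normal((6, 3)), rng.standard_normal((6, 3))
A = lambda p: A0 + p * A1            # pair depending analytically on a parameter p
B = lambda p: B0 + p * B1

p, h = 0.3, 1e-6
# Central differences approximate the partial derivatives d(sigma_i)/dp.
dsigma = (gen_singular_values(A(p + h), B(p + h))
          - gen_singular_values(A(p - h), B(p - h))) / (2.0 * h)
print(dsigma)
```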