In this paper, we present a three-term conjugate gradient algorithm in which three approaches are used: (i) a modified weak Wolfe-Powell (MWWP) line search technique is introduced to obtain the stepsize alpha_k; (ii) the search direction d_k is given by a symmetric Perry matrix containing two positive parameters, and the sufficient descent property of the generated directions holds independently of the MWWP line search technique; (iii) a parabola is proposed and regarded as the projection surface, and the next point x_{k+1} is generated by a new projection technique. The global convergence of the new algorithm under the MWWP line search is obtained for general functions. Numerical experiments show that the given algorithm is promising.
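The weak Wolfe-Powell conditions at the heart of step (i) can be sketched as follows. This is a minimal bisection-based sketch of the classical (unmodified) conditions only, with hypothetical parameter choices c1, c2 and function names of our own; the paper's MWWP technique modifies these conditions in ways not reproduced here.

```python
# Sketch of a standard weak Wolfe-Powell line search via bisection.
# Illustrative only: the *modified* WWP (MWWP) technique of the paper
# differs in details not reproduced here.
def weak_wolfe(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
    """Return a stepsize alpha satisfying the weak Wolfe conditions."""
    lo, hi, alpha = 0.0, float("inf"), 1.0
    fx = f(x)
    gxd = sum(gi * di for gi, di in zip(grad(x), d))  # directional derivative
    for _ in range(max_iter):
        xa = [xi + alpha * di for xi, di in zip(x, d)]
        if f(xa) > fx + c1 * alpha * gxd:  # Armijo condition fails: shrink
            hi = alpha
        elif sum(gi * di for gi, di in zip(grad(xa), d)) < c2 * gxd:
            lo = alpha                      # curvature condition fails: grow
        else:
            return alpha                    # both weak Wolfe conditions hold
        alpha = (lo + hi) / 2 if hi < float("inf") else 2 * lo
    return alpha
```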
It is gradually accepted that the loss of orthogonality of the gradients in a conjugate gradient algorithm may decelerate the convergence rate to some extent. The Dai-Kou conjugate gradient algorithm (SIAM J Optim 23(1):296-320, 2013), called CGOPT, has attracted many researchers' attention due to its numerical efficiency. In this paper, we present an improved Dai-Kou conjugate gradient algorithm for unconstrained optimization, which consists of only two kinds of iterations. In the improved algorithm, we develop a new quasi-Newton method that improves orthogonality by solving a subproblem in the subspace, and we design a modified strategy for choosing the initial stepsize to improve the numerical performance. The global convergence of the improved Dai-Kou conjugate gradient algorithm is established without the strict assumptions used in the convergence analysis of other limited memory conjugate gradient methods. Some numerical results suggest that the improved Dai-Kou conjugate gradient algorithm (CGOPT (2.0)) yields a tremendous improvement over the original Dai-Kou CG algorithm (CGOPT (1.0)) and is slightly superior to the latest limited memory conjugate gradient software package CG_DESCENT (6.8) developed by Hager and Zhang (SIAM J Optim 23(4):2150-2168, 2013) on the CUTEr library.
The Toeplitz matrix T_n with generating function f(omega) = |1 - e^{-i omega}|^{-2d} h(omega), where d is an element of (-1/2, 1/2) \ {0} and h(omega) is positive, continuous on [-pi, pi], and differentiable on [-pi, pi] \ {0}, has a Fisher-Hartwig singularity [M. E. Fisher and R. E. Hartwig (1968), Adv. Chem. Phys., 32, pp. 190-225]. The complexity of the preconditioned conjugate gradient (PCG) algorithm is known [R. H. Chan and M. Ng (1996), SIAM Rev., 38, pp. 427-482] to be O(n log n) for Toeplitz systems when d = 0. However, the effect of the Fisher-Hartwig singularity in T_n on the PCG algorithm has not been explored in the literature. We show that the complexity of the conjugate gradient (CG) algorithm for solving T_n x = b without any preconditioning grows asymptotically as n^{1+|d|} log(n). With T. Chan's optimal circulant preconditioner C_n [T. Chan (1988), SIAM J. Sci. Statist. Comput., 9, pp. 766-771], the complexity of the PCG algorithm is O(n log^3(n)).
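T. Chan's optimal circulant preconditioner is cheap to form: its first column is a weighted average of the Toeplitz diagonals. A minimal sketch for a symmetric Toeplitz matrix given by its first column (the function name is ours):

```python
# Sketch of T. Chan's optimal circulant preconditioner C_n for a
# symmetric Toeplitz matrix with first column t = [t_0, ..., t_{n-1}].
# The first column of C_n has entries c_j = ((n - j) t_j + j t_{n-j}) / n.
def chan_circulant_first_column(t):
    n = len(t)
    c = [t[0]]
    for j in range(1, n):
        c.append(((n - j) * t[j] + j * t[n - j]) / n)
    return c
```

In a PCG iteration the preconditioner solve C_n z = r is then carried out by diagonalizing C_n with FFTs, which is where the O(n log n) cost per iteration comes from.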
The conjugate gradient algorithm is well suited for vector computation but, because of its many synchronization points and relatively short message packets, is more difficult to implement for parallel computation. In this work we introduce a parallel implementation of the block conjugate gradient algorithm. In this algorithm, we carry a block of vectors along at each iteration, reducing the number of iterations and increasing the length of each message. On machines with relatively costly message passing, this algorithm is a significant improvement over the standard conjugate gradient algorithm.
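For contrast, the standard single-vector CG iteration can be sketched as below; the two inner products per iteration are the global reductions whose synchronization cost the block variant amortizes by carrying several vectors at once. This is a generic textbook sketch, not the paper's parallel implementation.

```python
# Sketch of standard (single-vector) conjugate gradient for A x = b with
# SPD A (dense, as a list of lists). Each iteration performs two inner
# products -- the global reductions that dominate parallel overhead.
def cg(A, b, tol=1e-10, max_iter=1000):
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # r = b - A x  with x = 0
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```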
The linear mixing model has been considered in most of the previous research devoted to the blind source separation (BSS) problem. In practice, a more realistic BSS mixing model is the non-linear one. In this paper, we propose a non-linear BSS method in which a two-layer perceptron network is employed as the separating system to separate sources from observed non-linear mixture signals. The learning rules for the parameters of the separating system are derived from the minimum mutual information criterion with a conjugate gradient algorithm. Instead of choosing a proper non-linear function empirically, adaptive kernel density estimation is used to estimate the probability density functions of the separated signals and their derivatives. As a result, the score function of the perceptron's outputs can be estimated directly. Simulations show good performance of the proposed non-linear BSS algorithm.
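The kernel-density route to the score function can be sketched as follows, assuming a Gaussian kernel with Silverman's rule-of-thumb bandwidth (both our choices for illustration, not necessarily the paper's adaptive variant):

```python
# Sketch: estimate the score function psi(y) = -d/dy log p(y) from samples
# via Gaussian kernel density estimation, as used in place of a hand-picked
# non-linearity. Bandwidth h follows Silverman's rule of thumb.
import math

def kde_score(samples, y):
    n = len(samples)
    mean = sum(samples) / n
    std = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
    h = 1.06 * std * n ** (-0.2)             # Silverman's rule of thumb
    k = lambda u: math.exp(-0.5 * u * u)     # unnormalized Gaussian kernel
    p = sum(k((y - s) / h) for s in samples)                       # ~ p(y)
    dp = sum(-(y - s) / h ** 2 * k((y - s) / h) for s in samples)  # ~ p'(y)
    return -dp / p                           # psi(y) = -p'(y) / p(y)
```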
In this study, we discover the parallelism of the forward/backward substitutions (FBS) for two cases and thus propose an efficient preconditioned conjugate gradient algorithm with the modified incomplete Cholesky preconditioner on the GPU (GPUMICPCGA). Our proposed GPUMICPCGA has the following distinct characteristics: (1) the vector operations are optimized by grouping several vector operations into single kernels, (2) a new kernel for the inner product and a new, highly optimized kernel for sparse matrix-vector multiplication are presented, and (3) an efficient parallel implementation of FBS on the GPU (GPUFBS) for the two cases is suggested. Numerical results show that our proposed kernels outperform the corresponding ones in CUBLAS and CUSPARSE, and GPUFBS is almost 3 times faster than the implementation of FBS using the CUSPARSE library. Furthermore, GPUMICPCGA behaves better than its counterpart implemented with the CUBLAS and CUSPARSE libraries. (C) 2013 Elsevier Inc. All rights reserved.
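The forward/backward substitutions being parallelized are the sequential triangular solves used to apply an incomplete Cholesky preconditioner M = L L^T (solve L y = r, then L^T z = y). A minimal sequential sketch (pure Python, our naming) showing the loop-carried dependence that makes FBS hard to parallelize:

```python
# Sequential forward/backward substitutions for M = L L^T.
# Each y[i] depends on all earlier y[j] -- the dependence the paper's
# GPUFBS kernels exploit structure to break in their two special cases.
def forward_sub(L, r):           # L lower triangular: solve L y = r
    n = len(r)
    y = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * y[j] for j in range(i))
        y[i] = (r[i] - s) / L[i][i]
    return y

def backward_sub(L, y):          # solve L^T z = y
    n = len(y)
    z = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(L[j][i] * z[j] for j in range(i + 1, n))
        z[i] = (y[i] - s) / L[i][i]
    return z
```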
Quaternions are a tool used to describe motions of rigid bodies in R^3 (Kuipers, [15]). An interesting application is the topic of moving surfaces (Traversoni, [21]), where quaternion interpolation is used, which requires solving equations with quaternion coefficients. In this paper we investigate the well-known conjugate gradient algorithm (cg-algorithm) introduced by Hestenes and Stiefel [10], applied to quaternion-valued, Hermitian, positive definite matrices. We shall show that the features known from the real case remain valid in the quaternion case. These features are: error propagation, early stopping, the cg-algorithm as an iterative process with error estimates, and applicability to indefinite matrices. We have to present some basic facts about quaternions and about matrices with quaternion entries, in particular about eigenvalues of such matrices. We also present some numerical examples of quaternion systems solved by the cg-algorithm.
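Quaternion arithmetic, the scalar building block for such matrices, can be sketched with 4-tuples (a, b, c, d) representing a + bi + cj + dk; note the non-commutativity, i j = k but j i = -k, which is what distinguishes the quaternion cg-algorithm from the real and complex cases:

```python
# Quaternion multiplication and conjugation for 4-tuples (a, b, c, d)
# representing a + b i + c j + d k. Multiplication is associative but
# NOT commutative: qmul(p, q) != qmul(q, p) in general.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)
```

As in the complex case, qmul(q, qconj(q)) is real and equals the squared norm of q, which is what makes "Hermitian positive definite" meaningful for quaternion matrices.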
In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm have been used to determine the depth of anesthesia in the continuation stage of anesthesia and to estimate the amount of medicine to be applied at that moment. Feed-forward neural networks are also used for comparison. The conjugate gradient algorithm is compared with back propagation (BP) for training the neural networks. The applied artificial neural network is composed of three layers, namely the input layer, the hidden layer and the output layer. The non-linear sigmoid activation function has been used in the hidden layer and the output layer. EEG data have been recorded with a Nihon Kohden 9200 brand 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in placing the recording electrodes. EEG data have been recorded by being sampled once every 2 milliseconds. The artificial neural network has been designed so as to have 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The inputs are the power spectral density (PSD) values of 10-second EEG segments corresponding to the 1-50 Hz frequency range, and the ratio of the total PSD power of the EEG segment at that moment in the same range to the total PSD power of an EEG segment taken prior to anesthesia.
The conjugate gradient (CG) algorithm is the most frequently used iterative method for solving linear systems Ax = b with a symmetric positive definite (SPD) matrix. In this paper we construct real symmetric positive definite matrices A of order n and real right-hand sides b for which the CG algorithm has a prescribed residual norm convergence curve. We also consider prescribing the A-norms of the error as well. We completely characterize the tridiagonal matrices constructed by the Lanczos algorithm and their inverses in terms of the CG residual norms and A-norms of the error. This also gives expressions and lower bounds for the l_2 norm of the error. Finally, we study the problem of prescribing both the CG residual norms and the eigenvalues of A. We show that this is not always possible. Our constructions are illustrated by numerical examples.
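One classical identity underlying such constructions (going back to Hestenes and Stiefel) links the A-norm of the error at step k to the later CG stepsizes alpha_j and residual norms; for a process that terminates in n steps,

```latex
\|x_* - x_k\|_A^2 \;=\; \sum_{j=k}^{n-1} \alpha_j \, \|r_j\|_2^2 ,
```

which is why prescribing the residual norms and the stepsizes together determines the A-norms of the error.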
This article addresses two issues. First, the convergence property of the conjugate gradient (CG) algorithm is investigated by a Chebyshev polynomial approximation. The analysis shows that its convergence behaviour is affected by an acceleration term over the steepest descent (SD) algorithm. Second, a new CG algorithm is proposed in order to boost the tracking capability for time-varying parameters. The proposed algorithm, based on re-initialising the forgetting factor, shows a fast tracking ability and a noise-immunity property when it encounters an unexpected parameter change. The fast tracking capability is verified through a computer simulation in a system identification problem.
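The Chebyshev polynomial approximation referred to here is the one behind the classical CG convergence bound for an SPD system with condition number kappa = lambda_max / lambda_min:

```latex
\|x_* - x_k\|_A \;\le\; 2 \left( \frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1} \right)^{k} \|x_* - x_0\|_A ,
```

compared with the factor ((kappa - 1)/(kappa + 1))^k for steepest descent; the improvement from kappa to sqrt(kappa) is the "acceleration term" of the analysis.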