A new algorithm is presented for carrying out the large-scale unconstrained optimization required in variational data assimilation using the Newton method. The algorithm is referred to as the adjoint Newton algorithm. The adjoint Newton algorithm is based on first- and second-order adjoint techniques, which allow the Newton line-search direction to be obtained by integrating a tangent linear model backwards in time (starting from a final condition with negative time steps). The error incurred by approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables, as is done in quasi-Newton-type algorithms, is thus completely eliminated, while the storage problem associated with the Hessian no longer arises since the explicit Hessian is not required in this algorithm. The adjoint Newton algorithm is applied to three one-dimensional models and to a two-dimensional limited-area shallow water equations model with both model-generated and First Global Geophysical Experiment data. We compare the performance of the adjoint Newton algorithm with that of the truncated Newton, adjoint truncated Newton, and L-BFGS methods. Our numerical tests indicate that the adjoint Newton algorithm is very efficient and could find the minima within three or four iterations for the problems tested here. In the case of the two-dimensional shallow water equations model, the adjoint Newton algorithm improves upon the efficiencies of the truncated Newton and L-BFGS methods by a factor of at least 14 in terms of the CPU time required to satisfy the same convergence criterion. The Newton, truncated Newton, and L-BFGS methods are general-purpose unconstrained minimization methods. The adjoint Newton algorithm is only useful for optimal control problems where the model equations serve as strong constraints and their corresponding tangent linear model may be integrated backwards in time. When the backwards integration of the tangent linear model is ill-posed in the s…
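The core idea, computing exact Newton directions from Hessian-vector products rather than from an explicit (stored or approximated) Hessian, can be illustrated outside any data-assimilation context with a matrix-free Newton-CG sketch. The toy quadratic cost, the finite-difference `hvp`, and the `newton_cg` routine below are illustrative stand-ins, not the paper's method; in the adjoint Newton algorithm the Hessian-vector product would instead come exactly from a second-order adjoint integration.

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])  # SPD Hessian of the toy quadratic cost
b = np.array([1.0, 1.0])

def grad(x):
    # Gradient of the toy cost 0.5 x'Ax - b'x.
    return A @ x - b

def hvp(x, v, eps=1e-6):
    # Matrix-free Hessian-vector product by differencing the gradient.
    # A second-order adjoint supplies this product exactly, so the
    # explicit Hessian is never formed or stored.
    return (grad(x + eps * v) - grad(x)) / eps

def newton_cg(x, newton_iters=10, cg_iters=50, tol=1e-10):
    # Inexact Newton: solve H d = -g by conjugate gradients, using only
    # Hessian-vector products, then take the Newton step.
    for _ in range(newton_iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = np.zeros_like(x)
        r, p = -g.copy(), -g.copy()
        rs = r @ r
        for _ in range(cg_iters):
            Hp = hvp(x, p)
            alpha = rs / (p @ Hp)
            d += alpha * p
            r -= alpha * Hp
            rs_new = r @ r
            if rs_new < tol ** 2:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        x = x + d  # a line search would safeguard this step in practice
    return x

x_star = newton_cg(np.zeros(2))  # converges to the minimizer A^{-1} b
```

Because the cost here is quadratic, a single exact Newton step reaches the minimizer; for the nonlinear model constraints treated in the paper, the backwards tangent-linear integration plays the role of `hvp`.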
Author:
Liu, Yang, Univ Maryland
Dept Human Dev & Quantitat Methodol, 12308 Benjamin Bldg, 3942 Campus Dr, College Pk, MD 20742, USA
In exploratory factor analysis, latent factors and factor loadings are seldom interpretable until analytic rotation is performed. Typically, the rotation problem is solved by numerically searching for an element in the manifold of orthogonal or oblique rotation matrices such that the rotated factor loadings minimize a pre-specified complexity function. The widely used gradient projection (GP) algorithm, although simple to program and able to handle both orthogonal and oblique rotation, is found to suffer from slow convergence when the number of manifest variables and/or the number of latent factors is large. The present work examines the effectiveness of two Riemannian second-order algorithms, which respectively generalize the well-established truncated Newton and trust-region strategies for unconstrained optimization in Euclidean spaces, in solving the rotation problem. When approaching a local minimum, the second-order algorithms usually converge superlinearly or even quadratically, better than first-order algorithms, which only converge linearly. It is further observed in Monte Carlo studies that, compared to the GP algorithm, the Riemannian truncated Newton and trust-region algorithms require not only far fewer iterations but also much less processing time to meet the same convergence criterion, especially in the case of oblique rotation.
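The first-order baseline discussed above can be sketched in a few lines: a Jennrich-style gradient projection step for orthogonal rotation takes a Euclidean gradient step on the rotation matrix and projects back onto the orthogonal group with an SVD. The quartimax criterion, step-size schedule, and function names below are illustrative assumptions for a minimal sketch, not the authors' implementation or their Riemannian second-order algorithms.

```python
import numpy as np

def quartimax(L):
    # Quartimax complexity: minimizing it drives each row of the rotated
    # loadings toward a few large and many near-zero entries.
    return -np.sum(L ** 4) / 4.0

def quartimax_grad(L):
    # Gradient of the quartimax criterion with respect to the loadings.
    return -L ** 3

def gp_rotate(A, max_iter=500, alpha=1.0, tol=1e-8):
    # Gradient projection for orthogonal rotation: step along the Euclidean
    # gradient in T, then project onto the orthogonal group via SVD.
    k = A.shape[1]
    T = np.eye(k)
    f = quartimax(A @ T)
    for _ in range(max_iter):
        G = A.T @ quartimax_grad(A @ T)   # gradient w.r.t. the rotation T
        for _ in range(20):               # backtracking on the step size
            U, _, Vt = np.linalg.svd(T - alpha * G)
            T_new = U @ Vt                # nearest orthogonal matrix
            f_new = quartimax(A @ T_new)
            if f_new < f - 1e-12:
                break
            alpha *= 0.5
        converged = abs(f - f_new) < tol
        T, f = T_new, f_new
        if converged:
            break
        alpha *= 2.0                      # let the step grow again
    return T, A @ T

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))          # synthetic unrotated loadings
T, L = gp_rotate(A)                       # T'T = I; complexity non-increasing
```

Each iteration costs one gradient evaluation plus a small k-by-k SVD; the linear convergence of such first-order steps is precisely what the Riemannian truncated Newton and trust-region methods studied here are designed to improve upon.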