This paper presents a fast algorithm to solve a spectral estimation problem for two-dimensional random fields. The latter is formulated as a convex optimization problem with the Itakura-Saito pseudodistance as the objective function, subject to the constraints of moment equations. The structure of the Hessian of the dual objective function is exploited to enable a fast Newton solver. The Newton solver is then incorporated into a predictor-corrector numerical continuation method that produces a parametrized family of solutions to the moment equations. Two sets of numerical simulations are performed to test the algorithm and the spectral estimator. The simulations on the frequency estimation problem show that our spectral estimator outperforms classical windowed periodograms in the case of two hidden frequencies and offers higher resolution. The other set of simulations, on system identification, indicates that the numerical continuation method is more robust than Newton's method alone in ill-conditioned instances.
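The predictor-corrector continuation idea described above can be illustrated with a minimal sketch: trace a solution branch of F(x, t) = 0 by taking a first-order (Euler) predictor step along the curve and then correcting with Newton iterations at the new parameter value. This toy scalar example is an assumption for illustration only, not the paper's two-dimensional spectral estimation algorithm or its fast Hessian exploitation.

```python
import numpy as np

def continuation(F, dFdx, dFdt, x0, t0, t1, steps=20, newton_iters=10, tol=1e-10):
    """Trace solutions of F(x, t) = 0 as t moves from t0 to t1.

    Predictor: Euler step using the implicit-function tangent
    dx/dt = -dFdt / dFdx.  Corrector: Newton's method on x -> F(x, t).
    """
    xs, x = [x0], x0
    ts = np.linspace(t0, t1, steps + 1)
    for t_prev, t in zip(ts[:-1], ts[1:]):
        # Predictor: first-order step along the solution curve.
        x = x - (t - t_prev) * dFdt(x, t_prev) / dFdx(x, t_prev)
        # Corrector: Newton iterations at the new parameter value t.
        for _ in range(newton_iters):
            r = F(x, t)
            if abs(r) < tol:
                break
            x = x - r / dFdx(x, t)
        xs.append(x)
    return ts, np.array(xs)

# Toy problem: F(x, t) = x^2 - t, whose solution branch is x = sqrt(t).
F = lambda x, t: x**2 - t
dFdx = lambda x, t: 2.0 * x
dFdt = lambda x, t: -1.0
ts, xs = continuation(F, dFdx, dFdt, x0=1.0, t0=1.0, t1=4.0)
print(abs(xs[-1] - 2.0) < 1e-8)  # endpoint matches sqrt(4) = 2
```

The corrector keeps each point on the solution manifold, which is why continuation tends to be more robust than a single cold-started Newton solve in ill-conditioned instances.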
This paper reveals that a common and central role, played in many error bound (EB) conditions and a variety of gradient-type methods, is a residual measure operator. On one hand, by linking this operator with other optimality measures, we define a group of abstract EB conditions and then analyze the interplay between them; on the other hand, by using this operator as a descent direction, we propose an abstract gradient-type method and then derive EB conditions that are necessary and sufficient for its linear convergence. The former provides a unified framework that not only allows us to find new connections between many existing EB conditions, but also paves the way to construct new ones. The latter allows us to identify the weakest conditions guaranteeing linear convergence for a number of fundamental algorithms, including the gradient method, the proximal point algorithm, and the forward-backward splitting algorithm. In addition, we show linear convergence of the proximal alternating linearized minimization algorithm under a group of equivalent EB conditions, which are strictly weaker than the traditional strong convexity condition. Moreover, by defining a new EB condition, we show Q-linear convergence of Nesterov's accelerated forward-backward algorithm without strong convexity. Finally, we verify EB conditions for a class of dual objective functions.
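The connection between an error bound condition and linear convergence of a gradient-type method can be sketched numerically. The example below is an assumption for illustration, not the paper's abstract framework: it uses the Polyak-Lojasiewicz (PL) inequality, a well-known EB-type condition strictly weaker than strong convexity, under which gradient descent on an L-smooth function satisfies f(x_{k+1}) - f* <= (1 - mu/L)(f(x_k) - f*).

```python
import numpy as np

def gradient_descent(grad, x0, L, iters):
    """Run gradient descent with the standard 1/L step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        # The gradient plays the role of a residual measure: it vanishes
        # exactly at solutions and drives the update direction.
        x = x - grad(x) / L
    return x

# Quadratic f(x) = 0.5 * x^T A x with f* = 0; the PL constant mu and the
# smoothness constant L are the extreme eigenvalues of A.
A = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
L, mu = 10.0, 1.0

x0 = np.array([1.0, 1.0])
gaps = [f(gradient_descent(grad, x0, L, k)) for k in range(6)]
ratios = [later / earlier for earlier, later in zip(gaps, gaps[1:])]
print(all(r <= 1 - mu / L + 1e-12 for r in ratios))  # Q-linear rate holds
```

Every per-iteration contraction ratio stays below 1 - mu/L, matching the predicted Q-linear rate; the PL inequality holds for functions that are not strongly convex (e.g. least squares with a rank-deficient design), which is the spirit of replacing strong convexity with weaker EB conditions.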