Randomized algorithms provide a powerful tool for scientific computing. Compared with standard deterministic algorithms, randomized algorithms are often faster and more robust. The main purpose of this paper is to design adaptive randomized algorithms for computing approximate tensor decompositions. We give an adaptive randomized algorithm for computing a low multilinear rank approximation of tensors with unknown multilinear rank and analyze its probabilistic error bound under certain assumptions. Finally, we design an adaptive randomized algorithm for computing tensor train approximations of tensors. Based on bounds on the singular values of sub-Gaussian matrices with independent columns or independent rows, we analyze these randomized algorithms. We illustrate our adaptive randomized algorithms via several numerical examples.
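The low multilinear rank approximation mentioned above can be sketched with a standard randomized range finder applied to each mode unfolding. This is a minimal fixed-rank illustration, not the paper's adaptive algorithm (which would grow the rank until an error tolerance is met); the function names, the oversampling parameter, and the Gaussian test matrices are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: arrange the mode-n fibers as columns of a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_product(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def randomized_tucker(T, ranks, oversample=5, seed=0):
    """Fixed-rank randomized low multilinear rank (Tucker) approximation.

    For each mode, sketch the unfolding with a Gaussian test matrix and keep
    the leading left singular vectors of the sketch as the factor matrix;
    the core is T contracted with the factor transposes.
    """
    rng = np.random.default_rng(seed)
    factors = []
    for mode, r in enumerate(ranks):
        A = unfold(T, mode)
        Omega = rng.standard_normal((A.shape[1], r + oversample))
        U, _, _ = np.linalg.svd(A @ Omega, full_matrices=False)
        factors.append(U[:, :r])
    G = T
    for mode, U in enumerate(factors):
        G = mode_product(G, U.T, mode)
    return G, factors

def tucker_to_tensor(G, factors):
    """Reassemble the full tensor from a core and factor matrices."""
    T = G
    for mode, U in enumerate(factors):
        T = mode_product(T, U, mode)
    return T

# Demo on a tensor with exact multilinear rank (2, 3, 2): the randomized
# approximation should then recover it up to rounding error.
rng = np.random.default_rng(1)
Us = [np.linalg.qr(rng.standard_normal((n, r)))[0]
      for n, r in zip((10, 12, 8), (2, 3, 2))]
T_true = tucker_to_tensor(rng.standard_normal((2, 3, 2)), Us)
G, F = randomized_tucker(T_true, (2, 3, 2))
rel_err = np.linalg.norm(T_true - tucker_to_tensor(G, F)) / np.linalg.norm(T_true)
```

Because each sketch only requires one multiplication with the unfolding, this costs far less than a full SVD of each unfolding when the target ranks are small.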
In the context of numerical constrained optimization, we investigate stochastic algorithms, in particular evolution strategies, that handle constraints via augmented Lagrangian approaches. In those approaches, the original constrained problem is turned into an unconstrained one, and the function optimized is an augmented Lagrangian whose parameters are adapted during the optimization. The use of an augmented Lagrangian, however, breaks a central invariance property of evolution strategies, namely invariance to strictly increasing transformations of the objective function. Nevertheless, we formalize that an evolution strategy with augmented Lagrangian constraint handling should preserve invariance to strictly increasing affine transformations of the objective function and to scaling of the constraints, a subclass of strictly increasing transformations. We show that this invariance property is important for the linear convergence of these algorithms and show how both properties are connected. (C) 2018 Elsevier B.V. All rights reserved.
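The augmented Lagrangian construction described above can be sketched on a toy equality-constrained problem: an inner (1+1)-ES minimizes the augmented Lagrangian for the current multiplier, and an outer loop updates the multiplier. The 1/5th-rule constants, the penalty factor `mu`, and the multiplier update schedule are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def one_plus_one_es(L, x0, sigma0=0.5, iters=300, seed=0):
    """Minimize L with a (1+1)-ES using a simple 1/5th success rule."""
    rng = np.random.default_rng(seed)
    x, fx, sigma = x0.copy(), L(x0), sigma0
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.size)
        fy = L(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= 1.2   # success: enlarge the step size
        else:
            sigma *= 0.96  # failure: shrink the step size
    return x

def augmented_lagrangian_es(f, h, x0, mu=10.0, outer=15, seed=0):
    """Minimize f subject to h(x) = 0 via an augmented Lagrangian.

    Inner loop: ES on L(x) = f(x) + lam*h(x) + (mu/2)*h(x)^2 for the
    current multiplier lam; outer loop: lam <- lam + mu * h(x).
    """
    lam, x = 0.0, x0
    for k in range(outer):
        L = lambda z: f(z) + lam * h(z) + 0.5 * mu * h(z) ** 2
        x = one_plus_one_es(L, x, seed=seed + k)
        lam += mu * h(x)
    return x, lam

# Toy problem: minimize ||x||^2 subject to x[0] = 1; the optimum is
# x = (1, 0) with multiplier lam = -2.
f = lambda x: float(x @ x)
h = lambda x: float(x[0] - 1.0)
x_star, lam_star = augmented_lagrangian_es(f, h, np.zeros(2))
```

Note the invariance discussed in the abstract: replacing `f` by `a*f + b` with `a > 0` and `h` by `c*h` rescales `lam` and `mu` but, with correspondingly adapted parameters, leaves the trajectory of comparisons made by the ES unchanged.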