We present an approach to regularize and approximate solution mappings of parametric convex optimization problems that combines interior penalty (log-barrier) solutions with Tikhonov regularization. Because the regularized mappings are single-valued and smooth under reasonable conditions, they can be used to build a computationally practical smoothing for the associated optimal value function. The value function in question, while resulting from parameterized convex problems, need not be convex. One motivating application of interest is two-stage (possibly nonconvex) stochastic programming. We show that our approach, being computationally implementable, provides locally bounded upper bounds for the subdifferential of the value function of qualified convex problems. As a by-product of our development, we also recover that in the given setting the value function is locally Lipschitz continuous. Numerical experiments are presented for two-stage convex stochastic programming problems, comparing the approach with the bundle method for nonsmooth optimization.
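To make the construction concrete, the following minimal Python sketch applies the log-barrier-plus-Tikhonov idea to a one-dimensional parametric linear program; the toy problem, the barrier parameter mu, the Tikhonov weight eps, and the helper regularized_value are illustrative assumptions, not taken from the paper.

```python
# A minimal 1-D sketch, not the paper's implementation: the parametric LP
#   v(p) = min { x : x >= p, x >= 0 } = max(p, 0)
# has a nonsmooth value function.  Replacing the constraints by a log-barrier
# with parameter mu and adding a Tikhonov term (eps/2) * x**2 gives a smooth,
# single-valued regularized mapping whose optimal value approximates v(p).
import numpy as np
from scipy.optimize import minimize_scalar

def regularized_value(p, mu=1e-2, eps=1e-3):
    """Smoothed value: log-barrier for x >= p and x >= 0 plus a Tikhonov term."""
    def obj(x):
        return x - mu * np.log(x - p) - mu * np.log(x) + 0.5 * eps * x**2
    lo = max(p, 0.0) + 1e-9                     # strictly feasible lower bound
    res = minimize_scalar(obj, bounds=(lo, lo + 10.0), method="bounded")
    return res.fun, res.x

for p in (-0.5, 0.0, 0.5):
    val, x_star = regularized_value(p)
    print(f"p={p:+.2f}  smoothed value {val:.4f}  true value {max(p, 0.0):.4f}")
```

As mu and eps are driven to zero, the smoothed value approaches the true piecewise-linear value function while remaining differentiable in the parameter p.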
The authors prove the gradient convergence of the deep learning-based numerical method for high-dimensional parabolic partial differential equations and backward stochastic differential equations, which is based on time discretization of stochastic differential equations (SDEs for short) and the stochastic approximation method for nonconvex stochastic programming problems. They take the stochastic gradient descent method, quadratic loss function, and sigmoid activation function in the neural-network setting. Combining classical techniques of randomized stochastic gradients, the Euler scheme for SDEs, and convergence of neural networks, they obtain an O(K^(-1/4)) rate of gradient convergence, with K being the total number of iteration steps.
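To illustrate the ingredients named above (Euler time discretization, a one-hidden-layer sigmoid network, quadratic loss, and plain stochastic gradient descent), here is a toy Python sketch that regresses a terminal payoff g(X_T) of an Euler-discretized SDE onto the initial state; it is not the authors' scheme for PDEs/BSDEs, and every name in it (g, sigma, T, N, H, lr) is an illustrative assumption.

```python
# Toy sketch only: learn u(x) ~ E[g(X_T) | X_0 = x] for dX_t = sigma dW_t by
# (i) an Euler discretization of the SDE and (ii) SGD with a quadratic loss on a
# one-hidden-layer sigmoid network.  Not the deep BSDE-type method of the paper.
import numpy as np

rng = np.random.default_rng(0)
sigma, T, N, H, lr = 1.0, 1.0, 50, 20, 0.05     # illustrative constants
g = lambda x: np.maximum(x, 0.0)                # terminal condition g(X_T)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))        # sigmoid activation

def euler_terminal(x0):
    """Euler scheme X_{n+1} = X_n + sigma * sqrt(dt) * xi, xi ~ N(0,1); returns X_T."""
    dt = T / N
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(N):
        x = x + sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    return x

# One-hidden-layer network u_theta(x) = w2 . sigmoid(w1 * x + b1) + b2
w1, b1 = rng.normal(size=H), np.zeros(H)
w2, b2 = rng.normal(size=H) / np.sqrt(H), 0.0

for _ in range(2000):                           # plain SGD on the quadratic loss
    x0 = rng.uniform(-2.0, 2.0, size=32)        # minibatch of initial states
    y = g(euler_terminal(x0))                   # Monte Carlo targets g(X_T)
    h = sig(np.outer(x0, w1) + b1)              # hidden activations, shape (32, H)
    u = h @ w2 + b2                             # network output
    r = (u - y) / len(x0)                       # scaled residual of the loss
    dw2, db2 = h.T @ r, r.sum()                 # gradients for the output layer
    back = np.outer(r, w2) * h * (1.0 - h)      # backprop through the sigmoid
    dw1, db1 = back.T @ x0, back.sum(axis=0)    # gradients for the hidden layer
    w1, b1 = w1 - lr * dw1, b1 - lr * db1
    w2, b2 = w2 - lr * dw2, b2 - lr * db2

for x in (-1.0, 0.0, 1.0):                      # inspect the fitted approximation
    print(f"u_theta({x:+.1f}) = {(sig(x * w1 + b1) @ w2 + b2):.3f}")
```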
Generalizations of the branch-and-bound method and of the Piyavskii method for the solution of stochastic global optimization problems are considered. These methods employ the concept of a tangent minorant of an objective function as a source of global information about the function. A calculus of tangent minorants is developed.
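For intuition, the deterministic one-dimensional Piyavskii/Shubert scheme sketched below uses the Lipschitz cone f(x_i) - L*|x - x_i| as the simplest instance of a tangent minorant; the stochastic generalizations and the branch-and-bound variant developed in the paper are not reproduced. The function piyavskii, the test objective, and the constant L = 7 are assumptions for illustration.

```python
# Minimal deterministic sketch of the Piyavskii idea, not the stochastic method of
# the paper: each evaluated point x_i contributes the tangent minorant
# m_i(x) = f(x_i) - L * |x - x_i|; the next trial point minimizes the lower
# envelope max_i m_i(x), whose minimum is also a global lower bound on min f.
import numpy as np

def piyavskii(f, a, b, L, n_iter=30):
    xs, fs = [a, b], [f(a), f(b)]
    grid = np.linspace(a, b, 2001)          # grid stand-in for exact envelope bookkeeping
    for _ in range(n_iter):
        envelope = np.max([fi - L * np.abs(grid - xi) for xi, fi in zip(xs, fs)], axis=0)
        x_next = grid[np.argmin(envelope)]  # minimizer of the current lower envelope
        xs.append(x_next)
        fs.append(f(x_next))
    i_best = int(np.argmin(fs))
    return xs[i_best], fs[i_best], float(envelope.min())

# Example: a multimodal objective on [0, 10]; L = 7 overestimates its Lipschitz constant.
f = lambda x: np.sin(3 * x) + 0.5 * np.cos(5 * x) + 0.05 * x
x_best, f_best, lower = piyavskii(f, 0.0, 10.0, L=7.0)
print(f"incumbent f = {f_best:.4f} at x = {x_best:.3f}, global lower bound {lower:.4f}")
```

The gap between the incumbent value and the envelope minimum certifies how far the current best point can be from the global optimum, which is the same role the tangent minorants play in the branch-and-bound generalization.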