We present and validate a computational approach that enables the quantification of time-dependent uncertainty in axially symmetric electromagnetic (EM) problems within a single simulation run. In essence, the finite-difference time-domain (FDTD) method, adapted to model bodies of revolution (BOR), is combined with truncated polynomial-chaos (PC) expansions of the involved field components, so that the stochastic nature of the latter is modelled reliably when media with random electric properties need to be considered. The developed approach features two distinct advantages: first, by exploiting the azimuthal periodicity of the investigated geometries, the high computational burden of three-dimensional (3D) simulations is reduced; second, the large number of simulations normally required by Monte Carlo (MC) methodologies for the extraction of statistical features is avoided, thanks to the integrated PC approximations. A number of numerical tests are conducted to verify the validity and performance of the suggested stochastic algorithm.
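A minimal sketch of the PC ingredient, separate from the authors' BOR-FDTD solver: a scalar quantity of interest depending on one Gaussian material parameter is expanded in probabilists' Hermite polynomials, and its mean and variance are read directly from the coefficients. The model function u, the perturbation level, and the truncation order P are illustrative assumptions.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Toy quantity of interest depending on one random material parameter:
# relative permittivity eps(xi) = 4.0 * (1 + 0.1 * xi), with xi ~ N(0, 1).
def u(xi):
    return 1.0 / (1.0 + 4.0 * (1.0 + 0.1 * xi))

P = 6                            # assumed truncation order of the PC expansion
x, w = He.hermegauss(40)         # Gauss nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2.0 * np.pi)     # rescale so sums approximate E[.] under N(0,1)

# PC coefficients c_n = E[u(xi) He_n(xi)] / n!  (orthogonality of He_n).
c = np.array([
    np.sum(w * u(x) * He.hermeval(x, [0.0] * n + [1.0])) / math.factorial(n)
    for n in range(P + 1)
])

mean_pc = c[0]                                          # E[u] = c_0
var_pc = sum(c[n] ** 2 * math.factorial(n) for n in range(1, P + 1))

# Monte-Carlo cross-check -- the many-samples route the PC expansion avoids.
xi = np.random.default_rng(0).standard_normal(200_000)
print(mean_pc, u(xi).mean())
print(var_pc, u(xi).var())
```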
This paper presents a new algorithm for derivative-free optimization of expensive black-box objective functions subject to expensive black-box inequality constraints. The proposed algorithm, called ConstrLMSRBF, uses radial basis function (RBF) surrogate models and is an extension of the Local Metric Stochastic RBF (LMSRBF) algorithm by Regis and Shoemaker (2007a) [1] that can handle black-box inequality constraints. Previous algorithms for the optimization of expensive functions using surrogate models have mostly dealt with bound-constrained problems where only the objective function is expensive, and so the surrogate models are used to approximate the objective function only. In contrast, ConstrLMSRBF builds RBF surrogate models for the objective function and also for all the constraint functions in each iteration, and uses these RBF models to guide the selection of the next point where the objective and constraint functions will be evaluated. Computational results indicate that ConstrLMSRBF is better than alternative methods on 9 out of 14 test problems and on the MOPTA08 problem from the automotive industry (Jones, 2008 [2]). The MOPTA08 problem has 124 decision variables and 68 inequality constraints and is considered a large-scale problem in the area of expensive black-box optimization. The alternative methods include a Mesh Adaptive Direct Search (MADS) algorithm (Abramson and Audet, 2006 [3]; Audet and Dennis, 2006 [4]) that uses a kriging-based surrogate model, the Multistart LMSRBF algorithm by Regis and Shoemaker (2007a) [1] modified to handle black-box constraints via a penalty approach, a genetic algorithm, a pattern search algorithm, a sequential quadratic programming algorithm, and COBYLA (Powell, 1994 [5]), which is a derivative-free trust-region algorithm. Based on the results of this study, the results in Jones (2008) [2], and other approaches presented at the ISMP 2009 conference, ConstrLMSRBF appears to be among the best, if not the best, known algorithms for the MOPTA08 problem.
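The surrogate-assisted loop described above can be sketched as follows. This is not Regis' actual ConstrLMSRBF; the toy objective f, constraint g, candidate-generation rule, and all parameters are assumptions. What it mirrors is the key idea: RBF surrogates are fit to the objective and to the constraint, and the next expensive evaluation is chosen among predicted-feasible candidates.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)

# Assumed toy problem: expensive objective f and one constraint g(x) <= 0.
f = lambda X: np.sum((X - 0.7) ** 2, axis=-1)
g = lambda X: 0.5 - np.sum(X, axis=-1)

dim, n_init, n_iter, n_cand = 2, 10, 30, 200
X = rng.uniform(0.0, 1.0, (n_init, dim))        # initial design
F, G = f(X), g(X)

for _ in range(n_iter):
    # RBF surrogates for the objective AND the constraint (the key idea).
    sf = RBFInterpolator(X, F, kernel="thin_plate_spline")
    sg = RBFInterpolator(X, G, kernel="thin_plate_spline")

    feas = G <= 0.0
    center = X[feas][np.argmin(F[feas])] if feas.any() else X[np.argmin(G)]

    # Candidates: random perturbations around the current best point.
    C = np.clip(center + 0.1 * rng.standard_normal((n_cand, dim)), 0.0, 1.0)
    ok = sg(C) <= 0.0
    pool = C[ok] if ok.any() else C              # prefer predicted-feasible
    xnew = pool[np.argmin(sf(pool))]

    # One truly "expensive" evaluation per iteration.
    X = np.vstack([X, xnew])
    F = np.append(F, f(xnew))
    G = np.append(G, g(xnew))

feas = G <= 0.0
print("best feasible point:", X[feas][np.argmin(F[feas])], F[feas].min())
```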
Sparse learning is essential in mining high-dimensional data. Iterative hard thresholding (IHT) methods are effective for optimizing nonconvex objectives in sparse learning. However, IHT methods are vulnerable to adversary attacks that infer sensitive data. Although pioneering works attempted to relieve such vulnerability, they confront the issue of high computational cost for large-scale problems. We propose two differentially private stochastic IHT algorithms: one based on the stochastic gradient descent method (DP-SGD-HT) and the other based on the stochastically controlled stochastic gradient method (DP-SCSG-HT). The DP-SGD-HT method perturbs stochastic gradients with small Gaussian noise rather than full gradients, which are computationally expensive. As a result, the computational complexity is reduced from O(n log(n)) to O(b log(n)), where n is the sample size and b is the mini-batch size used to compute stochastic gradients. The DP-SCSG-HT method further perturbs stochastic gradients controlled by large-batch snapshot gradients to reduce stochastic gradient variance. We prove that both algorithms guarantee differential privacy and have linear convergence rates with estimation bias. A utility analysis examines the relationship between the convergence rate and the level of perturbation, yielding the best-known utility bound for nonconvex sparse optimization. Extensive experiments show that our algorithms outperform existing methods.
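A compact sketch of the DP-SGD-HT recipe as the abstract describes it: a mini-batch gradient is clipped, perturbed with Gaussian noise, and the iterate is hard-thresholded to the sparsity level s. The sparse linear-regression task and the constants (clip bound, noise scale sigma, step size) are illustrative assumptions; the privacy/utility calibration of these quantities is the part the paper analyses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy task: sparse linear regression with ||w||_0 <= s.
n, d, s = 2000, 100, 5
w_true = np.zeros(d)
w_true[:s] = 1.0
A = rng.standard_normal((n, d))
y = A @ w_true + 0.01 * rng.standard_normal(n)

def hard_threshold(w, s):
    """Keep the s largest-magnitude entries, zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

w, b, eta, clip, sigma = np.zeros(d), 64, 0.05, 1.0, 0.5
for _ in range(300):
    batch = rng.choice(n, b, replace=False)
    grad = A[batch].T @ (A[batch] @ w - y[batch]) / b
    # Clip to bound sensitivity, then perturb with Gaussian noise.
    grad = grad / max(1.0, np.linalg.norm(grad) / clip)
    grad += (sigma * clip / b) * rng.standard_normal(d)
    w = hard_threshold(w - eta * grad, s)

print("support recovered:", np.sort(np.nonzero(w)[0]))
print("estimation error :", np.linalg.norm(w - w_true))
```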
We study the approximation of the determinant of large-scale matrices with low computational complexity. This paper develops a generalized stochastic polynomial approximation framework, as well as a stochastic Legendre approximation algorithm, to calculate log-determinants of large-scale positive definite matrices based on prior eigenvalue distributions. The generalized framework is implemented through weighted L2-orthogonal polynomial expansions with an efficient recursion formula and matrix-vector multiplications, so the proposed scheme is efficient in both computational complexity and data storage. Error bounds that guarantee the convergence of the proposed algorithms are given in theory. We illustrate the effectiveness of our method by numerical experiments on both synthetic matrices and counting spanning trees.
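A hedged sketch of the estimator pattern outlined above: Rademacher probe vectors combine with a Legendre expansion of the logarithm, evaluated through the three-term recursion so that only matrix-vector products with the rescaled matrix are needed. Here the eigenvalue bounds are computed exactly for validation, whereas the paper derives them from prior eigenvalue distributions; the test matrix, degree, and probe count are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(0)

# Small SPD test matrix; in the target setting A is large and accessed
# only through matrix-vector products.
n = 200
Q = rng.standard_normal((n, n))
A = Q @ Q.T / n + 0.5 * np.eye(n)

# Spectral bounds: exact here for validation, from priors in the paper.
lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]

# Legendre coefficients of f(t) = log(((lmax-lmin)*t + lmax+lmin)/2) on [-1,1]:
# c_k = (2k+1)/2 * integral of f * P_k, via Gauss-Legendre quadrature.
deg = 30
t, wq = L.leggauss(64)
fvals = np.log(((lmax - lmin) * t + lmax + lmin) / 2.0)
c = np.array([(2 * k + 1) / 2.0 * np.sum(wq * fvals * L.legval(t, [0.0] * k + [1.0]))
              for k in range(deg + 1)])

# Rescaled operator with spectrum inside [-1, 1].
B = (2.0 * A - (lmax + lmin) * np.eye(n)) / (lmax - lmin)

est, m = 0.0, 30
for _ in range(m):                        # Rademacher probes: E[v^T M v] = tr(M)
    v = rng.choice([-1.0, 1.0], n)
    p_prev, p_cur = v, B @ v              # P_0(B) v and P_1(B) v
    acc = c[0] * (v @ p_prev) + c[1] * (v @ p_cur)
    for k in range(1, deg):
        # Three-term recursion: (k+1) P_{k+1} = (2k+1) t P_k - k P_{k-1}
        p_next = ((2 * k + 1) * (B @ p_cur) - k * p_prev) / (k + 1)
        acc += c[k + 1] * (v @ p_next)
        p_prev, p_cur = p_cur, p_next
    est += acc / m

print("estimated log-det:", est, " exact:", np.linalg.slogdet(A)[1])
```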
Combettes and Pesquet (SIAM J Optim 25:1221-1248, 2015) investigated the almost sure weak convergence of block-coordinate fixed point algorithms and discussed their applications to nonlinear analysis and optimization. This algorithmic framework features random sweeping rules to select arbitrarily the blocks of variables that are activated over the course of the iterations and it allows for stochastic errors in the evaluation of the operators. The present paper establishes results on the mean-square and linear convergence of the iterates. Applications to monotone operator splitting and proximal optimization algorithms are presented.
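The mean-square linear convergence statement can be illustrated numerically on a simple affine contraction with a random sweeping rule. The operator T, the activation probability p, and the averaging over sample paths are assumptions of this sketch, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(2)

# Affine contraction T(x) = M x + b with spectral norm ||M|| < 1,
# whose unique fixed point x* solves x = T(x).
n = 50
R = rng.standard_normal((n, n))
M = 0.9 * R / np.linalg.norm(R, 2)
b = rng.standard_normal(n)
x_star = np.linalg.solve(np.eye(n) - M, b)

p, runs, iters = 0.3, 100, 400        # activation probability, paths, steps
msq = np.zeros(iters)
for _ in range(runs):
    x = np.zeros(n)
    for k in range(iters):
        active = rng.random(n) < p            # random sweeping rule
        x = np.where(active, M @ x + b, x)    # update activated coordinates only
        msq[k] += np.sum((x - x_star) ** 2) / runs

# Mean-square linear convergence: log E||x_k - x*||^2 decays linearly in k.
print(np.log(msq[::100]))
```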
In this paper, we develop a novel regularization method for deep neural networks by penalizing the trace of the Hessian. This regularizer is motivated by a recent guaranteed bound on the generalization error. We explain its benefits in finding flat minima and avoiding Lyapunov stability in dynamical systems. We adopt the Hutchinson method as a classical unbiased estimator for the trace of a matrix, and further accelerate its calculation using a Dropout scheme. Experiments demonstrate that our method outperforms existing regularizers and data augmentation methods, such as Jacobian, Confidence Penalty, Label Smoothing, Cutout, and Mixup. The code is available at https://***/Dean-lyc/Hessian-Regularization.
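The Hutchinson estimator named above is easy to reproduce. Below it estimates the Hessian trace of a small logistic-regression loss through finite-difference Hessian-vector products; the loss, probe count, and finite-difference step are assumptions, and the paper itself works with autodiff-style Hessian-vector products and a Dropout acceleration on deep networks.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small logistic-regression loss whose Hessian trace we estimate.
n, d = 500, 20
X = rng.standard_normal((n, d))
y = (rng.random(n) < 0.5).astype(float)

def grad(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / n

def hvp(w, v, eps=1e-4):
    """Finite-difference Hessian-vector product (autodiff in practice)."""
    return (grad(w + eps * v) - grad(w - eps * v)) / (2.0 * eps)

w = rng.standard_normal(d)

# Hutchinson: tr(H) = E[v^T H v] for Rademacher probe vectors v.
m, est = 200, 0.0
for _ in range(m):
    v = rng.choice([-1.0, 1.0], d)
    est += (v @ hvp(w, v)) / m

# Closed-form trace for this loss: tr(H) = sum_i p_i (1 - p_i) ||x_i||^2 / n.
p = 1.0 / (1.0 + np.exp(-X @ w))
print(est, np.sum(p * (1.0 - p) * np.sum(X ** 2, axis=1)) / n)
```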
This work proposes block-coordinate fixed point algorithms with applications to nonlinear analysis and optimization in Hilbert spaces. The asymptotic analysis relies on a notion of stochastic quasi-Fejer monotonicity, which is thoroughly investigated. The iterative methods under consideration feature random sweeping rules to select arbitrarily the blocks of variables that are activated over the course of the iterations, and they allow for stochastic errors in the evaluation of the operators. Algorithms using quasi-nonexpansive operators or compositions of averaged nonexpansive operators are constructed, and weak and strong convergence results are established for the sequences they generate. As a by-product, novel block-coordinate operator splitting methods are obtained for solving structured monotone inclusion and convex minimization problems. In particular, the proposed framework leads to random block-coordinate versions of the Douglas-Rachford and forward-backward algorithms and of some of their variants. Even in the standard case of m = 1 block, our results remain new, as they incorporate stochastic perturbations.
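As a concrete instance of the random sweeping idea, here is a hedged sketch of a random block-coordinate forward-backward (proximal gradient) iteration applied to a Lasso problem. The problem data, activation probability, and step size are assumptions, and a serious implementation would maintain residuals per block rather than recompute the full gradient.

```python
import numpy as np

rng = np.random.default_rng(4)

# Lasso: minimize 0.5 * ||A x - y||^2 + lam * ||x||_1
n, d, lam = 200, 400, 0.1
A = rng.standard_normal((n, d)) / np.sqrt(n)
x_true = np.zeros(d)
x_true[:10] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(n)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2       # step size at 1/L
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

x = np.zeros(d)
for _ in range(3000):
    active = rng.random(d) < 0.1              # random sweeping: ~10% of coords
    g = A.T @ (A @ x - y)                     # forward (gradient) step...
    x[active] = soft(x[active] - gamma * g[active], gamma * lam)  # ...prox step

print("recovered support:", np.nonzero(np.abs(x) > 0.05)[0])
print("estimation error :", np.linalg.norm(x - x_true))
```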
We consider linear systems of equations, Ax = b, with an emphasis on the case where A is singular. Under certain conditions, necessary as well as sufficient, linear deterministic iterative methods generate sequences {x_k} that converge to a solution as long as there exists at least one solution. This convergence property can be impaired when these methods are implemented with stochastic simulation, as is often done in important classes of large-scale problems. We introduce additional conditions and novel algorithmic stabilization schemes under which {x_k} converges to a solution when A is singular, and which may also be used with substantial benefit when A is nearly singular.
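The phenomenon and the fix can be illustrated in a toy setting. This is a hedged sketch, not the paper's schemes or conditions: for a consistent singular system, a simulation-based iteration with zero-mean noise on A and b drifts in the nullspace of A, while a simple shrinkage-type stabilization keeps the iterates bounded at a comparable residual. The noise model and the shrinkage schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Consistent singular system: A symmetric PSD with rank r < n, b in range(A).
n, r = 20, 15
U = np.linalg.qr(rng.standard_normal((n, n)))[0]
A = U[:, :r] @ np.diag(np.linspace(1.0, 2.0, r)) @ U[:, :r].T
b = A @ (U[:, :r] @ rng.standard_normal(r))

gamma, sigma, iters = 0.3, 0.05, 5000
x_plain = np.zeros(n)
x_stab = np.zeros(n)
for k in range(iters):
    E = sigma * rng.standard_normal((n, n))   # zero-mean simulation noise on A
    e = sigma * rng.standard_normal(n)        # zero-mean simulation noise on b
    step = lambda x: x - gamma * ((A + E) @ x - (b + e))
    x_plain = step(x_plain)                   # plain noisy iteration
    x_stab = (1.0 - 1.0 / (k + 2)) * step(x_stab)   # shrinkage stabilization

for tag, x in [("plain     ", x_plain), ("stabilized", x_stab)]:
    print(tag, "residual:", np.linalg.norm(A @ x - b),
          "nullspace component:", np.linalg.norm(U[:, r:].T @ x))
```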
We present an exact timestepping method for Brownian motion that does not require Gaussian random variables to be generated. Time is incremented in steps that are exponentially distributed random variables; boundaries can be explicitly accounted for at each timestep. The method is illustrated by the numerical solution of a system of diffusing particles.
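The core trick is reproducible in a few lines: over an Exp(lambda) time step, the Brownian increment is Laplace distributed with scale 1/sqrt(2*lambda), so both the times and the positions can be sampled from uniforms alone. The rate lambda and the second-moment sanity check are assumptions of this sketch; the paper's treatment of boundaries is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

lam = 4.0                      # rate of the exponential time steps (assumption)
b = 1.0 / np.sqrt(2.0 * lam)   # Laplace scale of the spatial increment
n_paths, n_steps = 50_000, 50

# Exponential time increments from uniforms: tau = -log(U) / lam.
tau = -np.log(rng.random((n_paths, n_steps))) / lam
# The Brownian increment over an Exp(lam) step is Laplace(0, b): a scaled
# difference of two exponentials, so only uniforms are needed -- no Gaussians.
dx = b * np.log(rng.random((n_paths, n_steps)) / rng.random((n_paths, n_steps)))

T = tau.sum(axis=1)            # total (random) elapsed time per path
W = dx.sum(axis=1)             # Brownian position at that time

# Sanity check: E[W^2] should equal E[T] = n_steps / lam.
print(W.var(), T.mean())
```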
In this brief, a novel stochastic minimum-maximum finite-time consensus protocol is proposed. The stochastic consensus protocol is then applied to a system of agents with continuous high-order dynamics. Based on this protocol, new continuous auxiliary variables are defined that use only samples of the neighbors' outputs. It is proven that if those variables are regulated to zero in finite time, finite-time output consensus is achieved. This regulation problem is then solved using a standard finite-time control law. Simulations illustrate the efficiency of this distributed finite-time control scheme.
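The brief's exact protocol is not reproduced here. As a loose, hedged toy of the stochastic minimum-maximum idea, each agent below randomly adopts either the minimum or the maximum of its closed neighborhood on a ring graph, which reaches exact agreement in finitely many steps almost surely; the graph, coin probability, and synchronous schedule are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ring graph of N agents; closed neighborhood = self plus two adjacent agents.
N = 12
x = rng.uniform(0.0, 10.0, N)
nbrs = [np.array([(i - 1) % N, i, (i + 1) % N]) for i in range(N)]

for step in range(1, 10_001):                 # safety cap for the sketch
    mins = np.array([x[nb].min() for nb in nbrs])
    maxs = np.array([x[nb].max() for nb in nbrs])
    coin = rng.random(N) < 0.5                # each agent flips a fair coin...
    x = np.where(coin, mins, maxs)            # ...and adopts min or max
    if np.ptp(x) == 0.0:                      # exact agreement reached
        break

print(f"agreement on {x[0]:.3f} after {step} synchronous update(s)")
```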