Metropolis algorithms for approximate sampling of probability measures on infinite-dimensional Hilbert spaces are considered, and a generalization of the preconditioned Crank-Nicolson (pCN) proposal is introduced. The new proposal is able to incorporate information about the measure of interest. A numerical simulation of a Bayesian inverse problem indicates that a Metropolis algorithm with such a proposal performs independently of the state-space dimension and of the variance of the observational noise. Moreover, a qualitative convergence result is provided by a comparison argument for spectral gaps. In particular, it is shown that the generalization inherits geometric convergence from the Metropolis algorithm with pCN proposal.
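As a rough illustration of the pCN proposal this abstract builds on, the following is a minimal sketch of one pCN Metropolis step, assuming a Gaussian reference measure N(0, I) and an illustrative step-size parameter beta (not the generalized proposal the paper introduces):

```python
import numpy as np

def pcn_step(x, log_likelihood, beta=0.2, rng=None):
    """One Metropolis step with the preconditioned Crank-Nicolson (pCN)
    proposal, targeting a measure whose density with respect to the
    Gaussian prior N(0, I) is proportional to exp(log_likelihood(x)).

    The proposal is reversible with respect to the prior, so the
    acceptance ratio involves only the likelihood; this is what makes
    the method robust to the discretization dimension."""
    rng = rng or np.random.default_rng()
    xi = rng.standard_normal(x.shape)              # draw from the prior
    y = np.sqrt(1.0 - beta ** 2) * x + beta * xi   # pCN proposal
    log_alpha = log_likelihood(y) - log_likelihood(x)
    if np.log(rng.uniform()) < log_alpha:          # Metropolis accept/reject
        return y, True
    return x, False
```

A chain is obtained by iterating `pcn_step`; note that the prior never enters the acceptance ratio, only the log-likelihood does.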
The adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen (Bernoulli 7(2):223-242, 2001) uses the estimated covariance of the target distribution in the proposal distribution. This paper introduces a new robust adaptive Metropolis algorithm that estimates the shape of the target distribution while simultaneously coercing the acceptance rate. The adaptation rule is computationally simple, adding no extra cost compared with the AM algorithm. The adaptation strategy can be seen as a multidimensional extension of a previously proposed method that adapts the scale of the proposal distribution in order to attain a given acceptance rate. The empirical results show promising behaviour of the new algorithm in an example with a Student target distribution having no finite second moment, where the AM covariance estimate is unstable. In the examples with finite second moments, the performance of the new approach appears competitive with the AM algorithm combined with scale adaptation.
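The scale adaptation this abstract mentions as the scalar precursor of the shape-estimating rule can be sketched as a random-walk Metropolis sampler whose log proposal scale is nudged by a Robbins-Monro step toward a target acceptance rate. All parameter choices below (step-size exponent, target rate 0.234) are illustrative, not taken from the paper:

```python
import numpy as np

def scale_adaptive_metropolis(log_target, x0, n_steps=20000,
                              target_rate=0.234, seed=0):
    """Random-walk Metropolis with Robbins-Monro adaptation of the
    proposal scale, coercing the acceptance rate toward target_rate.
    A minimal sketch of scale-only adaptation, not the full robust
    AM shape update described in the abstract."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    log_scale = 0.0
    accepts = 0
    for i in range(1, n_steps + 1):
        y = x + np.exp(log_scale) * rng.standard_normal(x.shape)
        lpy = log_target(y)
        alpha = np.exp(min(0.0, lpy - lp))      # acceptance probability
        if rng.uniform() < alpha:
            x, lp = y, lpy
            accepts += 1
        # Robbins-Monro update with decaying step size i^(-0.6):
        # raise the scale when accepting too often, lower it otherwise.
        log_scale += i ** -0.6 * (alpha - target_rate)
    return x, accepts / n_steps
```

Running this on a Gaussian target drives the empirical acceptance rate toward the requested value; the robust AM algorithm replaces the scalar scale with a full matrix factor to capture the target's shape as well.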
This paper describes sufficient conditions that ensure the correct ergodicity of the adaptive Metropolis (AM) algorithm of Haario, Saksman and Tamminen [Bernoulli 7 (2001) 223-242] for target distributions with noncompact support. The conditions ensuring a strong law of large numbers require that the tails of the target density decay super-exponentially and have regular contours. The result is based on the ergodicity of an auxiliary process that is sequentially constrained to feasible adaptation sets, together with independent estimates of the growth rate of the AM chain and of the corresponding geometric drift constants. The ergodicity result for the constrained process is obtained through a modification of the approach due to Andrieu and Moulines [Ann. Appl. Probab. 16 (2006) 1462-1505].
Order Picking in warehouses is often optimized using a method known as Order Batching, which means that one vehicle can be assigned to pick a batch of several orders at a time. There exists a rich body of research on ...
ISBN:
(Print) 9781450326629
We study the performance of the Metropolis algorithm for the problem of finding a code word of weight less than or equal to M, given a generator matrix of an [n, k]-binary linear code. The algorithm uses the set Sk of all k × k invertible matrices as its search space, where two elements are considered adjacent if one can be obtained from the other via an elementary row operation (i.e., by adding one row to another or by swapping two rows). We prove that the Markov chains associated with the Metropolis algorithm mix rapidly for suitable choices of the temperature parameter T. We ran the Metropolis algorithm on a number of codes and found that it performed very well in comparison with previously known experimental results.
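The search space and neighbourhood structure described above can be sketched directly: the state is an invertible k × k matrix S over GF(2), moves are elementary row operations, and the cost of a state is the minimum Hamming weight among the rows of SG. The temperature and iteration count below are illustrative, not the paper's analyzed choices:

```python
import numpy as np

def min_weight_metropolis(G, T=1.0, iters=3000, seed=0):
    """Metropolis search for a low-weight codeword of the binary
    linear code with k x n generator matrix G (entries 0/1).
    Elementary row operations preserve invertibility of S, so the
    rows of (S @ G) mod 2 always form a basis of the code."""
    rng = np.random.default_rng(seed)
    k = G.shape[0]
    S = np.eye(k, dtype=int)

    def cost(M):
        # Minimum Hamming weight among the rows of M G over GF(2).
        return int(((M @ G) % 2).sum(axis=1).min())

    c = cost(S)
    best = c
    for _ in range(iters):
        S2 = S.copy()
        i, j = rng.choice(k, size=2, replace=False)
        if rng.random() < 0.5:
            S2[i] = (S2[i] + S2[j]) % 2   # add row j to row i
        else:
            S2[[i, j]] = S2[[j, i]]       # swap rows i and j
        c2 = cost(S2)
        # Metropolis acceptance at temperature T.
        if c2 <= c or rng.random() < np.exp((c - c2) / T):
            S, c = S2, c2
            best = min(best, c)
    return best
```

On the [7, 4] Hamming code, for example, the search settles at weight 3, the code's minimum distance.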
ISBN:
(Print) 9798400701191
The Metropolis algorithm (MA) is a classic stochastic local search heuristic. It avoids getting stuck in local optima by occasionally accepting inferior solutions. To understand this ability rigorously, we conduct a mathematical runtime analysis of the MA on the CLIFF benchmark. Apart from one local optimum, cliff functions are monotonically increasing towards the global optimum. Consequently, to optimize a cliff function, the MA needs to accept an inferior solution only once. Despite this seemingly being an ideal benchmark for the MA to profit from its main working principle, our mathematical runtime analysis shows that this hope does not come true. Even with the optimal temperature (the only parameter of the MA), the MA optimizes most cliff functions less efficiently than simple elitist evolutionary algorithms (EAs), which can leave the local optimum only by generating a superior solution, possibly far away. This result suggests that our understanding of why the MA is often very successful in practice is not yet complete. Our work also suggests equipping the MA with global mutation operators, an idea supported by our preliminary experiments.
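The setting of this analysis can be sketched concretely: a standard cliff function (OneMax up to n − d ones, then a drop of d − 1/2) and the MA with one-bit mutation, which accepts a worse neighbour with probability exp(delta / T). The parameters n, d, and the temperature below are illustrative, not the paper's optimal choices:

```python
import math
import random

def cliff(bits, d):
    """CLIFF_d: fitness |x|_1 up to n - d ones, then |x|_1 - d + 1/2.
    The unique global optimum is the all-ones string."""
    ones, n = sum(bits), len(bits)
    return ones if ones <= n - d else ones - d + 0.5

def metropolis_on_cliff(n=20, d=3, temperature=1.0,
                        max_iters=200000, seed=1):
    """Metropolis algorithm with one-bit mutation on CLIFF_d.
    Returns the best fitness value seen during the run."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = cliff(x, d)
    best = fx
    for _ in range(max_iters):
        i = rng.randrange(n)
        x[i] ^= 1                       # one-bit mutation
        fy = cliff(x, d)
        delta = fy - fx
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            fx = fy                     # accept (possibly inferior) move
        else:
            x[i] ^= 1                   # reject: undo the flip
        best = max(best, fx)
        if fx == n - d + 0.5:           # reached the global optimum
            break
    return best
```

The single inferior acceptance needed to cross the cliff (a drop of d − 1/2) is exactly where the temperature trade-off analyzed in the paper appears: a high temperature crosses the cliff easily but destabilizes the climb, a low one makes the crossing rare.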
We study an approach to simulating the stochastic relativistic advection-diffusion equation based on the Metropolis algorithm. We show that the dissipative dynamics of the boosted fluctuating fluid can be simulated by making random transfers of charge between fluid cells, interspersed with ideal hydrodynamic time steps. The random charge transfers are accepted or rejected in a Metropolis step using the entropy as a statistical weight. This procedure reproduces the expected stress of dissipative relativistic hydrodynamics in a specific (and noncovariant) hydrodynamic frame known as the density frame. Numerical results, both with and without noise, are presented and compared to relativistic kinetics and analytical expectations. An all-order resummation of the density frame gradient expansion reproduces the covariant dynamics in a specific model. In contrast to all other numerical approaches to relativistic dissipative fluids, the dissipative fluid formalism presented here is strictly first order in gradients and has no nonhydrodynamic modes. The physical naturalness and simplicity of the Metropolis algorithm, together with its convergence properties, make it a promising tool for simulating stochastic relativistic fluids in heavy ion collisions and for critical phenomena in the relativistic domain.
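The accept/reject step described above can be illustrated in a deliberately stripped-down, nonrelativistic form: random charge transfers between neighbouring cells on a 1D chain, accepted with the entropy change as the statistical weight. The quadratic entropy S = −Σᵢ nᵢ²/(2χ), the transfer size q, and the omission of the ideal-hydrodynamic substep and boost are all simplifying assumptions of this sketch, not the paper's scheme:

```python
import numpy as np

def metropolis_charge_sweep(n, chi=1.0, q=0.1, rng=None):
    """One sweep of Metropolis charge transfers on a periodic 1D
    chain of cells with charges n[i].  A proposed transfer of dq
    from cell i to its right neighbour is accepted with probability
    min(1, exp(dS)), where dS is the entropy change for the
    quadratic entropy S = -sum_i n_i^2 / (2 chi).  Charge is
    conserved exactly by construction."""
    rng = rng or np.random.default_rng()
    m = len(n)
    for _ in range(m):
        i = int(rng.integers(m))
        j = (i + 1) % m
        dq = q if rng.random() < 0.5 else -q   # proposed transfer i -> j
        # Entropy change of moving dq from cell i to cell j.
        dS = -((n[i] - dq) ** 2 + (n[j] + dq) ** 2
               - n[i] ** 2 - n[j] ** 2) / (2.0 * chi)
        if np.log(rng.uniform()) < dS:         # Metropolis accept/reject
            n[i] -= dq
            n[j] += dq
    return n
```

Repeated sweeps relax the charge profile diffusively while conserving the total charge to machine precision, which is the structural property the abstract's scheme relies on.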
Consider a d-ary rooted tree (d >= 3) in which each edge e is assigned an i.i.d. (bounded) random variable X(e) of negative mean. Assign to each vertex v the sum S(v) of X(e) over all edges connecting v to the root, and assume that the maximum S_n* of S(v) over all vertices v at distance n from the root tends to infinity (necessarily, linearly) as n tends to infinity. We analyze the Metropolis algorithm on the tree and show that under these assumptions there always exists a temperature 1/beta at which the algorithm achieves a linear (positive) growth rate in linear time. This confirms a conjecture of Aldous [Algorithmica 22 (1998) 388-412]. The proof is obtained by establishing an Einstein relation for the Metropolis algorithm on the tree.
ISBN:
(Print) 9798400704956
As demonstrated by empirical and theoretical work, the Metropolis algorithm can cope with local optima by accepting inferior solutions with suitably small probability. This paper extends this research direction into multi-objective optimization. The original Metropolis algorithm has two components: one-bit mutation and the acceptance strategy, which allows accepting inferior solutions. When the acceptance strategy is adjusted to multi-objective optimization so that an accepted inferior solution replaces its parent, the Metropolis algorithm is not very efficient on our multi-objective version of the multimodal DLB benchmark, called DLTB. With one-bit mutation, this multi-objective Metropolis algorithm cannot optimize the DLTB problem; with standard bit-wise mutation it needs at least Ω(n^5) time to cover the full Pareto front. In contrast, we show that many other multi-objective optimizers, namely the GSEMO, SMS-EMOA, and NSGA-II, need only time O(n^4). When the parent is kept when an inferior point is accepted, the multi-objective Metropolis algorithm, both with one-bit and with standard bit-wise mutation, solves the DLTB problem efficiently, with one-bit mutation experimentally leading to better results than several other algorithms. Overall, our work suggests that the general mechanism of the Metropolis algorithm can be interesting in multi-objective optimization, but that the implementation details can have a huge impact on the performance. This paper for the Hot-off-the-Press track at GECCO 2024 summarizes the work Weijie Zheng, Mingfeng Li, Renzhong Deng, and Benjamin Doerr: How to Use the Metropolis Algorithm for Multi-Objective Optimization? In Conference on Artificial Intelligence, AAAI 2024, AAAI Press, 20883-20891. https://doi.org/10.1609/aaai.v38i18.30078 [22].
In this paper we obtain bounds on the spectral gap of the transition probability matrix P of Markov chains associated with the Metropolis algorithm and with the Gibbs sampler. In both cases we prove that, for small values of T, the spectral gap is equal to 1 − λ2, where λ2 is the second largest eigenvalue of P. In the case of the Metropolis algorithm we also give two examples in which the spectral gap is equal to 1 − |λmin|, where λmin is the smallest eigenvalue of P. Furthermore, we prove that random site-updating dynamics based on the Metropolis algorithm and on the Gibbs sampler have the same rate of convergence at low temperatures. The obtained bounds are discussed and compared with those obtained with a different approach.
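For a finite-state Metropolis chain the quantities in this abstract can be computed directly: since the chain is reversible, the eigenvalues of P are real, and the spectral gap is 1 − max(λ2, |λmin|). The ring geometry and energy landscape below are illustrative choices for a small numerical check, not the paper's setting:

```python
import numpy as np

def metropolis_matrix(energies, T):
    """Transition matrix P of a Metropolis chain on a ring of states
    with the given energies: propose one of the two neighbours
    uniformly, accept with probability min(1, exp(-dE / T))."""
    m = len(energies)
    P = np.zeros((m, m))
    for i in range(m):
        for j in ((i - 1) % m, (i + 1) % m):
            P[i, j] += 0.5 * min(1.0, np.exp(-(energies[j] - energies[i]) / T))
        P[i, i] = 1.0 - P[i].sum()      # rejected proposals stay put
    return P

def spectral_gap(P):
    """Spectral gap 1 - max(lambda_2, |lambda_min|) of a reversible
    transition matrix (eigenvalues are real for such chains)."""
    lam = np.sort(np.linalg.eigvals(P).real)
    return 1.0 - max(lam[-2], abs(lam[0]))
```

Lowering T on a landscape with an energy barrier shrinks the gap, which is the low-temperature regime the abstract's bounds address.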