Monte Carlo simulations based on the Metropolis algorithm were used to investigate the magnetic behavior of the Yb2Ti2O7, Er2Ti2O7, and Er2Sn2O7 systems using the classical Heisenberg spin model. The thermodynamic observables are averaged over the Monte Carlo simulations. The order parameter was computed as a function of temperature for two different configurations. The results show that the thermal phase transition is probably continuous and at most very weakly first order. The variation of the spin components with temperature for the two configurations exhibits different magnetic phases, and the spin configurations are deduced and presented.
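The sampling scheme above can be sketched for a generic nearest-neighbor classical Heisenberg model. The pyrochlore Hamiltonian, anisotropic couplings, and lattice of the study are not reproduced here; the square lattice, coupling J, and temperature T below are illustrative assumptions only:

```python
import numpy as np

def random_unit_vector(rng):
    # Draw a uniformly distributed point on the unit sphere
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def metropolis_sweep(spins, J, T, rng):
    """One Metropolis sweep over an L x L lattice of classical Heisenberg
    spins with nearest-neighbor coupling J and periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        proposal = random_unit_vector(rng)
        # Local field from the four nearest neighbors
        h = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
             spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = -J * np.dot(proposal - spins[i, j], h)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = proposal  # accept the move

rng = np.random.default_rng(0)
L = 8
spins = np.array([[random_unit_vector(rng) for _ in range(L)] for _ in range(L)])
for _ in range(50):
    metropolis_sweep(spins, J=1.0, T=0.5, rng=rng)
m = np.linalg.norm(spins.mean(axis=(0, 1)))  # magnetization order parameter
```

Averaging m (and its higher moments) over many such sweeps at each temperature is how an order-parameter curve like the one described in the abstract is typically built up.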
We investigate popular trajectory-based algorithms inspired by biology and physics to answer a question of general significance: when is it beneficial to reject improvements? A distinguishing factor of SSWM (strong selection weak mutation), a popular model from population genetics, compared to the Metropolis algorithm (MA), is that the former can reject improvements, while the latter always accepts them. We investigate when one strategy outperforms the other. Since we prove that both algorithms converge to the same stationary distribution, we concentrate on identifying a class of functions inducing large mixing times, where the algorithms will outperform each other over a long period of time. The outcome of the analysis is the definition of a function where SSWM is efficient, while Metropolis requires at least exponential time. The identified function favours algorithms that prefer high-quality improvements over smaller ones, revealing similarities in the optimisation strategies of SSWM and Metropolis respectively with best-improvement (BILS) and first-improvement (FILS) local search. We conclude the paper with a comparison of the performance of these algorithms and a (1,λ) RLS on the identified function. The (1,λ) RLS favours the steepest gradient with a probability that increases with the size of its offspring population. The results confirm that BILS excels and that the (1,λ) RLS is efficient only for large enough population sizes.
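The contrast drawn above between the two acceptance rules can be made concrete. The fixation-probability expression below is the classical form used in the SSWM literature; the selection strength beta and population size N are illustrative parameters, not values from the paper:

```python
import math

def metropolis_accept(delta_f, beta):
    # Metropolis rule (maximization): improvements are always accepted,
    # worsenings with probability exp(beta * delta_f)
    return 1.0 if delta_f >= 0 else math.exp(beta * delta_f)

def sswm_accept(delta_f, beta, N):
    # SSWM rule: the classical fixation probability, under which even
    # improvements can be rejected, and small ones most often
    if delta_f == 0:
        return 1.0 / N  # limit of the expression below as delta_f -> 0
    num = 1.0 - math.exp(-2.0 * beta * delta_f)
    den = 1.0 - math.exp(-2.0 * N * beta * delta_f)
    return num / den
```

For a small improvement such as delta_f = 0.01 with beta = 1 and N = 100, Metropolis accepts with probability 1 while SSWM accepts only rarely; this preference for large over small improvements is the mechanism the analysis exploits.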
We present a complete Bayesian treatment of autoregressive model estimation incorporating choice of autoregressive order, enforcement of stationarity, treatment of outliers, and allowance for missing values and multiplicative seasonality. The paper makes three distinct contributions. First, we enforce the stationarity conditions using a very efficient Metropolis-within-Gibbs algorithm to generate the partial autocorrelations. Second, we show how to carry out the Gibbs sampler when the autoregressive order is unknown. Third, we show how to combine the various aspects of fitting an autoregressive model, giving a more comprehensive and efficient treatment than previous work. We illustrate our methodology with a real example.
The appearance of the article by N. Metropolis, A. W. Rosenbluth, M. N. Rosenbluth, A. H. Teller, and E. Teller marked the birth of the Monte Carlo method for the study of statistical-mechanical systems and of a specific form of "importance sampling", namely Markov chain Monte Carlo. After nearly 40 years of statistical usage, this technique has had a profound impact on statistical theory, on both Bayesian and classical statistics. Markov chain Monte Carlo is used essentially to estimate integrals in high dimensions. This article addresses the accuracy of such estimation. Through computer experiments performed on the two-dimensional Ising model, we compare the most common methods for error estimation in statistical mechanics. It appears that the moving-block bootstrap outperforms other methods based on subseries values when the number of observations is relatively small and the time correlation between successive configurations decays slowly. Moreover, the moving-block bootstrap enables estimates of the standard error to be made not only for the averages of directly obtained data but also for estimates derived from sophisticated numerical procedures.
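The moving-block bootstrap discussed above can be sketched as follows. The AR(1) series, block length, and replicate count are illustrative stand-ins for actual Ising-model output, not the experiments of the article:

```python
import numpy as np

def moving_block_bootstrap_se(x, block_len, n_boot, rng):
    """Standard error of the sample mean of a correlated series, estimated
    by resampling overlapping blocks of length block_len with replacement."""
    n = len(x)
    n_starts = n - block_len + 1       # number of admissible block starts
    blocks_per_rep = n // block_len    # blocks needed per bootstrap series
    means = np.empty(n_boot)
    for b in range(n_boot):
        starts = rng.integers(0, n_starts, size=blocks_per_rep)
        resample = np.concatenate([x[s:s + block_len] for s in starts])
        means[b] = resample.mean()
    return means.std(ddof=1)

rng = np.random.default_rng(1)
# AR(1) series as a stand-in for slowly decorrelating Monte Carlo output
x = np.empty(2000)
x[0] = 0.0
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()
naive_se = x.std(ddof=1) / np.sqrt(len(x))   # ignores autocorrelation
block_se = moving_block_bootstrap_se(x, block_len=50, n_boot=500, rng=rng)
```

Because blocks preserve the short-range time correlation, block_se should come out noticeably larger than the naive estimate, which is exactly the situation the abstract describes for slowly decaying correlations.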
This article presents a new numerical scheme for the discretization of dissipative particle dynamics with conserved energy. The key idea is to reduce elementary pairwise stochastic dynamics (either fluctuation/dissipation or thermal conduction) to effective single-variable dynamics, and to approximate the solution of these dynamics with one step of a Metropolis-Hastings algorithm. This ensures by construction that no negative internal energies are encountered during the simulation, and hence makes it possible to increase the admissible timesteps for integrating the dynamics, even for systems with small heat capacities. Stability is only limited by the Hamiltonian part of the dynamics, which suggests resorting to multiple-timestep strategies where the stochastic part is integrated less frequently than the Hamiltonian one. (C) 2017 Elsevier Inc. All rights reserved.
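The key idea above, one Metropolis-Hastings step per effective single-variable dynamics with negative internal energies excluded by construction, can be illustrated on a toy target. The exponential target density and Gaussian proposal below are assumptions for illustration, not the article's actual fluctuation/dissipation dynamics:

```python
import math
import random

def mh_energy_step(e, step_scale, beta, rng):
    """One Metropolis-Hastings step on a single internal-energy variable.
    Proposals that would drive the energy negative are rejected outright,
    so e > 0 holds by construction throughout the simulation."""
    e_new = e + rng.gauss(0.0, step_scale)
    if e_new <= 0.0:
        return e  # a negative internal energy is never encountered
    # Symmetric Gaussian proposal; illustrative target density ~ exp(-beta * e)
    log_ratio = -beta * (e_new - e)
    if log_ratio >= 0.0 or rng.random() < math.exp(log_ratio):
        return e_new
    return e

rng = random.Random(0)
e = 1.0
trace = []
for _ in range(1000):
    e = mh_energy_step(e, step_scale=0.5, beta=1.0, rng=rng)
    trace.append(e)
```

Because rejection leaves the state unchanged, the positivity constraint costs nothing beyond an occasional rejected move, even when the proposal scale is large relative to the current energy.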
Some multiple-point-based sampling algorithms, such as the snesim algorithm, rely on sequential simulation. The conditional probability distributions that are used for the simulation are based on statistics of multiple-point data events obtained from a training image. During the simulation, data events with zero probability in the training image statistics may occur. This is handled by pruning the set of conditioning data until an event with non-zero probability is found. The resulting probability distribution sampled by such algorithms is a pruned mixture model. The pruning strategy leads to a probability distribution that lacks some of the information provided by the multiple-point statistics from the training image, which reduces the reproducibility of the training image patterns in the outcome realizations. When pruned mixture models are used as prior models for inverse problems, local re-simulations are performed to obtain perturbed realizations. Consequently, these local re-simulations lead to additional pruning in the set of conditioning data, which further deteriorates the pattern reproduction. To mitigate this problem, it is here suggested to combine the pruned mixture model with a frequency-matching model. The multiple-point statistics of outcome realizations from this combined model show an improved degree of match with the statistics from the training image. An efficient algorithm that samples this combined model is suggested. Finally, a tomographic cross-borehole inverse problem with prior information expressed by the combined (prior) model is used to demonstrate the effect of pattern reproducibility on the resolution of an inverse problem.
N-15 tracing studies in combination with analyses via process-based models are the current "state-of-the-art" technique to quantify gross nitrogen (N) transformation rates in soils. A crucial component of this technique is the optimization algorithm, which primarily decides how many model parameters can simultaneously be estimated. Recently, we published a Markov chain Monte Carlo (MCMC) method which has the potential to simultaneously estimate a large number of parameters in N-15 tracing models [Muller et al., 2007. Estimation of parameters in complex N-15 tracing models by Monte Carlo sampling. Soil Biology & Biochemistry 39, 715-726]. Here, we present the results of a reanalysis of datasets by Kirkham and Bartholomew [1954. Equations for following nutrient transformations in soil, utilizing tracer data. Soil Science Society of America Proceedings 18, 33-34], Myrold and Tiedje [1986. Simultaneous estimation of several nitrogen cycle rates using N-15: theory and application. Soil Biology & Biochemistry 18, 559-568] and Watson et al. [2000. Overestimation of gross N transformation rates in grassland soils due to non-uniform exploitation of applied and native pools. Soil Biology & Biochemistry 32, 2019-2030] using the MCMC technique. Analytical solutions such as the ones derived by Kirkham and Bartholomew [1954] result in gross rates without uncertainties. We show that the analysis of the same data sets with the MCMC method provides standard deviations for gross N transformations. The standard deviations are further reduced if realistic data uncertainties are considered. Reanalyzing data by Myrold and Tiedje [1986] (Capac soil) resulted in a model fit similar to the one of the original analysis but with mor
This paper addresses the problem of locating two straight and parallel road edges in images that are acquired from a stationary millimeter-wave radar platform positioned near ground level. A fast, robust, and completely data-driven Bayesian solution to this problem is developed, and it has applications in automotive vision enhancement. The method employed in this paper makes use of a deformable template model of the expected road edges, a two-parameter lognormal model of the ground-level millimeter-wave (GLEM) radar imaging process, a maximum a posteriori (MAP) formulation of the straight-edge detection problem, and a Monte Carlo algorithm to maximize the posterior density. Experimental results are presented by applying the method to GLEM radar images of actual roads. The performance of the method is assessed against ground truth for a variety of road scenes.
Multicomponent analysis attempts to simultaneously predict the ingredients of a mixture. If near-infrared spectroscopy provides the predictor variables, then modern scanning instruments may offer absorbances at a very large number of wavelengths. Although it is perfectly possible to use whole spectrum methods (e.g. PLS, ridge and principal component regression), for a number of reasons it is often desirable to select a small number of wavelengths from which to construct the prediction equation relating absorbances to composition. This paper considers wavelength selection with a view to using the chosen wavelengths to simultaneously predict the compositional ingredients and is therefore an example of multivariate variable selection. It adopts a binary exclusion/inclusion latent variable formulation of selection and uses a Bayesian approach. Problems of search of the vast number of possible selected models are overcome by a Markov chain Monte Carlo sampling technique. (C) 1998 John Wiley & Sons, Ltd.
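The binary inclusion/exclusion formulation can be sketched with a simple Metropolis sampler over inclusion vectors. The size-penalized least-squares score below is a simplified stand-in for the marginal likelihood of the full Bayesian treatment, and the simulated data (five candidate wavelengths, two truly active) are illustrative assumptions:

```python
import numpy as np

def log_score(gamma, X, y, penalty):
    """Score of a binary inclusion vector gamma: residual sum of squares
    of a least-squares fit on the selected columns, plus a penalty per
    included variable (a simplified stand-in for a marginal likelihood)."""
    cols = np.flatnonzero(gamma)
    if cols.size == 0:
        rss = float(np.sum(y ** 2))
    else:
        beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rss = float(np.sum((y - X[:, cols] @ beta) ** 2))
    return -0.5 * rss - penalty * cols.size

def mcmc_select(X, y, n_iter, penalty, rng):
    """Metropolis sampler over inclusion vectors: flip one inclusion
    indicator per iteration and accept by the usual ratio."""
    p = X.shape[1]
    gamma = np.zeros(p, dtype=int)
    lp = log_score(gamma, X, y, penalty)
    for _ in range(n_iter):
        proposal = gamma.copy()
        proposal[rng.integers(p)] ^= 1  # flip inclusion/exclusion
        lp_new = log_score(proposal, X, y, penalty)
        if rng.random() < np.exp(min(0.0, lp_new - lp)):
            gamma, lp = proposal, lp_new
    return gamma

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 5))              # 5 candidate wavelengths
y = 3.0 * X[:, 0] + 3.0 * X[:, 2] + 0.1 * rng.normal(size=100)
gamma = mcmc_select(X, y, n_iter=2000, penalty=2.0, rng=rng)
```

The same single-flip chain scales to the vast model spaces the abstract mentions because each move re-scores only one candidate model, rather than enumerating all 2^p subsets.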
We prove explicit, i.e., non-asymptotic, error bounds for Markov chain Monte Carlo methods such as the Metropolis algorithm. The problem is to compute the expectation (or integral) of f with respect to a measure pi which can be given by a density rho with respect to another measure. A direct simulation of the desired distribution by a random number generator is in general not possible. Thus it is reasonable to use Markov chain sampling with a burn-in. We study such an algorithm and extend the analysis of Lovász and Simonovits [L. Lovász, M. Simonovits, Random walks in a convex body and an improved volume algorithm, Random Structures & Algorithms 4 (4) (1993) 359-412] to obtain an explicit error bound. (C) 2008 Elsevier Inc. All rights reserved.
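The burn-in scheme described above can be sketched for a simple one-dimensional target. The standard normal density, burn-in length, and step size below are illustrative choices, not the setting in which the paper's bounds are derived:

```python
import math
import random

def metropolis_expectation(f, log_density, x0, n_burn, n_samples, step, rng):
    """Estimate the expectation of f under the target density by a
    random-walk Metropolis chain, discarding the first n_burn states."""
    x = x0
    lp = log_density(x)
    total = 0.0
    for t in range(n_burn + n_samples):
        y = x + rng.uniform(-step, step)
        lp_y = log_density(y)
        if lp_y >= lp or rng.random() < math.exp(lp_y - lp):
            x, lp = y, lp_y  # accept the proposal
        if t >= n_burn:
            total += f(x)    # average only post-burn-in states
    return total / n_samples

rng = random.Random(42)
# Standard normal target, started far from the mode; E[x^2] = 1
est = metropolis_expectation(f=lambda x: x * x,
                             log_density=lambda x: -0.5 * x * x,
                             x0=10.0, n_burn=1000, n_samples=20000,
                             step=2.0, rng=rng)
```

Non-asymptotic bounds of the kind the paper proves quantify how large n_burn and n_samples must be for such an estimate to reach a prescribed accuracy, rather than appealing to convergence in the limit.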