Several models for studies related to the tensile strength of materials have been proposed in the literature in which the size or length of the specimen is taken to be an important factor in studying failure behaviour. An important model, developed on the basis of a cumulative damage approach, is the three-parameter extension of the Birnbaum-Saunders fatigue model that incorporates the size of the specimen as an additional variable. This model is a strong competitor of the commonly used Weibull model and stands better than the traditional models, which do not incorporate the size effect. The paper considers two such cumulative damage models, checks their compatibility with a real dataset, compares them with some recent toolkits, and finally recommends the model that appears to be the appropriate one. Throughout, the study is Bayesian, based on Markov chain Monte Carlo simulation.
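To make the distributional setup concrete, here is a small sketch of the two-parameter Birnbaum-Saunders CDF that the size-based three-parameter extension builds on. The abstract does not specify how specimen length enters the extended model, so the length adjustment below (scaling the scale parameter by the length) is a hypothetical placeholder, not the paper's parameterization.

```python
# Minimal sketch of the two-parameter Birnbaum-Saunders (BS) fatigue-life
# CDF.  The length adjustment is a purely illustrative assumption about how
# a size effect might enter; it is NOT the paper's three-parameter form.
import math

def bs_cdf(t, alpha, beta):
    """Classical BS CDF: Phi((1/alpha) * (sqrt(t/beta) - sqrt(beta/t)))."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bs_cdf_with_length(t, alpha, beta, length):
    """Hypothetical size effect: longer specimens fail sooner, modeled
    here by shrinking the scale parameter.  Illustrative only."""
    return bs_cdf(t, alpha, beta / length)

if __name__ == "__main__":
    print(bs_cdf(1.0, 0.5, 1.0))                 # 0.5 at t = beta, by symmetry
    print(bs_cdf_with_length(1.0, 0.5, 1.0, 2.0))  # larger: earlier failure
```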
ISBN (Print): 9781424429271
Many complex problems in speech recognition, bioinformatics, climatology, control, and communication are solved using Hidden Markov Models (HMMs). Most often, optimization problems are modeled as an HMM learning problem in which HMM parameters are either maximized or minimized. In general, the Baum-Welch method (BW) is used to solve the HMM learning problem, giving only local maxima/minima in exponential time. In this paper, we model the HMM learning problem as a discrete optimization problem so that randomized search methods can be used to solve it. We implement the Metropolis algorithm (MA) and the simulated annealing algorithm (SAA) to solve the discretized HMM learning problem, and make a comparative study of the randomized algorithms against the Baum-Welch method for estimating the HMM parameters. The Metropolis algorithm is found to reach the maxima in the minimum number of transitions compared with the Baum-Welch and simulated annealing algorithms.
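A rough sketch of the approach the abstract describes: score candidate HMM parameters with the forward algorithm and search the discretized parameter space with a Metropolis-style rule. The grid step, temperature, and perturbation scheme below are illustrative assumptions, not the paper's settings.

```python
# Sketch: HMM learning as discrete optimization via a Metropolis search.
import math
import random

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm; returns log P(obs | pi, A, B)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    loglik = 0.0
    for t in range(1, len(obs) + 1):
        s = sum(alpha)
        loglik += math.log(s)
        alpha = [a / s for a in alpha]               # rescale to avoid underflow
        if t < len(obs):
            alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                     for j in range(n)]
    return loglik

def perturb(mat, step=0.05):
    """Discrete move: shift a fixed chunk of mass between two row entries."""
    new = [row[:] for row in mat]
    r = random.randrange(len(new))
    i, j = random.sample(range(len(new[r])), 2)
    d = max(0.0, min(step, new[r][i] - 1e-6))        # keep entries positive
    new[r][i] -= d
    new[r][j] += d
    return new

def metropolis_hmm(obs, pi, A, B, iters=2000, temp=1.0):
    """Metropolis search over discretized (A, B); pi held fixed."""
    ll = forward_loglik(obs, pi, A, B)
    for _ in range(iters):
        A2, B2 = perturb(A), perturb(B)
        ll2 = forward_loglik(obs, pi, A2, B2)
        if ll2 >= ll or random.random() < math.exp((ll2 - ll) / temp):
            A, B, ll = A2, B2, ll2                   # accept the move
    return A, B, ll

if __name__ == "__main__":
    obs = [0, 1, 0, 0, 1, 1, 0]
    pi = [0.5, 0.5]
    A = [[0.5, 0.5], [0.5, 0.5]]
    B = [[0.5, 0.5], [0.5, 0.5]]
    A, B, ll = metropolis_hmm(obs, pi, A, B)
    print("best log-likelihood found:", ll)
```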
The Metropolis algorithm and its variants are perhaps the most widely used methods of generating Markov chains with a specified equilibrium distribution. We study an extension of the Metropolis algorithm from both a decision-theoretic and a rate-of-convergence point of view. The decision-theoretic approach was first taken by Peskun (1973), who showed some optimality properties of the classical Metropolis sampler. In this article, we propose an extension of the Metropolis algorithm that reduces the asymptotic variance and accelerates the convergence rate of its classical form. The principal method used to improve the properties of a sampler is to move mass from the diagonal elements of the Markov chain's transition matrix to the off-diagonal elements. A low-dimensional example is given to illustrate that our extended algorithm converges to the stationary distribution in order n steps, the fastest possible rate, while the conventional Metropolis chain takes at least order n² log n steps.
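The "move mass off the diagonal" principle can be seen already in the classical comparison of the Barker and Metropolis acceptance rules, which Peskun (1973) ordered in exactly this way. The sketch below illustrates that ordering on a small, illustrative five-state target; the paper's extension pushes the same idea beyond plain Metropolis.

```python
# Sketch of the Peskun principle: less diagonal mass, faster mixing.
import numpy as np

def transition_matrix(pi, accept):
    """Nearest-neighbour random-walk proposal plus an acceptance rule."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                P[i, j] = 0.5 * accept(pi[i], pi[j])
        P[i, i] = 1.0 - P[i].sum()      # leftover mass stays on the diagonal
    return P

metropolis = lambda p, q: min(1.0, q / p)
barker = lambda p, q: q / (p + q)

pi = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
P_m = transition_matrix(pi, metropolis)
P_b = transition_matrix(pi, barker)

# Both chains leave pi invariant (detailed balance) ...
assert np.allclose(pi @ P_m, pi) and np.allclose(pi @ P_b, pi)
# ... but Metropolis carries less mass on the diagonal (Peskun ordering)
print("diagonal mass:", np.diag(P_m).sum(), "<", np.diag(P_b).sum())
# and therefore has a smaller second-largest eigenvalue: faster mixing.
lam2 = lambda P: sorted(np.linalg.eigvals(P).real)[-2]
print("lambda_2:", lam2(P_m), "<", lam2(P_b))
```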
The authors provide an overview of optimal scaling results for the Metropolis algorithm with Gaussian proposal distribution. They address in more depth the case of high-dimensional target distributions formed of independent, but not identically distributed components. They attempt to give an intuitive explanation as to why the well-known optimal acceptance rate of 0.234 is not always suitable. They show how to find the asymptotically optimal acceptance rate when needed, and they explain why it is sometimes necessary to turn to inhomogeneous proposal distributions. Their results are illustrated with a simple example.
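A quick numerical illustration of the 0.234 rule on the simplest case, an i.i.d. standard-normal product target; the target choice and the constants are assumptions for illustration only.

```python
# Sketch: random-walk Metropolis scaled as ~2.38/sqrt(d) accepts near 0.234.
import numpy as np

rng = np.random.default_rng(0)

def acceptance_rate(d, scale, iters=20000):
    """Random-walk Metropolis on N(0, I_d); returns the acceptance rate."""
    x = np.zeros(d)
    logp = -0.5 * x @ x
    accepted = 0
    for _ in range(iters):
        y = x + scale * rng.standard_normal(d)
        logq = -0.5 * y @ y
        if np.log(rng.random()) < logq - logp:    # Metropolis accept step
            x, logp, accepted = y, logq, accepted + 1
    return accepted / iters

d = 50
print(acceptance_rate(d, 2.38 / np.sqrt(d)))   # close to 0.234
print(acceptance_rate(d, 6.0 / np.sqrt(d)))    # over-dispersed: far lower
```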
This paper is aimed at improving the efficiency of model uncertainty analyses that are conditioned on measured calibration data. Specifically, the focus is on developing an alternative methodology to the generalized likelihood uncertainty estimation (GLUE) technique when pseudolikelihood functions are utilized instead of a traditional statistical likelihood function. We demonstrate for multiple calibration case studies that the most common sampling approach utilized in GLUE applications, uniform random sampling, is much too inefficient and can generate misleading estimates of prediction uncertainty. We present how the new dynamically dimensioned search (DDS) optimization algorithm can be used to independently identify multiple acceptable or behavioral model parameter sets in two ways. DDS could replace random sampling in typical applications of GLUE. More importantly, we present a new, practical, and efficient uncertainty analysis methodology called DDS-approximation of uncertainty (DDS-AU) that quantifies prediction uncertainty using prediction bounds rather than prediction limits. Results for 13-, 14-, 26-, and 30-parameter calibration problems show that DDS-AU can be hundreds or thousands of times more efficient at finding behavioral parameter sets than GLUE with random sampling. Results for one example show that for the same limited computational effort, DDS-AU prediction bounds can simultaneously be smaller and contain more of the measured data in comparison to GLUE prediction bounds. We also argue and then demonstrate that within the GLUE framework, when behavioral parameter sets are not sampled frequently enough, Latin hypercube sampling does not offer any improvement over simple random sampling.
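For orientation, here is a minimal sketch of the GLUE baseline that the paper argues is inefficient: uniform random sampling over the prior, a pseudolikelihood score, and a behavioral threshold. The toy exponential model, the Nash-Sutcliffe-style score, and the 0.9 cutoff are illustrative assumptions, not taken from the paper's case studies.

```python
# Sketch of GLUE with uniform random sampling and a behavioral threshold.
import numpy as np

rng = np.random.default_rng(1)

def model(theta, t):
    """Hypothetical two-parameter simulator standing in for a real model."""
    return theta[0] * np.exp(-theta[1] * t)

t_obs = np.linspace(0.0, 5.0, 20)
y_obs = model((2.0, 0.7), t_obs) + 0.05 * rng.standard_normal(t_obs.size)
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)

def pseudolikelihood(theta):
    """Nash-Sutcliffe-style efficiency, a common GLUE pseudolikelihood."""
    return 1.0 - np.sum((y_obs - model(theta, t_obs)) ** 2) / ss_tot

# Uniform random sampling over prior bounds; keep "behavioral" sets only.
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 2.0])
samples = lo + (hi - lo) * rng.random((50000, 2))
scores = np.array([pseudolikelihood(th) for th in samples])
behavioral = samples[scores > 0.9]
print(f"{len(behavioral)} of {len(samples)} samples are behavioral")

# Prediction limits from the behavioral ensemble; DDS-AU replaces the
# sampling above with targeted search and reports prediction bounds.
preds = np.array([model(th, t_obs) for th in behavioral])
lower, upper = np.percentile(preds, [2.5, 97.5], axis=0)
print("mean 95% band width:", (upper - lower).mean())
```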
We describe a serial algorithm called feature-inclusion stochastic search, or FINCS, that uses online estimates of edge-inclusion probabilities to guide Bayesian model determination in Gaussian graphical models. FINCS is compared to MCMC, to Metropolis-based search methods, and to the popular lasso; it is found to be superior along a variety of dimensions, leading to better sets of discovered models, greater speed and stability, and reasonable estimates of edge-inclusion probabilities. We illustrate FINCS on an example involving mutual-fund data, where we compare the model-averaged predictive performance of models discovered with FINCS to those discovered by competing methods.
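The following sketch shows one way online edge-inclusion estimates could guide a serial stochastic search, in the spirit of FINCS. The marginal-correlation score is a toy stand-in for the Gaussian graphical-model marginal likelihood, and the biasing scheme and all constants are assumptions, not the paper's algorithm.

```python
# Sketch: edge-inclusion frequencies guiding which edge gets flipped next.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

p, n = 6, 200
X = rng.standard_normal((n, p))
X[:, 1] += X[:, 0]                       # plant one strong dependency
corr = np.abs(np.corrcoef(X, rowvar=False))
edges = list(combinations(range(p), 2))

def score(included):
    """Toy model score: reward correlated edges, penalize model size."""
    return sum(corr[i, j] - 0.3 for (i, j) in edges if included[(i, j)])

included = {e: False for e in edges}
counts = {e: 1.0 for e in edges}         # pseudo-count prior on inclusion
visits = 2.0
current = score(included)

for step in range(2000):
    probs = np.array([counts[e] / visits for e in edges])
    probs = probs * (1 - probs) + 0.05   # favor uncertain edges (assumption)
    e = edges[rng.choice(len(edges), p=probs / probs.sum())]
    included[e] = not included[e]        # propose flipping this edge
    new = score(included)
    if new >= current or rng.random() < np.exp(new - current):
        current = new                    # keep the flip
    else:
        included[e] = not included[e]    # revert
    visits += 1
    for g in edges:                      # online inclusion estimates
        counts[g] += included[g]

print({e: round(counts[e] / visits, 2)
       for e in edges if counts[e] / visits > 0.5})
```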
Incumbency advantage is one of the most widely studied features in American legislative elections. In this article we construct and implement an estimate that allows incumbency advantage to vary between individual incumbents. This model predicts that open-seat elections will be less variable than those with incumbents running, an observed empirical pattern that is not explained by previous models. We apply our method to the U.S. House of Representatives in the twentieth century. Our estimate of the overall pattern of incumbency advantage over time is similar to previous estimates (although slightly lower), and we also find a pattern of increasing variation. More generally, our multilevel model represents a new method for estimating effects in before-after studies.
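The variance prediction has a simple mechanical explanation: letting the incumbency advantage vary across incumbents adds a variance component to incumbent-contested races. The toy simulation below illustrates that mechanism; all numbers are invented for illustration, not estimates from the paper.

```python
# Toy illustration (not the paper's model or data): an incumbent-specific
# advantage psi adds variance, so open-seat races come out less variable.
import numpy as np

rng = np.random.default_rng(6)
n = 100000
noise = rng.normal(0.00, 0.03, n)        # district-level swing (assumption)
psi = rng.normal(0.08, 0.04, n)          # varying incumbency advantage

open_seat = 0.50 + noise
incumbent = 0.50 + noise + psi
print("sd open-seat:", open_seat.std())  # ~0.03
print("sd incumbent:", incumbent.std())  # ~0.05 = sqrt(0.03^2 + 0.04^2)
```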
Hierarchical classes models are models for N-way N-mode data that represent the association among the N modes and simultaneously yield, for each mode, a hierarchical classification of its elements. In this paper we present a stochastic extension of the hierarchical classes model for two-way two-mode binary data. In line with the original model, the new probabilistic extension still represents both the association between the two modes and the hierarchical classifications. A fully Bayesian method for fitting the new model is presented and evaluated in a simulation study. Furthermore, we propose tools for model selection and model checking based on Bayes factors and posterior predictive checks. We illustrate the advantages of the new approach with applications in the domain of the psychology of choice and psychiatric diagnosis.
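As an illustration of the model-checking side, here is a minimal posterior predictive check for two-way binary data. The independent-Beta toy posterior stands in for the actual hierarchical classes posterior, which this sketch does not implement.

```python
# Sketch of a posterior predictive check: simulate replicated binary data
# matrices from posterior draws and compare a discrepancy statistic.
import numpy as np

rng = np.random.default_rng(3)

data = rng.random((30, 20)) < 0.3        # observed two-way binary data (toy)
obs_stat = data.sum()                    # discrepancy: total number of ones

# Toy posterior: Beta(1 + ones, 1 + zeros) for a single Bernoulli rate.
ones, cells = data.sum(), data.size
ppp = 0
for _ in range(1000):
    theta = rng.beta(1 + ones, 1 + cells - ones)   # posterior draw
    rep = rng.random(data.shape) < theta           # replicated dataset
    ppp += rep.sum() >= obs_stat
print("posterior predictive p-value:", ppp / 1000)  # ~0.5 if model fits
```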
In this paper, we shall optimize the efficiency of Metropolis algorithms for multidimensional target distributions with scaling terms possibly depending on the dimension. We propose a method for determining the appropriate form for the scaling of the proposal distribution as a function of the dimension, which leads to the proof of an asymptotic diffusion theorem. We show that when no component has a scaling term significantly smaller than the others, the asymptotically optimal acceptance rate is the well-known 0.234.
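A small experiment illustrating why component scales matter: when one component lives on a much smaller scale than the rest, a homogeneous random-walk proposal wastes nearly every move, while a proposal matched to the component scales recovers efficiency. The Gaussian target and all constants are illustrative assumptions.

```python
# Sketch: homogeneous vs scale-matched proposals on an anisotropic target.
import numpy as np

rng = np.random.default_rng(4)

sigma = np.ones(30)
sigma[0] = 0.01                    # one component on a much smaller scale

def run(proposal_sd, iters=20000):
    """Random-walk Metropolis on N(0, diag(sigma^2)); returns acceptance
    rate and mean squared jump distance of the last coordinate."""
    x = np.zeros(sigma.size)
    logp = -0.5 * np.sum((x / sigma) ** 2)
    acc, msjd = 0, 0.0
    for _ in range(iters):
        y = x + proposal_sd * rng.standard_normal(sigma.size)
        logq = -0.5 * np.sum((y / sigma) ** 2)
        if np.log(rng.random()) < logq - logp:
            msjd += (y[-1] - x[-1]) ** 2
            x, logp, acc = y, logq, acc + 1
    return acc / iters, msjd / iters

d = sigma.size
hom = 2.38 / np.sqrt(d)            # single scale for every coordinate
inh = 2.38 / np.sqrt(d) * sigma    # proposal matched to component scales
print("homogeneous:  ", run(hom))  # tiny acceptance, tiny jumps
print("inhomogeneous:", run(inh))  # acceptance near 0.234, larger jumps
```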
Background: Neutral networks or sets consist of all genotypes with a given phenotype. The size and structure of these sets have a strong influence on a biological system's robustness to mutations and on its evolvability, the ability to produce phenotypic variation; in the few studied cases of molecular phenotypes, the larger this set, the greater both the robustness and the evolvability of phenotypes. Unfortunately, any one neutral set generally contains only a tiny fraction of genotype space. Thus, current methods cannot measure neutral set sizes accurately, except in the smallest genotype spaces. Results: Here we introduce a generalized Monte Carlo approach that can measure neutral set sizes in larger spaces. We apply our method to the genotype-to-phenotype mapping of RNA molecules, and show that it can reliably measure neutral set sizes for molecules up to 100 bases. We also study neutral set sizes of RNA structures in a publicly available database of functional, noncoding RNAs up to a length of 50 bases. We find that these neutral sets are larger than the neutral sets of 99.99% of random phenotypes. Software to estimate neutral network sizes is available at http://***/wagner/***. Conclusion: The biological RNA structures we examined are more abundant than random structures. This indicates that their robustness and their ability to produce new phenotypic variants may also be high.
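For contrast with the generalized approach, here is the naive Monte Carlo estimator it improves on: sample genotypes uniformly and scale the hit frequency by the size of genotype space. The toy base-composition phenotype is an assumption chosen so the exact answer is known; for real RNA secondary structures the hit probability is far too small for this estimator, which is precisely the problem the paper addresses.

```python
# Sketch of the naive neutral-set-size estimator: uniform sampling of
# genotype space, scaled hit frequency.  Toy phenotype = base composition,
# so the exact neutral set size is a multinomial coefficient.
import math
import random

random.seed(5)

L = 12
BASES = "ACGU"
target = (3, 3, 3, 3)                    # phenotype: counts of A, C, G, U

def phenotype(genotype):
    return tuple(genotype.count(b) for b in BASES)

space = 4 ** L                           # size of genotype space
hits = 0
samples = 200000
for _ in range(samples):
    g = "".join(random.choice(BASES) for _ in range(L))
    hits += phenotype(g) == target

estimate = space * hits / samples
exact = math.factorial(L) // (math.factorial(3) ** 4)   # multinomial count
print(f"estimate {estimate:.0f} vs exact {exact}")
```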