We propose an extension of the stochastic block model for recurrent interaction events in continuous time, where every individual belongs to a latent group and conditional interactions between two individuals follow an inhomogeneous Poisson process with intensity driven by the individuals' latent groups. We show that the model is identifiable and estimate it with a semiparametric variational expectation-maximization algorithm. We develop two versions of the method, one using a nonparametric histogram approach with an adaptive choice of the partition size, and the other using kernel intensity estimators. We select the number of latent groups by an integrated classification likelihood criterion. We demonstrate the performance of our procedure on synthetic experiments, analyse two datasets to illustrate the utility of our approach, and comment on competing methods.
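As a minimal sketch of the generative model described above, assuming for illustration two latent groups and constant pairwise intensities over a unit window (standing in for the paper's inhomogeneous ones); all sizes and rates are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: 2 latent groups, group-pair interaction rates.
n, K = 20, 2
pi = np.array([0.5, 0.5])              # group membership probabilities
lam = np.array([[5.0, 0.5],            # lam[q, l]: expected number of
                [0.5, 4.0]])           # interactions between groups q and l

z = rng.choice(K, size=n, p=pi)        # latent group of each individual

# With a constant intensity over a unit window, the event count on each
# directed pair is Poisson with the group-pair rate.
counts = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if i != j:
            counts[i, j] = rng.poisson(lam[z[i], z[j]])

# Within-group pairs should interact far more often than between-group pairs.
same = counts[np.equal.outer(z, z) & ~np.eye(n, dtype=bool)]
diff = counts[~np.equal.outer(z, z)]
print(same.mean() > diff.mean())
```

The variational EM of the paper would then alternate soft group assignments (E-step) with intensity estimates per group pair (M-step), nonparametrically in the inhomogeneous case.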
Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO)-based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing.
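The cure-model likelihoods themselves are involved, but the LASSO mechanics the abstract relies on can be sketched on a plain linear model via coordinate descent with soft-thresholding; the data, coefficients, and penalty value below are illustrative, not from the paper:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            rho = X[:, j] @ r / n
            beta[j] = soft_threshold(rho, alpha) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.normal(size=n)

beta = lasso_cd(X, y, alpha=0.2)
print(np.nonzero(np.abs(beta) > 1e-8)[0])   # indices of selected variables
```

The L1 penalty shrinks irrelevant coefficients exactly to zero, which is the selection behavior the proposed cure-model penalized likelihoods exploit.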
In this paper, the goodness-of-fit test based on a convex combination of Akaike and Bayesian information criteria is used to explain the features of interoccurrence times of earthquakes. By analyzing the seismic catalog of Iran for different tectonic settings, we have found that the probability distributions of time intervals between successive earthquakes can be described by the generalized normal distribution. This indicates that the sequence of successive earthquakes is not a Poisson process. It is found that by decreasing the threshold magnitude, the interoccurrence time distribution changes from the generalized normal distribution to the gamma distribution in some seismotectonic regions. As a new insight, the probability distribution of time intervals between earthquakes is described as a mixture distribution via the expectation-maximization algorithm.
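The mixture-via-EM idea can be illustrated with a simpler two-component exponential mixture of waiting times; the paper fits generalized normal and gamma components, so this sketch only shows the EM alternation, with simulated intervals:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated interoccurrence times from two regimes (illustrative only):
# short intervals (mean 1) mixed with long intervals (mean 10).
t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(10.0, 200)])

# EM for a two-component exponential mixture f(t) = w Exp(m1) + (1-w) Exp(m2).
w, m1, m2 = 0.5, np.mean(t) / 2, np.mean(t) * 2
for _ in range(200):
    # E-step: responsibility of component 1 for each interval
    d1 = w / m1 * np.exp(-t / m1)
    d2 = (1 - w) / m2 * np.exp(-t / m2)
    r = d1 / (d1 + d2)
    # M-step: weighted maximum likelihood estimates
    w = r.mean()
    m1 = (r * t).sum() / r.sum()
    m2 = ((1 - r) * t).sum() / (1 - r).sum()

print(round(m1, 1), round(m2, 1), round(w, 2))
```

The recovered component means and weight approximate the simulation's true values (1, 10, 0.6), the same alternation used for the earthquake interval mixtures.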
ISBN: (print) 9781538652398
In this paper, we address the problem of parameter estimation in railway systems. For this purpose, a physical model of the train based on the fundamental principle of dynamics is proposed. The parameter estimation is then handled by an approach combining the expectation-maximization algorithm with sequential Monte Carlo methods. Experiments performed on both synthetic and real data show the efficiency of the considered method.
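The sequential Monte Carlo ingredient of such a scheme can be sketched as a bootstrap particle filter on a toy longitudinal train model; the mass, traction, drag, and noise values are made up, and the full method would alternate this filtering machinery with M-step parameter updates:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy longitudinal dynamics (illustrative, not the paper's model):
# v[k+1] = v[k] + dt*(u - c*v[k]^2)/m + process noise; observe noisy speed.
dt, m, c, u = 0.1, 1.0e3, 2.0, 5.0e3
q_sd, r_sd = 0.05, 0.5

T = 100
v_true = np.zeros(T)
for k in range(1, T):
    v_true[k] = v_true[k-1] + dt*(u - c*v_true[k-1]**2)/m + q_sd*rng.normal()
y = v_true + r_sd * rng.normal(size=T)

# Bootstrap particle filter (the SMC ingredient of an EM-SMC scheme).
N = 500
particles = np.zeros(N)
est = np.zeros(T)
for k in range(T):
    if k > 0:
        particles = particles + dt*(u - c*particles**2)/m \
                    + q_sd*rng.normal(size=N)
    logw = -0.5*((y[k] - particles)/r_sd)**2
    w = np.exp(logw - logw.max()); w /= w.sum()
    est[k] = w @ particles
    particles = particles[rng.choice(N, N, p=w)]   # multinomial resampling

rmse = np.sqrt(np.mean((est - v_true)**2))
print(rmse < r_sd)   # filtered speed beats the raw observations
```

In the full method, smoothed particle trajectories would feed the expected complete-data log-likelihood that the M-step maximizes over the physical parameters.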
Phase distribution in the flow field provides an insight into the hydrodynamics and heat transfer between the fluids. Void fraction, which is one of the key flow parameters, can be determined by estimating the phase boundaries. Electrical impedance tomography (EIT), which has high temporal resolution, has been used as an imaging modality to estimate the void boundaries using prior knowledge of the conductivities. The voids formed within the process vessel are not stable and their movement is random in nature, so dynamic estimation schemes are necessary to track the fast changes. Kalman-type estimators such as the extended Kalman filter (EKF) assume knowledge of model parameters such as the initial states, the state transition matrix, and the covariances of the process and measurement noise. In real situations, we do not have prior information about the model parameters; in such circumstances the estimation performance of Kalman-type filters suffers. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the model parameters as well as the non-stationary void boundary. The uncertainties in Kalman-type filters caused by inaccurate selection of model parameters are thus overcome. The performance of the method is tested with numerical and experimental data. The results show that the EM approach estimates the void boundary better than the conventional EKF. (C) 2010 Elsevier Ltd. All rights reserved.
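The idea of using EM to learn a parameter the filter would otherwise assume known can be sketched on a scalar linear Gaussian state-space model, a stand-in for the EIT state model, where the M-step for the measurement noise variance has a closed form (Shumway-Stoffer style; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Scalar linear Gaussian state space:
# x[k] = a*x[k-1] + w,  w ~ N(0, Q);   y[k] = x[k] + v,  v ~ N(0, R).
a, Q, R_true, T = 0.95, 0.1, 0.5, 2000
x = np.zeros(T)
for k in range(1, T):
    x[k] = a*x[k-1] + np.sqrt(Q)*rng.normal()
y = x + np.sqrt(R_true)*rng.normal(size=T)

def smooth(y, a, Q, R):
    """Kalman filter + RTS smoother; returns smoothed means and variances."""
    T = len(y)
    xf = np.zeros(T); Pf = np.zeros(T)      # filtered
    xp = np.zeros(T); Pp = np.zeros(T)      # one-step predicted
    xprev, Pprev = 0.0, 1.0
    for k in range(T):
        xp[k] = a*xprev; Pp[k] = a*a*Pprev + Q
        K = Pp[k] / (Pp[k] + R)
        xf[k] = xp[k] + K*(y[k] - xp[k]); Pf[k] = (1 - K)*Pp[k]
        xprev, Pprev = xf[k], Pf[k]
    xs = xf.copy(); Ps = Pf.copy()          # backward RTS pass
    for k in range(T - 2, -1, -1):
        J = Pf[k]*a/Pp[k+1]
        xs[k] = xf[k] + J*(xs[k+1] - xp[k+1])
        Ps[k] = Pf[k] + J*J*(Ps[k+1] - Pp[k+1])
    return xs, Ps

# EM for the measurement noise variance R (a and Q assumed known here).
R = 5.0                                    # deliberately poor initial guess
for _ in range(50):
    xs, Ps = smooth(y, a, Q, R)            # E-step: smoothed moments
    R = np.mean((y - xs)**2 + Ps)          # M-step: closed-form update

print(round(R, 2))
```

Starting from a guess an order of magnitude off, the EM iterations pull R back near the true value 0.5, which is the behavior the paper exploits to make the filter robust to mis-specified noise covariances.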
This study proposes an expectation-maximization (EM)-based curve evolution algorithm for segmentation of magnetic resonance brain images. In the proposed algorithm, the evolution curve is constrained not only by a shape-based statistical model but also by a hidden variable model from the image observation. The hidden variable model herein is defined by the local voxel labeling, which is unknown and estimated by the expected likelihood function derived from the image data and prior anatomical knowledge. In the M-step, the shapes of the structures are estimated jointly by encoding the hidden variable model and the statistical prior model obtained from the training stage. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray-matter nuclei, such as the caudate, putamen, and thalamus, in three-dimensional magnetic resonance images of volunteers and patients. In terms of robustness and accuracy, the proposed EM-joint shape-based algorithm outperformed statistical shape model-based techniques in the same framework and a current state-of-the-art region competition level set method. (C) 2011 Elsevier Inc. All rights reserved.
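Stripped of the shape prior (which is the paper's contribution), the voxel-labeling E-step/M-step alternation reduces to EM for a Gaussian intensity mixture; a toy two-class sketch on synthetic "voxel" intensities:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy "image": voxels drawn from two tissue classes with Gaussian intensities.
img = np.concatenate([rng.normal(30, 5, 4000),    # background-like class
                      rng.normal(70, 8, 2000)])   # structure-like class

# EM for the voxel-labeling part only: alternate expected labels (E-step)
# with class mean/variance/weight updates (M-step).
mu = np.array([20.0, 90.0]); sd = np.array([10.0, 10.0]); w = np.array([.5, .5])
for _ in range(100):
    # E-step: posterior label probabilities per voxel
    d = w / sd * np.exp(-0.5*((img[:, None] - mu)/sd)**2)
    r = d / d.sum(axis=1, keepdims=True)
    # M-step: weighted class statistics
    nk = r.sum(axis=0)
    w = nk / len(img)
    mu = (r * img[:, None]).sum(axis=0) / nk
    sd = np.sqrt((r * (img[:, None] - mu)**2).sum(axis=0) / nk)

labels = r.argmax(axis=1)
print(mu.round(0))
```

The full algorithm couples these expected labels with the trained shape model in the M-step, so the curve is driven jointly by intensity evidence and anatomical priors.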
Authors: Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A. (Ohio State Univ, Columbus, OH 43210, USA)
Researchers often rely on simple methods to identify the involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate them into groups based on the expertise of the investigator. In cases where neuron populations are small, it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic, objective classification of neurons recorded in association with behavioral tasks. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on the integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, the behavioral elements of neurons were inspected to physiologically validate the model. The methodology was tested using a set of data collected from awake, behaving non-human primates. (C) 2016 Elsevier B.V. All rights reserved.
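The MUSIC feature-extraction stage can be sketched on a synthetic firing-rate-like signal; the embedding dimension, subspace rank, and frequency bands below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy "firing-rate" signal: two oscillatory components plus noise.
fs = 100.0
t = np.arange(1000) / fs
x = np.sin(2*np.pi*3*t) + 0.5*np.sin(2*np.pi*12*t) + 0.3*rng.normal(size=t.size)

# MUSIC pseudospectrum from the eigendecomposition of the autocorrelation
# matrix: signal subspace = top eigenvectors, noise subspace = the rest.
m, p = 40, 4                    # embedding dim; 2 real sinusoids -> rank 4
X = np.lib.stride_tricks.sliding_window_view(x, m)
Rxx = X.T @ X / X.shape[0]
eigvals, eigvecs = np.linalg.eigh(Rxx)   # eigenvalues in ascending order
En = eigvecs[:, :m - p]                  # noise subspace

freqs = np.linspace(0.5, 20, 400)
steer = np.exp(-2j*np.pi*np.outer(freqs/fs, np.arange(m)))
pseudo = 1.0 / np.linalg.norm(steer @ En, axis=1)**2

def peak(fband):
    """Frequency of the pseudospectrum maximum inside a band."""
    mask = (freqs > fband[0]) & (freqs < fband[1])
    return freqs[mask][np.argmax(pseudo[mask])]

print(round(peak((1, 6)), 1), round(peak((9, 15)), 1))
```

The pseudospectrum peaks recover the two oscillatory components near 3 Hz and 12 Hz; in the methodology, such peak locations and magnitudes become the features that the EM-GMM stage clusters.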
We discuss the inverted exponentiated Rayleigh distribution under progressive first-failure censoring. Maximum likelihood and Bayes estimates of the unknown parameters are obtained. An expectation-maximization algorithm is used for computing the maximum likelihood estimates. Asymptotic intervals are constructed from the observed Fisher information matrix. Bayes estimates of the unknown parameters are obtained under the squared error loss function. We construct highest posterior density intervals based on importance sampling. Different predictors and prediction intervals for the censored observations are discussed. A Monte Carlo simulation study is performed to compare the different methods. Finally, three real data sets are analyzed for illustration purposes.
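The EM-for-MLE step can be illustrated on a much simpler censored lifetime model: for right-censored exponential data the E-step has a closed form by memorylessness, and the EM fixed point matches the closed-form MLE. The inverted exponentiated Rayleigh model of the paper needs numerical E- and M-steps instead; this sketch only shows the mechanics:

```python
import numpy as np

rng = np.random.default_rng(7)

# Right-censored exponential lifetimes (illustrative stand-in model).
lam_true, n = 0.5, 1000
t = rng.exponential(1/lam_true, n)
c = rng.exponential(1/lam_true, n)          # random censoring times
obs = np.minimum(t, c)
delta = (t <= c).astype(float)              # 1 = event observed, 0 = censored

lam = 1.0                                   # initial guess
for _ in range(100):
    # E-step: by memorylessness, E[T | T > c] = c + 1/lam for censored units
    expected_t = np.where(delta == 1, obs, obs + 1/lam)
    # M-step: exponential MLE on the completed data
    lam = n / expected_t.sum()

# The EM fixed point equals the closed-form censored-data MLE.
mle = delta.sum() / obs.sum()
print(abs(lam - mle) < 1e-6)
```

Solving the fixed-point equation lam * sum(obs) + n_censored = n recovers exactly lam = (number of events) / (total time at risk), confirming the iteration converges to the MLE.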
ISBN: (print) 9781424496365
We prove a general sufficient condition for a noise benefit in the expectation-maximization (EM) algorithm. Additive noise speeds the average convergence of the EM algorithm to a local maximum of the likelihood surface when the noise condition holds. The sufficient condition states when additive noise makes the signal more probable on average. The performance measure is Kullback relative entropy. A Gaussian-mixture problem demonstrates the EM noise benefit. Corollary results give other special cases when noise improves performance in the EM algorithm.
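The noise-injection mechanism can be sketched for a Gaussian-mixture EM by perturbing the data with annealed zero-mean noise before each E-step. Whether the perturbation actually speeds convergence depends on the sufficient condition proved in the paper; this toy only shows the mechanics and that the noisy iterates still reach the right answer:

```python
import numpy as np

rng = np.random.default_rng(8)

# Two-component Gaussian mixture sample.
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])

def em_step(x, mu, sd, w):
    """One EM update for a K-component univariate Gaussian mixture."""
    d = w / sd * np.exp(-0.5*((x[:, None] - mu)/sd)**2)
    r = d / d.sum(axis=1, keepdims=True)            # E-step responsibilities
    nk = r.sum(axis=0)                              # M-step
    mu_new = (r * x[:, None]).sum(axis=0) / nk
    sd_new = np.sqrt((r * (x[:, None] - mu_new)**2).sum(axis=0) / nk)
    return mu_new, sd_new, nk / len(x)

# Noisy EM: add annealed zero-mean noise to the data before each E-step.
mu = np.array([-0.5, 0.5]); sd = np.array([2.0, 2.0]); w = np.array([.5, .5])
for it in range(60):
    noise_sd = 0.5 / (1 + it)          # annealing schedule (illustrative)
    xn = x + noise_sd * rng.normal(size=x.size)
    mu, sd, w = em_step(xn, mu, sd, w)

print(np.sort(mu).round(1))
```

Because the injected noise shrinks toward zero, the perturbed iterates settle on the same fixed point as plain EM (component means near -2 and 2 here), while the early-iteration noise is where the proved speed-up can occur.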
In recent years, the online blogging community has grown rapidly as a social network service, and the number of blog posts increases day by day. People generally use blog search engines to search for and recommend potentially interesting blog posts. When searching, they face two major problems: synonymy (two different terms with the same meaning) and polysemy (a term with different meanings). In this paper, we use two semantic analysis methods, Latent Semantic Indexing (LSI) and Probabilistic LSI (PLSI), to address these two problems. LSI uses singular value decomposition as its fundamental method to capture the synonymous relationship between terms. PLSI uses the expectation-maximization algorithm for parameter estimation to additionally deal with the problem of polysemy. Although PLSI can gracefully handle both semantic problems, it requires substantial computing time. To address the computing time, we propose a novel termination mechanism that dynamically determines the required number of iterations for PLSI. According to the experimental results, our mechanism not only deals with the two semantic problems but also reaches a cost-effective solution.
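A sketch of PLSI's EM recursion with a likelihood-based termination rule in the spirit described; the stopping criterion here is a simple relative-improvement threshold, an illustrative stand-in for the proposed mechanism, and the corpus is a tiny synthetic term-document matrix:

```python
import numpy as np

rng = np.random.default_rng(9)

# Tiny synthetic term-document count matrix generated from two latent topics.
n_docs, n_words, K = 20, 30, 2
topic_w = np.zeros((K, n_words))
topic_w[0, :15] = 1/15                        # topic 0 uses words 0..14
topic_w[1, 15:] = 1/15                        # topic 1 uses words 15..29
doc_topic = rng.dirichlet([0.3, 0.3], n_docs)
N = np.array([rng.multinomial(100, dt @ topic_w) for dt in doc_topic])

# PLSI: P(d, w) = sum_z P(z) P(d|z) P(w|z), fitted by EM; stop once the
# relative log-likelihood improvement falls below tol.
Pz = np.full(K, 1/K)
Pd_z = rng.dirichlet(np.ones(n_docs), K)      # P(d|z), rows sum to 1
Pw_z = rng.dirichlet(np.ones(n_words), K)     # P(w|z)
prev_ll, tol, max_iter = -np.inf, 1e-4, 500
for it in range(max_iter):
    # E-step: P(z|d,w) for every (d, w) pair
    joint = Pz[:, None, None] * Pd_z[:, :, None] * Pw_z[:, None, :]  # K x D x W
    denom = joint.sum(axis=0)                                        # D x W
    Pz_dw = joint / denom
    # M-step: reweight by observed counts and renormalize
    weighted = Pz_dw * N[None, :, :]
    Pw_z = weighted.sum(axis=1); Pw_z /= Pw_z.sum(axis=1, keepdims=True)
    Pd_z = weighted.sum(axis=2); Pd_z /= Pd_z.sum(axis=1, keepdims=True)
    Pz = weighted.sum(axis=(1, 2)); Pz /= Pz.sum()
    ll = (N * np.log(denom)).sum()
    if ll - prev_ll < tol * abs(prev_ll):     # dynamic termination
        break
    prev_ll = ll

print(it < max_iter - 1)   # stopped early by the likelihood criterion
```

Since EM never decreases the log-likelihood, monitoring its relative improvement gives a principled point at which further iterations buy little, which is the cost saving the termination mechanism targets.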