In this paper, we introduce restricted empirical likelihood and restricted penalized empirical likelihood estimators. These estimators are obtained under both unbiasedness and minimum variance criteria for estimating equations. These criteria produce estimators with appealing properties; in particular, they are more robust against outliers than some currently existing estimators. Assuming some prior densities, we develop the Bayesian analysis of the restricted empirical likelihood and the restricted penalized empirical likelihood. Moreover, we provide an EM algorithm to approximate the hyper-parameters. Finally, we carry out a simulation study and illustrate the theoretical results on a real data set.
Background: Multivariate statistical process monitoring (MSPM) methods have been widely studied and applied for dynamic process monitoring. However, limitations exist in the extant literature, as numerous MSPM approaches fail to concurrently account for the governing dynamics and the inherent non-linearity of systems. Methods: To handle this issue, a novel probabilistic dynamic process monitoring algorithm named probabilistic sparse identification of nonlinear dynamics (PSINDy) is proposed. In this algorithm, the Expectation-Maximization (EM) method is employed to estimate the parameters. In the E-step, the particle filtering technique is adopted to calculate the corresponding latent expectations, whose posterior distributions are not Gaussian. Furthermore, Hotelling's T² and two novel monitoring statistics, a Mahalanobis-based predictive error and a dynamic predictive error, are designed and utilized for fault detection. Findings: The effectiveness of the proposed method is validated on the Tennessee Eastman (TE) chemical process and the three-phase flow facility (TPFF).
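As background, the Hotelling's T² statistic referenced above can be sketched in a few lines: estimate the mean and covariance from normal-operation data, then flag observations whose T² exceeds a chi-square control limit. This is a minimal generic sketch with simulated three-variable data, not the article's PSINDy algorithm or its predictive-error statistics.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical normal-operation training data for a 3-variable process.
X_train = rng.normal(size=(500, 3))
mu = X_train.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def hotelling_t2(x):
    """Hotelling's T^2 for a new observation x."""
    d = x - mu
    return float(d @ S_inv @ d)

# Control limit from the chi-square approximation at the 99% level.
limit = stats.chi2.ppf(0.99, df=3)

print(hotelling_t2(mu) <= limit)                             # True: in control
print(hotelling_t2(mu + np.array([5.0, 0.0, 0.0])) > limit)  # True: fault flagged
```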
Multidimensional item response theory (MIRT) is widely used in assessment and evaluation of educational and psychological tests. It models the individual response patterns by specifying a functional relationship between individuals' multiple latent traits and their responses to test items. One major challenge in parameter estimation in MIRT is that the likelihood involves intractable multidimensional integrals due to the latent variable structure. Various methods have been proposed that involve either direct numerical approximations to the integrals or Monte Carlo simulations. However, these methods are known to be computationally demanding in high dimensions and rely on sampling data points from a posterior distribution. We propose a new Gaussian variational expectation-maximization (GVEM) algorithm which adopts variational inference to approximate the intractable marginal likelihood by a computationally feasible lower bound. In addition, the proposed algorithm can be applied to assess the dimensionality of the latent traits in an exploratory analysis. Simulation studies are conducted to demonstrate the computational efficiency and estimation precision of the new GVEM algorithm compared to the popular alternative Metropolis-Hastings Robbins-Monro algorithm. In addition, theoretical results are presented to establish the consistency of the estimator from the new GVEM algorithm.
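To make the intractable-integral point concrete, here is a minimal sketch of the kind of direct numerical approximation the abstract refers to: the marginal probability of one response pattern under a unidimensional two-parameter logistic (2PL) model, computed by Gauss-Hermite quadrature over the latent trait. The item parameters are hypothetical; with many latent dimensions, the quadrature grid grows exponentially, which is the bottleneck GVEM is designed to avoid.

```python
import numpy as np

# Hypothetical 2PL item parameters and one observed response pattern.
a = np.array([1.0, 1.5, 0.8])   # discrimination
b = np.array([-0.5, 0.0, 0.7])  # difficulty
y = np.array([1, 0, 1])         # responses (1 = correct)

# Probabilists' Gauss-Hermite nodes/weights for integrating against exp(-t^2/2).
nodes, weights = np.polynomial.hermite_e.hermegauss(21)

# P(y_j = 1 | theta) at each quadrature node.
p = 1.0 / (1.0 + np.exp(-a[None, :] * (nodes[:, None] - b[None, :])))
lik_theta = np.prod(p**y * (1 - p)**(1 - y), axis=1)

# Marginal likelihood: E_{theta ~ N(0,1)}[ P(y | theta) ].
marginal = (weights * lik_theta).sum() / np.sqrt(2 * np.pi)
print(0 < marginal < 1)   # True: a valid response-pattern probability
```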
We propose a framework for inference based on an "iterative likelihood function," which provides a unified representation for a number of iterative approaches, including the EM algorithm and the generalized estimating equations (GEEs). The parameters are decoupled to facilitate construction of the inference vehicle, to simplify computation, or to ensure robustness to model misspecification, and then recoupled to retain their original interpretations. For simplicity, throughout the paper, we will refer to the log-likelihood as the "likelihood." We define the global, local, and stationary estimates of an iterative likelihood and, correspondingly, the global, local, and stationary attraction points of the expected iterative likelihood. Asymptotic properties of the global, local, and stationary estimates are derived under certain assumptions. An iterative likelihood is usually constructed such that the true value of the parameter is a point of attraction of the expected log-likelihood. Often, one can only verify that the true value of the parameter is a local or stationary attraction, but not a global attraction. We show that when the true value of the parameter is a global attraction, any global estimate is consistent and asymptotically normal; when the true value is a local or stationary attraction, there exists a local or stationary estimate that is consistent and asymptotically normal, with a probability tending to 1. The behavior of the estimates under a misspecified model is also discussed. Our methodology is illustrated with three examples: (i) estimation of the treatment group difference in the level of censored HIV RNA viral load from an AIDS clinical trial; (ii) analysis of the relationship between forced expiratory volume and height in girls from a longitudinal pulmonary function study; and (iii) investigation of the impact of smoking on lung cancer in the presence of DNA adducts. Two additional examples are in the , GEEs with missing covariates and an unweighted
Testing for deviations from Hardy-Weinberg equilibrium (HWE) can provide fundamental information about genetic variation and evolutionary processes in natural populations. In contrast to diploids, where genotype frequencies remain constant after a single episode of random mating, polyploids, characterized by polysomic inheritance, approach HWE gradually. Here, we mathematically show the asymptotic trajectory of tetraploid equilibrium from any initial genotype frequencies. We formulate a statistical framework to test and estimate the degree of deviation from HWE at individual loci in allotetraploids and autotetraploids. Knowledge about HWE testing fills an important gap in population genetic studies of tetraploids related to their evolution and ecology.
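For background, the classical chi-square goodness-of-fit test for HWE at a biallelic diploid locus can be sketched as follows; the genotype counts are hypothetical, and the paper's tetraploid framework is considerably more involved than this diploid baseline.

```python
import numpy as np
from scipy import stats

def hwe_chisq(n_AA, n_Aa, n_aa):
    """Chi-square goodness-of-fit test for HWE at a biallelic diploid locus."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)  # estimated frequency of allele A
    expected = np.array([p**2, 2 * p * (1 - p), (1 - p)**2]) * n
    observed = np.array([n_AA, n_Aa, n_aa])
    chi2 = ((observed - expected)**2 / expected).sum()
    # 3 genotype classes - 1 - 1 estimated allele frequency = 1 df
    p_value = stats.chi2.sf(chi2, df=1)
    return chi2, p_value

# Hypothetical counts close to HWE proportions.
chi2, p = hwe_chisq(298, 489, 213)
print(p > 0.05)   # True: no significant deviation from HWE
```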
Models for situations where some individuals are long-term survivors, immune or non-susceptible to the event of interest, are extensively studied in biomedical research. Fitting a regression can be problematic in situations involving small sample sizes with a high censoring rate, since the maximum likelihood estimates of some coefficients may be infinite. This phenomenon is called monotone likelihood, and it occurs in the presence of many categorical covariates, especially when one covariate level is not associated with any failure (in survival analysis) or when a categorical covariate perfectly predicts a binary response (in logistic regression). A well-known solution is an adaptation of the Firth method, originally created to reduce estimation bias. The method provides finite estimates by penalizing the likelihood function. Bias correction in the mixture cure model is a topic rarely discussed in the literature, and it constitutes a central contribution of this work. To handle this point in such a context, we derive the adjusted score function based on the Firth method. An extensive Monte Carlo simulation study indicates good inference performance for the penalized maximum likelihood estimates. The analysis is illustrated through a real application involving patients with melanoma treated at the Hospital das Clinicas/UFMG in Brazil. This is a relatively novel data set affected by the monotone likelihood issue and containing cured individuals.
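Firth's adjusted-score idea can be sketched for ordinary logistic regression (not the mixture cure model treated in the paper): the score is augmented with a hat-matrix correction, which keeps estimates finite even under complete separation. A minimal numpy sketch with toy, perfectly separated data:

```python
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    """Firth-penalized logistic regression via Newton-Raphson.

    Maximizes the log-likelihood plus 0.5*log|X'WX| (Jeffreys prior),
    which keeps the estimates finite under separation.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        pi = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = pi * (1.0 - pi)
        XtWX_inv = np.linalg.inv(X.T @ (W[:, None] * X))
        # Diagonal of the hat matrix H = W^{1/2} X (X'WX)^{-1} X' W^{1/2}.
        h = np.einsum('ij,jk,ik->i', X, XtWX_inv, X) * W
        # Firth-adjusted score and Newton step.
        step = XtWX_inv @ (X.T @ (y - pi + h * (0.5 - pi)))
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Perfectly separated toy data: the ordinary MLE diverges, Firth stays finite.
X = np.column_stack([np.ones(6), [-3.0, -2.0, -1.0, 1.0, 2.0, 3.0]])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
beta = firth_logistic(X, y)
print(np.all(np.isfinite(beta)))   # True
```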
A pivotal quantity is a random variable that is a function of both the random data and the unknown population parameters and whose probability distribution does not depend on any of the unknown parameters. The population parameters here may include nuisance parameters. Historically, pivotal quantities have been used for the construction of test statistics for hypothesis testing of some of these unknown parameters. They have also been used for the construction of confidence intervals for some of these parameters. Generalized pivotal quantities (GPQs) were introduced by Tsui and Weerahandi (1989) and Weerahandi (1993). A GPQ is a function not only of the random data and the unknown parameters, but also of an independent copy of the random data. In addition, an observed GPQ does not depend on the nuisance parameters (but may depend on the parameters of interest). These GPQs can be used to construct generalized confidence intervals and to perform hypothesis tests on a single unknown parameter in cases where the traditional method fails. In this MS thesis, we first estimate the parameters of a mixture of two normal distributions using a modified EM algorithm proposed by Ghojogh, Ghojogh, Crowley, and Karray (2020). We then calculate asymptotic confidence intervals after proposing a method for finding asymptotic standard errors for the estimates of these parameters. Next, we review the theory of classical pivotal quantities and give some examples. In these examples, using pivotal quantities, we construct confidence sets for single unknown parameters. We next review the theory of generalized pivotal quantities introduced by Tsui and Weerahandi (1989) and Weerahandi (1993). Using these generalized pivotal quantities, we compute generalized confidence intervals for the parameters of a log-normal distribution. Finally, in this thesis, we use the modifications of Nkurunziza and Chen (2011) to the theory of Tsui and Weerahandi (1989) in order to define modified generalized pivotal quantities.
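The standard EM algorithm for a two-component normal mixture, the starting point of the thesis, can be sketched as follows. This is the plain EM iteration, not the modified version of Ghojogh et al. (2020), and the initialization is a simple heuristic:

```python
import numpy as np

def em_two_normals(x, n_iter=200):
    """Plain EM for a two-component univariate Gaussian mixture."""
    # Crude initialization: means at the sample extremes, common spread.
    mu = np.array([x.min(), x.max()])
    sigma = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates.
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
w, mu, sigma = em_two_normals(x)
print(np.allclose(sorted(mu), [-3, 3], atol=0.3))   # True: means recovered
```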
Unsupervised learning tasks, such as unsupervised image segmentation and clustering, are fundamental in image representation learning. In this paper, we design a deep expectation-maximization (DEM) network for unsupervised image segmentation and clustering. It is based on statistical modeling of the image in its latent feature space by a Gaussian mixture model (GMM), implemented in a novel deep learning framework. Specifically, in the unsupervised setting, we design an auto-encoder network and an EM module over the image latent features, for jointly learning the image latent features and the GMM of those features in a single framework. To construct the EM module, we unfold the iterative operations of the EM algorithm and the online EM algorithm in a fixed number of steps into differentiable network blocks, plugged into the network to estimate the GMM parameters of the image latent features. The network parameters can be optimized end-to-end using losses based on the GMM log-likelihood, the entropy of the Gaussian component assignment probabilities, and the image reconstruction error. Extensive experiments confirm that our proposed networks achieve favorable results compared with several state-of-the-art methods in unsupervised image segmentation and clustering.
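The heart of such an EM module, a fixed number of unrolled E/M updates over latent features, can be sketched in numpy. This simplified version uses isotropic unit-variance components and updates means only; in the actual network the unrolled blocks are differentiable layers trained end-to-end with the losses described above.

```python
import numpy as np

def em_module(z, mu, T=5):
    """Unrolled EM over latent features z: T fixed E/M steps.

    Sketch of unfolding EM into fixed blocks (isotropic components,
    means only, for brevity).
    """
    for _ in range(T):
        # E-step: softmax over negative squared distances to each mean.
        logits = -0.5 * ((z[:, None, :] - mu[None]) ** 2).sum(-1)
        r = np.exp(logits - logits.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: responsibility-weighted means.
        mu = (r[:, :, None] * z[:, None, :]).sum(0) / r.sum(0)[:, None]
    return mu, r

rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(-2, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
mu0 = np.array([[-1.0, -1.0], [1.0, 1.0]])
mu, r = em_module(z, mu0)
print(np.allclose(mu, [[-2, -2], [2, 2]], atol=0.2))   # True: cluster means found
```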
This article introduces a new class of stochastic volatility models called log-GARCH Stochastic Volatility models (log-GARCH-SV). We establish the strict stationarity and second-order stationarity properties of this model class. Additionally, we provide conditions for the existence of higher-order moments. To estimate the parameters of the proposed model, we utilize a sequential Monte Carlo method. Finally, we assess the performance of the suggested estimation method through a simulation study.
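Sequential Monte Carlo estimation of a stochastic volatility likelihood can be illustrated with a bootstrap particle filter on a basic SV model (log-volatility following an AR(1)); this is a generic sketch, not the article's log-GARCH-SV specification:

```python
import numpy as np

def pf_loglik(y, phi, sigma_eta, beta, n_part=1000, seed=0):
    """Bootstrap particle-filter log-likelihood for a basic SV model:
        h_t = phi*h_{t-1} + eta_t,   y_t = beta*exp(h_t/2)*eps_t.
    """
    rng = np.random.default_rng(seed)
    h = rng.normal(0, sigma_eta / np.sqrt(1 - phi**2), n_part)  # stationary init
    loglik = 0.0
    for yt in y:
        h = phi * h + rng.normal(0, sigma_eta, n_part)            # propagate
        var = (beta * np.exp(h / 2)) ** 2
        w = np.exp(-0.5 * yt**2 / var) / np.sqrt(2 * np.pi * var)  # obs density
        loglik += np.log(w.mean())                                 # likelihood term
        h = rng.choice(h, size=n_part, p=w / w.sum())              # resample
    return loglik

# Simulate data from the model, then evaluate the PF likelihood estimate.
rng = np.random.default_rng(3)
T, phi, sig, beta = 200, 0.95, 0.2, 1.0
h = np.zeros(T)
h[0] = rng.normal(0, sig / np.sqrt(1 - phi**2))
for t in range(1, T):
    h[t] = phi * h[t - 1] + rng.normal(0, sig)
y = beta * np.exp(h / 2) * rng.normal(size=T)
print(np.isfinite(pf_loglik(y, phi, sig, beta)))   # True
```

Embedding such a filter in a parameter search (or particle MCMC) is the usual route to the sequential Monte Carlo estimation the abstract mentions.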
In this paper, the shape mixtures of the skew Laplace normal (SMSLN) distribution are introduced as a flexible, heavy-tailed extension of the skew Laplace normal distribution. The SMSLN distribution includes an extra shape parameter, which controls skewness and kurtosis. Some distributional properties of this distribution are derived. Besides, we propose finite mixtures of SMSLN distributions to model both skewness and heavy-tailedness in heterogeneous data sets. The maximum likelihood estimators of the parameters of interest are obtained via the Expectation-Maximization algorithm. We also present a simulation study and a real data example as numerical illustrations of the proposed estimators.
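The SMSLN density itself is not reproduced in this abstract, so as a simpler stand-in, the idea of capturing skewness through an explicit shape parameter can be illustrated with scipy's skew-normal distribution and its maximum likelihood fit:

```python
import numpy as np
from scipy import stats

# Simulate clearly right-skewed data from a skew-normal law
# (shape a=5 gives pronounced right skewness)...
rng = np.random.default_rng(4)
x = stats.skewnorm.rvs(a=5, loc=0, scale=2, size=2000, random_state=rng)

# ...and recover the shape (skewness) parameter by maximum likelihood.
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(x)
print(a_hat > 0)   # True: right skewness recovered
```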