ISBN (print): 9781479910144
With the improving reliability of modern aviation equipment, field failure data are scarce, consisting of few observations, many of them randomly censored. To address the difficulty of reliability evaluation under these conditions, a Monte Carlo simulation algorithm is employed to validate the feasibility of the EM algorithm for the reliability evaluation of aviation equipment. The main idea is to assume a fleet of aircraft operated over a long period; randomly censored failure-time observations are generated as the number of aircraft, the flight years, and the daily utilization rate change dynamically. Together with the Pauta criterion, the precision and validity of the EM algorithm are analyzed quantitatively. A simulation example based on field data for one type of aircraft shows that the validity of the EM algorithm reaches 90% only when the sample size exceeds 20; a larger sample size is therefore needed to improve the performance of the EM algorithm.
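To make the evaluation procedure concrete, the following Python sketch runs a small Monte Carlo study of an EM fit for exponential lifetimes under random censoring and screens the estimates with the Pauta (3-sigma) criterion. The exponential lifetime model, the censoring scheme, and the 20% validity tolerance are illustrative assumptions, not the paper's specification.

```python
# Monte Carlo check of an EM fit for exponential lifetimes under random
# censoring, with the Pauta (3-sigma) criterion used to screen estimates.
import numpy as np

rng = np.random.default_rng(0)

def em_exponential(t, censored, n_iter=200):
    """EM estimate of the exponential rate from randomly censored data."""
    lam = 1.0 / t.mean()                        # crude starting value
    for _ in range(n_iter):
        # E-step: a censored unit's expected lifetime is c + 1/lam
        # (memorylessness of the exponential distribution).
        t_full = np.where(censored, t + 1.0 / lam, t)
        # M-step: complete-data MLE of the rate.
        lam = len(t) / t_full.sum()
    return lam

def one_replication(n, lam_true=0.02, cens_rate=0.01):
    life = rng.exponential(1.0 / lam_true, n)   # true failure times
    cens = rng.exponential(1.0 / cens_rate, n)  # random censoring times
    t = np.minimum(life, cens)
    return em_exponential(t, censored=cens < life)

for n in (10, 20, 50):
    est = np.array([one_replication(n) for _ in range(1000)])
    # Pauta criterion: keep estimates within 3 sigma of the sample mean.
    ok = np.abs(est - est.mean()) <= 3.0 * est.std()
    # "Validity" here: share of retained estimates within 20% of the truth
    # (an illustrative tolerance, not taken from the paper).
    valid = np.mean(np.abs(est[ok] - 0.02) / 0.02 <= 0.2)
    print(f"n={n:3d}  validity={valid:.2f}")
```

Under these assumptions the validity share rises sharply with the sample size, which is the qualitative behavior the abstract reports.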
Elucidating the mechanistic underpinnings of genetic associations with complex traits requires formally characterizing and testing associated cell- and tissue-specific expression profiles. New opportunities exist to bolster this investigation with the growing number of large publicly available omics-level data resources. Herein, we describe a fully likelihood-based strategy for leveraging external resources in the setting where expression profiles are partially or fully unobserved in a genetic association study. A general framework is presented to accommodate multiple data types, and strategies for implementation using existing software packages are described. The method is applied to an investigation of the genetics of evoked inflammatory response in cardiovascular disease research. Simulation studies suggest appropriate type-I error control and power gains compared to single regression imputation, the most commonly applied practice in this setting.
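The contrast between a full-likelihood analysis and single regression imputation can be sketched as follows. All model choices here are assumptions for illustration (Gaussian models; G a genotype dosage, X an expression value observed for only part of the sample, Y a trait), not the paper's specification.

```python
# Full-likelihood analysis with partially unobserved expression X,
# contrasted with single regression imputation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
G = rng.binomial(2, 0.3, n).astype(float)      # genotype dosage
X = 1.0 + 0.5 * G + rng.normal(0, 1, n)        # expression
Y = 2.0 + 0.8 * X + rng.normal(0, 1, n)        # trait
obs = rng.random(n) < 0.4                       # X observed for 40% only

def negloglik(p):
    a, b, sx, c, d, sy = p
    sx, sy = abs(sx), abs(sy)
    # Complete cases contribute f(X|G) * f(Y|X).
    ll = norm.logpdf(X[obs], a + b * G[obs], sx).sum()
    ll += norm.logpdf(Y[obs], c + d * X[obs], sy).sum()
    # Missing-X cases: X integrates out, so Y|G is Gaussian with
    # mean c + d*(a + b*G) and variance sy^2 + d^2 * sx^2.
    m = c + d * (a + b * G[~obs])
    s = np.sqrt(sy**2 + d**2 * sx**2)
    ll += norm.logpdf(Y[~obs], m, s).sum()
    return -ll

x0 = [X[obs].mean(), 0.0, X[obs].std(), Y.mean(), 0.0, Y.std()]
fit = minimize(negloglik, x0, method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-6, "fatol": 1e-6})
print(f"full-likelihood expression effect: {fit.x[4]:.3f}")

# Single regression imputation for comparison: plug in E[X|G] and refit.
A = np.vstack([np.ones(obs.sum()), G[obs]]).T
coef, *_ = np.linalg.lstsq(A, X[obs], rcond=None)
X_imp = np.where(obs, X, coef[0] + coef[1] * G)
B = np.vstack([np.ones(n), X_imp]).T
print("imputation estimate:", np.linalg.lstsq(B, Y, rcond=None)[0][1].round(3))
```

The full likelihood propagates the uncertainty in the unobserved X, whereas the imputation route treats the predicted value as if it were observed, which is the source of the inferential deficiencies the simulations quantify.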
We consider data-generating structures that can be represented as a Markov-switching nonlinear autoregressive model with skew-symmetric innovations, in which switching between the states is controlled by a hidden Markov chain. We propose semi-parametric estimators for the nonlinear functions of the proposed model based on a maximum likelihood (ML) approach and study sufficient conditions for geometric ergodicity of the process. An expectation-maximization type optimization for obtaining the ML estimators is also presented. A simulation study and a real-world application illustrate and evaluate the proposed methodology.
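The E-step / M-step mechanics can be illustrated with a deliberately simplified stand-in: a two-regime linear AR(1) with Gaussian innovations replaces the paper's nonparametric regime functions and skew-symmetric innovations, so this sketch only shows the Baum-Welch-type structure of the fit.

```python
# EM (Baum-Welch type) skeleton for a two-regime Markov-switching AR(1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Simulate a two-regime switching AR(1).
T = 600
P_true = np.array([[0.95, 0.05], [0.10, 0.90]])
phi_true, sig_true = np.array([0.8, -0.5]), np.array([0.5, 1.0])
s, y = 0, [0.0]
for _ in range(1, T):
    s = rng.choice(2, p=P_true[s])
    y.append(phi_true[s] * y[-1] + rng.normal(0, sig_true[s]))
y = np.array(y)

phi, sig = np.array([0.1, -0.1]), np.array([1.0, 1.0])
P = np.full((2, 2), 0.5)
for _ in range(100):
    # Emission densities b[t, k] = f(y_t | y_{t-1}, regime k).
    b = norm.pdf(y[1:, None], phi * y[:-1, None], sig)       # (T-1, 2)
    n = len(b)
    # Scaled forward-backward recursion; the uniform initial state
    # distribution is held fixed for simplicity.
    alpha, beta, c = np.zeros((n, 2)), np.zeros((n, 2)), np.zeros(n)
    alpha[0] = 0.5 * b[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, n):
        alpha[t] = (alpha[t - 1] @ P) * b[t]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(n - 2, -1, -1):
        beta[t] = (P @ (b[t + 1] * beta[t + 1])) / c[t + 1]
    gam = alpha * beta                        # smoothed regime probabilities
    xi = alpha[:-1, :, None] * P * (b[1:] * beta[1:])[:, None, :] \
        / c[1:, None, None]
    # M-step: transition matrix and weighted least squares per regime.
    P = xi.sum(0) / xi.sum((0, 2))[:, None]
    for k in range(2):
        w = gam[:, k]
        phi[k] = (w * y[1:] * y[:-1]).sum() / (w * y[:-1] ** 2).sum()
        sig[k] = np.sqrt((w * (y[1:] - phi[k] * y[:-1]) ** 2).sum() / w.sum())

print("phi:", phi.round(2), "sigma:", sig.round(2))
```

In the semi-parametric setting of the paper, the weighted least squares M-step would be replaced by a weighted nonparametric regression per regime.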
The stress-strength parameter R = P(Y < X), as a reliability parameter, is considered in different statistical distributions. In the present paper, the stress-strength reliability is estimated based on progressively type-II censored samples. The maximum likelihood estimate of R, computed via the EM algorithm, and the Bayes estimate of R are obtained. Furthermore, we obtain the bootstrap confidence intervals, the HPD credible interval, and confidence intervals based on a generalized pivotal quantity for R. Additionally, the performance of the point estimators and confidence intervals is evaluated by a simulation study. Finally, the proposed methods are applied to a set of real data for illustrative purposes.
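A minimal sketch of the target quantity follows: estimating R = P(Y < X) with a percentile bootstrap interval. Complete exponential samples are assumed here for brevity; the paper works with progressively censored data, and its Bayes/HPD and generalized-pivotal intervals require the derivations given there.

```python
# Stress-strength reliability R = P(Y < X) with a percentile bootstrap CI.
import numpy as np

rng = np.random.default_rng(3)
x = rng.exponential(2.0, 80)     # strength sample X
y = rng.exponential(1.0, 60)     # stress sample Y

def r_hat(x, y):
    lam_x, lam_y = 1.0 / x.mean(), 1.0 / y.mean()
    # For exponential X and Y: P(Y < X) = lam_y / (lam_x + lam_y).
    return lam_y / (lam_x + lam_y)

boot = np.array([r_hat(rng.choice(x, len(x)), rng.choice(y, len(y)))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R_hat = {r_hat(x, y):.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```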
We propose a new survival model for lifetime data in the presence of a surviving fraction and obtain some of its properties. Its genesis is based on extensions of the promotion time cure model, in which an extra parameter controls the heterogeneity or dependence of an unobserved number of lifetimes. We construct a regression model to evaluate the effects of covariates on the cured fraction. We discuss inference aspects of the proposed model in a classical approach, where some maximum likelihood tools are explored. Further, an expectation-maximization algorithm is developed to calculate the maximum likelihood estimates of the model parameters. We also perform an empirical study of the likelihood ratio test in order to compare the promotion time cure model and the proposed model. We illustrate the usefulness of the new model by means of a colorectal cancer data set.
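The structure of such extensions can be sketched numerically. The promotion time cure model has population survival S(t) = exp(-theta * F(t)) with cure fraction exp(-theta); one common way an extra parameter eta enters is a negative binomial generalization, used below purely as an illustration (the paper's exact extension may differ).

```python
# Promotion time cure model versus a negative binomial extension with an
# extra heterogeneity parameter eta; F is taken exponential for illustration.
import numpy as np

def surv_promotion(t, theta, rate=1.0):
    """Promotion time cure model: S(t) = exp(-theta * F(t))."""
    F = 1.0 - np.exp(-rate * t)
    return np.exp(-theta * F)

def surv_extended(t, theta, eta, rate=1.0):
    """Extension: S(t) = (1 + eta*theta*F(t))^(-1/eta); eta -> 0 recovers
    the promotion time model."""
    F = 1.0 - np.exp(-rate * t)
    return (1.0 + eta * theta * F) ** (-1.0 / eta)

t = np.array([0.5, 1.0, 2.0, 5.0, 20.0])
theta = 1.5
print("promotion :", surv_promotion(t, theta).round(4))
print("eta = 0.01:", surv_extended(t, theta, 0.01).round(4))  # ~ promotion
print("eta = 2.0 :", surv_extended(t, theta, 2.0).round(4))
# Cured fractions are the long-run survival probabilities:
print("cure fractions:", np.exp(-theta).round(4),
      round((1 + 2.0 * theta) ** (-1 / 2.0), 4))
```

A covariate regression on the cured fraction then typically links theta to covariates, e.g. theta = exp(x'beta), so that the cure probability varies across subjects.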
Survival analysis is used in the medical field to identify the effect of predictive variables on the time to a specific event. Generally, not all variation in survival time can be explained by observed covariates. The effect of unobserved variables on the risk of a patient is called frailty. In multicenter studies, the unobserved center effect can induce frailty in its patients, which can lead to selection bias over time when ignored. For this reason, it is common practice in multicenter studies to include a random frailty term modeling the center effect. In a more complex event structure, more than one type of event is possible. Independent frailty variables representing the center effect can be incorporated in the model for each competing event. However, in the medical context, events representing disease progression are likely related, and this correlation is missed when the frailties are assumed independent. In this work, an additive gamma frailty model is proposed to account for correlation between frailties in a competing risks model, with frailties modeled at the center level. Correlation indicates a common center effect on both events and measures how closely the risks are related. Estimation of the model using the expectation-maximization algorithm is illustrated. The model is applied to a data set from a multicenter clinical trial on breast cancer from the European Organisation for Research and Treatment of Cancer (EORTC trial 10854). Hospitals are compared by employing empirical Bayes estimation together with corresponding confidence intervals.
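The core additive construction can be illustrated by simulation: two event-specific frailties share a common gamma component, which induces the positive correlation described above. This is a minimal sketch with made-up shape parameters; the paper's parametrization (unit-mean frailties embedded in competing-risks hazards, estimated by EM) is richer.

```python
# Additive gamma frailties: Z1 = Y0 + Y1 and Z2 = Y0 + Y2 share the
# component Y0, so the center effects on the two competing events correlate.
import numpy as np

rng = np.random.default_rng(4)
k0, k1, k2, rate = 0.5, 0.5, 0.8, 1.0    # gamma shapes and common rate

n_centers = 10000
y0 = rng.gamma(k0, 1 / rate, n_centers)        # shared component
z1 = y0 + rng.gamma(k1, 1 / rate, n_centers)   # frailty for event 1
z2 = y0 + rng.gamma(k2, 1 / rate, n_centers)   # frailty for event 2

# With a common rate, corr(Z1, Z2) = k0 / sqrt((k0 + k1) * (k0 + k2)).
theo = k0 / np.sqrt((k0 + k1) * (k0 + k2))
print("empirical corr  :", np.corrcoef(z1, z2)[0, 1].round(3))
print("theoretical corr:", round(theo, 3))
# Each patient's event-k hazard at a center is multiplied by Z_k, so a
# large shared Y0 raises the risk of both competing events at that center.
```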
The estimation of the parameters of a continuous-time Markov chain from discrete-time observations, also known as the embedding problem for Markov chains, plays a particularly important role in the modeling of credit rating transitions. This missing-data problem boils down to a latent variable setting, and thus maximum likelihood estimation is usually conducted using the expectation-maximization (EM) algorithm. We illustrate that the EM algorithm is likely to get stuck in local maxima of the likelihood function in this specific problem setting and adapt a stochastic approximation simulated annealing scheme (SASEM) as well as a genetic algorithm (GA) to combat this issue. Beyond that, our main contribution is to extend our GA method by a rejection sampling scheme, which allows one to derive stochastically monotone maximum likelihood estimates and thereby obtain proper (non-crossing) multi-year probabilities of default. We advocate this procedure because direct constrained optimization of the likelihood function is not numerically stable owing to the large number of side conditions. Furthermore, the monotonicity constraint enables one to combine structural knowledge of the ordinality of credit ratings with real-life data into a statistical estimator, which has a stabilizing effect on far off-diagonal generator matrix elements. We illustrate our methods on Standard & Poor's credit rating data as well as a simulation study and benchmark our novel procedure against an existing smoothing algorithm.
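The objects involved can be sketched directly: a generator matrix Q for a small rating scale, multi-year transition matrices exp(t*Q), and the non-crossing check on multi-year default probabilities that the constrained procedure enforces. The Q below is made up for illustration and is not fitted to S&P data.

```python
# Generator matrix, multi-year transition matrices, and the monotone
# ("non-crossing") default probability check for a toy rating scale.
import numpy as np
from scipy.linalg import expm

# States: A, B, C, D(efault). Off-diagonal rates are nonnegative and each
# row sums to zero; default is absorbing.
Q = np.array([
    [-0.10,  0.08,  0.015, 0.005],
    [ 0.05, -0.20,  0.12,  0.03 ],
    [ 0.01,  0.10, -0.31,  0.20 ],
    [ 0.00,  0.00,  0.00,  0.00 ],
])

horizons = [1, 2, 5, 10]
# P(default by t | start in rating) is the last column of exp(t*Q).
pd_curves = np.array([expm(t * Q)[:3, 3] for t in horizons])

print("PD by horizon (rows = years, cols = ratings A, B, C):")
print(pd_curves.round(4))

# Non-crossing: at every horizon, a worse rating must carry a default
# probability at least as high as any better rating.
print("monotone in rating at all horizons:",
      bool(np.all(np.diff(pd_curves, axis=1) >= 0)))
```

A rejection sampling layer inside a GA, as the paper proposes, would simply discard candidate generators whose implied PD curves fail this check.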
In logistic regression with nonignorable missing responses, Ibrahim and Lipsitz proposed a method for estimating regression parameters. It is known that the regression estimates obtained by this method are biased when the sample size is small. A further complexity arises when the iterative estimation process encounters separation in estimating the regression coefficients. In this article, we propose a method to improve the estimation of the regression coefficients. In our likelihood-based method, we penalize the likelihood by multiplying it by a noninformative Jeffreys prior as a penalty term. The proposed method reduces bias and is able to handle the issue of separation. Simulation results show substantial bias reduction for the proposed method compared to the existing method. Analyses using real-world data also support the simulation findings. An R package called brlrmr implementing the proposed method and the Ibrahim and Lipsitz method has been developed.
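The Jeffreys-prior penalty is the same device used in Firth-type logistic regression, sketched below: penalizing the likelihood by |I(beta)|^(1/2) both reduces small-sample bias and yields finite estimates under separation. This illustrates the penalty itself, not the brlrmr package's treatment of nonignorable missing responses.

```python
# Firth-type (Jeffreys-penalized) logistic regression via the modified
# score equations; finite estimates even under complete separation.
import numpy as np

def firth_logistic(X, y, n_iter=50, tol=1e-8):
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-(X @ beta)))
        W = mu * (1.0 - mu)
        XtW = X.T * W
        info = XtW @ X                              # Fisher information
        # Hat diagonals h_i of W^(1/2) X (X'WX)^(-1) X' W^(1/2).
        h = np.einsum("ij,ji->i", X, np.linalg.solve(info, XtW))
        # Firth's modified score: sum_i (y_i - mu_i + h_i*(1/2 - mu_i)) x_i.
        score = X.T @ (y - mu + h * (0.5 - mu))
        step = np.linalg.solve(info, score)
        beta += step
        if np.max(np.abs(step)) < tol:
            break
    return beta

# Completely separated toy data: ordinary ML diverges, the penalized
# estimate stays finite.
X = np.column_stack([np.ones(8), [-3, -2, -2, -1, 1, 2, 2, 3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)
print("Firth estimate:", firth_logistic(X, y).round(3))
```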
In carcinogenicity experiments with animals where the tumor is not palpable, it is common to observe only the time of death of the animal, the cause of death (the tumor or another independent cause, such as sacrifice), and whether the tumor was present at the time of death. The last two indicator variables are evaluated after an autopsy. Defining the non-negative variables T1 (time of tumor onset), T2 (time of death from the tumor), and C (time of death from an unrelated cause), we observe (Y, Δ1, Δ2), where Y = min{T2, C}, Δ1 = 1{T1 ≤ C}, and Δ2 = 1{T2 ≤ C}. The random variables T1 and T2 are independent of C and have a joint distribution such that P(T1 ≤ T2) = 1. Some authors call this model a "survival-sacrifice model". [20] (hereafter denoted LJP (1997)) proposed a weighted least squares estimator for F1 (the marginal distribution function of T1), using the Kaplan-Meier estimator of F2 (the marginal distribution function of T2). The authors claimed that their estimator is more efficient than the maximum likelihood estimator (MLE) of F1 and that the Kaplan-Meier estimator is more efficient than the MLE of F2. However, we show that the MLE of F1 was not computed correctly, and that the (claimed) MLE of F1 is even undefined in the case of active constraints. In our simulation study we used a primal-dual interior point algorithm to obtain the true MLE of F1. The results show a better performance of the MLE of F1 over the weighted least squares estimator of LJP (1997) at points where F1 is close to F2. Moreover, application to the model used in the simulation study of LJP (1997) showed smaller variances of the MLE-based estimators of the first and second moments of both F1 and F2, for sample sizes from 100 up to 5000, in comparison to the estimates based on the weighted least squares estimator for F1 proposed in LJP (1997) and the Kaplan-Meier estimator for F2. R scripts are provided for computing the estimators.
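The data structure (Y, Δ1, Δ2) and the Kaplan-Meier estimator of F2 can be sketched as follows. The exponential distributions are illustrative assumptions, and the paper's central object, the true MLE of F1 under the pointwise constraint F1 ≥ F2 computed by a primal-dual interior point method, is not reproduced here.

```python
# Simulating survival-sacrifice data and computing the Kaplan-Meier
# estimator of F2 from (Y, Delta2).
import numpy as np

rng = np.random.default_rng(5)
n = 1000
t1 = rng.exponential(1.0, n)        # T1: tumor onset
t2 = t1 + rng.exponential(2.0, n)   # T2: death from tumor; P(T1 <= T2) = 1
c = rng.exponential(6.0, n)         # C: death from an unrelated cause
y = np.minimum(t2, c)               # Y = min{T2, C}
d1 = (t1 <= c).astype(int)          # Delta1: tumor present at death
d2 = (t2 <= c).astype(int)          # Delta2: death caused by the tumor

def kaplan_meier(y, event):
    """Kaplan-Meier estimate of the survival function 1 - F2 (no ties)."""
    order = np.argsort(y)
    y, event = y[order], event[order]
    at_risk = np.arange(len(y), 0, -1)       # subjects still at risk
    surv = np.cumprod(1.0 - event / at_risk)
    return y, surv

times, surv = kaplan_meier(y, d2)
for t in (1.0, 2.0, 4.0):
    i = np.searchsorted(times, t) - 1
    f2 = 1.0 - surv[i] if i >= 0 else 0.0
    print(f"Kaplan-Meier estimate of F2({t}) = {f2:.3f}")
print(f"tumor present at death in {d1.mean():.0%} of animals")
```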
Tag cardinality estimation is one of the most crucial issues in radio frequency identification technology. The issue, however, usually faces challenges in wireless fading environments due to the presence of the so-called capture effect (CE) and detection error (DE). The aim of this letter is to provide an efficient and accurate estimation method that copes with CE and DE using the expectation-maximization algorithm and the standard Aloha-based protocol. We show that the proposed method gives more accurate estimates than a conventional one. As a result, the Aloha frame size used for the tag identification process can also be selected optimally, improving the identification efficiency. Computer simulations confirm the merit of the proposed method.
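The basic estimation problem can be sketched without the fading-channel complications: in framed slotted Aloha the reader observes counts of empty, singleton, and collision slots, and picks the tag count maximizing an approximate multinomial likelihood. Capture effect and detection error, which the letter handles with an EM algorithm, are deliberately ignored in this simplified sketch.

```python
# Maximum likelihood tag cardinality estimation from framed slotted Aloha
# slot statistics (empty / singleton / collision counts).
import numpy as np

rng = np.random.default_rng(6)
L, n_true = 128, 200                     # frame size, true tag count
slots = rng.integers(0, L, n_true)       # each tag picks a slot uniformly
counts = np.bincount(slots, minlength=L)
n_e = int(np.sum(counts == 0))           # empty slots
n_s = int(np.sum(counts == 1))           # singleton slots
n_c = L - n_e - n_s                      # collision slots

def loglik(n):
    # A given slot holds Binomial(n, 1/L) tags, so:
    p_e = (1 - 1 / L) ** n
    p_s = n / L * (1 - 1 / L) ** (n - 1)
    p_c = max(1.0 - p_e - p_s, 1e-12)
    return n_e * np.log(p_e) + n_s * np.log(p_s) + n_c * np.log(p_c)

cand = np.arange(1, 1000)
n_hat = cand[np.argmax([loglik(n) for n in cand])]
print(f"observed (empty, single, collision) = ({n_e}, {n_s}, {n_c})")
print(f"estimated tag count: {n_hat} (true: {n_true})")
```

With CE and DE, some collision slots are read as singletons and some occupied slots as empty, which biases these counts; the letter's EM algorithm treats the true slot occupancies as latent variables to correct for this.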