In this paper, we propose a robust parameter estimation algorithm for channel-coded systems based on the low-density parity-check (LDPC) code over fading channels with impulse noise. The estimated parameters are then used to generate bit log-likelihood ratios (LLRs) for a soft-input LDPC decoder. The expectation-maximization (EM) algorithm is used to estimate the parameters, including the channel gain and the parameters of the Bernoulli-Gaussian (B-G) impulse noise model. The parameters can be estimated accurately, and the average number of iterations of the proposed algorithm remains acceptable. Simulation results show that, over a wide range of impulse noise power, the proposed algorithm approaches the optimal performance under different Rician channel factors and even under Middleton class-A (M-CA) impulse noise models.
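As a rough illustration of the estimation step, the sketch below fits only the Bernoulli-Gaussian noise parameters by EM from a vector of noise samples; the joint channel-gain estimation and the decoder-side LLR generation described in the paper are not reproduced, and the names `p`, `sg2`, `si2` (impulse probability, background and impulse variances) are illustrative.

```python
import numpy as np
from scipy.stats import norm

def em_bernoulli_gaussian(noise, n_iter=100, tol=1e-8):
    """EM fit of a Bernoulli-Gaussian noise model: with probability p a sample
    carries an extra impulse term, so its variance is sg2 + si2 instead of sg2."""
    noise = np.asarray(noise, float)
    p, sg2, si2 = 0.1, np.var(noise) / 2, np.var(noise)   # rough initial guesses
    for _ in range(n_iter):
        # E-step: responsibility that each sample was hit by an impulse
        f_imp = p * norm.pdf(noise, scale=np.sqrt(sg2 + si2))
        f_bg = (1 - p) * norm.pdf(noise, scale=np.sqrt(sg2))
        r = f_imp / (f_imp + f_bg + 1e-300)
        # M-step: mixing probability and the two component second moments
        p_new = r.mean()
        v_bg = np.sum((1 - r) * noise**2) / np.sum(1 - r)   # background variance
        v_imp = np.sum(r * noise**2) / np.sum(r)            # background + impulse
        sg2_new, si2_new = v_bg, max(v_imp - v_bg, 1e-12)
        if abs(p_new - p) + abs(sg2_new - sg2) + abs(si2_new - si2) < tol:
            p, sg2, si2 = p_new, sg2_new, si2_new
            break
        p, sg2, si2 = p_new, sg2_new, si2_new
    return p, sg2, si2
```

For BPSK with channel gain h, one would then obtain the bit LLR for a received sample y as log f(y - h) - log f(y + h), with f the fitted two-component mixture density.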
In this paper, we propose a method to model the relationship between degradation and failure time for a simple step-stress test where the underlying degradation path is linear and different causes of failure are possible. It is assumed that the intensity function depends only on the degradation value. No assumptions are made about the distribution of the failure times. A simple step-stress test is used to induce failure experimentally and a tampered failure rate model is proposed to describe the effect of the changing stress on the intensities. We assume that some of the products that fail during the test have a cause of failure that is only known to belong to a certain subset of all possible failures. This case is known as masking. In the presence of masking, the maximum likelihood estimates of the model parameters are obtained through the expectation-maximization algorithm by treating the causes of failure as missing values. The effect of incomplete information on the estimation of parameters is studied through a Monte-Carlo simulation. Finally, a real-world example is analysed to illustrate the application of the proposed methods.
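A minimal sketch of the masking idea is given below, under strong simplifying assumptions not made in the paper: constant cause-specific hazards (no step-stress change point or degradation-dependent intensity) and no censoring. It only shows how the E-step distributes a masked failure over its candidate causes and how the M-step turns expected counts into rate estimates.

```python
import numpy as np

def em_masked_exponential(times, candidate_sets, n_causes, n_iter=500, tol=1e-10):
    """EM for constant cause-specific hazards lambda_j when, for some failed units,
    the cause is only known to lie in a candidate set (masking).  All units are
    assumed to fail during the test (no censoring)."""
    times = np.asarray(times, float)
    total_time = times.sum()                      # total exposure
    lam = np.full(n_causes, len(times) / (n_causes * total_time))
    for _ in range(n_iter):
        expected = np.zeros(n_causes)
        for S in candidate_sets:                  # one candidate set per failed unit
            S = list(S)
            expected[S] += lam[S] / lam[S].sum()  # E-step: P(cause = j | cause in S)
        new_lam = expected / total_time           # M-step: expected counts / exposure
        if np.max(np.abs(new_lam - lam)) < tol:
            lam = new_lam
            break
        lam = new_lam
    return lam
```

A unit whose failure cause is fully observed simply enters with a singleton candidate set.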
We consider robust estimation of wrapped models for multivariate circular data, that is, points on the surface of a p-torus, based on the weighted likelihood methodology. Robust model fitting is achieved by a set of weighted likelihood estimating equations, based on the computation of data-dependent weights aimed at down-weighting anomalous values, such as unexpected directions that do not share the main pattern of the bulk of the data. Weighted likelihood estimating equations with weights evaluated on the torus or obtained after unwrapping the data onto the Euclidean space are proposed and compared. Asymptotic properties and robustness features of the estimators under study are established, whereas their finite-sample behavior is investigated through Monte Carlo numerical experiments and real data examples.
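The sketch below is a rough, univariate illustration of the "weights obtained after unwrapping the data onto the Euclidean space" idea: it alternates unwrapping each angle given the current fit with downweighted location/scale updates. Note that Huber-type weights are used here in place of the Pearson-residual-based weights of the weighted likelihood methodology, so this is not the paper's estimator.

```python
import numpy as np

def robust_wrapped_normal(theta, c=1.345, n_iter=100, tol=1e-8):
    """Robust fit of a univariate wrapped normal: alternate (i) unwrapping each angle
    onto the real line given the current fit and (ii) Huber-weighted location/scale
    updates that downweight directions far from the bulk of the data."""
    theta = np.mod(np.asarray(theta, float), 2 * np.pi)
    mu = np.angle(np.exp(1j * theta).mean())      # circular-mean starting value
    sigma = 1.0
    for _ in range(n_iter):
        j = np.round((mu - theta) / (2 * np.pi))  # best wrapping index per point
        y = theta + 2 * np.pi * j                 # unwrapped observations
        z = (y - mu) / sigma
        w = np.minimum(1.0, c / np.maximum(np.abs(z), 1e-12))
        mu_new = np.sum(w * y) / np.sum(w)
        sigma_new = np.sqrt(np.sum(w * (y - mu_new) ** 2) / np.sum(w))
        if abs(mu_new - mu) + abs(sigma_new - sigma) < tol:
            mu, sigma = mu_new, sigma_new
            break
        mu, sigma = mu_new, sigma_new
    return np.mod(mu, 2 * np.pi), sigma
```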
This paper illustrates the use of the mixture of experts (ME) network structure to guide model selection for the diagnosis of two subtypes of adult hydrocephalus (normal-pressure hydrocephalus, NPH, and aqueductal stenosis, AS). The ME is a modular neural network architecture for supervised learning. The expectation-maximization (EM) algorithm was used to train the ME so that the learning process is decoupled in a manner that fits well with the modular structure. To improve classification accuracy, the outputs of the expert networks were combined by a gating network, trained simultaneously, in order to stochastically select the expert that performs best at solving the problem. The classifiers were trained on the defining features of NPH and AS (velocity and flux). Three types of records (normal, NPH and AS) were classified with an accuracy of 95.83% by the ME network structure. The ME network structure achieved accuracy rates higher than those of the stand-alone neural network models.
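A compact EM sketch of a mixture of experts with a softmax gating network is given below. The experts are multinomial logistic classifiers rather than the neural network experts used in the paper, labels are assumed to be coded 0..n_classes-1, and the function names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_mixture_of_experts(X, y, n_experts=2, n_iter=30, seed=0):
    """EM for a small mixture of experts: a softmax gating network plus one
    multinomial logistic classifier per expert (labels coded 0..n_classes-1)."""
    X, y = np.asarray(X, float), np.asarray(y, int)
    n = len(y)
    rng = np.random.default_rng(seed)
    r = rng.dirichlet(np.ones(n_experts), size=n)       # soft expert responsibilities
    gate = LogisticRegression(C=1e6, max_iter=1000)
    experts = [LogisticRegression(C=1e6, max_iter=1000) for _ in range(n_experts)]
    for _ in range(n_iter):
        # M-step (experts): weighted multinomial logistic fits
        for k in range(n_experts):
            experts[k].fit(X, y, sample_weight=r[:, k] + 1e-8)
        # M-step (gating): each row is replicated once per expert with label k and
        # weight r_ik, which maximizes the gating term of the EM Q-function
        Xg = np.repeat(X, n_experts, axis=0)
        yg = np.tile(np.arange(n_experts), n)
        gate.fit(Xg, yg, sample_weight=r.reshape(-1) + 1e-8)
        # E-step: responsibility of expert k for sample i is proportional to
        # (gating probability of k at x_i) x (expert k's probability of the true label)
        g = gate.predict_proba(X)
        lik = np.column_stack([experts[k].predict_proba(X)[np.arange(n), y]
                               for k in range(n_experts)])
        r = g * lik
        r /= r.sum(axis=1, keepdims=True)
    return gate, experts

def predict_proba_me(gate, experts, X):
    """Mixture prediction: gate-weighted average of the experts' class probabilities."""
    g = gate.predict_proba(X)
    return sum(g[:, [k]] * experts[k].predict_proba(X) for k in range(len(experts)))
```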
When an earthquake affects an inhabited area, a need for information immediately arises among the population. In general, this need is not immediately fulfilled by official channels, which usually release expert-validated information with delays of many minutes. Seismology is among the research fields where citizen science projects have succeeded in collecting useful scientific information. More recently, the ubiquity of smartphones is giving the opportunity to involve even more citizens. This paper focuses on seismic intensity reports collected through smartphone applications while an earthquake is occurring. The aim is to provide a framework for predicting and updating in near real-time the earthquake parameters that are useful for assessing the effect of the earthquake. This is done by using a multivariate space-time model based on time-varying coefficients and a spatial latent variable. As a case study, the model is applied to more than 200,000 seismic reports collected globally over a period of around 4 years by the Earthquake Network citizen science project. It is shown how the time-varying coefficients are needed to adapt the model to an information content that changes with time, and how the spatial latent variable can capture the local seismicity and the heterogeneity in people's response across the globe.
Variable selection or feature extraction is fundamental to identify important risk factors from a large number of covariates and has applications in many fields. In particular, its applications in failure time data analysis have been recognized and many methods have been proposed for right-censored data. However, developing relevant methods for variable selection becomes more challenging when one confronts interval censoring that often occurs in practice. In this article, motivated by an Alzheimer's disease study, we develop a variable selection method for interval-censored data with a general class of semiparametric transformation models. Specifically, a novel penalized expectation-maximization algorithm is developed to maximize the complex penalized likelihood function, which is shown to perform well in the finite-sample situation through a simulation study. The proposed methodology is then applied to the interval-censored data arising from the Alzheimer's disease study mentioned above.
We investigate public trust in society with a statistical model suitable for panel data. To this aim, trust levels measured from individual items recorded in a long-term survey provide key variables with an appropriate meaning. We account for the repeated and missing item responses by a hidden Markov model using longitudinal sampling weights. Since trust may be conceived as an unobservable psychological process of each person that fluctuates over time, we consider observed time-varying and time-fixed individual covariates. We estimate the model parameters by a weighted log-likelihood through the expectation-maximization algorithm, using data collected in Poland, an East-Central European country where the level of support for national and international institutions is among the lowest of the European member states. We apply a suitable algorithm based on the posterior probabilities to predict the best allocation of each individual to a latent typology. The proposed model is validated by generating out-of-sample responses, and we find reasonable predictive values. We disentangle four hidden groups of Poles: discouraged, with no opinion, with selective trust and with full public trust. We find that, over time, an increasing number of people trust only selected institutions.
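To make the weighted EM step concrete, here is a minimal weighted Baum-Welch sketch for a categorical-emission hidden Markov model in which each subject's contribution to the expected counts is multiplied by a longitudinal sampling weight. Missing item responses and covariates, which the paper's model handles, are omitted here.

```python
import numpy as np

def weighted_baum_welch(sequences, weights, n_states, n_symbols, n_iter=50, seed=0):
    """Weighted Baum-Welch (EM) for a categorical-emission HMM: each subject's
    contribution to the expected counts is multiplied by its sampling weight."""
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)
    for _ in range(n_iter):
        pi_num = np.zeros(n_states)
        A_num = np.zeros((n_states, n_states))
        B_num = np.zeros((n_states, n_symbols))
        for obs, w in zip(sequences, weights):
            T = len(obs)
            # scaled forward pass
            alpha = np.zeros((T, n_states))
            c = np.zeros(T)
            alpha[0] = pi * B[:, obs[0]]
            c[0] = alpha[0].sum()
            alpha[0] /= c[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
                c[t] = alpha[t].sum()
                alpha[t] /= c[t]
            # scaled backward pass
            beta = np.zeros((T, n_states))
            beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / c[t + 1]
            # expected state occupancies and transitions for this subject
            gamma = alpha * beta
            gamma /= gamma.sum(axis=1, keepdims=True)
            pi_num += w * gamma[0]
            for t in range(T - 1):
                num = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
                A_num += w * num / num.sum()
            for t in range(T):
                B_num[:, obs[t]] += w * gamma[t]
        # M-step: normalise the weighted expected counts
        pi = pi_num / pi_num.sum()
        A = A_num / A_num.sum(axis=1, keepdims=True)
        B = B_num / B_num.sum(axis=1, keepdims=True)
    return pi, A, B
```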
Given the very large amount of data obtained every day through population surveys, much new research could use this information instead of collecting new samples. Unfortunately, relevant data are often dispersed across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an expectation-maximization algorithm. Results show that, despite the lack of data, this procedure can perform better than standard matching procedures.
With the ever increasing deployment and usage of gigabit networks, traditional network anomaly detection-based Intrusion Detection Systems (IDS) have not scaled accordingly. Most, if not all, IDS assume the availability of complete and clean audit data. We contend that this assumption is not valid. Factors like noise, mobility of the nodes and the large amount of network traffic make it difficult to build a traffic profile of the network that is complete and immaculate for the purpose of anomaly detection. In this paper, we attempt to address these issues by presenting an anomaly detection scheme, called SCAN (Stochastic Clustering algorithm for Network Anomaly Detection), that has the capability to detect intrusions with high accuracy even with incomplete audit data. To address the threats posed by network-based denial-of-service attacks in high speed networks, SCAN consists of two modules: an anomaly detection module that is at the core of the design and an adaptive packet sampling scheme that intelligently samples packets to aid the anomaly detection module. The noteworthy features of SCAN include: (a) it intelligently samples the incoming network traffic to decrease the amount of audit data being sampled while retaining the intrinsic characteristics of the network traffic itself; (b) it computes the missing elements of the sampled audit data by utilizing an improved expectation-maximization (EM) algorithm-based clustering algorithm; and (c) it improves the speed of convergence of the clustering process by employing Bloom filters and data summaries.
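As an illustration of feature (b), the sketch below runs EM for a diagonal-covariance Gaussian mixture directly on records with missing entries (NaN), letting missing values enter through their conditional mean and variance; the adaptive packet sampling and the Bloom-filter data summaries of SCAN are not reproduced, and the function name is illustrative.

```python
import numpy as np

def em_gmm_missing(X, k, n_iter=100, seed=0):
    """EM for a diagonal-covariance Gaussian mixture that tolerates missing
    feature values (NaN): responsibilities use observed dimensions only, and
    missing entries enter the M-step through their conditional mean/variance."""
    X = np.asarray(X, float)
    n, d = X.shape
    obs = ~np.isnan(X)
    rng = np.random.default_rng(seed)
    w = np.full(k, 1.0 / k)
    mu = X[rng.choice(n, k, replace=False)]
    mu = np.where(np.isnan(mu), np.nanmean(X, axis=0), mu)
    var = np.tile(np.nanvar(X, axis=0) + 1e-6, (k, 1))
    for _ in range(n_iter):
        # E-step: log responsibilities from the observed coordinates only
        logr = np.zeros((n, k))
        for j in range(k):
            z = -0.5 * (np.log(2 * np.pi * var[j]) + (X - mu[j]) ** 2 / var[j])
            logr[:, j] = np.log(w[j]) + np.where(obs, z, 0.0).sum(axis=1)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)
        # M-step
        w = r.mean(axis=0)
        for j in range(k):
            nk = r[:, j].sum()
            xhat = np.where(obs, X, mu[j])            # conditional mean fill-in
            mu_new = (r[:, j, None] * xhat).sum(axis=0) / nk
            # observed dims: squared deviation; missing dims: old variance + mean shift
            sq = np.where(obs, (X - mu_new) ** 2, var[j] + (mu[j] - mu_new) ** 2)
            var[j] = (r[:, j, None] * sq).sum(axis=0) / nk + 1e-6
            mu[j] = mu_new
    return w, mu, var, r
```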
The generalized Pareto distribution plays a significant role in reliability research. This study concentrates on statistical inference for the generalized Pareto distribution utilizing progressively Type-II censored data. Estimation is performed by maximum likelihood through the expectation-maximization approach, and asymptotic confidence intervals are derived. Bayesian estimation is conducted using the Tierney and Kadane method alongside the Metropolis-Hastings algorithm, and highest posterior density credible intervals are obtained. Furthermore, Bayesian predictive intervals and future sample estimation are explored. To illustrate these inference techniques, a simulation study and a practical example are presented.
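A small sketch of the Metropolis-Hastings part is shown below for generalized Pareto data under progressive Type-II censoring, where `R[i]` is the number of units removed at the i-th observed failure. Flat priors on the shape and log-scale are an assumed, simplistic choice; the EM-based maximum likelihood step, the Tierney and Kadane approximation and the predictive intervals of the paper are not reproduced.

```python
import numpy as np
from scipy.stats import genpareto

def log_post(theta, x, R):
    """Log-posterior for GPD(shape k, scale s) under progressive Type-II censoring:
    likelihood prod f(x_i) * S(x_i)^{R_i}, with flat priors on (k, log s)."""
    k, log_s = theta
    s = np.exp(log_s)
    logpdf = genpareto.logpdf(x, c=k, scale=s)
    if not np.all(np.isfinite(logpdf)):           # proposal puts data outside support
        return -np.inf
    return np.sum(logpdf + R * genpareto.logsf(x, c=k, scale=s))

def metropolis_hastings(x, R, n_draws=20000, step=0.05, seed=0):
    """Random-walk Metropolis-Hastings on (shape, log scale)."""
    rng = np.random.default_rng(seed)
    x, R = np.asarray(x, float), np.asarray(R, float)
    theta = np.array([0.1, np.log(x.mean())])     # crude starting point
    lp = log_post(theta, x, R)
    draws = np.empty((n_draws, 2))
    for i in range(n_draws):
        prop = theta + step * rng.standard_normal(2)
        lp_prop = log_post(prop, x, R)
        if np.log(rng.uniform()) < lp_prop - lp:  # accept/reject
            theta, lp = prop, lp_prop
        draws[i] = theta
    return draws                                   # columns: shape, log scale
```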