In a pharmacokinetic reaction, a ligand may bind to different types of macromolecules, each with a different number of binding sites. Such reactions are frequently involved in the diagnosis of certain diseases. The Adair equation is widely used to model the reaction of biological macromolecules with a ligand. It relates the saturation ratio to the free ligand concentration at equilibrium and depends on the association constants of the chemical reaction, which have to be estimated. The main problem considered in this paper is the computation of optimal experimental designs for a mixture of Adair models, where different types of macromolecules are mixed in the experiment. The main contribution of this work is the derivation of the Fisher information matrix for a model with a mixture of probability distributions. Since this model no longer belongs to the exponential family, the expectation cannot be obtained analytically, and the computation of optimal designs through the information matrix cannot be done with traditional methods. In data analysis this expectation can be computed empirically from the data, but in experimental design the data are not yet available when the experiment is being planned. Assuming nominal values of the parameters, as is usually done for nonlinear models, simulations were performed for each point in a suitably discretized design space; the number of simulations and the sample size used in each simulation were both tuned empirically. A sensitivity analysis was performed for different possible true values of the parameters. Since this entailed a considerable computational burden, fractional designs were used to cover a reasonable neighborhood of the nominal parameter values. In order to display the results in a way that is friendly to the practitioner, "safe" neighborhoods of the optimal designs are provided. (C) 2016 Elsevier B.V. All rights reserved.
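As a small illustration of the model underlying this design problem (not the paper's implementation), the Python sketch below evaluates the Adair saturation ratio for a single macromolecule type and for a mixture of types; the association constants, mixture weights, and concentration grid are hypothetical.

```python
import numpy as np

def adair_saturation(x, K):
    """Adair saturation ratio for a macromolecule whose stepwise
    association constants are K = [K1, ..., Kn] (hypothetical values)."""
    beta = np.cumprod(K)                      # cumulative constants beta_i = K1*...*Ki
    i = np.arange(1, len(K) + 1)
    num = np.sum(i * beta * x[:, None] ** i, axis=1)
    den = len(K) * (1.0 + np.sum(beta * x[:, None] ** i, axis=1))
    return num / den

def mixture_saturation(x, Ks, weights):
    """Saturation of a mixture of macromolecule types, weighted by their proportions."""
    return sum(w * adair_saturation(x, K) for K, w in zip(Ks, weights))

# Hypothetical two-type mixture: one type with 2 binding sites, one with 4.
x = np.linspace(0.01, 5.0, 50)                # free ligand concentration grid
y = mixture_saturation(x, Ks=[[1.2, 0.8], [0.9, 0.7, 0.5, 0.3]], weights=[0.6, 0.4])
```

Under nominal parameter values, repeated simulation of responses at each candidate point of a discretized design space, as described in the abstract, would then approximate the expectation defining the information matrix.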
This paper presents a framework that relies on the linear dynamical Kalman filter to produce reliable predictions of solar irradiance and photovoltaic (PV) production. The method is convenient for real-time forecasting, and we describe its use for different time horizons, between one minute and one hour ahead. The dataset consists of measurements of solar irradiance and PV power production collected in a sub-tropical zone, Guadeloupe. In this zone, fluctuating meteorological conditions can occur, with highly variable atmospheric events having a severe impact on the solar irradiance and the PV power. Under such conditions, heterogeneous ramp events are observed, making these sources of energy difficult to control and manage. The present work aims to build a suitable statistical method, based on Bayesian inference and state-space modeling, able to predict the evolution of solar radiation and PV production. We develop a forecasting method based on the Kalman filter combined with a robust parameter estimation procedure built on an autoregressive model or on an expectation-maximization algorithm. The model can run with univariate or multivariate data according to their availability. It is used here to forecast the univariate solar and PV data, and also PV production with exogenous data such as cloud cover and air temperature. The accuracy of this technique is studied with a set of performance criteria including the root mean square error and the mean bias error. We compare the results of the different tests, from one minute to one hour ahead, to the simple persistence model. The performance of our technique far exceeds that of the traditional persistence model, with a skill score improvement of around 39% for PV production and 31% for GHI for the one-hour-ahead forecast. (C) 2016 Elsevier Ltd. All rights reserved.
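A minimal, univariate sketch of the filtering step is given below, assuming a scalar AR(1) state with hypothetical coefficients a, q, and r (in the paper these are estimated with an autoregressive fit or an EM procedure, and the model can also be multivariate); the skill score at the end compares against the persistence baseline mentioned in the abstract.

```python
import numpy as np

def kalman_one_step(y, a, q, r):
    """One-step-ahead forecasts for a scalar AR(1) state-space model
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,          v_t ~ N(0, r)
    Returns the sequence of predictions E[y_t | y_1..y_{t-1}]."""
    x, P = y[0], 1.0                      # crude initialization
    preds = []
    for obs in y:
        # predict step
        x, P = a * x, a * a * P + q
        preds.append(x)                   # forecast of the next observation
        # update step with the new measurement
        K = P / (P + r)
        x, P = x + K * (obs - x), (1.0 - K) * P
    return np.array(preds)

# Hypothetical irradiance-like series; a, q, r would normally be estimated.
rng = np.random.default_rng(0)
y = 500 + np.cumsum(rng.normal(0, 5, 200))
forecast = kalman_one_step(y, a=1.0, q=1.0, r=25.0)
rmse = np.sqrt(np.mean((forecast[1:] - y[1:]) ** 2))
rmse_persist = np.sqrt(np.mean((y[:-1] - y[1:]) ** 2))
skill = 1.0 - rmse / rmse_persist         # > 0 means improvement over persistence
```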
The purpose of this work is to propose a new EM algorithm for doubly censored data subject to nonparametric moment constraints. Empirical likelihood confidence regions are constructed for one or two samples of doubly censored data. It is shown that the corresponding empirical likelihood ratio converges to a standard chi-square random variable. Simulations are carried out to assess the finite-sample performance of the proposed method. For illustration purposes, the proposed method is applied to a real data set with two samples. (C) 2015 Elsevier B.V. All rights reserved.
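The abstract does not spell out the algorithm; as a generic, much simpler illustration of EM under double censoring (observations clipped below a left limit and above a right limit), the sketch below fits a parametric normal model with known censoring limits, which is far from the paper's nonparametric, moment-constrained setting.

```python
import numpy as np
from scipy.stats import truncnorm

def em_doubly_censored_normal(y, left, right, n_iter=200):
    """EM for N(mu, sigma^2) when values below `left` are recorded as `left`
    and values above `right` are recorded as `right` (fixed censoring limits)."""
    lo, hi = y <= left, y >= right
    obs = ~(lo | hi)
    mu, sigma = np.mean(y), np.std(y)
    for _ in range(n_iter):
        # E-step: conditional moments of the censored values under the current fit
        a_lo, a_hi = (left - mu) / sigma, (right - mu) / sigma
        m_lo = truncnorm.mean(-np.inf, a_lo, loc=mu, scale=sigma)
        v_lo = truncnorm.var(-np.inf, a_lo, loc=mu, scale=sigma)
        m_hi = truncnorm.mean(a_hi, np.inf, loc=mu, scale=sigma)
        v_hi = truncnorm.var(a_hi, np.inf, loc=mu, scale=sigma)
        ex = np.where(obs, y, np.where(lo, m_lo, m_hi))
        ex2 = np.where(obs, y ** 2, np.where(lo, v_lo + m_lo ** 2, v_hi + m_hi ** 2))
        # M-step: closed-form normal updates
        mu = ex.mean()
        sigma = np.sqrt(max(ex2.mean() - mu ** 2, 1e-12))
    return mu, sigma

# Hypothetical example: true N(0, 1) data censored at -1 and 1.5
rng = np.random.default_rng(1)
y = np.clip(rng.normal(0, 1, 500), -1.0, 1.5)
mu_hat, sigma_hat = em_doubly_censored_normal(y, -1.0, 1.5)
```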
In this paper, statistical inference for the unknown parameters of a Burr Type III (BIII) distribution based on a unified hybrid censored sample is studied. The maximum likelihood estimators of the unknown parameters are obtained using the expectation-maximization algorithm. Since the Bayes estimators cannot be obtained in explicit form, Lindley's approximation and the Markov chain Monte Carlo (MCMC) technique are used to compute them. Further, highest posterior density credible intervals of the unknown parameters based on the MCMC samples are provided. A new model selection test is developed for discriminating between two competing models under the unified hybrid censoring scheme. Finally, the potential of the BIII distribution for analyzing real data is illustrated using the fracture toughness data of three different materials, namely silicon nitride (Si3N4), zirconium dioxide (ZrO2), and sialon (Si6-xAlxOxN8-x). For these data sets, the BIII distribution provides a better fit than the Weibull distribution, which is frequently used in fracture toughness data analysis.
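For reference, the BIII cumulative distribution function is F(x) = (1 + x^(-c))^(-k) for x > 0. The sketch below is a hedged illustration of likelihood fitting for a complete (uncensored) sample via direct numerical optimization, not the paper's EM under unified hybrid censoring; the sample size and true parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def burr3_logpdf(x, c, k):
    """Log density of the Burr III distribution with cdf F(x) = (1 + x**(-c))**(-k)."""
    return (np.log(c) + np.log(k) - (c + 1) * np.log(x)
            - (k + 1) * np.log1p(x ** (-c)))

def burr3_mle(x):
    """Maximum likelihood estimates of (c, k) from a complete sample."""
    nll = lambda p: -np.sum(burr3_logpdf(x, np.exp(p[0]), np.exp(p[1])))
    res = minimize(nll, x0=[0.0, 0.0], method="Nelder-Mead")   # optimize on the log scale
    return np.exp(res.x)                                        # back-transform to (c, k)

# Hypothetical data via the inverse cdf: x = (u**(-1/k) - 1)**(-1/c)
rng = np.random.default_rng(2)
u = rng.uniform(size=300)
c_true, k_true = 4.0, 1.5
x = (u ** (-1.0 / k_true) - 1.0) ** (-1.0 / c_true)
c_hat, k_hat = burr3_mle(x)
```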
Models and algorithms for nonparametric estimation of finite multivariate mixtures have recently been proposed, where it is usually assumed that the coordinates are independent conditional on the subpopulation from which each observation is drawn; in these models the dependence structure therefore comes only from the mixture. This assumption is relaxed here, allowing for independent multivariate blocks of coordinates, conditional on the subpopulation from which each observation is drawn. Apart from this block structure, the density functions of the blocks are completely multivariate and nonparametric. An EM-like algorithm for this model is proposed, and strategies for selecting the bandwidth matrix involved in its nonparametric estimation step are derived. The performance of the algorithm is evaluated through several numerical simulations. The new model and algorithm are applied to a real dataset of reasonably large dimension to illustrate their potential from a model-based, unsupervised clustering perspective. (C) 2016 Elsevier B.V. All rights reserved.
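A heavily simplified sketch of such an EM-like step is shown below, with fully independent univariate coordinates (blocks of size one) and weighted Gaussian kernel density estimates per component; the bandwidth is fixed rather than chosen by the bandwidth-selection strategies derived in the paper, and all data are hypothetical.

```python
import numpy as np

def np_em(X, n_comp=2, h=0.3, n_iter=50, seed=0):
    """EM-like algorithm for a nonparametric mixture with (here) fully
    independent coordinates given the component; h is a fixed bandwidth."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    W = rng.dirichlet(np.ones(n_comp), size=n)          # soft assignments
    for _ in range(n_iter):
        lam = W.mean(axis=0)                            # mixing proportions
        dens = np.ones((n, n_comp))
        for c in range(d):
            # Gaussian kernel matrix between all pairs of values of coordinate c
            K = np.exp(-0.5 * ((X[:, c, None] - X[None, :, c]) / h) ** 2) / (h * np.sqrt(2 * np.pi))
            for j in range(n_comp):
                # weighted kernel density estimate of coordinate c in component j
                dens[:, j] *= K @ W[:, j] / W[:, j].sum()
        W = lam * dens
        W /= W.sum(axis=1, keepdims=True)               # new responsibilities
    return W, W.mean(axis=0)

# Hypothetical two-component data whose dependence comes only from the mixture
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (150, 3)), rng.normal(3, 1, (150, 3))])
W, lam = np_em(X)
labels = W.argmax(axis=1)                               # model-based clustering of the observations
```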
In this paper, a maximum likelihood approach is proposed for analyzing panel count data under the gamma frailty non-homogeneous Poisson process model. The approach allows one to estimate the baseline mean function and the regression parameters jointly while taking the within-subject correlation into account; this correlation is quantified explicitly by Pearson's correlation coefficient. Monotone splines are adopted to approximate the unspecified nondecreasing baseline mean function in the model. An expectation-maximization (EM) algorithm is derived to facilitate the computation by exploiting a data augmentation based on Poisson latent variables. The EM algorithm is robust to initial values, easy to implement, converges quickly, and provides closed-form variance estimates. It can also be applied to the non-homogeneous Poisson model without frailty. The proposed approach is evaluated through simulations and illustrated with two real-life examples from a skin cancer study and a bladder tumor study. A companion R package, PCDSpline, has been developed and is available on CRAN for public use. (C) 2015 Elsevier B.V. All rights reserved.
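The sketch below simulates data from the model described here (a gamma frailty multiplying a non-homogeneous Poisson mean function, observed at panel times) under hypothetical choices of the baseline mean function, covariate effect, and frailty distribution; fitting would be done with the PCDSpline package, whose interface is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n, beta = 200, 0.5                                  # subjects and (hypothetical) covariate effect
Lambda0 = lambda t: np.sqrt(t)                      # hypothetical nondecreasing baseline mean function
x = rng.binomial(1, 0.5, n)                         # binary covariate
phi = rng.gamma(shape=2.0, scale=0.5, size=n)       # gamma frailty with mean 1

panel_counts = []
for i in range(n):
    times = np.sort(rng.uniform(0, 10, size=5))     # panel observation times for subject i
    cum_mean = phi[i] * Lambda0(times) * np.exp(beta * x[i])
    increments = np.diff(np.concatenate(([0.0], cum_mean)))
    panel_counts.append(rng.poisson(increments))    # counts observed between consecutive visits
```

The frailty variance (0.5 with the shape and scale used here) is what induces the within-subject correlation that the approach quantifies through Pearson's correlation coefficient.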
Incorporating prior knowledge about the sources and/or the mixture is a way to improve under-determined audio source separation performance. A great number of informed source separation techniques concentrate on taking priors on the sources into account, but fewer works have focused on constraining the mixing model. In this paper, we address the problem of under-determined multichannel audio source separation in reverberant conditions. We target a semi-informed scenario where some room parameters are known. Two probabilistic priors on the frequency response of the mixing filters are proposed. Early reverberation is characterized by an autoregressive model, while, following results from statistical room acoustics, late reverberation is represented by an autoregressive moving average model. Both reverberation models are defined in the frequency domain; they aim to transcribe the temporal characteristics of the mixing filters into frequency-domain correlations. Our approach leads to a maximum a posteriori estimation of the mixing filters, which is achieved with the expectation-maximization algorithm. We experimentally show the superiority of this approach over a maximum likelihood estimation of the mixing filters.
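As a small numerical illustration of the premise, rather than of the paper's estimator, the sketch below shows that the temporal decay of a (crudely simulated, exponentially decaying) room impulse response induces correlation between neighbouring frequency bins of its transfer function; the sample rate and reverberation time are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)
fs, T60 = 16000, 0.4                                 # sample rate and (hypothetical) reverberation time
n = int(0.5 * fs)
t = np.arange(n) / fs
decay = np.exp(-3.0 * np.log(10) * t / T60)          # amplitude decay implied by T60

# Average correlation between adjacent frequency bins over many simulated filters
corrs = []
for _ in range(200):
    h = rng.normal(size=n) * decay                   # crude exponentially decaying impulse response
    H = np.fft.rfft(h)
    corrs.append(np.corrcoef(np.abs(H[:-1]), np.abs(H[1:]))[0, 1])
print(np.mean(corrs))                                # clearly positive: neighbouring bins are correlated
```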
Classical finite mixture regression is useful for modeling the relationship between scalar predictors and scalar responses arising from subpopulations defined by the differing associations between those predictors and responses. The classical finite mixture regression model is extended to incorporate functional predictors by taking a wavelet-based approach in which both the functional predictors and the component-specific coefficient functions are represented in terms of an appropriate wavelet basis. In this representation, the coefficients corresponding to the functional covariates become the predictors, and there are typically many more predictors than observations. Hence a lasso-type penalization is employed to perform feature selection and estimation simultaneously. Specification of the model is discussed and a fitting algorithm is provided. The wavelet-based approach is evaluated on synthetic data and applied to a real data set from a study of the relationship between cognitive ability and diffusion tensor imaging measures in subjects with multiple sclerosis. (C) 2014 Elsevier B.V. All rights reserved.
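A single-component simplification of this pipeline is sketched below: each functional predictor is replaced by its discrete wavelet coefficients, which then enter a lasso fit; the wavelet family, penalty level, and simulated curves are hypothetical, and the paper's actual model is a finite mixture fitted with a dedicated algorithm.

```python
import numpy as np
import pywt
from sklearn.linear_model import Lasso

rng = np.random.default_rng(6)
n, T = 120, 128
curves = np.cumsum(rng.normal(size=(n, T)), axis=1)         # hypothetical functional predictors

def dwt_features(curve, wavelet="db4", level=4):
    """Represent a curve by its discrete wavelet coefficients."""
    return np.concatenate(pywt.wavedec(curve, wavelet, level=level))

Z = np.vstack([dwt_features(c) for c in curves])             # coefficient design matrix
beta_true = np.zeros(Z.shape[1])
beta_true[:5] = 1.0                                           # sparse (hypothetical) truth
y = Z @ beta_true + rng.normal(scale=0.5, size=n)

model = Lasso(alpha=0.1).fit(Z, y)                            # simultaneous selection and estimation
selected = np.flatnonzero(model.coef_)                        # coefficients retained by the penalty
```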
Motivated by a market environment that changes over time, we consider high-dimensional data such as financial returns generated by a hidden Markov model that allows for switching between different regimes or states. To obtain more stable estimates of the covariance matrices of the different states, which may be driven by numbers of observations that are small compared to the dimension, we modify the expectation-maximization (EM) algorithm so that it yields shrinkage estimators for the covariance matrices. The resulting algorithm produces better estimates not only of the covariance matrices but also of the transition matrix, and yields a more stable and reliable filter for reconstructing the values of the hidden Markov chain. In addition to a simulation study, we present a series of theoretical results that include dimensionality asymptotics and motivate certain techniques used in the algorithm. Supplementary materials for this article are available online.
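A sketch of the kind of shrinkage update that can replace the usual weighted sample covariance in the M-step is given below; the shrinkage target (scaled identity) is standard, but the fixed shrinkage intensity used here is a hypothetical choice, not the data-driven rule of the article.

```python
import numpy as np

def shrinkage_covariance(X, weights, alpha=0.3):
    """Weighted covariance for one HMM state, shrunk toward a scaled identity.
    `weights` play the role of smoothed state probabilities from the E-step;
    `alpha` is a (hypothetical) fixed shrinkage intensity."""
    w = weights / weights.sum()
    mu = w @ X
    Xc = X - mu
    S = (Xc * w[:, None]).T @ Xc                     # weighted sample covariance
    target = np.trace(S) / X.shape[1] * np.eye(X.shape[1])
    return (1.0 - alpha) * S + alpha * target        # shrinkage estimator

# Hypothetical high-dimensional example: few effective observations per state
rng = np.random.default_rng(7)
X = rng.normal(size=(60, 40))
gamma = rng.uniform(size=60)                         # stand-in for E-step state probabilities
Sigma = shrinkage_covariance(X, gamma)
print(np.linalg.cond(Sigma))                         # much better conditioned than the raw estimate
```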
Coverage optimization is an important process for the operator, as it is a crucial prerequisite for offering a satisfactory quality of service to end users. The first step of this process is coverage prediction, which can be performed by interpolating geo-located measurements reported to the network by mobile user equipment. In previous work, we proposed a low-complexity coverage prediction algorithm based on an adaptation of the geostatistical fixed rank kriging (FRK) algorithm, assuming that the geo-location information reported with the radio measurements was perfect, which is not the case in reality. In this paper, we study the impact of location uncertainty on the coverage prediction accuracy and extend the previously proposed algorithm to include the geo-location error in the prediction model. We validate the proposed algorithm using both simulated and real-field measurements. Extending FRK to take location uncertainty into account proves to enhance the prediction accuracy while keeping a reasonable computational complexity.
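The sketch below is a low-rank, ridge-regularized interpolation in the spirit of fixed rank kriging, not the paper's FRK extension: geo-located measurements are projected onto a fixed, small set of radial basis functions, the observed locations are perturbed by a hypothetical geo-location error, and a coverage map is predicted on a grid; all field, noise, and error parameters are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

def rbf_basis(locs, centers, scale=0.15):
    """Gaussian radial basis functions; a fixed, small number of them is the 'fixed rank' idea."""
    d2 = ((locs[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / scale ** 2)

# Hypothetical geo-located coverage measurements on the unit square
true_field = lambda p: -70 - 30 * np.hypot(p[:, 0] - 0.5, p[:, 1] - 0.5)
locs = rng.uniform(size=(400, 2))
loc_error = rng.normal(scale=0.03, size=locs.shape)         # geo-location uncertainty
y = true_field(locs) + rng.normal(scale=2.0, size=400)      # noisy measurements

centers = np.array([(i, j) for i in np.linspace(0.1, 0.9, 5)
                            for j in np.linspace(0.1, 0.9, 5)])
S = rbf_basis(locs + loc_error, centers)                     # only the perturbed locations are observed
# Ridge-regularized fit of the low-rank coefficients; the regularization crudely absorbs
# the extra variability introduced by the location error in this simplified sketch.
eta = np.linalg.solve(S.T @ S + 1.0 * np.eye(len(centers)), S.T @ y)

grid = np.array([(i, j) for i in np.linspace(0, 1, 50) for j in np.linspace(0, 1, 50)])
coverage_map = rbf_basis(grid, centers) @ eta                # predicted coverage map
```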