Swindel (1976) introduced a modified ridge regression estimator based on prior information. A necessary and sufficient condition is derived for Swindel's proposed estimator to have lower risk than the conventional ordinary ridge regression estimator when both estimators are computed using the same value of k.
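A minimal NumPy sketch of the two estimators being compared, assuming Swindel's modification takes the usual form of shrinking toward a prior point b rather than toward the origin (the data, variable names, and choice of k below are illustrative, not from the paper):

```python
import numpy as np

def ridge(X, y, k):
    """Ordinary ridge estimator: (X'X + kI)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def swindel_ridge(X, y, k, b_prior):
    """Swindel-style modified ridge estimator: shrinks toward a prior
    point b_prior instead of the origin,
    (X'X + kI)^{-1} (X'y + k * b_prior)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * b_prior)

# Illustrative data; both estimators use the same value of k,
# as in the paper's comparison.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=0.5, size=50)
k = 0.5
print(ridge(X, y, k))
print(swindel_ridge(X, y, k, b_prior=np.array([1.0, 2.0, -1.0])))
```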
This paper puts the case for the inclusion of point optimal tests in the econometrician's repertoire. They do not suit every testing situation but the current evidence, which is reviewed here, indicates that they ...
Small-disturbance approximations for the bias vector and mean squared error matrix of the mixed regression estimator for the coefficients in a linear regression model are derived, and efficiency with respect to the least squares estimator is examined.
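Assuming the mixed regression estimator here is the familiar Theil-Goldberger combination of sample and stochastic prior information, a minimal sketch (the data and the prior restriction below are illustrative):

```python
import numpy as np

def mixed_estimator(X, y, R, r, sigma2, Omega):
    """Theil-Goldberger mixed estimator: combines the sample model
    y = X beta + u, Var(u) = sigma2 * I, with stochastic prior
    information r = R beta + v, Var(v) = Omega:
        (X'X/sigma2 + R'Omega^{-1}R)^{-1} (X'y/sigma2 + R'Omega^{-1}r)
    """
    Oi = np.linalg.inv(Omega)
    A = X.T @ X / sigma2 + R.T @ Oi @ R
    b = X.T @ y / sigma2 + R.T @ Oi @ r
    return np.linalg.solve(A, b)

# Illustrative use: a stochastic prior belief that beta_1 + beta_2 = 1.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
y = X @ np.array([0.6, 0.4]) + rng.normal(scale=0.3, size=30)
R = np.array([[1.0, 1.0]])
r = np.array([1.0])
print(mixed_estimator(X, y, R, r, sigma2=0.09, Omega=np.array([[0.01]])))
```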
The aim of this paper is to provide criteria that allow one to compare two estimators of the parameter vector in the linear regression model with respect to their mean square error matrices, where the main interest is focussed on the case when the difference of the covariance matrices is singular. The results obtained are applied to equality-restricted and pretest estimators.
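A sketch of the underlying comparison (function names are mine, not the paper's): estimator A is preferred to B in the MSE-matrix sense when MSE(B) - MSE(A) is nonnegative definite, and an eigenvalue test handles this even when the difference is singular:

```python
import numpy as np

def mse_matrix(cov, bias):
    """Mean squared error matrix: Cov(b) + bias(b) bias(b)'."""
    bias = np.asarray(bias, dtype=float).reshape(-1, 1)
    return cov + bias @ bias.T

def mse_dominates(mse_a, mse_b, tol=1e-10):
    """True if MSE(B) - MSE(A) is nonnegative definite, i.e. A is no
    worse than B for every quadratic loss.  Zero eigenvalues (a
    singular difference) are allowed."""
    return bool(np.all(np.linalg.eigvalsh(mse_b - mse_a) >= -tol))

# Toy example: an equality-restricted estimator has smaller covariance
# but picks up bias when the restriction is wrong; the covariance
# difference here is singular.
cov_ols = np.diag([1.0, 1.0])
cov_restricted = np.diag([1.0, 0.0])
print(mse_dominates(mse_matrix(cov_restricted, [0.0, 0.5]),
                    mse_matrix(cov_ols, [0.0, 0.0])))   # True
```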
This article gives a nonlinear version of the Gauss-Markov theorem. It is shown that in a linear regression model y = Xβ + u, the lower bound for the risk matrix E[(β̂ − β)(β̂ − β)′] of a nonlinear estimator β̂ of β belonging to a certain class is the covariance matrix of the Gauss-Markov estimator, provided the distribution of the error term u belongs to the class of elliptically symmetric distributions with second moments.
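A Monte Carlo illustration of the kind of comparison the theorem makes: under normal (hence elliptically symmetric) errors, the simulated risk matrix of a nonlinear estimator, here least absolute deviations, chosen only as a convenient example and not claimed to lie in the paper's class, should exceed the Gauss-Markov covariance by a nonnegative definite matrix:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n, p = 30, 2
X = rng.normal(size=(n, p))
beta = np.array([1.0, -0.5])
gm_cov = np.linalg.inv(X.T @ X)   # Gauss-Markov covariance (sigma^2 = 1)

def lad(X, y):
    """Least absolute deviations: a simple nonlinear estimator of beta."""
    obj = lambda b: np.abs(y - X @ b).sum()
    return minimize(obj, np.zeros(X.shape[1]), method="Nelder-Mead").x

# Simulated risk matrix of LAD under normal errors.
reps = 1000
acc = np.zeros((p, p))
for _ in range(reps):
    y = X @ beta + rng.normal(size=n)
    d = (lad(X, y) - beta).reshape(-1, 1)
    acc += d @ d.T
risk_lad = acc / reps

# Eigenvalues of the difference should all be (approximately) >= 0.
print(np.linalg.eigvalsh(risk_lad - gm_cov))
```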
In the linear regression model, the asymptotic distributions of certain functions of confidence bounds of a class of confidence intervals for the regression parameter are investigated. The class of confidence interval...
We consider the use of minimax shrinkage estimators for the linear regression model under several loss functions when severe multicollinearity is present. The examples considered illustrate that little or no departure from the least squares estimates is permitted in many cases when the data are highly multicollinear and/or shrinkage is toward a point in the parameter space that does not closely agree with the sample data.
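A sketch of one standard minimax shrinkage family, a positive-part Stein rule toward a chosen point b0 under weighted quadratic loss (the paper's specific estimators and loss functions may differ). Note how the shrinkage factor stays near 1, i.e. near least squares, when b0 disagrees sharply with the sample, echoing the abstract's point:

```python
import numpy as np

def stein_shrinkage(X, y, b0):
    """Positive-part Stein-rule shrinkage of OLS toward b0 under the
    X'X-weighted quadratic loss.  Requires p >= 3 regressors."""
    n, p = X.shape
    XtX = X.T @ X
    b_ols = np.linalg.solve(XtX, X.T @ y)
    rss = np.sum((y - X @ b_ols) ** 2)
    a = (p - 2) / (n - p + 2)                  # minimax shrinkage constant
    q = (b_ols - b0) @ XtX @ (b_ols - b0)      # distance of OLS from target
    factor = max(0.0, 1.0 - a * rss / q)       # near 1 when q is large
    return b0 + factor * (b_ols - b0)

# Illustrative data; shrinkage toward the origin.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))
y = X @ np.array([1.0, 0.5, -0.5, 2.0]) + rng.normal(size=50)
print(stein_shrinkage(X, y, b0=np.zeros(4)))
```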
It is not always possible to establish a preference ordering among regression estimators in terms of the generalized mean square error criterion. In this paper, we determine when it is feasible to use this criterion to conduct comparisons among ordinary least squares, principal components, ridge regression, and shrunken least squares estimators.
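For reference, minimal NumPy versions of the four estimators being compared (the tuning choices k, r, and c are user-supplied; the paper's comparison criterion itself is not reproduced here):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    """Ridge regression with ridge constant k."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def principal_components(X, y, r):
    """Principal components regression keeping the r leading
    eigenvectors of X'X."""
    w, V = np.linalg.eigh(X.T @ X)    # eigenvalues in ascending order
    V_r = V[:, -r:]                   # r leading eigenvectors
    coef = np.linalg.solve(V_r.T @ X.T @ X @ V_r, V_r.T @ X.T @ y)
    return V_r @ coef

def shrunken_ls(X, y, c):
    """Shrunken least squares: a scalar multiple c in (0, 1] of OLS."""
    return c * ols(X, y)
```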
The paper deals with the question of how to choose the observation points to obtain a linear estimator or predictor with a mean squared error as small as possible. In this connection the least squares estimator, the best linear unbiased estimator and the best linear unbiased predictor are considered. The idea is to find, at least approximately, the best weight function in the continuous observation case and to use this function to construct discrete designs. Some examples are given to compare the results with, e.g., those of SACKS/YLVISAKER and BICKEL/HERZBERG.
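A toy version of the design question for straight-line regression with uncorrelated, homoscedastic errors (the Sacks/Ylvisaker setting also covers correlated observations, where the answer changes): rank designs by the trace of the BLUE's covariance, which here is (X'X)^{-1} up to σ²:

```python
import numpy as np

def blue_cov_trace(points):
    """trace of (X'X)^{-1}: the scaled total variance of the BLUE in
    the straight-line model y = b0 + b1*t + u, observed at `points`."""
    t = np.asarray(points, dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    return np.trace(np.linalg.inv(X.T @ X))

n = 10
uniform = np.linspace(0.0, 1.0, n)                          # equally spaced
endpoints = np.array([0.0] * (n // 2) + [1.0] * (n // 2))   # mass at the ends
print(blue_cov_trace(uniform))    # larger
print(blue_cov_trace(endpoints))  # smaller: the endpoint design wins here
```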
The stability of a slightly modified version of the usual jackknife variance estimator is evaluated exactly in small samples under a suitable linear regression model and compared with that of two different linearization variance estimators. Depending on the degree of heteroscedasticity of the error variance in the model, the stability of the jackknife variance estimator is found to be somewhat comparable to that of one or the other of the linearization variance estimators under conditions especially favorable to ratio estimation (i.e., regression approximately through the origin with a relatively small coefficient of variation in the x population). When these conditions do not hold, however, the jackknife variance estimator is found to be less stable than either of the linearization variance estimators.
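A minimal sketch of the two kinds of variance estimator being compared, for the ratio-estimation setting the abstract mentions (this is the standard delete-one jackknife and a Taylor linearization version, not the paper's "slightly modified" jackknife or its exact small-sample evaluation):

```python
import numpy as np

def jackknife_variance(x, y, estimator):
    """Standard delete-one jackknife variance estimate for a statistic
    computed from paired data (x_i, y_i)."""
    n = len(x)
    loo = np.array([estimator(np.delete(x, i), np.delete(y, i))
                    for i in range(n)])
    return (n - 1) / n * np.sum((loo - loo.mean()) ** 2)

def linearization_variance(x, y):
    """Taylor linearization variance estimate for the ratio estimator
    R = sum(y) / sum(x)."""
    n = len(x)
    R = y.sum() / x.sum()
    e = y - R * x
    return np.sum(e ** 2) / ((n - 1) * n * x.mean() ** 2)

ratio = lambda x, y: y.sum() / x.sum()

# Illustrative heteroscedastic data, regression roughly through the origin.
rng = np.random.default_rng(2)
x = rng.uniform(1.0, 2.0, size=25)
y = 1.5 * x + rng.normal(scale=0.1 * np.sqrt(x), size=25)
print(jackknife_variance(x, y, ratio), linearization_variance(x, y))
```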