Feb 21, 2010

Mean Square Error (MSE) and Variance

The difference between the variance of an estimator and its MSE is that the variance measures the dispersion of the estimator around its mean, whereas the MSE measures its dispersion around the true value of the parameter being estimated. For unbiased estimators the two are identical.
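In symbols (my own notation, not from the post: beta-hat is the estimator, beta the true parameter):

```latex
\operatorname{Var}(\hat{\beta}) = E\!\left[(\hat{\beta} - E[\hat{\beta}])^{2}\right],
\qquad
\operatorname{MSE}(\hat{\beta}) = E\!\left[(\hat{\beta} - \beta)^{2}\right].
```

When E[beta-hat] = beta (no bias), the two expressions coincide.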

Biased estimators with smaller variances than unbiased estimators are easy to find. The minimum-MSE estimator has not been as popular as the best unbiased estimator because of the mathematical difficulties in its derivation. Furthermore, when it can be derived, its formula often involves the unknown parameters (the true value of beta), making it impossible to apply directly. Monte Carlo studies have shown that this problem can sometimes be circumvented by approximating the estimator with OLS estimates of the unknown parameters. (A little confused here: does this just mean plugging the OLS estimates in where the real beta should go? See the sketch below.)
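A toy sketch of that plug-in idea (my own example, not from Kennedy; the function name feasible_shrinkage and all the numbers are assumptions for illustration): for estimating a mean mu by c * xbar, the MSE-minimizing factor is c* = mu^2 / (mu^2 + sigma^2/n), which depends on the unknown mu and sigma^2, so the operational version simply substitutes the sample estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def feasible_shrinkage(x):
    """Shrink the sample mean by c_hat, where the unknown mu and sigma^2 in
    the MSE-optimal factor c* = mu^2 / (mu^2 + sigma^2/n) are replaced by
    their sample estimates (the 'plug-in' / operational version)."""
    n = len(x)
    xbar = x.mean()
    s2 = x.var(ddof=1)
    c_hat = xbar**2 / (xbar**2 + s2 / n)
    return c_hat * xbar

mu, sigma, n, reps = 0.3, 2.0, 20, 100_000   # assumed toy values
plain, shrunk = [], []
for _ in range(reps):
    x = rng.normal(mu, sigma, n)
    plain.append(x.mean())                   # unbiased estimator
    shrunk.append(feasible_shrinkage(x))     # biased plug-in estimator

mse = lambda est: np.mean((np.array(est) - mu) ** 2)
print("MSE of sample mean:      ", mse(plain))
print("MSE of plug-in shrinkage:", mse(shrunk))
```

In a small-signal setting like this one, the biased plug-in estimator tends to come out with the lower MSE, which is the sense in which the approximation circumvents the unknown-beta problem.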

Note: the weighted squared error criterion could be a very interesting topic to explore!
Peter Kennedy: When the weights are equal, the criterion is the popular mean square error (MSE) criterion. It happens that the expected value of a loss function consisting of the square of the difference between beta and its estimate (i.e. the square of the estimation error) is the same as the sum of the variance and the squared bias. 
Please refer to the following derivation:
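This is the standard decomposition (add and subtract E[beta-hat]; the cross term vanishes because it has expectation zero):

```latex
\begin{aligned}
\operatorname{MSE}(\hat{\beta})
  &= E\!\left[(\hat{\beta}-\beta)^{2}\right] \\
  &= E\!\left[\big(\hat{\beta}-E[\hat{\beta}] + E[\hat{\beta}]-\beta\big)^{2}\right] \\
  &= E\!\left[(\hat{\beta}-E[\hat{\beta}])^{2}\right]
     + 2\big(E[\hat{\beta}]-\beta\big)\,E\!\left[\hat{\beta}-E[\hat{\beta}]\right]
     + \big(E[\hat{\beta}]-\beta\big)^{2} \\
  &= \operatorname{Var}(\hat{\beta}) + \big[\operatorname{Bias}(\hat{\beta})\big]^{2}.
\end{aligned}
```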



OLS: It is not the case that the OLS estimator is the minimum mean square error estimator in the classical linear regression model. Even among linear estimators, a substantial reduction in variance can sometimes be obtained by adopting a slightly biased estimator, as the sketch below illustrates.
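A rough illustration of that point (my own sketch, not from the text; the ridge penalty and the data-generating numbers are arbitrary assumptions): with highly collinear regressors, ridge regression, a slightly biased linear estimator, can achieve a much lower coefficient MSE than OLS in a Monte Carlo comparison.

```python
import numpy as np

rng = np.random.default_rng(1)

n, reps = 50, 5_000
beta = np.array([1.0, 2.0])          # true coefficients (assumed)
ridge_lambda = 5.0                   # arbitrary penalty for illustration

def simulate_once():
    # Two highly collinear regressors.
    x1 = rng.normal(size=n)
    x2 = x1 + 0.1 * rng.normal(size=n)
    X = np.column_stack([x1, x2])
    y = X @ beta + rng.normal(size=n)
    # OLS: unbiased, but high variance under collinearity.
    b_ols = np.linalg.solve(X.T @ X, X.T @ y)
    # Ridge: slightly biased linear estimator with much lower variance.
    b_ridge = np.linalg.solve(X.T @ X + ridge_lambda * np.eye(2), X.T @ y)
    return b_ols, b_ridge

ols, ridge = zip(*(simulate_once() for _ in range(reps)))
mse = lambda est: np.mean(np.sum((np.array(est) - beta) ** 2, axis=1))
print("Total coefficient MSE, OLS:  ", mse(ols))
print("Total coefficient MSE, ridge:", mse(ridge))
```

With these numbers the ridge estimator's small bias is more than offset by its variance reduction, so its total MSE typically comes out well below that of OLS.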
