This tag is for questions about mean-square-error.

In statistics, the mean squared error (MSE) of an estimator measures the average of the squares of the errors or deviations, that is, the difference between the estimator and what is estimated. MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. The difference occurs because of randomness or because the estimator doesn't account for information that could produce a more accurate estimate.

The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias. For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated.
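
Concretely, for an estimator $\hat{\theta}$ of a parameter $\theta$ (the MSE of an estimator is defined formally below), the standard bias-variance decomposition makes this explicit; setting the bias to zero recovers the unbiased case just mentioned:

$$\operatorname{MSE}(\hat{\theta}) = \operatorname{Var}(\hat{\theta}) + \big(\operatorname{Bias}(\hat{\theta})\big)^2, \qquad \text{where } \operatorname{Bias}(\hat{\theta}) = \operatorname{E}[\hat{\theta}] - \theta$$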

The MSE assesses the quality of an estimator or a predictor. Its definition differs according to whether one is describing an estimator or a predictor.

Definition

Predictor:

If $\hat{Y}$ is a vector of n predictions, and $Y$ is the vector of observed values corresponding to the inputs to the function which generated the predictions, then the MSE of the predictor can be estimated by $$\operatorname{MSE}=\frac{1}{n}\sum_{i=1}^n(\hat{Y_i} - Y_i)^2$$
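
A minimal sketch of this sample average of squared prediction errors, written in Python/NumPy (the function name `mse` and the example numbers are purely illustrative):

```python
import numpy as np

def mse(y_pred, y_true):
    """Average of the squared differences between predictions and observed values."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return np.mean((y_pred - y_true) ** 2)

# Illustrative numbers: four predictions against their observed values
print(mse([2.5, 0.0, 2.0, 8.0], [3.0, -0.5, 2.0, 7.0]))  # (0.25 + 0.25 + 0 + 1) / 4 = 0.375
```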

Estimator:

The MSE of an estimator $\hat{\theta}$ with respect to an unknown parameter $\theta$ is defined as $$\operatorname{MSE}(\hat{\theta})=\operatorname{E}\big[(\hat{\theta}-\theta)^2\big]$$
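
When the true parameter is known (e.g. in a simulation study), this expectation can be approximated by Monte Carlo. The sketch below assumes a normal model and uses the sample mean as the estimator; the seed, sample size, and parameter values are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)   # arbitrary seed for reproducibility

theta = 2.0          # true mean (known here only because we are simulating)
sigma, n = 3.0, 25   # population standard deviation and sample size
reps = 100_000       # number of Monte Carlo replications

# Draw `reps` independent samples of size n and use the sample mean as the estimator
samples = rng.normal(loc=theta, scale=sigma, size=(reps, n))
theta_hat = samples.mean(axis=1)

# Approximate E[(theta_hat - theta)^2] by averaging over the replications
mse_mc = np.mean((theta_hat - theta) ** 2)

print(mse_mc)        # close to the theoretical value below
print(sigma**2 / n)  # the sample mean is unbiased, so MSE = Var = sigma^2 / n = 0.36
```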