Friday, June 21, 2013

Errors and residuals in statistics

This post is from here.


In statistics and optimization, statistical errors and residuals are two closely related and easily confused measures of the deviation of an observed value of an element of a statistical sample from its "theoretical value". The error of an observed value is the deviation of the observed value from the (unobservable) true function value, while the residual of an observed value is the difference between the observed value and the estimated function value.
The distinction is most important in regression analysis, where it leads to the concept of studentized residuals.

Introduction

Suppose there is a series of observations from a univariate distribution and we want to estimate the mean of that distribution (the so-called location model). In this case, the errors are the deviations of the observations from the population mean, while the residuals are the deviations of the observations from the sample mean.
A statistical error is the amount by which an observation differs from its expected value, the latter being based on the whole population from which the statistical unit was chosen randomly. For example, if the mean height in a population of 21-year-old men is 1.75 meters, and one randomly chosen man is 1.80 meters tall, then the "error" is 0.05 meters; if the randomly chosen man is 1.70 meters tall, then the "error" is −0.05 meters. The expected value, being the mean of the entire population, is typically unobservable, and hence the statistical error cannot be observed either.
A residual (or fitting error), on the other hand, is an observable estimate of the unobservable statistical error. Consider the previous example with men's heights and suppose we have a random sample of n people. The sample mean could serve as a good estimator of the population mean. Then we have:
  • The difference between the height of each man in the sample and the unobservable population mean is a statistical error, whereas
  • The difference between the height of each man in the sample and the observable sample mean is a residual.
Note that the sum of the residuals within a random sample is necessarily zero, and thus the residuals are necessarily not independent. The statistical errors, on the other hand, are independent, and their sum within the random sample is almost surely not zero.
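To make this point concrete, here is a minimal simulation sketch (assuming Python with NumPy; the population mean and standard deviation are known here only because we simulate the data) contrasting errors with residuals:

    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma, n = 1.75, 0.07, 10              # hypothetical population mean/sd of heights
    sample = rng.normal(mu, sigma, size=n)     # n randomly chosen heights

    errors = sample - mu                       # deviations from the (normally unobservable) population mean
    residuals = sample - sample.mean()         # deviations from the observable sample mean

    print(errors.sum())      # almost surely not zero
    print(residuals.sum())   # zero, up to floating-point rounding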
One can standardize statistical errors (especially of a normal distribution) in a z-score (or "standard score"), and standardize residuals in a t-statistic, or more generally studentized residuals.

Example with some mathematical theory

If we assume a normally distributed population with mean μ and standard deviation σ, and choose individuals independently, then we have
X_1, \dots, X_n \sim N(\mu,\sigma^2)
and the sample mean
\overline{X}={X_1 + \cdots + X_n \over n}
is a random variable distributed thus:
\overline{X}\sim N(\mu, \sigma^2/n).
The statistical errors are then
\varepsilon_i = X_i - \mu,
whereas the residuals are
\widehat{\varepsilon}_i=X_i-\overline{X}.
(As is often done, the "hat" over the letter ε indicates an observable estimate of an unobservable quantity called ε.)
The sum of squares of the statistical errors, divided by σ², has a chi-squared distribution with n degrees of freedom:
\sum_{i=1}^n \left(X_i-\mu\right)^2/\sigma^2\sim\chi^2_n.
This quantity, however, is not observable. The sum of squares of the residuals, on the other hand, is observable. The quotient of that sum by σ² has a chi-squared distribution with only n − 1 degrees of freedom:
\sum_{i=1}^n \left(\,X_i-\overline{X}\,\right)^2/\sigma^2\sim\chi^2_{n-1}.
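A quick Monte Carlo sketch (again assuming NumPy; the sample size and number of replications are arbitrary) makes the loss of one degree of freedom visible, since a chi-squared distribution with k degrees of freedom has mean k:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, sigma, n, reps = 0.0, 1.0, 8, 100_000
    samples = rng.normal(mu, sigma, size=(reps, n))

    ss_errors = ((samples - mu) ** 2).sum(axis=1) / sigma**2
    ss_residuals = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1) / sigma**2

    print(ss_errors.mean())      # close to n = 8
    print(ss_residuals.mean())   # close to n - 1 = 7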
It is remarkable that the sum of squares of the residuals and the sample mean can be shown to be independent of each other. That fact and the normal and chi-squared distributions given above form the basis of calculations involving the quotient
{\overline{X} - \mu \over S_n/\sqrt{n}},
where S_n is the sample standard deviation, computed from the residuals. The probability distributions of the numerator and the denominator separately depend on the value of the unobservable population standard deviation σ, but σ appears in both the numerator and the denominator and cancels. That is fortunate because it means that even though we do not know σ, we know the probability distribution of this quotient: it has a Student's t-distribution with n − 1 degrees of freedom. We can therefore use this quotient to find a confidence interval for μ.
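As a sketch of how this quotient is used in practice (assuming Python with NumPy and SciPy; the data here are simulated), a 95% confidence interval for μ can be built from the sample mean, the sample standard deviation, and a Student's t critical value with n − 1 degrees of freedom:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    mu, sigma, n = 1.75, 0.07, 25
    x = rng.normal(mu, sigma, size=n)

    xbar = x.mean()
    s_n = x.std(ddof=1)                      # sample standard deviation S_n (divides by n - 1)
    t_crit = stats.t.ppf(0.975, df=n - 1)    # 97.5th percentile of Student's t with n - 1 df
    half_width = t_crit * s_n / np.sqrt(n)

    print(f"95% CI for mu: [{xbar - half_width:.3f}, {xbar + half_width:.3f}]")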

Regressions

In regression analysis, the distinction between errors and residuals is subtle and important, and leads to the concept of studentized residuals. Given an unobservable function that relates the independent variable to the dependent variable – say, a line – the deviations of the dependent variable observations from this function are the unobservable errors. If one runs a regression on some data, then the deviations of the dependent variable observations from the fitted function are the residuals.
However, a terminological difference arises in the expression mean squared error (MSE). The mean squared error of a regression is a number computed from the sum of squares of the computed residuals, and not of the unobservable errors. If that sum of squares is divided by n, the number of observations, the result is the mean of the squared residuals. Since this is a biased estimate of the variance of the unobserved errors, the bias is removed by multiplying the mean of the squared residuals by n/df, where df is the number of degrees of freedom (n minus the number of parameters being estimated). This latter formula serves as an unbiased estimate of the variance of the unobserved errors, and is called the mean squared error. [1]
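The following sketch (assuming NumPy; the straight-line model and noise level are invented for illustration) shows the difference between dividing the residual sum of squares by n and dividing it by the degrees of freedom n − p:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    x = np.linspace(0, 10, n)
    y = 2.0 + 0.5 * x + rng.normal(0, 1.0, size=n)   # true error variance is 1

    coeffs = np.polyfit(x, y, deg=1)                 # fit a line: p = 2 estimated parameters
    residuals = y - np.polyval(coeffs, x)

    p = 2
    mean_sq_resid = (residuals ** 2).mean()          # divides by n: biased low
    mse = (residuals ** 2).sum() / (n - p)           # divides by df = n - p: unbiased
    print(mean_sq_resid, mse)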
However, because of the behavior of the process of regression, the distributions of residuals at different data points (of the input variable) may vary even if the errors themselves are identically distributed. Concretely, in a linear regression where the errors are identically distributed, the variability of the residuals of inputs in the middle of the domain will be higher than the variability of the residuals at the ends of the domain: linear regressions fit endpoints better than the middle. This is also reflected in the influence functions of various data points on the regression coefficients: endpoints have more influence.
Thus to compare residuals at different inputs, one needs to adjust the residuals by the expected variability of the residuals, which is called studentizing. This is particularly important in the case of detecting outliers: a large residual may be expected in the middle of the domain, but considered an outlier at the end of the domain.
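A minimal studentizing sketch (assuming NumPy; the data and model are invented) divides each residual by an estimate of its own standard deviation, using the leverages h_ii from the hat matrix:

    import numpy as np

    rng = np.random.default_rng(4)
    n = 30
    x = np.linspace(0, 10, n)
    y = 1.0 + 0.3 * x + rng.normal(0, 0.5, size=n)

    X = np.column_stack([np.ones(n), x])             # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta

    H = X @ np.linalg.inv(X.T @ X) @ X.T             # hat matrix; diagonal entries are leverages
    h = np.diag(H)
    s2 = (residuals ** 2).sum() / (n - X.shape[1])   # unbiased estimate of error variance

    studentized = residuals / np.sqrt(s2 * (1 - h))  # internally studentized residuals
    print(h[[0, n // 2, -1]])                        # leverage is larger at the endpoints
    print(studentized[:5])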

Stochastic error

The stochastic error in a measurement is the error that is random from one measurement to the next. Stochastic errors tend to be Gaussian (normal) in their distribution, because a stochastic error is most often the sum of many small random errors, and the Central Limit Theorem shows that the distribution of such a sum looks approximately Gaussian. A stochastic error term is added to a regression equation to account for all the variation in Y that cannot be explained by the included Xs. It is, in effect, an admission of our inability to model all the movements of the dependent variable.

Other uses of the word "error" in statistics

The use of the term "error" as discussed in the sections above is in the sense of a deviation of a value from a hypothetical unobserved value. At least two other uses also occur in statistics, both referring to observable prediction errors:
  • Mean square error or mean squared error (abbreviated MSE) and root mean square error (RMSE) refer to the amount by which the values predicted by an estimator differ from the quantities being estimated (typically outside the sample from which the model was estimated).
  • Sum of squared errors, typically abbreviated SSE or SSe, refers to the residual sum of squares (the sum of squared residuals) of a regression; this is the sum of the squares of the deviations of the actual values from the predicted values, within the sample used for estimation. Likewise, the sum of absolute errors (SAE) refers to the sum of the absolute values of the residuals, which is minimized in the least absolute deviations approach to regression. A small numeric sketch follows this list.
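As a small numeric sketch (assuming NumPy; the actual and predicted values are invented), SSE, SAE, and RMSE for a set of within-sample predictions can be computed directly from the residuals:

    import numpy as np

    y_actual = np.array([2.0, 2.9, 4.1, 5.2])
    y_predicted = np.array([2.1, 3.0, 3.9, 5.0])
    residuals = y_actual - y_predicted

    sse = (residuals ** 2).sum()             # sum of squared errors (residual sum of squares)
    sae = np.abs(residuals).sum()            # sum of absolute errors
    rmse = np.sqrt((residuals ** 2).mean())  # root mean square error
    print(sse, sae, rmse)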

References

  1. Steel, Robert G. D.; Torrie, James H. (1960). Principles and Procedures of Statistics, with Special Reference to Biological Sciences. McGraw-Hill. p. 288.
