1. Autocorrelation.
Autocorrelation occurs when the error term of one observation in a regression model is correlated with, or affected by, the error term of another observation. It is a distinct problem from heteroscedasticity, which concerns the variance of the error terms rather than the correlation between them. Autocorrelation arises in data when the error terms of a regression forecasting model are correlated across observations.
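As an illustration, autocorrelated errors can be simulated with a first-order autoregressive (AR(1)) process; the value rho = 0.8 below is an illustrative choice, not something from the notes:

```python
import random

# Simulate AR(1)-autocorrelated errors: e_t = rho * e_{t-1} + u_t
random.seed(0)
rho, n = 0.8, 500
errors, prev = [], 0.0
for _ in range(n):
    prev = rho * prev + random.gauss(0.0, 1.0)
    errors.append(prev)

# Sample correlation between e_t and its first lag e_{t-1}
m = sum(errors) / n
num = sum((errors[t] - m) * (errors[t - 1] - m) for t in range(1, n))
den = sum((e - m) ** 2 for e in errors)
lag1_corr = num / den  # close to rho: consecutive errors move together
```

The lag-1 correlation of the simulated errors comes out close to rho, which is exactly the "one error term related to another" pattern described above.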
2. Consequences of autocorrelation.
- The estimates of the regression coefficients no longer have the minimum-variance property and may be inefficient.
- The variance of the error terms may be seriously underestimated by the mean squared error (MSE).
- The true standard deviations of the estimated regression coefficients are seriously underestimated.
- Confidence intervals and tests based on the t and F distributions are no longer strictly applicable.
- Ordinary least squares (OLS) estimators are still linear and unbiased, because unbiasedness and consistency do not depend on the no-autocorrelation assumption (assumption six), which is violated here.
- Since Σe² is affected, R² is also affected.
- The OLS estimators are inefficient and therefore no longer BLUE (best linear unbiased estimators).
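The underestimation of standard errors can be demonstrated with a small simulation for the simplest case, the sample mean; the parameter choices (rho = 0.8, 300 replications) are my own, made only for illustration:

```python
import math
import random

# With positively autocorrelated errors, the classical standard error
# of the sample mean understates the true sampling variability.
random.seed(42)
rho, n, reps = 0.8, 200, 300
naive_ses, means = [], []
for _ in range(reps):
    prev, e = 0.0, []
    for _ in range(n):
        prev = rho * prev + random.gauss(0.0, 1.0)
        e.append(prev)
    m = sum(e) / n
    s2 = sum((x - m) ** 2 for x in e) / (n - 1)
    naive_ses.append(math.sqrt(s2 / n))  # classical formula assumes independence
    means.append(m)

avg_naive_se = sum(naive_ses) / reps
grand = sum(means) / reps
true_sd = math.sqrt(sum((m - grand) ** 2 for m in means) / (reps - 1))
# avg_naive_se falls well below true_sd: the usual SE is too optimistic
```

The average "naive" standard error is well below the empirical standard deviation of the estimates across replications, matching the consequences listed above.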
3. Ways of detecting autocorrelation.
- Graphical method: There are various ways of examining the residuals. A time-sequence plot of the residuals can be produced. Alternatively, we can plot the standardized residuals against time; the standardized residuals are simply the residuals divided by the standard error of the regression. If the plot shows a pattern, the errors may not be random. We can also plot each error term against its first lag: positive autocorrelation shows up as clusters of residuals with the same sign, while negative autocorrelation shows up as rapid changes in the signs of consecutive residuals.
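The two quantities used in these plots can be sketched in a few lines of pure Python; the function names are my own:

```python
import math

def standardized_residuals(residuals, n_params):
    """Divide each residual by the regression standard error
    s = sqrt(SSE / (n - k)), where k is the number of fitted parameters."""
    n = len(residuals)
    s = math.sqrt(sum(e * e for e in residuals) / (n - n_params))
    return [e / s for e in residuals]

def lag_pairs(residuals):
    """Points (e_t, e_{t-1}) for the residual-vs-first-lag scatter plot."""
    return list(zip(residuals[1:], residuals[:-1]))
```

Plotting `standardized_residuals` against time, or the `lag_pairs` points against each other, gives the diagnostic pictures described above.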
- The runs test: Consider a list of estimated error terms; each error term can be positive or negative. A run is defined as an uninterrupted sequence of one symbol or attribute, such as + or −, and the length of a run is the number of elements in it. The following sequence has three runs: the first run is 6 minuses, the second has 13 pluses, and the last has 11 minuses.
(─ ─ ─ ─ ─ ─) (+ + + + + + + + + + + + +) (─ ─ ─ ─ ─ ─ ─ ─ ─ ─ ─)
Too few runs suggests positive autocorrelation; too many runs suggests negative autocorrelation.
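Counting runs, and comparing the count with the number expected under randomness, can be sketched as follows (the function names are my own; the expected-runs formula E(R) = 2·n1·n2/n + 1 is the standard one for the runs test):

```python
def count_runs(signs):
    """Count maximal blocks of identical consecutive symbols."""
    runs = 1
    for prev, cur in zip(signs, signs[1:]):
        if cur != prev:
            runs += 1
    return runs

def expected_runs(signs):
    """Mean number of runs under randomness: E(R) = 2*n1*n2/n + 1."""
    n1, n2 = signs.count('+'), signs.count('-')
    n = n1 + n2
    return 2 * n1 * n2 / n + 1

# The sequence from the text: 6 minuses, 13 pluses, 11 minuses
seq = '-' * 6 + '+' * 13 + '-' * 11
# 3 observed runs against roughly 15.7 expected: far too few runs,
# which points to positive autocorrelation.
```

For the example sequence the 3 observed runs are far below the roughly 15.7 expected under randomness, indicating positive autocorrelation.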
- The Durbin-Watson test: Use the Durbin-Watson statistic to test for the presence of autocorrelation. The test is based on the assumption that the errors are generated by a first-order autoregressive process. If there are missing observations, they are omitted from the calculation, and only the non-missing observations are used. To draw a conclusion, compare the computed statistic D with the lower and upper bounds in the Durbin-Watson table: if D is above the upper bound, there is no evidence of positive autocorrelation; if D is below the lower bound, positive autocorrelation exists; if D lies between the two bounds, the test is inconclusive.
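The statistic itself is simple to compute from the residuals; a minimal sketch (function name my own):

```python
def durbin_watson(residuals):
    """d = sum_t (e_t - e_{t-1})^2 / sum_t e_t^2, which lies between 0 and 4.
    d near 2 suggests no autocorrelation; d near 0, positive; d near 4, negative."""
    num = sum((a - b) ** 2 for a, b in zip(residuals[1:], residuals[:-1]))
    den = sum(e * e for e in residuals)
    return num / den
```

Constant-sign residuals push d toward 0 (positive autocorrelation), while strictly alternating residuals push it toward 4 (negative autocorrelation); the computed d is then compared with the tabulated bounds as described above.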
4. Remedies of autocorrelation.
- Find out whether the autocorrelation is pure and not the result of mis-specification of the model. Sometimes we observe patterns in the residuals because the model is mis-specified, that is to say it excludes some important variables or its functional form is incorrect.
- Transformation of the original model: If it is pure autocorrelation, one can apply an appropriate transformation to the original model so that the transformed model no longer suffers from pure autocorrelation. As in the case of heteroscedasticity, we will have to use some type of generalized least squares (GLS) method.
- The Newey-West method:
In large samples we can use the Newey-West method to obtain standard errors of the ordinary least squares (OLS) estimators that are corrected for autocorrelation. This method is an extension of White's heteroscedasticity-consistent standard-errors method.
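In practice one would use a library's HAC option for a full regression, but the core of the Newey-West correction, a Bartlett-weighted long-run variance, can be sketched for the simplest estimator, the sample mean (function name and pure-Python shape are my own):

```python
def newey_west_variance(x, max_lag):
    """Bartlett-weighted long-run variance estimate:
    gamma_0 + 2 * sum_{l=1}^{L} (1 - l/(L+1)) * gamma_l,
    where gamma_l is the lag-l sample autocovariance of x."""
    n = len(x)
    mean = sum(x) / n
    d = [v - mean for v in x]

    def gamma(l):
        return sum(d[t] * d[t - l] for t in range(l, n)) / n

    lrv = gamma(0)
    for l in range(1, max_lag + 1):
        lrv += 2 * (1 - l / (max_lag + 1)) * gamma(l)
    return lrv

# HAC standard error of the sample mean of x would be
# sqrt(newey_west_variance(x, L) / len(x))
```

The weights shrink the contribution of higher lags, which is what keeps the corrected variance estimate positive.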
- Ordinary least squares (OLS).
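The GLS-type transformation mentioned among the remedies can be sketched with Cochrane-Orcutt quasi-differencing, one standard instance of it; the function name is my own, and rho would in practice be estimated from the residuals (for example via the Durbin-Watson statistic, rho ≈ 1 − d/2):

```python
def cochrane_orcutt_transform(y, x, rho):
    """Quasi-difference the data: y*_t = y_t - rho*y_{t-1}, x*_t = x_t - rho*x_{t-1}.
    OLS on the starred variables is free of AR(1) autocorrelation if rho is right.
    In a full implementation the intercept column is scaled by (1 - rho) as well."""
    y_star = [y[t] - rho * y[t - 1] for t in range(1, len(y))]
    x_star = [x[t] - rho * x[t - 1] for t in range(1, len(x))]
    return y_star, x_star
```

Running OLS on the transformed series then yields efficient estimates, which is the point of the GLS remedy above.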