Temple University

Introductory Econometrics

Basic Time Series Analysis

Some facts:

  1. At any point in time, a quantity like household spending on Pringles is a random variable.
  2. Time flows in only one direction.
  3. Since household spending on Pringles is a random variable and time flows in only one direction, we only ever get to see one realization of the data; the experiment cannot be repeated.
  4. In effect our time series sample is composed of a sequence of individual draws from the probability distribution that exists at each point in time. There is an ensemble of distributions and we get a data point from each member of the ensemble.
  5. In cross section data all the sample observations are drawn from the same distribution that exists at that point in time, so we can use the sample drawn from a single distribution to infer something about that distribution's parameters.
  6. In order to make headway in our job of inference we need to introduce the idea of stationarity.


There are two flavors of stationarity. A process is strictly stationary if the joint distribution of (z_{t_1}, ..., z_{t_k}) is identical to that of (z_{t_1+h}, ..., z_{t_k+h}) for every h; shifting the window in time changes nothing about the distribution. A process is weakly (or covariance) stationary if E(z_t) and Var(z_t) are constant over time and Cov(z_t, z_{t-h}) depends only on h, the distance between the two points in time, and not on t.

Weak Dependence:

In the definition of weak stationarity we said that Cov(z_t, z_{t-h}) depends on h, the difference between the two points in time, but not on t, the absolute point in time. The notion of weak dependence goes a little further: in order to invoke the Central Limit Theorem in time series data we must assume that Cov(z_t, z_{t-h}) → 0 as h gets large. The two concepts are distinct. A weakly stationary series need not be weakly dependent, and the reverse implication fails too: a trended (see below) series is not stationary, but it can be weakly dependent.
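As an illustrative sketch (the AR(1) process, its coefficient 0.5, and the acf helper below are my own choices, not from the text), a stationary AR(1) shows how the correlation dies out as h grows:

```python
import numpy as np

# Stationary AR(1): z_t = 0.5 z_{t-1} + e_t, for which
# Corr(z_t, z_{t-h}) = 0.5**h, vanishing as h grows -- weak dependence.
rng = np.random.default_rng(0)
T = 20_000
e = rng.normal(size=T)
z = np.zeros(T)
for t in range(1, T):
    z[t] = 0.5 * z[t - 1] + e[t]

def acf(x, h):
    """Sample autocorrelation of x at lag h."""
    x = x - x.mean()
    return float((x[h:] * x[:-h]).sum() / (x * x).sum())
```

With 20,000 observations the sample autocorrelations sit very close to 0.5^h, and are indistinguishable from zero by h = 20.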

Finite Distributed Lag Models

Our model now has both contemporaneous and lagged observations on the independent variable:

y_t = α_0 + δ_0 z_t + δ_1 z_{t-1} + ... + δ_q z_{t-q} + u_t

With reference to the above model can you explain the impact multiplier and the long run multiplier?
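A simulation sketch may help fix ideas (the coefficients 1, 0.5, 0.3, 0.2 are invented for illustration): estimate the FDL model by OLS, then read off the impact multiplier as the coefficient on z_t and the long run multiplier as the sum of all the estimated δ's.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
z = rng.normal(size=T)
e = rng.normal(scale=0.1, size=T)

# Invented FDL: y_t = 1 + 0.5 z_t + 0.3 z_{t-1} + 0.2 z_{t-2} + u_t
y = np.empty(T)
for t in range(2, T):
    y[t] = 1.0 + 0.5 * z[t] + 0.3 * z[t - 1] + 0.2 * z[t - 2] + e[t]

# OLS on the contemporaneous value and two lags (the first two obs are lost).
X = np.column_stack([np.ones(T - 2), z[2:], z[1:-1], z[:-2]])
beta, *_ = np.linalg.lstsq(X, y[2:], rcond=None)

impact = float(beta[1])           # delta_0: effect of a one-period change in z
long_run = float(beta[1:].sum())  # delta_0 + delta_1 + delta_2: permanent change in z
```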

Properties of OLS in a distributed lag model

What assumptions are necessary for unbiasedness and consistency of OLS? Is OLS efficient? What assumptions are necessary for efficiency? What are the meanings of contemporaneous and strict exogeneity?[1]

Does a model like the AR(1), y_t = ρ y_{t-1} + e_t, satisfy strict exogeneity?

What is the meaning of serial correlation[2] in the error term?

Functional Forms

Is there any new news when it comes to the use of functional forms in time series specifications?

Dummy Variables

In our earlier examples of the use of dummy variables we used them to categorize, perhaps ordinally, groups in the data. What is their function in time series models?

Time Trends and Seasonality

1. Trends

Linear trend: y_t = α_0 + α_1 t + u_t. How do you interpret the 'slope' coefficient?

Exponential trend: log(y_t) = β_0 + β_1 t + u_t. How do you interpret the 'slope' coefficient in this specification?
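Both specifications can be fit by OLS on a constant and a time index. In the sketch below (all numbers invented), the slope in the linear specification recovers the change in y per period, while the slope in the log specification recovers the approximate per-period growth rate:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
t = np.arange(T)

# Invented data: a linear trend growing 0.5 units per period and an
# exponential trend growing about 2% per period.
y_lin = 3.0 + 0.5 * t + rng.normal(scale=0.2, size=T)
y_exp = np.exp(0.1 + 0.02 * t + rng.normal(scale=0.01, size=T))

X = np.column_stack([np.ones(T), t])
b_lin, *_ = np.linalg.lstsq(X, y_lin, rcond=None)          # slope: units per period
b_exp, *_ = np.linalg.lstsq(X, np.log(y_exp), rcond=None)  # slope: growth rate
```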


2. Common or Shared Trends

Don't mistake for causation the fact that two variables may exhibit trends in the same or opposite directions; two series can look related simply because both are trending.

3. Seasonality

Retail sales of candy canes are always greater in December than in other months of the year. Therefore, in discussing the increase in candy cane sales we want to be sure to compare the change in sales after adjusting for the fact that sales are always higher in December.
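A seasonal dummy makes the adjustment mechanical. In this invented sketch, monthly sales get a December bump of 40 units; regressing on a December dummy estimates the bump, which can then be netted out:

```python
import numpy as np

rng = np.random.default_rng(0)
n_years = 30
months = np.tile(np.arange(1, 13), n_years)
dec = (months == 12).astype(float)

# Invented data: baseline sales of 100 with a December bump of 40.
sales = 100.0 + 40.0 * dec + rng.normal(scale=5.0, size=months.size)

X = np.column_stack([np.ones(months.size), dec])
b, *_ = np.linalg.lstsq(X, sales, rcond=None)

december_effect = float(b[1])             # estimated seasonal bump
adjusted = sales - december_effect * dec  # seasonally adjusted series
```

After adjustment the December and non-December means coincide, so any remaining change in sales is not just the calendar.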

Highly Persistent Time Series

A strongly dependent, or highly persistent, series is one that is said to have a unit root. Such series are not stationary. An example would be a random walk, in which this period's realization of the variable is the best predictor of next period's realization:

y_t = y_{t-1} + e_t, where e_t is an i.i.d. mean-zero innovation.

The logical implication of this is that there is no systematic component to the variable of interest that can be learned in order to gain an advantage over others trying to make the same prediction. In particular, see Burton Malkiel's A Random Walk Down Wall Street.

Another example of strong dependence is a random walk with drift: y_t = α_0 + y_{t-1} + e_t.


There are transformations that can be used to solve the problem of strong dependence. The most common is differencing the data with the difference operator: Δy_t = y_t − y_{t-1}. For the random walk, Δy_t = e_t, which is white noise and therefore stationary and weakly dependent.
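A quick sketch (invented simulation): differencing a random walk recovers its i.i.d. innovations, turning a highly persistent series into a weakly dependent one.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1_000
e = rng.normal(size=T)
y = np.cumsum(e)        # random walk: y_t = y_{t-1} + e_t

dy = np.diff(y)         # first difference: dy_t = y_t - y_{t-1} = e_t

def acf1(x):
    """Sample first-order autocorrelation."""
    x = x - x.mean()
    return float((x[1:] * x[:-1]).sum() / (x * x).sum())
```

The level series has first-order autocorrelation near one; the differenced series has autocorrelation near zero.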

To see how we test (we are, after all, in the business of testing hypotheses) for the presence of a unit root, strong dependence, you'll have to read Chapter 18 in Wooldridge.

Serially Correlated Errors: What is the disease, what are the consequences, how do we test for it, and how do we cure it?

In olden times we had a model of the sort

y_t = β_0 + β_1 x_t + u_t

and the error had the properties that

E(u_t | X) = 0

Var(u_t | X) = σ², with Cov(u_t, u_s) = 0 for t ≠ s.

The times they are a-changin' and we now believe

u_t = ρ u_{t-1} + e_t, with |ρ| < 1,

E(e_t | u_{t-1}, u_{t-2}, ...) = 0, this is not as stringent as strict exogeneity,

Var(e_t) = σ_e²

The first expression tells us that the error this period is a fraction of last period's error plus a white noise error, e. White noise simply means that the error term e has a mean of zero, is uncorrelated with the independent variables, and has constant variance.

In this brave new world OLS is still unbiased and consistent as long as the right hand side variables are strictly exogenous (see above or footnote 1). If we rely only on contemporaneous exogeneity (no correlation between the regressors and the error) then OLS is still consistent, though no longer unbiased.

To repeat something that we have said many times before,

β̂_1 = β_1 + Σ_{t=1}^{T} (x_t − x̄) u_t / SST_x, where SST_x = Σ_{t=1}^{T} (x_t − x̄)².

And the variance of this estimator is

Var(β̂_1) = (σ² / SST_x) [ 1 + 2 Σ_{j=1}^{T−1} ρ^j Σ_{t=1}^{T−j} (x_t − x̄)(x_{t+j} − x̄) / SST_x ].

The obvious consequence is that OLS is not efficient because it does not account for all of the stuff in square brackets. It is also paramount that you remember that by default any regression package reports the coefficient variance as σ² / SST_x, which is clearly wrong whenever ρ ≠ 0.
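A Monte Carlo sketch (all settings invented) makes the point concrete: with a persistent regressor and AR(1) errors, the default OLS standard error badly understates the slope's true sampling variability.

```python
import numpy as np

rng = np.random.default_rng(0)
T, reps = 200, 2_000

def ar1(rho, size):
    """Simulate a mean-zero AR(1) series."""
    e = rng.normal(size=size)
    z = np.zeros(size)
    for t in range(1, size):
        z[t] = rho * z[t - 1] + e[t]
    return z

x = ar1(0.8, T)                       # persistent regressor, fixed across replications
X = np.column_stack([np.ones(T), x])
XtX_inv = np.linalg.inv(X.T @ X)

slopes, naive_ses = [], []
for _ in range(reps):
    u = ar1(0.8, T)                   # AR(1) errors, rho = 0.8
    y = 1.0 + 2.0 * x + u
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ b
    s2 = (r @ r) / (T - 2)
    naive_ses.append(np.sqrt(s2 * XtX_inv[1, 1]))   # the default printout
    slopes.append(b[1])

true_sd = float(np.std(slopes))        # actual sampling variability of the slope
avg_naive = float(np.mean(naive_ses))  # what the naive formula claims
```

In this design the reported standard error comes in at roughly half the slope's true sampling standard deviation.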

Testing for serially correlated errors

I. A t-test with strictly exogenous regressors

Again, in this modern era we suppose the AR(1) error structure u_t = ρ u_{t-1} + e_t (footnote 2) to be true, but hope that it is not. With this in mind, our null and alternate hypotheses are

H_0: ρ = 0    H_1: ρ ≠ 0

1. Run the OLS regression for your posited model and save the residual series as resid.

2. Run the regression of resid_t on resid_{t-1}.

3. Do a t-test on your estimate of rho.
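The three steps above can be sketched as follows (the data generating process, with ρ = 0.6, is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
x = rng.normal(size=T)
e = rng.normal(scale=0.5, size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.6 * u[t - 1] + e[t]      # AR(1) errors with rho = 0.6
y = 2.0 + 1.5 * x + u

# Step 1: OLS on the posited model, save the residuals.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Step 2: regress resid_t on resid_{t-1}.
r1, r0 = resid[1:], resid[:-1]
rho_hat = float((r0 @ r1) / (r0 @ r0))

# Step 3: t-test on rho_hat.
v = r1 - rho_hat * r0
s2 = (v @ v) / (len(r1) - 1)
t_stat = float(rho_hat / np.sqrt(s2 / (r0 @ r0)))
```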

II. The Durbin-Watson Under Classical Assumptions

The null and alternate are as stated above. The DW statistic, DW = Σ_{t=2}^{T} (ê_t − ê_{t−1})² / Σ_{t=1}^{T} ê_t² ≈ 2(1 − ρ̂), is computed for you and reported in the regression output of most software packages. The DW statistic is centered on 2. There are both upper and lower critical values, as in a z-score or t-stat, notwithstanding the remarks in your text. The null is rejected for very large or very small observed values of the test statistic.
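The statistic itself is easy to compute by hand. Here is a sketch with invented series showing DW near 2 for uncorrelated residuals and near 0 for highly persistent ones:

```python
import numpy as np

def durbin_watson(resid):
    """DW = sum of squared successive differences / sum of squared residuals.
    Roughly 2(1 - rho_hat), so it is centered on 2 under the null."""
    return float(np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2))

rng = np.random.default_rng(0)
white = rng.normal(size=500)      # serially uncorrelated: DW near 2
persistent = np.cumsum(white)     # extreme positive correlation: DW near 0
```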

III. Testing for Serial Correlation when the regressors are not strictly exogenous


1. Run the OLS regression for your posited model and save the residual series as resid.

2. Run the regression of resid_t on x_{t1}, ..., x_{tk}, and resid_{t-1} (include an intercept).

3. Do a t-test on your estimate of rho.
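Sketch of the procedure (invented data; the key difference from the strictly exogenous case is that the lagged-residual regression now also includes the regressors):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
x = rng.normal(size=T)
e = rng.normal(scale=0.5, size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + e[t]
y = 1.0 + 2.0 * x + u

# Step 1: OLS, save residuals.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Step 2: auxiliary regression of resid_t on a constant, x_t, and resid_{t-1}.
A = np.column_stack([np.ones(T - 1), x[1:], resid[:-1]])
g, *_ = np.linalg.lstsq(A, resid[1:], rcond=None)
rho_hat = float(g[2])

# Step 3: t-test on rho_hat.
v = resid[1:] - A @ g
s2 = (v @ v) / (A.shape[0] - A.shape[1])
se = np.sqrt(s2 * np.linalg.inv(A.T @ A)[2, 2])
t_stat = float(rho_hat / se)
```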

IV. Higher Order Serial Correlation -- Breusch-Godfrey Test

Now you believe that this period's error depends, say, on last period's error and the error before that. Your supposition can be written as

u_t = ρ_1 u_{t-1} + ρ_2 u_{t-2} + e_t,

but we hope that it is not true. Now our null and alternate are

H_0: ρ_1 = 0 and ρ_2 = 0    H_1: at least one of ρ_1, ρ_2 is nonzero

1. Run the OLS regression for your posited model and save the residual series as resid.

2. Run the regression of resid_t on x_{t1}, ..., x_{tk}, resid_{t-1}, and resid_{t-2}.

3. Compute the coefficient of determination, R², for the auxiliary regression.

4. One's first guess might be to construct an F-test, but what we want is the LM statistic for serial correlation, LM = (T − 2)R², which is distributed chi-square with 2 degrees of freedom under the null.
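Sketch of the Breusch-Godfrey mechanics with an invented AR(2) error (ρ_1 = 0.5, ρ_2 = 0.25): run the auxiliary regression with two lagged residuals, form LM = (T − 2)R², and compare with the chi-square(2) critical value (5.99 at the 5% level).

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
e = rng.normal(scale=0.5, size=T)
u = np.zeros(T)
for t in range(2, T):
    u[t] = 0.5 * u[t - 1] + 0.25 * u[t - 2] + e[t]   # AR(2) errors
y = 1.0 + 2.0 * x + u

# Step 1: OLS of the posited model, save residuals.
X = np.column_stack([np.ones(T), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Step 2: auxiliary regression on a constant, x_t, and two lagged residuals.
A = np.column_stack([np.ones(T - 2), x[2:], resid[1:-1], resid[:-2]])
g, *_ = np.linalg.lstsq(A, resid[2:], rcond=None)

# Steps 3 and 4: R-squared of the auxiliary regression and the LM statistic.
dep = resid[2:]
r2 = 1.0 - np.sum((dep - A @ g) ** 2) / np.sum((dep - dep.mean()) ** 2)
lm = (T - 2) * r2    # compare with chi-square(2) critical value, 5.99 at 5%
```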

Correcting for Serial Correlation with Strictly Exogenous Regressors

Once again our starting point is the model y_t = β_0 + β_1 x_t + u_t with the error structure u_t = ρ u_{t-1} + e_t (footnote 2). If we wanted to get rid of the autocorrelation in the error we might try transforming the model. Suppose that we know ρ. In that case we could lag our model, multiply through by ρ, then subtract the result from our starting model:

y_t = β_0 + β_1 x_t + u_t

ρ y_{t-1} = ρ β_0 + ρ β_1 x_{t-1} + ρ u_{t-1}

y_t − ρ y_{t-1} = β_0(1 − ρ) + β_1(x_t − ρ x_{t-1}) + e_t

In doing this operation we lose the first observation, so our dataset runs t = 2, ..., T. The consequence is that we need a little correction in order to get the first observation back into the data set, the details of which are in your textbook. The more serious question is how we come up with a best guess for ρ. Operationally this is just some button clicks in any of the usual statistical programs, but we can describe how it is done. Begin by estimating the proposed model and save the residuals. Use the OLS residuals to estimate ρ. Use the estimated ρ in the quasi-differenced equation to reestimate the intercept and slope coefficients.
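The whole recipe (estimate, get ρ̂ from the residuals, quasi-difference, re-estimate) can be sketched as follows; the data generating process with ρ = 0.7 is invented:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
e = rng.normal(scale=0.5, size=T)
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.7 * u[t - 1] + e[t]     # AR(1) errors, rho = 0.7
y = 1.0 + 2.0 * x + u

# Step 1: OLS, save residuals.
X = np.column_stack([np.ones(T), x])
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b_ols

# Step 2: estimate rho from resid_t on resid_{t-1}.
rho = float((resid[:-1] @ resid[1:]) / (resid[:-1] @ resid[:-1]))

# Step 3: quasi-difference (losing observation 1) and re-estimate.
# The intercept column is (1 - rho), so b_fgls[0] estimates beta_0 directly.
y_star = y[1:] - rho * y[:-1]
X_star = np.column_stack([np.full(T - 1, 1.0 - rho), x[1:] - rho * x[:-1]])
b_fgls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
```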




1. Strict exogeneity requires that the error be uncorrelated not just with current and past values of z, but also with future values of z.

2. Serial correlation of the error means Corr(u_t, u_s) ≠ 0 for some t ≠ s; our working case is u_t = ρ u_{t-1} + e_t.