3. Stationary Increments#

Logarithms of many economic time series that appear to display stochastic growth can be modeled as having stationary increments. Multivariate versions of these models possess stochastic process versions of balanced growth paths. Applied econometricians seek permanent shocks that contribute to such growth. Furthermore, we shall see that it is convenient to pose central limit theory in terms of processes with stationary increments. The mathematical formulation in this chapter opens the door to studying these topics.

3.1. Basic setup#

We adopt assumptions from Inventing an Infinite Past that allow an infinite past and again let \( {\mathfrak A}\) be a sub-sigma algebra of \({\mathfrak F}\) and

\[{\mathfrak A}_t = \left\{ \Lambda_t \in {\mathfrak F} : \Lambda_t = \{ \omega \in \Omega : {\mathbb S}^t(\omega) \in \Lambda \} \textrm{ for some } \Lambda \in {\mathfrak F} \right\} .\]

Let \(X\) be a scalar measurement function. Assume that \(Y_0\) is \({\mathfrak A}_0\) measurable and consider a scalar process \(\{Y_t : t=0,1,... \}\) with stationary increments \(\{X_t\}\):

(3.1)#\[Y_{t+1} - Y_t = X_{t+1}\]

for \(t=0,1, \ldots\). Let

\[\nu = E\left(X_{t+1} \vert {\mathfrak I} \right),\]

and

\[U_{t+1} = X_{t+1} - E\left(X_{t+1} \vert {\mathfrak A}_t \right).\]

We can interpret the above equations as providing two contributions to the \(\{Y_{t}: t \ge 0\}\) process. Component \(U_{t+1}\) is unpredictable and represents new information about \(Y_{t+1}\) that arrives at date \(t+1\). Component \(\nu\) is the trend rate of growth or decay in \(\{Y_{t} : t \ge 0\}\) conditioned on the invariant events. In the following sections, we present a full decomposition of a stationary increment process that will be useful both in connecting to sources of permanent versus transitory shocks and in deriving central limit theorems.
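To fix ideas, here is a minimal numerical sketch of this setup. The AR(1) specification for the increment is our own illustrative choice, not from the text: it makes \(E(X_{t+1} \vert {\mathfrak A}_t) = \nu + \rho(X_t - \nu)\) explicit, so the unpredictable component \(U_{t+1}\) coincides with the underlying shock.

```python
import numpy as np

# Illustrative sketch (our construction): the increment X_{t+1} follows a
# Gaussian AR(1) with unconditional mean nu, so E(X_{t+1} | A_t) equals
# nu + rho*(X_t - nu) and the unpredictable part U_{t+1} is the shock W_{t+1}.
rng = np.random.default_rng(0)
nu, rho, T = 0.02, 0.8, 10_000

W = rng.standard_normal(T + 1)
X = np.empty(T + 1)
X[0] = nu
for t in range(1, T + 1):
    X[t] = nu + rho * (X[t - 1] - nu) + W[t]

# Y_0 = 0 and Y_{t+1} - Y_t = X_{t+1}, as in equation (3.1)
Y = np.concatenate(([0.0], np.cumsum(X[1:])))

# U_{t+1} = X_{t+1} - E(X_{t+1} | A_t) recovers W_{t+1} exactly here
U = X[1:] - (nu + rho * (X[:-1] - nu))
print(np.allclose(U, W[1:]))  # True
```

Cumulating the increments and subtracting the one-step forecast recovers the new-information component exactly, by construction.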

3.2. A martingale decomposition#

A special class of stationary increment processes called additive martingales interests us.

Definition 3.1

The process \(\{Y_t^m : t=0,1,... \}\) is said to be an additive martingale relative to \(\{ {\mathfrak A}_{t} : t=0,1,... \}\) if for \(t=0,1,... \)

  • \(Y_t^m\) is \({\mathfrak A}_{t}\) measurable, and

  • \(E\left(Y_{t+1}^m \vert {\mathfrak A}_t \right) = Y_t^m\) .

Notice that by the Law of Iterated Expectations, for a martingale \(\{Y_{t}^m : t \ge 0\}\), best forecasts satisfy:

\[E \left (Y_{t+j}^m \mid {\mathfrak A}_t \right) = Y_t^m\]

for \(j \ge 1\). Under suitable additional restrictions on the increment process \(\{X_t : t \ge 0 \}\), we can deploy a construction of Gordin [1969] to show that the \(\{X_t\}\) process contributes a martingale component to the \(\{Y_t : t=0,1, ... \}\) process.[1] Let \({\mathcal H}\) denote the set of all scalar random variables \(X\) such that \(E(X^2) < \infty\) and such that[2]

\[H_t = \sum_{j=0}^\infty E\bigl( X_{t+j} - \nu \vert {\mathfrak A}_t \bigr)\]

is well defined as a mean-square convergent series. Convergence of the infinite sum on the right side limits temporal dependence of the process \(\{ X_t \}\). For example, it can exclude so-called long memory processes.[3]

Construct the one-period ahead forecast of \(H_{t+1}\):

\[H_t^+ = E\left( H_{t+1} \mid {\mathfrak A}_{t} \right)\]

Notice that

\[X_t - \nu = H_t - H_t^+ = G_t + \left( H_{t-1}^+ - H_t^+ \right)\]

where

(3.2)#\[G_{t} = H_{t} - H_{t-1}^+ = H_t - E\left( H_{t} \mid {\mathfrak A}_{t-1} \right). \]

Since \(G_t\) is a forecast error,

\[E \left( G_{t+1} \vert {\mathfrak A}_{t} \right) = 0.\]

Assembling these parts, we have

(3.3)#\[Y_{t+1} - Y_t = X_{t+1} = \nu + G_{t+1} + H_t^+ - H_{t+1}^+ .\]

Let

\[Y^m_t = \sum_{j=1}^t G_j .\]

Since \(Y_t^m\) is \({\mathfrak A}_{t}\) measurable, the equality

\[E \left( \sum_{j=1}^{t+1} G_j \mid {\mathfrak A}_t \right) = \sum_{j=1}^{t} G_j \]

implies that the process \(\{Y_t^m : t \ge 0 \}\) is an additive martingale.

For a given stationary increment process, \(\{Y_t : t \ge 0\}\), express the martingale increment as

(3.4)#\[G_{t} = \sum_{j=0}^\infty \left[ E\left( X_{t+j} \mid {\mathfrak A}_{t} \right) - E\left( X_{t+j} \mid {\mathfrak A}_{t-1} \right) \right] = \lim_{j \rightarrow \infty} \left[ E\left(Y_{t+j} \vert {\mathfrak A}_{t} \right) - E\left(Y_{t+j} \vert {\mathfrak A}_{t-1} \right) \right] .\]

So the increment to the martingale component of \(\{Y_t : t \ge 0 \}\) is new information about the limiting optimal forecast of \(Y_{t+j}\) as \(j \rightarrow + \infty\).

By accumulating equation (3.3) forward, we arrive at:

Proposition 3.1

If \(X\) is in \({\mathcal H}\), the stationary increments process \(\{Y_t : t=0,1,...\}\) satisfies the additive decomposition

\[Y_{t} = \underbrace{t\nu}_{\textbf{trend}} + \underbrace{Y_t^m}_{\textbf{martingale}} - \underbrace{H_t^+}_{\textbf{stationary}} + \underbrace{Y_0 + H_0^+}_{\textbf{invariant}} .\]

The stationary increment process \(\{Y_{t}^m : t\ge 0 \}\) is the martingale component, with \(Y_0^m = 0\). The component \(\{H_{t}^+\}\) is stationary, and the component \(Y_0 + H_0^+\) is constant over time.

Proposition 3.1 decomposes a stationary increment process into a linear time trend, a martingale, a stationary (and hence transitory) component, and a time-invariant component. A permanent shock is the increment to the martingale. The martingale and transitory contributions are typically correlated.
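The decomposition can be verified numerically in a case where the Gordin objects have closed forms. For an AR(1) increment \(X_{t+1} - \nu = \rho(X_t - \nu) + W_{t+1}\), a specification we choose for illustration, one has \(H_t = (X_t - \nu)/(1-\rho)\), \(H_t^+ = \rho(X_t - \nu)/(1-\rho)\), and \(G_t = W_t/(1-\rho)\):

```python
import numpy as np

# Sketch: check Proposition 3.1 exactly for an AR(1) increment, where
# H_t = (X_t - nu)/(1 - rho), H_t^+ = rho*(X_t - nu)/(1 - rho), and
# G_t = W_t/(1 - rho).  The AR(1) specification is our illustrative choice.
rng = np.random.default_rng(1)
nu, rho, T = 0.05, 0.6, 500

W = rng.standard_normal(T + 1)
X = np.empty(T + 1)
X[0] = nu
for t in range(1, T + 1):
    X[t] = nu + rho * (X[t - 1] - nu) + W[t]

Y = np.concatenate(([0.0], np.cumsum(X[1:])))    # Y_0 = 0

H_plus = rho * (X - nu) / (1 - rho)              # stationary component H_t^+
G = W / (1 - rho)                                # martingale increments G_t
Ym = np.concatenate(([0.0], np.cumsum(G[1:])))   # Y_t^m = sum_{j=1}^t G_j

# trend + martingale - stationary + invariant (here Y_0 = 0, H_0^+ = 0)
t_grid = np.arange(T + 1)
recon = t_grid * nu + Ym - H_plus + (0.0 + H_plus[0])
print(np.allclose(recon, Y))  # True
```

The reconstruction matches \(Y_t\) exactly because equation (3.3) holds term by term for this specification.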

Example 3.1

(Moving-average increment process) Consider again the Example 1.7 moving-average process:

(3.5)#\[X_{t} = \sum_{j=0}^\infty \alpha_j \cdot W_{t-j} .\]

Use this \(\{X_t\}\) process as the increment for \(\{ Y_t : t \ge 0 \}\) in formula (3.1). New information about the unpredictable component of \(X_{t+j}\) for \(j \ge 0\) that arrives at date \(t\) is

\[E \left( X_{t+j} \mid {\mathfrak A}_{t} \right)- E \left( X_{t+j} \mid {\mathfrak A}_{t-1} \right)= \alpha_{j} \cdot W_{t}\]

Summing these terms over \(j\) gives

\[G_{t} = \alpha(1) \cdot W_{t} \]

where

\[\alpha(1) = \sum_{j=0}^\infty \alpha_j\]

provided that the coefficient sequence \(\{ \alpha_j : j\ge 0\}\) is summable, a condition that restricts temporal dependence of the increment process \(\{X_t\}\). Indeed, it is possible for \(\alpha(1)\) to be infinite or not well defined even while

\[ \sum_{j=0}^\infty |\alpha_j|^2 < \infty,\]

which ensures that \(X_t\) is well defined. This possibility opened the door to the literature on long-memory processes that allow \(\alpha(1)\) to be infinite, as discussed in Granger and Joyeux [1980] and elsewhere.

In what follows, we presume that \(\alpha(1)\) is finite. This sum of the coefficients \(\{\alpha_j: j\ge 0 \}\) in moving-average representation (3.5) for the first difference \(Y_{t+1} - Y_t = X_{t+1}\) of \(\{ Y_t : t=0,1,\ldots \}\) gives the permanent effect of \(W_{t+1}\) on current and future values of the level of \(Y\), i.e., the effect on \(\lim_{j\rightarrow + \infty} Y_{t+j}\). Models of Blanchard and Quah [1989] and Shapiro and Watson [1988] build on this property. The variance of the random variable \(\alpha(1) \cdot W_{t+1}\) conditioned on the invariant events in \({\mathfrak I}\) is \(|\alpha(1)|^2\). In contrast, the overall variance of \(X_{t}\) is

\[\sum_{j=0}^\infty|\alpha_j|^2 \ne |\alpha(1)|^2.\]

To form a permanent-transitory shock decomposition, construct the permanent shock as:

\[W_{t+1}^p = \left( \frac {1}{|\alpha(1)|} \right) \alpha(1) \cdot W_{t+1}\]

where we introduce an additional scaling so the permanent shock has variance one. Form

\[W_{t+1}^{tr} = W_{t+1} - \left( \frac {1}{|\alpha(1)|} \right) \alpha(1) W_{t+1}^p \]

which by construction will be uncorrelated with \(W_{t+1}^p\). Since the covariance matrix of \(W_{t+1}^{tr}\) will be singular, the components of \(W_{t+1}^{tr}\) can be expressed as linear combinations of a vector of transitory shocks with unit variances.

Example 3.2

This is a process in which \(W_t\) has transient but no permanent effects on future \(Y\)’s. Let \(\alpha_0 = 1\) and \(\alpha_j = (\lambda -1) \lambda^{j-1}\) for \(j \geq 1\) and \(-1 < \lambda < 1\). Construct the power series

(3.6)#\[\alpha(\zeta) = 1 - \sum_{j=1}^\infty ( 1- \lambda ) \lambda^{j-1} \zeta^j = 1 - {\frac { (1 - \lambda) \zeta}{ 1 - \lambda \zeta}} = {\frac {1 - \zeta}{1 - \lambda \zeta}} .\]

Evidently, \(\alpha(1) = 0\). Define

\[H_t = - \sum_{j=0}^\infty \lambda^j W_{t-j}\]

and note that since (3.6) is satisfied

\[Y_{t+1} - Y_t = -H_{t+1} + H_t.\]

The process \(\{ Y_t : t = 0,1,... \}\) is stationary provided that \(Y_0 = - H_0\), ensuring that \(Y_t = - H_t\) for all \(t \ge 0\).
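The claim that \(Y_t = -H_t\) can be checked by simulation. Here \(\lambda = 0.9\) is our illustrative choice, and we truncate the infinite past at \(t = 0\):

```python
import numpy as np

# Sketch of Example 3.2: H_t = -sum_j lambda^j W_{t-j} obeys the recursion
# H_t = lam * H_{t-1} - W_t, and X_{t+1} = -H_{t+1} + H_t.  lam = 0.9 is
# our illustrative choice; the infinite past is truncated at t = 0.
rng = np.random.default_rng(3)
lam, T = 0.9, 5_000

W = rng.standard_normal(T + 1)
H = np.empty(T + 1)
H[0] = -W[0]
for t in range(1, T + 1):
    H[t] = lam * H[t - 1] - W[t]

X = H[:-1] - H[1:]              # X_{t+1} = -H_{t+1} + H_t
Y = -H[0] + np.cumsum(X)        # Y_0 = -H_0  =>  Y_t = -H_t for all t
print(np.allclose(Y, -H[1:]))   # True
```

The increments telescope, so initializing at \(Y_0 = -H_0\) makes the level process coincide with the stationary process \(-H_t\).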

3.3. Central limit approximation#

Example 3.1 starts from a moving average of martingale differences that is used as an increment \(\{X_t \}\) to a \(\{Y_t: t \ge 0\}\) process, after which it constructs a process of innovations to the martingale component of the \(\{Y_t: t \ge 0 \}\) process. That analysis illustrates the workings of an operator \(\mathbb{D}\) that maps an admissible increment process in \(\mathcal{H}\) into the innovation in a martingale component. To construct \(\mathbb{D}\), let \(\mathcal{G}\) be the set of all random variables \(G\) with finite second moments that satisfy the conditions that \(G\) is \(\mathfrak{A}\) measurable and that \(E(G_1 \vert \mathfrak{A}) = 0\) where \(G_1 = G \circ \mathbb{S}\). Define \(\mathbb{D}: \mathcal{H} \rightarrow \mathcal{G}\) via

\[\mathbb{D}(X) = G .\]

Both \(\mathcal{G}\) and \(\mathcal{H}\) are linear spaces of random variables and \(\mathbb{D}\) is a linear transformation. The operator \(\mathbb{D}\) plays a prominent role in a central limit approximation.

To form a central limit approximation, construct the following scaled partial sum that nets out trend growth

\[{\frac 1 {\sqrt{t}}}(Y_t - \nu t) = {\frac 1 {\sqrt{t}}} Y_t^m - {\frac 1 {\sqrt t}} H_{t}^+ + {\frac 1 {\sqrt{t}}} (H_0^+ + Y_0) \]

where

\[Y_t^m= \sum_{j=1}^t G_j\]

From the central limit theorem for martingales of Billingsley [1961],

\[{\frac 1 {\sqrt{t}}} Y_t^m \Rightarrow \mathcal{N} \left(0, E\left[ \mathbb{D}(X)^2 \vert \mathfrak{I} \right] \right)\]

where \(\Rightarrow\) denotes weak convergence, meaning convergence in distribution. Clearly, \(\{(1/ {\sqrt t}) H_{t}^+\}\) and \(\{(1/{\sqrt{t}}) (H_0^+ + Y_0) \}\) both converge in mean square to zero.

Proposition 3.2

For all stationary increment processes \(\{Y_t : t=0,1,2, \ldots\}\) represented by \(X\) in \(\mathcal{H}\),

\[{\frac 1 {\sqrt{t}}}(Y_t - \nu t) \Rightarrow {\mathcal{N}} \left( 0, E\left[ \mathbb{D}(X)^2 \vert {\mathfrak{I}} \right] \right) .\]

Furthermore,

\[E\left[ \mathbb{D}(X)^2 \vert {\mathfrak{I}} \right] = \lim_{t \rightarrow \infty} E \left[ \left({\frac 1 {\sqrt{t}}} \left(Y_t - t \nu \right) \right)^2 \Bigl| {\mathfrak{I}} \right].\]
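A Monte Carlo sketch of Proposition 3.2 under an illustrative AR(1) specification \(X_{t+1} - \nu = \rho(X_t - \nu) + W_{t+1}\) with \(W_t \sim \mathcal{N}(0,1)\), for which \(\mathbb{D}(X) = W/(1-\rho)\) and the limiting variance is \(1/(1-\rho)^2\):

```python
import numpy as np

# Monte Carlo sketch (our AR(1) specification, not from the text): simulate
# N paths of length T and compare the variance of the scaled, detrended
# level (1/sqrt(T)) (Y_T - nu*T) with the limiting value 1/(1-rho)^2.
rng = np.random.default_rng(4)
nu, rho, T, N = 0.1, 0.5, 2_000, 4_000

W = rng.standard_normal((N, T))
dev = np.zeros((N, T))                      # dev_t = X_t - nu
for t in range(1, T):
    dev[:, t] = rho * dev[:, t - 1] + W[:, t]

scaled = dev.sum(axis=1) / np.sqrt(T)       # (1/sqrt(T)) (Y_T - nu*T)

print(np.var(scaled))                        # approximately 1/(1-rho)^2 = 4
```

The cross-sectional variance of the scaled detrended level approaches the long-run variance \(E[\mathbb{D}(X)^2 \vert \mathfrak{I}]\), not the one-period variance of \(X_t\).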

3.4. Cointegration#

Linear combinations of stationary increment processes \(Y_t^1\) and \(Y_t^2\) have stationary increments. For real-valued scalars \(r_1\) and \(r_2\), form

\[Y_{t} = r_1 Y_{t}^1 + r_2 Y_{t}^2\]

where

\[\begin{split}\begin{align*} Y_{t+1}^1 - Y_t^1 & = X_{t+1}^1 \\ Y_{t+1}^2 - Y_t^2 & = X_{t+1}^2. \end{align*}\end{split}\]

The increment in \(\{Y_t : t=0, 1, \ldots \}\) is

\[X_{t+1} = r_1 X_{t+1}^1 + r_2 X_{t+1}^2\]

and

\[Y_0 = r_1 Y_0^1 + r_2 Y_0^2.\]

The Proposition 3.1 martingale component of \(\{ Y_t : t \ge 0 \}\) is the corresponding linear combination of the martingale components of \(\{ Y_t^1 : t =0,1,...\}\) and \(\{ Y_t^2 : t =0,1,...\}\). The Proposition 3.1 trend component of \(\{ Y_t : t =0,1, \ldots \}\) is the corresponding linear combination of the trend components of \(\{ Y_t^1 : t =0,1, \ldots \}\) and \(\{ Y_t^2 : t =0,1, \ldots \}\).

Proposition 3.1 sheds light on the cointegration concept of Engle and Granger [1987], which is associated with linear combinations of stationary increment processes whose trend and martingale components are both zero. Engle and Granger call two processes cointegrated if there exists a linear combination of them that is stationary.[4] That situation prevails when there exist real-valued scalars \(r_1\) and \(r_2\) such that

\[\begin{split}\begin{eqnarray*} r_1 \nu_1 + r_2 \nu_2 & = & 0 \\ r_1 \mathbb{D}(X^1) + r_2 \mathbb{D}(X^2) & = & 0, \end{eqnarray*}\end{split}\]

where the \(\nu\)’s correspond to the trend components in Proposition 3.1. These two zero restrictions imply that the time trend and the martingale component for the linear combination \(Y_t\) are both zero.[5] When \(r_1 = 1\) and \(r_2 = - 1\), the component stationary increment processes \(Y_{t}^1\) and \(Y_{t}^2\) share a common growth component.

This notion of cointegration provides one way to formalize balanced growth paths in stochastic environments by determining a linear combination of growing time series from which stochastic growth is absent.
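A compact simulation of the cointegration idea, with a shared martingale component, zero trends, and all specification choices our own:

```python
import numpy as np

# Sketch: Y^1 and Y^2 share one martingale (random-walk) component, so the
# combination with r1 = 1, r2 = -1 has zero trend and zero martingale parts
# and is therefore stationary, even though each Y^i wanders.
rng = np.random.default_rng(5)
T = 10_000
M = np.cumsum(rng.standard_normal(T))   # common martingale component
e1 = rng.standard_normal(T)             # idiosyncratic stationary parts
e2 = rng.standard_normal(T)

Y1 = M + e1                             # nu_1 = 0
Y2 = M + e2                             # nu_2 = 0

spread = Y1 - Y2                        # the martingale component cancels
print(np.var(spread[: T // 2]), np.var(spread[T // 2:]))  # both close to 2
```

The spread's variance is stable across subsamples because the stochastic trend has been eliminated, while each level series has variance that grows with \(t\).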