3. Stationary Increments
\(\newcommand{\eqdef}{\stackrel{\text{def}}{=}}\)
Earlier chapters have explained why we like statistical models that generate stationary stochastic processes: stationarity brings a law of large numbers that helps us make inferences about model parameters. However, logarithms of many economic time series appear not to be stationary; instead they grow systematically. This situation motivates us to study models that generate stochastic processes with stationary increments. Multivariate versions of such models possess stochastic process versions of balanced growth paths. Applied econometricians sometimes study permanent shocks that contribute to stochastic growth. We shall also describe how to pose central limit theory in terms of processes with stationary increments.
The mathematical formulation in this chapter opens the door to studying these topics with a unified set of tools. We return to the mathematical formulation used in Chapter Laws of Large Numbers and Stochastic Processes; in the next chapter we will assume a Markov structure.
3.1. Basic setup
We adopt assumptions from Section Inventing an Infinite Past of Chapter Laws of Large Numbers and Stochastic Processes that allow an infinite past, and again let \( {\mathfrak A}\) be a subsigma algebra of \({\mathfrak F}\) and
\[
{\mathfrak A}_t \eqdef \left\{ \Lambda_t : \Lambda_t = \mathbb{S}^{-t}\left( \Lambda \right) \ \text{for some} \ \Lambda \in {\mathfrak A} \right\} .
\]
The event collection \({\mathfrak A}\) can include invariant events as well as past information.
Let \(X\) be a scalar measurement function that is \({\mathfrak A} = {\mathfrak A}_0\) measurable. Assume that \(Y_0\) is \({\mathfrak A}_0\) measurable, and consider a scalar process \(\{Y_t : t=0,1,... \}\) with stationary increments \(\{X_t\}\):
\[
Y_{t+1} - Y_t = X_{t+1} \tag{3.1}
\]
for \(t=0,1, \ldots\). Let
\[
U_{t+1} \eqdef X_{t+1} - E\left( X_{t+1} \mid {\mathfrak A}_t \right)
\]
and
\[
\eta \eqdef E\left( X_{t+1} \mid {\mathfrak I} \right) .
\]
We can interpret the above equations as isolating two distinct contributions to the \(\{Y_{t}: t \ge 0\}\) process. Component \(U_{t+1}\) is unpredictable and represents new information about \(Y_{t+1}\) that arrives at date \(t+1\). Component \(\eta\) is the trend rate of growth or decay in \(\{Y_{t} : t \ge 0\}\) conditioned on the invariant events. In the following sections, we present a full decomposition of a stationary increment process that will be useful both for distinguishing permanent from transitory shocks and for stating central limit theorems.
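To make these two components concrete, here is a minimal simulation sketch. It assumes, purely for illustration, that the increments follow a Gaussian AR(1) with unconditional mean \(\eta\); the parameter values and the names `eta_hat` and `U` are ours, not the text's.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100_000
eta, rho = 0.02, 0.5   # hypothetical trend rate and AR coefficient

# Simulate increments X_{t+1} = eta + rho*(X_t - eta) + W_{t+1}:
# a stationary AR(1) around the trend rate eta
W = rng.standard_normal(T)
X = np.zeros(T)
for t in range(1, T):
    X[t] = eta + rho * (X[t - 1] - eta) + W[t]

Y = np.cumsum(X)       # a process with stationary increments

# eta is recovered by a law of large numbers applied to the increments
eta_hat = X.mean()

# U_{t+1} = X_{t+1} - E(X_{t+1} | A_t): the unpredictable new information,
# which in this AR(1) example is exactly the underlying shock W_{t+1}
U = X[1:] - eta - rho * (X[:-1] - eta)
```

In this special case the unpredictable component \(U_{t+1}\) coincides with the shock \(W_{t+1}\), while the sample mean of the increments estimates the trend rate \(\eta\).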
3.2. A martingale decomposition
A special class of stationary increment processes called additive martingales interests us.
Definition 3.1
The process \(\{Y_t^m : t=0,1,... \}\) is said to be an additive martingale relative to \(\{ {\mathfrak A}_{t} : t=0,1,... \}\) if for \(t=0,1,... \):
\(Y_t^m\) is \({\mathfrak A}_{t}\) measurable, and
\(E\left(Y_{t+1}^m \vert {\mathfrak A}_t \right) = Y_t^m\) .
Notice that by the Law of Iterated Expectations, for a martingale \(\{Y_{t}^m : t \ge 0\}\), best forecasts satisfy:
\[
E\left( Y_{t+j}^m \mid {\mathfrak A}_t \right) = Y_t^m
\]
for \(j \ge 1\). Under suitable additional restrictions on the increment process \(\{X_t : t \ge 0 \}\), we can deploy a construction of Gordin [1969] to extract a martingale component from the \(\{Y_t : t=0,1, ... \}\) process.[1] Let \({\mathcal H}\) denote the set of all scalar random variables \(X\) such that \(E(X^2) < \infty\) and such that[2]
\[
H_t \eqdef \sum_{j=0}^\infty \left[ E\left( X_{t+j} \mid {\mathfrak A}_t \right) - \eta \right]
\]
is well defined as a mean-square convergent series. Convergence of the infinite sum on the right side limits temporal dependence of the process \(\{ X_t \}\). For example, it can exclude so-called long memory processes.[3]
Construct the one-period-ahead forecast of \(H_{t+1}\) conditioned on date \(t\) information:
\[
H_t^+ \eqdef E\left( H_{t+1} \mid {\mathfrak A}_t \right) = \sum_{j=1}^\infty \left[ E\left( X_{t+j} \mid {\mathfrak A}_t \right) - \eta \right] .
\]
Notice that
\[
H_{t+1} = G_{t+1} + H_t^+ ,
\]
where
\[
G_{t+1} \eqdef H_{t+1} - E\left( H_{t+1} \mid {\mathfrak A}_t \right) .
\]
Since \(G_t\) is a forecast error,
\[
E\left( G_{t+1} \mid {\mathfrak A}_t \right) = 0 .
\]
Assembling these parts, we have
\[
X_{t+1} = \eta + G_{t+1} + H_t^+ - H_{t+1}^+ . \tag{3.3}
\]
Let
\[
Y_t^m \eqdef \sum_{j=1}^t G_j , \qquad Y_0^m = 0 .
\]
Since \(Y_t^m\) is \({\mathfrak A}_{t}\) measurable, the equality
\[
Y_{t+1}^m = Y_t^m + G_{t+1}
\]
implies that the process \(\{Y_t^m : t \ge 0 \}\) is an additive martingale.
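For a concrete check of this construction, suppose, as an illustrative assumption that is not part of the text, that the increments follow an AR(1), \(X_{t+1} - \eta = \rho (X_t - \eta) + W_{t+1}\). Then the Gordin objects have closed forms, \(H_t = (X_t - \eta)/(1-\rho)\), \(H_t^+ = \rho (X_t - \eta)/(1-\rho)\), and \(G_{t+1} = W_{t+1}/(1-\rho)\), and we can verify numerically that \(X_{t+1} = \eta + G_{t+1} + H_t^+ - H_{t+1}^+\) and that the martingale increment is uncorrelated with date-\(t\) information:

```python
import numpy as np

rng = np.random.default_rng(1)
T, eta, rho = 50_000, 0.01, 0.6   # illustrative AR(1) increment parameters
W = rng.standard_normal(T)

X = np.zeros(T)
for t in range(1, T):
    X[t] = eta + rho * (X[t - 1] - eta) + W[t]

# Closed forms for the Gordin objects in this AR(1) special case
H = (X - eta) / (1 - rho)             # H_t
Hplus = rho * (X - eta) / (1 - rho)   # H_t^+ = E(H_{t+1} | A_t)
G = W / (1 - rho)                     # martingale increment G_{t+1}

# The additive identity X_{t+1} = eta + G_{t+1} + H_t^+ - H_{t+1}^+
lhs = X[1:]
rhs = eta + G[1:] + Hplus[:-1] - Hplus[1:]

Ym = np.cumsum(G)                     # additive martingale component Y_t^m
```

The identity holds exactly path by path, while the sample correlation between \(G_{t+1}\) and \(X_t\) is approximately zero, as the martingale property requires.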
For a given stationary increment process \(\{Y_t : t \ge 0\}\), express the martingale increment as
\[
G_{t+1} = \lim_{j \rightarrow + \infty} \left[ E\left( Y_{t+j} \mid {\mathfrak A}_{t+1} \right) - E\left( Y_{t+j} \mid {\mathfrak A}_t \right) \right] .
\]
So the increment to the martingale component of \(\{Y_t : t \ge 0 \}\) provides new information about the limiting optimal forecast of \(Y_{t+j}\) as \(j \rightarrow + \infty\).
By accumulating equation (3.3) forward, we arrive at:
Proposition 3.1
If \(X\) is in \({\mathcal H}\), the stationary increments process \(\{Y_t : t=0,1,...\}\) satisfies the additive decomposition
\[
Y_t = t \eta + Y_t^m - H_t^+ + \left( H_0^+ + Y_0 \right) .
\]
The stationary increment process \(\{Y_{t}^m : t\ge 0 \}\) is the martingale component with \(Y_0^m = 0\). The component \(\{H_{t}^+\}\) is stationary. The other components are constant over time.
Proposition 3.1 decomposes a stationary-increment process into a linear time trend, a martingale, and a transitory component. A permanent shock is an increment to the martingale. The martingale and transitory contributions are typically correlated. Some decomposition methods go one step further by adjusting the decomposition to remove the correlation between these two components, as we will illustrate in an example that follows.
With this mathematical structure in place, we construct an operator \(\mathbb{D}\) that maps an admissible increment process in \(\mathcal{H}\) into the innovation in its martingale component. Let \(\mathcal{G}\) be the set of all random variables \(G\) with finite second moments that satisfy the conditions that i) \(G\) is \(\mathfrak{A}\) measurable and that ii) \(E(G_1 \vert \mathfrak{A}) = 0\), where \(G_t = G \circ \mathbb{S}^t\). Define \(\mathbb{D}: \mathcal{H} \rightarrow \mathcal{G}\) by
\[
\mathbb{D} X \eqdef G
\]
for \(G = G_0\) given by
\[
G_t = H_t - E\left( H_t \mid {\mathfrak A}_{t-1} \right)
\]
for \(t = 0\). Both \(\mathcal{G}\) and \(\mathcal{H}\) are linear spaces of random variables, and \(\mathbb{D}\) is a linear transformation. The operator \(\mathbb{D}\) plays a prominent role in some of the analysis that follows.
3.3. Permanent shocks
In this construction, we impose a moving-average structure on the underlying time series.
Specifically, consider again the Example 1.8 moving-average process:
\[
X_t = \sum_{j=0}^\infty \alpha_j \cdot W_{t-j} + \eta , \tag{3.5}
\]
where \(\{W_t\}\) is a vector process of iid shocks with mean zero and identity covariance matrix, and \(\{\alpha_j : j \ge 0\}\) is a square-summable sequence of vectors.
Use this \(\{X_t\}\) process as the increment for \(\{ Y_t : t \ge 0 \}\) in formula (3.1). New information about the unpredictable component of \(X_{t+j}\) for \(j \ge 0\) that arrives at date \(t\) is
\[
\alpha_j \cdot W_t .
\]
Summing these terms over \(j\) gives
\[
G_t = \alpha(1) \cdot W_t ,
\]
where
\[
\alpha(1) \eqdef \sum_{j=0}^\infty \alpha_j ,
\]
provided that the coefficient sequence \(\{ \alpha_j : j\ge 0\}\) is summable, a condition that restricts temporal dependence of the increment process \(\{X_t\}\). Indeed, it is possible for \(\alpha(1) = \infty\) or for it not to be well defined while
\[
\sum_{j=0}^\infty \left| \alpha_j \right|^2 < \infty ,
\]
ensuring that \(X_t\) is well defined. This possibility opened the door to the literature on long-memory processes that allow \(\alpha(1)\) to be infinite, as discussed in Granger and Joyeux [1980] and elsewhere.
In what follows, we presume that \(\alpha(1)\) is finite. The sum \(\alpha(1)\) of the coefficients \(\{\alpha_j: j\ge 0 \}\) in moving-average representation (3.5) for the first difference \(Y_{t+1} - Y_t = X_{t+1}\) of \(\{ Y_t : t=0,1,\ldots \}\) measures the permanent effect of \(W_{t+1}\) on current and future values of the level of \(Y\), i.e., the effect on the limiting forecast of \(Y_{t+j}\) as \(j \rightarrow + \infty\). Models of Blanchard and Quah [1989] and Shapiro and Watson [1988] build on this property.
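As a quick numerical illustration of this point, with an invented finite coefficient sequence, the response of the level of \(Y\) at horizon \(j\) to a unit shock is the partial sum \(\alpha_0 + \cdots + \alpha_j\), which converges to \(\alpha(1)\):

```python
import numpy as np

# Illustrative (made-up) scalar MA coefficients for the increment process
alpha = np.array([1.0, 0.6, 0.3, 0.1])

# Impulse response of the LEVEL Y to a unit shock: the response of the
# increment at horizon j is alpha_j, so the level response is the
# cumulative sum of the coefficients, which converges to alpha(1)
level_irf = np.cumsum(alpha)
permanent_effect = alpha.sum()   # alpha(1), the permanent effect
```

Here the level response settles at \(\alpha(1) = 2\), the permanent effect of the shock.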
The variance of the random variable \(\alpha(1) \cdot W_{t+1}\) conditioned on the invariant events in \({\mathfrak I}\) is \(|\alpha(1)|^2\). The overall variance of \(X_{t}\) is given by
\[
\sum_{j=0}^\infty \left| \alpha_j \right|^2 ,
\]
where \(|\cdot |\) is the Euclidean norm. To form a permanent-transitory shock decomposition, construct the scalar permanent shock as:
\[
W_{t+1}^p \eqdef \frac{ \alpha(1) \cdot W_{t+1} }{ \left| \alpha(1) \right| } ,
\]
where we introduce an additional scaling so the permanent shock has variance one. Form
\[
W_{t+1}^{tr} \eqdef W_{t+1} - \frac{ \alpha(1) }{ \left| \alpha(1) \right| } W_{t+1}^p ,
\]
which by construction will be uncorrelated with \(W_{t+1}^p\). Since the covariance matrix of \(W_{t+1}^{tr}\) will be singular, the components of \(W_{t+1}^{tr}\) can be expressed as linear combinations of a vector of transitory shocks with unit variances.
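A sketch of this permanent-transitory construction for a bivariate shock vector with identity covariance; the coefficient vectors \(\alpha_j\) below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200_000
alpha = np.array([[1.0, 0.5],    # alpha_0 (illustrative vectors)
                  [0.3, -0.2],   # alpha_1
                  [0.1, 0.1]])   # alpha_2
alpha1 = alpha.sum(axis=0)       # alpha(1)
norm = np.linalg.norm(alpha1)    # |alpha(1)|, the Euclidean norm

W = rng.standard_normal((T, 2))  # iid shocks with identity covariance

# Scalar permanent shock, scaled to have unit variance
Wp = W @ alpha1 / norm

# Transitory shock vector: the residual from projecting W on the
# permanent direction; uncorrelated with Wp, with singular covariance
Wtr = W - np.outer(Wp, alpha1 / norm)

cov_p_tr = Wtr.T @ Wp / T        # sample covariance, close to zero
```

Because `Wtr` is an exact projection residual, its covariance matrix has rank one, so it can be represented with a single unit-variance transitory shock.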
3.4. Central limit approximation
In this section, we produce a central limit approximation for temporally dependent processes originally due to Gordin [1969].
We view Gordin’s result as an application of Proposition 4.1.
To form a central limit approximation, construct the following scaled partial sum that nets out trend growth:
\[
\frac{1}{\sqrt{t}} \left( Y_t - t \eta \right) = \frac{1}{\sqrt{t}} Y_t^m - \frac{1}{\sqrt{t}} H_t^+ + \frac{1}{\sqrt{t}} \left( H_0^+ + Y_0 \right) ,
\]
where
\[
Y_t^m = \sum_{j=1}^t G_j .
\]
From Billingsley [1961]’s central limit theorem for martingales,
\[
\frac{1}{\sqrt{t}} Y_t^m \Rightarrow {\mathcal N}\left( 0, E\left[ \left( G_1 \right)^2 \mid {\mathfrak I} \right] \right) ,
\]
where \(\Rightarrow\) denotes weak convergence, meaning convergence in distribution. Clearly, \(\{(1/ {\sqrt t}) H_{t}^+\}\) and \(\{(1/{\sqrt{t}}) (H_0^+ + Y_0) \}\) both converge in mean square to zero.
Proposition 3.2
For all stationary increment processes \(\{Y_t : t=0,1,2, ...\}\) represented by \(X\) in \(\mathcal{H}\),
\[
\frac{1}{\sqrt{t}} \left( Y_t - t \eta \right) \Rightarrow {\mathcal N}\left( 0, E\left[ \left( \mathbb{D} X \right)^2 \mid {\mathfrak I} \right] \right) .
\]
Furthermore,
\[
\lim_{t \rightarrow + \infty} \frac{1}{t} E\left[ \left( Y_t - t \eta \right)^2 \mid {\mathfrak I} \right] = E\left[ \left( \mathbb{D} X \right)^2 \mid {\mathfrak I} \right] .
\]
This finding has a straightforward extension to a multivariate counterpart of \(X\) through the study of all linear combinations.
Observe that the variance in the central limit approximation is the variance of the martingale difference:
\[
E\left[ \left( \mathbb{D} X \right)^2 \mid {\mathfrak I} \right] = E\left[ \left( G_1 \right)^2 \mid {\mathfrak I} \right] .
\]
Consider the moving-average example in Section Permanent shocks. Then
\[
E\left[ \left( \mathbb{D} X \right)^2 \mid {\mathfrak I} \right] = \left| \alpha(1) \right|^2
\quad \textrm{while} \quad
E\left[ \left( X_t - \eta \right)^2 \mid {\mathfrak I} \right] = \sum_{j=0}^\infty \left| \alpha_j \right|^2 ,
\]
which are typically distinct. The first of these is the variance pertinent for the central limit approximation.
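A Monte Carlo sketch of this distinction, using an invented scalar moving average with coefficients \(\alpha = (1, 0.6, 0.3)\) and \(\eta = 0\): the variance of the scaled partial sums approaches the long-run variance \(|\alpha(1)|^2\), not the one-period variance \(\sum_j |\alpha_j|^2\):

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.array([1.0, 0.6, 0.3])   # illustrative MA coefficients, eta = 0
reps, T = 4000, 2000

lrv = alpha.sum() ** 2              # |alpha(1)|^2: the CLT variance
var_x = (alpha ** 2).sum()          # variance of the increment X_t itself

samples = np.empty(reps)
for r in range(reps):
    W = rng.standard_normal(T + len(alpha) - 1)
    X = np.convolve(W, alpha, mode="valid")   # X_t = sum_j alpha_j W_{t-j}
    samples[r] = X.sum() / np.sqrt(T)         # (1/sqrt(T)) (Y_T - Y_0)

mc_var = samples.var()              # close to lrv, well above var_x
```

With these coefficients \(|\alpha(1)|^2 = 3.61\) while \(\sum_j |\alpha_j|^2 = 1.45\), and the Monte Carlo variance tracks the former.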
3.5. Cointegration
Linear combinations of stationary increment processes \(Y_t^1\) and \(Y_t^2\) have stationary increments. For real-valued scalars \(r_1\) and \(r_2\), form
\[
Y_t = r_1 Y_t^1 + r_2 Y_t^2 ,
\]
where
\[
Y_{t+1}^i - Y_t^i = X_{t+1}^i \quad \textrm{for} \quad i = 1, 2 .
\]
The increment in \(\{Y_t : t=0, 1, \ldots \}\) is
\[
X_{t+1} = r_1 X_{t+1}^1 + r_2 X_{t+1}^2 ,
\]
and
\[
\eta = r_1 \eta^1 + r_2 \eta^2 .
\]
The Proposition 3.1 martingale component of \(\{ Y_t : t \ge 0 \}\) is the corresponding linear combination of the martingale components of \(\{ Y_t^1 : t =0,1,...\}\) and \(\{ Y_t^2 : t =0,1,...\}\). The Proposition 3.1 trend component of \(\{ Y_t : t =0,1, \ldots \}\) is the corresponding linear combination of the trend components of \(\{ Y_t^1 : t =0,1, \ldots \}\) and \(\{ Y_t^2 : t =0,1, \ldots \}\).
Proposition 3.1 sheds light on the cointegration concept of Engle and Granger [1987], which is associated with linear combinations of stationary increment processes whose trend and martingale components are both zero. Call two processes cointegrated if there exists a linear combination of them that is stationary.[4] That situation prevails when there exist real-valued scalars \(r_1\) and \(r_2\) such that
\[
r_1 \eta^1 + r_2 \eta^2 = 0 \quad \textrm{and} \quad r_1 \, \mathbb{D} X^1 + r_2 \, \mathbb{D} X^2 = 0 ,
\]
where the \(\eta\)’s correspond to the trend components in Proposition 3.1. These two zero restrictions imply that the time trend and the martingale component for the linear combination \(Y_t\) are both zero.[5] When \(r_1 = 1\) and \(r_2 = - 1\), the stationary increment processes \(Y_{t}^1\) and \(Y_{t}^2\) share a common growth component.
This notion of cointegration provides one way to formalize balanced growth paths in stochastic environments through determining a linear combination of growing time series for which stochastic growth is absent.
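A simulation sketch of cointegration under these restrictions: two series built, by assumption and purely for illustration, from one common martingale-with-trend plus distinct stationary components, so that \(r_1 = 1\), \(r_2 = -1\) annihilates both the trend and the martingale:

```python
import numpy as np

rng = np.random.default_rng(4)
T, eta = 50_000, 0.05

# Common trend plus martingale component: a random walk with drift eta
M = np.cumsum(eta + rng.standard_normal(T))

def ar1(rho, shocks):
    """Stationary AR(1) transitory component driven by the given shocks."""
    out = np.zeros(len(shocks))
    for t in range(1, len(shocks)):
        out[t] = rho * out[t - 1] + shocks[t]
    return out

Y1 = M + ar1(0.7, rng.standard_normal(T))   # shares trend and martingale
Y2 = M + ar1(0.4, rng.standard_normal(T))

# The cointegrating combination r1*Y1 + r2*Y2 with r1 = 1, r2 = -1:
# both the time trend and the martingale component cancel
Z = Y1 - Y2
```

While each of `Y1` and `Y2` wanders and grows with the stochastic trend, `Z` stays bounded around zero: stochastic growth is absent from the cointegrating combination.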