4. Processes with Markovian increments#

In this chapter, we use a stationary Markov process to construct a process that displays stochastic arithmetic growth, then show how to extract a linear time trend and a martingale. Eventually, we will exponentiate an arithmetically growing process like those described in this chapter to construct a process that displays geometric growth.

4.1. Definition of additive functional#

Let \(\{W_{t+1} : t \ge 0\}\) be a \(k\)-dimensional stochastic process of unanticipated economic shocks. Let \(\{ X_t : t \ge 0 \}\) be a discrete-time stationary Markov process that is generated by initial distribution \(Q\) for \(X_0\) and transition equation

(4.1)#\[X_{t+1} = \phi (X_t, W_{t+1}) ,\]

where \(\phi\) is a Borel measurable function. Let \(\left\{ \mathfrak{A}_t : t=0,1,... \right\}\) be the filtration generated by histories of \(W\) and \(X\); \(\mathfrak{A}_t\) serves as the information set (sigma algebra) generated by \(X_0, W_1, \ldots , W_t\). We presume that the conditional probability distribution for \(W_{t+1}\) conditioned on \(\mathfrak{A}_t\) depends only on \(X_t\). To assure that the process \(\{W_{t+1} : t \ge 0 \}\) represents unanticipated shocks, we restrict it to satisfy

\[E \left( W_{t+1} \vert X_t \right) = 0.\]

We condition on a statistical model in the sense of section Limiting Empirical Measures and assume that the stationary \(X_t\) process is ergodic.[1] The Markov structure of \(\{ X_t : t\ge0 \}\) makes the distribution of \((X_{t+1}, W_{t+1}) \) conditioned on \(\mathfrak{A}_t\) depend only on \(X_t\).[2]

Definition 4.1

A process \(\{ Y_{t} \}\) is said to be an additive functional if it can be represented as

(4.2)#\[Y_{t+1} - Y_t = \kappa(X_{t},W_{t+1})\]

for a (Borel measurable) function \(\kappa: {\mathbb R}^n \times {\mathbb R}^k \rightarrow {\mathbb R}\), or equivalently

\[Y_{t} = Y_0 + \sum_{j=1}^{t} \kappa(X_{j-1}, W_{j}) ,\]

where we initialize \(Y_0\) at some arbitrary (Borel measurable) function of \(X_0\).

When \(Y_0\) is a function of \(X_0\), we can construct \(Y_t\) as a function of the underlying Markov process between dates zero and \(t\).

Definition 4.2

An additive functional \(\{ Y_t : t=0,1,...\}\) is said to be an additive martingale if \(E\left[ \kappa(X_{t}, W_{t+1}) \vert X_t \right] = 0.\)

Example 4.1

(Stochastic Volatility) Suppose that

\[Y_{t+1} - Y_t = \mu(X_t) + \sigma(X_t) W_{t+1} \]
\[X_{t+1} = {\mathbb A} X_t + {\mathbb B}W_{t+1}\]

where \(\{ W_{t+1} : t\ge 0 \}\) is an i.i.d. sequence of standardized multivariate normally distributed random vectors, \({\mathbb A}\) is a stable matrix, \({\mathbb B}\) has full column rank, and the random vector \(X_0\) is generated by an initial distribution \(Q\) equal to the stationary distribution for the \(\{ X_t \}\) process. Here \(\mu(X_t)\) is the conditional mean of \(Y_{t+1} - Y_t\) and \(|\sigma(X_t)|^2\) is its conditional variance. When \(\sigma\) depends on \(X_t\), this is called a stochastic volatility model because \(|\sigma(X_t)|^2\) is a stochastic process.
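A minimal simulation sketch of Example 4.1 can make the construction concrete. The choices of \(\mu\), \(\sigma\), \({\mathbb A}\), and \({\mathbb B}\) below are hypothetical (the example leaves them free), and for simplicity \(\sigma(X_t)\) multiplies only the first shock:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameterization (the example leaves mu, sigma, A, B free)
A = np.array([[0.9, 0.0],
              [0.1, 0.7]])                 # stable matrix (eigenvalues 0.9, 0.7)
B = np.array([[0.3, 0.0],
              [0.0, 0.2]])                 # full column rank
mu = lambda x: 0.01 + 0.05 * x[0]          # conditional mean of Y_{t+1} - Y_t
sigma = lambda x: np.exp(x[1])             # conditional volatility

T = 500
X = np.zeros((T + 1, 2))
Y = np.zeros(T + 1)
for t in range(T):
    W = rng.standard_normal(2)             # i.i.d. standardized shocks
    Y[t + 1] = Y[t] + mu(X[t]) + sigma(X[t]) * W[0]   # additive functional
    X[t + 1] = A @ X[t] + B @ W                        # Markov state
```

Because \(\exp(X_{2,t})\) fluctuates with the state, the conditional variance of the growth rate is itself a stationary stochastic process.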

In Example 4.1, when the conditional mean \(\mu(X_t) = 0\), the process \(\{Y_t \}\) is a martingale. Note that \(E\left[ \kappa( X_t, W_{t+1} ) \vert X_t \right] = 0\) implies the usual martingale restriction

\[E\left(Y_{t+1} \vert {\mathfrak A}_t\right) = Y_t , \ \ \textrm{for} \ \ t \ge 0. \]

4.2. Extracting Martingales#

We can decompose an additive functional into a sum of components, one of which is an additive martingale that encapsulates all long-run stochastic variation as in Proposition 3.1. In this section, we show how to extract the martingale component. We adopt a construction like that used to establish Proposition 3.1 and proceed in four steps.

  1. Construct the trend coefficient as the unconditional expectation:

\[\nu = E \left[\kappa(X_t, W_{t+1}) \right].\]
  2. Form the random variable \(H_t\) by computing multiperiod forecasts for each horizon and summing these forecasts over all horizons. Start by constructing

\[\overline \kappa(x) = E \left[ \kappa(X_t, W_{t+1}) - \nu \mid X_t = x \right].\]

Iterating expectations and using the Markov property, for \(j \ge 1\),

\[E \left[ \kappa(X_{t+j-1}, W_{t+j}) - \nu \vert X_t = x \right] = \mathbb T^{j-1} \overline \kappa (x).\]

Summing the terms, construct

\[H_{t} = \sum_{j=0}^\infty E\left( \left[\kappa(X_{t+j-1}, W_{t+j}) - \nu \right] \mid {\mathfrak A}_t \right) = \kappa(X_{t-1}, W_{t}) - \nu + \sum_{j=0}^\infty E \left[ \overline \kappa( X_{t+j} ) \mid X_t \right] = \kappa_h(X_{t-1}, W_t)\]

where

\[\kappa_h (X_{t-1}, W_t) = \kappa(X_{t-1}, W_{t}) - \nu + \sum_{j=0}^\infty {\mathbb T}^j \overline{\kappa} (X_t) = \kappa(X_{t-1}, W_{t}) - \nu + \left( \mathbb I - \mathbb T \right)^{-1} \overline \kappa(X_{t})\]

where \({\mathbb T}\) is the operator defined in (2.1). The right side becomes a function of \((X_{t-1},W_t)\) alone once we substitute \(\phi(X_{t-1},W_t)\) for \(X_t\), as implied by (4.1).

This construction requires that the infinite sum

\[\sum_{j=0}^\infty {\mathbb T}^j {\overline \kappa}(x) = \left( \mathbb I - \mathbb T \right)^{-1} \overline \kappa(x)\]

converges in mean square relative to the stationary distribution for \(\{X_t: t\ge 0\}\). A sufficient condition for this is that \({\mathbb T}^m\) is a strong contraction for some integer \(m \geq 1\) and \(\overline{\kappa} \in {\mathcal N}\) where \({\mathcal N}\) is defined in (2.9).

  3. Compute

\[H_t^+ = E\left( H_{t+1} \mid X_t \right) = \kappa_+(X_t)\]

where[3]

\[\begin{split}\begin{align*} \kappa_+(x) & \doteq E\left[\kappa(X_{t}, W_{t+1}) \mid X_{t} = x \right] - \nu + E\left[ \left(\mathbb I - \mathbb T \right)^{-1} \overline \kappa(X_{t+1}) \mid X_t = x \right] \\ & = E\left[\kappa(X_{t}, W_{t+1}) \mid X_{t}= x \right] - \nu + \left( \mathbb I - \mathbb T \right)^{-1} {\mathbb T} \overline \kappa(x). \end{align*}\end{split}\]
  4. Build the martingale increment:

\[G_t = H_t - H_{t-1}^+ = \kappa_m(X_{t-1}, W_{t})\]

where

\[\kappa_m(X_{t-1}, W_t) = \kappa_h (X_{t-1}, W_t ) - \kappa_+(X_{t-1}).\]

By construction, the expectation of \(\kappa_m(X_t, W_{t+1})\) conditioned on \(X_t\) is zero.
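The operator inverse \(\left(\mathbb I - \mathbb T\right)^{-1}\) in step 2 is a Neumann series \(\sum_{j \ge 0} {\mathbb T}^j\). When \({\mathbb T}\) acts on a finite-dimensional family of functions through a stable matrix (a hypothetical stand-in for the general operator), the truncated series matches the matrix inverse:

```python
import numpy as np

# Hypothetical stable matrix standing in for the action of T on a
# finite-dimensional family of functions of the Markov state
A = np.array([[0.7, 0.2],
              [0.1, 0.6]])          # spectral radius 0.8 < 1

partial_sum = np.zeros((2, 2))      # accumulates I + A + A^2 + ...
power = np.eye(2)
for _ in range(300):
    partial_sum += power
    power = power @ A

neumann = partial_sum
closed_form = np.linalg.inv(np.eye(2) - A)
```

When \({\mathbb T}^m\) is a strong contraction for some \(m\), the same geometric convergence holds in mean square for \(\sum_j {\mathbb T}^j \overline\kappa\).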

Armed with these calculations, we now report a Markov counterpart to Proposition 3.1.

Proposition 4.1

Suppose that \(\{Y_{t} : t\ge 0\}\) is an additive functional, that \({\mathbb T}^m\) is a strong contraction on \({\mathcal N}\) for some \(m\), and that \(E[\kappa(X_{t},W_{t+1})^2] < \infty\). Then

\[Y_{t} = \underbrace{t\nu}_{\textbf{trend}} + \underbrace{\sum_{j=1}^{t} {\kappa_m}(X_{j-1},W_{j})}_{\textbf{martingale}} \underbrace{{} - \kappa_+(X_t)}_{\textbf{stationary}} + \underbrace{Y_0 + \kappa_+(X_0)}_{\textbf{invariant}} .\]

The first term is a linear time trend, the second an additive martingale (notice that the martingale component is itself an additive functional), the third a stationary process with mean zero, and the fourth a time-invariant constant. If we impose the initialization \(Y_0 = - \kappa_+(X_0)\), then the fourth term is zero. We use a Proposition 4.1 decomposition to associate a "permanent shock" with an additive functional: the permanent shock is the increment to the martingale.

4.3. Applications#

We now compute martingale increments for two models of economic time series.

4.3.1. Application to a VAR#

We apply the four-step construction of section 4.2 when the Markov state \(\{ X_t \}\) follows a first-order VAR

(4.3)#\[X_{t+1} = {\mathbb A} X_t + {\mathbb B} W_{t+1},\]

where \({\mathbb A}\) is a stable matrix and \(\{ W_{t+1} : t\ge 0 \}\) is a sequence of independent and identically distributed normal random vectors with mean vector zero and identity covariance matrix. The one-step-ahead conditional covariance matrix of the time \(t+1\) shocks \({\mathbb B} W_{t+1}\) to \(X_{t+1}\) equals \({\mathbb B} {\mathbb B}'\). Let

(4.4)#\[Y_{t+1} - Y_t = \kappa(X_{t},W_{t+1}) = {\mathbb D} X_t + \nu + {\mathbb F} W_{t+1},\]

where \({\mathbb D}\) and \({\mathbb F}\) are row vectors with the same dimensions as \(X_t\) and \(W_{t+1}\), respectively. For this example, the four steps become:

  1. The trend growth rate is \(\nu\) as specified.

  2. \[\kappa_h(X_{t-1}, W_t) = {\mathbb D} X_{t-1} + {\mathbb F} W_{t} + {\mathbb D}({\mathbb I} - {\mathbb A} )^{-1} X_t \]
  3. \[\kappa_+(x) = {\mathbb D} x + {\mathbb D} ({\mathbb I} - {\mathbb A} )^{-1}{\mathbb A} x \]
  4. \[\kappa_m(X_{t-1}, W_t) = {\mathbb F} W_{t} + {\mathbb D} ({\mathbb I} - {\mathbb A} )^{-1} (X_t - {\mathbb A} X_{t-1} ) = \left[{\mathbb F} + {\mathbb D} ({\mathbb I} - {\mathbb A} )^{-1} {\mathbb B} \right] W_t \]
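The four objects can be computed directly, and the Proposition 4.1 decomposition verified along a simulated path. The parameter values below are hypothetical, chosen only so that \({\mathbb A}\) is stable:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical parameter values, chosen so that A is stable
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
B = np.eye(2)
D = np.array([1.0, -0.5])                    # row vector D
F = np.array([0.2, 0.3])                     # row vector F
nu = 0.05
n = 2

kplus = D @ np.linalg.inv(np.eye(n) - A)     # D + D(I-A)^{-1}A = D(I-A)^{-1}
km = F + kplus @ B                           # martingale loading on W_t

# Simulate and check the Proposition 4.1 decomposition term by term
T = 200
X = np.zeros((T + 1, n))
Y = np.zeros(T + 1)
M = 0.0                                      # cumulated martingale component
for t in range(T):
    W = rng.standard_normal(n)
    Y[t + 1] = Y[t] + D @ X[t] + nu + F @ W  # additive functional (4.4)
    X[t + 1] = A @ X[t] + B @ W              # VAR state (4.3)
    M += km @ W

# trend + martingale + stationary + invariant should recover Y_T exactly
rhs = T * nu + M - kplus @ X[T] + Y[0] + kplus @ X[0]
```

The reconstruction holds path by path, not just in expectation, because the decomposition is an algebraic identity once \(\kappa_+\) and \(\kappa_m\) are in hand.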

From Example 1.7, we expect the coefficient of the martingale increment to be the sum of the impulse responses for the increment process \(\{ {\mathbb D} X_t + {\mathbb F} W_{t+1} : t\ge 0\}\). The impulse response function is the sequence of vectors:

(4.5)#\[{\mathbb F}, \mathbb{ D} {\mathbb B}, {\mathbb D} {\mathbb A} {\mathbb B} , {\mathbb D}{\mathbb A}^2 {\mathbb B}, \cdots .\]

Summing these vectors gives

\[{\mathbb F} + {\mathbb D}\left({\mathbb I} + {\mathbb A} + {\mathbb A}^2 + \cdots \right) {\mathbb B} = {\mathbb F} + {\mathbb D} ({\mathbb I} - {\mathbb A} )^{-1} {\mathbb B}\]

as anticipated.
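A quick numerical check of this sum, using hypothetical values for \({\mathbb A}\), \({\mathbb B}\), \({\mathbb D}\), and \({\mathbb F}\):

```python
import numpy as np

# Hypothetical parameter values
A = np.array([[0.8, 0.1],
              [0.0, 0.5]])
B = np.eye(2)
D = np.array([1.0, -0.5])
F = np.array([0.2, 0.3])

# Impulse responses of Y_{t+1} - Y_t to W: F, DB, DAB, DA^2 B, ...
irf_sum = F.copy()
power = np.eye(2)                   # holds A^j
for _ in range(500):
    irf_sum = irf_sum + D @ power @ B
    power = power @ A

closed_form = F + D @ np.linalg.inv(np.eye(2) - A) @ B
```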

4.3.2. Growth-Rate Regimes#

We construct a Proposition 4.1 decomposition for a model with persistent switches in the conditional mean and volatility of the growth rate \(Y_{t+1}- Y_t\).

Suppose that \(\{X_t : t \ge 0\}\) evolves according to an \(n\)-state Markov chain with transition matrix \({\mathbb P}\), where column \(i\) of \({\mathbb P}\) contains the probabilities of next-period states conditioned on the current state being the \(i\)th one. Realized values of \(X_t\) are coordinate vectors in \({\mathbb R}^n\). Suppose that \({\mathbb P}\) has only one unit eigenvalue. Let \({\bf q}\) be the eigenvector of stationary probabilities associated with that unit eigenvalue, normalized so that \({\bf q} \cdot {\bf 1}_n = 1\) and

\[{\mathbb P} {\bf q} = {\bf q}.\]

Consider an additive functional satisfying

\[Y_{t+1} - Y_t = {\mathbb D} X_t + {X_t}'{\mathbb F} W_{1,t+1},\]

where \(\{ W_{1,t} \}\) is an i.i.d. sequence of multivariate standard normally distributed random vectors. Evidently, the stationary Markov \(\{X_t : t \ge 0 \}\) process induces discrete changes in both the conditional mean and the conditional volatility of the growth rate process \(\{ Y_{t+1} - Y_t \}\).

Observe that \( E (X_{t+1} | X_t ) ={\mathbb P} X_t \) and let

(4.6)#\[W_{2,t+1} = X_{t+1} - E\left( X_{t+1} \vert X_t \right) .\]

Thus we can represent the evolution of the Markov chain as

\[X_{t+1} = {\mathbb P} X_t + W_{2,t+1}\]

The process \(\{W_{2,t+1} : t \ge 0 \}\) is an \(n \times 1\) discrete-valued vector process that satisfies \(E ( W_{2,t+1} | X_t) = 0 \) and is therefore a martingale increment sequence relative to the filtration generated by \(X_0, X_1, \ldots, X_t\).
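This conditional-mean restriction can be checked state by state for a hypothetical three-state chain (columns of \({\mathbb P}\) hold next-period probabilities, so that \(E(X_{t+1} \vert X_t) = {\mathbb P} X_t\)):

```python
import numpy as np

# Hypothetical transition matrix; column i holds Pr(X_{t+1} = e_j | X_t = e_i)
P = np.array([[0.90, 0.10, 0.05],
              [0.05, 0.80, 0.15],
              [0.05, 0.10, 0.80]])
n = 3
I = np.eye(n)

# E(W_{2,t+1} | X_t = e_i) = sum_j Pr(j | i) * (e_j - P e_i)
residuals = []
for i in range(n):
    cond_mean = P @ I[:, i]                        # E(X_{t+1} | X_t = e_i)
    residuals.append(sum(P[j, i] * (I[:, j] - cond_mean) for j in range(n)))
```

Each residual vanishes exactly, since subtracting \({\mathbb P} X_t\) removes all of the predictable movement in \(X_{t+1}\).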

We again apply the four-step construction of section 4.2.[4]

  1. \[\nu = {\mathbb D} {\bf q} \]
  2. \[H_t = {\mathbb D} (X_{t-1} - {\bf q}) + {X_{t-1}}'{\mathbb F} W_{1,t} + {\mathbb D}\left({\mathbb I} - {\mathbb P}\right)^{-1} X_t \]
  3. \[H_t^+ = E\left( H_{t+1} \mid X_t \right) = {\mathbb D} \left( X_{t} - {\bf q} \right) + {\mathbb D}\left({\mathbb I} - {\mathbb P}\right)^{-1} {\mathbb P}X_t\]

    which implies that

    \[\kappa_+(x) = {\mathbb D} \left( x - {\bf q} \right) + {\mathbb D}\left({\mathbb I} - {\mathbb P}\right)^{-1} {\mathbb P}x\]
  4. \[G_t = H_t - H_{t-1}^+ = {X_{t-1}}'{\mathbb F} W_{1,t} + {\mathbb D}\left({\mathbb I} - {\mathbb P}\right)^{-1} W_{2,t}\]

    where we have substituted from equation (4.6).

The martingale increment has both continuous and discrete components:

\[{\kappa_m}(X_t , W_{t+1}) = \underbrace{{X_t}'{\mathbb F} W_{1,t+1}}_{\textbf{continuous}} + \underbrace{ {\mathbb D}\left({\mathbb I} - {\mathbb P}\right)^{-1} W_{2,t+1}}_{\textbf{discrete}}.\]
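As a numerical check, the function \(h = \left(\mathbb I - \mathbb T\right)^{-1} \overline\kappa\) appearing in the discrete component can be approximated by a truncated sum, and the martingale-increment property \(\overline\kappa + {\mathbb T}h - h = 0\) verified state by state. The \({\mathbb P}\) and \({\mathbb D}\) below are hypothetical:

```python
import numpy as np

# Hypothetical inputs; column i of P holds Pr(X_{t+1} = e_j | X_t = e_i)
P = np.array([[0.90, 0.10, 0.05],
              [0.05, 0.80, 0.15],
              [0.05, 0.10, 0.80]])
D = np.array([0.02, 0.00, -0.01])   # state-dependent mean growth rates
n = 3

# Stationary distribution: P q = q, entries sum to one
eigvals, eigvecs = np.linalg.eig(P)
q = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
q = q / q.sum()

nu = D @ q                          # trend coefficient

# h(e_i) = sum_{j>=0} (D P^j e_i - nu), a truncated version of (I - T)^{-1} kappa-bar
h = np.zeros(n)
row = D.copy()                      # holds D P^j as a row vector
for _ in range(2000):
    h += row - nu
    row = row @ P

# kappa-bar(e_i) = D e_i - nu; T acts on functions of the state via P-transpose
kbar = D - nu
residual = kbar + P.T @ h - h       # should vanish: h solves (I - T) h = kappa-bar
```

The truncation error decays geometrically at the rate of the second-largest eigenvalue modulus of \({\mathbb P}\), so a few thousand terms suffice here.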