7. Perturbing multiplicative functionals#

Authors: Jaroslav Borovička (NYU), Lars Peter Hansen (University of Chicago), and Thomas J. Sargent (NYU) \(\newcommand{\eqdef}{\stackrel{\text{def}}{=}}\)

The manner in which risk operates upon time preference will differ, among other things, according to the periods in the future to which the risk applies. Irving Fisher (Theory of Interest (1930))

7.1. Introduction#

Local methods open the door to intertemporal characterizations of asset valuation. Such characterizations have direct links to the first-order conditions of investors. While macroeconomists often solve models and analyze the implied impulse responses using time series characterizations expressed in terms of logarithms, for asset valuation with compensation for uncertainty exposure it is imperative to work with levels instead of logarithms. Given the presence of stochastic growth contributions, we are led to explore perturbations of multiplicative functionals. We introduce multiplicative perturbations, which in turn lead naturally to the use of elasticities as a way to represent intertemporal compensations. This work builds on insights from [Hansen and Scheinkman, 2012], [Borovička and Hansen, 2014], [Borovička et al., 2014], and [Borovička and Hansen, 2016]. This chapter focuses exclusively on discrete-time specifications. In a later chapter, we discuss continuous-time counterparts.

7.2. An elasticity calculation#

We start by considering a family of positive random variables \(N_1({\sf r})\) for \({\sf r} \ge 0\), each with unit expectation and with limit \(N_1(0) = 1\). We use this family to depict date one perturbations. Let \(M\) be a multiplicative functional and compute:

\[{\mathbb E} \left[\left(\frac {M_t}{M_0} \right) N_1({\sf r}) \mid X_0 \right].\]

We use two interpretations of this computation:

  • \(N_1({\sf r})\) induces a date one change in distribution;

  • \(N_1({\sf r})\) induces a change in the date one exposure to uncertainty.

Both will be of interest to us going forward. The first one allows us to construct a type of impulse response function where we change the initial distribution of a shock. The second defines a family of alternative cash flows for which we may deduce compensations. We introduce the positive scalar \({\sf r}\) so that we can perform local characterizations in terms of derivatives of the form:

(7.1)#\[\epsilon^m(x,t) \eqdef \frac {d} {d{\sf r}} \log {\mathbb E} \left[ \left(\frac {M_t}{M_0} \right) N_1({\sf r}) \mid X_0 = x\right] \vert_{{\sf r} = 0} = \frac { \frac {d} {d{\sf r}} {\mathbb E} \left[ \left(\frac {M_t}{M_0} \right) N_1({\sf r}) \mid X_0 = x \right] \vert_{{\sf r} = 0}}{ {\mathbb E} \left[\left(\frac {M_t}{M_0} \right) \mid X_0 = x \right] } \]

which is in the form of a semi-elasticity. The denominator on the right side, which involves \(M_t\), rescales the computation to offset growth in the \(M\) process. Notice also that

\[\frac {d} {d{\sf r}} \log N_1({\sf r}) \vert_{{\sf r} = 0} = \frac {d} {d{\sf r}} N_1({\sf r})\vert_{{\sf r} = 0}\]

since \(N_1(0) = 1\). Given this, we refer to computation (7.1) as an elasticity.
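To make the definition concrete, here is a minimal Monte Carlo sketch of (7.1), assuming a scalar geometric random walk for \(M\) and the lognormal perturbation family \(N_1({\sf r}) = \exp({\sf r} W_1 - {\sf r}^2/2)\); the parameter values and the finite-difference approximation of the derivative are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 200_000, 5              # Monte Carlo draws and horizon (illustrative)
kappa1, kappa2 = 0.01, 0.2     # drift and shock exposure of log M (illustrative)

# simulate log(M_t / M_0) = t * kappa1 + kappa2 * (W_1 + ... + W_t)
W = rng.standard_normal((n, t))
log_growth = t * kappa1 + kappa2 * W.sum(axis=1)

def perturbed_expectation(r):
    # N_1(r) = exp(r W_1 - r^2 / 2) has unit expectation and N_1(0) = 1
    N1 = np.exp(r * W[:, 0] - 0.5 * r**2)
    return np.mean(np.exp(log_growth) * N1)

# central finite-difference approximation of the derivative in (7.1)
h = 1e-3
elasticity = (np.log(perturbed_expectation(h))
              - np.log(perturbed_expectation(-h))) / (2 * h)
print(elasticity)              # close to kappa2 = 0.2 for this specification
```

For this simple specification the answer is \(\kappa_2\) at every horizon because there are no state dynamics; the developments below cover cases in which the elasticity varies with the horizon.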

7.3. An important special case#

Write the evolution of the multiplicative functional as:

\[\log M_{t+1} - \log M_t = \kappa(X_t, W_{t+1} ) = \kappa_1(X_t) + \kappa_2(X_t) \cdot W_{t+1} \]

and the proportional perturbation expressed in logarithms as

\[\log N_1({\sf r}) = {\sf r} \pi(X_0) \cdot W_{1} - {\frac 1 2} | \pi(X_0)|^2 ({\sf r})^2\]

where \(W_1\) is a multivariate standard normally distributed random vector that is independent of \(X_0\). Clearly, \(N_1(0) = 1,\) and by properties of the log-normal distribution:

\[{\mathbb E} \left[ N_1({\sf r}) \mid X_0 = x \right] = 1. \]

The vector \(\pi\) gives a possibly state-dependent way to select among the different possible shocks.

Provided that we can differentiate inside the expectation operator, we find that

(7.2)#\[\epsilon^m(x,t) = \pi(x) \cdot \frac {{\mathbb E} \left[\left(\frac {M_t}{M_0} \right) W_1 \mid X_0 = x \right] }{{\mathbb E} \left[\left(\frac {M_t}{M_0} \right) \mid X_0 = x \right] }.\]

Under the first interpretation of this family of perturbations, we change the distribution of \(W_1\) from being a multivariate, standard normal to a normal with mean \({\sf r} \pi(X_0)\) and an identity as the covariance matrix. That is, we perturb the shock distribution by including a nonzero mean for the date one shock vector. Under the second interpretation, we change the evolution of \(\log M\) by setting:

\[\log M_1 - \log M_0 = \kappa_1(X_0) + \kappa_2(X_0) \cdot W_1 + {\sf r} \pi(X_0) \cdot W_1 - {\frac 1 2} | \pi(X_0)|^2 ({\sf r})^2. \]

With this construction, we have changed the exposure to \(W_1\) of the multiplicative functional, an impact that persists over time. Since we look at limits as \({\sf r}\) declines to zero, the third term becomes dominated by the second.

We now investigate what happens at the one-period horizon and what happens as the horizon becomes arbitrarily long. For \(t=1\), note that \(\left(M_1/M_0\right) N_1({\sf r})\) is conditionally log normal with conditional expectation:

\[\exp\left[ \kappa_1(x) - \frac 1 2 {\sf r}^2\vert \pi(x)\vert^2 + \frac 1 2 \vert \kappa_2(x) + {\sf r} \pi(x) \vert^2 \right].\]

Differentiating the logarithm with respect to \({\sf r}\) and evaluating this derivative at zero gives:

\[\epsilon^m(x,1) = \pi(x) \cdot \kappa_2(x) .\]
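As a check on formula (7.2), the ratio on its right side can be estimated by simulation. The sketch below does this at the one-period horizon for an illustrative state-dependent exposure \(\kappa_2(x)\) of our own choosing and compares the result with \(\pi(x) \cdot \kappa_2(x)\).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

# illustrative state-dependent exposure: kappa_2(x) = k20 + k21 * x (our choice)
k1  = 0.01
k20 = np.array([0.15, 0.05])
k21 = np.array([0.10, -0.02])
pi  = np.array([1.0, 0.0])     # select the first shock
x   = 0.3                      # conditioning value of a scalar state

kappa2 = k20 + k21 * x
W1 = rng.standard_normal((n, 2))
growth = np.exp(k1 + W1 @ kappa2)                 # M_1 / M_0 given X_0 = x

# Monte Carlo version of the right side of (7.2) at t = 1
numerator = (growth[:, None] * W1).mean(axis=0)   # E[(M_1/M_0) W_1 | X_0 = x]
eps_mc = pi @ numerator / growth.mean()
print(eps_mc, pi @ kappa2)                        # the two numbers nearly agree
```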

To study the long-horizon counterpart, it is revealing to use the martingale factorization to represent the shock elasticities. Recall that

\[\frac {M_t}{M_0} = \exp(t \eta^m) L_t^m \left[\frac {e^m(X_0)}{ e^m(X_t)}\right]\]

where \(L^m\) is a multiplicative martingale. With this factorization, we use the Law of Iterated Expectations to write

(7.3)#\[\begin{align} \frac {{\mathbb E} \left[ \left(\frac {M_t}{M_0} \right) \mid {\mathfrak A}_1\right]}{{\mathbb E} \left[ \left(\frac {M_t}{M_0} \right) \mid {\mathfrak A}_0\right]} & = \frac{{\mathbb E} \left[ \left(\frac {M_t}{M_1} \right) \mid {\mathfrak A}_1 \right] \left( \frac {M_1}{M_0}\right)} {{\mathbb E} \left[ \left(\frac {M_t}{M_0} \right) \mid {\mathfrak A}_0 \right]}\cr & = \left(\frac {{\mathbb E} \left[ \left(\frac{L_t^m}{L_1^m}\right) \left[ \frac 1 {e^m(X_t)} \right] \mid {\mathfrak A}_1\right]} {{\mathbb E} \left[ L_t^m \left[ \frac 1 {e^m(X_t)} \right] \mid {\mathfrak A}_0 \right]}\right) L_1^m \cr & = \left(\frac {{\mathbb E} \left[ \left(\frac{L_t^m}{L_1^m}\right) \left[ \frac 1 {e^m(X_t)} \right] \mid X_1\right]} {{\mathbb E} \left[ L_t^m \left[ \frac 1 {e^m(X_t)} \right] \mid X_0 \right]}\right) L_1^m \end{align}\]

Notice that the random variable on the left is positive and has expectation one conditioned on \({\mathfrak A}_0\). Given stochastic stability under the change of probability measure induced by the martingale \(L^m\), the random variable on the right converges to \(L_1^m\) as \(t\) tends to \(\infty\). Substituting this calculation into formula (7.2) and applying the Law of Iterated Expectations gives:

(7.4)#\[\epsilon^m(x,t) = \pi(x) \cdot {\mathbb E} \left[ \left(\frac {{\mathbb E} \left[ \left(\frac{L_t^m}{L_1^m}\right) \left[ \frac 1 {e^m(X_t)} \right] \mid X_1\right]} {{\mathbb E} \left[ L_t^m \left[ \frac 1 {e^m(X_t)} \right] \mid X_0 \right]}\right) L_1^m W_1 \ \Biggl| X_0 = x \right]. \]

These calculations suggest defining the limiting elasticity as:

\[\epsilon^m(x,\infty) \eqdef \pi(x) \cdot {\mathbb E} \left( L_1^m W_1 \mid X_0 = x \right) .\]

The expectation is under the change in probability measure induced by the increment to the martingale component of \(M\). Formally, point-wise or almost-sure convergence of the relative densities in (7.3) is not by itself sufficient for the conditional expectations to converge to this limit; the stronger conditions that are required are, however, often satisfied in applications.
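The limiting elasticity has a simple form once the distribution of the martingale increment \(L_1^m\) is specified. As a sketch, suppose (anticipating the example below) that \(L_1^m = \exp\left({\mathbb H} \cdot W_1 - \frac 1 2 {\mathbb H} \cdot {\mathbb H}\right)\) for some illustrative vector \({\mathbb H}\) of our own choosing; then \({\mathbb E}\left(L_1^m W_1 \mid X_0 = x \right) = {\mathbb H}\), which the following Monte Carlo check confirms.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

H  = np.array([0.25, -0.10])   # illustrative exposure of the martingale increment
pi = np.array([1.0, 0.0])

W1 = rng.standard_normal((n, 2))
L1 = np.exp(W1 @ H - 0.5 * H @ H)        # lognormal martingale increment, E[L1] = 1

# E(L1 * W1) is the mean of W1 under the change of measure induced by L1
eps_limit = pi @ (L1[:, None] * W1).mean(axis=0)
print(eps_limit, pi @ H)                 # both close to 0.25
```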

Example 7.1

We now revisit Example 6.6. That is, consider \(M = \exp(Y)\) constructed with a stationary \(X\) process and an additive \(Y\) process described by the VAR

\[\begin{align*} X_{t+1} & = {\mathbb A} X_t + {\mathbb B} W_{t+1} \cr Y_{t+1} - Y_t & = \nu + {\mathbb D} \cdot X_t + {\mathbb F} \cdot W_{t+1} \end{align*}\]

where \({\mathbb A}\) is a stable matrix and \(\{ W_{t+1} : t \ge 0 \}\) is a sequence of independent and identically normally distributed random vectors with mean zero and covariance matrix \({\mathbb I}\). For this example, we showed that

\[\begin{align} \eta^m &= \nu + \frac{{\mathbb H} \cdot {\mathbb H} }{2} \cr {\mathbb H} &= {\mathbb F} + {\mathbb B}'\left({\mathbb I} - {\mathbb A}' \right)^{-1} {\mathbb D} \cr N_{t+1}^m & = \exp \left( {\mathbb H} \cdot W_{t+1} -\frac{ {\mathbb H} \cdot {\mathbb H} }{2} \right) \cr e^m(x) &= \exp \left[ {\mathbb D}' \left({\mathbb I} - {\mathbb A} \right)^{-1} x \right] . \end{align}\]

For \(t \ge 1\), we compute (interpreting a sum with an empty index range as zero):

\[\begin{align} {\mathbb E} \left(Y_t - Y_0 \mid {\mathfrak A}_1 \right) & = t \nu + {\mathbb D}' \sum_{j=0}^{t-2} {\mathbb A}^j X_1 + {\mathbb D}' X_0 + {\mathbb F}' W_1\cr & = t \nu + {\mathbb D}' \sum_{j=0}^{t-1} {\mathbb A}^{j} X_0 + \left[ {\mathbb D}' \sum_{j=0}^{t-2} {\mathbb A}^{j} {\mathbb B} + {\mathbb F}' \right] W_1 . \end{align}\]

Observe that by properties of the log-normal distribution

\[\frac {{\mathbb E}\left[\exp \left(Y_t - Y_0\right) \mid {\mathfrak A}_1 \right] } {{\mathbb E}\left[\exp \left(Y_t - Y_0\right) \mid {\mathfrak A}_0 \right]} = \exp\left( \left[ {\mathbb D}' \sum_{j=0}^{t-2} {\mathbb A}^{j} {\mathbb B} + {\mathbb F}' \right] W_1 - {\frac 1 2} \Bigl\vert{\mathbb D}' \sum_{j=0}^{t-2} {\mathbb A}^{j} {\mathbb B} + {\mathbb F}' \Bigr\vert^2 \right)\]

This random variable induces a change in distribution for the standard normally distributed random vector \(W_1\) by endowing it with a conditional mean:

\[ {\mathbb B}' \sum_{j=0}^{t-2} \left({{\mathbb A}'}\right)^{j} {\mathbb D} + {\mathbb F} ,\]

and the identity as the covariance matrix.
Thus

\[\epsilon^m(x,t) = \pi \cdot \left[ {\mathbb B}' \sum_{j=0}^{t-2} \left({{\mathbb A}'}\right)^{j} {\mathbb D} + {\mathbb F} \right]\]

which coincides with the impulse responses from standard analyses of a vector autoregressive system, where \(\pi\) selects the shock of interest. These elasticities do not depend on the state \(x\). Many asset valuation analyses also feature stochastic volatility, which can be accommodated or approximated within a class of quadratic models that have computationally attractive properties. See [Borovička and Hansen, 2014] for a characterization of elasticities in a class of quadratic time series models.
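The closed form above is straightforward to evaluate. The sketch below computes the elasticity term structure for illustrative parameter values of our own choosing and checks the two limits: \(\pi \cdot {\mathbb F}\) at \(t = 1\) and \(\pi \cdot {\mathbb H}\) as \(t \rightarrow \infty\).

```python
import numpy as np

# illustrative VAR parameters (our choices); A must be a stable matrix
A = np.array([[0.9, 0.0],
              [0.1, 0.7]])
B = np.array([[0.2, 0.0],
              [0.0, 0.3]])
D = np.array([0.5, -0.2])
F = np.array([0.10, 0.05])
pi = np.array([1.0, 0.0])          # select the first shock

def elasticity(t):
    # epsilon^m(x, t) = pi . [ B' (sum_{j=0}^{t-2} (A')^j) D + F ], empty sum at t = 1
    S = sum((np.linalg.matrix_power(A.T, j) for j in range(t - 1)), np.zeros((2, 2)))
    return pi @ (B.T @ S @ D + F)

H = F + B.T @ np.linalg.solve(np.eye(2) - A.T, D)      # long-run exposure vector
print([round(elasticity(t), 4) for t in (1, 2, 5, 20, 200)])
print(pi @ F, round(pi @ H, 4))    # one-period and long-horizon limits
```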

This special case featured shocks with normal distributions. The methods described in the previous section are more generally applicable as long as a researcher is willing to posit an interesting family of probabilistic perturbations.

7.4. Multiplicative martingale#

When \(M\) is a martingale (\(M = L^m\)), it follows from the right side of (7.1) and the Law of Iterated Expectations that

\[\epsilon^m(x,t) = \frac { \frac {d} {d{\sf r}} {\mathbb E} \left[\left(\frac {M_1}{M_0} \right) N_1({\sf r}) \mid X_0 = x \right] \vert_{{\sf r} = 0}}{ {\mathbb E} \left[\left(\frac {M_1}{M_0} \right) \mid X_0 = x \right] },\]

and is thus constant as a function of \(t \ge 1\). For such a process, we sometimes find a second type of elasticity to be of interest. Suppose we perturb the process at date \(t\) instead of date one, giving rise to:

(7.5)#\[\varepsilon^m(x,t) \eqdef \frac {d} {d{\sf r}} \log {\mathbb E} \left[\left(\frac {M_t}{M_0} \right) N_t({\sf r}) \mid X_0 = x\right] \vert_{{\sf r} = 0} = \frac {d} {d{\sf r}} {\mathbb E} \left[\left(\frac {M_t}{M_0} \right) N_t({\sf r}) \mid X_0 = x \right] \vert_{{\sf r} = 0}.\]

The second equality holds because \({\mathbb E} \left[\left(\frac {M_t}{M_0} \right) \mid X_0 = x\right] = 1\) when \(M\) is a martingale. These second elasticities will cease to be constant as a function of \(t\), capturing a different intertemporal aspect of valuation. Perturbations at intermediate dates are also possible.
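To illustrate the distinction, the following simulation sketch (our own construction, with a scalar state, a single shock, and a constant \(\pi = 1\)) builds a multiplicative martingale with a state-dependent exposure. Perturbing at date one produces an elasticity that is flat across horizons, while perturbing at date \(t\) as in (7.5) produces one that varies with \(t\).

```python
import numpy as np

rng = np.random.default_rng(3)
n, T = 400_000, 6
a, b = 0.8, 0.3                 # AR(1) state dynamics (illustrative)
k0, k1 = 0.2, 0.4               # state-dependent exposure kappa_2(x) = k0 + k1 x
x0 = 0.5

x = np.full(n, x0)
logM = np.zeros(n)
W = np.empty((n, T))
M = np.empty((n, T))            # M[:, t-1] stores M_t / M_0
for t in range(T):
    w = rng.standard_normal(n)
    kap = k0 + k1 * x
    logM += kap * w - 0.5 * kap**2       # each increment has conditional mean one
    W[:, t], M[:, t] = w, np.exp(logM)
    x = a * x + b * w

for t in range(1, T + 1):
    eps  = np.mean(M[:, t - 1] * W[:, 0])        # perturb date one, as in (7.1)
    vare = np.mean(M[:, t - 1] * W[:, t - 1])    # perturb date t, as in (7.5)
    print(t, round(eps, 3), round(vare, 3))
# eps stays near kappa_2(x0) = 0.4 at every horizon; vare drifts away from it with t
```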

7.5. Intertemporal asset price compensations#

As in the previous section, we work with proportional representations of risk compensations. We extend the limiting characterizations by filling in the intertemporal components through perturbations in the cash flow. Thus the objects of interest are:

\[\log {\mathbb E} \left[\frac {G_t}{G_0} N_1({\sf r}) \Biggl| X_0 = x \right] - \log {\mathbb E} \left[\frac {S_t G_t}{S_0 G_0} N_1({\sf r}) \Biggl| X_0 = x \right] + \log {\mathbb E} \left[\frac {S_t}{S_0} \Biggl| X_0 = x \right],\]

where \(G\) is the cash-flow payout process and \(S\) is the cumulative stochastic discount factor process. We compute elasticities by differentiating with respect to \({\sf r}\). Notice that the third term, contributed by the stochastic discount factor, does not depend on \({\sf r}\) and drops out of the computation. The following formula gives the risk compensation by horizon, where each of the terms is a special case of \(\epsilon^m(x,t)\):

\[\epsilon^g(x,t) - \epsilon^{sg}(x,t) . \]

The term \(\epsilon^g(x,t)\) is the exposure elasticity for the stochastic payoff, and \(\epsilon^{sg}(x,t)\) is the corresponding elasticity for the value process \(SG\). Recall that the exposure elasticity can also be viewed as an impulse response for the payout process \(G\) based on altering the conditional mean of the date one shock. While there may be no direct empirical counterpart to these elasticities, they can be viewed as “building blocks” for intertemporal asset prices, and they can be computed directly for fully specified models of asset valuation.
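Combining the pieces, here is a sketch of the risk-compensation term structure \(\epsilon^g(x,t) - \epsilon^{sg}(x,t)\) when both \(\log G\) and \(\log S\) are additive functionals driven by the VAR of Example 7.1; the loadings are illustrative choices of our own, and the elasticity formula is the one derived in that example.

```python
import numpy as np

# shared state dynamics (illustrative) and loadings for the cash flow G and
# the stochastic discount factor S; the product S*G uses the sums of the loadings
A = np.array([[0.9, 0.0],
              [0.1, 0.7]])
B = np.array([[0.2, 0.0],
              [0.0, 0.3]])
pi = np.array([1.0, 0.0])

D_g, F_g = np.array([0.5, -0.2]), np.array([0.10, 0.05])    # cash-flow loadings
D_s, F_s = np.array([-0.4, 0.1]), np.array([-0.30, 0.00])   # discount-factor loadings

def elasticity(D, F, t):
    # epsilon(x, t) = pi . [ B' (sum_{j=0}^{t-2} (A')^j) D + F ], empty sum at t = 1
    S = sum((np.linalg.matrix_power(A.T, j) for j in range(t - 1)), np.zeros((2, 2)))
    return pi @ (B.T @ S @ D + F)

for t in (1, 2, 5, 20, 200):
    compensation = elasticity(D_g, F_g, t) - elasticity(D_g + D_s, F_g + F_s, t)
    print(t, round(compensation, 4))     # risk compensation by horizon for this shock
```

In this lognormal setting the compensation reduces to minus the exposure elasticity of \(S\), so its term structure is governed entirely by the stochastic discount factor loadings.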

We will report such elasticities in the next chapter when analyzing a canonical asset pricing model with production. We also will explore stochastic counterparts to closely related impulse response functions in a later chapter.