6. Multiplicative Functionals#
Authors: Jaroslav Borovicka, Lars Peter Hansen, and Thomas Sargent
Date: November 2024
\(\newcommand{\eqdef}{\stackrel{\text{def}}{=}}\)
Chapter Processes with Markovian increments described additive functionals of a Markov process. This chapter describes exponentials of additive functionals that we call multiplicative functionals. We can use them to model stochastic growth, stochastic discounting, and their interactions. After adjusting for geometric growth or decay, a multiplicative functional contains a martingale component that turns out to be a likelihood ratio process that is itself a special type of multiplicative functional called an exponential martingale. By simply multiplying a baseline probability measure with Markov dynamics by a likelihood ratio process, we can construct an alternative probability model with Markov dynamics. This procedure is useful for asset pricing models because of how it can help us to represent stochastic growth and discounting components that persist over long horizons. It also plays an essential role in statistical model discrimination. We will encounter several other applications of multiplicative functionals, including models of returns and positive cash flows that compound over multiple horizons, cumulative stochastic discount factors used to represent prices of such multi-period cash flows, and subjective beliefs of private sector investors and policy makers that might deviate from an econometrician’s model. To analyze multiplicative functionals, we apply mathematical tools related to ones used in the statistical theory of large deviations.
6.1. Geometric growth and decay#
To construct a multiplicative functional, we start with an underlying Markov process \(X\) that has stationary distribution \(Q\).
Definition 6.1
Let \(Y \eqdef \{ Y_t\}\) be an additive functional that as in Chapter Processes with Markovian increments is described by \(Y_{t+1} - Y_t = \kappa(X_t, W_{t+1})\), where \(X_t\) is the time \(t\) component of a Markov state vector and \(W_{t+1}\) is the time \(t+1\) value of a martingale difference process (\({\mathbb E} \left(W_{t+1} \mid {\mathcal A}_t \right) = 0 \)) of unanticipated shocks. We say that \(M \eqdef \{M_t: t \ge 0 \} = \{ \exp(Y_t) : t \ge 0 \}\) is a multiplicative functional parameterized by \(\kappa\). When \(Y_0\) is a (Borel measurable) function of \(X_0\), \(M_0 >0\) is also a (Borel measurable) function of \(X_0\).
An additive functional grows or decays linearly, so the exponential of an additive functional grows or decays geometrically. Chapter Processes with Markovian increments stated a Law of Large Numbers and a Central Limit Theorem for additive functionals. In this chapter, we use other mathematical tools to analyze the limiting behavior of multiplicative functionals. We refer to
as the multiplicative increment of the multiplicative process \(M\).
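To fix ideas, here is a minimal simulation sketch of Definition 6.1. The scalar state dynamics, the increment function \(\kappa\), and all parameter values below are hypothetical choices made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalar Markov state: X_{t+1} = a X_t + b W_{t+1}
a, b = 0.9, 0.2
# Hypothetical additive increment: kappa(x, w) = mu + d x + f w
mu, d, f = 0.01, 0.05, 0.1

T = 500
X = np.zeros(T + 1)
Y = np.zeros(T + 1)                 # additive functional, with Y_0 = 0
for t in range(T):
    W = rng.standard_normal()
    Y[t + 1] = Y[t] + mu + d * X[t] + f * W    # Y_{t+1} - Y_t = kappa(X_t, W_{t+1})
    X[t + 1] = a * X[t] + b * W

M = np.exp(Y)                       # multiplicative functional M_t = exp(Y_t)
increments = M[1:] / M[:-1]         # multiplicative increments M_{t+1} / M_t
print(M[-1], increments[:3])
```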
6.2. Special multiplicative functionals#
We define the three primitive multiplicative functionals.
Example 6.1
Suppose that \(\kappa = \eta\) is constant and that \(M_0\) is a Borel measurable function of \(X_0\). Then
This process grows or decays geometrically.
Example 6.2
Suppose that
Then
so that
A multiplicative functional that satisfies (6.1) is called a multiplicative martingale. We denote such a process as \(M = L\) because it is appropriate to view it as a likelihood ratio process.
Example 6.3
Suppose that \(M_t = \exp\left[h(X_t)\right]\) where \(h\) is a Borel measurable function. The associated additive functional satisfies
and is parameterized by \(\kappa(X_t, W_{t+1}) = h\left[ \phi(X_t, W_{t+1} ) \right] - h(X_t)\) with initial condition \(Y_0 = h(X_0)\).
When the process \(\{X_t\}\) is stationary and ergodic, multiplicative functional Example 6.1 displays expected growth or decay, while multiplicative functionals Example 6.2 and Example 6.3 do not. Multiplicative functional Example 6.3 is stationary, while Example 6.1 and Example 6.2 are not.
We can construct other multiplicative functionals simply by multiplying two or more instances of these primitive ones. Soon we shall reverse that process by taking an arbitrary multiplicative functional and (multiplicatively) decomposing it into instances of our three types of multiplicative functionals. Before doing so, we explore multiplicative martingales in more depth.
6.3. Multiplicative martingales and likelihood processes#
We can use multiplicative martingales to represent alternative probability models. We can characterize an alternative model with a set of implied conditional expectations of all bounded random variables, \(B_{t+1},\) that are measurable with respect to \({\mathfrak A}_{t+1}\). The constructed conditional expectation is
We want multiplication of \(B_{t+1}\) by \(N_{t+1}\) to change the baseline probability to an alternative probability model. To accomplish this, the random variable \(N_{t+1}\) must satisfy:
\(N_{t+1} \ge 0\);
\(E\left(N_{t+1} \mid {\mathfrak A}_t \right) = 1\);
\(N_{t+1}\) is \({\mathfrak A}_{t+1}\) measurable.
Property 1 ensures that the constructed conditional expectation maps nonnegative random variables \(B_{t+1}\) into nonnegative random variables that are \({\mathfrak A}_t\) measurable. Properties 2 and 3 are satisfied because \(N\) is the multiplicative increment of a multiplicative martingale. The resulting process \(L\) can be viewed as a likelihood ratio process for the alternative model relative to the baseline model.
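A small numerical check may help. The two-state conditional probabilities below are hypothetical; the point is only that any nonnegative increment \(N_{t+1}\) with unit conditional expectation converts baseline conditional expectations into conditional expectations under an alternative model.

```python
import numpy as np

# Hypothetical baseline conditional distribution of X_{t+1} given the current state
p_baseline = np.array([0.7, 0.3])
# Hypothetical alternative conditional distribution (absolutely continuous w.r.t. baseline)
p_alt = np.array([0.5, 0.5])

# Increment N_{t+1}: ratio of alternative to baseline conditional probabilities
N = p_alt / p_baseline
assert np.all(N >= 0)                       # property 1
assert np.isclose(p_baseline @ N, 1.0)      # property 2: E(N_{t+1} | A_t) = 1

# Changing measure: E(N_{t+1} B_{t+1} | A_t) equals the alternative expectation of B_{t+1}
B = np.array([2.0, -1.0])                   # an arbitrary bounded random variable
print(p_baseline @ (N * B), p_alt @ B)      # the two numbers coincide
```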
Representing an alternative probability model in this way is restrictive. In particular, if a nonnegative random variable has conditional expectation zero under the baseline probability, it will also have zero conditional expectation under the alternative probability measure, an indication of absolute continuity of the two models’ transition probabilities. Two models that violate absolute continuity can be distinguished with probability one using only finite samples.
Multiplicative martingales provide a way to model diverse subjective beliefs of private agents or policy-makers within dynamic, stochastic equilibrium models when these beliefs are allowed to depart from the model builder’s model.
Here are examples of multiplicative martingales constructed from some standard probability models.
Example 6.4
Consider a baseline Markov process having transition probability density \(\pi_o\) with respect to a measure \(\lambda\) over the state space \(\mathcal{X}\)
Let \(\pi\) denote some other transition density that we represent as
where we assume that \(\pi_o(x^+ \mid x) = 0\) implies that \(\pi(x^+ \mid x) = 0\) for all \(x^+\) and \(x\) in \(\mathcal{X}\).
Construct the multiplicative increment process as:
Example 6.5
Let an alternative model for a vector \(X\) be a vector autoregression:
where \({\mathbb A}\) is a stable matrix, \(\{W_{t+1} : t \ge 0 \}\) is an i.i.d. sequence of \({\cal N}(0,I)\) random vectors conditioned on \(X_0\), and \({\mathbb B}\) is a square, nonsingular matrix. Assume that a baseline model for \(X\) has the same functional form but different settings \(({\mathbb A}_o, {\mathbb B}_o)\) of its parameters. Let \(N_{t+1}\) be the exponential of the one-period conditional log-likelihood ratio
Notice how parameters \(({\mathbb A}_o, {\mathbb B}_o)\) of the baseline model and parameters \(({\mathbb A}, {\mathbb B})\) of the alternative model both appear.
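The following sketch illustrates Example 6.5 numerically. The parameter matrices are hypothetical; the increment is computed as a ratio of conditional Gaussian densities, and a Monte Carlo average checks that its conditional expectation under the baseline model is one.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)

# Hypothetical parameters: alternative model (A, B) and baseline model (A_o, B_o)
A   = np.array([[0.8, 0.1], [0.0, 0.7]]);   B   = np.array([[0.3, 0.0], [0.1, 0.2]])
A_o = np.array([[0.9, 0.0], [0.0, 0.6]]);   B_o = np.array([[0.25, 0.0], [0.0, 0.25]])

x = np.array([0.5, -0.2])                                        # current state X_t
draws = (A_o @ x) + rng.standard_normal((100_000, 2)) @ B_o.T    # X_{t+1} under the baseline

# One-period increment: conditional density of the alternative over that of the baseline
f_alt  = multivariate_normal(mean=A @ x,   cov=B @ B.T).pdf(draws)
f_base = multivariate_normal(mean=A_o @ x, cov=B_o @ B_o.T).pdf(draws)
N = f_alt / f_base

print(N.mean())     # approximately 1: E(N_{t+1} | X_t = x) = 1 under the baseline model
```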
Remark 6.1
Because \(\mathbb{B}\) is a nonsingular square matrix, model Example 6.5 has the same number of shocks, i.e., entries of \(W\), as there are components of \(X\). A more general setting would be a hidden Markov state model like the one presented in Section Kfilter that has a time-invariant innovations representation conditioning on an infinite past of an observation vector. Statistical analyses often use likelihood functions that condition on only a finite past. That typically produces an \(N_{t+1}\) process that shares asymptotic properties with an alternative process that conditions on an infinite past.
[Hansen and Scheinkman, 2009] show that multiplicative martingales offer a way to value cumulative returns. Let \(R_t\) be a multiplicative process that measures a cumulative return between date \(t\) and date zero. Let \(S_t\) be a corresponding equilibrium discount factor between these same two dates. That \(L = RS\) is a multiplicative martingale follows from equilibrium restrictions on one-period returns:
where \(S_{t+1}/{S_t}\) is the one-period stochastic discount factor and \(R_{t+1}/{R_t}\) is the one-period gross return.
We can elicit the limiting behavior of multiplicative martingales by applying Jensen’s inequality to the concave logarithm function depicted in Fig. 6.1.

Fig. 6.1 Jensen’s Inequality. The logarithmic function is a concave function that equals zero when evaluated at unity. The line segment lies below the logarithmic function. An interior average of endpoints of the straight line lies below the logarithmic function.#
By Jensen’s inequality,
Normalize \(L_0=1\) and form
Note that
so that
This implies that under the baseline model the logarithm of the likelihood ratio process \(L\) is a super martingale relative to the information sequence \(\{ {\mathfrak A}_t : t\ge 0\}\).
From the Law of Large Numbers, a population mean is well approximated by a sample average from a long time series. That opens the door to discriminating between two models. Under the baseline model, the log likelihood ratio process scaled by \(1/t\) converges to a negative number. Now exchange the roles of the baseline and alternative models by using \({\frac 1 {N_{t+1}} }\) instead of \(N_{t+1}\) as an increment, so that the alternative model appears in the denominator of the likelihood ratio. The scaled-by-\(1/t\) logarithm of this flipped likelihood ratio converges to the expectation of \(- \log N_{t+1}\) under the alternative model, which is negative when the alternative model generates the data; equivalently, the scaled logarithm of the original likelihood ratio \(L_t\) then converges to a positive number. Such calculations justify discriminating between the two models by calculating \(\log L_{t}\) and checking whether it is positive or negative. This procedure amounts to an application of the method of maximum likelihood.
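A simulation can illustrate this discrimination argument. The scalar AR(1) specifications below are hypothetical; the scaled log likelihood ratio turns out negative when the baseline model generates the data and positive when the alternative model does.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar AR(1) models: baseline (a_o, s_o) and alternative (a, s)
a_o, s_o = 0.9, 0.30
a,   s   = 0.8, 0.35

def log_density(x_next, x, a_coef, sig):
    return -0.5 * np.log(2 * np.pi * sig**2) - 0.5 * ((x_next - a_coef * x) / sig) ** 2

def scaled_logL(T, a_true, s_true):
    """Simulate under (a_true, s_true); return (1/T) log L_T with L = alternative/baseline."""
    x, logL = 0.0, 0.0
    for _ in range(T):
        x_next = a_true * x + s_true * rng.standard_normal()
        logL += log_density(x_next, x, a, s) - log_density(x_next, x, a_o, s_o)
        x = x_next
    return logL / T

print(scaled_logL(50_000, a_o, s_o))   # negative: data generated by the baseline model
print(scaled_logL(50_000, a,  s))      # positive: data generated by the alternative model
```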
Suppose now that a statistical model implied by change of measure \(N_{t+1}\) governs the data, not the baseline model. Conditional relative entropy \(E \left( N_{t+1} \log N_{t+1} \mid {\mathfrak A}_t \right)\) of the martingale increment \(N_{t+1}\) relative to the baseline model satisfies
To understand this inequality, note that multiplication of \(\log N_{t+1}\) by \(N_{t+1}\) changes the conditional probability distribution with respect to which the conditional expectation is calculated from the misspecified baseline model to the alternative statistical model. The function \( n \log n\) is convex and equal to zero for \(n=1\). Therefore, Jensen’s inequality implies that conditional relative entropy is nonnegative and equal to zero when \(N_{t+1} = 1\). Notice that
Thus \(L \log L\) is a sub martingale. The expression
and is a measure of relative entropy over a \(t\)-period horizon. Relative entropy is often used to analyze model misspecifications and also appears in statistical characterizations of “large deviations” for Markov processes, as we shall discuss later.
Suppose that a decision-maker does not know whether a baseline or alternative model generates the data.
Attach a subjective prior probability \(\pi_o\) to the baseline probability model and probability \(1 - \pi_o\) to the alternative. Suppose that \(L\) is a likelihood ratio process with \(L_t\) reflecting information available at date \(t\). Date \(t\) posterior probabilities for the baseline and alternative probability models are:
When \({\frac 1 {t}} \log L_{t}\) converges to a negative number under the baseline probability, the first probability converges to one. But when \({\frac 1 {t}} \log L_{t}\) converges to a positive number under the alternative probability, the second probability converges to one. When the data are generated by the baseline probability model, the Law of Large Numbers implies the former; and when the data are generated by the alternative probability model, the Law of Large Numbers implies the latter. This analysis can be extended to situations in which some other model generates the data.
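The sketch below traces these posterior probabilities along a hypothetical path on which \(\frac{1}{t}\log L_t\) settles at \(-0.05\), as would happen under the baseline model; the posterior probability on the baseline model then converges to one. The Bayes formula used here is the standard one implied by the prior and the likelihood ratio.

```python
import numpy as np

def posterior_probs(pi_0, log_L_t):
    """Posterior probabilities of (baseline, alternative) given the log likelihood ratio
    log L_t of the alternative model relative to the baseline model."""
    L_t = np.exp(log_L_t)
    p_baseline = pi_0 / (pi_0 + (1.0 - pi_0) * L_t)
    return p_baseline, 1.0 - p_baseline

pi_0 = 0.5                                   # hypothetical prior on the baseline model
for t in [1, 10, 100, 1000]:
    print(t, posterior_probs(pi_0, -0.05 * t))
```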
6.4. Factoring a multiplicative functional#
Following [Hansen and Scheinkman, 2009] and [Hansen, 2012], we factor a multiplicative functional into three multiplicative components having the primitive types Example 6.1, Example 6.2, Example 6.3. As in definition Definition 6.1, let \(Y\) be an additive functional, and let \(M = \exp(Y)\). Apply a one-period operator \(\mathbb{M}\) defined by
to bounded Borel measurable functions \(f\) of the Markov state. By applying the Law of Iterated Expectations, a two-period operator iterates \(\mathbb{M}\) twice to obtain:
with corresponding definitions of \(j\)-period operators \(\mathbb{M}^j\). The family of operators is a special case of what is called a “semigroup.” The domain of the semigroup can typically be extended to a larger family of functions, but this extension depends on further properties of the multiplicative process used to construct it. For an extended application to asset valuation and investor preferences, see []. We will explore these applications in discussions that follow.
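As a concrete illustration, the sketch below evaluates the one-period operator by Monte Carlo for a hypothetical scalar example, assuming the operator takes the form \({\mathbb M}f(x) = E\left[ (M_{t+1}/M_t) f(X_{t+1}) \mid X_t = x\right]\); iterating it would produce the \(j\)-period members of the semigroup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar setup: X_{t+1} = a X_t + b W_{t+1},  kappa(x, w) = mu + d x + fw * w
a, b = 0.9, 0.2
mu, d, fw = 0.01, 0.05, 0.1

def M_operator(f, x, n_draws=200_000):
    """Monte Carlo estimate of (M f)(x) = E[(M_{t+1}/M_t) f(X_{t+1}) | X_t = x],
    the assumed form of the one-period operator."""
    W = rng.standard_normal(n_draws)
    ratio = np.exp(mu + d * x + fw * W)        # M_{t+1}/M_t = exp(kappa(X_t, W_{t+1}))
    X_next = a * x + b * W
    return np.mean(ratio * f(X_next))

f = lambda x: np.ones_like(x)                  # f identically equal to one
print(M_operator(f, 0.5))                      # one application of the operator at x = 0.5
```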
First, for a strictly positive \(f\), construct the limit
when the limit is finite. For instance, \(f\) could be identically one. We call \({\tilde \eta}\) the asymptotic growth (or decay) rate of the multiplicative functional \(M\). Multiplying the multiplicative functional by \(\exp(-{\tilde \eta} t)\) removes expected asymptotic growth from the semigroup.
To refine this limiting characterization of a multiplicative functional and obtain two other components of the factorization, we apply what is referred to in mathematics as Perron-Frobenius theory. We start by posing:
Eigenvalue-eigenfunction Problem: Solve
for an eigenvalue \(\exp(\tilde \eta)\) and a positive eigenfunction \({\tilde e}\).
We call the largest eigenvalue the principal eigenvalue and the associated eigenfunction the principal eigenfunction of the operator \(\mathbb{M}\). A positive eigenfunction \({\tilde e}\) is a function of the Markov state that can be expected to grow (or decay) geometrically at the long-run growth rate \(\eta = {\tilde \eta}\). Write the eigenfunction equation (6.6) as:
Iterating the eigenfunction equation implies
Solve for the principal eigenvalue and eigenvector, and define:
and build
By construction, \({\widetilde N}_{t+1}\) has a conditional expectation equal to unity. Consequently, \( {\widetilde L}\) is a multiplicative martingale.
Theorem 6.1
Let \(M_t\) be a multiplicative functional. Suppose that the principal eigenvalue-eigenfunction Problem has a solution with principal eigenfunction \({\tilde e}(X)\). Then the multiplicative functional is the product of three components that are instances of the primitive functionals in examples Example 6.1, Example 6.2, and Example 6.3:
where \({\widetilde L}_t\) is a multiplicative martingale.
The factorization of a multiplicative functional described in Theorem 6.1 is a counterpart to the Proposition 4.1 decomposition of an additive functional. We used the Proposition 4.1 martingale to identify the permanent component of an additive functional in Chapter Processes with Markovian increments. In this chapter, we shall use the multiplicative martingale isolated by Theorem 6.1 to represent a change of probability measure. Because the martingale component of the additive functional \(Y= \log(M)\) has a variance that grows linearly over time, its exponential contributes to the exponential trend of the multiplicative functional \(M\) as well as to the martingale component. The following log-linear, log-normal model displays the relevant mechanics.
Example 6.6
Consider a stationary \(X\) process and an additive \(Y\) process described by the VAR
where \({\mathbb A}\) is a stable matrix and \(\{ W_{t+1} : t \ge 0 \}\) is a sequence of independent and identically normally distributed random vectors with mean zero and covariance matrix \({\mathbb I}\). In Proposition 4.1 of Chapter Processes with Markovian increments, we described the decomposition
where
Let \(M_t = \exp(Y_t)\). Use equation (6.8) to deduce
where
and
The martingale component of the multiplicative functional has “peculiar behavior.” It has expectation one by construction. The Martingale Convergence Theorem guarantees that sample paths converge, typically to zero. Fig. 6.2 plots probability density functions of the martingale component for different values of \(t\).

Fig. 6.2 Density of \(\widetilde{L}_t\) for different values of \(t\).#
We are especially interested in this martingale component as a change of probability measure. Formula (6.9) for \({\widetilde N}_t\) tells how the change in probability measure induces mean \( {\mathbb H}\) in the conditional distribution for the shock \(W_{t+1}\).
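The simulation sketch below illustrates this “peculiar behavior” in a scalar version of the example. It assumes the familiar log-normal form \({\widetilde N}_{t+1} = \exp\left( {\mathbb H} W_{t+1} - {\mathbb H}^2/2 \right)\), which has conditional expectation one and shifts the conditional mean of \(W_{t+1}\) to \({\mathbb H}\) under the new measure; the value of \({\mathbb H}\) is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

H = 0.1                      # hypothetical scalar H
n_paths = 100_000

# Martingale component: L_tilde_t is the product over s <= t of exp(H W_s - H^2 / 2)
L = np.ones(n_paths)
for t in range(1, 401):
    W = rng.standard_normal(n_paths)
    L *= np.exp(H * W - 0.5 * H**2)
    if t in (25, 100, 400):
        print(t, L.mean(), np.median(L))

# Sample means stay near one (the martingale property), while medians shrink toward zero:
# the densities of L_tilde spread out and pile up near zero as t grows, as in Fig. 6.2.
```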
Models in which \(X_t\) is a finite-state Markov chain are also manageable computationally. In such models the principal eigenvalue calculation reduces to finding an eigenvector of a matrix with all positive entries.
Example 6.7
The stochastic process \(X_t\) is governed by a finite-state Markov chain on state space \( \{ {\sf s}_1, {\sf s}_2, \ldots, {\sf s}_n \}\), where \({\sf s}_i\) is the \(n \times 1\) vector whose components are all zero except for a \(1\) in the \(i^{th}\) row. The transition matrix is \({\mathbb P},\) where \({\sf p}_{ij} = \textrm{Prob}( X_{t+1} = {\sf s}_j \mid X_t = {\sf s}_i)\). We can represent the Markov chain as
where \({\mathbb E} (X_{t+1} | X_t ) = {\mathbb P}' X_t \), \({\mathbb P}'\) denotes the transpose of \({\mathbb P}\), and \(\{W_{t+1}\}\) is an \(n \times 1\) vector process that satisfies \({\mathbb E} ( W_{t+1} | X_t) = 0 \), which is therefore a martingale-difference sequence adapted to \(X_t, X_{t-1}, \ldots , X_0\).
Let \({\mathbb G}\) be an \(n \times n\) matrix whose \((i,j)\) entry \({\sf g}_{ij}\) is an additive net growth rate \(Y_{t+1} - Y_t\) experienced when \(X_{t+1} = {\sf s}_j\) and \(X_t = {\sf s}_i\). The stochastic process \(Y\) is governed by the additive functional
Let \(M= \exp(Y)\). Define a matrix \({\mathbb M}\) whose \((i,j)^{th}\) element is \({\sf m}_{ij} = \exp({\sf g}_{ij}).\) The stochastic process \(M\) is governed by the multiplicative functional:
Associated with this multiplicative functional is the principal eigenvalue problem
To convert this to a linear algebra problem, write the \(j^{th}\) entry of \({\tilde e}\) as \({\tilde e}_j\). Since \(X_t\) always assumes the value of one of the coordinate vectors \({\sf s}_i, i =1, \ldots, n\),
when \(X_t = {\sf s}_i\) and \(X_{t+1} = {\sf s}_j\). This allows us to rewrite the principal eigenvalue problem as
or
where \(\widetilde {\sf p}_{ij} = {\sf p}_{ij} {\sf m}_{ij}\) and \({\sf e}_i\) is entry \(i\) of \({\tilde e}\). We want the largest eigenvalue of (6.11) and an associated positive eigenvector.
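The linear-algebra problem (6.11) is easy to solve numerically. The sketch below uses hypothetical \({\mathbb P}\) and \({\mathbb G}\) matrices for a three-state chain; it also anticipates the construction described just below by forming a twisted transition matrix, assuming the standard formula \({\widetilde {\sf l}}_{ij} = {\sf p}_{ij} {\sf m}_{ij} {\sf e}_j / \left[ \exp({\tilde \eta}) {\sf e}_i \right]\), whose rows sum to one.

```python
import numpy as np

# Hypothetical transition matrix P and growth-rate matrix G for a three-state chain
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
G = np.array([[ 0.02,  0.04, -0.01],
              [ 0.00,  0.03,  0.05],
              [-0.02,  0.01,  0.02]])

M = np.exp(G)                     # m_ij = exp(g_ij)
P_tilde = P * M                   # tilde p_ij = p_ij m_ij

# Principal (Perron-Frobenius) eigenvalue and positive right eigenvector of P_tilde
vals, vecs = np.linalg.eig(P_tilde)
k = np.argmax(vals.real)
eta_tilde = np.log(vals[k].real)             # exp(eta_tilde) is the principal eigenvalue
e_tilde = np.abs(vecs[:, k].real)            # positive eigenvector: values of e at each state

# Twisted transition matrix (assumed construction): l_ij = p_ij m_ij e_j / (exp(eta) e_i)
L_tilde = P_tilde * e_tilde[None, :] / (np.exp(eta_tilde) * e_tilde[:, None])
print("eta_tilde:", eta_tilde)
print("row sums of L_tilde:", L_tilde.sum(axis=1))    # each row sums to one
```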
After solving the principal eigenvalue problem, compute
and form the matrix \({\widetilde {\mathbb L}} = [{\widetilde {\sf l}}_{ij}]\). We have now constructed a matrix \({\widetilde {\mathbb L}}\) that behaves as a transition matrix for a different finite state Markov chain. Its entries are nonnegative, and
We can use this matrix to form increments \((X_t)'{\widetilde {\mathbb L}} X_{t+1}\) in a positive multiplicative martingale process \(\{{\widetilde L_t}\}\):
To achieve a Theorem 6.1 representation of the multiplicative functional \(M_t\), use formula (6.12) for \({\widetilde{\sf m}}_{ij}\) to get \( {\sf m}_{ij} = \exp\left( \tilde \eta \right) {\widetilde {\sf m}}_{ij} \frac {{\sf e}_i}{{\sf e}_j}. \) This allows us to write (6.10) as
6.5. Stochastic stability#
Our characterization of a change of probability measure as the solution of a Perron-Frobenius problem determines only transition probabilities. Since the process is Markov, it is reasonable to seek an initial distribution of \(X_0\) under which the process is stationary. When the eigenfunction problem has multiple solutions, it turns out that there is a unique solution for which the process \(X\) is stochastically stable under the implied change of measure, in particular, the solution associated with the minimum eigenvalue. See [Hansen and Scheinkman, 2009] and [Hansen, 2012] for a formal analysis of this problem in a continuous-time Markov setting.
Definition 6.2
A process \(X\) is stochastically stable under a probability measure \({\widetilde {Pr}}\) if it is stationary and \(\lim_{j \rightarrow \infty} {\widetilde E} \left[h(X_j) \mid X_0 = x \right] = {\widetilde E} \left[ h(X_0) \right]\) for any Borel measurable \(h\) satisfying \({\widetilde E} \vert h(X_t) \vert < \infty\).
Stochastic stability under the change of measure provides a way to think about some interesting long-term approximations. Suppose that
Then
Since \(X\) is stochastically stable under \({\widetilde P}r\),
Under this restriction, after adjusting for the growth or decay in the semigroup, we obtain a more refined approximation:
where we assume that \({\widetilde {\mathbb E}}\left[{\frac {f(X_t)} {{\tilde e}(X_t)}}\right] < \infty\). Once we adjust for the impact of \({\tilde \eta}\), the limiting function is proportional to \({\tilde e}\). The function \(f\) determines only a scale factor \({\widetilde {\mathbb E}}\left[{\frac {f(X_t)} {{\tilde e}(X_t)}}\right] \tilde e(x)\).
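We can check this refined approximation numerically in the hypothetical three-state example introduced above: iterate the matrix version of the semigroup, compare it with \(\exp(t {\tilde \eta})\, {\tilde e}(x)\, {\widetilde {\mathbb E}}\left[ f(X)/{\tilde e}(X) \right]\), and observe the relative error shrink as the horizon grows.

```python
import numpy as np

# Same hypothetical three-state chain as in the earlier sketch
P = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]])
G = np.array([[0.02, 0.04, -0.01], [0.00, 0.03, 0.05], [-0.02, 0.01, 0.02]])
P_tilde = P * np.exp(G)

vals, vecs = np.linalg.eig(P_tilde)
k = np.argmax(vals.real)
eta, e = np.log(vals[k].real), np.abs(vecs[:, k].real)
L_tilde = P_tilde * e[None, :] / (np.exp(eta) * e[:, None])

# Stationary distribution of the twisted chain (left Perron eigenvector of L_tilde)
w_vals, w_vecs = np.linalg.eig(L_tilde.T)
pi = np.abs(w_vecs[:, np.argmax(w_vals.real)].real)
pi = pi / pi.sum()

f = np.ones(3)                             # f identically equal to one
scale = pi @ (f / e)                       # E_tilde[ f(X) / e(X) ] under the twisted chain

for t in [5, 25, 100]:
    exact = np.linalg.matrix_power(P_tilde, t) @ f     # (M^t f) evaluated at each state
    approx = np.exp(eta * t) * e * scale
    print(t, np.max(np.abs(exact / approx - 1.0)))     # relative error shrinks with t
```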
It also turns out that stochastic stability is sufficient for the Perron-Frobenius eigenvalue problem to have a unique solution.
Theorem 6.2
Let \(M\) be a multiplicative functional. Suppose that \((\tilde \eta, \tilde e)\) solves the eigenfunction problem and that under the change of measure \(\widetilde P\) implied by the associated martingale \(\widetilde M\) the stochastic process \(X\) is stationary and ergodic. Consider any other solution \((\eta^*, e^*)\) to the eigenfunction problem with implied martingale \(\{ M_t^* \}\). Then
\(\eta^* \ge \tilde \eta\).
If \(X\) is stochastically stable under the change of measure \(Pr^*\) implied by the martingale \(M^*\), then \(\eta^* = \tilde \eta\), \(e^*\) is proportional to \(\tilde e\), and \(M_t^* = \widetilde M_t\) for all \(t = 0, 1, \ldots \).
Proof
First we show that \(\eta^* \ge \tilde \eta\). Write:
Thus,
If \(\tilde \eta > \eta^*\), then
But this equality cannot be true because \(X\) is stochastically stable under \(\widetilde{Pr}\) and \(\frac {e^*}{\tilde e}\) is strictly positive. Therefore, \(\eta^* \ge {\tilde \eta}.\)
Consider next the case in which \(\eta^* > \tilde \eta\). Write
which implies that
Thus,
Suppose that \(\tilde \eta < \eta^*\), then
so that \(X\) cannot be stochastically stable under the \(Pr^*\) measure.
Finally, suppose that \(\tilde \eta = \eta^*\) and that \(\frac {\tilde e(x)}{e^*(x)}\) is not constant. Then
and \(X\) cannot be stochastically stable under the \(Pr^*\) measure.
We will apply these results in a variety of ways in this and subsequent chapters.
So far, we have shown how to construct a factorization of a multiplicative functional from an underlying stochastic model of the process. It turns out that such a factorization can help us understand implications of stochastic equilibrium models for valuations of random payout processes. In addition, such factorizations can help organize empirical evidence in ways that make contact with such stochastic equilibrium asset pricing models.
6.6. Inferences about permanent shocks#
Macroeconomists often study dynamic impacts of shocks to systems of variables measured in logarithms. For example, [Alvarez and Jermann, 2005] suggest looking at asset prices using a multiplicative representation of a cumulative stochastic discount factor, though without the tools provided by this chapter. The additive decomposition derived and analyzed in Chapter Processes with Markovian increments is a convenient tool for models like theirs.
We start with a factorization of a stochastic discount factor process as given in Theorem 6.1.
Take logarithms and form:
This looks like an additive decomposition of the type analyzed in Chapter Processes with Markovian increments, but it is actually different. While \(L^s\) is a multiplicative martingale, \(\log L^s\) is typically a super martingale, but not a martingale. This leads us to write the additive decomposition as:
where \({\widehat L}_t^s\) is an additive martingale. As [Hansen, 2012] argues, a weaker result holds. If \(L^s\) is not degenerate (i.e., not identically equal to one), then \({\widehat L}^s\) is not degenerate, and conversely. A prominent multiplicative martingale component implies a prominent role for permanent shocks in the underlying economic dynamics. A formal probability model lets us link the two representations via the results we have described in this chapter and Chapter Processes with Markovian increments. Example 6.6 provides an example with explicit formulas linking the two representations.
6.7. Empirical counterparts to a factorization of stochastic discount factors#
Consider the stochastic discount factor factorization (6.15) again. The date zero price of a \(t\)-period discount bond is:
Compute the corresponding yield by taking \(1/t\) times minus the logarithm:
Provided that
the limiting yield on a discount bond is \(- \eta^s.\)
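To illustrate, the sketch below treats a hypothetical three-state multiplicative functional as a cumulative stochastic discount factor (the numbers are invented and chosen so that the process decays). Bond prices at maturity \(t\) are computed by iterating the associated matrix, and the implied yields converge to \(-\eta^s\).

```python
import numpy as np

# Hypothetical three-state chain and a hypothetical matrix of log one-period discount factors
P  = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3], [0.1, 0.3, 0.6]])
Gs = np.array([[-0.04, -0.02, -0.05],
               [-0.03, -0.04, -0.01],
               [-0.06, -0.03, -0.04]])      # entries of log(S_{t+1}/S_t) for each transition
Ps = P * np.exp(Gs)

eta_s = np.log(np.max(np.linalg.eigvals(Ps).real))   # principal eigenvalue exponent for S

ones = np.ones(3)
for t in [1, 10, 50, 200]:
    price = np.linalg.matrix_power(Ps, t) @ ones      # E[S_t / S_0 | X_0 = each state]
    print(t, -np.log(price) / t)                      # t-period yields, state by state
print("limiting yield -eta_s:", -eta_s)               # yields converge to this number
```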
Next consider a one-period holding-period return on a \(t\)-period discount bond:
Using stochastic stability and taking limits as \(t\) tends to \(\infty\) gives the limiting holding-period return:
A simple calculation shows that \(R_1^{\infty}\) satisfies the following equilibrium pricing restriction on a one-period return:
These long-horizon limits provide approximations to the eigenvalue for the stochastic discount factor and the ratio of the eigenfunctions. In a model without a martingale component, [Kazemi, 1992] observed that the inverse of this holding-period return is the one-period stochastic discount factor. [Alvarez and Jermann, 2005] extend this insight by showing that the reciprocal reveals the component of the one-period stochastic discount factor net of its martingale component. Within the [Kazemi, 1992] setup, a subjective belief specification, distinct from the baseline probability specification used to represent valuations, could rationalize the martingale component of a cumulative stochastic discount factor process. For instance, the baseline probability specification could be the one that actually generates the data. This distinction between probability measures can be sufficient to induce a martingale component relative to baseline probabilities even though this component is absent when valuations are depicted with the subjective probabilities.
In practice, we only have bond data with a finite payoff horizon, whereas the characterizations of [Kazemi, 1992] and [Alvarez and Jermann, 2005] use bond prices with a limiting payoff horizon. Empirical implementations using such characterizations assume that the observed term structure data have a sufficiently long duration component to provide a plausible proxy for the limiting counterpart.
6.8. Long-term risk-return tradeoff for cash flows#
Following [Hansen and Scheinkman, 2009] and [Hansen et al., 2008], we consider the valuation of stochastic cash flows, \(G\), that are multiplicative functionals. Such cash flows are determinants of prices of both equities and bonds.
We now study long-term limits of prices of such cash flows. In addition to the stochastic discount factor process (6.15), form:
with a corresponding cash-flow return over horizon \(t\):
Note that as a special case, the cash-flow return on a unit date \(t\) cash-flow is:
Define the proportional risk premium on the initial cash-flow return as:
where the third term is minus the logarithm of the riskless cash-flow return for horizon \(t\). To adjust for the investment horizon, we scale by \(1/t\).
The product \(SG\) is itself a multiplicative functional. Let \(\eta^{sg}\) denote its geometric growth component. Then from (6.17), the limiting cash-flow risk compensation is:
This expression resembles a covariance, but it differs from a covariance because we are working with proportional measures of risk compensation for positive payoffs.
Remark 6.2
A cumulative return process \(R\) is a special case of a cash flow. For such a process, \(R_t/R_\tau\) for \(\tau < t\) is a \(t - \tau\) period return for any such \(t\) and \(\tau\). Normalize \(R_0 = 1\) and \(S_0 = 1\). For such a cash flow, \(SR\) is a multiplicative martingale, implying that \(\eta^{sg} = 0\), so that the limiting proportional risk premium is \(\eta^r + \eta^s.\) [Martin, 2012] studies tail behavior of cumulative returns. Since \(SR\) is a martingale bounded from below, it converges almost surely, typically to zero. Since its date zero conditional expectation is one, for long horizons this process necessarily has a fat right tail.
We also investigate the limiting behavior of one-period holding period returns. An empirical asset pricing literature has explored these returns starting with [van Binsbergen et al., 2012]. See [Golez and Jackwerth, 2024] for a recent update of this evidence. Use the factorization of \(SG\) to get
Based as it is on a multiplicative factorization of \(SG\), this typically does not equal the difference between the logarithm of the factorization of \(S\) and the logarithm of the factorization of \(G\). We provide a characterization of the limiting one-period holding-period return for the cash flow by imitating and extending our analysis of a limiting holding-period return for a riskless bond. This gives the following analogue to (6.16):
The eigenvalue and eigenfunction adjustments come from studying \(SG\) instead of \(S\); we also inherit a stochastic growth term \(G_1/G_0\). By multiplying this return by \(S_1/S_0\) we obtain \(N_1^{sg}\), the date one martingale increment for \(SG\). The one-period pricing relation for the cash-flow holding-period return follows immediately.
Finally, suppose that \(L^s = 1\) so that the martingale component of the stochastic discount factor process is degenerate. Then \(SG\) inherits the martingale component of \(G\), implying that
As a consequence, the long-term risk-return tradeoff is zero, since in the limit proportional risk compensation is
6.9. Bounding investor beliefs#
We use the factorization of the cumulative stochastic discount factor to analyze two distinct approaches to drawing inferences about investor beliefs.
6.9.1. Subjective beliefs in the absence of long-term risk#
Suppose that we have data on prices of one-period state-contingent claims. We can use these data to infer the one-period operator, \({\mathbb M}.\) Recall that we represent this operator using a baseline specification of the one-period transition probabilities. One possibility is that the one-period baseline transition probabilities agree with the data-generating process. Rational expectations models equate the transition probabilities used by investors to those that generate the data. Suppose instead that
we endow investors with subjective beliefs that can differ from the baseline specification;
investors think there are no permanent macroeconomic shocks;
investors don’t have risk-based preferences that can induce a multiplicative martingale in a cumulative stochastic discount factor process.[1]
Under these three restrictions, we could identify \(L^s\) as the likelihood ratio for investor beliefs relative to the baseline probability distribution. Thus, the implied martingale component in the cumulative stochastic discount factor identifies the subjective beliefs of investors. Using this change of measure, the limiting long-term risk compensations derived in the previous section are zero. These assumptions allow for the “Ross recovery” of investor beliefs.[2]
6.9.2. Restricting the martingale increment with limited asset market data#
Suppose instead that we assume rational expectations by endowing investors with knowledge of the data generating process. With limited asset market data we cannot identify the martingale component of the cumulative stochastic discount factor process without additional model restrictions. We can, however, obtain potentially useful bounds on the martingale increment. We know that as a stochastic process the implied martingale has some peculiar behavior, but nevertheless the implied probability measure can be well behaved. Consequently, in contrast to [Alvarez and Jermann, 2005], we use the increment as a device to represent conditional probabilities instead of just as a random variable.
There is a substantial literature on divergence measures for probability densities. Relative entropy is an important example. More generally, consider a convex function \(\phi\) that is zero when evaluated at one. The functions \(n\log n\) and \(-\log n\) are examples of such functions. Jensen’s inequality implies that
and equal to zero when \(N_1\) is one, provided that \(N_1\) is a multiplicative martingale increment (has conditional expectation one). This gives rise to a family of \(\phi\) divergences that can be used to assess departures from baseline probabilities. Relative entropy, \(\phi(n) = n \log n\), is an example that is particularly tractable and has been used often. Both \(n \log n\) and \(- \log n\) can be interpreted as expected log-likelihood ratios.
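A quick Monte Carlo check of this inequality, using a hypothetical log-normal increment with expectation one:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical increment with expectation one: N = exp(h W - h^2/2), W standard normal
h = 0.4
N = np.exp(h * rng.standard_normal(1_000_000) - 0.5 * h**2)

for name, phi in [("n log n", lambda n: n * np.log(n)), ("-log n", lambda n: -np.log(n))]:
    print(name, phi(N).mean())    # both divergences are positive; they vanish only at N = 1
```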
One way of assessing the magnitude of \(N_1^L\) is to solve the following problem:
Minimum divergence Problem
subject to:
where \(Y_1\) is a vector of asset payoffs and \(Q_0\) is a vector of corresponding prices.
Recall that the term
can be approximated by the reciprocal of the one-period holding-period return on a long-term bond.
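The sketch below is a minimal, unconditional version of the minimum divergence problem on a hypothetical discrete state space. It assumes that the constraints take the form \(E(N_1 Z_1 Y_1) = Q_0\) together with \(E(N_1) = 1\), where \(Z_1\) stands for the non-martingale part of the one-period stochastic discount factor, proxied by the reciprocal of the holding-period return on a long-term bond; all numbers are invented, and the exact formulation in the text may differ in details.

```python
import numpy as np
from scipy.optimize import minimize

p = np.array([0.25, 0.25, 0.25, 0.25])      # hypothetical baseline probabilities over 4 states
Z = np.array([0.97, 0.96, 0.95, 0.94])      # assumed non-martingale SDF part, state by state
Y = np.array([[1.00, 1.00, 1.00, 1.00],     # payoff of a one-period riskless bond
              [1.10, 1.03, 0.98, 0.90]])    # payoff of a hypothetical risky asset
Q = np.array([0.955, 0.954])                # hypothetical observed prices of the two payoffs

def objective(n):                            # relative entropy E[N log N] under the baseline
    return np.sum(p * n * np.log(np.maximum(n, 1e-12)))

cons = [{"type": "eq", "fun": lambda n: np.sum(p * n) - 1.0}]          # E[N] = 1
for k in range(Y.shape[0]):                                            # pricing constraints
    cons.append({"type": "eq",
                 "fun": lambda n, k=k: np.sum(p * n * Z * Y[k]) - Q[k]})

res = minimize(objective, x0=np.ones(4), bounds=[(0.0, None)] * 4,
               constraints=cons, method="SLSQP")
print("minimized divergence:", res.fun)
print("implied martingale increment N:", res.x)
```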
This approach is an example of partial identification because the vector \(Y_1\) of asset payoffs may not be sufficient to reconstruct all potential one-period asset payoffs and prices. This could be because data limitations lead an econometrician to use incomplete data on financial markets.
Remark 6.3
To avoid having to estimate conditional expectations, applications often study an unconditional counterpart to this problem. In such situations, conditioning can be brought in through the “back door” by scaling payoffs and prices with variables in the conditioning information set; for example, see [Hansen and Singleton, 1982] and [Hansen and Richard, 1987]. See [Bakshi and Chabi-Yo, 2012] and [Bakshi et al., 2017] for some related implementations.
Remark 6.4
[Alvarez and Jermann, 2005] use \(- {\mathbb E}\left( \log S_1 - \log S_0 \right) \) as the objective to be minimized. Notice that
where the term in square brackets is the logarithm of the limiting holding-period bond return. The criterion thus equals that in the minimum divergence problem, but with an additive translation. Rewrite the constraints as:
Thus we are left with an equivalent minimization problem in which the translation term is subtracted off to obtain the bound of interest.
Applied researchers have sometimes omitted the first constraint, which weakens the bound. [Chen et al., 2024] isolate a potentially problematic aspect of monotone decreasing divergences because they can fail to detect certain limiting forms of deviations from baseline probabilities.
Remark 6.5
[Chen et al., 2020] propose extensions of the one-period divergence measures to multi-period counterparts that remain tractable and enlightening. Their method for accommodating conditioning information for bounding such divergences has a direct extension to the problem considered here.
Remark 6.6
If the martingale component of the stochastic discount factor is identically one, then a testable implication is: