10. Representing Marginal Valuation#

Authors: Lars Peter Hansen and Thomas J. Sargent

Date: November 2024 \(\newcommand{\eqdef}{\stackrel{\text{def}}{=}}\)

10.1. Introduction#

Partial derivatives of value functions measure marginal valuations and appear in first-order conditions of Markov decision problems. They feature prominently in max-min formulations of robust control problems because robustly optimal controls depend on partial derivatives of value functions. They can also be used to measure losses from suboptimal choices. They are pertinent both for individual decision problems and for social evaluations. Special cases of these methods have been used by [Hansen et al., 1999] to assess the impact of uncertainty on investment and equilibrium outcomes, by [Alvarez and Jermann, 2004] to evaluate the welfare consequences of uncertainty, and by [Barnett et al., 2020] to characterize the social cost of carbon emissions in the presence of climate change and damage uncertainties. Specifically, the social cost of global warming is an important contributor to calculations of the social cost of carbon emissions, which are often inferred from the marginal impacts of fossil fuel emissions on climate indicators, measured as potentially uncertain damages to future economic opportunities.

By importing insights about stochastic nonlinear impulse response functions and asset pricing methods for valuing uncertain cash flows, this chapter provides a more general and unifying construction of representations of partial derivatives and decompositions of them. The methods we discuss allow researchers to “deconstruct” quantitative findings. They contribute to a broader agenda of refining uncertainty quantification methods in more insightful ways. Dynamic, stochastic equilibrium models often involve several moving parts. By deconstructing the implications of such models, we can “open the black box” and provide simplified explanations of model outcomes. The asset pricing perspective allows us to think in terms of state-dependent discounting and a stochastic flow reminiscent of stochastic payoffs to be valued. Moreover, this perspective allows us to partition the stochastic flow contributions based on different channels that emerge in the dynamic model specification.

10.2. Discrete time#

We first consider a discrete-time specification. As in the previous chapter, we start with the Markov process

(10.1)#\[\begin{split}X_{t+1} = \psi(X_t, W_{t+1}), \\ Y_{t+1} - Y_t = \kappa(X_t, W_{t+1}),\end{split}\]

where there are \(n\) components of \(X,\) \(Y\) is scalar, and \(W\) is \(k\) dimensional. Recall the variational processes studied previously:

(10.2)#\[\begin{split}\Lambda_{t+1} = \frac {\partial \psi}{\partial x'} (X_t, W_{t+1}) \Lambda_t \\ \Delta_{t+1} - \Delta_t = \frac {\partial \kappa}{\partial x}(X_t, W_{t+1}) \cdot \Lambda_t .\end{split}\]

We use stochastic impulse responses to provide an “asset pricing” representation of partial derivatives of a value function with respect to one of the components of \(X_0\). Consider a value function that satisfies:

(10.3)#\[\begin{split}\begin{align} V(X_t) + Y_t = & \exp(-\delta) \mathbb{E} \left[ V(X_{t+1}) + Y_{t+1} \mid X_t \right] \\ & + [1 - \exp(-\delta)] \left[ U(X_t) + Y_t \right]. \end{align}\end{split}\]

This value function need not coincide with a solution to an optimal control problem. It could just be the evaluation of some non-optimal decision rule associated with a stochastic equilibrium. Marginal valuation is still used as part of a local policy analysis.

Differentiate both sides of equation (10.3) with respect to \(X_t\) and \(Y_t\) and form dot products with the appropriate variational counterparts:

(10.4)#\[\begin{split}\begin{align} \frac{\partial V}{\partial x}(X_t) \cdot \Lambda_t + \Delta_t = & \exp(-\delta) \mathbb{E}\left[ \frac{\partial V}{\partial x}(X_{t+1}) \cdot \Lambda_{t+1} + \Delta_{t+1} \mid X_t, \Lambda_t, \Delta_t \right] \\ & + [1 - \exp(-\delta)]\left[\frac{\partial U}{\partial x}(X_t) \cdot \Lambda_t + \Delta_t \right] \end{align} \end{split}\]

View equation (10.4) as a stochastic difference equation and solve it forward for \(\frac{\partial V}{\partial x}(X_t) \cdot \Lambda_t + \Delta_t:\)

(10.5)#\[\begin{align*} & \frac{\partial V}{\partial x}(X_t) \cdot \Lambda_t + \Delta_t = \cr & [1 - \exp(-\delta)]\sum_{\tau = 0}^\infty \mathbb{E}\left( \exp(-\tau \delta) \left[\frac{\partial U}{\partial x}(X_{t+\tau}) \cdot \Lambda_{t+\tau} + \Delta_{t + \tau}\right] \mid X_t, \Lambda_t, \Delta_t \right) \end{align*}\]

Initialize \(\Lambda_0 = \mathrm{e}_i\), where \(\mathrm{e}_i\) is a coordinate vector with a one in position \(i\), and \(\Delta_0 = 0.\) We have now represented the partial derivative of the value function as:

\[\begin{align} & \frac{\partial V}{\partial x_i}(x) = \cr & [1 - \exp(-\delta)]\sum_{t = 0}^\infty \exp(-t \delta) \mathbb{E}\left[ \frac{\partial U}{\partial x}(X_{t}) \cdot \Lambda_{t} + \Delta_t \mid X_0 = x, \Lambda_0 = \mathrm{e}_i, \Delta_0 = 0 \right] \end{align} \]

This resembles an asset pricing formula in which

\[\begin{split}\left\{\exp(-\delta t)\begin{bmatrix} \Lambda_t \\ \Delta_t \end{bmatrix} : t \ge 0 \right\}\end{split}\]

acts as a vector stochastic discount factor process and the marginal contribution

\[\begin{split}\left\{ [1 - \exp(-\delta)]\begin{bmatrix} \frac{\partial U}{\partial x}(X_{t}) \\ \Delta_t \end{bmatrix} : t \ge 0\right\},\end{split}\]

acts as a vector stochastic cash flow process. The stochastic impulse response tells the marginal state vector response at date \(t\) to changing the \(i^{th}\) state vector component at date zero, while the vector of marginal cash flows at date \(t\) measures the impact on utility of a marginal change in the date \(t\) state vector.
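The discounted-sum representation above can be checked numerically. The following sketch uses a hypothetical scalar linear model — \(X_{t+1} = a X_t + W_{t+1}\), \(Y_{t+1} - Y_t = b X_t\), utility flow \(U(x) = u x\) — which is an illustrative assumption, not the chapter's application. In this linear case the variational recursions are deterministic, so no Monte Carlo averaging is needed and the truncated sum can be compared with a closed-form geometric series.

```python
import numpy as np

# A minimal sketch of the discounted-sum representation of dV/dx in
# discrete time.  The model here is hypothetical: X_{t+1} = a X_t + W_{t+1},
# Y_{t+1} - Y_t = b X_t, with utility flow U(x) = u * x.  The variational
# recursions (10.2) then reduce to Lambda_{t+1} = a Lambda_t and
# Delta_{t+1} - Delta_t = b Lambda_t, both deterministic.

a, b, u, delta = 0.9, 0.5, 1.0, 0.05

def marginal_value(T=2000):
    """Truncated version of the infinite sum representing dV/dx."""
    Lam, Delta, total = 1.0, 0.0, 0.0    # Lambda_0 = e_1, Delta_0 = 0
    for t in range(T):
        total += np.exp(-delta * t) * (u * Lam + Delta)
        Delta += b * Lam                 # Delta update from (10.2)
        Lam *= a                         # Lambda update from (10.2)
    return (1 - np.exp(-delta)) * total

# Closed form: Lambda_t = a^t and Delta_t = b (1 - a^t) / (1 - a),
# so the sum is a pair of geometric series.
r = np.exp(-delta)
closed = (1 - r) * (u / (1 - r * a)
                    + (b / (1 - a)) * (1 / (1 - r) - 1 / (1 - r * a)))
print(abs(marginal_value() - closed))    # tiny truncation error
```

The truncation error is of order \(e^{-\delta T}\), which is negligible for the horizon chosen here.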

Remark 10.1

Sometimes it is convenient to apply summation by parts:

\[\begin{split}\begin{align*} & [1 - \exp(-\delta)]\sum_{\tau = 0}^\infty \mathbb{E}\left( \exp(-\tau \delta) \Delta_{t + \tau} \mid X_t, \Lambda_t, \Delta_t \right) \\ & = \sum_{\tau = 1}^\infty \mathbb{E}\left[ \exp(-\tau \delta) \left(\Delta_{t + \tau} - \Delta_{t+\tau -1} \right) \mid X_t, \Lambda_t, \Delta_t \right] + \Delta_t \\ & = \sum_{\tau = 1}^\infty \mathbb{E}\left[ \exp(-\tau \delta) \frac{\partial \kappa}{\partial x}(X_{t+\tau-1}, W_{t+\tau})\cdot \Lambda_{t+ \tau - 1} \mid X_t, \Lambda_t, \Delta_t \right] + \Delta_t . \end{align*}\end{split}\]

Substituting into (10.5) gives:

\[\begin{split}\begin{align} &\frac{\partial V}{\partial x}(X_t) \cdot \Lambda_t + \Delta_t = \cr & [1 - \exp(-\delta)]\sum_{\tau = 0}^\infty \mathbb{E}\left[ \exp(-\tau \delta) \frac{\partial U}{\partial x}(X_{t+\tau}) \cdot \Lambda_{t+\tau} \mid X_t, \Lambda_t \right] \\ & + \sum_{\tau = 1}^\infty \mathbb{E}\left[ \exp(-\tau \delta) \frac{\partial \kappa}{\partial x}(X_{t+\tau-1}, W_{t+\tau})\cdot \Lambda_{t+ \tau - 1} \mid X_t, \Lambda_t \right] + \Delta_t. \end{align}\end{split}\]
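The summation-by-parts identity in Remark 10.1 is easy to verify numerically. The sketch below uses an arbitrary made-up \(\Delta\) path (a random walk) and a truncation long enough that the discarded tail term is negligible; nothing here is specific to the chapter's model.

```python
import numpy as np

# Numerical check of the summation-by-parts identity in Remark 10.1:
# (1 - e^{-delta}) * sum_{tau>=0} e^{-tau delta} Delta_tau
#   = sum_{tau>=1} e^{-tau delta} (Delta_tau - Delta_{tau-1}) + Delta_0.
# Delta here is an arbitrary random-walk path, purely for illustration.

delta, T = 0.1, 500
rng = np.random.default_rng(1)
Delta = 0.3 + np.concatenate([[0.0], np.cumsum(rng.standard_normal(T))])

lhs = (1 - np.exp(-delta)) * sum(np.exp(-t * delta) * Delta[t]
                                 for t in range(T + 1))
rhs = sum(np.exp(-t * delta) * (Delta[t] - Delta[t - 1])
          for t in range(1, T + 1)) + Delta[0]
print(abs(lhs - rhs))   # differs only by the truncated tail term
```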

10.3. Continuous time#

A continuous-time counterpart allows us to draw a distinction between small shocks (Brownian increments) and large shocks (Poisson jumps). Formally, we consider a continuous-time specification with Brownian motion shocks, in other words, diffusion dynamics. We then allow for jumps by treating them as terminal conditions under which we impose continuation values conditioned on a jump having taken place. The possibility of a jump contributes to the value function computation. After developing the basic approach, we extend the analysis to include robustness in the valuation.

10.3.1. Diffusion dynamics#

As a part of a more general derivation, we begin with state dynamics modeled as a Markov diffusion:

\[\begin{split}\begin{align*} dX_t & = \mu(X_t) dt + \sigma(X_t) dW_t \\ dY_t & = \nu(X_t) dt + \varsigma(X_t) \cdot dW_t. \end{align*}\end{split}\]

As for discrete time, these dynamics might or might not be outcomes of an optimization problem.

Using the variational process construction in the previous chapter, recall that

\[d\Lambda_{t}^i = \left(\Lambda_t\right)'\frac{\partial \mu_i}{\partial x}(X_t) dt + \left({\Lambda_t}\right)'\frac{\partial \sigma_i}{\partial x}(X_t) dW_t.\]

With the appropriate stacking, the drift for the composite process \((X,\Lambda)\) is:

(10.6)#\[\begin{split}\mu^a(x,\lambda) \overset{\text{def}}{=} \begin{bmatrix} \mu(x) \\ \lambda'{\frac {\partial \mu_1} {\partial x} }(x) \\ \vdots \\ \lambda'{\frac {\partial \mu_n} {\partial x} }(x) \end{bmatrix},\end{split}\]

and the composite matrix coefficient on \(dW_t\) is given by

(10.7)#\[\begin{split}\sigma^a(x,\lambda) \overset{\text{def}}{=} \begin{bmatrix} \sigma(x) \\ \lambda'\frac {\partial \sigma_1 }{\partial x}(x)\\ \vdots \\ \lambda' \frac {\partial \sigma_n }{\partial x}(x) \end{bmatrix}.\end{split}\]

Similarly, \(\Delta\) is the scalar variational process associated with \(Y\) with evolution

\[d \Delta_t = \Lambda_t \cdot \frac {\partial \nu}{\partial x} (X_t)dt + {\Lambda_t}' \frac {\partial \varsigma'}{\partial x}(X_t) dW_t. \]

10.3.2. An initial representation of a partial derivative#

Consider the evaluation of discounted utility in which the instantaneous contribution is \(U(x)\), where \(x\) is the realization of a state vector \(X_t\). The value function \(V\) satisfies a Feynman-Kac (FK) equation:

(10.8)#\[\begin{align} 0 = & \delta \left[U(x) + y\right] - \delta \left[V(x) + y \right] + \mu(x) \cdot \frac {\partial V}{\partial x}(x) + \nu(x) \cr &+ {\frac 1 2 }{\rm trace}\left[\sigma(x)' \frac {\partial^2 V}{\partial x \partial x'}(x) \sigma(x) \right]. \end{align}\]

As in the discrete-time example, we want to represent

\[V_{x_i}(x) = {\frac {\partial V}{\partial x_i}}(x) \]

as an expected discounted value of marginal impulse responses of future \(X_t\) to a marginal change in the \(i^{th}\) coordinate of \(x.\)

By differentiating Feynman-Kac equation (10.8) with respect to each coordinate, we obtain a vector of equations, one for each state variable. We then form the dot product of this vector system with \(\lambda\) to obtain a scalar equation of particular interest. The resulting equation is a Feynman-Kac equation for the scalar function:

\[\lambda \cdot \frac {\partial V}{\partial x}\]

as established in the Appendix. Given that the equation to be solved involves both \(\lambda\) and \(x\), this equation uses the diffusion dynamics for the joint process \((X,\Lambda)\).

The solution to this Feynman-Kac equation takes the form of a discounted expected value:

(10.9)#\[\begin{align} & \frac {\partial V}{\partial x}(X_0) \cdot \Lambda_0 + \Delta_0 \cr &= \delta \int_0^\infty \exp( - \delta t ) {\mathbb E} \left[ \frac {\partial U}{\partial x} (X_{t}) \cdot \Lambda_{t} + \Delta_t \mid X_0, \Lambda_0, \Delta_0 \right] dt. \end{align} \]

By initializing \(\Lambda_0\) to be a coordinate vector with zeros in all entries but entry \(i\) and setting \(\Delta_0 = 0\), we obtain the formula we want, which gives the partial derivative as a discounted present value using \(\delta\) as the discount rate. The contribution \(\Lambda_{t}\) is the marginal response of the date \(t\) state vector to a marginal change in the \(i^{th}\) component of the state vector at date zero. The marginal change in the date \(t\) state vector induces a marginal reward at date \(t\):

\[\delta \left[ \frac {\partial U}{\partial x} (X_{t})\cdot \Lambda_{t} + \Delta_t \right],\]

which provides us with a useful interpretation as an asset price. The process \(\Lambda\) gives a vector counterpart to a stochastic discount factor process, and \(\delta \frac {\partial U}{\partial x} (X_{t})\) along with \(\delta \Delta_t\) gives the counterpart to a cash flow to be valued.

One application of representation (10.9) computes the discounted impulse responses:

\[\delta \exp( - \delta t ) {\mathbb E} \left[ \frac {\partial U}{\partial x_j} (X_{t}) \Lambda_{t}^j \mid X_0, \Lambda_0, \Delta_0 \right] \]

for \(t \ge 0\) and for \(j=1,2,...,n\) along with

\[\delta \exp( - \delta t ) {\mathbb E} \left[ \Delta_t \mid X_0, \Lambda_0, \Delta_0 \right] \]

for \(t \ge 0\) as an intertemporal, additive decomposition of the marginal valuation of one of the state variables as determined by an initialization of \(\Lambda_0.\)
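To illustrate representation (10.9) and this decomposition, the following sketch evaluates the discounted-response integral for a hypothetical scalar Ornstein-Uhlenbeck model, \(dX_t = -\kappa X_t dt + \sigma dW_t\), \(dY_t = \nu X_t dt\), with \(U(x) = x\) — a made-up example, not one from the chapter. Because the drift is linear and the diffusion coefficient constant, \(\Lambda_t = e^{-\kappa t}\) and \(\Delta_t\) are deterministic, so the numerical integral can be checked against the closed form \((\delta + \nu)/(\delta + \kappa)\).

```python
import numpy as np

# Sketch: evaluate the discounted-response integral in (10.9) for a
# hypothetical Ornstein-Uhlenbeck model dX = -kappa X dt + sigma dW,
# dY = nu X dt, with U(x) = x.  The variational dynamics reduce to
# d Lambda = -kappa Lambda dt and d Delta = nu Lambda dt, both
# deterministic, so a simple time-stepping loop suffices.

kappa, nu, delta = 0.5, 0.3, 0.05
dt, T = 0.005, 300.0

def discounted_response():
    n = int(T / dt)
    Lam, Delta, total = 1.0, 0.0, 0.0    # Lambda_0 = 1, Delta_0 = 0
    for i in range(n):
        t = i * dt
        total += np.exp(-delta * t) * (Lam + Delta) * dt
        Delta += nu * Lam * dt           # d Delta = nu Lambda dt
        Lam += -kappa * Lam * dt         # d Lambda = -kappa Lambda dt
    return delta * total

# Analytic value: Lambda_t = e^{-kappa t}, Delta_t = nu(1 - e^{-kappa t})/kappa,
# so delta * integral e^{-delta t}(Lambda_t + Delta_t) dt = (delta+nu)/(delta+kappa).
closed = (delta + nu) / (delta + kappa)
print(discounted_response(), closed)     # agree up to Euler and Riemann error
```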

Remark 10.2

Representations similar to (10.9) appear in sensitivity analyses of options prices. See [Fournie et al., 1999].

10.3.3. Robustness#

We next consider a general class of drift distortions that can help us study model misspecification concerns. We initially explore the consequences of an exogenously specified drift distortion. After that, we show how such a distortion can emerge endogenously as a decision-maker's response to concerns about model misspecification.

For diffusions, we entertain distortions to the Brownian increment. Instead of \(W\) being a multivariate Brownian motion, we allow it to have a drift \(H\) under a change in the probability distribution. We index the alternative probability specifications by their corresponding drift processes \(H\). Locally,

\[dW_t = H_t dt + dW^H_t\]

where \(W^H\) is a Brownian motion under the \(H\) probability. Given that both the distribution parameterized by \(H\) and the baseline distribution for the increment are normal with an identity matrix as the local covariance matrix, the local measure of relative entropy is given by the quadratic term:

\[{\frac 1 2} H_t \cdot H_t .\]

See [James, 1992], [Anderson et al., 2003], and [Hansen et al., 2006] for further discussions.

Initially, we introduce an exogenously specified drift distortion process \(H\) into the diffusion dynamics:

\[\begin{split}\begin{align*} &d X_t = \mu(X_t)dt + \sigma(X_t) H \left( \bar{X}_t \right) dt + \sigma(X_t) dW_t^H \\ &d \overline{X}_t = \bar{\mu}\left( \overline{X}_t \right) dt + \bar{\sigma}\left(\overline{X}_t \right) dW_t^H. \end{align*}\end{split}\]

By imitating our earlier analysis, we can associate with this joint system \((X, \overline{X})\) a composite variational process \((\Lambda, \overline{\Lambda}).\) To study endogenous state variable sensitivity, we are especially interested in the \(\Lambda\) component, not the \(\overline{\Lambda}\) process. Notice that if we set \(\overline{\Lambda}_0 = 0\), then \(\overline{\Lambda}_t = 0\) for \(t > 0.\)

The evolution for the variational process component \(\Lambda\) becomes:

\[d\Lambda_{t}^i = \left(\Lambda_t\right)'\left[\frac{\partial \mu_i}{\partial x}(X_t) + \frac{\partial \sigma_i}{\partial x}(X_t) H\left( \overline{X}_t \right) \right]dt + \left(\Lambda_t\right)'\frac{\partial \sigma_i}{\partial x}(X_t) dW_t^H.\]

Importantly, there is no contribution from differentiating \(H\) with respect to \(x\) since \(H\) depends only on the \(\overline{X}\) process.

Remark 10.3

While we focus on Markov forms of misspecification, this can be relaxed. The misspecification that will most concern a decision maker will have a Markov representation. That helps explain why we make a Markov assumption here.

10.3.4. Value function derivatives under robustness#

We now let the flow term be:

\[\delta \left[ U(x) + y \right] +{\frac \xi 2} \vert H\left( \bar{x} \right) \vert^2.\]

This implies a value function \({\overline V}(x,{\bar x}) + y.\)

Consider another value function that is sometimes used to compute a robustness adjustment to valuation; it coincides with value functions used in robust control problems. This value function solves the HJB equation:

(10.10)#\[\begin{align} 0 = \min_h & \hspace{.2cm} \delta \left[U(x) + y\right] - \delta \left[ V(x) + y \right]+ {\frac \xi 2}|h|^2 + \left[\mu(x) +\sigma(x)h \right] \cdot V_x(x) \cr & + \nu(x) + \varsigma(x) \cdot h + {\frac 1 2} {\rm trace} \left[ \sigma (x)' V_{xx}(x) \sigma(x) \right]. \end{align}\]

The first-order conditions for \(h\) in equation (10.10) imply that

\[\sigma(x)'V_x(x) + \varsigma(x) + \xi h = 0 \]

The solution, \(V,\) of the HJB equation satisfies the Feynman-Kac equation that emerges after we substitute the minimizing \(h\) into the HJB equation:

(10.11)#\[\begin{split}\begin{align} 0 =& \hspace{.2cm} \delta U(x) - \delta V(x) + {\frac \xi 2}|h^*(x) |^2 + \left[\mu(x) +\sigma(x)h^*(x) \right] \cdot V_x(x) \\ & + \left[\nu(x) + \varsigma(x)\cdot h^*(x) \right] + {\frac 1 2} {\rm trace} \left[ \sigma (x)' V_{xx}(x) \sigma(x) \right] , \end{align}\end{split}\]

where

(10.12)#\[h^*(x) = - \frac 1 \xi \left[ \sigma'(x) V_x(x) + \varsigma(x) \right].\]
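Formula (10.12) is straightforward to code. The sketch below uses made-up linear ingredients for \(\sigma\), \(V_x\), and \(\varsigma\) purely to illustrate the shapes involved; in an application these would come from a solved HJB equation.

```python
import numpy as np

# Sketch of the worst-case drift formula (10.12):
#   h*(x) = -(1/xi) [ sigma(x)' V_x(x) + varsigma(x) ].
# The ingredient functions below are hypothetical placeholders.

def worst_case_drift(x, sigma_fn, Vx_fn, varsigma_fn, xi):
    # sigma(x): n x k matrix, V_x(x): n-vector, varsigma(x): k-vector
    return -(sigma_fn(x).T @ Vx_fn(x) + varsigma_fn(x)) / xi

# Two states, one shock; all ingredients made up for illustration.
sigma_fn = lambda x: np.array([[0.2], [0.1]])
Vx_fn = lambda x: np.array([1.0, 0.5])
varsigma_fn = lambda x: np.array([0.3])

h = worst_case_drift(np.zeros(2), sigma_fn, Vx_fn, varsigma_fn, xi=2.0)
print(h)   # a larger xi (more confidence in the baseline) shrinks h toward 0
```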

Consider an exogenously specified drift distortion as in the previous subsection where \(H = h^*\) and the \(\bar X\) dynamics satisfy a consistency requirement:

\[\begin{split}\begin{align} {\bar \mu }(x) &= \mu(x) + \sigma(x) H(x) \\ {\bar \sigma}(x) &= \sigma(x) \end{align}\end{split}\]

and \({\bar X}_0 = X_0.\)

We now show that when \( H(\bar x) = h^*( \bar x ),\) it is also true that

(10.13)#\[\begin{split}\begin{align} V(x) & = {\overline V}(x,x) \\ V_x(x) & = {\overline V}_x(x,x) \end{align}\end{split}\]

Differentiate equation (10.11) with respect to \(x\):

\[\begin{split}\begin{align} 0 = & - \delta V_x + \delta U_x + V_{xx}\left(\mu +\sigma h^* \right) + (\mu_x)'V_x + {\rm{mat}} \left\{ \left(\frac {\partial \sigma_i} {\partial x} \right) h^* \right\}'V_x \cr & + \frac{\partial \nu}{\partial x} + \frac {\partial \varsigma'} {\partial x} h^* \\ & + {\frac \partial {\partial x}} \left[{\frac 1 2} {\rm trace} \left( \sigma' V_{xx} \sigma \right) \right] \end{align}\end{split}\]

where \(\rm{mat}\) denotes a matrix formed by stacking the column arguments. (This expression uses the first-order conditions for \(h\) and an “Envelope Theorem” to cancel some terms.) Take corresponding derivatives of \({\overline V}\) with respect to the first argument and then substitute \(x={\bar x}\) to obtain (10.13) when we set \(H(x) = h^*(x).\)

Note that it follows from the second equation in (10.13) that

\[V_{xx}(x) = {\overline V}_{xx}(x,x) + {\overline V}_{x\bar x}(x,x) \]

Remark 10.4

We can drop \(\frac{\xi}{2} |H(\overline{X}_t)|^2\) from the flow term that we used to construct \(\overline{V}\) and still obtain the second equality in (10.13) involving first-derivatives of value functions.

We can now compute representations that we have been seeking by simulating under the endogenously determined worst-case probability specification; we can obtain decompositions of various contributions over time and state vector components based on:

(10.14)#\[\begin{align*} &\frac{\partial \overline{V}}{\partial x}(X_0, \overline{X}_0) \cdot \Lambda_0 + \Delta_0 \cr &= \delta \int_0^\infty \exp(-\delta t) \widetilde{\mathbb{E}} \left[ \frac{\partial U}{\partial x}(X_{t}) \cdot \Lambda_{t} + \Delta_t \mid X_0, \bar{X}_0, \Lambda_0, \Delta_0 \right] dt \end{align*}\]

where we set \(\overline{X}_0 = X_0\), \(\Lambda_0\) equal to one of the coordinate vectors, and \(\Delta_0 = 0.\) The mathematical expectation, \(\widetilde{\mathbb{E}}\), is computed under the worst-case stochastic evolution obtained by imposing \(H(\overline{X}_t) = h^*(X_t).\)

Remark 10.5

While we demonstrated that we can treat a drift distortion as exogenous to the original state dynamics, for some applications we will want to view it as a change in the endogenous dynamics reflected in (10.12).

Remark 10.6

By construction, along a simulated path, \(\overline{X}_t = X_t\), which we impose in our numerical calculations. As a consequence, it suffices to simulate \((X,\Lambda)\).

10.3.5. Allowing IES to differ from unity#

Let \(\rho\) be the inverse of the intertemporal elasticity of substitution for a recursive utility specification. The utility recursion is now:

\[\left(\frac{\delta}{1-\rho}\right)\left(\exp\left[(1-\rho)\left[U(X_t)+Y_t\right]\right]\exp\left[(\rho-1)\left[\overline{V}(X_t,\bar{X}_t)+Y_t\right]\right]-1\right) + \overline{\mu}_{v,t} = 0\]

where \(\overline{\mu}_{v,t}\) is the local mean of \(\overline{V}(X,\bar{X}) + Y\) with the robust adjustment discussed above. Compute:

\[\begin{split}\begin{align*} &\frac{\partial}{\partial x}\left(\frac{\delta}{1-\rho}\right)\left(\exp\left[(1-\rho)\left[U(x)+y\right]\right]\exp\left[(\rho-1)\left[\overline{V}(x,\overline{x})+y\right]\right]-1\right) \\ &= \delta\exp\left[(1-\rho)\left[U(x)-\overline{V}(x,\overline{x})\right]\right]\left[\frac{\partial U}{\partial x}(x)-\frac{\partial\overline{V}}{\partial x}(x,\overline{x})\right] \end{align*}\end{split}\]

With this computation, we modify the previous formulas by replacing the subjective discount factor, \(\exp(-t\delta),\) with

\[D_t \eqdef \exp\left(-\int_0^t\delta\exp\left[(1-\rho)\left[U(X_\tau)-\overline{V}(X_\tau,\overline{X}_\tau)\right]\right]d\tau\right).\]

Thus the instantaneous discount rate is now state dependent: it depends both on how current utility compares to the continuation value and on whether \(\rho\) is greater or less than one. When current utility exceeds the continuation value, the discount rate is scaled up when \(\rho\) exceeds one and scaled down when \(\rho\) is less than one. The instantaneous flow term, \(\delta\frac{\partial U}{\partial x}(X_t),\) is replaced with

\[ \delta\exp\left[(1-\rho)\left[U(X_t)-\overline{V}(X_t,\overline{X}_t)\right]\right]\frac{\partial U}{\partial x}(X_t).\]

Combining these contributions gives:

\[\begin{align*} & \frac {\partial {\overline V}}{\partial x}\left( X_0, {\overline X}_0 \right) \cdot \Lambda_0 + \Delta_0 = \cr & \int_0^\infty \widetilde{\mathbb E} \left[ D_t \, \delta \exp\left[(1-\rho)\left[U(X_t)-{\overline V}\left( X_t, {\overline X}_t \right) \right]\right]\left[\frac{\partial U}{\partial x}(X_t) \cdot \Lambda_t + \Delta_t \right] \mid X_0, {\overline X}_0, \Lambda_0, \Delta_0 \right] dt. \end{align*} \]
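Under \(\rho \ne 1\) the discount factor \(D_t\) must be accumulated along a simulated path. A minimal sketch, assuming hypothetical arrays `U_path` and `V_path` that hold \(U(X_t)\) and \(\overline{V}(X_t, \overline{X}_t)\) on an evenly spaced time grid:

```python
import numpy as np

# Sketch: accumulate the state-dependent discount factor
#   D_t = exp( -int_0^t delta * exp[(1-rho)(U - Vbar)] dtau )
# by a left-endpoint Riemann sum on a grid of step dt.  U_path and
# V_path are hypothetical simulated paths, not output of the chapter.

def discount_path(U_path, V_path, delta, rho, dt):
    rates = delta * np.exp((1.0 - rho) * (U_path - V_path))
    # cumsum includes the current grid point, so entry i approximates
    # D at time (i + 1) * dt
    return np.exp(-np.cumsum(rates) * dt)

# Sanity check: with U = Vbar the rate collapses to delta, so
# D at grid point i equals exp(-delta * (i + 1) * dt).
dt = 0.01
D = discount_path(np.zeros(5), np.zeros(5), delta=0.05, rho=1.5, dt=dt)
print(D)
```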

Remark 10.7

When we conduct simulations, we can impose that \(\overline{X}_t = X_t\) along with (10.13), implying that

\[\overline{V}\left(X_\tau,\overline{X}_\tau\right) = V(X_\tau).\]

10.3.6. Jumps#

We study a pre-jump functional equation in which post-jump value functions serve as continuation values. We allow multiple types of jumps, each with its own state-dependent intensity. We denote the intensity of a jump of type \(\ell\) by \(\mathcal{J}^\ell(x)\); the corresponding continuation value after a jump of type \(\ell\) has occurred is \(V^\ell(x)+y\). In applications, we compute the post-jump continuation values \(V^\ell\) as components of a complete model solution. To simplify the notation, we impose \(\rho = 1,\) but it is straightforward to incorporate the \(\rho \ne 1\) extension discussed in the previous subsection.

As in [Anderson et al., 2003], an HJB equation that adds concerns about robustness to misspecifications of jump intensities includes a robust adjustment to the intensities. The minimizing objective and constraints are separable across jumps. Thus we solve:

\[\min_{g^\ell} \mathcal{J}^\ell \left[ g^\ell \left(V^\ell - V\right) + \xi \left( 1 - g^\ell + g^\ell \log g^\ell \right)\right]\]

for \(\ell = 1,2, ..., L\), where \(g^\ell \ge 0\) alters the intensity of type \(\ell,\) and the term

\[\mathcal{J}^\ell\left[1 - g^\ell + g^{\ell}\log g^\ell\right]\]

measures the relative entropy of jump intensity specifications.

The minimizing \(g^{\ell}\) is

\[g^{\ell*} = \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\]

with a minimized objective given by

\[\begin{align} & \exp \left[ - \frac 1 \xi \left(V^\ell - V\right) \right] \left(V^\ell - V\right) + \xi - \xi \exp \left[ - \frac 1 \xi \left(V^\ell - V\right) \right] \cr & - \left(V^\ell - V\right)\exp \left[ - \frac 1 \xi \left(V^\ell - V\right) \right] \cr & = \xi \left(1- \exp \left[ - \frac 1 \xi \left(V^\ell - V\right) \right]\right). \end{align}\]

The minimized objective

(10.15)#\[\xi \left(1- \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\right)\]

is increasing and concave in the value function difference: \(V^\ell - V\). A gradient inequality for a concave function implies that

\[\xi \left(1- \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\right) \le V^\ell - V.\]
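The minimizer, the minimized objective, and the gradient inequality above are easy to check numerically. A small sketch, in which the value gap \(V^\ell - V = 0.8\) is an arbitrary illustrative number:

```python
import numpy as np

# Sketch: robust jump-intensity distortion g* = exp(-(V_l - V)/xi) and
# the minimized objective xi * (1 - exp(-(V_l - V)/xi)) from (10.15).
# The gap (V_l - V) below is an arbitrary number for illustration.

def g_star(gap, xi):
    return np.exp(-gap / xi)

def robust_adjustment(gap, xi):
    return xi * (1.0 - np.exp(-gap / xi))

gap = 0.8
for xi in (0.5, 1.0, 10.0):
    adj = robust_adjustment(gap, xi)
    assert adj <= gap       # the gradient inequality for a concave function
    print(xi, g_star(gap, xi), adj)

# As xi grows (less concern about misspecification), the adjustment
# approaches the undistorted value gap V_l - V itself.
print(robust_adjustment(gap, 1e6))   # approximately 0.8
```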

Remark 10.8

To deduce the formula for relative entropy with jumps, consider a discrete-time approximation in which the probability of a jump of type \(\ell\) over a time interval of length \(\epsilon\) is (approximately) \(\epsilon{\mathcal J}^\ell g^\ell\) and the probability of not jumping is \(1 - \epsilon{\mathcal J}^\ell g^\ell\), where \(g^\ell = 1\) at the baseline probability specification. The approximation becomes good as \(\epsilon\) declines to zero. The corresponding (approximate) relative entropy is

\[\begin{aligned} & \left(\log \epsilon + \log {\mathcal J}^\ell + \log g^\ell - \log \epsilon - \log {\mathcal J}^\ell \right) \epsilon {\mathcal J}^\ell g^\ell \cr & + \left[ \log \left( 1 - \epsilon {\mathcal J}^\ell g^\ell \right) - \log \left( 1 - \epsilon {\mathcal J}^\ell \right) \right] \left( 1 - \epsilon g^\ell {\mathcal J}^\ell \right) \end{aligned}\]

Differentiate this expression with respect to \(\epsilon\) and evaluate the derivative at \(\epsilon = 0\) to obtain:

\[{\mathcal J}^\ell g^\ell \log g^\ell - {\mathcal J}^\ell g^\ell + {\mathcal J}^\ell = {\mathcal J}^\ell \left( g^\ell \log g^\ell - g^\ell +1\right). \]
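A finite-difference check of this limit, with arbitrary illustrative values for \({\mathcal J}^\ell\) and \(g^\ell\):

```python
import numpy as np

# Finite-difference check of the jump relative-entropy formula in
# Remark 10.8.  The intensity J and distortion g are arbitrary values
# chosen for illustration.

def entropy_eps(eps, J, g):
    """Approximate relative entropy over a time interval of length eps."""
    p, p0 = eps * J * g, eps * J        # distorted and baseline jump probs
    # log(p / p0) simplifies to log(g), which avoids 0/0 at eps = 0
    return np.log(g) * p + (np.log1p(-p) - np.log1p(-p0)) * (1.0 - p)

J, g, eps = 2.0, 1.7, 1e-6
fd = (entropy_eps(eps, J, g) - entropy_eps(0.0, J, g)) / eps
closed = J * (g * np.log(g) - g + 1.0)   # the limit claimed in the remark
print(fd, closed)   # agree to first order in eps
```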

In what follows we will also be interested in the partial derivative of the minimized function given in (10.15) with respect to the state vector:

\[g^{\ell*} \left(\frac {\partial V^\ell}{\partial x} - \frac {\partial V}{\partial x} \right)\]

where \(g^{\ell*}\) is the minimizer used to alter the jump intensity.

When constructing the HJB equation, we continue to include the diffusion dynamics and now incorporate the \(L\) possible jumps. The usual term:

\[ \sum_{\ell=1}^L \mathcal{J}^\ell \left (V^\ell - V\right) .\]

is replaced by

\[\xi \sum_{\ell=1}^L \mathcal{J}^\ell \left(1- \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\right)\]

as an adjustment for robustness in the jump intensities.
The resulting HJB equation is:

\[\begin{split}\begin{align} 0 = \min_{h} & - \delta V + \delta U + {\frac{\xi}{2}}|h|^2 +\left[\mu +\sigma h\right]\cdot \frac {\partial V}{\partial x} + \nu + \varsigma \cdot h\\ & + {\frac{1}{2}}{\rm trace}\left[\sigma'\frac {\partial^2 V }{\partial x \partial x'}\sigma\right] \\ & + \xi \sum_{\ell=1}^L \mathcal{J}^\ell \left(1- \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\right) \end{align}\end{split}\]

We again construct a Feynman-Kac equation by substituting \(h^*(x)\) into the HJB equation. Applying an Envelope Theorem to the first-order conditions for minimization tells us that derivatives of \(h^*(x)\) with respect to \(x\) do not contribute to the derivatives of the value function. This leads us to focus on:

(10.16)#\[\begin{split}\begin{align*} 0 = & -\delta \frac {\partial V }{\partial x} + \delta \frac {\partial U }{\partial x} + \frac {\partial^2 V }{\partial x \partial x'}\left(\mu +\sigma h^*\right)\\ & + \left( \frac {\partial \mu'}{\partial x} \right) \frac {\partial V }{\partial x} + {\rm{mat}}\left\{\left(\frac{\partial \sigma_i}{\partial x}\right)h^*\right\}' \frac {\partial V }{\partial x}\\ & +\frac{\partial}{\partial x}\left[\frac{1}{2}{\rm trace}\left(\sigma' \frac {\partial^2 V }{\partial x \partial x'} \sigma\right)\right] \\ & + \xi \sum_{\ell=1}^L\frac {\partial \mathcal{J}^{\ell}}{\partial x} \left(1- \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\right) \\ & +\sum_{\ell=1}^L\mathcal{J}^{\ell}g^{\ell*} \left(\frac {\partial V^\ell}{\partial x} - \frac {\partial V}{\partial x} \right). \end{align*}\end{split}\]

It is revealing to rewrite equation (10.16) as:

\[\begin{split}\begin{align*} 0 = & -\left(\delta + \sum_{\ell=1}^L\mathcal{J}^{\ell}g^{\ell*}\right)\frac {\partial V }{\partial x} + \delta \frac {\partial U }{\partial x} \\ & + \frac {\partial^2 V }{\partial x \partial x'}\left(\mu +\sigma h^*\right) + \left( \frac {\partial \mu'}{\partial x}\right)\frac {\partial V }{\partial x} + {\rm{mat}}\left\{\left(\frac{\partial \sigma_i}{\partial x}\right)h^*\right\}'\frac {\partial V }{\partial x} \\ & + \frac{\partial}{\partial x}\left[\frac{1}{2}{\rm trace}\left(\sigma'\frac {\partial^2 V }{\partial x \partial x'}\sigma\right)\right] \\ & + \xi \sum_{\ell=1}^L\frac {\partial \mathcal{J}^{\ell}}{\partial x} \left(1- \exp \left[ - \frac 1 \xi \left (V^\ell - V\right) \right]\right) \\ & + \sum_{\ell=1}^L\mathcal{J}^{\ell}g^{\ell*} \frac {\partial V^\ell}{\partial x} \end{align*}\end{split}\]

Notice how distorted intensities act like endogenous discount factors in this equation. The last two terms add flow contributions to the pertinent Feynman-Kac equations via dot products with \(\lambda\). It is significant that these terms do not include derivatives of \(g^{\ell*}\) with respect to \(x\).

For simulating our asset pricing representation of the partial derivatives of the value function, the discounting term becomes state dependent in order to adjust for the jump probabilities:

\[D_t \eqdef \exp\left( - \int_0^t\left[\delta + \sum_{\ell=1}^L\mathcal{J}^{\ell}(X_u)g^{\ell*}(X_u)\right]du\right).\]

In addition, three flow terms are discounted:

(10.17)#\[\begin{split}\begin{align} & \delta \Lambda_t \cdot U_x(X_t) & \text{i)}\\ & + \xi \Lambda_t \cdot \sum_{\ell=1}^L \frac {\partial \mathcal{J}^{\ell}}{\partial x} (X_t) \left(1- \exp \left[ - \frac 1 \xi \left[V^\ell(X_t) - V(X_t) \right] \right]\right) & \text{ii)}\\ & + \Lambda_t \cdot \sum_{\ell=1}^L\mathcal{J}^{\ell}(X_t)g^{\ell*}(X_t) \frac {\partial V^\ell}{\partial x} (X_t) & \text{iii)} \\ \end{align}\end{split}\]

It is revealing to think of the right side as providing three different sources of marginal value. The contributions of \(V^{\ell} - V\) and \(\frac {\partial V^\ell}{\partial x}\) are to be expected because they help quantify the consequences of potential jumps. We may further decompose terms ii) and iii) by jump type \(\ell\) to assess which jumps are the most important contributors to the marginal valuations. Analogous representations can be derived for the \(\frac {\partial V^\ell}{\partial x}\)’s conditioned on each of the jumps occurring.

Notice that term ii) of formula (10.17) includes derivatives of the jump intensities with respect to the state of interest. In some examples, the jump intensities are constant or depend only on an exogenous state. In that case, the second term drops out and only the first and third terms remain.

Simulation-based methods can be used to compute these value contributions. They should be conducted under implied worst-case diffusion dynamics. With multiple jump components, we decompose contributions to the marginal utility by jump types \(\ell\).

10.3.7. Climate change example#

[Barnett et al., 2024] use representation (10.17) to decompose their model-based measure of the social cost of climate change and the social value of research and development. In their analysis, there are two types of Poisson jumps. One is the discovery of a new technology, and the other is recognition of how curved the damage function is for more extreme changes in temperature. The magnitude of damage curvature is revealed by a jump triggered by a temperature anomaly between 1.5 and 2 degrees Celsius. [Barnett et al., 2024] allow for twenty different damage curves. While there are twenty-one possible jump types, we group them into damage jumps (one through twenty) and a technology jump (twenty-one). [Barnett et al., 2024] display the quantitative importance of a technology jump and a damage jump in contributing to the social value of research and development. We report analogous findings for the social cost of climate change, measured as the negative of the marginal value of temperature. We take the negative because warming induces a social cost (a negative benefit).

Table 1 reports each of the three contributions given on the right side of (10.17) for the partial derivative of the value function with respect to the temperature state variable. The column ii(dc) includes only the contributions from damage curve jumps, and column ii(td) includes the remaining contribution from the technology jump; similarly for iii(dc) and iii(td). We see from this table that the second term dominates the calculation, accounting for virtually the entire total. The magnitude is highly sensitive to the specification of \(\xi\), where smaller values reflect less confidence in baseline probability specifications or, equivalently, more misspecification aversion, broadly conceived. Moreover, the primary source of this contribution is the damage curve uncertainty. Since term ii) swamps term i), the contemporaneous pre-jump contributions are very minor in comparison with the longer-horizon channel captured by term ii) for a potential damage curve realization. Although not reported, it turns out that the uncertainty adjustment reflected in this table is mostly attributable to uncertainty in the prospects for a technological breakthrough that will solve the climate change problem.

| \(\xi\) | i | ii(dc) | ii(td) | iii(dc) | iii(td) | sum |
|---|---|---|---|---|---|---|
| \(\infty\) | 3% | 97% | -5% | 1% | 4% | 0.064 |
| \(.150\) | 3% | 98% | -5% | 0% | 4% | 0.118 |
| \(.075\) | 2% | 99% | -3% | 0% | 2% | 0.281 |

Table 1: Components to the partial derivative of the value function with respect to the temperature state variable. The (dc) columns include only the contributions from the damage curve realization jumps, and the (td) columns include the contributions from the technology breakthrough jump.

Fig. 10.1 reports the contributions to the social cost of climate change by horizon for alternative values of the robustness parameter \(\xi\). Lowering \(\xi\) increases the cost contributions at all horizons. For all specifications of \(\xi,\) there is a substantial peak at around 25 years. To explore the peak response further, we report densities that capture the timing of the first Poisson jump in Fig. 10.2. We see that these densities peak at around a thirty-year horizon. Apparently the expected cost contributions peak earlier because of their forward-looking nature.

../_images/figure1.png

Fig. 10.1 An intertemporal decomposition of the social cost of temperature for alternative values of \(\xi\). We report only the cost contributions from the second term in (10.17).#

../_images/stochastic_figure2.png

Fig. 10.2 Densities for the timing of the first jump for alternative values of \(\xi\).#