Preface
\(\newcommand{\eqdef}{\stackrel{\text{def}}{=}}\)
This book describes tools that have helped us to pose and to answer questions about macroeconomic and financial dynamics. The chapters here make connections that are seldom provided by textbook treatments of economic dynamics, time series econometrics, and quantitative policy analysis. The work underlying this book emerged from our struggles to piece together insights from decision theory as initiated by Wald [Wald, 1950] and Savage [Savage, 1954], from policy assessment as featured by Marschak [Marschak, 1953] and Hurwicz [Hurwicz, 1966], and from modeling decision makers within dynamic stochastic equilibrium models as recommended by Muth [Muth, 1960] and Lucas [Lucas, 1987].
We follow Marschak [Marschak, 1953] and Hurwicz [Hurwicz, 1966] in interpreting counterfactual policies as historically unprecedented from the perspective of the data set used to estimate a model. A rational expectations assumption in the tradition of Muth [Muth, 1960] and Lucas [Lucas, 1987] imputes model-consistent beliefs to the artificial economic decision makers within a dynamic stochastic equilibrium model. A model builder can then use the model to analyze counterfactual public policies.
We were early advocates and practitioners of this rational expectations approach to econometrics and policy analysis. Early on, one of us—Sargent, [Sargent, 1978], [Saracoglu and Sargent, 1978], [Sargent, 1979]—showed how to take models with rational expectations to data by constructing the implied likelihood functions. An econometrician might need to estimate parameters from a likelihood function constructed as if economic agents had full knowledge of those parameters. One of us—Hansen, working with Singleton [Hansen and Singleton, 1982, Hansen and Singleton, 1983] and Richard [Hansen and Richard, 1987]—figured out ways to use econometrics to learn about some parameters without simultaneously learning about all of the parameters needed to formulate a model with fully model-consistent expectations. Hansen and Singleton instead let the data-generating process fill in flexible aspects of the stochastic equilibrium that were not captured by the estimated parameters. They used the Generalized Method of Moments (GMM) [Hansen, 1982] to focus on only some components of dynamic stochastic equilibrium models, for example, some consumption-based asset pricing models. This approach still relies heavily on the rational expectations assumption: decision makers within the model are presumed to know the actual data-generating process. Muth and Lucas could sidestep the formal link to data generation because decision makers’ expectations are pinned down as part of the stochastic equilibrium.[1]
Yet even as we retain faith in formal statistical tools, a question arises: if we view models as approximations, how “rational,” from the point of view of someone outside the model, are Muth-Lucas model-consistent agents who doggedly believe in a wrong model?
For better or worse, Laws of Large Numbers and their various statistical refinements are the only tools we have for understanding how to learn from data. Situations in which sophisticated statisticians find it challenging to learn from economic time series would also seem to be environments in which sophisticated decision makers inside the models we build find it challenging to forecast the future. Time series econometrics would thus seem to offer insights both into how researchers “outside a model” assess its validity and into how we might model decision makers “inside our models”.
In terms of how to proceed as model builders and econometricians, we were intrigued in the 1990s by the work of Peter Whittle [Whittle, 1990] and by other related work in robust control theory. It induced us to roll up our sleeves and investigate the consequences of acknowledging that the agents inside our models, like us, might be concerned that their models are imperfect approximations to an underlying, more complex data-generating process.
Savage’s pioneering work [Savage, 1954] provided an elegant axiomatic treatment of the subjective approach to expected utility. But his framework was not designed to study model misspecification. That gap led us to seek ideas from robust statistics, robust control theory, and decision theories extended to acknowledge ambiguity and possible misspecification.
Closely related to this detour into alternative decision theories, we became curious about whether uncertainty, broadly conceived, could play a more central role in model development and assessment than it commonly does in economic dynamics. A broader notion of uncertainty opens the door to entertaining alternative degrees of confidence in the approximating models that guide decision making. Such considerations are missed when researchers simply impose “wrong” subjective beliefs.
Here we have only scratched the surface of the tools and insights that we develop in the chapters that follow. In this book, we combine tools from stochastic processes, time series econometrics, and decision theory under uncertainty that we hope will empower readers to address questions that we have only begun to answer.