Never Saw It Coming
Why the Financial Crisis Took Economists by Surprise
It was a call I never expected to receive. I had just returned home from playing indoor tennis on the chilly, windy Sunday afternoon of March 16, 2008. A senior official of the U.S. Federal Reserve Board of Governors was on the phone to discuss the board’s recent invocation, for the first time in decades, of the obscure but explosive Section 13(3) of the Federal Reserve Act. Broadly interpreted, that section empowered the Federal Reserve to lend nearly unlimited cash to virtually anybody: in this case, the Fed planned to lend nearly $29 billion to J.P. Morgan to facilitate the bank’s acquisition of the investment firm Bear Stearns, which was on the edge of bankruptcy, having run through nearly $20 billion of cash in the previous week.
The demise of Bear Stearns was the beginning of a six-month erosion in global financial stability that would culminate with the failure of Lehman Brothers on September 15, 2008, triggering possibly the greatest financial crisis in history. To be sure, the Great Depression of the 1930s involved a far greater collapse in economic activity. But never before had short-term financial markets, the facilitators of everyday commerce, shut down on a global scale. As investors swung from euphoria to fear, deeply liquid markets dried up overnight, leading to a worldwide contraction in economic activity.
The financial crisis that ensued represented an existential crisis for economic forecasting. The conventional method of predicting macroeconomic developments—econometric modeling, the roots of which lie in the work of John Maynard Keynes—had failed when it was needed most, much to the chagrin of economists. In the run-up to the crisis, the Federal Reserve Board’s sophisticated forecasting system did not foresee the major risks to the global economy. Nor did the model developed by the International Monetary Fund, which concluded as late as the spring of 2007 that “global economic risks [had] declined” since September 2006 and that “the overall U.S. economy is holding up well . . . [and] the signs elsewhere are very encouraging.” On September 12, 2008, just three days before the crisis began, J.P. Morgan, arguably the United States’ premier financial institution, projected that the U.S. GDP growth rate would accelerate during the first half of 2009. The pre-crisis view of most professional analysts and forecasters was perhaps best summed up in December 2006 by The Economist: “Market capitalism, the engine that runs most of the world economy, seems to be doing its job well.”
What went wrong? Why was virtually every economist and policymaker of note so blind to the coming calamity? How did so many experts, including me, fail to see it approaching? I have come to see that an important part of the answers to those questions is a very old idea: “animal spirits,” the term Keynes famously coined in 1936 to refer to “a spontaneous urge to action rather than inaction.” Keynes was talking about an impulse that compels economic activity, but economists now use the term “animal spirits” to also refer to fears that stifle action. Keynes was hardly the first person to note the importance of irrational factors in economic decision-making, and economists surely did not lose sight of their significance in the decades that followed. The trouble is that such behavior is hard to measure and stubbornly resistant to any systematic analysis. For decades, most economists, including me, had concluded that irrational factors could not fit into any reliable method of forecasting.
But after several years of closely studying the manifestations of animal spirits during times of severe crisis, I have come to believe that people, especially during periods of extreme economic stress, act in ways that are more predictable than economists have traditionally understood. More important, such behavior can be measured and should be made an integral part of economic forecasting and economic policymaking. Animal spirits, it turns out, display consistencies that can help economists identify emerging price bubbles in equities, commodities, and exchange rates—and can even help them anticipate the economic consequences of those assets’ ultimate collapse and recovery.
The economics of animal spirits, broadly speaking, covers a wide range of human actions and overlaps with much of the relatively new discipline of behavioral economics. The field aims to incorporate a more realistic account of behavior than the model of the wholly rational Homo economicus used for so long. Evidence indicates that this more realistic view of the way people behave in their day-to-day activities in the marketplace traces a path of economic growth that is somewhat lower than would be the case if people were truly rational economic actors. If people acted at the level of rationality presumed in standard economics textbooks, the world’s standard of living would be measurably higher.
From the perspective of a forecaster, the issue is not whether behavior is rational but whether it is sufficiently repetitive and systematic to be numerically measured and predicted. The challenge is to better understand what Daniel Kahneman, a leading behavioral economist, refers to as “fast thinking”: the quick-reaction judgments on which people tend to base much, if not all, of their day-to-day decisions about financial markets. No one is immune to the emotions of fear and euphoria, which are among the predominant drivers of speculative markets. But people respond to fear and euphoria in different ways, and those responses create specific, observable patterns of thought and behavior.
Perhaps the animal spirit most crucial to forecasting is risk aversion. The process of choosing which risks to take and which to avoid determines the relative pricing structure of markets, which in turn guides the flow of savings into investment, the critical function of finance. Risk taking is essential to living, but the question is whether more risk taking is better than less. If it were, the demand for lower-quality bonds would exceed the demand for “risk-free” bonds, such as U.S. Treasury securities, and high-quality bonds would yield more than low-quality bonds. It is not, and they do not, from which one can infer the obvious: risk taking is necessary, but it is not something the vast majority of people actively seek.
The bounds of risk tolerance can best be measured by financial market yield spreads—that is, the difference between the yields of private-sector bonds and the yields of U.S. Treasuries. Such spreads exhibit surprisingly little change over time. The yield spreads between prime corporate bonds and U.S. Treasuries in the immediate post‒Civil War years, for example, were similar to those for the years following World War II. This remarkable equivalence suggests long-term stability in the degree of risk aversion in the United States.
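To make the mechanics of such a gauge concrete, here is a minimal sketch in Python of how a credit-spread measure of risk aversion is assembled. The yield figures and comparison periods below are hypothetical placeholders chosen only for illustration, not historical data.

```python
# A minimal sketch of a credit-spread gauge of risk aversion.
# All yield figures are hypothetical placeholders, not historical data.
corporate_yields = {"post-Civil War": 0.065, "post-WWII": 0.032, "2000s": 0.058}
treasury_yields  = {"post-Civil War": 0.052, "post-WWII": 0.024, "2000s": 0.046}

# Spread = prime corporate yield minus the comparable Treasury yield.
spreads = {
    period: corporate_yields[period] - treasury_yields[period]
    for period in corporate_yields
}

for period, spread in spreads.items():
    # Expressed in basis points (1 bp = 0.01 percentage point).
    print(f"{period}: {spread * 1e4:.0f} bp over Treasuries")
```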
Another powerful animal spirit is time preference, the propensity to value more highly a claim to an asset today than a claim to that same asset at some fixed time in the future. A promise delivered tomorrow is not as valuable as that promise conveyed today. Investors experience this phenomenon mostly through its most visible counterparts: interest rates and savings rates. Like risk aversion, time preference has proved remarkably stable: indeed, in Greece in the fifth century BC, interest rates were at levels similar to those of today’s rates. From 1694 to 1972, the Bank of England’s official policy rate ranged from two to ten percent. It surged to 17 percent during the inflationary late 1970s, but it has since returned to single digits.
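Time preference is what the standard present-value calculation captures. The sketch below, using assumed discount rates of three and seven percent and arbitrary horizons, shows how the value today of a fixed future claim falls as its delivery recedes into the future.

```python
# A minimal illustration of time preference as discounting: the value today of
# a claim to $100 delivered t years from now, at an assumed annual discount
# rate r. The rates and horizons are arbitrary examples.
def present_value(future_amount: float, rate: float, years: float) -> float:
    """Discount a future payment back to today: PV = FV / (1 + r)**t."""
    return future_amount / (1.0 + rate) ** years

for rate in (0.03, 0.07):
    for years in (1, 10, 30):
        pv = present_value(100.0, rate, years)
        print(f"r = {rate:.0%}, {years:>2} years out: ${pv:6.2f} today")
```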
Time preference also affects people’s propensity to save. A strong preference for immediate consumption diminishes a person’s tendency to save, whereas a high preference for saving diminishes the propensity to consume. Through most of human history, time preference did not have a major determining role in the level of savings, because prior to the late nineteenth century, most people had to consume virtually all they produced simply to stay alive. There was little left over to save even if people were innately inclined to do so. It was only when the innovation and productivity growth of the Industrial Revolution freed people from the grip of chronic starvation that time preference emerged as a significant—and remarkably stable—economic force. Consider that although real household incomes have risen significantly since the late nineteenth century, average savings rates have not risen as a consequence. In fact, during periods of peace in the United States since 1897, personal savings as a share of disposable personal income have almost always stayed within a relatively narrow range of five to ten percent.
In addition to the stable and predictable effects of time preference, another animal spirit is at work in these long-term trends: “conspicuous consumption,” as the economist Thorstein Veblen labeled it more than a century ago, a form of herd behavior captured by the more modern idiom “keeping up with the Joneses.” Saving and consumption reflect people’s efforts to maximize their happiness. But happiness depends far more on how people’s incomes compare with those of their perceived peers, or even those of their role models, than on how they are doing in absolute terms. In 1995, researchers asked a group of graduate students and staff members at the Harvard School of Public Health whether they would be happier earning $50,000 a year if their peers earned half that amount or $100,000 if their peers earned twice that amount; the majority chose the lower salary. That finding echoed the results of a fascinating 1947 study by the economists Dorothy Brady and Rose Friedman, demonstrating that the share of income an American family spent on consumer goods and services was largely determined not by its income but by how its income compared to the national average. Surveys indicate that a family with an average income in 2011 spent the same proportion of its income as a family with an average income in 1900, even though in inflation-adjusted terms, the 1900 income would represent only a minor fraction of the 2011 figure.
Such herd behavior also drives speculative booms and busts. When a herd commits to a bull market, the market becomes highly vulnerable to what I dub the Jessel Paradox, after the vaudeville comedian George Jessel. In one of his routines, Jessel told the story of a skeptical investor who reluctantly decides to invest in stocks. He starts by buying 100 shares of a rarely traded, fly-by-night company. Surprise, surprise—the price moves from $10 per share to $11 per share. Encouraged that he has become a wise investor, he buys more. Finally, when his own purchases have managed to bid the price up to $30 per share, he decides to cash in. He calls his broker to sell out his position. The broker hesitates and then responds, “To whom?”
Classic market bubbles take shape when herd behavior induces almost every investor to act like the one in Jessel’s joke. Bears become bulls, propelling prices ever higher. In the archetypal case, at the top of the market, everyone has turned into a believer and is fully committed, leaving no unconverted skeptics to buy from the first new seller.
That was, in essence, what happened in 2008. By the spring of 2007, yield spreads in debt markets had narrowed dramatically; the spread between “junk” bonds that were rated CCC or lower and ten-year U.S. Treasury notes had fallen to an exceptionally low level. Almost all market participants were aware of the growing risks, but they also knew that a bubble could keep expanding for years. Financial firms thus feared that should they retrench too soon, they would almost surely lose market share, perhaps irretrievably. In July 2007, the chair and CEO of Citigroup, Charles Prince, expressed that fear in a now-famous remark: “When the music stops, in terms of liquidity, things will be complicated. But as long as the music is playing, you’ve got to get up and dance. We’re still dancing.”
Financial firms accepted the risk that they would be unable to anticipate the onset of a crisis in time to retrench. However, they thought the risk was limited, believing that even if a crisis developed, the seemingly insatiable demand for exotic financial products would dissipate only slowly, allowing them to sell almost all their portfolios without loss. They were mistaken. They failed to recognize that market liquidity is largely a function of the degree of investors’ risk aversion, the most dominant animal spirit that drives financial markets. Leading up to the onset of the crisis, the decreased risk aversion among investors had produced increasingly narrow credit yield spreads and heavy trading volumes, creating the appearance of liquidity and the illusion that firms could sell almost anything. But when fear-induced market retrenchment set in, that liquidity disappeared overnight, as buyers pulled back. In fact, in many markets, at the height of the crisis of 2008, bids virtually disappeared.
Financial firms could have protected themselves against the costs of their increased risk taking if they had remained adequately capitalized—if, in other words, they had prepared for a very rainy day. Regrettably, they had not, and the dangers that their lack of preparedness posed were not fully appreciated, even in the commercial banking sector. For example, in 2006, the Federal Deposit Insurance Corporation, speaking on behalf of all U.S. bank regulators, judged that “more than 99 percent of all insured institutions met or exceeded the requirements of the highest regulatory capital standards.”
What explains the failure of the large array of fail-safe buffers that were supposed to counter developing crises? Investors and economists believed that a sophisticated global system of financial risk management could contain market breakdowns. The risk-management paradigm that had its genesis in the work of such Nobel Prize–winning economists as Harry Markowitz, Robert Merton, and Myron Scholes was so thoroughly embraced by academia, central banks, and regulators that by 2006 it had become the core of the global bank regulatory standards known as Basel II. Global banks were authorized, within limits, to apply their own company-specific risk-based models to judge their capital requirements. Most of those models produced parameters based only on the last quarter century of observations. But even a sophisticated number-crunching model that covered the last five decades would not have anticipated the crisis that loomed.
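As a rough, hypothetical illustration of the general approach (not any particular bank’s model), the sketch below estimates a one-day historical value-at-risk from a simulated window of returns. The point is that such an estimate can be no more pessimistic than the worst days in its calibration window; the simulated series, its volatility, and the confidence level are all assumptions.

```python
# A rough sketch of a historical value-at-risk (VaR) estimate, the general kind
# of risk-based calibration described above. The return series is simulated and
# thinner-tailed than real markets, so the estimate reflects only the sample it
# is calibrated on.
import random

random.seed(0)
# Stand-in for roughly 25 years of daily returns with 1% daily volatility;
# a genuine crisis day can fall far outside this sample.
returns = [random.gauss(0.0, 0.01) for _ in range(25 * 252)]

def historical_var(returns: list[float], confidence: float = 0.99) -> float:
    """Loss threshold exceeded on roughly (1 - confidence) of sample days."""
    ordered = sorted(returns)
    cutoff = int((1.0 - confidence) * len(ordered))
    return -ordered[cutoff]

print(f"99% one-day VaR from the sample: {historical_var(returns):.2%} of portfolio value")
```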
Mathematical models that calibrate risk are nonetheless surely better guides to risk assessment than the “rule of thumb” judgments of a half century earlier. To this day, it is hard to find fault with the conceptual framework of such models, as far as they go. The elegant options-pricing model developed by Scholes and his late colleague Fischer Black is no less valid or useful today than when it was developed, in 1973. But in the growing state of euphoria in the years before the 2008 crash, private risk managers, the Federal Reserve, and other regulators failed to ensure that financial institutions were adequately capitalized, in part because we all failed to comprehend the underlying magnitude and full extent of the risks that were about to be revealed as the post-Lehman crisis played out. In particular, we failed to fully comprehend the size of the expansion of so-called tail risk.
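For readers unfamiliar with it, the Black-Scholes valuation of a European call option can be written out in a few lines. The sketch below uses arbitrary illustrative inputs (an at-the-money option, a five percent risk-free rate, 20 percent volatility, one year to expiration) rather than market data.

```python
# A minimal sketch of the Black-Scholes formula for a European call option.
# The example inputs are arbitrary illustrative values, not market quotes.
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    """European call price: spot S, strike K, rate r, volatility sigma, maturity T (years)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: a one-year at-the-money call, 5% risk-free rate, 20% volatility.
print(f"Call value: {black_scholes_call(100.0, 100.0, 0.05, 0.20, 1.0):.2f}")
```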
“Tail risk” refers to the class of investment outcomes that occur with very low probabilities but that are accompanied by very large losses when they do materialize. Economists have assumed that if people acted solely to maximize their own self-interest, their actions would produce long-term growth paths consistent with their abilities to increase productivity. But because people lacked omniscience, the actual outcomes of their risk taking would reflect random deviations from long-term trends. And those deviations, with enough observations, would tend to be distributed in a manner similar to the outcomes of successive coin tosses, following what economists call a normal distribution: a bell curve with “tails” that rapidly taper off as the probability of occurrence diminishes.
Those assumptions have been tested in recent decades, as a number of once-in-a-lifetime phenomena have occurred with a frequency too high to credibly attribute to pure chance. The most vivid example is the wholly unprecedented stock-price crash on October 19, 1987, which propelled the Dow Jones Industrial Average down by more than 20 percent in a single day. No conventional probability distribution would have predicted that crash. Accordingly, many economists began to speculate that the negative tail of financial risk was much “fatter” than had been assumed—in other words, that the global financial system was far more vulnerable than most models showed.
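The scale of the problem is easy to quantify. Assuming, purely for illustration, a typical daily volatility of one percent, a 20 percent single-day drop is a 20-standard-deviation event. The sketch below compares the probability a normal distribution assigns to such a move with the probability under a fat-tailed Student’s t distribution; the volatility figure and the choice of three degrees of freedom are likewise illustrative assumptions.

```python
# Comparing the probability of a 20%+ single-day drop under a thin-tailed
# normal distribution and a fat-tailed Student's t. The assumed 1% daily
# volatility and the 3 degrees of freedom are illustrative, not estimates.
from math import erfc, sqrt
from scipy.stats import t

daily_vol = 0.01          # assumed typical daily standard deviation of returns
crash = -0.20             # roughly the one-day drop of October 19, 1987
z = crash / daily_vol     # the move expressed in standard deviations (-20)

# Normal tail: P(Z <= -20) is astronomically small, effectively "cannot happen."
normal_tail = 0.5 * erfc(-z / sqrt(2.0))

# Fat tail: a Student's t with 3 degrees of freedom, scaled to the same volatility
# (its standard deviation is sqrt(df / (df - 2)) times the scale parameter).
df = 3
scale = daily_vol / sqrt(df / (df - 2))
t_tail = t.cdf(crash, df, loc=0.0, scale=scale)

print(f"Normal distribution:  P(one-day drop of 20% or more) = {normal_tail:.3e}")
print(f"Student's t (df = 3): P(one-day drop of 20% or more) = {t_tail:.3e}")
```

Under the normal curve the event is, for practical purposes, impossible; under the fat-tailed alternative it is merely very rare, which is closer to the lesson of 1987 and 2008.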
In fact, as became clear in the wake of the Lehman collapse, the tail was morbidly obese. As a consequence of an underestimation of that risk, financial firms failed to anticipate the amount of additional capital that would be required to serve as an adequate buffer when the financial system was jolted.
The 2008 financial collapse has provided reams of new data on negative tail risk; the challenge will be to use the new data to develop a more realistic assessment of the range and probabilities of financial outcomes, with an emphasis on those that pose the greatest dangers to the financial system and the economy. One can hope that in a future financial crisis—and there will surely be one—economists, investors, and regulators will better understand how fat-tail markets work. Doing so will require better models, ones that more accurately reflect predictable aspects of human nature, including risk aversion, time preference, and herd behavior.
Forecasting will always be something of a coin toss. But if economists better integrate animal spirits into our models, we can improve our forecasting accuracy. Economic models should, when possible, measure and forecast systematic human behavior and the tendencies of corporate culture. Modeling will always be constrained by a lack of relevant historical precedents. But analysts know a good deal more about how financial markets work—and fail—than we did before the 2008 crisis.
The halcyon days of the 1960s, when there was great optimism that econometric models offered new capabilities to accurately judge the future, are now long gone. Having been mugged too often by reality, forecasters now express less confidence about our abilities to look beyond the immediate horizon. We will forever need to reach beyond our equations to apply economic judgment. Forecasters may never approach the fantasy success of the Oracle of Delphi or Nostradamus, but we can surely improve on the discouraging performance of the past.