
The Dangers of Data-Dependent Monetary Policy


As Federal Reserve policymakers contemplate when to begin raising the federal funds rate, their comments have emphasized that any decisions about the timing and speed of changes will be “data dependent.” Chair Janet Yellen herself referred repeatedly to the “data dependence” of policy at the press conference following yesterday’s meeting of the Federal Open Market Committee. These statements might imply only that policy decisions will be based on evidence about the economy’s performance, an approach that seems eminently reasonable. Nonetheless, there are genuine risks to policies that place too much emphasis on each and every piece of incoming data, risks that Fed policymakers should recognize when following their data-driven approach.

A first set of risks arises because the data themselves always are subject to alternative interpretations, often with very different implications for monetary policy. For example, it is important to know how much of the recent slowdown in U.S. economic growth is due to unusually harsh winter weather and how much, if any, represents a general slowing of growth from the stronger expansion observed during the final three quarters of 2014. Similarly, how much of the recent decline in inflation reflects transitory effects of falling oil prices, and how much is the consequence of insufficiently accommodative monetary policy? How much of the decline in labor force participation is attributable to demographic shifts, and how much could be reversed if discouraged workers begin to look for jobs as economic conditions improve? None of these questions has an easy answer, yet policymakers must take a stand on each when using a data-dependent approach.

A second set of risks arises because economic data often are measured with considerable error and therefore are subject to large revisions. A 2008 working paper by Dean Croushore provides a particularly striking case in point. In 2003, Federal Reserve officials warned repeatedly of a possible “unwelcome substantial fall in inflation” to justify their caution in raising interest rates after the 2001 recession. The figure below, which resembles those presented by Croushore, displays data from the Federal Reserve Bank of Philadelphia’s Real-Time Data Set for Macroeconomists. The data show how initial observations of the Fed’s preferred measure of inflation – the growth rate of the price deflator for personal consumption expenditures, excluding food and energy – provided legitimate support for these concerns: when first released, those numbers pointed repeatedly to inflation below one percent during 2003 and 2004. Eventually, however, those statistics were revised significantly higher, so that inflation for those years now appears to have been much closer to two percent. We reiterate Croushore’s point here not to argue that the Fed’s concerns about low inflation today are misplaced, but simply to observe that policymakers have been led astray by errors in initial data releases in the past and should guard against the possibility of being similarly misled in the future.

[Figure: Core PCE inflation during 2003–2004 as first released versus after subsequent revisions. Source: Federal Reserve Bank of Philadelphia, Real-Time Data Set for Macroeconomists.]
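
For readers who want to reproduce this kind of comparison, the sketch below is a minimal illustration in Python, not Croushore's code. It assumes two vintages of the quarterly core PCE price index have already been downloaded from the Philadelphia Fed's real-time files and saved under the hypothetical file names shown.

```python
# Minimal sketch: compare core PCE inflation as first released with the
# same quarters after revisions. The CSV file names are hypothetical;
# each file is assumed to hold columns 'date' and 'core_pce_index'
# reshaped from the Philadelphia Fed's real-time data files.
import pandas as pd

def yoy_inflation(csv_path: str) -> pd.Series:
    """Year-over-year percent change in a quarterly price index."""
    df = pd.read_csv(csv_path, parse_dates=["date"]).set_index("date")
    return 100 * (df["core_pce_index"] / df["core_pce_index"].shift(4) - 1)

# One vintage as policymakers saw it in early 2005, one as revised today.
initial = yoy_inflation("core_pce_vintage_2005Q1.csv")
revised = yoy_inflation("core_pce_vintage_latest.csv")

comparison = pd.DataFrame({"initial release": initial,
                           "after revisions": revised})
print(comparison.loc["2003":"2004"].round(2))
```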

If the latest data are open to alternative interpretations and may be subject to significant revision, how should the Fed proceed? Milton Friedman gave one answer to this question when he proposed his famous “k-percent rule,” instructing the Fed to eschew all attempts to fine-tune the economy and simply aim to keep a broad measure of the money supply growing at a constant rate. Many economists today believe the Fed can improve on a k-percent rule for money by adopting an alternative, such as the rule proposed by John Taylor, which more actively manages the federal funds rate to achieve modest countercyclical objectives even as it maintains stable prices over the long term. Even if one agrees with this policy framework, however, it is important to recognize the limits to activist stabilization policy imposed by the considerable uncertainties involved in measuring and interpreting the latest economic statistics.
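
To fix ideas, here is a minimal sketch of the rule Taylor proposed in 1993 (our illustration; the 2 percent equilibrium real rate and 2 percent inflation target are the conventional assumptions from his original paper). The prescribed funds rate moves one-for-one with inflation and responds with coefficients of one-half to the gaps between inflation and its target and between output and potential.

```python
# Minimal sketch of Taylor's (1993) rule. Coefficients follow Taylor's
# original paper; the equilibrium real rate and the inflation target
# are both set to the conventional 2 percent.
def taylor_rule(inflation: float, output_gap: float,
                real_rate: float = 2.0, inflation_target: float = 2.0) -> float:
    """Prescribed nominal federal funds rate, in percent."""
    return (real_rate + inflation
            + 0.5 * (inflation - inflation_target)
            + 0.5 * output_gap)

# Example: inflation at 1.5 percent, output 1 percent below potential.
print(taylor_rule(inflation=1.5, output_gap=-1.0))  # prints 2.75
```

Note that every input to this formula is precisely the kind of measured, revisable quantity discussed above: a one-percentage-point revision to the estimated output gap shifts the rule's prescription by half a point, so measurement problems matter even for rule-based policy.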


It also is worth stepping back to consider what is at stake. Debates over Fed policy now center on the timing of its actions: Would it be most appropriate to raise short-term interest rates by a fraction of a percentage point this fall, this winter, or early next year? In truth, very little depends directly on the exact timing of this initial rate hike. Instead, the decision matters only because outside observers, unsure of the Fed’s ultimate objectives, will interpret an earlier rate increase as a sign that the Fed has taken a more “hawkish” stance against inflation, whereas a delay would indicate a more “dovish” majority within the policymaking committee. Both this harmful uncertainty and the pressure on Fed policymakers to make exactly the right decision on a meeting-by-meeting basis could be reduced if Fed officials affirmed, as a group, that they recognize the limits to their ability to fine-tune the economy and are focused more firmly on what they can realistically achieve: a return to the Fed’s two percent inflation target over the next year or two.


Michael Belongia is a professor of economics at the University of Mississippi. Peter Ireland is a professor of economics at Boston College and a member of the Shadow Open Market Committee.
