An Examination of a Modified Taylor Rule

Authored by: Daniel Cowan


Abstract

Is there a rules-based explanation for the low interest rates and quantitative easing undertaken by the Federal Reserve following the Global Financial Crisis? The question is important as it pertains to the ongoing debate between rules-based and discretionary monetary policy. It is also important in the search for a Taylor Rule modification that can fill the gap left by the breakdown of the original rule following the GFC. This paper examines a recent Taylor Rule modification proposed by James Bullard, President of the St. Louis Federal Reserve, to see if this modification can explain Fed actions following the GFC. The modification is analyzed in the same two ways that the original Taylor Rule was evaluated. Namely, this paper tests the economic logic of the modification as well as examines how well the rule’s policy rate prescription has fit the actual federal funds rate over time. The economic logic of the modification is examined during recessions. The fit between the rule’s policy rate prescription and the actual federal funds rate is examined using r-squared. I conclude that by changing the neutral rate in a Taylor-type rule, Bullard provides a credible policy rule that helps explain Fed behavior following the GFC.

Section 1: Introduction

This introduction will first briefly discuss monetary policy in the United States and how it is conducted using both conventional and unconventional methods, including an explanation of how those methods are expected to affect the economy. Then, I will highlight the ongoing debate between rules-based and discretionary monetary policy, laying out the rationale for both. Finally, I will introduce a policy tool, the Taylor Rule, and motivate the analysis of a modification to the Taylor Rule that takes place throughout this paper.

In the U.S., monetary policy “comprises the Federal Reserve's actions and communications to promote maximum employment, stable prices, and moderate long-term interest rates--the three economic goals the Congress has instructed the Federal Reserve to pursue” (Board of Governors of the Federal Reserve System n.d.). However, “[b]ecause long-term interest rates can remain low only in a stable macroeconomic environment, these goals are often referred to as the dual mandate; that is, the Federal Reserve seeks to promote the two coequal objectives of maximum employment and price stability” (Mishkin 2007). The Federal Reserve (or the “Fed,” for short) achieves its goals (the dual mandate) through a variety of actions.

Most notably, the Fed affects the economy through its adjustment and eventual accomplishment of the target federal funds rate (FFR). The FFR is the interest rate that banks charge each other on overnight loans. While an overnight loan rate may seem an obscure method of influencing the general economy, the FFR indirectly “influences household spending, business investment, production, employment, and inflation in the United States” (Board of Governors of the Federal Reserve System n.d.). The Fed does not directly set the FFR, but rather targets a policy rate and achieves that rate primarily through open market operations (when the Fed purchases debt instruments in the open market). Open market operations affect the FFR because “[o]pen market purchases of government securities increase the amount of reserve funds that banks have available to lend, which puts downward pressure on the federal funds rate” (Federal Reserve Bank of St. Louis n.d.)1.

There are three other policy tools besides open market operations that comprise what can be considered conventional monetary policy. These are the discount rate, reserve requirements, and the interest rate on reserves. The discount rate is the interest rate that the Fed charges banks for short-term loans. The discount rate serves alongside open market operations in reaching the target FFR but is much less of a factor than open market operations. The Fed can also change reserve requirements for banks; however, this method is seldom used. The Fed was also recently granted the authority to pay interest on bank reserves, giving it another policy tool. All of these conventional policy tools attempt to incentivize banks to lend more or less (more lending is stimulating while less lending is restrictive).

The FFR affects both economic growth and inflation. Because of this, the Fed can use its policy tools to influence the FFR and work towards fulfilling the dual mandate. When the Fed acts to lower the FFR, it creates downward pressure on other interest rates in the economy, making it easier to borrow money, which is stimulating for an economy and in theory serves as a catalyst for GDP growth. Lower interest rates are also (generally) more conducive to increasing inflation. “When the federal funds rate is reduced, the resulting stronger demand for goods and services tends to push wages and other costs higher, reflecting the greater demand for workers and materials that are necessary for production. In addition, policy actions can influence expectations about how the economy will perform in the future, including expectations for prices and wages, and those expectations can themselves directly influence current inflation” (Board of Governors of the Federal Reserve 2015).

When the conventional tools of the Fed discussed above fail to produce the desired economic outcomes, the Fed has other, less conventional methods at its disposal. One of these methods (and one that is especially pertinent to this paper) is quantitative easing (QE). QE involves the large-scale purchase of assets with the goal of influencing long-term interest rates (as opposed to the short-term rates that the Fed normally aims to modify). The rationale that Ben Bernanke, then Chair of the Federal Reserve, gave for QE has been summarized as follows: “with short-term nominal interest rates at zero, purchases by the central bank of long-maturity assets would act to push up the prices of those securities because the Fed was reducing their net supply. Thus, long maturity bond yields should go down, for example, if the Fed purchases long-maturity Treasury securities. Bernanke then argued that this was ‘accommodation,’ in the same sense as a reduction in the fed funds rate target is accommodation. Thus, QE should be expected to increase inflation and aggregate real economic activity” (Williamson 2017). It is important to note that when Bernanke discussed the use of QE, he treated zero short-term nominal rates (rates at the zero bound) as a prerequisite for its use. Zero short-term nominal rates imply that the Fed has taken all available action using conventional methods. If the economy still needs stimulus when rates reach the zero bound, Bernanke argued, then unconventional methods receive consideration.

Using QE to stimulate the economy when conventional methods have failed is exactly what Bernanke did as Chair of the Fed after the GFC2. Following the GFC, the FFR was near zero but the U.S. economy still needed stimulus. The Fed has never shown a willingness to use negative interest rates (which would, in theory, be even more accommodating than interest rates of zero). Therefore, QE was another lever to pull to continue to help the economy recover from the recession. Specifically, the Fed engaged in three rounds of QE in the years 2009, 2010, and 2012. These rounds saw purchases of long-term treasury assets, mortgage-backed securities, and agency debt. These purchases not only served to lower long-term interest rates in the economy, but, given the housing crisis at the heart of the recession, also aimed “[t]o provide support to mortgage lending and housing markets” (Board of Governors of the Federal Reserve System 2009).

When evaluating what monetary policy to undertake (specifically when targeting the FFR), there are two major schools of thought: some believe that monetary policy should be discretionary while others argue that monetary policy should be rules based. The distinction between rules-based policy and discretionary policy was classically laid out by the rules-based advocates Kydland and Prescott (1977). Rules-based advocates argue that a rule is time-consistent policy, while discretion is time-inconsistent policy. To further understand this, take the example of building homes in a floodplain. If the government doesn’t want homes built in a floodplain, it could warn that no homeowners will receive any federal relief if their homes are destroyed. When the floods come, and homes are destroyed, the government can either stand by its word or break it and provide relief. Standing by its word would be time-consistent policy (rules based) whereas providing federal relief would be time-inconsistent policy (discretionary).

Those who prefer a rules-based approach would argue that should the government break its promise (engaging in time-inconsistent policy), people will learn that the government does not mean what it says. After learning this, people will build homes on floodplains freely, trusting that the government will bail them out. This leads to the exact opposite of the policy outcome that the government had desired in the first place. On the other hand, those who advocate discretion would argue that time-inconsistency allows policy-makers to respond to unforeseen circumstances. For example, suppose that a river overflows and floods homes that are not on a floodplain. Under a discretionary approach, the government could step in to assist innocent victims who did not take the undue risk that the government was trying to deter. Under a rules-based framework, this would not be possible (Buol and Vaughan 2003). Bernanke and Mishkin, who both later served at the Fed, exemplified arguments for discretionary monetary policy by saying that “monetary rules do not allow the monetary authorities to respond to unforeseen circumstances” (Bernanke and Mishkin 1992, pg. 184). In the last few years Bernanke has continued to criticize a rules-based approach to monetary policy, saying that it “disguises the complexity of the underlying judgments that FOMC3 members must continually make if they are to make good policy decisions” (Bernanke 2015).

One of the most seminal contributions to this debate between rules and discretion was the introduction of a policy rule in 1993 by John Taylor that has come to be known as the “Taylor Rule.” The Taylor Rule is an algebraic formula that suggests what the FFR should be based upon several economic indicators. The Taylor Rule, as it was originally written, is

𝐹𝐹𝑅T = 𝜋 + 𝛼𝑌 + 𝛽(𝜋 − 𝜋∗) + 𝑟∗

where FFRT is the suggested Federal Funds Rate based on the Taylor Rule, π is the observed inflation rate in the U.S., π* is the target inflation rate (assumed to be 2%), Y is the output gap4 in the economy, 𝑟∗ is the neutral interest rate (assumed to be 2%), and α and β (both 0.5) are coefficients that weight the Fed’s responsiveness to the output gap and the inflation gap, respectively.
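
To make the mechanics concrete, consider a purely hypothetical quarter (the numbers are illustrative only) in which inflation is running at 3% and output is 1% below trend. The rule would then prescribe

𝐹𝐹𝑅T = 3 + 0.5(−1) + 0.5(3 − 2) + 2 = 5

that is, a policy rate of 5%. Relative to the 4% rate that would prevail with no gaps, the one-point inflation overshoot adds 1.5 percentage points (one point directly through the π term and half a point through the inflation gap), while the negative output gap subtracts half a point.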

The Taylor Rule has historically tended to be a good description of Fed behavior (meaning that it has prescribed a FFRT close to the actual FFR set by the Fed). Creating a rule that prescribed a rate close to the actual FFR was something that had never been seen before and was a significant accomplishment. Taylor himself noted that, “What is perhaps surprising is that this rule [the Taylor Rule] fits the actual policy performance during the last few years remarkably well” (Taylor 1993, pg. 202). Due to the Taylor Rule’s ability to describe past Fed behavior, the Taylor Rule “subsequently became viewed as a prescription for conducting monetary policy going forward” (Asso, Kahn, and Leeson 2010, pg. 11). A rule that described past Fed behavior5 could plausibly be accepted as a benchmark for it moving forward.

The Taylor Rule found acclaim not just because it could describe Fed behavior, but because it did so in a logically consistent way; the Taylor Rule doesn’t just say what the Fed has done, it gives insight into the factors that motivate the Fed and how it responds to fluctuations in those factors. “The Taylor [R]ule has gained widespread influence because it can be implemented in policy regimes with a dual mandate for price stability and economic growth as in the United States” (Asso, Kahn, and Leeson 2010, pg. 10). Taylor’s equal weighting of the output gap and the inflation gap reflects the dual mandate of the Fed, making the rule a plausible reflection of the Fed’s thought process when setting the FFR. The Taylor Rule doesn’t link the FFR to some arbitrary value in order to produce a good description of Fed behavior; it links the FFR to economic factors that the Fed actually cares about. If the Taylor Rule had said that the Fed would raise the rate when the output gap was negative, it would have been rejected. No matter how good a description of past behavior, a rule with such a logical breakdown would not find use.

The combination in the Taylor Rule of its representation of Fed goals (the dual mandate) and its ability to prescribe a rate that was a good fit of past Fed policy rates gave the Taylor Rule broad appeal. Within a year of its creation, the Taylor Rule had gained traction among investment banks and Fed officials alike. Taylor received an audience with Alan Greenspan (who was Chair of the Fed at the time) in the years following the Taylor Rule’s origin. By early 1995, Janet Yellen (a Fed governor who would later go on to become Chair) had brought up the Taylor Rule at FOMC meetings as a measure for where the FFR should be. By late 1995, the FOMC had a chart with various modifications of the Taylor Rule at each meeting (Asso, Kahn, and Leeson 2010).

The Taylor Rule’s success in describing the FFR has recently faltered, sparking modern installments of the debate between rules and discretion. Following the GFC, the Taylor Rule began prescribing a FFRT that was noticeably different from the actual FFR, which will be more clearly illustrated below in Section 2. This has added an interesting examination point to the ongoing debate between rules and discretion in monetary policy. Not only did the Taylor Rule prescribe a noticeably different rate following the GFC, it did so for an extended period. The sustained difference between the FFR and FFRT was historically unique in its size and length, causing speculation that during this time the Fed completely broke from a rules-based approach. It was during this period of divergence between the FFR and FFRT that the Fed engaged in QE (the three rounds previously mentioned) while the FFR languished at the zero bound.

This deviation between the Taylor Rule’s rate prescription and the actual policy rate has caused public disputes between Taylor and Bernanke. Taylor has argued that the lackluster recovery from the GFC in the U.S. can be attributed to the Fed’s deviation from rules-based policy, particularly the Fed’s deviation from what the Taylor Rule prescribed (Taylor 2015). This assertion was a direct accusation against Bernanke, who was Chair of the Fed during and following the GFC. Bernanke countered by arguing that discretionary policy allowed him to fully respond to the GFC, and that the lackluster recovery was not a result of monetary policy but rather the severity of the recession.

Bernanke went on, though, to provide a modified Taylor Rule, which I will refer to as “Bernanke’s Rule” (Bernanke 2015). This rule was meant to show that the Fed did not completely abandon a rules-based approach when conducting monetary policy following the GFC, since Bernanke’s Rule prescribes a policy rate closer to the actual FFR following the GFC. Bernanke’s Rule recreates the Taylor Rule, except that α, the weighting coefficient on the output gap, is 1.0 rather than 0.5 (meaning that the Fed would lower the FFR more in response to a negative output gap than the Taylor Rule would suggest6). Bernanke’s Rule prescribed sustained negative rates following the GFC, which Bernanke used to justify the QE that the Fed engaged in at the time (Bernanke 2015). If a policy rule calls for a negative FFR, that provides evidence (in a rules-based context) that monetary policy needs to be even more accommodating than that which would produce a zero FFR, providing a rules-based rationale for unconventional methods like QE.
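
Written out in the notation introduced above, this description implies that Bernanke’s Rule takes the form

𝐹𝐹𝑅Bern = 𝜋 + 1.0𝑌 + 0.5(𝜋 − 𝜋∗) + 𝑟∗

with all other components defined exactly as in the original Taylor Rule.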

Nikolsko-Rzhevskyy and Papell (2012) examined the possibility that QE had been a rules-based action by taking Bernanke’s Rule (although they didn’t call it that) and applying it to a large historical period. The paper’s premise is that Bernanke’s Rule was only worth using to justify Fed behavior after the GFC if it had the same historical merit as the Taylor Rule7. They conclude that Bernanke’s Rule would have prescribed nonoptimal interest rates in the past (meaning a prescribed policy rate that would have damaged the economy), and therefore Bernanke’s Rule is not a Taylor Rule modification that should be applied to justify the QE after the GFC.

This paper will continue the search for a Taylor Rule modification that better matches Fed behavior following the GFC, specifically seeking rules-based justification for QE. More specifically, I will look at a Taylor Rule modification provided by James Bullard, President of the St. Louis Fed, to see if his Taylor Rule modification (which I will call “Bullard’s Rule”) provides rules-based justification for QE. However, rather than evaluating this modified Taylor Rule by making claims about optimality, as Nikolsko-Rzhevskyy and Papell (2012) did, this paper will evaluate Bullard’s Rule in the same way that the Taylor Rule was evaluated. First, the rule needs to incorporate logical economic factors based upon economic realities. In other words, does the modification make economic sense? Would the actual FFR be influenced by these factors? Would the Fed respond to these economic factors? In order for a Taylor Rule variant to be credible in the same way that the Taylor Rule has been, it has to be logically consistent. Second, the rule needs to be an adequate description of past Fed behavior (specifically referring to the FFR), seeing as the ability to describe past Fed behavior helped the Taylor Rule gain credibility.

Section 2 of this paper will revisit the Taylor Rule in depth and discuss the process of constructing a Taylor Rule chart using real-time data. Section 3 of this paper will introduce Bullard’s Rule and analyze both the reasoning for his modification as well as the logic of it from an economic standpoint. Lastly, Section 4 will look at the policy rates that the Taylor Rule and Bullard’s Rule have historically prescribed using an r-squared analysis to determine which has been a better match of the actual FFR over time. The paper concludes that Bullard’s Rule makes logical modifications to the Taylor Rule and has been a better description of Fed behavior over time, building a plausible rules-based explanation for the rate moves and QE seen following the GFC.

Section 2: The Taylor Rule’s Composition Using Real-Time Data

In this section, I will review the composition of the Taylor Rule in depth, providing explanations for each of the components. In doing so, I first discuss the neutral rate of interest included in the Taylor Rule, as the neutral rate is the part of the Taylor Rule that Bullard modifies in his rule. With an understanding of all the components in the Taylor Rule, I will then discuss the construction of the data set used in this paper to illustrate the Taylor Rule. Finally, with this data in hand, I will empirically display the discrepancy between the effective FFR and FFRT that has been referenced earlier in this paper. First, I will take a closer look at the Taylor Rule. As stated previously, the Taylor Rule is

𝐹𝐹𝑅T = 𝜋 + 𝛼𝑌 + 𝛽(𝜋 − 𝜋∗) + 𝑟∗

where FFRT is the suggested FFR based on the Taylor Rule, π is the observed inflation rate in the U.S., π* is the target inflation rate (assumed to be 2%), 𝑟∗ is the neutral interest rate (assumed to be 2%), Y is the output gap in the economy, and α and β are weightings that describe the Fed’s responsiveness to an output gap and inflation gap, respectively (α = β = 0.5). In Taylor’s 1993 paper, the output gap, Y, was calculated as

𝑌 = 100(𝑦 − 𝑦∗)/𝑦∗

where y is real GDP and y* is trend real GDP, that is, the level of real GDP implied by its trend growth rate over the given sample period. Originally, Taylor’s sample spanned 1984:Q1 to 1992:Q3. During this period, trend real GDP growth was 2.2% annually, so the output gap was measured quarterly as the percentage deviation of real GDP from this trend using the above formula.
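
As a purely illustrative example (the figures are hypothetical), if real GDP in a given quarter were 18.36 trillion dollars while trend real GDP stood at 18.00 trillion dollars, the formula would give Y = 100(18.36 − 18.00)/18.00 = 2.0, a positive output gap of 2%.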


The Taylor Rule succeeds in capturing the dual mandate of the Fed in an algebraic formula. The goal of stable prices is watched over by the inflation gap (π – π*) while the goal of maximum employment is represented by the output gap (Y). In the Taylor Rule, the gaps are equally weighted (with a 0.5 coefficient), reflecting the balanced importance of these mandates to the Fed (i.e., one is not given precedence over the other). Thus, the Taylor Rule prescribes that the Fed raise the FFR by half a percentage point for each percentage point that output is above trend, and raise the real policy rate by half a percentage point for each percentage point that inflation is above target (because inflation also enters the rule directly, the nominal FFR rises by one and a half percentage points in that case). When inflation and output are both on target (or trend, respectively) the Taylor Rule prescribes a FFRT equal to the neutral rate, 𝑟∗, plus inflation (which would be 2% when inflation is on target). Therefore, the neutral rate serves as the intercept for the Taylor Rule, and rate adjustments are made relative to this anchor, increasing its importance.
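
For readers who prefer code to algebra, the prescription can be computed directly. The following Python snippet is a minimal sketch of the original Taylor Rule as written above; the function name and the example inputs are my own and purely illustrative.

    def taylor_rule(inflation, output_gap, target_inflation=2.0, neutral_rate=2.0,
                    alpha=0.5, beta=0.5):
        """Return the Taylor Rule prescription FFR_T (all inputs in percent)."""
        return (inflation                                # inflation enters directly
                + alpha * output_gap                     # response to the output gap
                + beta * (inflation - target_inflation)  # response to the inflation gap
                + neutral_rate)                          # assumed 2% neutral real rate

    # Illustrative values only: 2.5% inflation with output 1% above trend.
    print(taylor_rule(inflation=2.5, output_gap=1.0))    # prints 5.25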


The neutral rate8, 𝑟∗, warrants an even closer look than the rest of the components because of its importance to the Taylor Rule and also because the neutral rate is the part of the Taylor Rule that is modified in Bullard’s Rule. Knut Wicksell (1898, 102) first proposed the notion of a natural rate of interest saying, “[t]here is a certain rate of interest on loans which is neutral in respect to commodity prices, and tends neither to raise nor to lower them.” The neutral rate is defined by the San Francisco Fed as the rate that “neither stimulates (speeds up, like pushing down the gas pedal on a car) nor restrains (slows down, like hitting the brakes) economic growth" (Federal Reserve Bank of San Francisco 2005). In practice, a FFR that is above the neutral rate would be viewed as restrictive, while a FFR below the neutral rate would be expansionary. In the Taylor Rule, this means that any FFR above 4% would be restrictive, and a policy rate below 4% would be expansionary.9


The neutral rate is a challenging inclusion in a policy rule because it is critical to setting the FFR but is also a theoretical notion that cannot be directly observed. Taylor (1993, 202) gives a brief explanation for his assumption that the neutral rate is 2%, saying that “the 2-percent ‘equilibrium’ real rate is close to the assumed steady-state growth rate [of real GDP] of 2.2 percent.” Taylor devotes almost no time to explaining the neutral rate in his rule, which is unsurprising considering that the crowning achievement of the Taylor Rule (in terms of its construction) was the reflection of the Fed’s dual mandate; examining the neutral rate was not the point of his work. Taylor’s assertion that the neutral rate is tied to trend growth does have support in the literature, seeing as “a tight link between the equilibrium [neutral] rate and growth is common in theoretical models” (Hamilton et al. 2015, 2).


However, recent studies have concluded that the neutral rate’s “relationship with trend GDP growth [is] much more tenuous than widely believed” (Hamilton et al. 2015, 1). Challenges to linking the neutral rate with output growth are bolstered by the fact that in the U.S. “rates were high in the 1970s and 1980s, when productivity growth was low; and that they started declining in the 1990s, when productivity accelerated” (Del Negro et al. 2017, 239). Even if there were a direct relationship between output growth and the neutral rate (which, as stated, is a tenuous assumption), this relationship would imply that the neutral rate in the economy changes over time, seeing as GDP does not grow on a fixed path indefinitely. Indeed, the Taylor Rule’s assumption that the neutral rate sits at a static 2% has drawn criticism. Members of the FOMC have been quoted saying things such as “[w]hile I am a strong believer in some of the wisdom embedded in the Taylor [R]ule, I have been concerned for a long time that we need to be more careful about how we set its level by coming up with a more reasonable estimate of the equilibrium funds rate” (FOMC 1997, 66-67). The neutral rate is the component of the Taylor Rule that Bullard’s Rule modifies.


Before detailing Bullard’s Rule and his interpretation of the neutral rate, I will explain the composition of the data set used in this paper, with the end goal of producing a chart that displays the divergence between FFRT and FFR following the GFC. This deviation serves as motivation for examining Bullard’s Rule as a rules-based explanation for monetary policy following the GFC. To show the breakdown of the Taylor Rule graphically requires constructing a historical Taylor Rule chart and comparing that data to the FFR over time. In doing so, it has been common practice since Orphanides (2001) to use real time data that would have been available to policy makers when they made their decisions. “Real-time data” means using a Taylor Rule data set made up of, as much as possible, vintages of information (such as inflation, etc.) that would have been available at the given moment. For example, when looking at what the Taylor Rule prescribes for 1995:Q2 (the FFRT in 1995:Q2) it would be inappropriate to use a measure of inflation for 1995:Q2 that was precisely calculated and revised (in hindsight) in the year 2016. Rather, we want to look at data that would have been available in 1995 when policy makers were meeting to decide the FFR. What follows is a description of data sources in this pursuit of real-time information.


To calculate inflation, the original Taylor Rule paper (1993) used the GDP deflator. However, today core Personal Consumption Expenditures (core PCE) inflation is a very common measure of inflation. Core PCE measures price level changes in the economy while excluding volatile categories such as food and energy. The exact measurements for core PCE are set out by the Bureau of Economic Analysis. Core PCE data do not become available until 1996:Q2. Therefore, for this data set, the GDP deflator is used from 1970 until core PCE becomes available, at which point core PCE is used, as it better represents the inflation measures that policy makers were watching from 1996 onward; core PCE is also a preferred gauge of many Fed members, making it a more realistic inclusion in the model. Real-time values for inflation (the GDP deflator prior to 1996:Q2 and core PCE from then on) were calculated using values from the Philadelphia Fed, which provides historical vintages of the GDP deflator and core PCE. All values were also calculated with a one-quarter time lag. For example, to find inflation at time t, year over year inflation was calculated using values from t-1 and t-4. The value for the current quarter would not have been available in real time, and therefore is not used in the data set.
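
A minimal sketch of how such a spliced, lagged inflation series might be assembled is shown below. The file and column names are hypothetical stand-ins for the Philadelphia Fed vintage data described above, and the final two lines show one plausible reading of the lagging convention (year-over-year inflation, shifted back one quarter).

    import pandas as pd

    # Hypothetical vintage files; the real-time data described in the text
    # come from the Philadelphia Fed's real-time data sets.
    deflator = pd.read_csv("gdp_deflator_vintages.csv")   # columns: quarter, price_level
    core_pce = pd.read_csv("core_pce_vintages.csv")       # columns: quarter, price_level

    for df in (deflator, core_pce):
        df["quarter"] = pd.PeriodIndex(df["quarter"], freq="Q")
        df.set_index("quarter", inplace=True)

    # Splice: GDP deflator before 1996Q2, core PCE from 1996Q2 onward.
    cutoff = pd.Period("1996Q2", freq="Q")
    price_level = pd.concat([deflator.loc[deflator.index < cutoff, "price_level"],
                             core_pce.loc[core_pce.index >= cutoff, "price_level"]])

    # Year-over-year inflation, lagged one quarter so that only data available
    # in real time at quarter t enter the calculation.
    yoy_inflation = 100 * (price_level / price_level.shift(4) - 1)
    realtime_inflation = yoy_inflation.shift(1)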


For the output gap, this data set uses the formula (detailed previously) laid out in Taylor (1993) up until 1987:Q3. Prior to 1987:Q3, there is no record of a real-time potential GDP measure or output gap. Therefore, for this period, the output gap is found by detrending GDP, as it was in Taylor’s original work. There are issues with this approach, such as the somewhat arbitrary start date of 1970. However, it mirrors the approach used in the original paper and reflects how the output gap was actually calculated at the time; applying a more modern method to earlier periods would be inappropriate, since it would not reflect the data that would have been available to policy makers then. After 1987:Q3 and until 2011:Q4, the Greenbook provides a real-time output gap. The Greenbook is a set of economic data and forecasts prepared for the Fed before each meeting. Greenbook data are explicitly used by policy-makers and are therefore the most appropriate measure of the output gap for the period when they are available. The Greenbook estimates stop after 2011 because they are only released to the public with a lag of several years. Following 2011, real-time potential GDP and realized GDP estimates are taken from data provided by the Congressional Budget Office, and the output gap is calculated from these values. It may seem strange to splice together so many methods of calculating the output gap, but given the goal of accurately and fairly representing what was available to policy-makers at the time, it is the most consistent approach.
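
Sketched in the same hypothetical style as above, the splice of output gap sources by period might look as follows (the series names are again placeholders, not the actual files used).

    import pandas as pd

    def output_gap(quarter, detrended, greenbook, cbo):
        """Pick the real-time output gap source appropriate to the quarter.

        detrended, greenbook, cbo: hypothetical pandas Series of output gaps,
        indexed by pandas Period, built from detrended GDP (pre-1987Q3),
        Greenbook estimates (1987Q3 through 2011Q4), and CBO-based estimates
        (after 2011), respectively.
        """
        if quarter < pd.Period("1987Q3", freq="Q"):
            return detrended[quarter]
        elif quarter <= pd.Period("2011Q4", freq="Q"):
            return greenbook[quarter]
        else:
            return cbo[quarter]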


Using the data described above, Exhibit 1 is constructed, which empirically displays the discrepancy between FFRT and FFR during and after the GFC (remember that the GFC is the recession formally recognized by NBER from 2007:Q4 to 2009:Q2)10. During the GFC, the Taylor Rule tended to prescribe a higher FFRT than the effective FFR. After the GFC (starting in 2009:Q3), FFRT dipped negative for about a year and then increased to prescribe a FFRT of around 2% for approximately 5 years. Through this entire six-year period (2009:Q3 to 2015:Q3), when FFRT went negative and then lingered around 2%, the actual FFR hovered right around the zero bound, a clear difference from FFRT.11 For a rule that had been commonly viewed as a description of Fed behavior, the breakdown seen in Exhibit 1 is quite unique.12 This breakdown is exactly what caused Taylor (2015) to accuse Bernanke of breaking from a rules-based approach and damaging the economy. It is also the period that Bernanke (2015) has used as evidence to say that rules-based monetary policy doesn’t work (because he believes he took correct action following the GFC, action not in line with the Taylor Rule).


To this point, I have discussed how monetary policy is conducted in the U.S. as well as framed the ongoing debate between rules and discretion in conducting monetary policy. I introduced and explained the Taylor Rule as well as laid out how I constructed a real-time data set for examination of the Taylor Rule. Having now seen the discrepancy between the Taylor Rule’s rate prescription and the actual FFR following the GFC graphically, it is now time to introduce Bullard’s modification of the Taylor Rule, seeking to understand whether it provides a rules-based framework for the low interest rates and QE seen following the GFC.

Section 3: Bullard’s Rule

Now that we understand the Taylor Rule, its importance, and can empirically show the breakdown in its descriptive power following the GFC, I will more carefully motivate and introduce Bullard’s Rule. First, I will explain why Bullard modified the Taylor Rule the way that he did. Then I will examine the behavior of Bullard’s neutral rate during recessions to see if it behaves in a logical way.


Before looking at the descriptive power of Bullard’s Rule, it is important to understand the rationale behind his modification as well as to test the logic of its inclusion. Bullard’s Rule modifies the Taylor Rule’s representation of the neutral rate. Bullard forgoes the assumption, underlying the Taylor Rule, that the neutral rate stays at 2%. This 2% assumption was a useful one for building a policy tool because the neutral rate is a theoretical notion that cannot be directly observed (but it is very important to monetary policy). In practice, and in many recent models, the neutral rate can be thought of as “the real interest rate on a safe and liquid asset that would be observed in equilibrium” (Del Negro et al. 2017, pg. 273). This interpretation of the neutral rate is driven by the fact that “central banks generally target returns on short-term safe and liquid assets. Therefore, for [r†] to be a useful benchmark for monetary policy, it should be associated with the return to an asset that possesses such attributes” (Del Negro et al. 2017, pg. 236).


Bullard takes the same approach as Del Negro et al. (2017): Instead of assuming a static 2% neutral rate, Bullard calculates the neutral rate as the real return on safe assets in the economy, namely, Treasury securities. Specifically, Bullard calculates the neutral rate as the 1-Year Nominal Treasury yield minus 1-year trailing inflation, thus producing the real return on a safe and liquid asset (Bullard 2017). In the type of model described by Del Negro et al. (2017), the neutral rate would normally “equal the real output growth rate” (Bullard 2018, 13)13. The output growth rate can be calculated as labor force growth plus labor force productivity growth. However, there is a troubling problem with the neutral rate defined as the real return on safe assets: it has been consistently falling over time. This falling neutral rate is exactly what has caused doubts about the relationship between the neutral rate and output growth that Taylor (1993) assumes.


Exhibit 2, which is taken from Bullard (2017), motivates the solution that both he and Del Negro et al. have been exploring. Exhibit 2 shows that the real yield on the 1-Year Treasury, labeled on the graph as “1-Year Real Yield (r†),” has been steadily declining over time, while the returns to all capital have remained fairly constant. This phenomenon has led researchers to posit that the falling neutral rate is driven by an investor desire for safe assets.14 A high investor desire for safe assets means that capital is flowing into safe assets. Capital flowing into safe assets increases demand for them, which bids up their prices and in turn lowers their yields (hence the falling r†).


Bullard has taken this relatively new understanding of the neutral rate and applied it to modify the Taylor Rule. Bullard’s Rule is written as:

𝐹𝐹𝑅B = 𝜋 + 𝛼𝑌 + 𝛽(𝜋 − 𝜋∗) + 𝑟†

where all variables are the same as in the Taylor Rule, except that FFRB is the policy rate prescribed by Bullard’s Rule, and r†, the neutral rate, equals the real return on safe assets15 and

r † = τ + φ + δ


where τ equals labor force growth, φ equals labor force productivity growth, and δ equals an investor desire for safe assets. This is another way of saying that the neutral rate equals the output growth rate plus an investor desire for safe assets. A strong desire for safe assets is represented by a negative value for δ, while a normal desire for safe assets would be represented by a zero value for δ. After all, a strong (or high) investor desire for safe assets would put downward pressure on the real returns to safe assets; therefore, the safety premium takes a negative value. It is important to note that δ, the investor desire for safe assets, cannot be directly observed but is rather solved for algebraically as

δ = r † − τ − φ
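
As a purely illustrative calculation (the figures are hypothetical, not estimates), suppose the real one-year Treasury yield is r† = −0.5%, labor force growth is τ = 0.5%, and labor force productivity growth is φ = 1.0%. Then δ = −0.5 − 0.5 − 1.0 = −2.0, a strongly negative safety premium indicating an elevated investor desire for safe assets.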

As just discussed, an alternative to the 2% neutral rate assumption in the Taylor Rule is important because of research about the falling neutral rate and the investor desire for safe assets. However, allowing the neutral rate to move is also important in light of the work done by Reinhart and Rogoff (2009). Their work empirically examines economies with high debt to GDP ratios and concludes that such economies tend to grow more slowly after a major financial crisis. If economies rarely return to their pre-crisis output growth rate, as Reinhart and Rogoff empirically display, then maintaining the same neutral rate assumption (2%) post-crisis as was held pre-crisis doesn’t make much sense, given the link between the neutral rate and output.16 After all, the U.S. recently went through the GFC while maintaining a very high debt to GDP ratio. Empirically documented slow growth following major crises also weakens the assertion from Taylor (2015) that Bernanke’s actions following the GFC caused the slow recovery seen in the U.S., since a slow recovery seems to be a hallmark of major crises where the debt to GDP ratio is especially high.


In addition to the research driving Bullard’s interpretation of the neutral rate, Bullard’s r† also provides a realistic improvement over the original Taylor Rule because it allows policy inertia to enter the model. Policy inertia means that the Fed considers the current FFR when setting the new FFR. Policy inertia is included in Bullard’s Rule through the neutral rate because the real return on safe assets in the U.S. is influenced by the FFR in the short term. Policy inertia in a Taylor-type rule is important because the Fed raises rates at a measured pace, meaning that the Fed usually raises the target FFR (and indirectly the effective FFR) a fraction of a percent at a time. The Fed does not make rate decisions without taking into consideration where the rate is today. Policy inertia is normally modeled by taking an inertia factor, 𝜌, between 0 and 1. A 𝜌 of 0 would mean that the Fed does not acknowledge its previous rate when determining a new one. This is essentially how the Taylor Rule naturally functions, which is unrealistic. A policy inertia factor of 1 would mean that the Fed always sets the overnight rate at exactly what it was last time, so that it would never change (which is also unrealistic). A Taylor-type rule that includes policy inertia tends to be a realistic Taylor Rule modification (Templeton Financial Services n.d.). A rate prescription with policy inertia is calculated as:

𝐹𝐹𝑅t = (𝜌 ∗ 𝐹𝐹𝑅t−1) + [(1 − 𝜌) ∗ (𝐹𝐹𝑅prescribed)]

Policy inertia is included in Bullard’s Rule through the calculation of the neutral rate as the 1-Year Nominal Treasury minus 1-year trailing inflation. The 1-Year Nominal Treasury is highly influenced by the FFR in the short term, although in the long term it is determined by market factors. In other words, in Bullard’s Rule, when looking at the output gap and inflation gap, the starting point that the Fed uses (the neutral rate, the intercept of the formula) will be highly influenced by its previous policy rate.
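
For illustration only (the numbers and the value of ρ are hypothetical), with an inertia factor ρ = 0.85, a previous FFR of 1.0%, and a rule prescription of 3.0%, the smoothed rate would be 0.85 × 1.0 + 0.15 × 3.0 = 1.30%, moving only part of the way toward the prescription in a single step.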


Exhibit 3 shows the FFR versus the 1-Year Nominal Treasury. Clearly, the two are highly related. It would be a mistake, however, to use the effective FFR itself as the neutral rate instead of the real return on safe assets, because the real return on safe assets is determined by market forces in the long run and because the neutral rate, from a theoretical standpoint, exists regardless of monetary policy. The FFR, on the other hand, is by design the direct result of monetary policy.


Having examined the rationale for the modified neutral rate in Bullard’s Rule, I now construct Exhibit 4, which plots the policy rate prescribed by Bullard’s Rule, FFRB, against the actual FFR surrounding the GFC. FFRB was constructed in the exact same way as FFRT, except that rather than assuming a 2% neutral rate, r† is calculated quarterly as the 1-Year Nominal Treasury minus 1-year trailing inflation, as done in Bullard (2017). The FFR and the 1-Year Nominal Treasury were taken from the Federal Reserve Bank of St. Louis’ data bank. The inflation data used to calculate 1-year trailing inflation exactly match the methods used in constructing Exhibit 1. The takeaway from Exhibit 4 is that FFRB closely matches the actual FFR up until FFRB goes negative starting in 2008:Q4. The sustained negative FFRB starting in 2008:Q4 provides a rules-based explanation for the QE that the Fed undertook during this time period.17 In a rules-based framework, Bullard’s Rule provides a very close match to Fed behavior (both in terms of the FFR and QE) following the GFC.
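
A minimal sketch of the FFRB calculation under these definitions might look as follows; as before, the function name and example inputs are placeholders rather than the paper’s actual data.

    def bullard_rule(inflation, output_gap, nominal_1yr, trailing_inflation,
                     target_inflation=2.0, alpha=0.5, beta=0.5):
        """Return FFR_B: the Taylor Rule with r* replaced by r-dagger.

        r-dagger is proxied, as described in the text, by the 1-Year Nominal
        Treasury yield minus 1-year trailing inflation (all inputs in percent).
        """
        r_dagger = nominal_1yr - trailing_inflation
        return (inflation
                + alpha * output_gap
                + beta * (inflation - target_inflation)
                + r_dagger)

    # Hypothetical post-crisis values: low inflation, a deeply negative output
    # gap, and a 1-year Treasury yield below trailing inflation.
    print(bullard_rule(inflation=1.5, output_gap=-4.0,
                       nominal_1yr=0.5, trailing_inflation=1.5))   # prints -1.75

A negative prescription like the one in this hypothetical example is exactly the kind of output that, in a rules-based framework, would justify unconventional accommodation such as QE.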


The goal now is to understand whether Bullard’s Rule: (1) has modified the neutral rate in a way that performs logically in a Taylor-type rule; and (2) is a good depiction of past Fed behavior for periods before the GFC. That leads to the following questions: how does the neutral rate in Bullard’s Rule (and the factors that comprise it) behave over time? Is that behavior logically consistent with what would be expected? Does it make sense that the Fed would respond to these factors? If not, Bullard’s Rule is not a credible policy tool in the same way that the Taylor Rule is. If the answer to these questions is “yes,” it would add strength to the argument that Bullard’s Rule should be accepted as a useful Taylor Rule variant and give credence to its potential explanatory power for Fed behavior following the GFC.


Recessions will be examined to see whether Bullard’s neutral rate responds in a way that makes economic sense. Recessions are clearly an important time to have proper monetary policy because alleviating a recession has direct positive impact on the lives of people in the economy. Bullard’s Rule relaxes the idea of a static neutral rate. If Bullard’s construction of the neutral rate is to be believable, then the neutral rate should be lower during downturns in the economy than in booms. Taylor himself said that, “when the economy starts into recession, sharp and rapid interest-rate declines are appropriate” (Taylor 1993, 196). It makes sense that a weaker economy would not have the same neutral rate as a strong economy.


A lower neutral rate during recessions is exactly what happens with Bullard’s interpretation of the neutral rate, as can be seen in Exhibit 5. During each recession (the shaded regions on the graph), the neutral rate falls relative to the 2% line representing the neutral rate in the Taylor Rule. From this standpoint, Bullard’s Rule seems more credible. There are times outside of recessions when Bullard’s neutral rate takes a steep dive, such as between 1984 and 1986. The cause of such moves is beyond the scope of this paper, but it is reasonable to expect that there are times outside of recessions when it would be appropriate to have a lower neutral rate. A lower neutral rate is more appropriate for a weaker economy, and it is likely that there are times when the economy is weak without being formally defined as in recession. The key here is that the neutral rate behaves as expected during recessions.18 A neutral rate that falls during recessions is an improvement over the Taylor Rule because it would allow the Fed (within the context of a rules-based approach) to lower the target FFR more quickly and take appropriate action, which is beneficial to an economy in a downturn.


Besides taking r† as a whole, it is worthwhile to break out and examine its components19. It is important that the components behave in a way that is logically consistent, or Bullard’s neutral rate modification may not make sense to apply to a Taylor-type rule. Exhibit 6 shows how the sum of the observable components of the neutral rate (labor force growth and labor force productivity growth) behaves over time, specifically during recessions. Unsurprisingly, the components take steep dives during recessions, as expected. There are other dives in the observable parts of the neutral rate that take place outside of recessions (1986 to 1988 and 1991 to 1993, for example). There are many reasons why labor force growth and labor force productivity growth could take dives like this, such as immigration policy reducing the labor force or a change in educational standards lowering labor force productivity growth. Forces that cause dives in these observable factors may well occur outside of what NBER formally defines as a recession. Lower labor force growth and labor force productivity growth need not occur only within recessions, but it is important for logical consistency that they do occur during recessions (this is the expected outcome), since this is what allows Bullard’s neutral rate to fall during recessions.


A more novel result comes from examining the safety premium, δ, which must be solved for algebraically.20 When looking at values for the safety premium, remember that a value of zero represents a normal desire for safe assets by investors in the market, while a negative value represents a higher than normal desire for safe assets. Exhibit 7 reveals interesting behavior from the safety premium around recessions. The safety premium dives in the middle of each recession, meaning that there is a high desire for safe assets during recessions. This makes logical sense, as spooked investors tend to fly to safety when the economy turns down. This behavior strengthens the case for including the safety premium in a Taylor-type rule rather than confining it to theoretical models. Already supported by the empirically documented decline in returns to safe assets in the U.S. (as discussed previously), the safety premium is now also shown to behave as would be expected around a recession. This flight to safety has also been evidenced in the bond market during recessions; in particular, investors tend to move capital from high-yield assets into higher-credit-quality assets during such periods. The safety premium fluctuates and at times takes steep dives outside of recessions, but again the important conclusion is that it predictably does so within recessions. There are many factors in an economy or a society (such as the threat of war) that could cause investors to desire safe assets outside of a recession.


Following the last two recessions, the safety premium has not only taken on a highly negative value during recessions, it has stayed highly negative for prolonged periods. Bullard (2017) has posited that this increased safety premium explains the sustained low rates that have existed following the GFC. This lingering safety premium could go a long way in explaining why the original Taylor Rule has overstated the FFR following the past two recessions, since it doesn’t account for a safety premium.


In this section I have examined the reasoning for Bullard’s modification of the Taylor Rule. I then examined his neutral rate construction during recessions and found it to behave in a way that is logically consistent, providing evidence that it could be appropriate to use in a Taylor-type rule.

Section 4: Descriptive Power

Now that Bullard’s Rule has been shown to be a logical modification to the Taylor Rule, I move to evaluate Bullard’s Rule in the other way that the Taylor Rule was evaluated: historical fit to Fed behavior (how close is FFRB to FFR over time?). Exhibit 8 shows the Taylor Rule’s rate prescription, FFRT, against the actual FFR over time. From this chart, it is clear that the Taylor Rule has historically been a decent match for the FFR but also that it stopped being a good approximation of Fed behavior around the GFC. The deviation after the GFC has already been graphically shown in Exhibit 1, but Exhibit 8 provides a longer-term view which highlights the descriptive power of the Taylor Rule before the GFC. Exhibit 9 shows Bullard’s Rule’s rate prescription, FFRB, against the actual FFR over time. From this chart, it is clear that Bullard’s Rule has also historically been a decent fit to the actual FFR. The fit of Bullard’s Rule to the actual FFR following the GFC has already been graphically shown in Exhibit 4. Exhibit 9 provides a longer-term view which highlights the descriptive power of Bullard’s Rule before the GFC. There are large deviations between FFRB and the actual FFR following the GFC, but remember that these took place when the Fed was holding the FFR at the zero bound, and that FFRB during this time, because of its sustained negative rates, would provide rules-based justification for QE.


Exhibit 10 plots FFR, FFRT, and FFRB on the same chart to more clearly display how they have compared over time. From Exhibit 10, it could possibly be concluded that Bullard’s Rule has been closer to the effective FFR, but it would be hard to say definitively. This is why applying a statistical method like r-squared is helpful in determining which rule has prescribed a policy rate closer to the actual FFR over time.


Analyzing historical fit will be done using a simple r-squared21 analysis, which statistically describes the percentage of the variation in a variable “x” that is explained by the variation in another variable “y.” The actual FFR is set against the prescribed FFR from each rule (FFRT and FFRB) to see which rule explains a higher percentage of Fed behavior. Two r-squared values are produced: one represents FFR vs. FFRT while the other represents FFR vs. FFRB. By this measure, the formula (Taylor Rule or Bullard’s Rule) with the higher r-squared is the better match for what the Fed actually did with the FFR.


It is important to note that the r-squared employed here is different from another common statistical measure, R-squared (with a capital “R”), which attempts to answer the same question in a different context. R-squared22 pertains to a regression model with multiple regressors, while r-squared (little “r”) is appropriate when there are only two variables (one regressor). When using r-squared, the independent and dependent variables are mathematically interchangeable (swapping them will not change the value of r-squared). The standard formula for r-squared is the square of the Pearson product moment correlation coefficient, r, and is written as:

 
r = Σ(xᵢ − x̄)(yᵢ − ȳ) / √[Σ(xᵢ − x̄)² ∙ Σ(yᵢ − ȳ)²]

The result of this formula is then squared to produce r-squared. In the formula, x̄ and ȳ represent the sample means, while x and y represent individual observations (of FFR and a prescription for FFR). For example, when calculating r-squared between FFR and FFRT, the equation could be rewritten as:

r = Σ(FFRᵢ − mean(FFR))(FFRT,ᵢ − mean(FFRT)) / √[Σ(FFRᵢ − mean(FFR))² ∙ Σ(FFRT,ᵢ − mean(FFRT))²]

The use of r-squared suffers from a variety of issues. It does not by itself give the ability to say if a particular policy rate would have strengthened the economy. The analysis produced is positive and not normative. However, the goal here is not to say whether Fed policy over the period was optimal, but rather to say which rule (Taylor or Bullard’s) better matches Fed policy. This is relevant because if a rule does not approximate past Fed behavior, it is of little use in predicting their future actions. The historical description of the FFR by the Taylor Rule contributed to its adoption and popularity, therefore determining which rule describes the FFR better over time is an important endeavor. Another weakness of r-squared is that it does not recognize when the variation in the independent variable is predictable. Despite these issues, r-squared is a good starting point to determine best fit.
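
As a sketch of the calculation (using generic variable names rather than the paper’s actual data set), the two r-squared values could be computed as follows in Python.

    import numpy as np

    def r_squared(actual, prescribed):
        """Square of the Pearson correlation between two equal-length series."""
        actual = np.asarray(actual, dtype=float)
        prescribed = np.asarray(prescribed, dtype=float)
        r = np.corrcoef(actual, prescribed)[0, 1]
        return r ** 2

    # ffr, ffr_taylor, and ffr_bullard would be the quarterly series built above.
    # r2_taylor  = r_squared(ffr, ffr_taylor)
    # r2_bullard = r_squared(ffr, ffr_bullard)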


The data used to run the r-squared analysis are the same as used in Exhibits 8, 9, and 10. Over the life of the data (from 1970:Q1 until 2017:Q1), the Taylor Rule has an r-squared of 0.6326 while Bullard’s Rule has an r-squared of 0.9250. While some movement in the FFR can be explained with the Taylor Rule (63.26%, according to r-squared), Bullard’s Rule does a much better job of approximating the FFR. There is, however, an issue with running r-squared for the whole time period (1970:Q1 until 2017:Q1). Since the FFR does not go negative (or at least the Fed has historically never set it there) but rule prescriptions (FFRT and FFRB) can, there is a truncation issue. It may not be appropriate to run r-squared for a variable that can go negative against one that cannot. The periods when the Taylor Rule and Bullard’s Rule prescribe negative rates make up a relatively small portion of the data set, but the issue remains. There is no readily available statistical solution to this issue. However, as an exercise in due diligence, it is valuable to run the r-squared for the period before the rule prescriptions go negative. Over this time period (1970:Q1 until 2009:Q3) the Taylor Rule has an r-squared of 0.5313 while Bullard’s Rule has an r-squared of 0.9017.23 Whether or not the time periods with negative prescribed rates are included, Bullard’s Rule outperforms the Taylor Rule.


Not only does Bullard’s Rule fit what the Fed has done more closely, it has fit more closely during a critical time in Fed history. The Fed has a dual mandate to maintain low unemployment and encourage price stability. While it is difficult to say what FFR would have been “the right FFR” at a certain time, the dual mandate serves as a target (or a goal) for the Fed. As Bullard has pointed out (though not in the context of a neutral rate discussion), from 1995 until 2012 the Fed maintained a price level path as though it were targeting a 2% annual growth rate (Bullard 2012). This path can be seen in Exhibit 11. Therefore, it can be said that the actual FFR during this period was successful in that it fulfilled part of the dual mandate of the Fed (that is, the part that mandates price stability). Critically, Bullard’s Rule fits the actual FFR better than the Taylor Rule does during this time period of (at least partial) mandate fulfillment. During this time period from 1995 to 2012, the Taylor Rule has an r-squared of 0.5716 while Bullard’s Rule has an r-squared of 0.9466. Given that Bullard’s Rule prescribes a FFRB closer to the FFR than the original Taylor Rule during this period, and given that the Fed was successful in fulfilling at least part of its dual mandate during this time, it can be concluded that Bullard’s Rule may be more successful in prescribing a policy rate that maintains stable prices.


Since 2012, prices have not remained stable (on the 2% path). Rather, prices have fallen below the 2% path, meaning that inflation has not been high enough to keep prices growing at a 2% annualized rate. This occurrence can be seen in Exhibit 12, which is an extension of Exhibit 11 (which showed prices on a 2% path). Prices after January 2012, a date marked by the vertical dotted line on the chart, fell below the 2% path. During this period, when prices started falling off the 2% path, the Taylor Rule prescribed a higher FFRT than Bullard’s FFRB. It is generally accepted that a lower interest rate is conducive to inflation. Therefore, even during this period when prices have fallen off the path, Bullard’s Rule, because it prescribes a lower policy rate throughout and is therefore more encouraging to inflation, prescribes a monetary policy better suited to getting inflation back on target, furthering the idea that Bullard’s Rule is better suited for maintaining stable prices.


Granted, stable prices are only one half of the dual mandate, the other half being maintaining full employment. However, that mandate is more difficult to track over long periods of time. Price levels can be graphed against a trend line, and periods of inflation above the trend can be offset by appropriate periods of inflation below trend to get prices back on track over the long term. Output does not work the same way: if the economy had a positive output gap, meaning that it was producing above its estimated potential, it is unlikely that the Fed would intentionally push the economy to produce below capacity to compensate. It is difficult to say what policy rate would have produced full production or stable prices in the economy at any given time. Yet, because trend prices can be tracked over time, it is possible to look back and discuss the ability of Fed policy to produce stable prices. Since such an analysis is not feasible for the output gap, it is not attempted here.


In summary, Bullard’s Rule provides a policy rate estimate that fits what the Fed has done historically better than the original Taylor Rule. This is important, as one role of a Taylor-type rule is to describe what the Fed has done in order to give insight into what it might do. In this regard, Bullard’s Rule appears superior. Bullard’s Rule also appears superior to the original Taylor Rule when it comes to maintaining price stability, a stated goal of the Federal Reserve. Given this, Bullard’s Rule is credible in its ability to describe past Fed behavior.

Conclusion


Bullard’s modification of the Taylor Rule provides a credible alternative for calculating the neutral rate in the economy. The decomposition of this neutral rate has been shown to perform as expected during recessions. Bullard’s Rule has also been shown to provide a good historical fit. Considering these things alongside the fact that Bullard’s Rule provides a good description of Fed behavior following the GFC, it is plausible that the Fed did not break from a rules-based approach to monetary policy following the GFC.


This possibility is a motivating premise of the paper, and the goal of the paper has been to ask whether using Bullard’s Rule for this time period is appropriate. It is worth noting, however, just how well Bullard’s Rule described Fed behavior following the GFC. Once Bullard’s Rule’s prescription started approaching the zero bound from below, the Fed began to raise rates. This is a crucial performance by Bullard’s Rule that has previously gone unrecognized. Nobody knew when the Fed was going to raise rates again, save for indications provided by Fed minutes during this period. The original Taylor Rule’s prescription was still hovering around a positive 2%, and the Fed seemed set on keeping rates extremely low. The fact that Bullard’s Rule gave a close indication of when the FFR would get off the floor, even with the rate at the zero bound, shows its robustness and its predictive power in what was essentially uncharted territory. As a predictive tool, the Bullard modification gave insight in a time of great uncertainty and did so without relying on moves in the FFR (because the FFR was at the zero bound).


When it comes to using the price of safe assets as the neutral rate in a Taylor-type rule, further research is warranted to determine whether trending the data for r†, rather than taking raw values, produces a more valuable Taylor Rule modification; Bullard (2018) has advocated for this approach. Trending would better account for the fact that the 1-Year Nominal Treasury is influenced by the FFR in the short term but is determined by market forces in the long run. The pros and cons of each trending method would have to be weighed, but the merit of the Bullard modification shown in this paper motivates that sort of research (one simple possibility is sketched below). It would also be worthwhile to revisit other Taylor Rule modifications (such as Bernanke’s Rule) with this new neutral rate. Perhaps an even better modification can be formed by changing the coefficients on the inflation and output gaps with the revised neutral rate in place. Overall, Bullard’s neutral rate has proven to be a valuable modification and contribution to Taylor-type rules and should be an area of continuing research.
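As a purely illustrative example of what “trending” r† could look like, the sketch below applies a trailing moving average to a hypothetical quarterly r† series. The window length and the series values are assumptions for illustration; this is not Bullard’s (2018) specification, and other detrending methods (longer windows, statistical filters) would need to be compared in the research proposed above.

    # Minimal sketch, under assumed data: smooth a hypothetical quarterly
    # r-dagger series with a trailing moving average instead of using raw values.
    def trailing_average(series, window):
        """Each point is the mean of the current and preceding observations,
        up to `window` values, so early points use shorter averages."""
        smoothed = []
        for i in range(len(series)):
            chunk = series[max(0, i - window + 1):i + 1]
            smoothed.append(sum(chunk) / len(chunk))
        return smoothed

    # Hypothetical quarterly r-dagger values (percent), for illustration only
    raw_r_dagger = [1.2, 0.9, 0.4, -0.3, -0.8, -0.5, -0.1, 0.2, 0.6, 0.4]
    print(trailing_average(raw_r_dagger, window=4))

A trailing average of this kind damps quarter-to-quarter movements driven by the FFR while preserving the longer-run drift determined by market forces, which is the property the trending argument relies on.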

Footnotes

  1. The Fed sets a target FFR and uses monetary tools to produce it in the market. The FFR that banks face because of Fed actions is called the “effective” or “actual” FFR. The Fed is adept at reaching its target FFR (or more recently, its target range). Therefore, it is assumed that the effective FFR in the market is the FFR that the Fed aimed to create.

  2. The GFC is the recession formally recognized by the National Bureau of Economic Research that spanned 2007:Q4 to 2009:Q2. The QE following the GFC took place from 2009 to 2011.

  3. The FOMC (Federal Open Market Committee) is the group within the Fed specifically tasked with establishing monetary policy through targeting the FFR and conducting open market operations (such as QE).

  4. The output gap is a measure of how close the economy is to producing at its potential: (actual − potential) / potential.

  5. By saying a rule has described past “Fed behavior,” it is meant specifically that the rule has prescribed an FFR close to the actual FFR. However, the Fed does not directly set the FFR but instead uses tools to achieve it (usually open market operations). Prescribing the tools that the Fed should use in achieving the FFR is beyond the scope of this paper (and Taylor-type rules in general). The only specific tool examined in this paper is QE because of the possibility of motivating it through negative FFR prescriptions in a rules-based context.

  6. Bernanke is not alone in advocating for this modification of the Taylor Rule. It is fairly common and, at times, has been a good match for Fed behavior, as econometrically estimated in Judd and Rudebusch (1998).

  7. In Nikolsko-Rzhevskyy and Papell (2012), “historical merit” refers to an evaluation of whether prescribed rates from Bernanke’s Rule would have been better or worse for the economy when compared to the Taylor Rule’s rate prescriptions. Their paper focused on examining what should have been done with the policy rate.

  8. The neutral rate is also sometimes referred to as the “equilibrium rate” or the “natural rate of interest.”

  9. 4% is the nominal equivalent of the 2% real neutral rate in the Taylor Rule. The Taylor Rule assumes a target inflation rate of 2%, so with no inflation gap and no output gap, the nominal neutral rate would be 4%.

  10. Any reference to a recession in this paper refers to an economic downturn formally recognized by the National Bureau of Economic Research.

  11. Remember that this period of deviation is also when the Fed engaged in QE, which would not have found rules-based support under the Taylor Rule.

  12. Bernanke’s Rule has been plotted against the actual FFR in Appendix A. Remember that Bernanke’s Rule matches the Taylor Rule, except that the weighting coefficient on the output gap is 1 rather than 0.5. Bernanke’s Rule calls for more sustained negative rates than the Taylor Rule (which could provide rules-based justification for QE), but as discussed in Nikolsko-Rzhevskyy and Papell (2012), Bernanke’s Rule is not a credible Taylor Rule modification.

  13. This condition of many models is exactly what motivated Taylor to set his neutral rate near the GDP growth rate, as mentioned previously.

  14. This explanation is examined by Hamilton et al. (2015), Del Negro et al. (2017), and Bullard (2017) among others.

  15. Calculated as the 1-Year Nominal T-Bill minus 1-Year Trailing Inflation.

  16. Japan is a classic and well-documented example of an economy that never recovered to its original GDP growth path after a major financial crisis. Japan had a large asset bubble until the early 1990s, at which point the economy collapsed and Japan entered a financial crisis. Before 1990, GDP per capita grew at just over 5% annually. Following the crisis, GDP has grown at a much slower 0.88% annually.

  17. Remember that a prescribed negative FFR calls for even more stimulating action by the Fed than a zero FFR.

  18. While beyond the scope of the paper, it appears from Exhibit 7 that the fall in Bullard’s neutral rate between 1985 and 1987 can be largely attributed to a heightened safety premium in the economy. What causes investors to desire safe assets, especially outside of a recession, is an area for continuing research.

  19. Remember that Bullard posits a neutral rate comprised of labor force growth, labor force productivity growth, and an investor desire for safe assets.

  20. δ = r† − τ − φ

  21. From a notation standpoint, it is important to note that the “r” in “r-squared” is unrelated to “r†,” the variable included in the Taylor Rule and Bullard’s Rule. The overlap is a coincidence, retained here to stay consistent with the standard notation of statistics (r-squared) and economics (r†).

  22. R-squared is the square of the coefficient of multiple correlation; a brief computational sketch of this fit measure follows these footnotes.

  23. It is worth noting that Bernanke’s Rule vs the actual FFR produces an r-squared value of 0.6233 for the entire period and 0.5294 for the period before there were negative rate prescriptions.
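For readers who want to see the mechanics, the sketch below shows one way an r-squared between a rule’s prescribed rates and the actual FFR could be computed, treating it as the squared correlation between the two series (which coincides with the regression r-squared when one series is regressed on the other). The rate values are hypothetical placeholders, not the series used to produce the figures reported in this paper.

    # Hedged sketch with placeholder data: r-squared between a rule's
    # prescribed policy rates and the actual FFR, computed as the squared
    # Pearson correlation of the two series.
    def r_squared(prescribed, actual):
        n = len(prescribed)
        mean_p = sum(prescribed) / n
        mean_a = sum(actual) / n
        cov = sum((p - mean_p) * (a - mean_a) for p, a in zip(prescribed, actual))
        var_p = sum((p - mean_p) ** 2 for p in prescribed)
        var_a = sum((a - mean_a) ** 2 for a in actual)
        return (cov ** 2) / (var_p * var_a)

    prescribed_ffr = [5.0, 4.2, 2.1, 0.3, -0.5, 0.1, 0.8]   # hypothetical rule prescriptions
    actual_ffr     = [5.3, 4.0, 1.9, 0.2,  0.1, 0.4, 1.0]   # hypothetical effective FFR
    print(round(r_squared(prescribed_ffr, actual_ffr), 4))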

References

  • Asso, Pier Francesco, George A. Kahn, and Robert Leeson. 2010. “The Taylor Rule and the
    Practice of Central Banking.” The Federal Reserve Bank of Kansas City Research Working
    Paper No. 10-05.

  • Bernanke, Ben. 2015. “The Taylor Rule: A Benchmark for Monetary Policy?” Accessed October
    10, 2017. https://www.brookings.edu/blog/ben-bernanke/2015/04/28/the-taylor-rule-a-benchmark-for-monetary-policy/.

  • Bernanke, Ben and Frederic Mishkin. 1992. “Central Bank Behavior and the Strategy of
    Monetary Policy: Observations from Six Industrialized Countries.” NBER Macroeconomics,
    7(1), 183-238.

  • Board of Governors of the Federal Reserve System. n.d. “Monetary Policy.” Accessed March 24,
    2018. https://www.federalreserve.gov/monetarypolicy.html.

  • Board of Governors of the Federal Reserve System. 2009. “FOMC Statement.” Accessed April
    9, 2018. https://www.federalreserve.gov/newsevents/pressreleases/monetary20091216a.htm

  • Board of Governors of the Federal Reserve System. 2015. “How Does Monetary Policy
    Influence Inflation and Employment?” Accessed April 8, 2018.
    https://www.federalreserve.gov/faqs/money_12856.htm

  • Bullard, James. 2012. “Price Level Targeting: The Fed Has It About Right.” Presentation to the
    Economic Club of Memphis.

  • Bullard, James. 2016. “A Tale of Two Narratives.” Presentation to the St. Louis Gateway
    Chapter of the National Association for Business Economics (NABE).

  • Bullard, James. 2017. “An Illustrative Calculation of r† with Policy Implications.” Presentation
    at the Central Bank Forecasting Conference.

  • Bullard, James. 2018. “R-Star Wars: The Phantom Menace.” Presentation at the 34th Annual
    National Association for Business Economics (NABE) Economic Policy Conference.

  • Buol, Jason J. and Mark D. Vaughan. 2003. “Rules vs. Discretion: The Wrong Choice Could
    Open the Floodgates.” Accessed April 10, 2018.
    https://www.stlouisfed.org/Publications/Regional-Economist/January-2003/Rules-vs-Discretion-The-Wrong-Choice-Could-Open-the-Floodgates

  • Del Negro, Marco, Domenico Giannone, Marc P. Giannoni and Andrea Tambalotti. 2017.
    “Safety, Liquidity and the Natural Rate of Interest,” Brookings Papers on Economic Activity.
    235-303.

  • The Economist. 2015. “What is Quantitative Easing?” Accessed March 26, 2018.
    https://www.economist.com/blogs/economist-explains/2015/03/economist-explains-5

  • Federal Open Market Committee. 1997. Transcripts of meetings. Accessed March 20, 2018.
    https://www.federalreserve.gov/monetarypolicy/fomc_historical.htm

  • Federal Reserve Bank of San Francisco. 2005. “What is Neutral Monetary Policy?” Accessed
    March 25, 2018. https://www.frbsf.org/education/publications/doctor-econ/2005/april/neutral-monetary-policy/

  • Federal Reserve Bank of St. Louis. n.d. “A Closer Look at Open Market Operations.” Accessed
    April 8, 2018. https://www.stlouisfed.org/in-plain-english/a-closer-look-at-open-market-operations

  • Hamilton, James D., Ethan S. Harris, Jan Hatzius, and Kenneth D. West. 2015. “The Equilibrium
    Real Funds Rate: Past, Present, and Future.” The Hutchins Center on Fiscal and Monetary
    Policy at Brookings Working Paper No. 16.

  • Judd, John P. and Glenn D. Rudebusch. 1998. “Taylor's Rule and the Fed: 1970-1997.” FRBSF
    Economic Review, 3(1), 3-16.

  • Kydland, Finn E. and Edward C. Prescott. 1977. “Rules Rather than Discretion: The
    Inconsistency of Optimal Plans.” Journal of Political Economy, 85(1), 473-491.

  • Mishkin, Frederic S. 2007. “Monetary Policy and the Dual Mandate.” Presentation at
    Bridgewater College, Bridgewater, Virginia.

  • Nikolsko-Rzhevskyy, Alex and David H. Papell. 2012. “Taylor’s Rule Versus Taylor Rules.”
    International Finance, 16(1), 71-93.

  • Orphanides, Athanasios. 2001. “Monetary Policy Rules Based on Real-Time Data,” American
    Economic Review, 91(4), 964-985.

  • Reinhart, Carmen M. and Kenneth S. Rogoff. 2009. This Time is Different. Princeton University
    Press.

  • Taylor, John. 1993. “Discretion versus Policy Rules in Practice,” Carnegie-Rochester
    Conference Series on Public Policy, 39(1), 195-214.

  • Taylor, John. 2015. “Taylor on Bernanke: Monetary Rules Work Better Than ‘Constrained
    Discretion’”. Accessed October 10, 2017. https://www.wsj.com/articles/taylor-on-bernanke-monetary-rules-work-better-than-constrained-discretion-1430607377.

  • Templeton Financial Services. n.d. “The Taylor Rule.” Accessed April 10, 2018.
    https://www.tfsformunis.com/the-taylor-rule/

  • Wicksell, Knut. 1898. “Interest and Prices: A Study of the Causes Regulating the Value of
    Money.” Translated by R.F. Kahn (1936). London: Macmillan.

  • Williamson, Stephen D. 2017. “Quantitative Easing: How Well Does This Tool Work?”
    Accessed April 8, 2018. https://www.stlouisfed.org/publications/regional-economist/third-quarter-2017/quantitative-easing-how-well-does-this-tool-work

Exhibits

Exhibits 1 through 12 and Appendix A are provided as charts (image files) accompanying the original document.