What Would the "Financial Instability" Argument Look Like For Any Other Industry?

May 7, 2013Mike Konczal

It’s becoming a surprisingly influential argument given that it hasn’t been well presented or argued, much less vetted and challenged. What is it? The argument that we should raise interest rates or otherwise contract monetary policy in order to preserve “financial stability.”

Brad DeLong says critiquing this idea is “PRIORITY #1 RED FLAG OMEGA,” while Nick Rowe argues that this idea “may be influential. And that idea is horribly wrong.”

Here’s one version of the argument, from a recent speech by Narayana Kocherlakota:

“On the one hand, raising the real interest rate will definitely lead to lower employment and prices. On the other hand, raising the real interest rate may reduce the risk of a financial crisis—a crisis which could give rise to a much larger fall in employment and prices. Thus, the Committee has to weigh the certainty of a costly deviation from its dual mandate objectives against the benefit of reducing the probability of an even larger deviation from those objectives.”

Tim Duy and Ryan Avent commented on this speech, which essentially argued that raising rates would certainly cause a problem, but that leaving rates at their current value could cause even bigger problems.
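
Kocherlakota's framing is an expected-value comparison: a certain, modest loss from tightening versus a small probability of a much larger loss from a crisis. As a toy sketch with entirely made-up numbers (none of these figures come from the speech):

```python
# Toy expected-cost comparison behind the "financial stability" tradeoff.
# All figures are hypothetical, chosen only to show the structure of the argument.

certain_loss_from_hiking = 2.0   # % of GDP lost for sure if rates rise now

crisis_probability = 0.05        # assumed chance of a crisis if rates stay low
crisis_loss = 10.0               # % of GDP lost if that crisis happens

expected_loss_from_waiting = crisis_probability * crisis_loss

print(f"Expected loss from hiking now: {certain_loss_from_hiking:.1f}% of GDP")
print(f"Expected loss from waiting:    {expected_loss_from_waiting:.1f}% of GDP")
```

With these particular numbers the certain cost of hiking (2 percent) dwarfs the expected cost of waiting (0.5 percent), which is why everything turns on how large you believe the crisis probability and crisis loss really are.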

Let’s be clear on the terms: should we risk another immediate recession (“lower employment and prices”) to preserve a thing called “financial stability?” Five immediate problems jump out from this argument. Nick Rowe emphasized tackling this on an abstract level; I’m going to focus on practical stuff.

1. This whole story seems predicated on the idea that expansionary monetary policy was behind the housing bubble and collapse. I think there’s very little hard evidence for that. Also, the basic stories surrounding interest rates, as JW Mason mentioned in a guest post here, being too low for too long have some serious contradictions. (For instance, if the problem is a “global savings glut,” expansionary monetary policy should push against that by reducing capital inflows.) So if the idea is to risk another recession in order to not repeat the 2000s, we should work with a clearer story about what went wrong in the housing bubble.

2. The term “reaching for yield” is often deployed in these arguments. Low rates mean that traders have to take on bigger risks in order to earn a rate of return that is acceptable. (Is there a minimum level of profit that finance must make on lending? And should we throw people out of work to make sure they make it? I hadn’t heard of that, but it sounds like a nice gig.)

But either way, it isn’t clear that low rates drive reaching for yield. What matters for banks is the spread between lending and funding rates. And as JW Mason writes in another important post, banks’ funding costs are also affected by the policy rate. “Looking at the most recent cycle, the decline in the Fed Funds rate from around 5 percent in 2006-2007 to the zero of today has been associated with a 2.5 point fall in bank funding costs but only a 1.5 point fall in bank lending rates -- in other words, a one point increase in spreads.” If anything, the story is the opposite of what people are arguing.
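
The arithmetic behind Mason's quote is worth making explicit; here is a two-line sketch of the spread calculation, using the numbers he cites:

```python
# Change in bank spreads over the most recent cycle, per Mason's figures.
funding_change = -2.5   # percentage-point fall in bank funding costs
lending_change = -1.5   # percentage-point fall in bank lending rates

# The spread is the lending rate minus the funding rate, so its change is:
spread_change = lending_change - funding_change
print(f"Change in spread: {spread_change:+.1f} percentage points")
```

The spread widened by a full point even as the policy rate fell to zero, which is the opposite of the "low rates compress returns" story.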

3. The best empirical evidence I’ve seen for understanding the “reach for yield” phenomenon comes from Bo Becker and Victoria Ivashina of Harvard University, “Reaching for Yield in the Bond Market.” Here’s a Voxeu summary, and here’s the research pdf. They look at the holdings of insurance companies, and find that, “conditional on credit ratings, insurance portfolios are systematically biased toward higher yield, higher CDS bonds...It is also more pronounced for firms with poor corporate governance and for which regulatory capital requirement is more binding.”

This comes across as portfolio managers juking and manipulating capital requirements and the ratings agencies. The authors note that this is a major agency problem for insurance companies. The effect was strongest at the peak of the cycle but went away during the recession.

Now if I told you we should keep the economy in a permanent recession because senior managers at insurance companies aren’t good at their basic job of monitoring mid-level portfolio managers, you’d probably think I was crazy. And I would be. Especially since “reach for yield” seems tied less to monetary policy and more to gaming ratings-based capital requirements.

4. If this is a serious problem, people should be talking about more serious forms of financial regulation. As a starter platform, we can raise capital requirements. Much of this “reach for yield” looks to be regulatory arbitrage on ratings-based capital requirements, so, say, tripling the simple leverage requirement would reduce the importance of the ratings agencies in capital requirements.

This is why a more coherent story about what we are concerned about when we think about “financial stability” would help. If we need to make the financial system less complex and less prone to abusive practices, requiring the parties to a derivatives contract to hold a stake in the underlying asset would do a lot. Are we worried about contagion? In that case, force banks to hold more capital as well as convertible instruments. About bad debts holding back the economy? Then reform the bankruptcy code, dropping the 2005 “reforms.” Some people are demanding more jail sentences, not only for the benefit of the public but also for boards and shareholders who can’t keep their workers in line.

5. Because imagine this argument in the context of any other industry. Right now the interest rate is above where it needs to be to guarantee full employment. People are arguing that we should raise rates because banks might make loans, even though that is what the financial sector is supposed to do. (As Daniel Davies notes, “If the Federal Reserve sets out on a policy of lowering interest rates in order to encourage banks to make loans to the real economy, it is a bit weird for someone's main critique of the policy to be that it is encouraging banks to make loans.”)

Now imagine the government was going to take some land it owns containing oil and sell it to an oil company. Could you imagine someone saying, “We shouldn’t do this, because we can’t assume that oil companies are capable of drilling, refining and selling that oil” as a valid concern? Not concerns about random spills or global warming? But instead expressing concerns about whether the industry is capable of executing its most basic function.

Or take immigration. Imagine if a common response to letting a large number of high-skilled immigrants into the country were “but we can’t assume that the labor market is capable of matching people who have skills and want to work with employers who are willing to pay to get jobs done.” It’s tantamount to saying, “we shouldn’t assume that the labor market can do its basic function.”

It’s hard not to read the financial stability arguments as saying “look, we can’t trust the financial sector to accomplish its most basic goals.” If true, that’s a very significant problem that should cause everyone a lot of concern. It should make us ask why we even have a financial system if we can’t expect it to function, or function only by putting the entire economy at risk.

Follow or contact the Rortybomb blog:

Monetary Policy's Jurassic Park Problem at the Zero Lower Bound

May 3, 2013Mike Konczal

Remember those scenes in Jurassic Park where everyone has to stand really still? The T-Rex finds the humans, but its dinosaur brain only senses movements, so as long as nobody moves an inch, they are safe. But if they even twitch, they are going to be ripped to shreds. Those scenes are great.

Last weekend, I wrote a piece for Wonkblog on monetary policy at the zero lower bound in the face of austerity, which got a great number of responses [1]. I want to respond to two points.

1. One thing I wanted to engage on, and a point I hope gets some additional comments in 2013, is that we had a major shift in “expectations” management at the zero lower bound with the Evans Rule. I think that this form of expectation management is a trial run for more serious moves like using a higher inflation target or a nominal GDP target to gain traction at the zero lower bound. So how has it gone, and how would we know?

I had thought a good measure of its success was whether short-term inflation would approach the Fed’s 2 percent target, and whether or not it would go past it. Other people, notably Matt O’Brien, had already flagged that 2 percent appeared to be a ceiling even with the Evans Rule in place.

Some seem to be abandoning the Evans framework entirely, such as Ryan Avent writing this week that “the Evans rule is consistent with prolonged, Japanese style stagnation.” [2] Others argue that consistent nominal GDP growth in the face of austerity is sufficient evidence to show that the Evans Rule worked.

I think this needs more exploration. We don’t often get a serious shift in expectations. That’s why I’m not sure how much the “gas pedal” from David Beckworth’s response is at play. Beckworth notes that the purchases in QE3 don’t automatically react to turbulence in the economy, and hopes that the Federal Reserve will buy more if the economy gets weaker. But if the expectations of where the Fed wants to end up are the real limiting factor for a robust recovery, why would a small change in purchases matter? This is partially why Greg Ip said the FOMC statement this week was “asymmetric,” even though the Fed said it might “increase or reduce” purchases: an increase is a small move, but a reduction is a genuine retrenchment.

2. Another point is that expectations are important. I want to push back on Ryan Avent’s implication that I “knew what conclusions were going to be drawn before the experiment was ever run.” I actually turned more negative about the December announcement while researching the post. I spoke to several economists who supported the Evans Rule at the time to see where they stood months later. I heard from many that they were excited about the proposal at first, but thought the policy was undermined significantly by FOMC members’ comments in March.

What happened in March? As the Washington Post’s Ylan Q. Mui wrote in March, the Fed seemed split into two camps: “Hawks, who want to curtail quantitative easing programs because of the risks they create. And doves, who see evidence that they’re working well enough at stimulating growth that they might soon no longer be needed.” The Fed’s March minutes noted “that continued solid improvement in the outlook for the labor market could prompt the Committee to slow the pace of purchases beginning at some point over the next several meetings." Several economists I spoke with thought that this hurt the expansionary impact of monetary policy.

Watch this again in slow-motion: Aggressive monetary policy begins to expand the economy, or at least gives the impression the economy is expanding. Central bankers argue that this means that they can pull back quicker than expected. (They don’t pull back; they just say they will.) The expectations for future policy then collapse, because central bankers signal that it will end too soon. The economy then weakens, going back to where it started.

This is monetary policy in the style of those T-Rex scenes in Jurassic Park. The central bank says, “we are committing to extraordinary action,” and then everyone has to remain incredibly still for a long time. Just a random dovish member of the FOMC saying, “hey, maybe it’s working so well we should consider ending it early” is enough for the dinosaurs to eat everyone, which is to say, for the policy’s effectiveness in shaping expectations to collapse.

If you believe this is a serious problem for monetary policy, well, this is precisely the time inconsistency problem Krugman identified in the late 1990s for Japan. The neutrality of money will cause an expansion to push up either prices or output, provided markets believe that it is permanent and that the central bank won’t immediately rush to stabilize prices the moment it gets a chance. And if the comments in March show that central banks aren’t going to “credibly promise to be irresponsible” with the Evans Rule, how will they do it with 4 percent inflation?

Note that four months after the stimulus was passed, no Democrats would stand up and defend it. Yet the stimulus was carried out without a problem. Four months after the Evans Rule, it looked like Bernanke’s coalition was weakening, and that has major implications. The Wonkblog piece I wrote notes that the next step will have to be an explicit, permanent, new target. That would get around these issues about how permanent the monetary expansion will be. But if there’s barely enough support for the Evans Rule, it makes me worried we won’t get there anytime soon.

[1] Responses include: Scott Sumner, Matt Yglesias, Paul Krugman, Reihan Salam, Ryan Avent, David Beckworth, Uneasy Money, Ramesh Ponnuru, southofthe49th, as well as a communist anarchist critique at pogoprinciple which notes that my “post-Fordist national fascist state fiscal policy” is exhausted. And that while “Keynesians are playing checkers, the monetarists are playing three dimensional chess.” Hmmm.

[2] If the Evans Rule was a bust from the get-go, was all that 2012 energy put into trying to find clever ways of explaining “Delphic” versus “Odyssean” guidance language to a general audience a waste of time? Boo.


Reinhart-Rogoff a Week Later: Why Does This Matter?

Apr 24, 2013Mike Konczal

Retreat!

Well this is progress. We are seeing distancing by conservative writers on the Reinhart/Rogoff thesis. In February, Douglas Holtz-Eakin wrote, “The debt hurts the economy already. The canonical work of Carmen Reinhart and Kenneth Rogoff and its successors carry a clear message: countries that have gross government debt in excess of 90% of Gross Domestic Product (GDP) are in the debt danger zone. Entering the zone means slower economic growth. Granted, the research is not yet robust enough to say exactly when and how a crisis will engulf the US, but there is no reason to believe that America is somehow immune." (h/t QZ.)

Today, Holtz-Eakin writes about Reinhart and Rogoff in National Review, but drops the "canonical" status. Now they are just two random people with some common sense the left is beating up. "In order to distract from the dismal state of analytic and actual economic affairs, the latest tactic is to blame...two researchers, Carmen Reinhardt and Kenneth Rogoff, who made the reasonable observation that ever-larger amounts of debt must eventually be associated with bad economic news."

That's not actually what they said, and if you read Holtz-Eakin in February, Reinhart-Rogoff was sufficient evidence to enact the specific plans he wants. Now there's no defense of the "danger zone" argument; just the idea that the stimulus failed. Retreat!

This is getting a bigger audience. (If you haven't seen The Colbert Report on the Reinhart/Rogoff issue, it's fantastic.) But going forward, plan beats no plan. And a critique isn't a plan. So what should we conclude about Reinhart-Rogoff a week later, now that the critique seems to have won? How should the government approach the debt?

Cliffs and Tradeoffs

One thing about the "cliff" metaphor is that there's no tradeoff that would make it acceptable. If you are driving, there are all kinds of tradeoffs you can make with your route, but you'd never agree to one that has you driving off a cliff. There were other ways of describing this scenario, from the technical "nonlinearities" to Holtz-Eakin's "danger zone" of just a few months ago.

With the danger zone metaphor now out of play, perhaps economists can see the relevant tradeoffs more clearly. Reinhart-Rogoff are left with a small negative correlation between debt and growth, one that is likely driven by low growth rather than by high debt. And despite what you've heard, there's no literature that establishes causation in the other direction.

But let's say they found it. Well, what's the relevant tradeoff? If there's even a basic fiscal multiplier at work, the upside more than compensates for the downside. As Brad DeLong notes, if you consider a multiplier of 1.5 and a marginal tax share of 1/3, the small correlation people are finding - DeLong uses 0.006 percent from an in-house estimate - is more than canceled. Spending 2 percent more of GDP causes a bump of 3 percent of GDP, while debt goes up 1 percent of GDP. As DeLong notes, "3% higher GDP this year and slower growth that leads to GDP lower by 0.06% in a decade. And this is supposed to be an argument against expansionary fiscal policy right now?"
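
DeLong's back-of-the-envelope can be reproduced in a few lines. The numbers below follow this paragraph's summary (a multiplier of 1.5, a marginal tax share of 1/3, and a 0.006 percent annual growth drag per point of debt); treat them as illustrative rather than as anyone's official estimate:

```python
# Reproducing DeLong's tradeoff arithmetic, as summarized above.
spending = 2.0          # extra spending, % of GDP
multiplier = 1.5        # assumed fiscal multiplier
tax_share = 1.0 / 3.0   # marginal tax take on the induced output

gdp_boost = spending * multiplier             # output gain this year
revenue_recouped = gdp_boost * tax_share      # taxes clawed back from the boost
added_debt = spending - revenue_recouped      # net new debt, % of GDP

growth_drag = 0.006     # annual growth penalty (%) per point of debt-to-GDP
gdp_cost_after_decade = added_debt * growth_drag * 10

print(f"GDP boost this year:     {gdp_boost:.1f}% of GDP")
print(f"Added debt:              {added_debt:.1f}% of GDP")
print(f"GDP cost after a decade: {gdp_cost_after_decade:.2f}%")
```

The output gain is 3 percent of GDP today against 1 percent of GDP of new debt, whose growth drag compounds to only 0.06 percent of GDP a decade out.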

And as the IMF noted recently, "Studies suggest that fiscal multipliers are currently high in many advanced economies. One important implication is that fiscal tightening could raise the debt ratio in the short term, as fiscal gains are partly wiped out by the decline in output." Now is the time to move away from austerity and towards more expansion. There are costs (though debt servicing is at a historic low), but the benefits outweigh them.
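
The IMF's short-run point follows directly from the same multiplier arithmetic. A sketch with illustrative numbers (the 90 percent starting ratio and 2 percent cut are my choices, not the IMF's):

```python
# How austerity can raise the debt-to-GDP ratio when the multiplier is high.
debt, gdp = 90.0, 100.0   # starting position: debt at 90% of GDP
tightening = 2.0          # spending cut, % of GDP
multiplier = 1.5          # assumed (high) fiscal multiplier
tax_share = 1.0 / 3.0     # revenue lost as output falls

output_loss = tightening * multiplier
new_gdp = gdp - output_loss
# The cut lowers debt, but lost tax revenue claws back part of the savings.
new_debt = debt - tightening + output_loss * tax_share

print(f"Debt ratio before: {debt / gdp:.1%}")
print(f"Debt ratio after:  {new_debt / new_gdp:.1%}")
```

Here the ratio rises from 90.0 percent to roughly 91.8 percent: the denominator shrinks faster than the numerator falls, which is exactly the fiscal-gains-wiped-out dynamic the IMF describes.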
 
Right now people are debating what level of debt-to-GDP we should level out at and how quickly that debt should begin to come down. There's also the debt ceiling battle coming at the end of the summer. This new information will influence all these conversations.
 
Was it Important?
 
Meanwhile, Ryan Avent at The Economist's Free Exchange writes about Reinhart-Rogoff here. To address one of his points, Avent thinks that the Reinhart-Rogoff cliff results are overplayed as something that actually impacted policy. This is always a tricky question to answer, but Reinhart-Rogoff certainly dominated the sensible, mainstream conversation over the deficit and was a favorite go-to for conservatives in particular. I also think it was popular among journalists, because it was a single bright-line number that didn't seem to require complicated modeling. Media Matters put together this video of people discussing the Reinhart-Rogoff cutoff:

(Bonus fun: in the video, at the 1m20s mark, Niall Ferguson refers to the 90 percent result as "the law of finance.")

I think the ideas matter. (Why else would we do this?) I think it's important to understand this revelation in light of other players moving against austerity, including both the IMF and the financial industry. As people reposition themselves, understanding that one of the core old ideas is now out of play allows a different reconfiguration of power. Also, it's worth repeating, it's becoming harder to pretend that austerity hasn't failed. It didn't even achieve its actual goal, which was to reduce the debt-to-GDP ratios of the countries being targeted.

Citizens across the world who were normally indifferent are realizing that they were sold a bad bag of goods when it came to austerity and belt-tightening. They are now trying to figure out what happened, and how things could be done differently. As these are such critical issues, this examination is important. It's great we are having it.

Follow or contact the Rortybomb blog:

  

 

Retreat!

Well, this is progress. We are seeing distancing by conservative writers on the Reinhart/Rogoff thesis. In February, Douglas Holtz-Eakin wrote, “The debt hurts the economy already. The canonical work of Carmen Reinhart and Kenneth Rogoff and its successors carry a clear message: countries that have gross government debt in excess of 90% of Gross Domestic Product (GDP) are in the debt danger zone. Entering the zone means slower economic growth. Granted, the research is not yet robust enough to say exactly when and how a crisis will engulf the US, but there is no reason to believe that America is somehow immune." (h/t QZ.)

Today, Holtz-Eakin writes about Reinhart and Rogoff in National Review, but drops the "canonical" status. Now they are just two random people with some common sense the left is beating up. "In order to distract from the dismal state of analytic and actual economic affairs, the latest tactic is to blame...two researchers, Carmen Reinhardt and Kenneth Rogoff, who made the reasonable observation that ever-larger amounts of debt must eventually be associated with bad economic news."

That's not actually what they said, and if you read Holtz-Eakin in February, Reinhart-Rogoff was sufficient evidence to enact the specific plans he wanted. Now there's no defense of the "danger zone" argument; just the idea that the stimulus failed. Retreat!

This is getting a bigger audience. (If you haven't seen The Colbert Report on the Reinhart/Rogoff issue, it's fantastic.) But going forward, plan beats no plan. And a critique isn't a plan. So what should we conclude about Reinhart-Rogoff a week later, now that the critique seems to have won? How should the government approach the debt?

Cliffs and Tradeoffs

One thing about the "cliff" metaphor is that there's no tradeoff that could make it acceptable. If you are driving, there are all kinds of tradeoffs you make with your route, but you'd never agree to one that has you driving off a cliff. There were other ways of describing this scenario, either the technical "nonlinearities" or Holtz-Eakin's "danger zone" from just a few months ago.

With the danger zone metaphor now out of play, perhaps economists can see the relevant tradeoffs more clearly. Reinhart-Rogoff are left with a small negative relationship between debt and growth, one that is likely driven by low growth rather than high debt. And despite what you've heard, there's no literature that shows the causation running in the other direction.

But let's say they found it. Well, what's the relevant tradeoff? If there's even a basic fiscal multiplier at work, the upside more than compensates for the downside. As Brad DeLong notes, if you assume a multiplier of 1.5 and a marginal tax share of 1/3, the small correlation people are finding (DeLong uses 0.006 percent from an in-house estimate) is more than canceled. Spending 2 percent more causes a bump of 3 percent of GDP, while debt goes up 1 percent of GDP. As DeLong notes, "3% higher GDP this year and slower growth that leads to GDP lower by 0.06% in a decade. And this is supposed to be an argument against expansionary fiscal policy right now?"
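DeLong's arithmetic can be sketched in a few lines. This is a back-of-the-envelope reading of the numbers above; interpreting the 0.006 figure as an annual growth penalty per point of extra debt-to-GDP is my assumption, chosen so the totals match the quoted 3 percent and 0.06 percent.

```python
# Back-of-the-envelope version of DeLong's tradeoff arithmetic.
multiplier = 1.5            # assumed fiscal multiplier (from the post)
tax_share = 1 / 3           # marginal tax share (from the post)
spending_boost = 2.0        # extra spending, in percent of GDP

gdp_boost = multiplier * spending_boost            # GDP up 3.0% this year
revenue_recouped = tax_share * gdp_boost           # 1.0% of GDP back in taxes
debt_increase = spending_boost - revenue_recouped  # net new debt: 1.0% of GDP

# Assumed reading: 0.006 points of annual growth lost per point of debt-to-GDP.
growth_penalty_per_year = 0.006 * debt_increase
gdp_lost_in_decade = growth_penalty_per_year * 10  # matches DeLong's 0.06%

print(gdp_boost, debt_increase, round(gdp_lost_in_decade, 3))
```

On these assumptions the 3 percent gain this year dwarfs the 0.06 percent loss a decade out, which is the whole point of the quote.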

And as the IMF noted recently, "Studies suggest that fiscal multipliers are currently high in many advanced economies. One important implication is that fiscal tightening could raise the debt ratio in the short term, as fiscal gains are partly wiped out by the decline in output." Now is the time to move away from austerity and towards more expansion. There are costs (though debt servicing is at a historic low), but the benefits outweigh them.
 
Right now people are debating what level of debt-to-GDP we should level out at and how quickly that debt should begin to come down. There's also the debt ceiling battle coming at the end of the summer. This new information will influence all these conversations.
 
Was it Important?
 
Meanwhile, Ryan Avent at The Economist's Free Exchange writes about Reinhart-Rogoff here. To address one of his points: Avent thinks that the Reinhart-Rogoff cliff results are overplayed as something that actually impacted policy. This is always a tricky question to answer, but Reinhart-Rogoff certainly dominated the sensible, mainstream conversation over the deficit and was a favorite go-to for conservatives in particular. I also think it was popular among journalists because it was a straight-line number that supposedly required no complicated modeling. Media Matters put together this video of people discussing the Reinhart-Rogoff cutoff:

(Bonus fun: in the video, at the 1m20s mark, Niall Ferguson refers to the 90 percent result as "the law of finance.")

I think the ideas matter. (Why else would we do this?) I think it's important to understand this revelation in light of other players moving against austerity, including both the IMF and the financial industry. As people reposition themselves, understanding that one of the core old ideas is now out of play allows a different reconfiguration of power. Also, it's worth repeating that it's becoming harder to pretend austerity hasn't failed. It didn't even accomplish its actual goal, which was to reduce the debt-to-GDP ratios of the countries being targeted.

Citizens across the world who were normally indifferent are realizing that they were sold a bill of goods when it came to austerity and belt-tightening. They are now trying to figure out what happened, and how things could be done differently. As these are such critical issues, this examination is important. It's great we are having it.


What's the Best Way to Help the Long-Term Unemployed? Full Employment.

Apr 24, 2013Mike Konczal

What's the best way to help the long-term unemployed? There's new concern about how difficult it is for the long-term unemployed to find jobs in light of an interesting study by Rand Ghayad, a visiting scholar at the Boston Fed and PhD candidate. Ghayad sent out resumes that were identical except for how long the candidate was unemployed, and the longer they were unemployed, the less likely it was they would get called back. Matthew O'Brien has a great writeup of the study here, and there are additional thoughts from Megan McArdle, Paul Krugman, Felix Salmon, and Matt Yglesias.

The impact of long-term unemployment on human lives is very real, and I think the government should be combating it using every tool it has. However, I want to push back on a few of the economic ideas that tend to hover in the background of these discussions; specifically, the idea that we should consider the long-term unemployed uniquely in trouble in this economy. Because, based on my interpretation of the evidence, the best approach to handling this problem is to aim for full employment.

It's well known that it is harder for those who have been out of work the longest to find jobs. It would be weird if Ghayad hadn't found that result. There is a large debate in the literature about whether this is driven by employers or job candidates, and Ghayad provides a very useful study finding that employers are a key part here.

But let's look at the likelihood of finding a job in three different economic scenarios (2000, 2007, and 2012) by duration of unemployment:

But notice that when the economy is much stronger, as it was in 2000 when unemployment averaged 4 percent, the rate at which the long-term unemployed find jobs jumps up. Let's zoom in on the last category, the job-finding rate of those who have been searching for a job for 53 weeks or longer, and chart it back to 1995. (Since the data, provided by the BLS, is not seasonally adjusted, the number here is a 12-month rolling average.)
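The smoothing step described above is simple to reproduce. Here is a minimal sketch of a 12-month trailing average; the monthly rates below are invented for illustration and are not the actual BLS series:

```python
# A 12-month trailing average, as used to smooth the (not seasonally
# adjusted) job-finding-rate series. The monthly rates below are invented
# for illustration; they are not the actual BLS data.
def rolling_mean(series, window=12):
    """Trailing moving average; None until a full window is available."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window : i + 1]) / window)
    return out

monthly_rate = [8, 9, 7, 8, 10, 9, 8, 7, 9, 10, 8, 9, 11]  # hypothetical, in %
smoothed = rolling_mean(monthly_rate)
print(smoothed)  # first 11 entries are None, then the 12-month averages
```

The trailing window means each plotted point averages away seasonal swings at the cost of lagging turning points by a few months.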

As you can see, it's much easier for the long-term unemployed to find jobs when there's a tight labor market, like there was in the late 1990s. This rate collapses in a recession, and with years of 7+ percent unemployment, it has stayed depressed.

A lot of people are drawing conclusions that something has broken in long-term unemployment based on a previous paper by Rand Ghayad, where he disaggregates the Beveridge Curve by unemployment duration. I've been critical of that paper. I think, strictly speaking, that the disaggregation just tells us that the long-term unemployed have become a larger percentage of the unemployed, which we already knew. Meanwhile, the labor market is depressed for everyone, even the short-term unemployed (also see the SF Fed for more evidence of this). And since the long-term unemployed are currently less likely to drop out of the labor market than in normal times, the dramatic increase in long-term unemployment hasn't turned into the large drop in labor force participation that many worry about.

We should pursue smart policies that target the long-term unemployed. Amy Traub of Demos has done convincing work on why ending credit checks as part of the job interview process would be a good idea. Extending unemployment insurance is also important. But the idea that we should change course away from boosting the general economy strikes me as a bad idea. The long-term unemployed experience the worst impact of a generally weak economy. But it's that weak economy that is doing the damage. If unemployment were actually brought down, which we could do with more expansionary policy, then employers couldn't afford to be so choosy.


Are Student Loans Becoming a Macroeconomic Issue?

Apr 23, 2013Mike Konczal

What's the general economic consensus on the impact of student loans on the household finances of those who hold them? Here's "Student Loans: Do College Students Borrow Too Much—Or Not Enough?" (Christopher Avery and Sarah Turner, 2012), which argues, "[t]here is little evidence to suggest that the average burden of loan repayment relative to income has increased in recent years." Using data from 2004-2009, the authors find that "the mean ratio of monthly payments to income is 10.5 percent" for those in repayment six years after initial enrollment.

They pair that number with a 2006 study by Baum and Schwarz to conclude that two trends cancel each other out: there's rising debt but steady student debt-to-income ratios. How can this happen? It "can be attributed to a combination of rising earnings, declining interest rates, and increased use of extended repayment options." This is how, even though average total undergraduate debt jumped 66 percent to $18,900 between 1997 and 2002, "average monthly payments increased by only 13 percent over these five years. The mean ratio of payments to income actually declined from 11 percent to 9 percent because borrower.”
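For intuition, here is a sketch of the payment-to-income arithmetic behind figures like these. Only the $18,900 debt figure comes from the post; the interest rate, term, and income are illustrative assumptions of mine, not numbers from the study.

```python
# Sketch of the payment-to-income arithmetic. Only the $18,900 debt figure
# comes from the post; the interest rate, term, and income are illustrative
# assumptions, not numbers from the study.
def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

debt = 18_900                                 # average undergrad debt, 2002
payment = monthly_payment(debt, 0.068, 10)    # assumed: 6.8% over 10 years
monthly_income = 2_500                        # assumed early-career income
ratio = payment / monthly_income
print(round(payment, 2), round(ratio, 3))     # roughly $218 a month, ~9% of income
```

Note how the ratio moves: stretch the term or cut the rate and the monthly ratio falls even as total debt rises, which is exactly the mechanism the study describes.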

Let's put this a different way. If you asked economists looking at the data whether student loans could be having a macroeconomic effect, especially through a financial burden on those who hold them, they'd say that the actual share of monthly income going to student loan payments hasn't changed all that much since the 1990s. Borrowers may be making larger lifetime payments, since they'll carry the debts longer, but that's a choice they are making, which could reflect positive or negative developments. Certainly there's no short-term strain. So there aren't any economic consequences worth mentioning when it comes to student loans.

I always thought this approach had problems. First, it only looked at the pre-crisis era, so we couldn't see the impact of student loans once we hit a serious problem. And it relied on rough averages of short-term income aggregates, rather than looking at specific individuals with or without student debt and seeing what kinds of spending, particularly on longer-term durable goods, they do. But since I had no data myself, I never pushed on this very hard. Part of the problem is that the rise of student loans happened relatively quickly, so it's hard for data agencies to adjust their techniques to "see" this debt easily, rather than just lumping it in with "other debts."

That is starting to change. The Federal Reserve Bank of New York is doing some high-end analysis of student loans, and their economists Meta Brown and Sydnee Caldwell have a great post from last week, "Young Student Loan Borrowers Retreat from Housing and Auto Markets." They find that over the past decade, people with student loans were more likely to have a mortgage at age 30 and a car loan at age 25. In the crisis this edge has collapsed:

There's a similar dynamic for car loans.

The researchers argue that two obvious explanations stand out for this collapse. The first is that expected future earnings have fallen for this group, so they are going to spend less. The second is that credit constraints are especially binding, as those with student loans have worse credit scores than those without.

Derek Thompson at The Atlantic Business responds critically, arguing that: (1) cars and mortgages are falling out of favor with young people, so this is likely a secular trend; (2) young people are essentially doing a "debt swap," trading cars and mortgages for education to take advantage of an education premium, with the cars and mortgages to come later; and (3) even if this is a short-term drag on the economy reflecting short-term problems, the education investment will supercharge our economy later.

What should we make of this?

(1) It's possible that there is a secular trend here, with young people not wanting mortgages or cars. But why wouldn't the spread survive? "People with student loans" is a broad category, and it is hard to believe that people moving to become renters in urban cores are driving the entire thing. That the spread between the two groups collapsed right when the crisis hit makes a pure coincidence hard to believe.

(2) As discussed at the beginning, the overall idea in the student loan data literature is that student loans shouldn't have a negative impact on consumption, especially at the national level. The extra cost of servicing the debt is more than balanced out by the extra income earned, even if the length of the debt needs to adjust to meet that. Indeed, there's often a "best investment ever" or "leaving money on the table" aspect to the discussion of higher education and student loans. So if this data holds, it's a major change from the normal way economists understand this.

And the issue of student debt is where the "education premium" argument is going to hit a wall. The college premium is driven just as much by high school wages falling as by college-educated wages increasing, and growth in college wages has slowed in the past decade. So if you have to take on large debt to secure a stagnating college-level income, it suddenly isn't clear that it's such a great deal, even if there's a strictly defined "premium" over the alternative.

(3) It isn't clear that the upswing in people, particularly women, pursuing additional education is behind this collapse in borrowing, as the cutoffs at ages 25 and 30 exclude many people still in school. One might think it simply reflects the collapse of the housing market, but the same pattern shows up in auto loans. It is true that the economy as a whole is deleveraging, but that largely reflects housing and foreclosures.

How much this reverts if we get back to full employment, and whether there's a "swap" that could lead to a better long-term economy, are good questions. But the fact that we even have to put the questions this way shows a change in what economists believed about student loans. No matter what, it shows that education isn't insurance enough against the business cycle.

And I actually see it the other way: right now Ben Bernanke is working overtime to push interest rates to the lowest they've ever been, and he still can't induce borrowing by college-educated young people. Congress also lowered interest rates on new student loans, though too many student loans are out there at high rates given the disinflationary times. If the lower lending isn't the result of institutional issues with credit scores, that means college-educated young people are particularly battered in this economy. And there could be a low-level drag on the economy for the foreseeable future.

If the New York Fed is taking requests, the biggest question I have is how student loans are impacting household formation. Young people are living with their parents longer at a point where an additional million homebuyers would supercharge the economy. Are they living at home because they are unemployed, or because they are un(der)employed and have student loans? If it's the second, then there's definitely a serious drag on the economy.

But the real issue revealed by this study is that this stuff is important. It is showing up in national data; the people arguing that student loans simply disappear under higher earnings now have a macroeconomic issue to deal with.


Guest Post: The Time Series of High Debt and Growth in Italy, Japan, and the United States

Apr 22, 2013Deepankar Basu

Mike Konczal here. In light of the collapse of the argument for a "cliff" in debt-to-GDP ratio, the most pressing issue to figure out is what to make of any minor relationship between debt and GDP. Which way does the causation work? Arin Dube wrote about this last week. Today, Deepankar Basu, assistant professor of economics at the University of Massachusetts-Amherst, takes a deep dive into this data using time series methods. Though this will involve some complicated techniques and charts, this work is crucial for understanding the current situation. I hope you check it out!

Public Debt and Economic Growth in the Postwar U.S., Italian and Japanese Economies

Deepankar Basu

A recent paper by Thomas Herndon, Michael Ash, and Robert Pollin (HAP) has effectively refuted one of the most frequently cited statistics of recent years: that countries with public debt above 90 percent of GDP experience sharp drop-offs in economic growth. This “90 percent” result was put into circulation in 2010 by a paper written by Carmen Reinhart and Kenneth Rogoff (RR) and was heavily circulated by conservative policymakers, commentators, and economists.

I think the most important issue in the subsequent discussion in blogs and newspaper op-eds (for a quick rundown see here) is the question of causality. Does the negative correlation between public debt and economic growth rest on high levels of public debt causing low economic growth, as RR and other “austerians” claim (we borrow this term from Jim Crotty)? Or is the causation the reverse of what the austerians say, meaning low economic growth causes higher public debt? Using the HAP data set for 20 OECD countries, economist Arindrajit Dube of University of Massachusetts-Amherst has shown that (a) the negative relationship between public debt and growth is much stronger at low levels of growth, and (b) the association between past economic growth and current debt levels is much stronger than the association between current levels of debt and future economic growth. This is strong evidence for the second causation argument, where low growth leads to high debt.
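Dube's lead-lag comparison is easy to illustrate on synthetic data. In the sketch below, the data are simulated so that low growth causes high debt by construction; the backward-looking correlation then comes out strongly negative while the forward-looking one sits near zero. All numbers are made up and only demonstrate the method, not the HAP data set:

```python
# Simulated illustration of the lead-lag test. By construction, debt this
# year responds to LAST year's growth (low growth -> high debt), and growth
# does not respond to debt. All numbers are made up.
import random

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
growth = [random.gauss(2, 1) for _ in range(200)]
# debt[i] is the debt ratio in year i+1; it depends on growth in year i.
debt = [60 - 5 * growth[t - 1] + random.gauss(0, 1) for t in range(1, 200)]

backward = corr(growth[:-1], debt)       # past growth vs. current debt
forward = corr(debt[:-1], growth[2:])    # current debt vs. future growth
print(backward, forward)  # strongly negative vs. near zero
```

When causation runs from growth to debt, the backward association dominates and the forward one washes out, which is the pattern Dube reports in the actual data.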

While Dube worked in a single-equation framework with a panel data set, in this article I change gears and ask a time series question instead: what useful information, if any, can one extract about the relationship between public debt and economic growth from historical data for individual countries? In particular, I ask the following question: can data on the historical coevolution of public debt and economic growth in the postwar U.S., Italian, and Japanese economies tell us anything useful about possible causal relationships between these two variables? To briefly summarize the results, I find that the time series pattern of the dynamic relationship between public debt and economic growth in the postwar U.S., Italian, and Japanese economies is consistent with low growth causing high debt rather than high debt causing low growth.

Why I Chose the U.S., Italy, and Japan

As reported in Table A-1 of the HAP paper, there are only 10 countries in the sample of advanced economies from 1946-2009 that witnessed debt-to-GDP ratios above 90 percent. These countries generally experienced years with debt/GDP above 90 percent consecutively, so they form easily observable episodes. However, in the postwar period very few of these episodes exhibit notably slow growth. The U.S. episode, which came at the start of this period, has already been explained in detail here as being caused by the reduction in government spending due to demobilization from World War II.
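The episode construction described here amounts to grouping consecutive over-threshold years. A minimal sketch, with an invented series standing in for any one country's annual data:

```python
# Grouping consecutive years with debt/GDP above a threshold into episodes.
# The series below is invented purely to illustrate the bookkeeping.
def episodes(years, ratios, threshold=90):
    """Return (first_year, last_year) for each run of above-threshold years."""
    runs, current = [], []
    for year, ratio in zip(years, ratios):
        if ratio > threshold:
            current.append(year)
        elif current:
            runs.append((current[0], current[-1]))
            current = []
    if current:
        runs.append((current[0], current[-1]))
    return runs

years = list(range(1946, 1956))
ratios = [121, 110, 98, 93, 94, 79, 74, 71, 69, 66]  # hypothetical debt/GDP
print(episodes(years, ratios))  # [(1946, 1950)]
```

Once episodes are in hand, one can ask for each whether slow growth preceded or followed the high-debt years, which is the question the rest of the article pursues.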

Mike Konczal here. In light of the collapse of the argument for a "cliff" in debt-to-GDP ratio, the most pressing issue to figure out is what to make of any minor relationship between debt and GDP. Which way does the causation work? Arin Dube wrote about this last week. Today, Deepankar Basu, assistant professor of economics at the University of Massachusetts-Amherst, takes a deep dive into this data using time series methods. Though this will involve some complicated techniques and charts, this work is crucial for understanding the current situation. I hope you check it out!

Public Debt and Economic Growth in the Postwar U.S., Italian and Japanese Economies

Deepankar Basu

A recent paper by Thomas Herndon, Michael Ash, and Robert Pollin (HAP) has effectively refuted one of the most frequently cited statistics of recent years: that countries with public debt above 90 percent of GDP experience sharp drop-offs in economic growth. This “90 percent” result was put into circulation in 2010 by a paper by Carmen Reinhart and Kenneth Rogoff (RR) and was heavily promoted by conservative policymakers, commentators, and economists.

I think the most important issue in the subsequent discussion in blogs and newspaper op-eds (for a quick rundown see here) is the question of causality. Does the negative correlation between public debt and economic growth rest on high levels of public debt causing low economic growth, as RR and other “austerians” claim (we borrow this term from Jim Crotty)? Or is the causation the reverse of what the austerians say, with low economic growth causing higher public debt? Using the HAP data set for 20 OECD countries, economist Arindrajit Dube of the University of Massachusetts-Amherst has shown that (a) the negative relationship between public debt and growth is much stronger at low levels of growth, and (b) the association between past economic growth and current debt levels is much stronger than the association between current levels of debt and future economic growth. This is strong evidence for the second causation argument, where low growth leads to high debt.

While Dube worked in a single-equation framework with a panel data set, in this article I change gears and ask a time series question instead: what useful information, if any, can one extract about the relationship between public debt and economic growth from historical data for individual countries? In particular, I ask the following question: can data on the historical coevolution of public debt and economic growth in the postwar U.S., Italian, and Japanese economies tell us anything useful about possible causal relationships between these two variables? To briefly summarize the results, I find that the time series pattern of the dynamic relationship between public debt and economic growth in the postwar U.S., Italian, and Japanese economies is consistent with low growth causing high debt rather than high debt causing low growth.

Why I Chose the U.S., Italy, and Japan

As reported in Table A-1 of the HAP paper, only 10 countries in the sample of advanced economies from 1946-2009 witnessed debt-to-GDP ratios above 90 percent. These countries generally experienced their years with debt/GDP above 90 percent consecutively, so they form easily observable episodes. However, in the postwar period very few of these episodes exhibit notably slow growth. The slow-growth U.S. episode in the immediate postwar years has already been explained in detail here as the result of the reduction in government spending that accompanied demobilization from World War II.

Other than the U.S., the only two countries with debt-to-GDP above 90 percent and average growth below 2 percent are Italy and Japan, at 1 percent and 0.7 percent respectively. (With the inclusion of the earlier years from 1946-1949, New Zealand’s average growth rises from RR’s reported -7.6 percent to 2.6 percent, so it no longer qualifies as a slow-growth episode.) That is why I chose to focus in this article on the U.S., Italy, and Japan.

For the U.S. economy, federal debt declined from its high value (more than 100 percent of GDP) in the immediate postwar years to its lowest level in the mid-1970s (less than 25 percent of GDP), thereafter increasing till the mid-1990s and falling again over the next decade or so before picking up again with the onset of the global financial and economic crisis in 2007. The growth rate of real GDP has fluctuated a lot in the postwar period, with average values being higher in the two decades after the end of WWII than after the 1980s.

The Italian economy has experienced a different pattern: low levels of public debt till the early 1970s followed by a three-decade-long increase, with contemporary debt levels remaining at historical highs. Japan witnessed a very similar pattern: low levels of public debt till the mid-1970s followed by four decades of steady increase, with contemporary levels of debt hovering at historical highs. In terms of economic growth, both Italy and Japan witnessed a gradual slowdown, even as growth fluctuated at business cycle frequencies, over the entire postwar period. Thus, for all three countries, there is large variation over time in both variables (public debt and economic growth), which can be exploited to investigate their dynamic interrelationships.

To motivate the analysis, in Figures 1.1, 1.2, and 1.3, I give time series plots of public debt and economic growth (year-on-year change in real GDP) for the three economies that I have chosen for this analysis: the U.S. economy between 1946 and 2012, the Italian economy between 1951 and 2009, and the Japanese economy between 1956 and 2009.

FIGURE 1.1 (USA): Time series plots, for the period 1946-2012, of (a) federal debt held by the public as a share of GDP (top panel) and (b) year-on-year change in real GDP (bottom panel). Source: debt data from Table B-78, Economic Report of the President, 2013; growth data from NIPA Table 1.1.1.

FIGURE 1.2 (ITALY): Time series plots, for the period 1951-2009, of (a) public debt as a share of GDP (top panel) and (b) year-on-year change in real GDP (bottom panel). Source: Herndon, Ash, and Pollin (2013).

FIGURE 1.3 (JAPAN): Time series plots, for the period 1956-2009, of (a) public debt as a share of GDP (top panel) and (b) year-on-year change in real GDP (bottom panel). Source: Herndon, Ash, and Pollin (2013).

Why Use a Time Series Framework

Why do I adopt a time series framework? A time series lens allows one to use vector autoregression (VAR) analysis, a popular methodology that is especially suitable for studying rich dynamic interactions among a group of time series variables. The pattern of dynamic interactions (allowing for complex lagged effects) can be nicely summarized through plots of orthogonalized impulse response functions, which trace out the effect of an unexpected change in one variable on the time paths of all the variables in the system (orthogonalizing the errors ensures that the effect of an impulse to one error is not contaminated by cross-correlation with the other errors in the system). In other words, this allows a researcher to address the following question: how would the variables in the VAR evolve over time when impacted by an unexpected change in one of the variables, holding other things constant? The key phrases here are “unexpected change in one of the variables” and “holding other things constant.” How do we interpret these key phrases?

Recall that in a VAR, every variable is explained by its past values and by past values of the other variables in the system. Each equation also has an unexplained part, the random error term. Thus an impulse imparted to the error (i.e., the unexplained part) in one of the equations in the VAR can be understood as an “unexpected change”: a change in the variable that is not explained by its own past values or by past values of the other variables in the VAR. Orthogonalizing the errors, in turn, implies that a change in one error is uncorrelated with changes in the other errors in the system. Hence, when the researcher traces out the impact of an impulse to one error, she can be confident that it is not picking up effects of changes in the other errors. This is a clear advantage over cross-sectional analysis of correlations among variables, where distinguishing the effects of changes in one variable from those of another can be difficult.

In addition, a VAR allows each variable to be endogenous; i.e., it not only allows for lagged but also contemporaneous interaction among the variables. Thus, the researcher is not forced to take an a priori stand on whether a variable is exogenous (or not) as in a single equation estimation framework (where the dependent variable is, by assumption, endogenous, and some of the independent variables are exogenous).

Of course, a VAR will not, by itself, address the issue of causality; one needs to impose additional restrictions to distinguish causality from correlation (i.e., to tackle the so-called identification problem). A common identification strategy is to adopt a “causal ordering” of the variables in the VAR, which is a way to restrict some of the contemporaneous effects among the variables. If a variable is causally prior to another, this means that changes in the second variable cannot have any contemporaneous impacts on the first. In a two-variable vector autoregression (VAR), there are only two possible orderings: the first variable can be assumed to be causally prior to the second, or vice versa.

So, one can use both orderings (instead of taking a stand on which is the correct structural relationship) and see if the shape of the impulse response functions changes with the ordering adopted. If it does not, then the pattern of dynamic interaction captured by the impulse response functions can be thought of as a reasonable approximation of the underlying structural relationships. The point is this: if the impulse response functions display qualitatively similar shapes under both orderings of the variables (and remember there are only two possibilities here), then the dynamic patterns of interaction are independent of the ordering. Either ordering can then be used to address the question: how does the system react to an unexpected change in one variable? This is a common empirical strategy in the time series literature, and as such we adopt it here. (This strategy becomes difficult to implement and interpret when there are more than two variables in the system, in which case theoretically motivated restrictions are imposed to achieve identification.)

Two-Variable VAR Analysis for Individual Countries

To investigate the debt-growth relationship, I estimate a two-variable VAR with an optimal number of lags (where public debt as a share of GDP and year-over-year change in real GDP are the two variables) for each of the three countries separately: the U.S. economy for the period 1946-2012, the Italian economy over 1951-2009, and the Japanese economy over the period 1956-2009. (I choose the “optimal” number of lags using the Akaike Information Criterion.) I find three interesting results.

First, the contemporaneous correlation between the errors in the two equations of the VAR is negative for each of the three countries (-0.56 for the U.S., -0.54 for Italy, and -0.30 for Japan). This suggests that unexpected changes in debt and economic growth move in the opposite direction in each of these countries. This finding is in line with existing results, both of Reinhart-Rogoff and their critics.

Second, I conduct Granger non-causality tests to determine which variable's lags better help in predicting the other. Table 1 summarizes the Granger non-causality test results for the three countries. The first column in Table 1 tests whether debt does not Granger-cause growth; i.e., the null hypothesis that all lags of debt enter the growth equation with zero coefficients. A high p-value indicates that the null hypothesis cannot be rejected; i.e., lags of debt do not help in predicting growth. The entries in the first column are all relatively large, showing that lags of debt do not help in predicting growth. This is true for all three economies, and especially for Italy (which has a p-value of 0.81).

The second column in Table 1 tests for the opposite direction of predictability: it tests whether growth does not Granger-cause debt; i.e., the null hypothesis that all lags of growth enter the debt equation with zero coefficients. A low p-value indicates that the null hypothesis can be strongly rejected; i.e., lags of growth do help in predicting debt. The entries in column 2 are all relatively small, showing that lags of growth strongly help in predicting debt for all three countries (both the U.S. and Italy have p-values of 0, and Japan has a p-value of 0.04).

This finding about Granger non-causality is in line with similar results reported in 2010 by Josh Bivens and John Irons for the U.S. economy. The fact that similar results hold for Italy and Japan, which have witnessed relatively higher levels of public debt in the past few decades, is indeed a strong rebuttal of austerian claims. It demonstrates that low growth leading to (or helping to predict) high debt is more consistent with the time series data than high debt leading to (or helping to predict) low growth. Moreover, this is true not only for the U.S. economy but also for Italy and Japan.

Third, I analyze plots of impulse response functions (IRF) to decipher possible directions of effects running between debt and growth for all three countries for the two possible “orderings” of the variables. Figure 2.1, 2.2, and 2.3 display the orthogonalized IRFs with the first “ordering,” where debt is assumed to be “causally prior” to growth (meaning changes in debt can have a contemporaneous impact on growth but not the other way around). Figure 3.1, 3.2, and 3.3 display the orthogonalized IRFs with the alternative ordering, where growth is assumed to be “causally prior” to debt (meaning changes in growth can have a contemporaneous impact on debt but not the other way around).


FIGURE 2.1. (USA): Orthogonalized impulse response functions using a Cholesky decomposition for a 2 variable VAR (debt and growth) with optimal number of lags (chosen with AIC). The recursive VAR is estimated with annual data for the U.S. economy for the period 1946- 2012 and 90 percent bootstrapped confidence intervals are included in the IRF plots. Ordering: Debt is causally prior to growth.

FIGURE 2.2. (ITALY): Orthogonalized impulse response functions using a Cholesky decomposition for a 2 variable VAR (debt and growth) with optimal number of lags (chosen with AIC). The recursive VAR is estimated with annual data for the Italian economy for the period 1951- 2009 and 90 percent bootstrapped confidence intervals are included in the IRF plots. Ordering: Debt is causally prior to growth.

FIGURE 2.3. (JAPAN): Orthogonalized impulse response functions using a Cholesky decomposition for a 2 variable VAR (debt and growth) with optimal number of lags (chosen with AIC). The recursive VAR is estimated with annual data for the Japanese economy for the period 1956- 2009 and 90 percent bootstrapped confidence intervals are included in the IRF plots. Ordering: Debt is causally prior to growth.

Impulse Response Function: Impact of Debt on Growth

Let us start with the first ordering. In the top panel (right) of Figure 2.1 (USA), a one standard deviation positive impulse to the debt shock (i.e., the error in the equation that predicts debt) reduces growth contemporaneously, but growth returns to zero within a year and stays there after that. In the top (right) panel of Figure 2.2 (ITALY), a similar impulse to the debt shock reduces growth contemporaneously, and growth returns to zero within the next two years and stays there after that (notice that the 90 percent confidence interval includes zero). In the top panel (right) of Figure 2.3 (JAPAN), a one standard deviation impulse to the debt shock reduces growth contemporaneously; growth returns to zero within a year and then drifts down gradually over the next several years (though here, too, the 90 percent confidence interval includes zero).

What story do these pictures tell us? If debt has a contemporaneous effect on growth (but not the other way round), then an unexpected increase in the level of debt in any year (due, for instance, to an increase in the deficit of a government that has given a tax break) will reduce economic growth in that year, but the negative impact will be washed out relatively quickly. The system will return to its original growth path within the next few years. The speed with which the system reverts to its original state is quickest for the U.S., slower for Japan, and slowest for Italy.

FIGURE 3.1. (USA): Orthogonalized impulse response functions using a Cholesky decomposition for a 2 variable VAR (debt and growth) with optimal number of lags (chosen with AIC). The recursive VAR is estimated with annual data for the U.S. economy for the period 1946- 2012 and 90 percent bootstrapped confidence intervals are included in the IRF plots. Ordering: Growth is causally prior to debt.

FIGURE 3.2. (ITALY): Orthogonalized impulse response functions using a Cholesky decomposition for a 2 variable VAR (debt and growth) with optimal number of lags (chosen with AIC). The recursive VAR is estimated with annual data for the Italian economy for the period 1951- 2009 and 90 percent bootstrapped confidence intervals are included in the IRF plots. Ordering: Growth is causally prior to debt.

FIGURE 3.3. (JAPAN): Orthogonalized impulse response functions using a Cholesky decomposition for a 2 variable VAR (debt and growth) with optimal number of lags (chosen with AIC). The recursive VAR is estimated with annual data for the Japanese economy for the period 1956- 2009 and 90 percent bootstrapped confidence intervals are included in the IRF plots. Ordering: Growth is causally prior to debt.

Let us now turn to the second ordering. In the top panel (right) of Figure 3.1 (USA), a one standard deviation impulse to the debt shock has no contemporaneous effect on growth, but there is a positive effect on growth for the next two years. In the top panel (right) of Figure 3.2 (ITALY), a one standard deviation impulse to the debt shock has no contemporaneous effect on growth, and a fluctuating (negative and positive) impact on growth which is not very precisely estimated (the 90 percent confidence interval includes zero). In the top panel (right) of Figure 3.3 (JAPAN), a one standard deviation impulse to the debt shock has no contemporaneous effect on growth, but growth experiences a positive impact for the next three years, after which it starts falling – all of which is estimated pretty imprecisely (the 90 percent confidence interval includes zero).

How should we interpret these pictures? In this case, only Italy displays a negative impact of debt on growth; both Japan and the U.S. show mildly positive impacts of unexpected changes in debt levels (though the effects are estimated pretty imprecisely). Thus, if it were the case that the contemporaneous effect between debt and growth runs from the latter to the former (as the second ordering assumes), then increases in levels of public debt might even have a positive impact on economic growth, as witnessed in the U.S. and Japan. Why might this be the case? This might be reflecting the positive multiplier effect on output growth of a boost to aggregate demand coming from an increase in the government’s deficit. Evidence for the U.S. and Japan suggests that this effect might be non-zero, at least in the short run.

Thus, for all three countries and in both orderings, an unexpected increase in debt in any year does not have any statistically significant negative effect on economic growth in future years. When I allow the contemporaneous effect to run from growth to debt, the short- to medium-term impact is positive for the U.S. and Japan, though the effects are not very precisely estimated. This evidence is contrary to RR’s claim that high debt leads to low growth.   

Impulse Response Function: Impact of Growth on Debt

Once again, let us start with the first ordering. In the bottom panel (left) of Figures 2.1 (USA), 2.2 (ITALY), and 2.3 (JAPAN), a one standard deviation impulse to the growth shock reduces debt unambiguously in the short and medium term. While debt starts returning to its initial level in the case of the U.S. economy after about five to six years, it keeps declining in the Italian and Japanese economies. (This seems to suggest that the impact of economic growth on debt levels is longer lasting in Italy and Japan than in the U.S.) The bottom panels (left) of Figures 3.1 (USA), 3.2 (ITALY), and 3.3 (JAPAN) display impulse response plots for a one standard deviation impulse to the growth shock for the second ordering. They paint a qualitatively similar picture to that seen for the first ordering.

So, what do these figures tell us? They show that an unexpected increase in economic growth (for instance, due to an increase in aggregate demand caused by expanding exports) will be associated with a decrease in levels of public debt. Hence, we can turn this picture around and infer the following: when there is an unexpected decrease in economic growth, it will be associated with an increase in the levels of public debt over the next several years. This is true for all three countries and for both orderings of the variables in the VAR.

Moreover, unlike the effect of debt on growth (which we saw in the top panels of the figures), the effects of unexpected changes in growth on future debt levels are statistically significant (though imprecisely measured) up to about 10 years in the future. This evidence clearly supports the anti-austerian position that low growth leads to higher public debt.

Summary

To summarize, I find that the time series pattern of the dynamic relationship between public debt and economic growth in the postwar U.S., Italian, and Japanese economies is consistent with low growth causing high debt rather than high debt causing low growth. I draw this conclusion from two types of analyses: Granger non-causality tests and an investigation of impulse response function plots.

Granger non-causality tests allow one to ask the following questions: (a) do debt levels in the past help in better predicting current economic growth, and (b) does economic growth in the past help in improving predictions of current debt levels? The evidence suggests that for the U.S., Italy, and Japan, the answer to the first question is a NO and the answer to the second is a YES.

Impulse response analysis allows one to address the following questions: (a) what is the impact of an unexpected increase in current debt levels on the future time path of economic growth, and (b) how does an unexpected decline in economic growth affect future levels of debt? The data suggest that an unexpected increase in debt levels has only a small effect on future economic growth, but an unexpected decline in economic growth is associated with large and long-lasting increases in public debt levels.

Thus, empirical evidence from time series analysis of the U.S., Italian, and Japanese economies seems to bolster the critique presented by our colleagues Herndon, Ash, and Pollin, as well as Dube and others, of the Reinhart-Rogoff claim that high public debt leads to low economic growth. If anything, the evidence supports causality running in the opposite direction: low growth causes higher public debt.


Guest Post: Reinhart/Rogoff and Growth in a Time Before Debt

Apr 17, 2013 | Arindrajit Dube

[Mike Konczal here. Yesterday I wrote about a paper by Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst. They replicated the influential Reinhart/Rogoff paper Growth in a Time of Debt. There were many responses on the internet, including Jared Bernstein, Matt Yglesias, Dean Baker, Paul Krugman, and many, many others. Reinhart and Rogoff have since responded with a statement. They believe that the errors do not "affect in any significant way the central message of the paper" or their subsequent work. What is that message? That higher debt is associated with lower growth.

From the beginning many economists (Krugman, Bivens and Irons) have argued that their paper probably has the causation backwards: slow growth causes higher debt. But now that Herndon, Ash and Pollin have made the data used public, perhaps a talented econometrician could actually answer this? Arindrajit Dube was up for the challenge. Dube is an assistant professor of economics at the University of Massachusetts, Amherst.]

Growth in a Time Before Debt…

Recent work by my colleagues at UMass, Thomas Herndon, Michael Ash, and Robert Pollin (2013)—hereafter HAP—has demonstrated that, in contrast to the apparent results in Reinhart and Rogoff (2010), there is no real discontinuity or "tipping point" around a 90 percent debt-to-GDP ratio.

In their response, Reinhart and Rogoff—hereafter RR—admit to the arithmetic mistakes, but argue that the negative correlation between debt-to-GDP ratio and growth in the corrected data still supports their original contention. Taking the Stata dataset that HAP generously made available as part of their replication exercise, I first reproduced the nonparametric graph in HAP (2013) using a lowess regression (slightly different from the specific method they used). The dotted lines are 95 percent bootstrapped confidence bands.
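The lowess step is easy to reproduce outside Stata. A Python sketch with statsmodels, using synthetic stand-in data (the real exercise runs on the HAP replication dataset) and illustrative variable names:

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

# Illustrative stand-in: a noisy negative relationship between the
# debt-to-GDP ratio (x, percent) and GDP growth (y, percent).
rng = np.random.default_rng(0)
x = rng.uniform(0, 120, 500)
y = 4.0 - 0.02 * x + rng.normal(scale=1.0, size=500)

# Locally weighted regression; returns (x, fitted-y) pairs sorted by x.
smoothed = lowess(y, x, frac=0.5)
```

The bootstrapped confidence bands in the figure come from re-running the same smoother on resampled data and taking pointwise percentiles; that loop is omitted here.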

There is a visible negative relationship between growth and debt-to-GDP, but as HAP point out, the strength of the relationship is actually much stronger at low ratios of debt-to-GDP.  This makes us worry about the causal mechanism. After all, while a nonlinearity may be expected at high ratios due to a tipping point, the stronger negative relationship at low ratios is difficult to rationalize using a tipping point dynamic.

In their response, RR state that they were careful to distinguish between association and causality in their original research. Of course, we would only really care about this association if it likely reflects causality flowing from debt to growth (i.e. higher debt leading to lower growth, the lesson many take from RR's paper).

While it is difficult to ascertain causality from plots like this, we can leverage the time pattern of changes to gain some insight. Here is a simple question: does a high debt-to-GDP ratio better predict future growth rates, or past ones?  If the former is true, it would be consistent with the argument that higher debt levels cause growth to fall. On the other hand, if higher debt "predicts" past growth, that is a signature of reverse causality.

Below I have created similar plots relating the current year's debt-to-GDP ratio to (1) the next three years' average GDP growth and (2) the last three years' average GDP growth. (My .do file is available here so anyone can make these graphs. After all, if I made an error, I'd rather know about it now.)
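The forward/backward comparison can be sketched in a few lines of pandas. Everything here is illustrative: a single synthetic series stands in for the pooled RR/HAP country panel, and the debt ratio is built to depend on past growth, so reverse causality is baked in by construction.

```python
import numpy as np
import pandas as pd

# Illustrative stand-in series.
rng = np.random.default_rng(1)
n = 300
growth = pd.Series(rng.normal(2.0, 2.0, n))
past3 = growth.rolling(3).mean().shift(1)          # last 3 years' average growth
ratio = 60 - 5 * past3 + pd.Series(rng.normal(0, 5, n))  # ratio driven by PAST growth

df = pd.DataFrame({"growth": growth, "ratio": ratio})
df["fwd3"] = df["growth"].rolling(3).mean().shift(-3)    # next 3 years' average growth
df["back3"] = past3

# Reverse causality shows up as a much stronger (negative) correlation
# of the current ratio with PAST growth than with FUTURE growth.
c_fwd = df["ratio"].corr(df["fwd3"])
c_back = df["ratio"].corr(df["back3"])
```

With this construction, `c_back` is strongly negative while `c_fwd` hovers near zero, the same asymmetry the figure below exhibits in the real data.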

Figure 2:  Future and Past Growth Rates and Current Debt-to-GDP Ratio

As is evident, current period debt-to-GDP is a pretty poor predictor of future GDP growth at debt-to-GDP ratios of 30 or greater—the range where one might expect to find a tipping point dynamic.  But it does a great job predicting past growth.
 
This pattern is a telltale sign of reverse causality.  Why would this happen? Why would a fall in growth increase the debt-to-GDP ratio? One reason is just algebraic. The ratio has a numerator (debt) and denominator (GDP): any fall in GDP will mechanically boost the ratio.  Even if GDP growth doesn’t become negative, continuous growth in debt coupled with a GDP growth slowdown will also lead to a rise in the debt-to-GDP ratio.
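The algebraic point in the paragraph above is worth making concrete with a toy calculation (the numbers are made up for illustration):

```python
# A fall in GDP raises debt/GDP even if the debt level does not change.
debt, gdp = 90.0, 100.0
ratio_before = 100 * debt / gdp                 # 90.0 percent

gdp_recession = gdp * 0.96                      # GDP falls 4 percent
ratio_recession = 100 * debt / gdp_recession    # 93.75 percent, with zero new borrowing

# A growth slowdown (still positive) plus continued borrowing also lifts the ratio.
debt_slow, gdp_slow = debt * 1.05, gdp * 1.01   # debt grows 5%, GDP only 1%
ratio_slow = 100 * debt_slow / gdp_slow         # about 93.6 percent
```

In both cases the ratio climbs several points without any assumption that debt is dragging down growth.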
 
There is also a less mechanical story. A recession leads to increased spending through automatic stabilizers such as unemployment insurance. And governments usually finance these using greater borrowing, as undergraduate macroeconomics textbooks tell us governments should do. This is what happened in the U.S. during the past recession. For all of these reasons, we should expect reverse causality to be a problem here, and these bivariate plots are consistent with such a story.
 
Of course, these are just bivariate plots. To get the econometrics right, when looking at correlations between current period debt-to-GDP ratio and past or future GDP growth, you should also account for past or future debt-to-GDP ratio.
 
A standard way of doing this is to use a "distributed lag" model, which just means regressing GDP growth on a set of leads and lags of the debt-to-GDP ratio, and then forming an "impulse response" from, say, a hypothetical 10-point increase in the debt-to-GDP ratio (where 100 means the debt level is equal to GDP).
 
Figure 3 below reports these impulse responses. What we find is exactly the pattern consistent with reverse causality.
 
The way to read this graph is to go from left to right. Here “-3” is 3 years before a 10-point increase in the debt-to-GDP ratio, “-2” is 2 years before the increase, and so on. The graph shows that GDP growth rates were unusually low and falling prior to the 10-point increase in the debt-to-GDP ratio. If you average the growth differentials from the 3 years prior to the increase in debt (i.e., the values associated with -3, -2, -1 on the x-axis), it is -0.6 (or 6/10 of a percent lower growth than usual) and statistically significant at the 5 percent level. In contrast, the average growth rate in years 1, 2, and 3+ after the 10-point increase in the debt-to-GDP ratio is 0.2 (or 2/10 of one percent) higher than usual.
 
Figure 3: Impulse Response of GDP Growth from a 10-point increase in Debt-to-Income Ratio

So what does this all show?  It shows that purely in terms of correlations, a 10 point increase in the debt-to-GDP ratio in the RR data is associated with a 6/10 of a percentage point lower growth in the 3 years prior to the increase, but actually a slightly larger than usual growth in the few years after the increase. During the year of the increase in debt-to-GDP ratio, GDP growth is really low, consistent with the algebraic effect of lower growth leading to a higher debt-to-GDP ratio.

All in all, these simple exercises suggest that the raw correlation between the debt-to-GDP ratio and GDP growth probably reflects a fair amount of reverse causality. We can’t simply use correlations like those used by RR (or ones presented here) to identify causal estimates.

[Aside:  For those who are more econometrically inclined, here is the picture with country and year fixed effects to soak up some of the heterogeneity.  Not much different. By the way, the standard errors in the panel regressions are clustered by country.]

----
Addendum.
 
Labor economists have long recognized that falling values of the outcome can sometimes precede the treatment. In the job training literature this is known as an "Ashenfelter dip": those with a fall in earnings are more likely to enter training programs, creating a spurious negative correlation between training and wages. This is similar to the problem of debt and growth studied here.
 
One way in which economists control for such dips is by including the lagged outcome as a control.  In this case, we can control for a 1-year lagged GDP growth using a partial linear model. This still allows for a nonlinear relationship between GDP growth and debt-to-GDP ratio like in the bivariate case, but in addition controls for last period's growth.
 
Here's the picture:
Controlling for the previous year's GDP growth largely erases the negative relationship between debt-to-GDP ratio and GDP growth, especially for the range where debt is 30 percent or more of GDP.  This is because a fall in GDP precedes the rise in Debt-to-GDP ratio. This is yet another demonstration that the simple bivariate negative correlation is driven in substantial part by reverse causality.

Follow or contact the Rortybomb blog:

  

 

[Mike Konczal here. Yesterday I wrote about a paper by Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst. They replicated the influential Reinhart/Rogoff paper Growth in a Time of Debt. There were many responses on the internet, including Jared Bernstein, Matt Yglesias, Dean Baker, Paul Krugman, and many, many others. Reinhart and Rogoff have since responded with a statement. They believe the errors do not "affect in any significant way the central message of the paper" or their subsequent work. What is that message? That higher debt is associated with lower growth.

From the beginning many economists (Krugman, Bivens and Irons) have argued that their paper probably has the causation backwards: slow growth causes higher debt. But now that Herndon, Ash, and Pollin have made the data public, perhaps a talented econometrician could actually answer this question. Arindrajit Dube was up for the challenge. Dube is an assistant professor of economics at the University of Massachusetts, Amherst.]

Growth in a Time Before Debt…

Recent work by my colleagues at UMass Thomas Herndon, Michael Ash and Robert Pollin (2013)—hereafter HAP—has demonstrated that in contrast to the apparent results in Reinhart and Rogoff (2010), there is no real discontinuity or "tipping point" around 90 percent of debt-to-GDP ratio.

In their response, Reinhart and Rogoff—hereafter RR—admit to the arithmetic mistakes, but argue that the negative correlation between debt-to-GDP ratio and growth in the corrected data still supports their original contention. Taking the Stata dataset that HAP generously made available as part of their replication exercise, I first reproduced the nonparametric graph in HAP (2013) using a lowess regression (slightly different from the specific method they used). The dotted lines are 95 percent bootstrapped confidence bands.
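For readers who want to see the mechanics, here is a rough Python sketch of the same exercise on purely synthetic data. Dube worked in Stata; the Gaussian kernel smoother below is a simplified stand-in for lowess, and every number in the fake dataset is hypothetical.

```python
import numpy as np

def kernel_smooth(x, y, grid, bw):
    # Nadaraya-Watson kernel regression: a simple stand-in for lowess
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / bw) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
# Hypothetical country-year data: debt/GDP ratio and real GDP growth
debt = rng.uniform(0, 120, 500)
growth = 4.0 - 0.02 * debt + rng.normal(0, 2, 500)

grid = np.linspace(0, 120, 25)
fit = kernel_smooth(debt, growth, grid, bw=10.0)

# Bootstrap 95 percent confidence bands by resampling country-years
boot = np.empty((200, grid.size))
for b in range(200):
    i = rng.integers(0, debt.size, debt.size)
    boot[b] = kernel_smooth(debt[i], growth[i], grid, bw=10.0)
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
```

Plotting `fit` with the `lo`/`hi` bands against `grid` reproduces the style of the HAP figure.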

There is a visible negative relationship between growth and debt-to-GDP, but as HAP point out, the strength of the relationship is actually much stronger at low ratios of debt-to-GDP.  This makes us worry about the causal mechanism. After all, while a nonlinearity may be expected at high ratios due to a tipping point, the stronger negative relationship at low ratios is difficult to rationalize using a tipping point dynamic.

In their response, RR state that they were careful to distinguish between association and causality in their original research. Of course, we would only really care about this association if it likely reflects causality flowing from debt to growth (i.e. higher debt leading to lower growth, the lesson many take from RR's paper).

While it is difficult to ascertain causality from plots like this, we can leverage the time pattern of changes to gain some insight. Here is a simple question: does a high debt-to-GDP ratio better predict future growth rates, or past ones?  If the former is true, it would be consistent with the argument that higher debt levels cause growth to fall. On the other hand, if higher debt "predicts" past growth, that is a signature of reverse causality.

Below I have created similar plots by regressing (1) the next 3 years' average GDP growth and (2) the last 3 years' average GDP growth on the current year's debt-to-GDP ratio. (My .do file is available here so anyone can make these graphs. After all, if I made an error, I'd rather know about it now.)
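The logic of this test can be illustrated on simulated data where, by construction, causality runs only from growth to debt. A hedged Python sketch (all parameters are hypothetical, not estimates from the RR data):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 5000
growth = rng.normal(3.0, 2.0, T)  # growth evolves independently of debt

debt = np.empty(T)
debt[0] = 60.0
for t in range(1, T):
    # Reverse causality only: weak growth pushes the debt ratio up
    debt[t] = 0.9 * debt[t - 1] + 6.0 + 1.5 * (3.0 - growth[t])

ts = np.arange(3, T - 3)
past = np.array([growth[s - 3:s].mean() for s in ts])      # last 3 years' growth
future = np.array([growth[s + 1:s + 4].mean() for s in ts])  # next 3 years' growth

corr_past = np.corrcoef(debt[ts], past)[0, 1]
corr_future = np.corrcoef(debt[ts], future)[0, 1]
```

Even though debt has no effect on growth here, the current debt ratio "predicts" past growth strongly (negative correlation) and future growth not at all, which is exactly the signature the post describes.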

Figure 2:  Future and Past Growth Rates and Current Debt-to-GDP Ratio

As is evident, current period debt-to-GDP is a pretty poor predictor of future GDP growth at debt-to-GDP ratios of 30 or greater—the range where one might expect to find a tipping point dynamic.  But it does a great job predicting past growth.
 
This pattern is a telltale sign of reverse causality.  Why would this happen? Why would a fall in growth increase the debt-to-GDP ratio? One reason is just algebraic. The ratio has a numerator (debt) and denominator (GDP): any fall in GDP will mechanically boost the ratio.  Even if GDP growth doesn’t become negative, continuous growth in debt coupled with a GDP growth slowdown will also lead to a rise in the debt-to-GDP ratio.
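The algebra is easy to verify directly. A minimal sketch with hypothetical numbers:

```python
# Hypothetical starting point: debt 90, GDP 100, so the ratio is 90 percent
debt, gdp = 90.0, 100.0

# Case 1: GDP falls 5 percent while debt is unchanged -- the ratio jumps (~94.7)
recession_ratio = debt / (gdp * 0.95) * 100

# Case 2: growth merely slows (GDP +1 percent) while debt keeps growing (+4 percent):
# the ratio still rises even though growth never turned negative (~92.7)
slowdown_ratio = (debt * 1.04) / (gdp * 1.01) * 100
```

Both channels raise the ratio with no causal arrow running from debt to growth at all.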
 
There is also a less mechanical story. A recession leads to increased spending through automatic stabilizers such as unemployment insurance. And governments usually finance these using greater borrowing, as undergraduate macroeconomics textbooks tell us governments should do. This is what happened in the U.S. during the past recession. For all of these reasons, we should expect reverse causality to be a problem here, and these bivariate plots are consistent with such a story.
 
Of course, these are just bivariate plots. To get the econometrics right, when looking at correlations between current period debt-to-GDP ratio and past or future GDP growth, you should also account for past or future debt-to-GDP ratio.
 
A standard way of doing this is using a "distributed lag" model, which just means regressing GDP growth on a set of leads and lags of the debt-to-GDP ratio, and then forming an "impulse response" from, say, a hypothetical 10 point increase in the debt-to-GDP ratio (where 100 means debt equal to GDP).
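Here is a hedged Python sketch of the mechanics of such a distributed lag regression, again on simulated data where debt responds only to growth. This illustrates the method, not Dube's exact Stata specification; every parameter is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 4000
# Persistent growth (AR(1)) that is, by construction, unaffected by debt
g = np.empty(T)
g[0] = 3.0
for t in range(1, T):
    g[t] = 3.0 + 0.6 * (g[t - 1] - 3.0) + rng.normal(0, 1.5)

# The debt ratio rises (noisily) when growth is weak: reverse causality only
dd = 1.5 * (3.0 - g[1:]) + rng.normal(0, 1.0, T - 1)  # annual change in the ratio

# Distributed lag: regress growth on leads and lags of the debt-ratio change
K = 3
t0 = np.arange(K, dd.size - K)
X = np.column_stack([dd[t0 + k] for k in range(-K, K + 1)] + [np.ones(t0.size)])
y = g[t0 + 1]  # growth in the year the change dd[t0] takes place
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Impulse response: growth differentials around a 10-point debt-ratio rise
irf = 10 * beta[: 2 * K + 1]
```

In this toy world growth is sharply below normal in the year the debt ratio jumps, purely because low growth is what produces the jump.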
 
Figure 3 below reports these impulse responses. What we find is exactly the pattern consistent with reverse causality.
 
The way to read this graph is to go from left to right. Here “-3” is 3 years before a 10 point increase in the debt-to-GDP ratio, “-2” is 2 years before the increase, and so on. The graph shows that GDP growth rates were unusually low and falling prior to the 10 point increase in the debt-to-GDP ratio. If you average the growth differentials from the 3 years prior to the increase in debt (i.e., the values associated with -3, -2, -1 on the X-axis), you get –0.6 (6/10 of a percentage point lower growth than usual), statistically significant at the 5 percent level. In contrast, the average growth rate in years 1, 2, and 3+ after the 10 point increase is 0.2 (2/10 of a percentage point) higher than usual.
 
Figure 3: Impulse Response of GDP Growth from a 10-point increase in Debt-to-Income Ratio

So what does this all show?  It shows that purely in terms of correlations, a 10 point increase in the debt-to-GDP ratio in the RR data is associated with a 6/10 of a percentage point lower growth in the 3 years prior to the increase, but actually a slightly larger than usual growth in the few years after the increase. During the year of the increase in debt-to-GDP ratio, GDP growth is really low, consistent with the algebraic effect of lower growth leading to a higher debt-to-GDP ratio.

All in all, these simple exercises suggest that the raw correlation between debt-to-GDP ratio and GDP growth probably reflects a fair amount of reverse causality. We can’t simply use correlations like those used by RR (or ones presented here) to identify causal estimates.

[Aside:  For those who are more econometrically inclined, here is the picture with country and year fixed effects to soak up some of the heterogeneity.  Not much different. By the way, the standard errors in the panel regressions are clustered by country.]

----
Addendum.
 
Labor economists have long recognized that falling values of the outcome can sometimes precede the treatment. In the job training literature this is known as an "Ashenfelter dip": those with a fall in earnings are more likely to enter training programs, creating a spurious negative correlation between training and wages. This is similar to the problem of debt and growth studied here.
 
One way in which economists control for such dips is by including the lagged outcome as a control. In this case, we can control for 1-year-lagged GDP growth using a partially linear model. This still allows for a nonlinear relationship between GDP growth and the debt-to-GDP ratio, as in the bivariate case, but additionally controls for last period's growth.
 
Here's the picture:
Controlling for the previous year's GDP growth largely erases the negative relationship between debt-to-GDP ratio and GDP growth, especially for the range where debt is 30 percent or more of GDP. This is because a fall in GDP precedes the rise in the debt-to-GDP ratio. This is yet another demonstration that the simple bivariate negative correlation is driven in substantial part by reverse causality.
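A simplified, linear version of this check can be sketched in Python: in a simulated world where the debt ratio responds only to past growth, adding last year's growth as a control wipes out the bivariate debt "effect." Parameters are hypothetical, and this is a linear stand-in for the partially linear model in the post.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 4000
# Persistent growth, unaffected by debt by construction
g = np.empty(T)
g[0] = 3.0
for t in range(1, T):
    g[t] = 3.0 + 0.6 * (g[t - 1] - 3.0) + rng.normal(0, 1.5)

# Debt ratio driven by *past* weak growth (reverse causality only)
d = np.empty(T)
d[0] = 60.0
for t in range(1, T):
    d[t] = 0.7 * d[t - 1] + 18.0 + 1.5 * (3.0 - g[t - 1])

def ols(X, y):
    # OLS with an intercept; returns the coefficient vector
    b, *_ = np.linalg.lstsq(np.column_stack([X, np.ones(len(y))]), y, rcond=None)
    return b

# Raw bivariate slope of growth on debt: negative, as in the scatter plots
slope_raw = ols(d[1:], g[1:])[0]

# Add last year's growth as a control: the debt "effect" largely vanishes
slope_ctrl = ols(np.column_stack([d[1:], g[:-1]]), g[1:])[0]
```

The raw slope is spuriously negative; the controlled slope is close to zero, mirroring what the figure shows for the real data.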


Researchers Finally Replicated Reinhart-Rogoff, and There Are Serious Problems.

Apr 16, 2013
Mike Konczal

In 2010, economists Carmen Reinhart and Kenneth Rogoff released a paper, "Growth in a Time of Debt." Their "main result is that...median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower." Countries with debt-to-GDP ratios above 90 percent have a slightly negative average growth rate, in fact.

This has been one of the most cited stats in the public debate during the Great Recession. Paul Ryan's Path to Prosperity budget states their study "found conclusive empirical evidence that [debt] exceeding 90 percent of the economy has a significant negative effect on economic growth." The Washington Post editorial board takes it as an economic consensus view, stating that "debt-to-GDP could keep rising — and stick dangerously near the 90 percent mark that economists regard as a threat to sustainable economic growth." 

Is it conclusive? One response has been to argue that the causation is backwards: slower growth leads to higher debt-to-GDP ratios. Josh Bivens and John Irons made this case at the Economic Policy Institute. But this assumes that the data is correct. From the beginning there were complaints that Reinhart and Rogoff weren't releasing the data behind their results (e.g., Dean Baker). I knew of several people trying to replicate the results who were bumping into walls left and right; it couldn't be done.

In a new paper, "Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff," Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst successfully replicate the results. After trying to replicate the Reinhart-Rogoff results and failing, they reached out to Reinhart and Rogoff, who were willing to share their data spreadsheet. This allowed Herndon et al. to see how Reinhart and Rogoff's data was constructed.

They find that three main issues stand out. First, Reinhart and Rogoff selectively exclude years of high debt and average growth. Second, they use a debatable method to weight the countries. Third, there also appears to be a coding error that excludes high-debt and average-growth countries. All three bias in favor of their result, and without them you don't get their controversial result. Let's investigate further:

Selective Exclusions. Reinhart-Rogoff use 1946-2009 as their period, with the main difference among countries being their starting year. In their data set, there are 110 years of data available for countries that have a debt/GDP over 90 percent, but they only use 96 of those years. The paper didn't disclose which years they excluded or why.

Herndon-Ash-Pollin find that they exclude Australia (1946-1950), New Zealand (1946-1949), and Canada (1946-1950). This has consequences, as these countries had high debt and solid growth. Canada had debt-to-GDP over 90 percent during this period and 3 percent growth. New Zealand had a debt/GDP over 90 percent from 1946-1951. If you use the average growth rate across all those years, it is 2.58 percent. If you only use the last year, as Reinhart-Rogoff do, it has a growth rate of -7.6 percent. That's a big difference, especially considering how they weight the countries.

Unconventional Weighting. Reinhart-Rogoff divide country years into debt-to-GDP buckets. They then take the average real growth for each country within the buckets. So the growth rates of the 19 years that the U.K. is above 90 percent debt-to-GDP are averaged into one number. These country numbers are then averaged, with each country weighted equally, to calculate the average real GDP growth rate.

In case that didn't make sense, let's look at an example. The U.K. has 19 years (1946-1964) above 90 percent debt-to-GDP, with an average 2.4 percent growth rate. New Zealand has one year in their sample above 90 percent debt-to-GDP, with a growth rate of -7.6 percent. These two numbers, 2.4 and -7.6 percent, are given equal weight in the final calculation, as the countries are averaged equally, even though there are 19 times as many data points for the U.K.
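The arithmetic of the two weighting schemes is easy to check:

```python
# U.K.: 19 years above 90 percent debt/GDP at 2.4 percent average growth
# N.Z.: 1 year above 90 percent at -7.6 percent (the year RR's sample keeps)
uk_years, uk_growth = 19, 2.4
nz_years, nz_growth = 1, -7.6

# Equal weight per country (the RR approach): -2.6 percent
by_country = (uk_growth + nz_growth) / 2

# Weight by country-years instead: 1.9 percent
by_year = (uk_years * uk_growth + nz_years * nz_growth) / (uk_years + nz_years)
```

One weighting choice turns 20 country-years of mostly solid growth into a sharply negative average.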

Now maybe you don't want to give equal weighting to years (technical aside: Herndon-Ash-Pollin bring up serial correlation as a possibility). Perhaps you want to count episodes instead. But this weighting significantly reduces the average; if you weight by the number of years, you find a higher growth rate above 90 percent. Reinhart-Rogoff don't discuss this methodology in their paper, either the fact that they are weighting this way or the justification for it.

Coding Error. As Herndon-Ash-Pollin puts it: "A coding error in the RR working spreadsheet entirely excludes five countries, Australia, Austria, Belgium, Canada, and Denmark, from the analysis. [Reinhart-Rogoff] averaged cells in lines 30 to 44 instead of lines 30 to 49...This spreadsheet error...is responsible for a -0.3 percentage-point error in RR's published average real GDP growth in the highest public debt/GDP category." Belgium, in particular, has 26 years with debt-to-GDP above 90 percent, with an average growth rate of 2.6 percent (though this is only counted as one total point due to the weighting above).

Being a bit of a doubting Thomas on this coding error, I wouldn't believe it unless I touched the digital Excel wound myself. One of the authors was able to show me, and here it is. You can see the Excel blue-box for formulas missing some data:

This error is needed to get the results they published, and it would go a long way to explaining why it has been impossible for others to replicate these results. If this error turns out to be an actual mistake Reinhart-Rogoff made, well, all I can hope is that future historians note that one of the core empirical points providing the intellectual foundation for the global move to austerity in the early 2010s was based on someone accidentally not updating a row formula in Excel.

So what do Herndon-Ash-Pollin conclude? They find "the average real GDP growth rate for countries carrying a public debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as [Reinhart-Rogoff claim]." [UPDATE: To clarify, they find 2.2 percent if they include all the years, weight by the number of years, and avoid the Excel error.] Going further into the data, they are unable to find a breakpoint where growth falls quickly and significantly.

This is also good evidence for why you should release your data online, so it can be properly vetted. But beyond that, looking through the data and how much it can collapse because of this or that assumption, it becomes quite clear that there's no magic number out there. The debt needs to be thought of as a response to the contingent circumstances we find ourselves in, with mass unemployment, a Federal Reserve desperately trying to gain traction at the zero lower bound, and a gap between what we could be producing and what we are. The past guides us, but so far it has failed to provide evidence of an emergency threshold. In fact, it tells us that a larger deficit right now would help us greatly.

[UPDATE: People are responding to the Excel error, and that is important to document. But from a data point of view, the exclusion of the post-World War II data is particularly troublesome, as that is driving the negative results. This needs to be explained, as does the weighting, which compresses the long periods of average growth and high debt.]

[UPDATE: Check out the next post from this blog on Reinhart-Rogoff, a guest post by economist Arindrajit Dube. Now that 90 percent debt-to-GDP is no longer a cliff for growth, what about the general trend between the two? Dube finds significant evidence that reverse causation is the culprit.]


Mapping Out the Arguments Against Chained CPI

Apr 9, 2013
Mike Konczal

Reports started coming in late last week that President Obama’s budget, to be released early tomorrow, will include a change to the cost-of-living adjustment (COLA) for Social Security. Specifically, it will adopt a “chained CPI” (consumer price index) measure.

Many people have been writing stories about why this is a bad idea. I want to generalize them into four major categories of critique of moving to a chained CPI (with one aside). As you read stories about the pros and cons of this change in the weeks ahead, hopefully this guide can provide some background.

Accuracy, or Lack Thereof

Economists like the idea of chained CPI because they think it’s more representative of how people behave when they substitute among goods. In this story, we have been over-correcting for inflation in the past decades.

However, as a letter from EPI, signed by 300 economists and social insurance experts, explains, it is just as likely that we are under-correcting. EPI notes "it is just as likely that the current COLA fails to keep up with rising costs confronting elderly and disabled beneficiaries." The current adjustment is based on an index of workers that excludes retirees.

If you look into the data, the elderly spend a lot more of their limited money on housing, utilities, and medical care. Health care costs have been rising rapidly over the past several decades, and it is difficult to substitute away from other necessary, fixed-price goods like utilities. With the notable exception of college costs, the things urban wage earners spend money on haven't increased in price as quickly as what the elderly purchase. As a result, the CPI-E (the index tailored to the elderly) increased 3.3 percent a year from 1982 to 2007, while the CPI-W (tailored to wage earners) increased only 3 percent a year. Definitionally, through the way it is calculated, chained CPI-W will always be lower than CPI-W. [Edit: This will almost certainly be lower, but it isn't definitionally true.]
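Those annual differences compound into a sizable cumulative gap over the 25 years:

```python
# CPI-E rose 3.3 percent a year and CPI-W 3.0 percent a year, 1982-2007
years = 2007 - 1982
cpi_e = 1.033 ** years
cpi_w = 1.030 ** years

# Cumulative shortfall if benefits track CPI-W while costs track CPI-E (~7.5 percent)
gap = (cpi_e / cpi_w - 1) * 100
```

A 0.3-point annual difference sounds small, but compounded over a retirement it is a meaningful share of purchasing power.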

As Dean Baker has noted, if accuracy were the only motive for changing COLA, it would be relatively easy to get a full, chained version of the index of prices faced by the elderly and use that. That has not been proposed.

Hedging Unexpected Longevity

Another argument is that this is a relatively small cut, or that a slower rate of growth shouldn’t really be thought of as a cut. But there’s a big problem with this.

There are many nice things about the design of Social Security, but one of them is that it is a form of insurance against the downsides of living longer than expected. Let’s say you retire at 65, believe you’ll live to 85, and save enough to make it to 88 just in case. And then you live to 92. Are those last five years absolutely miserable, with your savings completely depleted and an inability to earn market wages except through begging and charity? No, because my man Franklin Delano Roosevelt and Social Security got your back. Social Security helps hedge against two risks that are very difficult to manage: when you were born (and thus the years into which you’ll retire) and how long you’ll live.

Notice how chained CPI cuts, though. In the same way that compound interest grows quickly over time because you earn interest on what you've saved, a lower cost-of-living adjustment creates a lower baseline for future adjustments, so the cuts grow over time.

This means that the real cuts come from people who happen to live the longest. Which is precisely one of the risks Social Security is meant to combat. This is one reason why women, who live longer than men, are much more at risk from these chained CPI cuts.
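To see the compounding, here is a small sketch. The 0.3-percentage-point annual gap between the indexes is an illustrative assumption (a commonly cited rough magnitude), not a figure from this post, and the 2.5 percent baseline COLA is hypothetical.

```python
# Assumption for illustration: chained CPI runs about 0.3 percentage points
# a year below the current index (hypothetical 2.2 vs. 2.5 percent COLAs)
current, chained = 0.025, 0.022

def cut_after(years):
    # Percent benefit reduction relative to the current COLA after `years`
    return (1 - (1 + chained) ** years / (1 + current) ** years) * 100

# Roughly 3, 6, and 8 percent: the cut deepens the longer you live
cut_10, cut_20, cut_30 = cut_after(10), cut_after(20), cut_after(30)
```

The reduction roughly triples between year 10 and year 30 of retirement, which is why the longest-lived beneficiaries bear the deepest cuts.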

Aside: Can’t We Balance the Downside?

You’ll notice liberals who support moving to chained CPI have complicated “swallow a bird to catch the spider who’s catching the fly” policy proposals to go along with it. If we swallow Obama’s chained CPI proposal, we’ll need to swallow an age “bump” to keep chained CPI from falling heavily on the very old. But after we swallow the age bump, we’ll need to swallow some sort of exemption for Supplemental Security Income to address the fact that the change would still fall heavily on the initial benefit level for the poorest elderly and disabled people. And so on.

Doing all these fixes, of course, eliminates much of the savings that people are hoping to get. And it is unlikely that these clever ways of balancing the worst effects of the change will get even a single Republican vote. And of course, in spite of all this effort, Republicans could still call out the president for proposing to cut Social Security.

Neither Grand nor a Bargain

You’ll hear arguments that a Grand Bargain is necessary, so it’s better to bring Social Security into long-term balance now, with Democrats at the helm, than in the future, when there will be less time and an uncertain governance coalition. You can get fewer cuts and more revenue than you would otherwise and take the issue off the table for the foreseeable future to concentrate on other priorities.

But if that’s your idea, then this is a terrible deal and sets a terrible precedent, because this deal would accomplish none of your goals. You'd cut Social Security without putting in any new revenue. And it wouldn't be sufficient to close the long-term gap, so the issue would stay on the table. Indeed, the deficit hawks would probably be emboldened, viewing this as a "downpayment" on future cuts, and require any future attempts to get more revenue for Social Security, say by raising the payroll tax cap, to involve significant additional cuts.

We Need to Expand Social Security

As Michael Lind, Joshua Freedman, and Steven Hill of the New America Foundation, along with Robert Hiltonsmith of Demos, expertly document, Social Security should be expanded in the years ahead, not cut.

Retirement security is meant to be a three-legged stool of Social Security, private savings, and employer pensions. The last two legs of that stool have been collapsing over the past few decades, and there is no reason to believe that this will change in the near future. 401(k)s have been a boon for the rich to avoid taxes and save money they’d be saving anyway, while it isn’t clear that average Americans have saved enough to offset declining pensions. Median wages have dropped in the recession and are likely to show little growth in the years ahead, which makes building private savings harder. There isn't a ton to cut: even the middle-income quintile of retirees, making only around $20,000 a year, gets 62 percent of its income from Social Security.

There are many ways to boost Social Security, and the New America paper introduces one. But as the authors note, “[a]ny strategy that expands the reliable and efficient public share of retirement security in America would be an improvement over today’s system, which is biased toward the affluent and skewed toward private savings.” And the best way to do this is to build out programs that already work well.

Any other stories out there that require a new category?

Follow or contact the Rortybomb blog:

  

 

Reports started coming in late last week that President Obama’s budget, to be released early tomorrow, will include a change to the cost-of-living adjustment (COLA) for Social Security. Specifically, it will adopt a “chained CPI” (consumer price index) measure.

Many people have been writing stories about why this is a bad idea. I want to generalize them into four major categories of critique of moving to a chained CPI (with one aside). As you read stories about the pros and cons of this change in the weeks ahead, hopefully this guide can provide some background.

Accuracy, or Lack Thereof

Economists like the idea of chained CPI because they think it’s more representative of how people behave when they substitute among goods. In this story, we have been over-correcting for inflation in the past decades.

However, as a letter from EPI signed by 300 economists and social insurance experts explains, it is just as likely that we are under-correcting: “it is just as likely that the current COLA fails to keep up with rising costs confronting elderly and disabled beneficiaries.” The current adjustment is based on an index of urban wage earners, which excludes retirees.

If you look into the data, the elderly spend far more of their limited money on housing, utilities, and medical care. Health care costs have been rising rapidly over the past several decades, and it is difficult to substitute away from other necessities with fixed prices, like utilities. With the notable exception of college costs, the things urban wage earners spend money on haven’t increased in price as quickly as what the elderly purchase. As a result, the CPI-E (the index tailored to the elderly) increased 3.3 percent a year from 1982 to 2007, while the CPI-W (tailored to wage earners) increased only 3 percent a year. Definitionally, through the way it is calculated, chained CPI-W will always be lower than CPI-W. [Edit: This will almost certainly be lower, but it isn't definitionally true.]
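To see how a small annual difference compounds, here is a quick sketch using the two growth rates above. The 25-year horizon matches the 1982-2007 window; the arithmetic is illustrative, not a precise BLS calculation.

```python
# Illustrative only: compound the 1982-2007 average annual growth rates
# cited above (CPI-E at 3.3%, CPI-W at 3.0%) to see how a small annual
# gap widens into a meaningful cumulative one over a retirement.

years = 25
cpi_e = 1.033 ** years   # index tailored to the elderly
cpi_w = 1.030 ** years   # index tailored to urban wage earners

gap = (cpi_e / cpi_w - 1) * 100
print(f"CPI-E level after {years} years: {cpi_e:.2f}x")
print(f"CPI-W level after {years} years: {cpi_w:.2f}x")
print(f"Cumulative gap: {gap:.1f}%")
```

A 0.3-percentage-point annual difference sounds trivial, but compounded over a 25-year retirement it leaves the elderly index more than 7 percent above the wage-earner index.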

As Dean Baker has noted, if accuracy were the only motive for changing COLA, it would be relatively easy to get a full, chained version of the index of prices faced by the elderly and use that. That has not been proposed.

Hedging Unexpected Longevity

Another argument is that this is a relatively small cut, or that a slower rate of growth shouldn’t really be thought of as a cut. But there’s a big problem with this.

There are many nice things about the design of Social Security, but one of them is that it is a form of insurance against the downsides of living longer than expected. Let’s say you retire at 65, believe you’ll live to 85, and save enough to make it to 88 just in case. And then you live to 92. Are those last five years absolutely miserable, with your savings completely depleted and an inability to earn market wages except through begging and charity? No, because my man Franklin Delano Roosevelt and Social Security got your back. Social Security helps hedge against two risks that are very difficult to manage: when you were born (and thus the years into which you’ll retire) and how long you’ll live.

Notice how chained CPI cuts, though. In the same way that compound interest grows quickly over time because you earn interest on what you’ve already saved, a lower cost-of-living adjustment creates a lower baseline for each future adjustment, so the cuts compound and grow over time.

This means the deepest cuts fall on the people who happen to live the longest, which is precisely one of the risks Social Security is meant to insure against. This is one reason why women, who live longer than men, are much more at risk from chained CPI cuts.
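A rough sketch of this compounding, using a hypothetical 2.0 percent baseline COLA and the commonly cited estimate that chained CPI runs about 0.3 percentage points lower per year:

```python
# A sketch of how a lower COLA compounds with age. The 0.3-percentage-
# point annual gap is a commonly cited estimate of how much chained CPI
# runs below the current CPI-W; the 2.0% baseline COLA is hypothetical.

def benefit_after(years, cola, base=100.0):
    """Benefit level after `years` of annual COLAs, indexed to 100 at retirement."""
    return base * (1 + cola) ** years

for age_gap in (10, 20, 30):  # years since retirement
    current = benefit_after(age_gap, 0.020)   # current COLA
    chained = benefit_after(age_gap, 0.017)   # chained CPI COLA
    cut = (1 - chained / current) * 100
    print(f"{age_gap} years into retirement: benefit cut of {cut:.1f}%")
```

The cut roughly triples between the tenth and thirtieth year of retirement, which is exactly why the burden lands on the longest-lived.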

Aside: Can’t We Balance the Downside?

You’ll notice liberals who support moving to chained CPI have complicated “swallow a bird to catch the spider who’s catching the fly” policy proposals to go along with it. If we swallow Obama’s chained CPI proposal, we’ll need to swallow an age “bump” to catch chained CPI from falling heavily on the very old. But after we swallow the age bump, we’ll need to swallow some sort of exemption for Supplemental Security Income to catch the fact that the change would still fall heavily on the initial benefit level for the poorest elderly and disabled people. And so on.

Doing all these fixes, of course, eliminates much of the savings that people are hoping to get. And it is unlikely that these clever ways of balancing the worst effects of the change will get even a single Republican vote. And of course, in spite of all this effort, Republicans could still call out the president for proposing to cut Social Security.

Neither Grand nor a Bargain

You’ll hear arguments that a Grand Bargain is necessary, so it’s better to bring Social Security into long-term balance now, with Democrats at the helm, than in the future, when there will be less time and an uncertain governance coalition. You can get fewer cuts and more revenue than you would otherwise and take the issue off the table for the foreseeable future to concentrate on other priorities.

But if that’s your idea, then this is a terrible deal and sets a terrible precedent, because this deal would accomplish none of your goals. You'd cut Social Security without putting in any new revenue. And it wouldn't be sufficient to close the long-term gap, so the issue would stay on the table. Indeed, the deficit hawks would probably be emboldened, viewing this as a "downpayment" on future cuts, and require any future attempts to get more revenue for Social Security, say by raising the payroll tax cap, to involve significant additional cuts.

We Need to Expand Social Security

As Michael Lind, Joshua Freedman, and Steven Hill of the New America Foundation, along with Robert Hiltonsmith of Demos, expertly document, Social Security should be expanded in the years ahead, not cut.

Retirement security is meant to be a three-legged stool of Social Security, private savings, and employer pensions. The last two legs of that stool have been collapsing in the past few decades, and there is no reason to believe that this will change in the near future. 401(k)s have been a boon for the rich to avoid taxes and save money that they’d be saving anyway, while it isn’t clear that average Americans have saved enough to offset declining pensions. Median wages have dropped in the recession and are likely to show little growth in the years ahead, which makes building private savings harder. There isn't a ton to cut - even the middle income quintile of retirees, making only around $20,000 a year, get 62 percent of their income from Social Security.

There are many ways to boost Social Security, and the New America paper introduces one. But as the authors note, “[a]ny strategy that expands the reliable and efficient public share of retirement security in America would be an improvement over today’s system, which is biased toward the affluent and skewed toward private savings.” And the best way to do programs is to build out programs that already work well.

Any other stories out there that require a new category?

Follow or contact the Rortybomb blog:



What Does the Leaked Brown-Vitter Bill on Too Big To Fail Do?

Apr 9, 2013Mike Konczal

Sens. Sherrod Brown (D-Ohio) and David Vitter (R-La.) have been working on a bill to block the largest banks and financial firms from receiving federal subsidies for being deemed Too Big to Fail. On Friday, a draft version of that bill was leaked to Tim Fernholz of Quartz, much to Vitter’s chagrin. So, what does the bill do?

Let’s start with what it doesn’t do: It doesn’t break up the big banks. Rather, it focuses on how much capital they have to hold to protect themselves from disasters and would “prohibit any further implementation of” the international Basel III accords on financial regulation.

But let’s back up. Banks hold capital to protect against losses. The more capital they hold, the safer they are from crisis. As Alan Greenspan said after the financial meltdown, “[t]he reason I raise the capital issue so often, is that, in a sense, it solves every problem.” The “ratio” in question is the amount of capital against the amount of assets. So, if a bank funds $100 in assets with $10 of its own equity, its capital ratio is 1:10, or 10 percent.

Regulators set minimum capital ratios for banks. A capital ratio is like any other ratio, with a numerator and denominator. Some amount of capital held goes on top, and some value of the assets the bank holds goes on the bottom. The Brown-Vitter legislation would significantly change both parts of that ratio.

This is where things get a bit wonky: Common equity is viewed as the best form of capital because it can directly absorb losses. Basel III puts more emphasis on using common equity than previous versions. There’s a baseline 4.5 percent buffer, which is supplemented by a 2.5 percent “capital conservation buffer.” In addition, Basel III also has requirements for categories of less effective forms of capital, grouped under Tier 1 and Tier 2, or “total capital.”

As for the denominator, Basel III has risk-weighted the assets held by the firms. Firms use models and ratings to determine an asset’s risk. The riskier the asset, the more held capital needed in case of a loss. An asset rated as less risky requires less held capital. (You may remember the financial crisis involved both the ratings agencies and the financial sector getting these ratings very wrong for subprime mortgages.)

The Brown-Vitter proposal would not adopt Basel III. It would instead require a baseline of 10 percent equity in the numerator, consisting solely of common equity, with surcharges for banks holding more than $400 billion in assets. The requirement would apply against all assets, regardless of their risk-weighting, so alongside the significant increase in equity, the denominator would also increase, forcing banks to hold even more capital. This approach has much in common with the recent book “The Banker’s New Clothes,” by Anat Admati and Martin Hellwig, and should be seen as a win for those arguing along these lines.
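To see how the two denominators differ, here is a sketch with a hypothetical balance sheet. The asset mix and risk weights are assumptions for illustration; the 7 percent Basel figure combines the 4.5 percent baseline and the 2.5 percent conservation buffer discussed above.

```python
# Hypothetical balance sheet illustrating the two denominators discussed
# above. The asset mix and Basel-style risk weights are assumptions for
# illustration, not figures from either proposal.

assets = {                    # (value in $bn, assumed risk weight)
    "Treasuries":      (200, 0.0),
    "Residential MBS": (300, 0.5),
    "Corporate loans": (500, 1.0),
}

total_assets = sum(value for value, _ in assets.values())
risk_weighted = sum(value * weight for value, weight in assets.values())

basel_req = 0.07 * risk_weighted   # 4.5% baseline + 2.5% conservation buffer
bv_req = 0.10 * total_assets       # flat 10% of ALL assets, no weighting

print(f"Total assets:           ${total_assets}bn")
print(f"Risk-weighted assets:   ${risk_weighted:.0f}bn")
print(f"Basel III equity need:  ${basel_req:.1f}bn")
print(f"Brown-Vitter need:      ${bv_req:.1f}bn")
```

Because risk-weighting shrinks the denominator, the Basel requirement here is less than half the Brown-Vitter one, even before comparing the headline percentages. That is why dropping risk weights matters as much as raising the ratio.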

Though it might seem like a technicality, risk-weighting assets is as significant in this proposal as a higher capital ratio. Risk-weighting was introduced by the first Basel in the late 1980s, using broad categories. It evolved to, among other goals, encourage firms to build out their risk management teams. However, those teams often acted as regulatory arbitrage teams instead. Many people view the system as encouraging race-to-the-bottom regulation dodging, backward-looking strategies that reduce capital held in a bubble and techniques that use derivatives and bad models to keep capital ratios low.

Regulators are growing more critical, both domestically and internationally, of Basel III’s reliance on risk-weighting. That regulation has several measures to address problems with risk-weighted assets, from adjusting the numbers used to requiring capital for derivative positions. But it is unclear how well these will work in practice.

Basel III has to be enacted by the banking regulators in the United States. The process began last summer (see a summary here). As Federal Reserve Governor Daniel Tarullo notes, regulators are expected to finish the Basel III capital rules this year and begin working on the rules for new liquidity requirements and other parts of Basel III.

It is interesting that the Brown-Vitter bill would replace, rather than supplement or modify, Basel III. Basel III has a leverage requirement that does similar work to the extra equity requirements Brown-Vitter recommends. That rule is only set at 4 percent, instead of 10 percent, but could be raised while keeping the rest of the Basel rules intact.

But even those who want financial institutions to hold a lot more capital and less leverage may see a few downsides to abandoning Basel III. If firms dip into Basel’s newly created capital conservation buffer, they can’t pay dividends and face limits on bonuses. This, to use banking regulation jargon, is a way of requiring “prompt corrective action” from both regulators and firms, who will normally drag their feet.

Basel III isn’t just capital ratios, though. Another important element is its new liquidity requirements. Liquidity here refers to the ability of banks to have enough funding to make payments in the short term, especially if there’s a crisis. Basel III includes a “liquidity coverage ratio,” which requires banks to keep enough liquid funding to survive a crisis.

Financial institutions have been lobbying against an aggressive implementation of Basel III’s liquidity requirements. They saw a small victory when some of the requirements were pulled back in the final rule in January. Brown-Vitter would remove them entirely, a remarkable win for the financial sector if the proposal passes.

(There are already some liquidity requirements put in place since the financial crisis, but they aren’t as extensive as Basel III’s. And because they have evolved consciously alongside Basel III, it’s unclear what would happen to them.)

Note that this bill is explicit in not breaking up the big banks, either with a size cap or by reinstating Glass-Steagall. Two months ago in the House, Rep. John Campbell (R-Calif.) also introduced a bill designed to end Too Big To Fail, which called for banks to hold special convertible debt instruments while also repealing the Volcker Rule. There’s been a lot of talk about conservatives becoming aggressive on structural changes to the financial sector, but so far there’s no evidence of this in Congress.

During the drafting of Dodd-Frank, Treasury Secretary Timothy Geithner argued against Congress writing capital ratios into law, preferring to leave it to regulators at Basel to find an internationally agreed-upon solution. Basel’s endgame is now coming into focus, and there needs to be a debate on how well it addresses our outstanding problems in the financial sector when it comes to bank capital. This bill means reformers might start to rally around the idea that dramatically increasing capital, as well as removing the emphasis given to measuring risks, is an important part of ending Too Big To Fail. Even if that means going against the recent Basel accords.

 


