How Ronald Coase Demolished Current Libertarian Ideas About Property

Sep 3, 2013 | Mike Konczal

Property isn’t a vertical relationship between a person and an object, but instead is a horizontal, reciprocal relationship of exclusions between people. Since the benefit of one person in regard to property comes at the expense of someone else, there’s no logical or coherent way to invoke liberty or classical liberal principles of “do no harm” when it comes to how the law determines the shape of property. All we can do is pick among competing systems that try to achieve shared social goals.

That’s not an idea normally associated with the economist Ronald Coase, who died yesterday at 102. But it’s a very important part of his landmark paper, “The Problem of Social Cost” (1960), that goes missing when the right wing celebrates his legacy. Let’s unpack it.

The paper is meant to address the issue of externalities, or when a third party pays a price (or gets a benefit) as a result of market transactions he or she isn’t a party to. Pollution is the classic example.

The Coase Theorem, as normally stated, argues that in the ethereal world of perfect markets, clear property rights, and no transaction costs, legal rules would change only the distribution of payments surrounding externalities, not their outcomes.

Obligatory example, this one from Coase: someone purchases land next to a railroad line in order to farm it. Passing trains throw off sparks, which damage the crops. The railroad company could modify its trains to stop the sparks. What difference would liability law and regulations make?

Let’s say it costs $100 to install spark guards that would prevent $120 worth of crop damage. In this case, the spark guards get installed. If liability falls on the railroad, it pays the $100. If it doesn’t, the farmer pays the railroad $100 to install the spark guards. If those numbers were reversed, the spark guards wouldn’t get installed: the railroad would just pay $100 for the crop damage to head off a lawsuit if it faced liability, and if it didn’t, the farmer would eat the $100 loss. In both cases, the law doesn’t change what decision gets made if the parties simply bargain together. The only thing that changes is the cash payments. (This does not pan out well in the real world [1].)
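
To see the mechanics, here is a toy sketch in Python of the spark-guard example, using the numbers above; the function and its structure are my own illustration, not anything from Coase’s paper. Under costless bargaining, the liability rule changes who pays, never whether the guards go in.

```python
# Toy model of the spark-guard example: with no transaction costs, the
# efficient choice (install guards or not) is the same under either
# liability rule; only the direction of the payments changes.

def bargain(guard_cost, crop_damage, railroad_liable):
    """Return (guards_installed, railroad_pays, farmer_pays)."""
    if guard_cost < crop_damage:
        # Guards are the cheaper option, so they get installed either way.
        if railroad_liable:
            return True, guard_cost, 0   # railroad installs to avoid paying damages
        return True, 0, guard_cost       # farmer pays the railroad to install them
    # Guards cost more than the damage they prevent, so they never go in.
    if railroad_liable:
        return False, crop_damage, 0     # railroad compensates the farmer
    return False, 0, crop_damage         # farmer eats the loss

for liable in (True, False):
    print(bargain(100, 120, liable))     # (True, ...): guards installed under both rules
    print(bargain(120, 100, liable))     # (False, ...): guards skipped under both rules
```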

What does this have to do with libertarianism? As Barbara Fried notes, Coase defines social costs as “the joint costs of conflicting desires in a world of scarce resources.” This move brings the progressive legal realism of early 20th-century law into economics.

What Coase is overturning is the idea that the scenario above is simply the railroad damaging the crops, and thus that the issue is how to stop or punish the railroad company. Instead, there are multiple valid claims, claims that necessarily put restrictions on others, and the issue is how to balance them.

As Coase says early on about externalities, “The question is commonly thought of as one in which A inflicts harm on B and what has to be decided is: how should we restrain A? But this is wrong. We are dealing with a problem of a reciprocal nature. To avoid the harm to B would inflict harm on A. The real question that has to be decided is: should A be allowed to harm B or should B be allowed to harm A?”

Indeed, the very first thing Coase does in the paper is argue for the “reciprocal” nature of social cost. The crop damage itself isn’t in question. The problem grows out of two people’s desire to exercise their property rights: the railroad’s to run its trains as is, and the farmer’s to grow crops near the tracks. The question is whose property rights we privilege: the railroad’s or the farmer’s? People in law and economics usually dodge this by arguing that bargaining will take care of the (non-distributional) issues, but in the regular world, which is full of transaction costs, these decisions still have to be made.

And this is where Coase is a major problem for current libertarian thinking. Today’s libertarians draw almost their entire philosophy from the idea of “self-ownership” and think that the only role of government is to enforce a minimal, classical liberal version of “do no harm.”

But notice how ideas like non-aggression make no sense in the Coase world. The ideal of self-ownership and minimal government can’t get us out of this problem, because it is precisely what ownership entails that is in question. And to realize one person’s ownership claims would necessarily entail limiting someone else’s. (You can read the hostility that anarcho-capitalist Murray Rothbard had for the Coase Theorem’s “social engineering” here.)

Or as Coase concludes, “We may speak of a person owning land and using it as a factor of production but what the land-owner in fact possesses is the right to carry out a circumscribed list of actions…in choosing between social arrangements within the context of which individual decisions are made, we have to bear in mind that a change in the existing system which will lead to an improvement in some decisions may well lead to a worsening of others.”

The question of which social arrangements are best is the problem we face. Some, however, can’t even see the question.

[1] Three quick examples of the Coase Theorem not panning out in the real world:

Where Do We Send Unemployment Checks? John Donohue looked at a natural experiment from a pilot program in Illinois that sent out a bonus unemployment check of $500 to people who successfully found a job. Some people in the pilot program had the checks sent to themselves, while others, randomly assigned, had the checks sent to their employers. This was a great test of the Coase Theorem, as the people in question had to bargain an employment contract in the first place, so there were no transaction costs.

It turned out there was a significant effect. People were much less likely to participate if their employers received the check. So policy design does matter.

Actual Cattle, Actual Society. In 1989, Robert Ellickson of Yale Law School investigated how rural landowners in California handled livestock trespassing under different liability regimes. What did he find? “The field evidence I gathered suggests that a change in animal trespass law indeed fails to affect resource allocation, not because transaction costs are low, but because transaction costs are high. Legal rules are costly to learn and enforce. Trespass incidents are minor irritations between parties who typically have complex continuing relationships that enable them readily to enforce informal norms.”

Norms and social accounts of obligations are important basic sources of entitlements, as opposed to just abstract bargaining models.

Institutions Matter Too. Much of the more interesting work on cross-country growth has focused on the relative strengths and weaknesses of public institutions like courts, something that shouldn’t matter in the Coase world. For one example from someone in that field, Simon Johnson had a great summary of financial regulation and economic conditions. The key point is that securities law has a strong correlation with economic outcomes, which shouldn’t happen. But it does.


Can President Obama's New Metrics Curb College Costs?

Aug 23, 2013 | Mike Konczal

(Photo Source: White House)

President Obama just announced a major initiative on higher education. Will it contain or reverse rising costs?

I want to discuss the part of it that seems most tailored to containing costs: creating new higher education metrics to compare schools. These metrics will be created by 2015 and will be used by 2018 to determine access to federal dollars such as student loans and Pell grants.

From the fact sheet, the to-be-determined rankings will be based on three things: access, affordability, and outcomes. Access includes “percentage of students receiving Pell grants,” affordability includes “average tuition, scholarships, and loan debt,” and outcomes includes graduation rates and earnings.

Here are my initial thoughts as I try to understand this. The tl;dr version is that it is important that these metrics are used to drive down private costs relative to public, expose administrative bloat, put pressure on the states, and bring accountability to the for-profits. If they don’t do that, they’re a waste on the cost-containment front. Now, here are six more detailed points to consider about how the metrics will be implemented and what effects they will have:

1. The Goals Will Run Counter to Each Other. The efforts to increase graduation rates and have better post-graduation outcomes may require more spending by colleges. Some colleges in each of the meta-categories are likely to be booted for bad performance, or the metrics will make attending the worst-performing colleges so expensive as to drag them into a death spiral. Good as that may be for education, it will collapse the supply of higher education in the short term, putting more price pressure on existing institutions.

Which is to say that we should distinguish efforts to increase quality through access and outcomes from efforts to contain costs. Students graduating on time will make colleges de facto more affordable, and perhaps that is mainly what the president is looking for.  But that is not entirely cost containment.

2. The Student-Consumer or the Government? What’s different here? As Sara Goldrick-Rab and others argue, one reason cost containment has failed in the past “may stem from the financial aid system’s strong focus on the behaviors of ‘student-consumers’ rather than education providers.”

It’s not clear to me why empowering these “student-consumers,” who go about rationally analyzing disclosed data in the marketplace for education, would give them the ability to make the demands necessary to contain costs at universities as a whole. One could see them driving out obviously underperforming institutions from the landscape, but it’s much harder to imagine them forcing institutions to contain costs, at least without political struggle.

Students themselves are quite aware of the increasing costs in the past few years, with endless “click here to know what you are borrowing” measures that likely don’t do much. There’s really little evidence that an additional range of disclosures would make the institutions here more accountable or force them to contain their costs.

Which is to say that we should focus less on disclosure and the consumer regime for cost containment, and more on how the government will force changes itself by making aid less available unless an affordability metric is met.

3. The Obvious Information to Disclose. Talking about “the problem of higher education costs” is a major category error, as the costs vary by institution. The factors that cause community colleges to raise tuition (decreasing public support) are different from those facing for-profits (maximizing aid extraction) or private not-for-profits (maximizing prestige and consumer experience).

Consistent across all of these is the idea that rising administrative expenses are a major driver of costs. This strikes me as the obvious, and perhaps only, metric where the consumer-student could force containment and best practices.

So a very obvious thing to inform consumers of is “how much of my tuition goes to instruction?” If consumer-students want to force down football coach salaries and investment in extravagant non-instructional benefits, this is the most obvious way to do it and can be plastered across every disclosure form.

(Another disclosure I think is important, and one that would be a great way to deal with for-profits, is “how much of my tuition will be paid out to shareholders?” Consumers may or may not be happy paying extra to build a more gigantic football stadium; they are probably not happy paying money that leaves the educational institution entirely.)

4. Taking on Private Universities. It’s worth noting here that these metrics will be applied to private schools as well, using all of the government’s Title IV money (grants and student loans and everything else) as the leverage. And this is probably the major challenge, as private schools will not like this, and they have a lot of political cover. Who among the elite hasn’t gone to a prominent private university?

In a recent editorial on these new metrics, Sara Goldrick-Rab notes the danger that President Obama will “cave to the private higher-education lobby.” For if private higher education’s “expenses are so merited, we should see bigger gains at private elites than we do at less-expensive institutions, not just higher graduation rates. None of that is happening now.”

I’m curious how the metrics will “compare colleges with similar missions.” Will they compare public schools and private schools on the issue of cost containment at a given level of quality? They should, as directly funded public options can drive down the costs of privately allocated goods, but if they do, that will necessarily put a lot of pressure on private schools.

Interestingly, this could lead to a situation where private universities just leave the federal support system. Harvard, for instance, could just say “forget you” to the federal government and fund whatever aid it wants out of its own endowment. This move might split reformers, even though it would likely be for the best.

5. Taking on the States. This is the most incoherent part of Obama’s pitch about the metrics. In the fact sheet, President Obama noted that “[d]eclining state funding has forced students to shoulder a bigger proportion of college costs; tuition has almost doubled as a share of public college revenues over the past 25 years from 25 percent to 47 percent.” Yet at the same time he talks about bloat and waste as drivers. Both could be true, but if the first is a main driver then individual rankings of schools will have a problem.

One way to balance this would be to rank states themselves alongside schools. Demos proposes “an additional ratings system: why don’t we rate state legislatures on their per-student investment in higher education?” This could be useful in giving people in different states a much better sense of what their public higher education looks like. Crucially, it would also adjust for the fact that state education systems function as a continuum with multiple levels and transfers up and down the educational ladder.

6. Political Battles. A lot of commentators are arguing this is a battle between President Obama and liberal professors, so it is unlikely to trigger GOP opposition. I’m not sure about that. The people who will disproportionately end up in the crosshairs if this is done well, as listed above, are (a) administrators taking inflated salaries, (b) private and flagship schools that provide little value at very high cost, and (c) for-profits.

I think Josh Barro misses that for-profit schools are a major GOP constituency. George W. Bush’s Assistant Secretary for Postsecondary Education, Sally Stroup, was a former University of Phoenix lobbyist, and led a successful effort to remove restrictions on for-profit schools. On the campaign trail, Mitt Romney name-dropped a for-profit school that happened to donate to him. Insofar as the Obama administration will try to use these metrics to get a second bite at curbing the for-profit industry as it failed to do in its first term, that will set off alarm bells.

Meanwhile, as noted above, basically every elite within 100 yards of D.C. politics, particularly in elite media and Democratic politics (e.g. “He was my professor actually at Harvard”), functions like a member of a private higher education lobby. How will they react if the hammer comes down there?

There’s a lot of emphasis on getting poor students on Pell grants into high-end schools. That is a good goal. However, the issues with costs and higher education go far beyond this and affect families who are not rich but don’t qualify for means-tested aid. They are the ones who will increasingly demand cost containment.

Something will eventually give. The question remains as to whether or not these metrics will be used to drive down private costs relative to public, expose administrative bloat, put pressure on the states, and bring accountability to the for-profits. If they do, it’s a positive sign; if not, a waste or worse when it comes to cost containment.


Guest Post: O Canada and Its Housing Market

Aug 14, 2013 | David Min

Mike here. Over the weekend I wrote a post at Wonkblog, "In Defense of the 30 Year Mortgage." Many people have responded to this idea by bringing up the housing market of our neighbors in Canada. In order to keep this conversation running, I have a guest post by David Min, friend of the blog and a University of California, Irvine law professor. Take it away, David:

Does Canada prove the 30-year fixed-rate mortgage is of limited value? Here’s Matt Yglesias from last week:

If you cross the border into Canada it's not like people are living in yurts. It works fine. But since homebuyers have to carry a bit more interest rate risk, they seem to purchase slightly smaller houses. Alternatively if you imagine a jumbo loan scenario where the 30-year fixed rate mortgage lives but with systematically higher interest rates, you'd find that people would have to respond by purchasing slightly smaller houses. And it's not a coincidence that Americans live in the biggest houses in the world.

As I’ve outlined in the past, the dominant mortgage product in Canada is a five-year fixed-rate mortgage, amortized over 25 years, that essentially requires refinancing every five years. This product leaves borrowers open to two important types of mortgage-related risk.

First, there is the risk that interest rates will rise significantly between the time the loan is first originated and the time that it must be refinanced, causing a payment shock that the borrower may not be able to afford. Second, there is the risk that when the loan comes due, there may not be refinancing options available to the borrower, either because the property has declined in value so much that the loan does not meet loan-to-value requirements, or perhaps because banks have reduced their lending due to a credit contraction.

For what it’s worth, Canada has historically had a greater government involvement in its housing finance system, through a combination of government-backed mortgage securitization and mortgage insurance offered by the Canada Mortgage and Housing Corporation (an entity similar in many ways to Fannie and Freddie), as well as governmental reinsurance for all mortgage insurance, which in total accounts for some 70-80 percent of all Canadian home loans. So if you’re looking to Canada as a model of getting the government out of housing finance, look again (and don’t look to Europe, which also has very high levels of government guarantees for housing finance, as I explained recently in congressional testimony).

As to Matt’s broader point about Canadian mortgage finance, there is no question that we can have a housing finance system without the 30-year FRM that drives sufficient capital into housing to meet our needs (both for owner-occupied and rental housing), but that’s not the point of the debate over the 30-year FRM. The key difference between Canada’s five-year FRM and the American 30-year FRM is that the former leaves interest rate risk (and refinancing risk) with consumers, whereas the latter leaves rate risk (and prepayment risk) with financial institutions such as banks, pension funds, and insurance companies.

The key question is whether interest rate risk is better placed with households or with banks and investors. Those of us who favor the 30-year FRM argue that this risk should be placed with the latter, who are better equipped to handle this risk. The available evidence suggests that average mortgage borrowers do not attempt to predict what mortgage rates will be five years down the line. And even if they could do this, they lack access to the financial instruments that might allow them to hedge against this risk. Conversely, banks and MBS investors already spend quite a lot of resources trying to protect against interest rate volatility.  

Moreover, when households are unable to deal with interest rate risk, they are unable to make their mortgage payments. This creates a double whammy insofar as higher rate risk for borrowers means higher credit risk for banks and investors. Thus, from a systemic stability standpoint, it seems to make more sense to place rate risk with financial institutions rather than with consumers.

Neither the U.S. nor Canada has experienced significant interest rate increases since the early 1980s, so the difference between the five- and 30-year FRMs has largely been a theoretical debate since that time. But as Karl Case (the economist who helped create the eponymous Case-Shiller home price index) has noted, we have at least one important data point from that last episode of interest rate volatility that suggests the 30-year FRM is preferable from a financial stability standpoint.

Both Vancouver and California had housing booms in the late 1970s, and both of course went through the double-digit interest rate increases of the early 1980s, which led to U.S. mortgage rates settling at about 17-18 percent. Then, as now, the dominant mortgage in the U.S. was the 30-year FRM and the dominant mortgage in Canada was the five-year FRM. Vancouver and California experienced starkly different housing markets in response to this interest rate volatility. Because Canadian mortgages were designed to be refinanced every few years, Canadian borrowers faced enormous payment shocks (with mortgage payments doubling or tripling), which resulted in a huge housing bust, with Vancouver experiencing a 60 percent (!) home price decline in the early 1980s. Conversely, California experienced a few years of a stagnant housing market in which potential sellers simply held onto their existing mortgages, and prices never fell in nominal terms.
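
The payment shock in that episode is just the standard amortization formula at work. Here is a minimal sketch, with my own illustrative numbers (a loan originated at 7 percent and rolled over at 18 percent; Case’s actual data isn’t reproduced here):

```python
# Payment shock on a Canadian-style rollover mortgage: amortized over 25
# years, but repriced at the 5-year refinancing under early-1980s rates.

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** (-years * 12))

def balance_after(principal, annual_rate, total_years, elapsed_years):
    """Remaining balance after elapsed_years of payments on a total_years schedule."""
    r = annual_rate / 12
    pay = monthly_payment(principal, annual_rate, total_years)
    bal = principal
    for _ in range(elapsed_years * 12):
        bal = bal * (1 + r) - pay
    return bal

loan = 100_000
old_pay = monthly_payment(loan, 0.07, 25)     # originated at 7%
bal = balance_after(loan, 0.07, 25, 5)        # balance at the 5-year rollover
new_pay = monthly_payment(bal, 0.18, 20)      # refinanced at 18% for the remaining 20 years
print(f"${old_pay:,.0f} -> ${new_pay:,.0f}")  # the payment roughly doubles
```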

This limited historical data suggests that the U.S. 30-year FRM is a more systemically stable product than the shorter duration rollover loan that is popular in Canada. Within the United States, of course, there is ample evidence that the 30-year FRM performs far better than short-term rollover loans. During the Great Depression, the delinquency rates on short-term rollover loans reached 50 percent, as underwater borrowers were unable to find sources of refinancing (sound familiar?). More recently, adjustable-rate mortgages experienced delinquency rates that were two to three times higher than fixed-rate mortgages made to comparable borrowers, as both the Federal Housing Finance Agency and the Mortgage Bankers Association have found.

All of this evidence suggests that critics of the 30-year FRM should tread a little more carefully before trashing the benefits of this particular product.


Denialism and Bad Faith in Policy Arguments

Aug 14, 2013 | Mike Konczal

Here’s the thing about Allan Meltzer: he knows. Or at least he should know. It’s tough to remember that he knows when he writes editorials like his latest, "When Inflation Doves Cry." This is a mess of an editorial, a confused argument about why huge inflation is around the corner. “Instead of continuing along this futile path, the Fed should end its open-ended QE3 now... Those who believe that inflation will remain low should look more thoroughly and think more clearly.”

But he knows. Because here’s Meltzer in 1999 with "A Policy for Japanese Recovery": “Monetary expansion and devaluation is a much better solution. An announcement by the Bank of Japan and the government that the aim of policy is to prevent deflation and restore growth by providing enough money to raise asset prices would change beliefs and anticipations.”

He knows that there’s an actual debate, with people who are “thinking clearly,” about monetary policy at the zero lower bound as a result of Japan. He participated in it. So he must have been aware of Ben Bernanke, Paul Krugman, Milton Friedman, Michael Woodford, and Lars Svensson all also debating it at the same time. But now he’s forgotten it. In fact, his arguments for Japan are the exact opposite of what they are now for the United States.

This is why I think the Smithian “Derp” concept needs fleshing out as a diagnosis of our current situation. (I’m not a fan of the word either, but I’ll use it for this post.) For those not familiar with the term, Noah Smith argues that a major problem in our policy discussions is “the constant, repetitive reiteration of strong priors.” But if that were the only issue, Meltzer would support more expansion, as he did for Japan!

Simply blaming reiteration of priors is missing something. The problem here isn’t that Meltzer may have changed his mind on his advice for Japan. If that’s the case, I’d love to read about what led to that change. The problem is one of denialism, where the person refuses to acknowledge the actually existing debate, and instead pantomimes a debate with a shadow. It involves the idea of a straw man, but sometimes it’s simply not engaging at all. For Meltzer, the extensive debate about monetary policy at the zero lower bound is simply excised from the conversation, and people who only read him will have no clue that it was ever there.

There’s also another dimension that I think is even more important, which is whether or not the argument, conclusions, or suggestions are made in good faith. Eventually, this transcends the “reiteration of strong priors” and becomes an updating of the case but a reiteration of the conclusion. Throughout 2010 and 2011, people made an endless series of arguments about how a long-term fiscal deal would help with the ongoing recession, without any credible evidence that it would help our short-term economy. But that’s what people wanted to do anyway, so they acknowledged the fresh problem and simply plugged in their pre-existing solutions. The same was true of Mitt Romney’s plan for the economy, which wasn’t specific to 2012 in any way.

Bad faith solutions don’t have to be about things you wanted to do anyway. Philip Mirowski’s new book makes a fascinating observation about conservative think tanks when it comes to global warming. On the one hand, they have an active project arguing global warming isn’t happening. But on the other hand, they also have an active project arguing global warming can be solved through geoengineering the atmosphere. (For an example, here’s AEI arguing worries over climate change are overblown, but also separately hosting a panel on geoengineering.)

So global warming isn’t real, but if it is, heroic atmospheric entrepreneurs will come in at the last minute and save the day. Thus, you can have denialism and bad-faith solutions in play at the same time.

The fact that we can get to the denial-and-bad-faith corner makes me think this can be generalized and charted on a grid, but I still feel it’s missing some dimensions. What Smith identifies is real, but I’m not sure how to place it on these axes. What do you make of it?


Whatever Happened to the Economic Policy Uncertainty Index?

Aug 6, 2013 | Mike Konczal

Jim Tankersley has been doing the Lord’s work by following up on questionable arguments people have made about our current economic weakness being something other than a demand crisis. First, he asked Alberto Alesina about how all that expansionary austerity is working out from the vantage point of this year. Now he looks at the Economic Policy Uncertainty (EPU) index (Baker, Bloom, Davis) as it stands halfway into 2013.

And it has collapsed. The EPU index has been falling rapidly, hitting 2008 levels. Yet the recovery doesn’t seem to be speeding up at all. Wasn’t that supposed to happen?

I’ve been meaning to revisit this index from when I looked at it last fall, and this is a good time to do so. It’s worth unpacking what actually drove the increase in EPU during the past five years, and understanding why there was little reason to believe it reflected uncertainty causing a weak economy. If anything, the relationship is clearly the other way around.

Let’s make sure we understand the uncertainty argument: the increase in EPU “slowed the recovery from the recession by leading businesses and households to postpone investment, hiring and consumption expenditure.” (To give you a sense, in 2011 the authors argued in editorials that this index showed that the NLRB, Obamacare and "harmful rhetorical attacks on business and millionaires" were the cause of prolonged economic weakness.)

As commenters pointed out, it would be easy to construct an index in which the causation is spurious or even runs the other way. If weak growth can cause the Economic Policy Uncertainty index to skyrocket, then it’s not clear the narrative holds up. “There’s uncertainty over whether or not Congress and the Federal Reserve will aggressively fight the downturn” isn’t what the index is trying to measure, but that’s what it seems to be capturing.

Let’s take a look at the graph of EPU. When most people discuss this, they argue that the peaks tell them the index is onto something, as it peaks during periods of major confusion (9/11, Lehman bankruptcy, debt ceiling showdown).

But what is worth noting, and what drives the results in a practical way, is the increase in the level during this time period. And that happens immediately in January 2009.

How does economic policy uncertainty jump on the first day of 2009? The index has three parts. The first is a newspaper search for the phrase “economic policy uncertainty.” I discussed that last fall, arguing that it was mostly capturing Republican talking points and the discipline of the GOP machine rather than actual analysis.

The second is relevant here: the total of tax provisions set to expire in the near future. (In the first version of the paper this was the total number of tax provisions; in the current version it’s the total dollar amount of those provisions.) The total is heavily discounted, so tax cuts expiring in a year or two are weighted much more heavily than those further in the future.
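To make the mechanics concrete, here’s a minimal sketch of how a discounted expiration component like this might work. To be clear, this is my illustration, not the authors’ actual code: the discount factor and dollar figures are invented.

```python
# Hypothetical sketch of a discounted tax-expiration component. This is
# not the Baker-Bloom-Davis code; the discount factor and dollar figures
# are invented to show how near-term expirations dominate the measure.

def tax_expiration_component(provisions, discount_per_year=0.5):
    """Sum the dollar amounts of expiring provisions, discounted by
    how many years away each expiration is."""
    return sum(
        dollars * discount_per_year ** years_out
        for dollars, years_out in provisions
    )

# (dollar amount in $bn, years until expiration) -- illustrative only
before_stimulus = [(100, 5)]            # distant expirations barely register
after_stimulus = [(100, 5), (300, 2)]   # stimulus tax cuts expiring in 2 years
print(tax_expiration_component(before_stimulus))  # 3.125
print(tax_expiration_component(after_stimulus))   # 78.125 -- a mechanical jump
```

Notice that nothing in this calculation asks why the provisions were passed; a big package of temporary countercyclical tax cuts mechanically spikes the component.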

What does this look like over the past few years?

So what happened starting in early 2009? The stimulus, of course. And the stimulus was in large part tax provisions that were set to expire in two years. This mechanically increased measured economic policy uncertainty, even though it was a policy response designed to boost demand. Also, the Bush tax cuts were approaching their endgame, and the algorithm gave them a disproportionate weight as they entered their last two years.

Then, in late 2010, the Bush tax cuts and some tax provisions from the stimulus were extended to provide additional stimulus to the economy while it was still weak.

Here’s how the creators of the index describe this move: “Congress often decides whether to extend them at the last minute, undermining stability of and certainty about the future path of the tax code... Similarly, the 2010 Payroll Tax Cut was a large tax decrease initially set to expire in 1 year but was twice extended just weeks before its expiration.”

But this decision was not orthogonal to the state of the economy. A major reason the administration waited and then extended the Bush tax cuts and the payroll tax cut was that the economy was still weak, and they wanted to boost demand. The only policy uncertainty here was how aggressive and successful the administration would be in securing additional stimulus, which was itself a function of the weakness of the economy. To retroactively argue that the government’s actions in securing additional demand created the crisis it was trying to fight requires an additional argument that is never supplied.

The third part of their index has the same issue. It draws on a literature (e.g. here) that uses disagreement among professional forecasters (the dispersion of their predictions) as a proxy for uncertainty -- disagreement about predicted inflation and about both state and federal spending, one year in advance.
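To see why dispersion is such a slippery proxy, here’s a toy sketch of the idea. The forecast numbers are invented, not actual Survey of Professional Forecasters data, and this is not the index authors’ exact construction.

```python
# Toy sketch of forecast dispersion as an "uncertainty" proxy. Invented
# numbers, not SPF data, and not the authors' exact method.
import statistics

def forecast_dispersion(forecasts):
    """Interquartile range of one-year-ahead forecasts."""
    q1, _, q3 = statistics.quantiles(forecasts, n=4)
    return q3 - q1

calm_year = [2.0, 2.1, 2.2, 2.1, 2.0]      # forecasters roughly agree
crisis_year = [0.5, 2.5, -1.0, 3.0, 1.0]   # nobody knows how policy will respond
print(forecast_dispersion(calm_year))      # small
print(forecast_dispersion(crisis_year))    # large -- driven by the collapse itself
```

The measure can’t distinguish disagreement caused by erratic policy from disagreement caused by a collapsing economy that policy is scrambling to answer.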

The problem comes from trying to push their definition of EPU onto these disagreements. Debates over how much the federal government will spend through stimulus, how rough the austerity will be at the state level, or how well Bernanke will be able to hit his inflation target -- the disagreements that drive this part of the index -- are really debates about the reaction to the crisis. The dispersion will increase if people can’t figure out how aggressively the state will respond to a major collapse in spending. But this is a function of a collapsing economy and how well the government responds to it, not the other way around.

This is why we should ultimately be careful with studies that take this index and plop it into, say, a Beveridge Curve analysis. As Tankersley notes, the government decided to fight a major downturn with stimulus, and the subsequent move away from stimulus before full employment hasn’t helped the economy. In other breaking news, if you carry an umbrella because it is raining and then toss the umbrella away, that doesn’t make it stop raining.


What did FDR Write Inside His Copy of the Proto-Keynesian Road to Plenty?

Aug 2, 2013Mike Konczal

File under: Marginalia Fridays.

In 1928 William Foster and Waddill Catchings wrote The Road to Plenty. A university president and a Goldman Sachs financier, respectively, these two had a serious interest in studying business cycles, and had an idea of what they thought might be happening. This book presented a theory that was proto-Keynesian eight years before the General Theory.

Let's get a summary of that book from Elliot A. Rosen's Roosevelt, the Great Depression, and the Economics of Recovery: "[The Road to Plenty] claimed that sustained production required sustained consumer demand, a counter to Say's law of markets, or classical theory, which held that consumer demand followed automatically from capital consumption. Foster and Catchings explained underconsumption partly in terms of consumer reluctance to spend when prices fell and also in terms of price distortions, maldistribution of income, and the tendency of business to finance capital requirements from earnings, thus sterilizing savings. The result was industrial overcapacity as consumer purchasing power declined. Public works would be required periodically to stimulate purchasing power."

Franklin Delano Roosevelt, before he was President, had a copy of the book. What did he write in his copy of the book in 1928, right as the Great Depression was gearing up?

Thankfully, our friends at the FDR Presidential Library, who do an excellent job of keeping the records of the 20th Century's greatest President, were able to snap a picture and send it to me:

FDR's writing:

In case you can't see it, it says "Too good to be true - you can't get something for nothing." Hmmm.

Though Roosevelt didn't buy it at first, he thankfully evolved on the issue later. One lucky reason: a big fan of the book was a Utah banker who read it intensely starting in 1931, when the Depression seemed like it would never end, much less reverse. That man's name was Marriner Stoddard Eccles. The rest, as they say, is history. (Except it's not, because we are currently fighting this all over again.)

The book itself is a series of conversations among strangers in a Pullman car about what is going on in the economy. A typical page:

'But I cannot see,' objected the Professor, 'how the savings, either of corporations or of individuals, cause the shortage of which you speak. The money which industry receives from consumers and retains as undistributed profits is not locked up in strong boxes. Most of it is deposited in banks, where other men may borrow it and pay it out. So it flows on to consumers. [....] Once you take account of the fact that money invested is money spent, you see that both individuals and corporations can save all they please without causing consumer buying to lag behind the production of consumers' goods.'

'Yes,' the Business Man replied, 'I am familiar with that contention, but it seems to me unsound. Of course it is true that a considerable part of money savings are deposited in banks, where the money is available for borrowers. But the fact that somebody may borrow the money and pay it out as wages, is immaterial as long as nobody does borrow it. Such money is no more a stimulus to business than is gold in the bowels of the earth.'

(Seem familiar?)


Yellen, Summers and Rebuilding After the Fire

Jul 24, 2013Mike Konczal

There is no Bernanke Consensus. This is important to remember about our moment, and about how to evaluate what comes next for the Federal Reserve. What we have instead is the Bernanke Improvisation, a series of emergency procedures to try to keep the economy from falling apart, and perhaps even guide it back to full employment, after normal monetary policy hit a wall.

With the rumor mill circulating that Larry Summers could be the next Federal Reserve chair instead of Janet Yellen, it’s worth understanding where the Fed is. Bernanke has been like a fireman trying to put out a fire since 2008. What comes next is the rebuilding. What building codes will we have? What precautions will we take to prevent the next fire, and what are the tradeoffs?

This makes the next FOMC chair extremely important. While you are inside a burning building, what the fireman is doing is everything. But deciding how to rebuild will ultimately make the big difference for the next 30 years.

The next FOMC chair will have three major issues to deal with during his or her tenure. The first is to determine when to start pushing on the brakes, and thus where we’ll hit “full employment.” The second is to decide how aggressively to enforce financial reform rules [1]. Those are pretty important things!

But the new FOMC chair has an even bigger responsibility. He or she will also have to figure out a way to rebuild monetary policy and the Federal Reserve so that we won’t have a repeat of our current crisis. And in case you’ve missed the half-a-lost-decade we’ve already gone through, this couldn’t be more important.

Monetary policy itself could be rebuilt in a number of directions. It could give up on unemployment, perhaps keeping the economy permanently in a quasi-recession to somehow boost a notion of “financial stability” instead. Or it could evolve in a direction designed to avoid the prolonged recession we just had, which could involve a higher inflation target or targeting something like nominal GDP.

But the default, like many things in life, is that inertia will win out, and some form of muddling forward will continue on indefinitely. The Federal Reserve will maintain a low inflation target that it always falls short of, and the economy will never run at its peak capacity. Attempts at better communications and priorities will be abandoned. And even minor recessions will run the risk of hitting the liquidity trap, making them far worse than they need to be.

The inertia problem is why having a consensus builder and convincer in charge is key, and it is a terrible development that these traits are being coded as feminine and thus weak. As a new governor in 1996, Janet Yellen marshaled the evidence to convince Alan Greenspan that targeting zero percent inflation was a bad idea. (Could you imagine this recession if inflation had already been hovering a little above zero in 2007?) The next chair will be asked to gather much more complicated evidence to make even harder decisions about the future of the economy - and Yellen has a proven track record here.

Yellen has been at the forefront of all these debates. As Cardiff Garcia writes, she runs the subcommittee on communications and has spent a great deal of time trying to figure out how these unorthodox policies impact the economy. The debate about what constitutes full employment has become muted among liberal economists because unemployment has been so high, but it will come back to the fore after the taper hits. Yellen has been thinking about this all along. Crucially, she has come the closest of any high-ranking Fed official to endorsing a major shift of current policy - in this case, to something like a nominal spending target. This will become important to however we rebuild after this crisis.

As a quick history lesson, there were two major points where a large battle broke out on monetary stimulus. The first was the spring and summer of 2010, when there were serious worries about a double-dip recession. This ended when Bernanke announced QE2, which immediately collapsed market expectations of deflation. The second was in the first half of 2012, when an intellectual consensus was built around tying monetary policy to future conditions, ending with the adoption of the Evans Rule.

I can’t find Larry Summers commenting on either of these situations, either in high-end academic debates or in the wide variety of op-eds he’s written. The commenters at The Money Illusion couldn’t find a single instance of Summers suggesting that monetary policy was too tight in the past five years. Summers was simply missing in action for the most important monetary policy debates of the past 30 years, while Yellen was leading them. And trying to shift from those debates into a new status quo will be the responsibility of the next FOMC chair.

 

 

[1] Given what this blog normally covers, I’d be remiss to not mention housing and financial reform. During the Obama transition, Larry Summers promised “substantial resources of $50-100B to a sweeping effort to address the foreclosure crisis” as well as “reforming our bankruptcy laws.” This letter was crucial in securing votes from Democrats like Jeff Merkley for the second round of TARP bailouts. A recent check showed that the administration ended up using only $4.4 billion on foreclosure mitigation through the awful HAMP program, while Summers reportedly was not supportive of bankruptcy reform.

And as Bill McBride notes, Yellen was making the correct calls on the housing bubble and its potential damage while Summers was attacking those who thought financial innovation could increase the risks of a panic and crash.

It’s difficult to overstate how important the Federal Reserve is to financial regulation. Did you catch how the Federal Reserve needs to decide about the future of finance and physical commodities soon, with virtually no oversight or accountability? Even if you think Summers gets a bum rap for deregulation in the 1990s, you must believe that his suspicion of skepticism toward finance - for instance, the reporting on his opposition to the Volcker Rule - is not what our real economy needs while Dodd-Frank is being implemented.


Brooks’s Recovery Gender Swap

Jul 17, 2013Mike Konczal

How are men doing in our anemic economic recovery? David Brooks, after discussing his favorite Western movie, argues in his latest column, Men on the Threshold, that men are "unable to cross the threshold into the new economy." Though he'd probably argue that he's talking about generational changes, he focuses on a few data points from the current recession, including that "all the private sector jobs lost by women during the Great Recession have been recaptured, but men still have a long way to go."

Is he right? And what facts can we establish about the current recovery when it comes to men versus women?

Total Employment

Men had a harder crash during the recession, but a much better recovery, when compared with women.

Indeed, during the first two years of the recovery expert analysis was focused on a situation that was completely reversed from Brooks' story. The question in mid-2011 was "why weren't women finding jobs?" Pew Research put out a report in July 2011 finding that "From the end of the recession in June 2009 through May 2011, men gained 768,000 jobs and lowered their unemployment rate by 1.1 percentage points to 9.5%. Women, by contrast, lost 218,000 jobs during the same period, and their unemployment rate increased by 0.2 percentage points to 8.5%."

How does that look two years later? Here's a graph of the actual level of employment by gender from the Great Recession onward:

If you squint you can see how women's employment is flat throughout 2011, when men start gaining jobs. Since the beginning of 2011, men have gotten around 65 percent of all new jobs. That rate started at 70 percent and has declined to around 60 percent now. So it is true, as Brooks notes, that women are approaching their old level of employment. But the idea that the anemic recovery has been biased against men is harder to understand. The issue is just a weak recovery - more jobs would mean more jobs for both men and women, and especially for men.
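The arithmetic behind that share is simple. A quick sketch with invented employment levels (the real figures come from BLS household survey data):

```python
# The share-of-new-jobs arithmetic, with invented employment levels
# (in thousands); the real figures come from BLS household survey data.
men_2011, men_now = 64_000, 67_250
women_2011, women_now = 62_000, 63_750

men_gain = men_now - men_2011          # 3,250
women_gain = women_now - women_2011    # 1,750
male_share = men_gain / (men_gain + women_gain)
print(f"{male_share:.0%} of net new jobs went to men")  # 65%
```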

Occupations

But maybe the issue is the occupations that men are now working. As Brooks writes, "Now, thanks to a communications economy, [men] find themselves in a world that values expressiveness, interpersonal ease, vulnerability and the cooperative virtues." This is a world where they either can't compete, or won't. The testable hypothesis is that men are doing poorly in occupations that are traditionally female dominated.

However, the data shows that men are moving into female-dominated occupations and taking a large majority of the new jobs there.

How has the gendered division of occupations evolved since 2011? Here is first-quarter data from 2011 and 2013 on occupations by gender from the CPS. As a reminder, your occupation is what you do, while your industry is what your employer does. Occupation data is much noisier, hence the move to quarterly data:

Ok that's a mess of data. What should we be looking for in this?

First off, men are moving into occupations that have been traditionally gender-coded female. Office support jobs, which Bryce Covert and I found were a major driver of overall female employment decline from 2009-2011, are now going to men. Men have taken 95 percent of new jobs in this occupation, one that was only about 26 percent male in 2011. We also see men taking a majority of jobs in the male-minority service occupations. Men are also gaining in sales jobs even while the overall number of jobs is declining. That's a major transformation happening in real time.

(Meanwhile, it's not all caring work and symbolic analysts out there. There's a massive domestic energy extraction business booming in the United States, and those jobs are going to men as well. If you break the data down into suboccupations this becomes very obvious. Men took essentially all of the more than 600,000 new "construction and extraction" jobs, for instance.)
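For anyone who wants to replicate this kind of cut, here's a toy version of the occupation comparison. The rows and counts are invented (chosen to echo the shares cited above), not actual CPS output.

```python
# Toy version of the occupation-by-gender comparison. The rows and
# counts below are invented (in thousands) to mirror the shares cited
# in the text; the real analysis uses CPS quarterly microdata.
import pandas as pd

df = pd.DataFrame({
    "occupation": ["office support", "sales", "construction/extraction"],
    "men_2011":   [4_900, 7_200, 6_000],
    "women_2011": [14_000, 7_800, 200],
    "men_2013":   [5_280, 7_300, 6_600],
    "women_2013": [14_020, 7_600, 210],
})

df["total_change"] = (df.men_2013 + df.women_2013) - (df.men_2011 + df.women_2011)
df["male_share_2011"] = df.men_2011 / (df.men_2011 + df.women_2011)
df["male_share_of_change"] = (df.men_2013 - df.men_2011) / df.total_change

# Note: the share of change is not meaningful when total employment in
# an occupation shrinks (as in the sales row here).
print(df[["occupation", "male_share_2011", "total_change", "male_share_of_change"]])
```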

It'll be interesting to see how extensive the movement of men into traditionally female jobs will be, and to what extent it will challenge the nature of both the men and the work. Much of the structure of service work in the United States comes from the model of Walmart, and that comes from both Southern, Christian values and a model of the role women play in kinship structures and communities.

As Sarah Jaffe notes in her piece A Day Without Care, summarizing the work of Bethany Moreton, "Walmart...built its global empire on the backs of part-time women workers, capitalizing on the skills of white Southern housewives who’d never worked for pay before but who saw the customer service work they did at Walmart as an extension of the Christian service values they held dear. Those women didn’t receive a living wage because they were presumed to be married; today, Walmart’s workforce is much more diverse yet still expected to live on barely more than minimum wage."

How will men react when faced with this? And how will their bosses counter?


Mirowski on the Vacuum and Obscurity of Current Economics

Jul 9, 2013Mike Konczal

I just finished reading Philip Mirowski’s Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. It’s fantastic, wonderfully dense, packed with ideas running from Foucault through how game theorists botched the TARP auction design. It provides a stunningly detailed summary of topics economics bloggers would be interested in, ranging from the debate over the Efficient Market Hypothesis after the crisis, to the structure of the Mont Pelerin Society, to the way conservatives spread the idea that the GSEs caused the financial crisis. If you like Mirowski’s other works, you'll love this. Mirowski is an economist and a historian, and has a knack for showing the evolving arguments, justifications, and contexts for economic ideas and approaches. I'm writing a longer review of it, but I'll be bringing up pieces of it here.

I wanted to include this part on the issue of the vacuousness within economics at this moment. Mirowski:

“Third, it would appear that the corporeal solidity of a live intellectual discipline would be indicated by consensus reference texts that help define what it means to be an advocate of that discipline. Here, I would insist that undergraduate textbooks should not count, since they merely project the etiolated public face of the discipline to the world. But if we look at contemporary orthodox economics, where is the John Stuart Mill, the Alfred Marshall, the Paul Samuelson, the Tjalling Koopmans, or the David Kreps of the early twenty-first century? The answer is that, in macroeconomics, there is none. And in microeconomics, the supposed gold standard is Andreu Mas-Colell, Michael Whinston, and Jerry Green (Microeconomic Theory), at its birth a baggy compendium lacking clear organizing principles, but now slipping out of date and growing a bit long in the tooth. Although often forced to take econometrics as part of the core, there is no longer any consensus that econometrics is situated at the heart of economic empiricism in the modern world. Beyond the graduate textbooks, the profession is held together by little more than a few journals that are designated indispensable by some rather circular bibliometric measures, and the dominance of a few highly ranked departments, rather than any clear intellectual standards. Indeed, graduates are socialized and indoctrinated by forcing them to read articles from those journals with a half-life of five years: and so the disciplinary center of gravity wanders aimlessly, without vision or intentionality. The orthodoxy, so violently quarantined and demarcated from outside pretenders, harbors a vacuum within its perimeter.

Fourth, and finally, should one identify specific models as paradigmatic for neoclassical economics, then they are accompanied by formal proofs of impeccable logic which demonstrate that the model does not underwrite the seeming solidity of the textbooks. Neoclassical theory is itself the vector of its own self-abnegation. If one cites the canonical Arrow-Debreu model of general equilibrium, then one can pair it with the Sonnenschein-Mantel-Debreu theorems, which point out that the general Arrow-Debreu model places hardly any restrictions at all on the functions that one deems “basic economics,” such as excess demand functions. Or, alternatively, if one lights on the Nash equilibrium in game theory, you can pair that with the so-called folk theorem, which states that under generic conditions, almost anything can qualify as a Nash equilibrium. Keeping with the wonderful paradoxes of “strategic behavior,” the Milgrom-Stokey “No Trade theorem” suggests that if everyone really were as suspicious and untrusting as the Nash theory makes out, then no one would engage in any market exchange whatsoever in a neoclassical world. The Modigliani-Miller theorem states that the level of debt relative to equity in a bank’s balance sheet should not matter one whit for market purposes, even though finance theory is obsessed with debt. Arrow’s impossibility theorem states that, if one models the polity on the pattern of a neoclassical model, then democratic politics is essentially impotent to achieve political goals. Markets are now asserted to be marvelous information processors, but the Grossman-Stiglitz results suggest that there are no incentives for anyone to invest in the development and refinement of information in the first place. The list just goes on and on. It is the fate of the Delphic oracles to deal in obscurity.” (p. 24-26)

Konczal here. The entire book is that intense. A few thoughts:

- To put the first point a different way, the complaint I hear most about the major graduate textbooks is that they function as cookbooks: books full of simple recipes, each designed to do a single thing. Beyond micro, this is especially true for the major texts in macroeconomics and econometrics. The macroeconomics text in particular gives the sense that it’s designed to pull attention away from the major visions and toward little puzzle pieces that don’t connect into any kind of bigger picture.

Scanning the "What's New?" section of the 2012 edition of the standard, entry-level graduate macro text, I find nothing new addressing the crisis. (If it is mentioned, I didn't see it.) If you are an energetic, smart graduate student who really wants to dissect the economic crisis, you essentially have to sit out the first half of your macroeconomic coursework before you get to something that has to do with a recession as a regular person would understand it. It's clear what has priority within the education of new economists.

I wonder how much the move to empirical methods and experiments is less about access to computing power and data sets (or, ha, issues of falsification), and more about the fact that innovation on the theory side has broken down, making it nearly impossible to break new ground there. How many enfants terribles in economics are theorists these days? I assume any substantial break from standing theory means immediate exclusion from the tenure-setting journals.

- I love this magic trick analogy from Mirowski for frictions within DSGE: “By thrusting the rabbit into the hat, then pulling it back out with a different hand, the economist merely creates a model more awkward, arbitrary, and unprepossessing [that also] violate[] the Lucas critique in a more egregious fashion than the earlier Keynesian models these macroeconomists love to hate” (p. 284).

- The mention of the “excess demand function” reminded me to ask whether stability issues are covered anymore. The book The Assumptions Economists Make makes a big deal about the lack of stability analysis in how economists discuss general equilibrium (also see Alejandro Nadal here).

To clarify in English, have you ever heard of the “Invisible Hand” metaphor? Markets equilibrate supply and demand with prices across the whole economy. Stability is the question of “under what circumstances (if any) does a competitive economy converge to equilibrium, and, if it does, how quickly does this happen?”
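For the mathematically inclined, the classic formalization of that question is Walrasian tâtonnement, where prices move in the direction of excess demand. A compressed sketch (my notation, not a quote from any of these texts):

```latex
% Walrasian tatonnement: each price adjusts in the direction of its
% excess demand z_i(p), at some speed k_i > 0.
\[
\frac{dp_i}{dt} = k_i \, z_i(p), \qquad k_i > 0 .
\]
% Stability asks whether p(t) converges to an equilibrium p* with
% z(p*) = 0. The Sonnenschein-Mantel-Debreu results quoted above imply
% z(p) can be wild enough that convergence is not guaranteed.
```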

Will these concerns come back into graduate education and discussion with the crisis? I got a chance to check out the 2011 Advanced Microeconomic Theory by Jehle and Reny, which seems to be the new, more mathematically tight alternative to Mas-Colell (1995) for graduate microeconomics.

On the first page of their chapter on general equilibrium: “These are questions of existence, uniqueness, and stability of general competitive equilibrium. All are deep and important, but we will only address the first.” Wow. That’s a massive forgetting from Mas-Colell, which covers these issues, even if superficially, to give students an understanding that they are there.

Going forward, if you ask a new economist “could the economy just stay this way forever?” or “could more commodity trading push prices further away from a true price?” (pretty important questions!), you will probably get a smug “we proved the Invisible Hand handles this decades ago.” Little will he or she know that a gigantic, inconclusive debate occurred over these issues, one that has simply been sent down the memory hole.


Can the Taper Matter? Revisiting a Wonkish 2012 Debate

Jun 25, 2013Mike Konczal

Last week, Ben Bernanke tested the waters for “tapering,” or cutting back on the rate at which he carries out new asset purchases, and everything is going poorly. As James Bullard, the president of the Federal Reserve Bank of St. Louis, argued in discussing his dovish dissent with Wonkblog, “This was tighter policy. It’s all about tighter policy. You can communicate it one way or another way, but the markets are saying that they’re pulling up the probability we’re going to withdraw from the QE program sooner than they expected, and that’s having a big influence.”

But if you really believe in the expectations channel of monetary policy, can this even matter? Let’s use this to revisit an obscure monetary beef from fall 2012.

Cardiff Garcia had a recent post discussing the fragile alliance between fiscalists and monetarists at the zero lower bound. But one angle he missed was the disagreement among monetarists, or more generally among those who believe that the Federal Reserve has a lot of “ammo” at the zero lower bound, over what really matters and how.

For instance, David Beckworth writes, “What is puzzling to me is how anyone could look at the outcome of this experiment and claim the Fed's large scale asset programs (LSAPs) are not helpful.” But one of the most important and influential supporters of expansionary monetary policy, the one who probably helped put the Federal Reserve on its bold course in late 2012, thinks exactly this. And that person is the economist Michael Woodford.

To recap, the Fed took two major steps in 2012. First, it used a communication strategy to say that it would keep interest rates low until certain economic states were hit, such as unemployment hitting 6.5 percent or inflation hitting 2.5 percent. This was the Evans Rule, which used what is called the expectations channel. Second, the Fed started purchasing $85 billion a month in assets until this goal was hit. This was QE3, which used what is called the portfolio channel.
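
Note that the Evans Rule is state-contingent rather than date-contingent, which is easy to see if you write the threshold logic down. Here is a schematic sketch of the rule as described above; the thresholds are the ones from the Fed's statement, but the function itself is just an illustration, not the FOMC's actual procedure:

```python
# Schematic sketch of the Evans Rule thresholds described above.
UNEMPLOYMENT_THRESHOLD = 6.5  # percent
INFLATION_THRESHOLD = 2.5     # percent, projected inflation

def keep_rates_near_zero(unemployment, projected_inflation):
    """Forward guidance binds while unemployment stays high
    and projected inflation stays contained."""
    return (unemployment > UNEMPLOYMENT_THRESHOLD
            and projected_inflation <= INFLATION_THRESHOLD)

print(keep_rates_near_zero(7.6, 1.8))  # True: guidance still binds
print(keep_rates_near_zero(6.4, 1.8))  # False: the unemployment threshold is hit
```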

In his major September 2012 paper, Woodford argued that the latter step, the $85 billion in purchases every month, doesn't even matter, because "'portfolio-balance effects' do not exist in a modern, general-equilibrium theory of asset prices." At best, such QE-related purchases "can be helpful as ways of changing expectations about future policy — essentially, as a type of signalling that can usefully supplement purely verbal forms of forward guidance." (He even calls the idea that purchases matter "1950s-vintage," which is as cutting as you can get as a macroeconomist.)

To put it a different way, the Fed's use of the portfolio channel only matters to the extent that the Fed isn't being clear in its written statements about future interest rate policy and other means of setting expectations.

Woodford specifically called out research by Joseph Gagnon of the Peterson Institute for International Economics (and a friend of the blog). Contra Woodford, Gagnon et al. concluded in their research, “[QE] purchases led to economically meaningful and long-lasting reductions in longer-term interest rates on a range of securities [that] reflect lower risk premiums, including term premiums, rather than lower expectations of future short-term interest rates.” (Woodford thinks that expectations of future short-term interest rates are the only thing at play here.)
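
The cleanest way to see what is at stake is the standard decomposition of a long-term rate into the average expected short rate over the bond's life plus a term premium. A back-of-the-envelope sketch, with invented numbers:

```python
# Toy decomposition: long rate = average expected short rate + term premium.
# All numbers are invented for illustration.
def long_rate(expected_short_rates, term_premium):
    """Ten-year yield as the average of expected short rates plus a premium."""
    return sum(expected_short_rates) / len(expected_short_rates) + term_premium

expected_shorts = [0.25] * 3 + [2.0] * 7   # near zero for 3 years, then 2%
baseline = long_rate(expected_shorts, term_premium=1.0)

# Woodford's channel: QE is pure signaling, so it matters only if it
# shifts the expected path of short rates (say, one extra year at zero).
signaling = long_rate([0.25] * 4 + [2.0] * 6, term_premium=1.0)

# Gagnon's channel: purchases compress the term premium directly,
# leaving the expected short-rate path untouched.
portfolio = long_rate(expected_shorts, term_premium=0.75)

print(f"baseline:  {baseline:.3f}%")   # 2.475%
print(f"signaling: {signaling:.3f}%")  # 2.300%
print(f"portfolio: {portfolio:.3f}%")  # 2.225%
```

Woodford’s story moves only the first term; Gagnon’s moves the second. That difference is what makes the taper such a clean test.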

Woodford's analysis of this research immediately came under attack in the blogosphere. James Hamilton at Econbrowser noted that “Gagnon, et. al.'s finding has also been confirmed by a number of other researchers using very different data sets and methods.” Gagnon himself responded here, defending the research and noting that Woodford’s theoretical assumptions “are violated in clear and obvious ways in the real world.” (If you are interested in the nitty-gritty of the research agenda, it is worth following the links.)

For our purposes, what can the taper tell us? Remember, in Woodford’s world of strict expectations, QE purchases don’t matter. Since the purchases don't matter, moving future purchases up or down at the margins, keeping the expected future path of short-term interest rates constant, shouldn't matter either. Raising the $85 billion to $100 billion wouldn't help, and lowering it to $70 billion wouldn't hurt, unless Bernanke also moved the expectations of future policy.

The taper was a test case for this theory. Bernanke meant to keep expectations of future short-term interest rates the same (there was no change to when interest rates would rise) while reducing the flow of QE purchases. But from the first readings, the market has taken it as a major tightening of policy. This strikes me as a major victory for Gagnon and a loss for the strongest versions of the expectations channel.

Of course, the taper could be a signal that Bernanke has lost his coalition or is otherwise going soft on expansionary policy. If that’s the case, then according to the stronger version of the expectations theory, QE3 should never have been started, because it adds no value and is just another thing that could go wrong. Bernanke should just have focused on crafting a more articulate press release instead. This doesn't seem the right lesson when a body of research argues purchases are making a difference.

An objective bystander would say that if the taper is being read as tightening even though future expectations language is the same, it means that we should be throwing everything we have at the problem because everything is in play. That includes fiscal policy. As Woodford writes, “[t]he most obvious source of a boost to current aggregate demand that would not depend solely on expectational channels is fiscal stimulus.” We should be expanding, rather than contracting, the portfolio channel, while also avoiding the sequester and extending the payroll tax cut. Arguments, like Woodford's, about the supremacy of any one approach tend to get knocked down by all the other concerns.
