Guest Post: O Canada and Its Housing Market

Aug 14, 2013 | David Min

Mike here. Over the weekend I wrote a post at Wonkblog, "In Defense of the 30 Year Mortgage." Many people have responded to this idea by bringing up the housing market of our neighbors in Canada. In order to keep this conversation running, I have a guest post by David Min, friend of the blog and a University of California, Irvine law professor. Take it away, David:

Does Canada prove the 30-year fixed-rate mortgage is of limited value? Here’s Matt Yglesias from last week:

If you cross the border into Canada it's not like people are living in yurts. It works fine. But since homebuyers have to carry a bit more interest rate risk, they seem to purchase slightly smaller houses. Alternatively if you imagine a jumbo loan scenario where the 30-year fixed rate mortgage lives but with systematically higher interest rates, you'd find that people would have to respond by purchasing slightly smaller houses. And it's not a coincidence that Americans live in the biggest houses in the world.

As I’ve outlined in the past, the dominant mortgage product in Canada is a five-year fixed-rate mortgage, amortized over 25 years, that essentially requires refinancing every five years. This product leaves borrowers open to two important types of mortgage-related risk.

First, there is the risk that interest rates will rise significantly between the time the loan is first originated and the time that it must be refinanced, causing a payment shock that the borrower may not be able to afford. Second, there is the risk that when the loan comes due, there may not be refinancing options available to the borrower, either because the property has declined in value so much that the loan does not meet loan-to-value requirements, or perhaps because banks have reduced their lending due to a credit contraction.
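
To make the payment-shock arithmetic concrete, here is a minimal sketch in Python using the standard fixed-rate annuity formula. The loan size and the rates are hypothetical (the 17 percent renewal rate echoes the early-1980s episode discussed below), not data on actual Canadian mortgages:

```python
# Illustrative payment-shock arithmetic for a Canadian-style rollover
# mortgage: a five-year fixed rate, amortized over 25 years, that must
# be refinanced at whatever rate prevails at renewal. All numbers here
# (loan size, rates) are hypothetical.

def monthly_payment(principal, annual_rate, years):
    """Standard fixed-rate annuity payment."""
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of payments
    return principal * r / (1 - (1 + r) ** -n)

def balance_after(principal, annual_rate, years, months_paid):
    """Remaining balance after a given number of monthly payments."""
    r = annual_rate / 12
    pmt = monthly_payment(principal, annual_rate, years)
    return principal * (1 + r) ** months_paid - pmt * ((1 + r) ** months_paid - 1) / r

loan = 300_000
old_pmt = monthly_payment(loan, 0.06, 25)      # originated at 6%
balance = balance_after(loan, 0.06, 25, 60)    # balance at the 5-year renewal
new_pmt = monthly_payment(balance, 0.17, 20)   # renewed at an early-1980s-style 17%

print(f"payment at origination: {old_pmt:,.0f}")   # ~1,933
print(f"payment after renewal:  {new_pmt:,.0f}")   # ~3,957, roughly double
```

Even with no change in the borrower's income or the home's value, the renewal in this sketch roughly doubles the monthly payment; that jump is the payment shock at issue.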

For what it’s worth, Canada has historically had greater government involvement in its housing finance system than the United States, through a combination of government-backed mortgage securitization and mortgage insurance offered by the Canada Mortgage and Housing Corporation (an entity similar in many ways to Fannie and Freddie), as well as governmental reinsurance for all mortgage insurance, which in total accounts for some 70-80 percent of all Canadian home loans. So if you’re looking to Canada as a model of getting the government out of housing finance, look again (and don’t look to Europe, which also has very high levels of government guarantees for housing finance, as I explained recently in congressional testimony).

As to Matt’s broader point about Canadian mortgage finance, there is no question that we can have a housing finance system without the 30-year FRM that still drives sufficient capital into housing to meet our needs (both for owner-occupied and rental housing), but that’s not the point of the debate over the 30-year FRM. The key difference between Canada’s five-year FRM and the American 30-year FRM is that the former leaves interest rate risk (and refinancing risk) with consumers, whereas the latter leaves rate risk (and prepayment risk) with financial institutions such as banks, pension funds, and insurance companies.

The key question is whether interest rate risk is better placed with households or with banks and investors. Those of us who favor the 30-year FRM argue that this risk should be placed with the latter, who are better equipped to handle this risk. The available evidence suggests that average mortgage borrowers do not attempt to predict what mortgage rates will be five years down the line. And even if they could do this, they lack access to the financial instruments that might allow them to hedge against this risk. Conversely, banks and MBS investors already spend quite a lot of resources trying to protect against interest rate volatility.  

Moreover, when households are unable to deal with interest rate risk, they are unable to make their mortgage payments. This creates a double whammy insofar as higher rate risk for borrowers means higher credit risk for banks and investors. Thus, from a systemic stability standpoint, it seems to make more sense to place rate risk with financial institutions rather than with consumers.

Neither the U.S. nor Canada has experienced significant interest rate increases since the early 1980s, so the difference between the five- and 30-year FRMs has largely been a theoretical debate since that time. But as Karl Case (the economist who helped create the eponymous Case-Shiller home price index) has noted, we have at least one important data point from that last episode of interest rate volatility that suggests the 30-year FRM is preferable from a financial stability standpoint.

Both Vancouver and California had housing booms in the late 1970s, and both of course went through the double-digit interest rate increases of the early 1980s, which led to U.S. mortgage rates settling at about 17-18 percent. Then, as now, the dominant mortgage in the U.S. was the 30-year FRM and the dominant mortgage in Canada was the five-year FRM. Vancouver and California experienced starkly different housing markets in response to this interest rate volatility. Because Canadian mortgages were designed to be refinanced every few years, Canadian borrowers faced enormous payment shocks (with mortgage payments doubling or tripling), which resulted in a huge housing bust, with Vancouver experiencing a 60 percent (!) home price decline in the early 1980s. Conversely, California experienced a few years of a stagnant housing market in which potential sellers simply held onto their existing mortgages, and prices never fell in nominal terms.

This limited historical data suggests that the U.S. 30-year FRM is a more systemically stable product than the shorter-duration rollover loan that is popular in Canada. Within the United States, of course, there is ample evidence that the 30-year FRM performs far better than short-term rollover loans. During the Great Depression, the delinquency rates on short-term rollover loans reached 50 percent, as underwater borrowers were unable to find sources of refinancing (sound familiar?). More recently, adjustable-rate mortgages experienced delinquency rates that were two to three times higher than fixed-rate mortgages made to comparable borrowers, as both the Federal Housing Finance Agency and the Mortgage Bankers Association have found.

All of this evidence suggests that critics of the 30-year FRM need to tread a little more carefully before trashing the benefits of this particular product.


Denialism and Bad Faith in Policy Arguments

Aug 14, 2013 | Mike Konczal

Here’s the thing about Allan Meltzer: he knows. Or at least he should know. It’s tough to remember that he knows when he writes editorials like his latest, "When Inflation Doves Cry." This is a mess of an editorial, a confused argument about why huge inflation is around the corner. “Instead of continuing along this futile path, the Fed should end its open-ended QE3 now... Those who believe that inflation will remain low should look more thoroughly and think more clearly.”

But he knows. Because here’s Meltzer in 1999 with "A Policy for Japanese Recovery": “Monetary expansion and devaluation is a much better solution. An announcement by the Bank of Japan and the government that the aim of policy is to prevent deflation and restore growth by providing enough money to raise asset prices would change beliefs and anticipations.”

He knows that there’s an actual debate, with people who are “thinking clearly,” about monetary policy at the zero lower bound as a result of Japan. He participated in it. So he must have been aware of Ben Bernanke, Paul Krugman, Milton Friedman, Michael Woodford, and Lars Svensson all also debating it at the same time. But now he’s forgotten it. In fact, his arguments for Japan are the exact opposite of what they are now for the United States.

This is why I think the Smithian “Derp” concept needs fleshing out as a diagnosis of our current situation. (I’m not a fan of the word either, but I’ll use it for this post.) For those not familiar with the term, Noah Smith argues that a major problem in our policy discussions is “the constant, repetitive reiteration of strong priors.” But if that was the only issue, Meltzer would support more expansion like he did for Japan!

Simply blaming reiteration of priors is missing something. The problem here isn’t that Meltzer may have changed his mind on his advice for Japan. If that’s the case, I’d love to read about what led to that change. The problem is one of denialism, where the person refuses to acknowledge the actually existing debate, and instead pantomimes a debate with a shadow. It involves the idea of a straw man, but sometimes it’s simply not engaging at all. For Meltzer, the extensive debate about monetary policy at the zero lower bound is simply excised from the conversation, and people who only read him will have no clue that it was ever there.

There’s also another dimension that I think is even more important, which is whether or not the argument, conclusions, or suggestions are in good faith. Eventually, this transcends the “reiteration of strong priors” and becomes an updating of the case but a reiteration of the conclusion. Throughout 2010 and 2011, an endless series of arguments was made about how a long-term fiscal deal would help with the current recession, without any credible evidence that this would help our short-term economy. But that’s what people want to do, and so they acknowledge the fresh problem but simply plug in their wrong solutions. The same was true of Mitt Romney’s plan for the economy, which wasn’t specific to 2012 in any way.

Bad faith solutions don’t have to be about things you wanted to do anyway. Philip Mirowski’s new book makes a fascinating observation about conservative think tanks when it comes to global warming. On the one hand, they have an active project arguing global warming isn’t happening. But on the other hand, they also have an active project arguing global warming can be solved through geoengineering the atmosphere. (For an example, here’s AEI arguing worries over climate change are overblown, but also separately hosting a panel on geoengineering.)

So global warming isn’t real, but if it is, heroic atmospheric entrepreneurs will come in at the last minute and save the day. Thus, you can have denialism and bad-faith solutions in play at the same time.

The fact that we can get to the denial and bad-faith corner makes me think this can be made generalizable and charted on a grid, but I still feel it’s missing some dimensions. What Smith identifies is real, but I’m not sure how to place it on these axes. What do you make of it?


Whatever Happened to the Economic Policy Uncertainty Index?

Aug 6, 2013 | Mike Konczal

Jim Tankersley has been doing the Lord’s work by following up on questionable arguments people have made about our current economic weakness being something other than a demand crisis. First, he asked Alberto Alesina about how all that expansionary austerity is working out from the vantage point of this year. Now he looks at the Economic Policy Uncertainty (EPU) index (Baker, Bloom, Davis) as it stands halfway into 2013.

And it has collapsed. The EPU index has been falling rapidly, hitting 2008 levels. Yet the recovery doesn’t seem to be speeding up at all. Wasn’t that supposed to happen?

I’ve been meaning to revisit this index since I looked at it last fall, and this is a good time to do so. It’s worth unpacking what actually drove the increase in EPU during the past five years, and understanding why there was little reason to believe it reflected uncertainty causing a weak economy. If anything, the relationship is clearly the other way around.

Let’s make sure we understand the uncertainty argument: the increase in EPU “slowed the recovery from the recession by leading businesses and households to postpone investment, hiring and consumption expenditure.” (To give you a sense, in 2011 the authors argued in editorials that this index showed that the NLRB, Obamacare and "harmful rhetorical attacks on business and millionaires" were the cause of prolonged economic weakness.)

As commenters pointed out, it would be easy to construct an index where the causation is spurious or even runs the other way. If weak growth could cause the Economic Policy Uncertainty index to skyrocket, then it’s not clear the narrative holds up as well. “There’s uncertainty over whether or not Congress and the Federal Reserve will aggressively fight the downturn” isn’t what the index is trying to measure, but that’s what it seems to be doing.

Let’s take a look at the graph of EPU. When most people discuss this, they argue that the peaks tell them the index is onto something, as it peaks during periods of major confusion (9/11, Lehman bankruptcy, debt ceiling showdown).

But what is worth noting, and what drives the results in a practical way, is the increase in the level during this time period. And that happens immediately in January 2009:

How does economic policy uncertainty jump on the first day of 2009? The index has three parts. The first is a search of newspaper coverage for the phrase “economic policy uncertainty.” I discussed that last fall, arguing that it was mostly capturing Republican talking points and the discipline of the GOP machine rather than actual analysis.

The second is relevant here: the number of tax provisions set to expire in the near future. (In the first version of the paper this was the total number of tax provisions, while in the current version it’s the total dollar amount of those provisions.) The measure is heavily discounted, so tax cuts expiring in a year or two are weighted much more heavily than those further in the future.
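
As a rough illustration of how that discounting works, here is a toy calculation. The discount factor and the provision values are assumptions for illustration, not the actual Baker-Bloom-Davis weighting scheme:

```python
# A toy version of the tax-expiration component: provisions expiring
# soon get far more weight than distant ones. The 0.5-per-year discount
# and the dollar values are illustrative assumptions, not the actual
# Baker-Bloom-Davis weighting scheme.

# (dollar value of expiring provisions, years until expiration)
provisions = [
    (800, 1),   # e.g. stimulus tax cuts expiring next year
    (200, 2),   # e.g. other provisions expiring in two years
    (150, 6),   # distant expirations barely register
]

DISCOUNT = 0.5  # each extra year until expiration halves the weight

component = sum(value * DISCOUNT ** years for value, years in provisions)
print(f"tax-expiration component: {component:.1f}")  # 400 + 50 + 2.3 = 452.3
```

Note how the near-term provisions dominate the total; that mechanical weighting is what matters in the next step.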

What does this look like over the past few years?

So what happened starting in early 2009? The stimulus, of course. And the stimulus was in large part tax provisions that were set to expire in two years. This mechanically increased economic policy uncertainty, even though it was a policy response designed to boost automatic stabilizers. Also, the Bush tax cuts were approaching their endgame, and the algorithm gave a disproportionate weight to them as they entered their last two years.

Then, in late 2010, the Bush tax cuts and some tax provisions from the stimulus were extended to provide additional stimulus to the economy while it was still weak.

Here’s how the creators of the index describe this move: “Congress often decides whether to extend them at the last minute, undermining stability of and certainty about the future path of the tax code... Similarly, the 2010 Payroll Tax Cut was a large tax decrease initially set to expire in 1 year but was twice extended just weeks before its expiration.”

But this decision was not orthogonal to the state of the economy. A major reason the administration waited and then extended the Bush tax cuts and the payroll tax cut was that the economy was still weak, and they wanted to boost demand. The only policy uncertainty here was how aggressive and successful the administration would be in securing additional stimulus, which was itself a function of the weakness of the economy. To retroactively argue that the government’s actions in securing additional demand were creating the crisis it was trying to fight requires an additional level of argument that isn’t present.

The third part of their index has the same issue. They draw on a literature (e.g. here) that uses disagreements (dispersion of predictions) among professional forecasters as a proxy for uncertainty -- disagreements about the predicted growth in inflation, and predictions of both state and federal spending, one year in advance.
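
To see the mechanics of that proxy, here is a minimal sketch using made-up forecast numbers; the interquartile range is one common way to measure dispersion, though the index's exact construction differs:

```python
# A minimal sketch of forecast dispersion as an "uncertainty" proxy:
# the spread of professional forecasters' one-year-ahead predictions.
# The forecast numbers are made up for illustration.
import statistics

calm_year   = [2.4, 2.5, 2.6, 2.6, 2.7]   # tight agreement among forecasters
crisis_year = [1.0, 2.1, 3.3, 4.6, 6.0]   # wide disagreement mid-crisis

def dispersion(forecasts):
    """Interquartile range of point forecasts, one common dispersion measure."""
    q1, _, q3 = statistics.quantiles(forecasts, n=4)
    return q3 - q1

print(dispersion(calm_year))    # small -> index reads "low uncertainty"
print(dispersion(crisis_year))  # large -> index reads "high uncertainty"
```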

The problem comes from trying to push their definition of EPU onto these disagreements. Debates over how much the federal government will spend through stimulus, how rough the austerity will be at the state level, or how well Bernanke will be able to hit his inflation target - the disagreements that drive this index - are really debates about the reaction to the crisis. The dispersion will increase if people can’t figure out how aggressively the state will respond to a major collapse in spending. But this is a function of a collapsing economy and how well the government responds to it, not the other way around.

This is why we should ultimately be careful with studies that take this index and plop it into, say, a Beveridge Curve analysis. As Tankersley notes, the government decided to fight a major downturn with stimulus, and the subsequent move away from stimulus before full employment hasn’t helped the economy. In other breaking news: if you carry an umbrella because it is raining and then toss the umbrella aside, it doesn’t stop raining.


What did FDR Write Inside His Copy of the Proto-Keynesian Road to Plenty?

Aug 2, 2013 | Mike Konczal

File under: Marginalia Fridays.

In 1928 William Foster and Waddill Catchings wrote The Road to Plenty. A university president and a Goldman Sachs financier, respectively, these two had a serious interest in studying business cycles, and had an idea of what they thought might be happening. This book presented a theory that was proto-Keynesian eight years before the General Theory.

Let's get a summary of that book from Elliot A. Rosen's Roosevelt, the Great Depression, and the Economics of Recovery: "[The Road to Plenty] claimed that sustained production required sustained consumer demand, a counter to Say's law of markets, or classical theory, which held that consumer demand followed automatically from capital consumption. Foster and Catchings explained underconsumption partly in terms of consumer reluctance to spend when prices fell and also in terms of price distortions, maldistribution of income, and the tendency of business to finance capital requirements from earnings, thus sterilizing savings. The result was industrial overcapacity as consumer purchasing power declined. Public works would be required periodically to stimulate purchasing power."

Franklin Delano Roosevelt, before he was President, had a copy of the book. What did he write in his copy of the book in 1928, right as the Great Depression was gearing up?

Thankfully, our friends at the FDR Presidential Library, who do an excellent job of keeping the records of the 20th Century's greatest President, were able to snap a picture and send it to me:

FDR's writing:

In case you can't see it, it says "Too good to be true - you can't get something for nothing." Hmmm.

Though Roosevelt didn't buy it at first, he thankfully later evolved on the issue. One lucky reason is that a big fan of the book was a Utah banker who read it intensely starting in 1931, when it seemed like the Depression would never end, much less turn to recovery. That man's name was Marriner Stoddard Eccles. The rest, as they say, is history. (Except it's not, because we are currently fighting this all over again.)

The book itself is a series of conversations among strangers on a Pullman car about what is going on in the economy. A typical page:

'But I cannot see,' objected the Professor, 'how the savings, either of corporations or of individuals, cause the shortage of which you speak. The money which industry receives from consumers and retains as undistributed profits is not locked up in strong boxes. Most of it is deposited in banks, where other men may borrow it and pay it out. So it flows on to consumers. [....] Once you take account of the fact that money invested is money spent, you see that both individuals and corporations can save all they please without causing consumer buying to lag behind the production of consumers' goods.'

'Yes,' the Business Man replied, 'I am familiar with that contention, but it seems to me unsound. Of course it is true that a considerable part of money savings are deposited in banks, where the money is available for borrowers. But the fact that somebody may borrow the money and pay it out as wages, is immaterial as long as nobody does borrow it. Such money is no more a stimulus to business than is gold in the bowels of the earth.'

(Seem familiar?)


Yellen, Summers and Rebuilding After the Fire

Jul 24, 2013 | Mike Konczal

There is no Bernanke Consensus. This is important to remember about our moment, and about how to evaluate what comes next for the Federal Reserve. What we have instead is the Bernanke Improvisation, a series of emergency procedures to try to keep the economy from falling apart, and perhaps even guide it back to full employment, after normal monetary policy hit a wall.

With the rumor mill circulating that Larry Summers could be the next Federal Reserve chair instead of Janet Yellen, it’s worth understanding where the Fed is. Bernanke has been like a fireman trying to put out a fire since 2008. What comes next is the rebuilding. What building codes will we have? What precautions will we take to prevent the next fire, and what are the tradeoffs?

This makes the next FOMC chair extremely important. While you are inside a burning building, what the fireman is doing is everything. But deciding how to rebuild will ultimately make the big difference for the next 30 years.

The next FOMC chair will have three major issues to deal with during his or her tenure. The first is to determine when to start pushing on the brakes, and thus where we’ll hit “full employment.” The second is to decide how aggressively to enforce financial reform rules [1]. Those are pretty important things!

But the new FOMC chair has an even bigger responsibility. He or she will also have to figure out a way to rebuild monetary policy and the Federal Reserve so that we won’t have a repeat of our current crisis. And in case you’ve missed the half-a-lost-decade we’ve already gone through, this couldn’t be more important.

Monetary policy itself could be rebuilt in a number of directions. It could give up on unemployment, perhaps keeping the economy permanently in a quasi-recession to somehow boost a notion of “financial stability” instead. Or it could evolve in a direction designed to avoid the prolonged recession we just had, which could involve a higher inflation target or targeting something like nominal GDP.

But the default, like many things in life, is that inertia will win out, and some form of muddling forward will continue on indefinitely. The Federal Reserve will maintain a low inflation target that it always falls short of, and the economy will never run at its peak capacity. Attempts at better communications and priorities will be abandoned. And even minor recessions will run the risk of hitting the liquidity trap, making them far worse than they need to be.

The inertia problem is why having a consensus builder and convincer in charge is key, and it is a terrible development that these traits are being coded as feminine and thus weak. As a new governor in 1996, Janet Yellen marshaled the evidence that convinced Alan Greenspan that targeting zero percent inflation was a bad idea. (Could you imagine this recession if inflation had already been hovering at a little above zero in 2007?) The next chair will be asked to gather much more complicated evidence to make even harder decisions about the future of the economy - and Yellen has a proven track record here.

Yellen has been at the forefront of all these debates. As Cardiff Garcia writes, she runs the subcommittee on communications and has spent a great deal of time trying to figure out how these unorthodox policies impact the economy. The debate about what constitutes full employment has become muted among liberal economists because unemployment has been so high, but it will come back to the fore after the taper hits. Yellen has been thinking about this all along. Crucially, she has come the closest of any high-ranking Fed official to endorsing a major shift of current policy - in this case, to something like a nominal spending target. This will become important to how we rebuild after this crisis.

As a quick history lesson, there were two major points where a large battle broke out on monetary stimulus. The first was the spring and summer of 2010, when there were serious worries about a double-dip recession. This ended when Bernanke announced QE2, which immediately collapsed market expectations of deflation. The second was in the first half of 2012, when an intellectual consensus was built around tying monetary policy to future conditions, ending with the adoption of the Evans Rule.

I can’t find Larry Summers commenting on either of these situations, either in high-end academic debates or in the wide variety of op-eds he’s written. The commenters at The Money Illusion couldn’t find a single instance of Summers suggesting that monetary policy was too tight in the past five years. Summers was simply missing in action for the most important monetary policy debates of the past 30 years, while Yellen was leading them. And trying to shift from those debates into a new status quo will be the responsibility of the next FOMC chair.


[1] Given what this blog normally covers, I’d be remiss to not mention housing and financial reform. During the Obama transition, Larry Summers promised “substantial resources of $50-100B to a sweeping effort to address the foreclosure crisis” as well as “reforming our bankruptcy laws.” This letter was crucial in securing votes from Democrats like Jeff Merkley for the second round of TARP bailouts. A recent check showed that the administration ended up using only $4.4 billion on foreclosure mitigation through the awful HAMP program, while Summers reportedly was not supportive of bankruptcy reform.

And as Bill McBride notes, Yellen was making the correct calls on the housing bubble and its potential damage while Summers was attacking those who thought financial innovation could increase the risks of a panic and crash.

It’s difficult to overstate how important the Federal Reserve is to financial regulation. Did you catch how the Federal Reserve needs to decide about the future of finance and physical commodities soon, with virtually no oversight or accountability? Even if you think Summers gets a bum rap for deregulation in the 1990s, you must believe that his suspicion of skepticism about finance - see, for instance, the reporting on his opposition to the Volcker Rule - is not what our real economy needs while Dodd-Frank is being implemented.


Brooks’s Recovery Gender Swap

Jul 17, 2013 | Mike Konczal

How are men doing in our anemic economic recovery? David Brooks, after discussing his favorite Western movie, argues in his latest column, Men on the Threshold, that men are "unable to cross the threshold into the new economy." Though he'd probably argue that he's talking about generational changes, he focuses on a few data points from the current recession, including that "all the private sector jobs lost by women during the Great Recession have been recaptured, but men still have a long way to go."

Is he right? And what are some facts we can put on the current recovery when it comes to men versus women?

Total Employment

Men had a harder crash during the recession, but a much better recovery, when compared with women.

Indeed, during the first two years of the recovery expert analysis was focused on a situation that was completely reversed from Brooks' story. The question in mid-2011 was "why weren't women finding jobs?" Pew Research put out a report in July 2011 finding that "From the end of the recession in June 2009 through May 2011, men gained 768,000 jobs and lowered their unemployment rate by 1.1 percentage points to 9.5%. Women, by contrast, lost 218,000 jobs during the same period, and their unemployment rate increased by 0.2 percentage points to 8.5%."

How does that look two years later? Here's a graph of the actual level of employment by gender from the Great Recession onward:

If you squint you can see how women's employment is flat throughout 2011, when men start gaining jobs. Since the beginning of 2011, men have gotten around 65 percent of all new jobs. That rate started at 70 percent and has declined to around 60 percent now. So it is true, as Brooks notes, that women are approaching their old level of employment. But the idea that the anemic recovery has been biased against men is harder to understand. The issue is just a weak recovery - more jobs would mean more jobs for both men and women, and especially for men.

Occupations

But maybe the issue is the occupations that men are now working. As Brooks writes, "Now, thanks to a communications economy, [men] find themselves in a world that values expressiveness, interpersonal ease, vulnerability and the cooperative virtues." This is a world where they either can't compete, or won't. The testable hypothesis is that men are doing poorly in occupations that are traditionally female dominated.

However, the data shows that men are moving into female-dominated occupations and taking a large majority of the new jobs there.

How has the gendered division of occupations evolved since 2011? Here is first-quarter data from 2011 and 2013 on occupations by gender from the CPS. As a reminder, your occupation is what you do, while your industry is what your employer does. Occupation data is much noisier, hence the move to quarterly data:

Ok that's a mess of data. What should we be looking for in this?

First off, men are moving into occupations that have been traditionally gender-coded female. Office support jobs, which Bryce Covert and I found were a major driver of overall female employment decline from 2009-2011, are now going to men. Men have taken 95 percent of new jobs in this occupation, one that was only about 26 percent male in 2011. We also see men taking a majority of jobs in the male-minority service occupations. Men are also gaining in sales jobs even while the overall number of jobs is declining. That's a major transformation happening in real time.
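
For concreteness, here is the simple arithmetic behind a claim like "men took 95 percent of the new jobs in this occupation." The CPS-style counts below are hypothetical, chosen to roughly match the shares discussed in this post; they are not the actual CPS figures:

```python
# The arithmetic behind "men took X percent of the new jobs in this
# occupation," using two CPS-style snapshots. The counts (in thousands)
# are hypothetical, chosen to roughly match the shares cited in this
# post; they are not the actual CPS figures.

# occupation: (men_2011, women_2011, men_2013, women_2013)
occupations = {
    "office and administrative support": (4_800, 13_700, 5_180, 13_720),
    "construction and extraction":       (6_100,    180, 6_700,    185),
}

for name, (m0, w0, m1, w1) in occupations.items():
    new_jobs = (m1 + w1) - (m0 + w0)      # net change in total employment
    male_share = (m1 - m0) / new_jobs     # share of that change going to men
    male_pct_2011 = m0 / (m0 + w0)        # how male-coded the job was in 2011
    print(f"{name}: {male_share:.0%} of {new_jobs}k new jobs went to men "
          f"(occupation was {male_pct_2011:.0%} male in 2011)")
```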

(Meanwhile, it's not all caring work and symbolic analysts out there. There's a massive domestic energy extraction business booming in the United States, and those jobs are going to men as well. If you break this down into suboccupations, it becomes very obvious. Men took around 100 percent of the 600,000+ new "construction and extraction" jobs, for instance.)

It'll be interesting to see how extensive the movement of men into traditionally female jobs becomes, and to what extent it challenges the nature of both the men and the work. Much of the structure of service work in the United States comes from the model of Walmart, and that model comes from both Southern, Christian values and a particular idea of the role women play in kinship structures and communities.

As Sarah Jaffe notes in her piece A Day Without Care, summarizing the work of Bethany Moreton, "Walmart...built its global empire on the backs of part-time women workers, capitalizing on the skills of white Southern housewives who’d never worked for pay before but who saw the customer service work they did at Walmart as an extension of the Christian service values they held dear. Those women didn’t receive a living wage because they were presumed to be married; today, Walmart’s workforce is much more diverse yet still expected to live on barely more than minimum wage."

How will men react when faced with this? And how will their bosses counter?


Mirowski on the Vacuum and Obscurity of Current Economics

Jul 9, 2013Mike Konczal

I just finished reading Philip Mirowski’s Never Let a Serious Crisis Go to Waste: How Neoliberalism Survived the Financial Meltdown. It’s fantastic, wonderfully dense, packed with ideas running from Foucault through how game theorists botched the TARP auction design. It provides a stunningly detailed summary of topics economic bloggers would be interested in, ranging from the post-crisis debate over the Efficient Market Hypothesis, to the structure of the Mont Pelerin Society, to the way conservatives spread the idea that the GSEs caused the financial crisis. If you like Mirowski’s other works, you'll love this. Mirowski is an economist and a historian, and has a knack for showing the evolving arguments, justifications, and contexts for economic ideas and approaches. I'm writing a longer review of it, but I'll be bringing up pieces of it here.

I wanted to include this part on the issue of the vacuousness within economics at this moment. Mirowski:

“Third, it would appear that the corporeal solidity of a live intellectual discipline would be indicated by consensus reference texts that help define what it means to be an advocate of that discipline. Here, I would insist that undergraduate textbooks should not count, since they merely project the etiolated public face of the discipline to the world. But if we look at contemporary orthodox economics, where is the John Stuart Mill, the Alfred Marshall, the Paul Samuelson, the Tjalling Koopmans, or the David Kreps of the early twenty-first century? The answer is that, in macroeconomics, there is none. And in microeconomics, the supposed gold standard is Andreu Mas-Colell, Michael Whinston, and Jerry Green (Microeconomic Theory), at its birth a baggy compendium lacking clear organizing principles, but now slipping out of date and growing a bit long in the tooth. Although often forced to take econometrics as part of the core, there is no longer any consensus that econometrics is situated at the heart of economic empiricism in the modern world. Beyond the graduate textbooks, the profession is held together by little more than a few journals that are designated indispensable by some rather circular bibliometric measures, and the dominance of a few highly ranked departments, rather than any clear intellectual standards. Indeed, graduates are socialized and indoctrinated by forcing them to read articles from those journals with a half-life of five years: and so the disciplinary center of gravity wanders aimlessly, without vision or intentionality. The orthodoxy, so violently quarantined and demarcated from outside pretenders, harbors a vacuum within its perimeter.

Fourth, and finally, should one identify specific models as paradigmatic for neoclassical economics, then they are accompanied by formal proofs of impeccable logic which demonstrate that the model does not underwrite the seeming solidity of the textbooks. Neoclassical theory is itself the vector of its own self-abnegation. If one cites the canonical Arrow-Debreu model of general equilibrium, then one can pair it with the Sonnenschein-Mantel-Debreu theorems, which point out that the general Arrow-Debreu model places hardly any restrictions at all on the functions that one deems “basic economics,” such as excess demand functions. Or, alternatively, if one lights on the Nash equilibrium in game theory, you can pair that with the so-called folk theorem, which states that under generic conditions, almost anything can qualify as a Nash equilibrium. Keeping with the wonderful paradoxes of “strategic behavior,” the Milgrom-Stokey “No Trade theorem” suggests that if everyone really were as suspicious and untrusting as the Nash theory makes out, then no one would engage in any market exchange whatsoever in a neoclassical world. The Modigliani-Miller theorem states that the level of debt relative to equity in a bank’s balance sheet should not matter one whit for market purposes, even though finance theory is obsessed with debt. Arrow’s impossibility theorem states that, if one models the polity on the pattern of a neoclassical model, then democratic politics is essentially impotent to achieve political goals. Markets are now asserted to be marvelous information processors, but the Grossman-Stiglitz results suggest that there are no incentives for anyone to invest in the development and refinement of information in the first place. The list just goes on and on. It is the fate of the Delphic oracles to deal in obscurity.” (pp. 24-26)

Konczal here. The entire book is that intense. A few thoughts:

- To put the first point a different way, the complaint I hear most when it comes to the major graduate textbooks is that they function as cookbooks, or books full of simple recipes each designed to do a single thing. Beyond micro, this is especially true for the major texts in macroeconomics and econometrics. The macroeconomics piece gives the sense that it’s designed to pull attention away from the major visions and towards little puzzle pieces that don’t connect into any kind of bigger picture.

Scanning the "What's New?" part of the new 2012 edition of the standard, entry-level graduate macro text, it seems like there's nothing new for the crisis. (If it is mentioned, I didn't see it.) If you are an energetic, smart graduate student who really wants to dissect the economic crisis, you essentially have to sit out the first half of your macroeconomic coursework before you get to something that has to do with a recession as a regular person would understand it. It's clear what has priority within the education of new economists.

I wonder how much the move to empirical methods and experiments is less about access to computing power and data sets (or, ha, issues of falsification), and more about the fact that the ability to innovate on the theory side has broken down, making it impossible to break new ground there. How many enfants terribles in economics are theorists these days? I assume any substantial break from standing theory means immediate exclusion from the tenure-setting journals.

- I love this magic trick analogy from Mirowski for frictions within DSGE: “By thrusting the rabbit into the hat, then pulling it back out with a different hand, the economist merely creates a model more awkward, arbitrary, and unprepossessing [that also] violate[s] the Lucas critique in a more egregious fashion than the earlier Keynesian models these macroeconomists love to hate” (p. 284).

- The mention of the “excess demand function” made me wonder whether stability issues are covered anymore. The book The Assumptions Economists Make makes a big deal about the lack of stability analysis in how economists discuss general equilibrium (also see Alejandro Nadal here).

To clarify in English, have you ever heard of the “Invisible Hand” metaphor? Markets equilibrate supply and demand with prices across the whole economy. Stability is the question of “under what circumstances (if any) does a competitive economy converge to equilibrium, and, if it does, how quickly does this happen?”
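To make the stability question concrete, here is a toy sketch (my own illustration, not from any of the texts discussed): Walrasian tâtonnement, where a good's price rises in proportion to its excess demand. With Cobb-Douglas consumers, as below, the process converges nicely; Scarf's famous examples show the same process can cycle forever, which is exactly the debate that got excised.

```python
import numpy as np

# Toy tatonnement: two goods, two Cobb-Douglas consumers, good 2 as numeraire.
# All parameters are invented for illustration.
alphas = [0.3, 0.7]                       # each consumer's spending share on good 1
endow  = np.array([[1.0, 0.0],            # consumer 1 is endowed with good 1
                   [0.0, 1.0]])           # consumer 2 is endowed with good 2

def excess_demand_good1(p1, p2=1.0):
    """Aggregate Cobb-Douglas demand for good 1 minus its total endowment."""
    demand = sum(a * (p1 * w[0] + p2 * w[1]) / p1 for a, w in zip(alphas, endow))
    return demand - endow[:, 0].sum()

p1, step = 3.0, 0.5                       # start well away from equilibrium
for t in range(200):
    z = excess_demand_good1(p1)
    p1 += step * z                        # raise the price where demand exceeds supply
    if abs(z) < 1e-8:
        break

print(f"converged to p1 = {p1:.4f} in {t + 1} steps")  # -> p1 = 1.0 in this friendly case
```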

Will these concerns come back into graduate education and discussion with the crisis? I got a chance to check out the new 2011 Advanced Microeconomic Theory by Jehle and Reny, which seems to be the newer, more mathematically rigorous alternative to Mas-Colell (1995) for graduate microeconomics.

On the first page of their chapter on general equilibrium: “These are questions of existence, uniqueness, and stability of general competitive equilibrium. All are deep and important, but we will only address the first.” Wow. That’s a massive forgetting relative to Mas-Colell, which covers these issues, even if superficially, to give students an understanding that they are there.

Going forward, if you ask a new economist “could the economy just stay this way forever?” or “could more commodity trading push prices further away from a true price?” (pretty important questions!), you will probably get a smug “we proved the Invisible Hand handles this decades ago.” Little will he or she know that a gigantic, inconclusive debate occurred over these issues, only to be sent down the memory hole.


Can the Taper Matter? Revisiting a Wonkish 2012 Debate

Jun 25, 2013Mike Konczal

Last week, Ben Bernanke tested the waters for “tapering,” or cutting back on the rate at which he carries out new asset purchases, and everything is going poorly. As James Bullard, the president of the Federal Reserve Bank of St. Louis, argued in discussing his dovish dissent with Wonkblog, “This was tighter policy. It’s all about tighter policy. You can communicate it one way or another way, but the markets are saying that they’re pulling up the probability we’re going to withdraw from the QE program sooner than they expected, and that’s having a big influence.”

But if you really believe in the expectations channel of monetary policy, can this even matter? Let’s use this to revisit an obscure monetary beef from fall 2012.

Cardiff Garcia had a recent post discussing the fragile alliance between fiscalists and monetarists at the zero lower bound. But one angle he missed was the disagreement among monetarists, or more generally those who believe that the Federal Reserve has a lot of “ammo” at the zero lower bound, over what really matters and how.

For instance, David Beckworth writes, “What is puzzling to me is how anyone could look at the outcome of this experiment and claim the Fed's large scale asset programs (LSAPs) are not helpful.” But one of the most important and influential supporters of expansionary monetary policy, the one who probably helped put the Federal Reserve on its bold course in late 2012, thinks exactly this. And that person is the economist Michael Woodford.

To recap, the Fed took two major steps in 2012. First, it used a communication strategy to say that it would keep interest rates low until certain economic thresholds were hit, such as unemployment falling to 6.5 percent or expected inflation rising to 2.5 percent. This was the Evans Rule, which used what is called the expectations channel. Second, the Fed started purchasing $85 billion a month in assets until this goal was hit. This was QE3, which used what is called the portfolio channel.
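As a minimal sketch of the Evans Rule's state-contingent logic - my own illustration of the thresholds as described above, not anything from the Fed's statements - the rule reads like a conditional:

```python
def keep_rates_near_zero(unemployment_rate, expected_inflation):
    """Illustrative Evans Rule: hold the policy rate near zero as long as
    unemployment stays above 6.5 percent and expected inflation stays at
    or below 2.5 percent (thresholds as described in the post)."""
    return unemployment_rate > 6.5 and expected_inflation <= 2.5

print(keep_rates_near_zero(7.6, 2.0))  # True: the guidance still binds
print(keep_rates_near_zero(6.4, 2.0))  # False: the threshold has been crossed
```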

In his major September 2012 paper, Woodford argued that the latter step, the $85 billion in purchases every month, doesn't even matter, because "'portfolio-balance effects' do not exist in a modern, general-equilibrium theory of asset prices." At best, such QE-related purchases "can be helpful as ways of changing expectations about future policy — essentially, as a type of signalling that can usefully supplement purely verbal forms of forward guidance." (He even calls the idea that purchases matter "1950s-vintage," which is as cutting as you can get as a macroeconomist.)

To put it a different way, the Fed's use of the portfolio channel only matters to the extent that the Fed isn't being clear in its written statements about future interest rate policy and other means of setting expectations.

Woodford specifically called out research by the Peterson Institute for International Economics’ Joseph Gagnon (a friend of the blog). Contra Woodford, Gagnon et al. concluded in their research, “[QE] purchases led to economically meaningful and long-lasting reductions in longer-term interest rates on a range of securities [that] reflect lower risk premiums, including term premiums, rather than lower expectations of future short-term interest rates.” (Woodford thinks that expectations of future short-term interest rates are the only thing at play here.)

Woodford's analysis of this research immediately came under attack in the blogosphere. James Hamilton at Econbrowser noted that “Gagnon, et. al.'s finding has also been confirmed by a number of other researchers using very different data sets and methods.” Gagnon himself responded here, defending the research and noting that Woodford’s theoretical assumptions “are violated in clear and obvious ways in the real world.” (If you are interested in the nitty-gritty of the research agenda, it is worth following the links.)
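For a flavor of the event-study methods those links debate, here is a minimal sketch; the two dates are well-known QE1 announcement dates, but the yields are fabricated placeholders, not numbers from Gagnon et al. or Hamilton:

```python
import pandas as pd

# Minimal event-study sketch in the spirit of this literature. The 10-year
# yields below are fabricated placeholders, not anyone's actual data.
yields = pd.Series({
    "2008-11-24": 3.35, "2008-11-25": 3.11,   # LSAP program first announced
    "2008-12-15": 2.53, "2008-12-16": 2.38,   # FOMC signals more purchases
})
yields.index = pd.to_datetime(yields.index)

events = [("2008-11-24", "2008-11-25"), ("2008-12-15", "2008-12-16")]
for before, after in events:
    move = yields[after] - yields[before]     # close-to-close change around the news
    print(f"{after}: {move:+.2f} percentage points")
```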

For our purposes, what can the taper explain for us? Remember, in Woodford’s world of strict expectations, QE purchases don’t matter. Since the purchases don't matter, moving future purchases up or down at the margins, keeping the expected future path of short-term interest rates constant, shouldn't matter either. Raising the $85 billion to $100 billion wouldn't help, and lowering it to $70 billion wouldn't hurt, unless Bernanke also moved the expectations of future policy.

The taper was a test case for this theory. Bernanke meant to keep expectations of future short-term interest rates the same (there was no change to when interest rates would rise) while reducing the flow of QE purchases. But from our first readings, it has been accepted by the market as a major tightening of policy. This strikes me as a major victory for Gagnon and a loss for the strongest versions of the expectations channel.

Of course, the taper could be a signal that Bernanke has lost his coalition or is otherwise going soft on expansionary policy. If that’s the case, then according to the stronger version of the expectations theory, QE3 should never have been started, because it adds no value and is just another thing that could go wrong. Bernanke should just have focused on crafting a more articulate press release instead. This doesn't seem the right lesson when a body of research argues purchases are making a difference.

An objective bystander would say that if the taper is being read as tightening even though future expectations language is the same, it means that we should be throwing everything we have at the problem because everything is in play. That includes fiscal policy. As Woodford writes, “[t]he most obvious source of a boost to current aggregate demand that would not depend solely on expectational channels is fiscal stimulus.” We should be expanding, rather than contracting, the portfolio channel, while also avoiding the sequester and extending the payroll tax cut. Arguments, like Woodford's, about the supremacy of any one approach tend to get knocked down by all the other concerns.


What’s New in the New Surveillance State?

Jun 11, 2013Mike Konczal

I had a post at Wonkblog over the weekend, “Is a democratic surveillance state possible?”

In some sense, the issue of the government spying and collecting data on its citizens isn’t a new problem. One of my favorite tweets of the past week was Brooke Jarvis noting "Collapsing bridges alongside massive spy networks... Ah, the Jeffersonian ideal of government."

The United States has been tracking, observing, and surveilling its citizens for centuries. That includes that long-standing form of communication, the mail. As Senator Lindsey Graham just said, “In World War II... you wrote a letter overseas, it got censored...If I thought censoring the mail was necessary, I would suggest it.”

From the Census in the Constitution to the Cold War spy network (including the NSA, founded in 1952 through the Executive Branch), maybe this should be seen as a continuation of an old issue rather than a brand new one. But I think there are genuinely new and interesting problems with the 21st century Surveillance State and the brand new digital technologies that create the foundations for it. What’s new about the new surveillance state?

1. It’s always on and always has been. Old acts of surveillance had to be triggered and were forward-looking. However, we now spend so much of our lives online, and that activity is always being recorded. As the leaker Edward Snowden said in his interview, “they can use this system to go back in time and scrutinize every decision you've ever made, every friend you've ever discussed something with. And attack you on that basis to sort of derive suspicion from an innocent life and paint anyone in the context of a wrongdoer."

To the extent that old surveillance was capable of going back, it was by checking old records or interrogating old sources. And there the concept of amnesia comes into play.

2. It will never forget. “Amnesia” is a normal front line of defense. People forget things. Clear memories, stories, and ideas become grey. Photos and documents get lost with time. Trying to piece together history will necessarily involve a lot of gaps and poor recollection.

Not with the surveillance state. Cheap digital storage means that clear, easily replicable data will exist for the foreseeable future.

3. It scales easily. If the FBI was keeping records on 100 people in the 1950s, and it then wanted to monitor 1,000 people, it would probably need 10 times as many resources. Certainly it wouldn’t be effortless to scale up that level of surveillance.

As we can see in the age of Big Data and fast computing, this is no longer the case. The difference in resource costs between accessing your phone’s metadata history and accessing all phones’ metadata history is going to be (somewhat) trivial. And the fact that there’s no amnesia means that you’ll always have access to that extra data.
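A toy illustration of the scaling point, with entirely invented records: once the metadata sits in a queryable store, one person's history and everyone's history take about the same effort to pull.

```python
# Toy illustration, all records invented: (callee, date, duration) tuples
# keyed by caller. The old linear cost of adding surveillance targets is gone.
metadata = {
    "555-0101": [("555-0199", "2013-06-01", 120), ("555-0142", "2013-06-02", 45)],
    "555-0142": [("555-0101", "2013-06-02", 45)],
}

one_target = metadata["555-0101"]                                        # the 1950s case
every_target = [call for calls in metadata.values() for call in calls]   # the new case

print(len(one_target), len(every_target))
```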

4. It’s designed to be accessible. As Orin Kerr emphasizes, digital data here isn’t collected or surveilled via the human senses. A person can’t simply “peek” into your email the way they could peek at your physical mail. Instead, devices need to be installed to access and make sense of this data. Private sector agents will do this, because it is part of their business model to make this information accessible. These access points will also be accessible to government agents under certain conditions - part of the major debate over the PRISM program is over exactly what those conditions are.

5. It’s primarily driven by the private sector. Broadly speaking, measures of democratic accountability and constitutional protections do not extend to the private sector. More on this soon, but everything from the Freedom of Information Act to the Administrative Procedure Act to our whole regime of transparency laws does not apply to outside businesses. The government has worked with private groups on surveillance before, but here it is in large part driven by private agents, both as contractors and as gatherers of the information itself.

6. It predicts the future for individuals using mass data. Surveillance has generally used mass data to either predict or determine future courses of action on a mass scale. For instance, Census data is used to allocate federal money or predict population growth. Alternatively, surveillance uses individual data to analyze individual behavior - asking around and snooping to dig up dirt on someone, for instance.

The surveillance state, however, allows for using mass data to predict the actions of individuals and groups of individuals. This is what generates your Netflix and Amazon suggestions, but it is also now providing the basis for government actions. As Kieran Healy notes, this would have been interesting back in the American Revolution.
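To see the logic of mass data predicting the individual, here's a toy, Netflix-style nearest-neighbor sketch - an illustration of the general technique only, not any actual government system, and every record in it is made up:

```python
import numpy as np

# Rows are people, columns are items (purchases, sites, calls); 1 means
# observed, 0 means not. All records fabricated for illustration.
records = np.array([
    [1, 1, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 1],
    [1, 0, 0, 1, 0],
], dtype=float)

def predict_next(person, data):
    """Score this person's unobserved items by pooling the records of
    similar people, weighted by cosine similarity."""
    target = data[person]
    norms = np.linalg.norm(data, axis=1) * np.linalg.norm(target)
    sims = (data @ target) / np.where(norms == 0, 1.0, norms)
    sims[person] = 0.0                   # exclude the person's own row
    scores = sims @ data                 # what do similar people do?
    scores[target > 0] = -np.inf         # rank only what we have not yet seen
    return scores

print(predict_next(3, records))          # highest score = predicted next behavior
```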

This is distinct from the normal Seeing Like a State (SLS) critique of how states see their citizens. On that view, states produce “a logic of homogenization and the virtual elimination of local knowledge...an agency of homogenization, uniformity, grids and heroic simplification” (SLS 302, 8). But rather than flattening or homogenizing its citizens when it observes them in bulk, the new surveillance state creates a remarkably individualized image of what its citizens are up to.

What else is missing, or shouldn't have been listed? You could view these as a technological evolution of what was already in place, and in some ways that would make sense. But the technology has opened a brand new field. This existed before the War on Terror, and will likely exist afterwards; dealing with the laws and institutions behind this new state is crucial. As the technology has changed, so must our laws.


We Already Tried Libertarianism - It Was Called Feudalism

Jun 11, 2013Mike Konczal

Bob Dole recently said that neither he nor Ronald Reagan would count as conservatives these days. It’s worth noting that John Locke probably wouldn’t count as a libertarian these days, either.

Michael Lind had a column in Salon in which he asked, “[i]f libertarians are correct in claiming that they understand how best to organize a modern society, how is it that not a single country in the world in the early twenty-first century is organized along libertarian lines?” EJ Dionne agrees. Several libertarians argue that the present is no guide, because the (seasteading?) future belongs to libertarians.

I’d actually go in a different direction and say the past belonged to libertarians. We tried libertarianism for a long time; it was called feudalism. That modern-day libertarianism of the Nozick-Rand-Rothbard variety resembles feudalism, rather than some variety of modern liberalism, is a great point made by Samuel Freeman in his paper "Illiberal Libertarians: Why Libertarianism Is Not a Liberal View." Let’s walk through it.

Freeman notes that there are several key institutional features of liberal political structures shared across a variety of theorists. First, there’s a set of basic rights each person equally shares (speech, association, thought, religion, conscience, voting and holding office, etc.) that are both fundamental and inalienable (more on those terms in a bit). Second, there’s a public political authority which is impartial, institutional, continuous, and held in trust to be acted on in a representative capacity. Third, positions should be open to talented individuals alongside some fairness in equality of opportunity. And last, there’s a role for government in the market: providing public goods, checking market failures, and providing a social minimum.

The libertarian state, centered solely around ideas of private property, stands in contrast to all of these. I want to stick with the libertarian minimal state laid out by Robert Nozick in Anarchy, State, and Utopia (ASU), as it's a landmark in libertarian thought, and I just re-read it and wanted to write something about it. Let’s look at how it handles each of the political features laid out above.

Rights. Libertarians would say that of course they believe in basic rights, maybe even more than liberals! But there’s a subtle trick here.

For liberals, basic rights are fundamental, in the sense that they can’t be compromised or traded against other, non-basic rights. They are also inalienable; I can’t contractually transfer away or otherwise give up my basic rights. To the extent that I enter contracts that do this, I have an option of exit that restores those rights.

This is different from property rights in specific things. Picture yourself as a person with a basic right to association, who also owns a wooden stick. You can sell your stick, or break it, or set it on fire. Your rights over the stick are alienable - you don’t have the stick anymore once you’ve done those things. Your rights to the stick are also not fundamental. Given justification, the public could regulate its use (say if it were a big stick turned into a bridge, it may need to meet safety requirements), in a way that the liberal state couldn’t regulate freedom of association.

When libertarians say they are for basic rights, what they are really saying is that they are for treating what liberals consider basic rights as property rights. Basic rights receive no more, or less, protection than other property rights. You can easily give them up or bargain them away, and thus alienate yourself from them. (Meanwhile, all property rights are entirely fundamental - they can never be regulated.)

How is that possible? Let’s cut to the chase: Nozick argues you can sell yourself into slavery, a condition under which all basic liberties are extinguished. (“[Would] a free system... allow him to sell himself into slavery[?] I believe that it would.” ASU 331) The minimal libertarian state would be forced to acknowledge and enforce contracts that permanently alienate basic liberties, even if the person in question later wanted out, although the liberal state would not at any point acknowledge such a contract.

If the recession were so bad that millions of people started selling themselves into slavery, or entering contracts that required lifelong feudal oaths to employers and forgoing basic rights, in order to survive, this would raise no important liberty questions for the libertarian minimal state. If this new feudal order were set up in such a way that it persisted across generations, again, no problem. As Freeman notes, “what is fundamentally important for libertarians is maintaining a system of historically generated property rights...no attention is given to maintaining the basic rights, liberties, and powers that (according to liberals) are needed to institutionally define a person’s freedom, independence, and status as an equal citizen.”

Government. Which brings us to feudalism. Feudalism, for Freeman, means “the elements of political authority are powers that are held personally by individuals, not by enduring political institutions... subjects’ political obligations and allegiances are voluntary and personal: They arise out of private contractual obligations and are owed to particular persons.”

What is the libertarian government? For Nozick, the minimal state is basically a protection racket (“protection services”) with a certain kind of returns to scale over an area and, after some mental cartwheels, a justification in forcing holdouts in their area to follow their rules.

As such, it is a network of private contracts, arising solely from protection and arbitration services, where political power remains in private hands and is privately exercised. The protection of rights is based on people’s ability to pay, bound through private authority and bilateral, individual contracts. “Protection and enforcement of people’s rights is treated as an economic good to be provided by the market,” (ASU 26) with governments as for-profit corporate entities.

What doesn’t this have? There is no impartial, public power. There’s no legislative capacity that is answerable to the people in a non-market form. There’s no democracy and universal franchise with equal rights of participation. Political power isn’t to be acted on in a representative capacity toward public benefit, but instead toward private ends. Which is to say, it takes the features we associate with public, liberal government power and replaces them with feudal, private governance.

Opportunity. Liberals believe that positions should be open to all with talent, and that public power should be utilized to ensure disadvantaged groups have access to opportunities. Libertarianism holds that private, feudal systems of exclusion, hierarchy, and domination are perfectly fine, or at least that there is no legitimate public purpose in checking these private relationships. As mentioned above, private property rights are fundamental and cannot be balanced against other concerns like opportunity. Nozick is clear on this (“No one has a right to something whose realization requires certain uses of things and activities that other people have rights and entitlements over.” ASU 238).

Do we need more? How about Rand Paul, one of the leading advocates for libertarianism, explaining why he wouldn’t vote for the Civil Rights Act: “I abhor racism. I think it’s a bad business decision to exclude anybody from your restaurant — but, at the same time, I do believe in private ownership.”

Markets. The same goes for markets, where Nozick is pretty clear: no interference. “Taxation of earnings from labor is on a par with forced labor.” (ASU, 169) Nozick thinks it is likely that his entitlement theory will lead to an efficient distribution of resources and avoid market problems, but he doesn’t particularly require it and contrasts himself with end-staters who assume it will. “Distribution according to benefits to others is a major patterned strand in a free capitalist society, as Hayek correctly points out, but it is only a strand and does not constitute the whole pattern of a system of entitlements.” (ASU 158)

I sometimes see arguments about how bringing “markets” into the provision of government services makes it more libertarian. Privatizing Social Security, bringing premium support to Medicare, or having vouchers for public education is more libertarian than the status quo. Again, it’s not clear to me why libertarians would think taxation for public, in-kind provisioning is a form of slavery and forced labor while running these services through private agents is not.

You could argue that introducing markets into government services respects economic liberty as a basic liberty, or does a better job of providing for the worst off, or leaves us all better off overall. But these aren’t libertarian arguments; they are the types of arguments Nozick spends Part II of ASU taunting, trolling, or otherwise bulldozing.

Three last thoughts. (1) Do read Atossa Abrahamian on actually existing seasteading. (2) It’s ironic that liberalism first arose to bury feudal systems of private political power, and now libertarians claim the future of liberalism is in bringing back those very same systems of feudalism. (3) Sometimes libertarians complain that the New Deal took the name liberal, which is something they want to claim for themselves. But looking at their preferred system as it is, I think people like me will be keeping the name “liberal.” We do a better job with it.

Follow or contact the Rortybomb blog:

  

 

Bob Dole recently said that neither he nor Ronald Reagan would count as conservatives these days. It’s worth noting that John Locke probably wouldn’t count as a libertarian these days, either.

Michael Lind had a column in Salon in which he asked, “[i]f libertarians are correct in claiming that they understand how best to organize a modern society, how is it that not a single country in the world in the early twenty-first century is organized along libertarian lines?” EJ Dionne agrees. Several libertarians argue that the present is no guide, because the (seasteading?) future belongs to libertarians.

I’d actually go in a different direction and say the past belonged to libertarians. We tried libertarianism for a long time; it was called feudalism. That modern-day libertarianism of the Nozick-Rand-Rothbard variety resembles feudalism, rather than some variety of modern liberalism, is a great point made by Samuel Freeman in his paper "Illiberal Libertarians: Why Libertarianism Is Not a Liberal View." Let’s walk through it.

Freeman notes that there are several key institutional features of liberal political structures shared across a variety of theorists. First, there’s a set of basic rights each person equally shares (speech, association, thought, religion, conscience, voting and holding office, etc.) that are both fundamental and inalienable (more on those terms in a bit). Second, there’s a public political authority that is impartial, institutional, continuous, and held in trust to be acted on in a representative capacity. Third, positions should be open to talented individuals, alongside some measure of fair equality of opportunity. And last, there’s a role for governments in the market: providing public goods, checking market failure, and providing a social minimum.

The libertarian state, centered solely around ideas of private property, stands in contrast to all of these. I want to stick with the libertarian minimal state laid out by Robert Nozick in Anarchy, State, and Utopia (ASU), as it's a landmark in libertarian thought, and I just re-read it and wanted to write something about it. Let’s look at how it handles each of the political features laid out above.

Rights. Libertarians would say that of course they believe in basic rights, maybe even more than liberals! But there’s a subtle trick here.

For liberals, basic rights are fundamental, in the sense that they can’t be compromised or traded against other, non-basic rights. They are also inalienable; I can’t contractually transfer away or otherwise give up my basic rights. To the extent that I enter contracts that do this, I have an option of exit that restores those rights.

This is different from property rights in specific things. Picture yourself as a person with a basic right to association, who also owns a wooden stick. You can sell your stick, or break it, or set it on fire. Your rights over the stick are alienable: you don’t have the stick anymore once you’ve done those things. Your rights to the stick are also not fundamental. Given justification, the public could regulate its use (say, if it were a big stick turned into a bridge, it might need to meet safety requirements) in a way that the liberal state couldn’t regulate freedom of association.

When libertarians say they are for basic rights, what they are really saying is that they are for treating what liberals consider basic rights as property rights. Basic rights receive no more, or less, protection than other property rights. You can easily give them up or bargain them away, and thus alienate yourself from them. (Meanwhile, all property rights are entirely fundamental: they can never be regulated.)

How is that possible? Let’s cut to the chase: Nozick argues you can sell yourself into slavery, a condition under which all basic liberties are extinguished. (“[Would] a free system... allow him to sell himself into slavery[?] I believe that it would.” ASU 331) The minimal libertarian state would be forced to acknowledge and enforce contracts that permanently alienate basic liberties, even if the person in question later wanted out, whereas the liberal state would never acknowledge such a contract.

If the recession were so bad that millions of people started selling themselves into slavery, or entering contracts that required lifelong feudal oaths to employers and the forgoing of basic rights, in order to survive, this would raise no important liberty questions for the libertarian minimal state. If this new feudal order were set up in such a way that it persisted across generations, again, no problem. As Freeman notes, “what is fundamentally important for libertarians is maintaining a system of historically generated property rights...no attention is given to maintaining the basic rights, liberties, and powers that (according to liberals) are needed to institutionally define a person’s freedom, independence, and status as an equal citizen.”

Government. Which brings us to feudalism. Feudalism, for Freeman, means “the elements of political authority are powers that are held personally by individuals, not by enduring political institutions... subjects’ political obligations and allegiances are voluntary and personal: They arise out of private contractual obligations and are owed to particular persons.”

What is the libertarian government? For Nozick, the minimal state is basically a protection racket (“protection services”) with a certain kind of returns to scale over an area and, after some mental cartwheels, a justification for forcing holdouts in its territory to follow its rules.

As such, it is a network of private contracts, arising solely from protection and arbitration services, where political power also remains in private hands and is privately exercised. The protection of rights is based on people’s ability to pay, bound through private authority and bilateral, individual contracts. “Protection and enforcement of people’s rights is treated as an economic good to be provided by the market,” (ASU 26) with governments as for-profit corporate entities.

What doesn’t this have? There is no impartial, public power. There’s no legislative capacity that is answerable to the people in a non-market form. There’s no democracy, no universal franchise with equal rights of participation. Political power isn’t to be acted on in a representative capacity toward public benefit, but instead toward private ends. Which is to say, it takes the features we associate with public, liberal government power and replaces them with feudal, private governance.

Opportunity. Liberals believe that positions should be open for all with talent, and that public power should be utilized to ensure disadvantaged groups have access to opportunities. Libertarianism believes that private, feudal systems of exclusion, hierarchy, and domination are perfectly fine, or at least that there is no legitimate public purpose in checking these private relationships. As mentioned above, private property rights are fundamental and cannot be balanced against other concerns like opportunity. Nozick is clear on this (“No one has a right to something whose realization requires certain uses of things and activities that other people have rights and entitlements over.” ASU 238).

Do we need more? How about Rand Paul, one of the leading advocates for libertarianism, explaining why he wouldn’t vote for the Civil Rights Act: “I abhor racism. I think it’s a bad business decision to exclude anybody from your restaurant — but, at the same time, I do believe in private ownership.”

Markets. The same goes for markets, where Nozick is pretty clear: no interference. “Taxation of earnings from labor is on a par with forced labor.” (ASU 169) Nozick thinks it is likely that his entitlement theory will lead to an efficient distribution of resources and avoid market problems, but he doesn’t particularly require it and contrasts himself with end-staters who assume it will. “Distribution according to benefits to others is a major patterned strand in a free capitalist society, as Hayek correctly points out, but it is only a strand and does not constitute the whole pattern of a system of entitlements.” (ASU 158)

I sometimes see arguments about how bringing “markets” into the provision of government services makes it more libertarian: privatizing Social Security, bringing premium support to Medicare, or introducing vouchers for public education is said to be more libertarian than the status quo. Again, it’s not clear to me why libertarians would think taxation for public, in-kind provisioning is a form of slavery and forced labor while running these services through private agents is not.

You could argue that introducing markets into government services respects economic liberty as a basic liberty, or does a better job of providing for the worst off, or leaves us all better off overall. But these aren’t libertarian arguments; they are the types of arguments Nozick spends Part II of ASU taunting, trolling, or otherwise bulldozing.

Three last thoughts. (1) Do read Atossa Abrahamian on actually existing seasteading. (2) It’s ironic that liberalism first arose to bury feudal systems of private political power, and now libertarians claim the future of liberalism is in bringing back those very same systems of feudalism. (3) Sometimes libertarians complain that the New Deal took the name liberal, which is something they want to claim for themselves. But looking at their preferred system as it is, I think people like me will be keeping the name “liberal.” We do a better job with it.
