The UNC Coup and the Second Limit of Economic Liberalism

Nov 13, 2014Mike Konczal

There was a quiet revolution in the University of North Carolina higher education system in August, one that shows an important limit of current liberal thought. In the aftermath of the 2014 election, there’s been a significant amount of discussion over whether liberals have an economic agenda designed for the working and middle classes. This discussion has primarily been about wages in the middle of the income distribution, which are the first major limit of liberal thought; however, it is also tied to a second limit, which is the way that liberals want to provide public goods and services.

So what happened? The UNC System Board of Governors voted unanimously to cap the share of tuition revenue that can be used for need-based financial aid at 15 percent. With tuition going up rapidly at public universities as a result of public disinvestment, administrators have recently begun using general tuition revenue to supplement their ability to provide aid. This cross-subsidization has been heralded as a solution to the problem of high college costs. The sticker price is high, but the net price for poorer students stays low.
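To make the cross-subsidy mechanics concrete, here's a toy calculation. The tuition level, enrollment mix, and aid shares are invented for illustration; they are not UNC's actual numbers.

```python
# Toy model of tuition cross-subsidization (all numbers hypothetical).
sticker_tuition = 8_000       # posted tuition per student, in dollars
students = 10_000             # total enrollment
share_needy = 0.30            # fraction of students receiving need-based aid

def net_price_for_needy(aid_share):
    """Net price paid by a needy student when `aid_share` of all
    tuition revenue is recycled into need-based grants."""
    total_revenue = sticker_tuition * students
    aid_pool = aid_share * total_revenue
    grant_per_needy_student = aid_pool / (students * share_needy)
    return sticker_tuition - grant_per_needy_student

# Uncapped cross-subsidy: say 25% of tuition revenue goes to aid.
print(net_price_for_needy(0.25))   # roughly $1,300 out of pocket
# Under a 15% cap, the same student pays more out of pocket.
print(net_price_for_needy(0.15))   # $4,000 out of pocket
```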

This system works as long as there is sufficient middle-class buy-in, but it’s now capped at UNC. As a board member told the local press, the burden of providing need-based aid “has become unfairly apportioned to working North Carolinians,” and this new policy helps prevent that. Iowa implemented a similar cap back in 2013. And as Kevin Kiley has reported for Inside Higher Ed, similar proposals have been floated in Arizona and Virginia. This trend is likely to gain strength as states continue to disinvest.

The problem for liberals isn’t just that there’s no way for them to win this argument with middle-class wages stagnating, though that is a problem. The far bigger issue for liberals is that this is a false choice, a real class antagonism that has been created entirely by the process of state disinvestment, privatization, cost-shifting of tuitions away from general revenues to individuals, and the subsequent explosion in student debt. As long as liberals continue to play this game, they’ll be undermining their chances.

First Limit: Middle-Class Wages

There’s been a wave of commentary about how the Democrats don’t have a middle-class wage agenda. David Leonhardt wrote the core essay, “The Great Wage Slowdown, Looming Over Politics,” with its opening line: “How does the Democratic Party plan to lift stagnant middle-class incomes?” Josh Marshall made the same argument as well. The Democrats have many smart ideas on the essential agenda of reducing poverty, most of which derive from pegging the low-end wage at a higher level and then adding cash or cash-like transfers to fill in the rest. But what about the middle class?

One obvious answer is “full employment.” Running the economy at full steam is the most straightforward way of boosting overall wages and perhaps reversing the growth in the capital share of income. However, that approach hasn’t been adopted by the President, strategically or even rhetorically. Part of the reason might be that if the economy is terrible because of vague forces, like technological change and the necessary pain that follows a financial crisis, then the Democrats can’t really be blamed for stagnation. That strategy will not work out for them.

The Democrats (and even many liberals in general) also haven’t developed a story about why inequality matters so much for the middle class. There are such stories, of course: the collapse of high progressive taxation creates incentives to rent seek, financialization makes the economy focused less on innovation and more on disgorging the cash, and new platform monopolies are deploying forms of market power that are increasingly worrisome.

Second Limit: Public Provisioning

A similar dynamic is in play with social goods. The liberal strategy is increasingly to leave the provisioning of social goods to the market, while providing coupons for the poorest to afford those goods. By definition, means-testing this way puts high implicit taxes on poorer people in a way that decommodification does not. But beyond that simple point, this leaves middle-class people in a bind, as the ability of the state to provide access and contain costs efficiently through its scale doesn’t benefit them, and stagnating incomes put even more pressure on them.

As noted, antagonisms between the middle class and the poor in higher education are entirely a function of public disinvestment. The moment higher education is designed to put massive costs onto individual students, those individuals are forced to look out only for themselves. If college tuition were largely free, paid for by all people and income sources, there’d be no need for a working-class or middle-class student to view poorer students as a direct threat to their economic stability. And there's no better way to prematurely destroy a broader liberal agenda than by designing a system that creates these conflicts.

These worries are real. The incomes of recent graduates are stagnating as well. The average length of time people take to pay off their student loans is up 80 percent, to over 13 years. Meanwhile, as Janet Yellen recently showed, student debt is rising as a percentage of income for everyone below the top 5 percent. It’s not surprising that studies find student debt affecting family formation and small business creation, and that people are increasingly looking out for just themselves.

You could imagine committing to lowering costs broadly across the system, say through the proposal by Sara Goldrick-Rab and Nancy Kendall to make the first two years of college free. But Democrats aren't doing this. Instead, President Obama’s solution is to try to make students better consumers on the front end, with more disclosures and outcome surveys for schools, and to make the lowest-income graduates better debtors on the back end, with caps on how burdensome student debt can be. These solutions are not designed to contain the costs of higher education in any substantial way and, crucially, they don’t increase public buy-in and interest in public higher education.

The Relevance for the ACA

I brought up higher education because it matters in its own right, but also because it can help explain the lack of political payoff from the Affordable Care Act. The law is here, and it is not only meeting expectations, it’s exceeding them in major ways. Yet it still remains unpopular, even as millions of people use the exchanges. There is no political payoff for the Democrats.

Liberals chalk this up to the right-wing noise machine, and no doubt that hurts. But part of the problem is that middle-class individuals still end up facing an individual product they are purchasing in a market, often with little or no subsidy. Though the insurance is better regulated, serious cost controls have so far not been part of the discussion. Polling shows half of the users of the exchanges are unsure whether they can make their payments and are worried about being able to afford getting sick. This, in turn, blocks the formation of a broad-based coalition capable of defending, sustaining, and expanding the ACA in the way such coalitions formed for Social Security and Medicare.

Any serious populist agenda will have to have a broader agenda for wages, with full employment as the central idea. But it will also need to include social programs that are broader based and focused on cost controls; here, luckily, the public option is a perfect organizing metaphor.

On Public and Profits at Boston Review

Nov 12, 2014Mike Konczal

Did you know that prosecutors were paid based on how many cases they tried in the 19th century? Or that Adam Smith argued for judges running on the profit motive in the Wealth of Nations? I have a new piece at Boston Review discussing the rise and fall of disinterested public service as a response to the abuses of the profit motive in government service: how we got away from that system, and how we are now going back to it. It's called Selling Fast: Public Goods, Profits, and State Legitimacy.

It's a review of Against the Profit Motive: The Salary Revolution in American Government, 1780–1940 by Yale legal historian Nicholas R. Parrillo, The Teacher Wars by Dana Goldstein, and Rise of the Warrior Cop by Radley Balko. There are a lot of interesting threads running through all three, and I really enjoyed working on this review. I hope you check it out.

In Blowout Aftermath, Remember GDP Growth Was Slower in 2013 Than in 2012

Nov 5, 2014Mike Konczal

In the aftermath of the electoral blowout, a reminder: the Great Recession isn't over. In fact, GDP growth was slower in 2013 than in 2012. Let's go to the FRED data:

There are dotted lines added at the end of 2012 to give you a sense that the economy didn't speed up throughout 2013. Even though we were another year into the "recovery," GDP growth slowed down a bit.

There are a lot of reasons people haven't discussed it this way. I saw a lot of people using year-over-year GDP growth for 2013 and proclaiming it a major success. A problem with using that method for a single point is that it's very sensitive to what is happening around the endpoints, and indeed the quarters before and after that data point featured negative or near-zero growth. Averaging it out (or even doing year-over-year on a longer scale) shows a much worse story. Also, much of the celebrated convergence between the two years was really the BEA finding more austerity in 2012. (I added a line going back to 2011 to show that the overall growth rate has been lower since then. According to David Beckworth, this is the point when fiscal tightening began.)

Other people were hoping that the Evans Rule and open-ended purchases could stabilize "expectations" of inflation regardless of underlying changes in economic activity (I was one of them), which didn't happen. And yet others knew the sequestration was put into place and was unlikely to be moved, so they figured they might as well make lemonade out of the austerity.

And that's overall growth. Wages are even uglier. (Note that in an election meant to repudiate liberalism, minimum wage hikes passed with flying colors.) The Federal Reserve's Survey of Consumer Finances is not a bomb-throwing document, but it's hard not to read class war into the latest one. From 2010 to 2013, from a year after the recession ended until last year, median incomes fell:

When 45 percent of the electorate ranks the economy as the top issue in exit polls, and the economy performs like it does here, it's no wonder we're having wave election after wave election of discontent.

Rortybomb on the March: Special Washington Monthly Inequality Issue and The Nation

Nov 4, 2014Mike Konczal

Hey everyone, I have two new pieces out there I hope you check out.

The first is a piece about the financialization of the economy in the latest Washington Monthly. I'm heading up a new project at Roosevelt on financialization, with more details to come soon, and this essay is its first product. And I'm happy to have it as part of a special issue on inequality and the economy headed up by the fine people at the Washington Center for Equitable Growth. There's a ton of great stuff in there, including an intro by Heather Boushey, Ann O'Leary on early childhood programs, Alan Blinder on boosting wages, and a conclusion by Joe Stiglitz. It's all worth reading, and I hope it shows a deeper and wider understanding of an inequality agenda.

The second is the latest The Score column at The Nation, which focuses on the effect of high tax rates on inequality and on how markets are structured. It's a writeup of the excellent Saez, Piketty, and Stantcheva Three Elasticities paper, and a continuation of a post here at this blog.

Finance 101 Problems in National Affairs' Case For Fair-Value Accounting

Nov 4, 2014Mike Konczal

In the latest National Affairs, Jason Delisle and Jason Richwine make what they call “The Case for Fair-Value Accounting.” This is the practice of using the prices that, say, student loans command in the capital markets to budget for and discount government student loans. (The issue also has articles walking back support for previously acceptable moderate-right ideas like Common Core and the EITC, showing the way conservative wonks are starting to line up for 2016.)
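For readers who haven't seen the mechanics, here's a minimal sketch of what the discounting fight is about. The cash flows and rates below are invented, not actual CBO or Treasury figures; the point is only that the discount rate you pick determines whether the same loan program scores as a gain or a cost.

```python
# Sketch: how the discount rate flips a loan program's budget "cost."
# All cash flows and rates are hypothetical.

def present_value(cash_flows, rate):
    """Discount a list of annual cash flows back to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

loan_disbursed = 100.0
expected_repayments = [12.0] * 10     # expected repayments, already net of expected defaults

treasury_rate = 0.03                  # discounting at the government's own borrowing rate
fair_value_rate = 0.07                # discounting at a rate private lenders would demand

print(present_value(expected_repayments, treasury_rate) - loan_disbursed)   # ~ +2.4: program "makes money"
print(present_value(expected_repayments, fair_value_rate) - loan_disbursed) # ~ -15.7: program "costs money"
```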

In the piece Delisle and Richwine make two basic mistakes in financial theory, mistakes that undermine their ultimate argument. Let’s dig into them, because it’s a wonderful opportunity to get some finance back into this blog (like it used to have back when it was cool).

Error 1: Their Definition of FVA Is Wrong

What is fair-value accounting (FVA)? According to the authors, FVA “factors in the cost of market risk,” meaning “the risk of a general downturn in the economy.” This market risk reflects the potential for defaults; it’s “the cost of the uncertainty surrounding future loan payments.”

These statements are false. There is a consensus that FVA incorporates significantly more than this definition of market risk.

Here’s the Financial Economists Roundtable, endorsing FVA: "Use of Treasury rates as discount factors, however, fails to account for the costs of the risks associated with government credit assistance -- namely, market risk, prepayment risk, and liquidity risk."

And the CBO specifically incorporates all these additional risks when it evaluates FVA: "Student loans also entail prepayment risk… investors… also assign a price to other types of risk, such as liquidity risk… CBO takes into account all of those risks in its fair-value estimates."

This is a much broader set of concerns than what Delisle and Richwine bring up. For instance, FVA requires taxpayers to be treated as subject to the same liquidity and prepayment risks as the capital markets. Remember when the federal government stepped in to provide liquidity in late 2008, precisely because the capital markets couldn’t? That gives us a clue that there might be some differences between public and private risks.

Crucially, it’s not clear to me that taxpayers have the same prepayment risk as the capital markets. Private holders of student loans are terrified that their loans might be paid back too quickly, because they are likely to get paid back when interest rates are low and it will be tough to reinvest at the same rate. This is a particularly big risk with the negative convexity of student loan payments, which can be prepaid without penalty. Private actors need to be compensated generously for this risk.
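A minimal sketch of that reinvestment problem, with invented numbers:

```python
# Sketch of reinvestment risk from prepayment (all numbers hypothetical).
# A private holder owns a loan yielding 8%. Rates fall to 3%, the borrower
# refinances, and the holder must reinvest the principal at the new low rate.

principal = 100.0
loan_rate = 0.08
reinvest_rate = 0.03
years_remaining = 5

hold_to_maturity = principal * (1 + loan_rate) ** years_remaining
prepaid_and_reinvested = principal * (1 + reinvest_rate) ** years_remaining

print(hold_to_maturity)        # ~146.9
print(prepaid_and_reinvested)  # ~115.9: the gap is what private holders demand compensation for
```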

Do taxpayers face the same risk? If student loans owed to the government were paid down faster than anyone expected, would taxpayers be furious? I wouldn’t. I certainly wouldn’t say “how are we going to continue to make the profit we were making?” as a citizen, though it would be an essential question as a private bondholder. Either way, it’s as much a political question as an economic one. (I make the full argument for this in a blog post here.)

Error 2: Their Definition of Market Risk Is Wrong

The authors like FVA because it accounts for market risk. But what is market risk? According to Delisle and Richwine, market risk is “associated with expecting future loan repayments,” as “[s]tudents might pay back the expected principal and interest” but they also may not. It is also “the risk of a general downturn in the economy… market risk cannot be diversified away.”

So the first part is wrong: market risk is not credit risk, the risk of default or missed payments. International Financial Reporting Standards (IFRS 7), for instance, require reporting market risk separately from credit risk, because they are obviously two different things. I’ve generally only heard market risk used in the context of bond portfolios to mean interest rate risk, which the authors also don’t mention. So if market risk isn’t credit risk or interest rate risk, what is it?

I’m not sure. What I think is going on is they are confusing the concept with the market risk of a stock, specifically its beta. A stock’s beta is its sensitivity to overall equity prices. (Pull up a random stock page and you’ll see the beta somewhere.) It’s very common phrasing to say this risk can’t be diversified away and is a proxy for the risk of general downturns in the economy, which is the same language used in this piece.

Market risk for stocks is the question of how much your portfolio will go down if the market as a whole goes down. But this has nothing to do with student loans, because students (aside from an enterprising few) don’t sell equity; they take out loans. If students paid for school with equity, in theory an economic downturn would lead to less revenue, since students would make less money overall. But even then it’s a shaky concept.
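For the record, here's roughly how a stock's beta is estimated in practice, a minimal sketch using simulated placeholder returns rather than real data:

```python
import numpy as np

# Sketch: beta as the sensitivity of an asset's returns to market returns,
# i.e. beta = cov(asset, market) / var(market). Returns below are simulated.
rng = np.random.default_rng(0)
market_returns = rng.normal(0.005, 0.04, size=250)                 # ~one year of daily returns
asset_returns = 1.3 * market_returns + rng.normal(0, 0.02, size=250)

beta = np.cov(asset_returns, market_returns)[0, 1] / np.var(market_returns, ddof=1)
print(round(beta, 2))   # close to 1.3 by construction: the undiversifiable, market-wide component
```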

This isn’t just academic. There’s a reason people don’t speak of a one-to-one relationship between a market downturn and the value of a bond portfolio, as the authors’ “market risk” definition does. If the economy tanks, credit risk increases, so bonds are worth less, but interest rates fall, meaning the same bonds are worth more. How this all balances is complicated, and strongly driven by the distribution of bond maturities. This is why financial risk management distinguishes between credit, liquidity, and interest rate risks, and doesn’t conflate those concepts as the authors do.
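A minimal sketch of those offsetting effects, with invented cash flows, default probabilities, and rates:

```python
# Sketch: in a downturn, higher default risk lowers a bond's value while
# lower interest rates raise it. All inputs are hypothetical.

def bond_value(coupon, face, years, annual_default_prob, discount_rate):
    """Expected-value price: each cash flow weighted by survival probability."""
    value, survival = 0.0, 1.0
    for t in range(1, years + 1):
        survival *= (1 - annual_default_prob)
        cf = coupon + (face if t == years else 0.0)
        value += survival * cf / (1 + discount_rate) ** t
    return value

print(bond_value(5, 100, 10, 0.01, 0.05))   # baseline:              ~92
print(bond_value(5, 100, 10, 0.03, 0.05))   # downturn, rates flat:  ~78 (credit effect alone)
print(bond_value(5, 100, 10, 0.03, 0.02))   # downturn, rates fall:  ~99 (rate effect offsets it)
```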

(Though they are writing as experts, I think they are just copying and pasting from the CBO’s confusing and erroneous definition of “market risk.” If they are sourcing any kind of common financial industry practices or definitions, I don’t see it. I guess Jason Richwine didn’t get a chance to study finance while publishing his dissertation.)

Here again I’d want to understand more how the value of student loans to taxpayers moves with interest rates. Repayments are mentioned above. And for private lenders, higher interest rates mean that they can sell bonds for less and that they’re worth less as collateral. They need to be compensated for this risk. Do taxpayers have this problem to the same extent? If interest rates rise, do we worry we can’t sell the student loan portfolio for the same amount to another government, or that we can’t use it as collateral to fund another war? If not, why would we use this market rate?

Is This Just About Credit Risk?

Besides all the theoretical problems mentioned above, there’s also the practical problem that the CBO uses the already existing private market for student loans (“relied mainly on data about the interest rates charged to borrowers in the private student loan market”), even though there’s obviously a massive adverse selection problem there. Though not an error, it's a third major problem for the argument. The authors don’t even touch this.

But for all the talk about FVA, the only real concern the authors bring up is credit risk. “What if taxpayers don’t get paid?” is the question raised over and over again in the piece. The authors don’t articulate any direct concerns about, say, a move in interest rates changing the value of a bond portfolio, aside from the possibility that it might mean more credit losses.

So dramatically scaling back consumer protections like bankruptcy and statutes of limitations for student debtors wasn’t enough for the authors. Fair enough. But there’s an easy fix: the government could buy credit protection against losses in excess of those expected on, say, $10 billion of its portfolio, and use that price as a supplemental discount. This would be quite low-cost and would provide useful information. But it’s a far cry from FVA, even if FVA’s proponents don’t quite understand that.

Guest Post: A Review of Fragile By Design

Nov 3, 2014Mike Konczal

(With conservatives looking to make big gains Tuesday, it's important to understand how they interpret the financial crisis. Luckily we have a guest post by David Fiderer on a recent book about the crisis. For over 20 years, Fiderer has been a banker covering the energy industry. He is trained as a lawyer and is working on a book about the rating agencies.)

Pundit-Level Arguments Dominating Elite Business Schools' Financial Crisis Discussions

by David Fiderer

Fragile By Design: The Political Origins of Banking Crises and Scarce Credit is a tour de force, and not in a good way. The book’s history of U.S. banking is troubling. The narrative covering the period from the Civil War until the 1990s is highly selective and misleading. Worse, the section that covers U.S. banking over the past 25 years is a set of distortions and falsehoods that should be obvious to anyone with a basic knowledge of the recent financial crisis.

Yet the book has been greeted enthusiastically. It was recently considered by the Financial Times and McKinsey for the Business Book of the Year Award, and its thesis about the recent financial crisis has been presented by the authors at events hosted by the World Bank, the Bank of England, the San Francisco Fed, the Atlanta Fed and the SEC. “[I]f you are looking for a rich history of banking over the last couple of centuries and the role played by politics in that evolution there is no better study,” wrote The New York Times reviewer. “It deserves to become a classic.” The book’s false portrayal of the recent crisis, left unchallenged, is likely to be used as a standard reference work for conservatives intent on rewriting history.

The two authors, Prof. Charles Calomiris of Columbia and Prof. Stephen Haber of Stanford, are well known. Calomiris’s 67-page CV cites, among many accomplishments, his stints as a Visiting Research Fellow at the International Monetary Fund and as a Senior Fellow at the Bank of England, as well as his 21-year affiliation with the American Enterprise Institute. Haber, who teaches Political Science at Stanford, is a Senior Fellow at Stanford’s Hoover Institution.

The book’s central argument is that the proximate cause of the financial collapse was the risky lending mandated by Community Reinvestment Act (CRA) and by affordable housing goals set for government-sponsored enterprises (GSEs) Fannie Mae and Freddie Mac. This familiar narrative, identified as “The Big Lie” by Joe Nocera, Barry Ritholtz, and others, is still deemed valid by a lot of people who should know better. Simply put, loan performance at Fannie and Freddie has always been exponentially superior to that of any other sector in residential mortgages, whereas the loan performance of private label residential mortgage securities has been radically worse than that of other sectors in the mortgage market. Most of the credit losses were tied to private mortgage securities. To state otherwise is a lie.

Calomiris and Haber embrace The Big Lie, and double down by tracing everything to Bill Clinton’s grand strategy of income redistribution as a response to economic inequality or as a sop to community activists at ACORN. Their story is as follows: in the 1990s banks sought government approval for proposed mergers and soon recognized that such approval was subject to certain conditions set by Clinton and his urban activist allies. The banks were compelled to book vast numbers of recklessly imprudent loans extended to the urban poor, by way of the CRA and GSE affordable housing goals.

Once banks started making ultra-risky loans under the CRA, they quickly started making ultra-risky loans to everyone else, because all these crappy loans could be sold to the GSEs, which then foisted them off onto unsuspecting investors who bought GSE mortgage securities. And once the GSEs started financing ultra-risky loans to poor people, they were forced to apply the same ultra-risky credit standards to everyone else. Eventually, the CRA and housing goals created a kind of Animal Farm dystopia, where everyone was equal because everyone’s mortgage was underwritten with the same recklessly imprudent terms.

In short, the GSEs, working in tandem with the banks and the investment banks, created and sold private mortgage securities, CDOs, and credit default swaps to unsuspecting investors. And when home prices stopped rising and the music stopped, the GSEs, the banks, and the investment banks were stuck holding those same private mortgage securities, CDOs, and credit default swaps, which is why many of them became insolvent.

No, I am not distorting Calomiris and Haber’s work.

The Financial Times, reviewing this book, says that “[t]hose on the left…tend to close their ears to this story, filing it under Republican disingenuity.” Sadly for the FT, this crackpot narrative has been debunked many times over. The Federal Reserve Board “found no connection between CRA and the subprime mortgage problems.” A subsequent Fed study found “lender tests indicate that areas disproportionately served by lenders covered by the CRA experienced lower delinquency rates and less risky lending.” Per the Minneapolis Fed: “The available evidence seems to run counter to the contention that the CRA contributed in any substantive way to the current mortgage crisis.” These findings were echoed by the Richmond Fed.

The St. Louis Fed posed a question: “Did Affordable Housing Legislation Contribute to the Subprime Securities Boom?” And the data offered a clear-cut answer: “No… We find no evidence that lenders increased subprime originations or altered pricing around the discrete eligibility cutoffs for the Government Sponsored Enterprises' (GSEs) affordable housing goals or the Community Reinvestment Act.” An earlier Fed study arrived at a substantially similar conclusion, as did nine out of ten members of the FCIC.

How do Calomiris and Haber address and respond to these studies? They don’t, and they aren’t alone. The lack of response to the critics of The Big Lie defines the entire genre. And these aren’t random writers; these are business professors at elite universities and think tanks who reject an empirical framework for engaging their critics. Read Fault Lines by Raghuram Rajan at the University of Chicago. Read Guaranteed To Fail by Profs. Viral Acharya, Stijn Van Nieuwerburgh, Matthew Richardson, and Lawrence J. White, all at NYU. Or, on related topics, read “Rethinking FHA” by Prof. Joseph Gyourko at Wharton, or “Do We Need the 30-Year Fixed-Rate Mortgage?” by Prof. Anthony Sanders of George Mason University and Prof. Michael Lea at San Diego State. None of them compare the loan performance of the GSEs, or the FHA, or 30-year fixed-rate loans, with that of other sectors in the same market.

It’s worth taking a minute to dissect the historical fantasy Calomiris and Haber construct. Their central narrative goes as follows:

Once the basic rules of this game were laid down in the early 1990s, the game unfolded in a predictable manner. Fannie and Freddie were forced to reduce their underwriting standards to accommodate increasing lending mandates to targeted groups. Importantly, those weaker standards were applied to all borrowers: to have done otherwise would have been a tacit admission that a portion of their portfolio was, in fact, high risk, which would have alarmed their shareholders. Many commercial banks, knowing that they could either sell high-risk loans to Fannie and Freddie or convert them into mortgage-backed securities guaranteed by Fannie and Freddie, jumped into the subprime securitization market. [Emphasis in the original.] […]

We cannot emphasize this point strongly enough: when Fannie and Freddie agreed to purchase loans that required only a 3% down payment, no documentation of income or employment, and a far from perfect credit score, they changed the risk calculus of millions of American families, not just the urban poor. […]

As a matter of logic, it is conceivable that Fannie and Freddie could have selectively relaxed underwriting standards for targeted groups. As a practical matter, however, doing so would have been very difficult.

This is core to their story: affordability goals weren’t a small portion of GSEs’ loans. They effectively rewrote the entire mortgage market for everyone in the country.

Yet anyone who did a five-minute web search could demolish this notion. This link and this link are but two examples of many public filings that show how the GSEs used different credit standards for different types of borrowers, undercutting the entire logic of the Fragile by Design narrative. The entire premise of the affordable housing goals was that the GSEs had the capacity to take on incremental exposures of higher-risk loans as a small counterweight to their huge portfolios of low-risk mortgages. Every business student knows basic portfolio theory.

And as a practical matter, just about every public utility, common carrier, pharmaceutical company, and hospital “effectively discriminates against most Americans by explicitly granting special arrangements to targeted groups.” Consider, for example, the rampant and institutionalized age discrimination seen at movie theater box offices. And yet, everyone who followed the companies knew that the GSEs used more relaxed standards for certain targeted groups.

Some of the authors’ zingers are harder to unpack. Consider the GSE loans they describe above, ones that “required only a 3% down payment, no documentation of income or employment, and a far from perfect credit score,” ones that changed “the risk calculus of millions of American families.”

There is zero evidence that the loans described by Calomiris and Haber ever existed. From 2001 through 2006, GSE originations that had loan-to-value (LTV) ratios of 95 percent or higher and FICO scores of 639 or lower represented between 1 and 2 percent of total originations. According to GSE credit guidelines, those borrowers had characteristics that disallowed any kind of reduced documentation, much less no documentation of income or employment.

Fannie and Freddie could not, by law, assume the primary credit risk on any mortgage with an LTV in excess of 80 percent. If a loan had an LTV higher than 80 percent, then the first loss was covered by private mortgage insurance. In addition, the GSEs’ policies prevented them from assuming 80 percent credit exposure on high-LTV loans. So, for example, if Fannie booked a loan that had an LTV of 97 percent, the minimum insurance coverage would be 35 percent, so that Fannie’s net exposure would be an effective LTV of roughly 62 percent. The data is very clear that homes financed by the GSEs never experienced the steep rise, or drop, in prices that was measured by the Case-Shiller composite (see page 90).
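Spelling out the arithmetic in that example (the 97 percent LTV and 35 percent coverage come from the paragraph above; the home price is just a placeholder):

```python
# Net GSE exposure on a high-LTV loan once private mortgage insurance
# absorbs the first loss. LTV and coverage figures are from the example above.
home_value = 200_000            # hypothetical home price
ltv = 0.97                      # loan-to-value ratio
mi_coverage = 0.35              # minimum insurance coverage on the loan

loan_amount = ltv * home_value                     # 194,000
insured_first_loss = mi_coverage * loan_amount     # 67,900 absorbed by the insurer first
gse_net_exposure = loan_amount - insured_first_loss
print(gse_net_exposure / home_value)               # ~0.63: an effective LTV of roughly 62-63 percent
```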

In other words, the amount of low-down-payment loans available in the marketplace was never decided by the GSEs. It was decided by private mortgage insurers, which were not regulated by the federal government.

Business Models: The Difference Between Originate-to-Distribute and Buy-and-Hold

Calomiris and Haber blur commercial banks with non-banks and the GSEs, and they conflate GSE mortgage securities with private label mortgage securities and their progeny, throughout their text. Private label mortgage securities transfer credit risk and interest rate risk from the underwriting bank to the bondholders, whereas GSE mortgage securities do not transfer credit risk, only interest rate risk. All GSE mortgage bonds benefited from unconditional corporate guarantees.

Moreover, the financial meltdown of September 2008 was not triggered by bank failures; it was triggered by the failures of non-banks and by the unforeseen consequences of derivatives. The government had a clear legal path and precedent for dealing with bank failures like Wachovia, Washington Mutual, and IndyMac. But it had no clear path and no precedent for dealing with the imminent collapse of Lehman Brothers and AIG. This uncertainty about the fate of non-banks, which included the non-bank subsidiaries of bank holding companies, rocked the financial markets after Lehman filed for bankruptcy on September 15, 2008.

Remember that time everyone had to suddenly memorize all the financial acronyms? If you read about the financial crisis, you should know about CDOs (collateralized debt obligations), about CDS (credit default swaps), and about MBS, which here generally refers to the private label mortgage-backed securitizations where most of the credit losses resided. Just one more serving of alphabet soup: CDS collapsed AIG, and CDOs collapsed Citigroup, Merrill Lynch, UBS, MBIA, and Ambac. Fannie and Freddie had nothing to do with CDOs and CDS.

Fannie and Freddie did hold large amounts of their own securities, but again, it made no difference whether they sold or held them, because their credit risk exposure never changed, and those holdings had nothing to do with regulatory capital. And the GSEs did hold about $225 billion of the most senior tranches of private mortgage securities. Court filings and settlements indicate that most of the losses were caused by fraud.

When the GSEs were taken over by the government in September 2008, Fannie’s serious delinquency rate was 1.36 percent, well below levels seen in the mid-1980s. And Freddie’s serious delinquency rate, 0.93 percent, was lower than the lowest national average ever recorded by the Mortgage Bankers Association. According to the MBA, the nationwide serious delinquency rate as of June 30, 2008 was 4.5 percent. For subprime mortgages it was almost 18 percent. Again, in terms of loan performance, the GSEs were in a class by themselves.

The Premise of The Big Lie

There’s only one reason why The Big Lie seemed so plausible to so many people. The polite word for it is social stereotyping. Affordable housing goals are set for “Central Cities, Rural Areas and Other Underserved Areas.” These goals target “low and moderate income borrowers.” A Financial Times columnist translates this into “the government’s euphemism for ethnic minority neighbourhoods.”

Calomiris and Haber do the same. They scrub away references to anything rural or to moderate-income borrowers. “At the core of this bargain was a coalition of two very unlikely partners: rapidly growing megabanks and activist groups that promoted expansion of risky mortgage lending to poor and inner-city borrowers, such as the Association of Community Organizations for Reform Now (ACORN),” they write. They reference ACORN 11 times.

The book’s broader narrative about U.S. banking is framed around an urban/rural divide. Prior to the 1990s, the farmers in rural states were suspicious of nationwide banking that would concentrate economic power in the money centers of the Northeast. (The authors sidestep the impact of the National Banking system and the absence of a central bank until 1913.) Calomiris and Haber contrive another urban/rural divide to explain the CRA and affordable housing goals. This was the core of a “grand bargain” that favored a key constituency of the Democratic Party, the urban poor and urban activists like ACORN, at the expense of Republican constituencies in rural areas.

If you go in for that kind of stuff, then it makes perfect sense that any government program intended to benefit low-income people must corrupt the free marketplace and eventually create a financial disaster. Who needs empirical data to prove that? This kind of fact-free analysis, a staple of cable TV and certain media outlets, has become pervasive. But it has no place in a legitimate business or university setting. Determining whether or not a loan’s terms match “the market,” a much more useful debate, involves a very detailed analysis of the borrower and the loan product, which is way beyond the ken of Calomiris and Haber.

There is no evidence that CRA goals ever represented a material hurdle to attaining regulatory approval of the large bank mergers in the 1990s. Of the 13,500 applications submitted to the Fed, only 25 were denied, with eight denied because of “unsatisfactory consumer protection or community reinvestment issues.” The GSEs, however, were subject to ability-to-repay regulations and other anti-predatory constraints put in place in 2000.

The irony is rich. This private label securitization system was built over decades, and at every step of the expansion of this predatory and abusive lending system, conservative economists were there lending support. Calomiris in particular was an active participant, fighting against any prohibition on single-premium credit insurance, opposing prohibitions on loans secured by housing collateral that disregarded a borrower’s ability to repay, and writing in 1999 that 125 percent LTV lending was no big deal.

After skyrocketing in size and scope before the crisis, the securitization of the housing market is now dead. There’s debate on whether it can ever come back to life. As we discuss what the future of housing finance and the financial sector looks like, there needs to be a real accounting for what has happened in the past. Sadly, a group of elite academics are more dedicated to confusion and playing up innuendo than actual analysis and the truth.

Did the Federal Reserve Do QE Backwards?

Oct 30, 2014Mike Konczal

QE3 is over. Economists will debate the significance of it for some time to come. What sticks out to me now is that it might have been entirely backwards: what if the Fed had set the price instead of the quantity?

To put this in context for those who don’t know the background, let’s talk about carbon cooking the planet. Going back to Weitzman in the 1970s (nice summary by E. Glen Weyl), economists have focused on the relative tradeoff of price versus quantity regulations. We could regulate carbon by changing the price, say through carbon taxes. We could also regulate it by changing the quantity, say by capping the amount of carbon in the air. In a world of perfect information, these two choices have identical outcomes. But, of course, in practice they don’t: which one is better depends on the risk involved in slight deviations from the goal. If carbon above a certain level is very costly to society, then it’s better to target the quantity rather than the price, hence setting a cap on carbon (and trading it) rather than just taxing it.
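To make the Weitzman logic concrete, here is a minimal Monte Carlo sketch in Python. The linear marginal-benefit and marginal-cost curves and all of the parameters are my own illustrative assumptions, not anything from Weitzman or Weyl; the only point is that the relative slopes decide which instrument loses less when costs turn out differently than the regulator expected.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_losses(b0, b1, c1, sigma, n=100_000):
    """Expected welfare loss from a quantity instrument vs. a price instrument
    when the regulator must commit before seeing the cost shock theta.

    Marginal benefit of abatement: MB(q) = b0 - b1*q   (slope b1 = how steep damages are)
    Marginal cost of abatement:    MC(q) = c1*q + theta
    """
    theta = rng.normal(0.0, sigma, n)

    q_star = b0 / (b1 + c1)            # optimal abatement under expected costs
    p_star = c1 * q_star               # the tax/price that supports that optimum

    q_opt = (b0 - theta) / (b1 + c1)   # ex-post optimal abatement
    q_qty = np.full(n, q_star)         # quantity instrument: abatement is fixed
    q_prc = (p_star - theta) / c1      # price instrument: firms abate until MC equals p_star

    # For linear curves the welfare loss is a triangle: 0.5*(b1+c1)*(q - q_opt)^2
    loss = lambda q: (0.5 * (b1 + c1) * (q - q_opt) ** 2).mean()
    return loss(q_qty), loss(q_prc)

# Steep marginal damages (b1 >> c1): the cap loses less than the tax.
print(expected_losses(b0=100, b1=10.0, c1=1.0, sigma=5.0))
# Flat marginal damages (b1 << c1): the tax loses less than the cap.
print(expected_losses(b0=100, b1=0.5, c1=5.0, sigma=5.0))
```

With steep marginal damages the fixed quantity produces the smaller expected loss; flatten the damage curve and the fixed price wins, which is the whole Weitzman tradeoff in two function calls.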

This same debate on the tradeoff between price and quantity intervention is relevant for monetary policy, too. And here, I fear the Federal Reserve targeted the wrong one.

In December 2012, the Federal Reserve began buying $45 billion a month of long-term Treasuries. Part of the reason was to push down the interest rates on those Treasuries and boost the economy.

But what if the Fed had done that backwards? What if it had picked a price for long-term securities, and then figured out how much it would have to buy to get there? Then it would have said, “we aim to set the 10-year Treasury rate at 1.5 percent for the rest of the year” instead of “we will buy $45 billion a month of long-term Treasuries.”

This is what the Fed does with short-term interest rates. Taking a random example from 2006, it doesn’t say, “we’ll sell an extra amount in order to raise the interest rate.” Instead, it just declares, “the Board of Governors unanimously approved a 25-basis-point increase in the discount rate to 5-1/2 percent.” It announces the price.

Remember, the Federal Reserve also did QE with mortgage-backed securities, buying $40 billion a month in order to bring down the mortgage rate. But what if it just set the mortgage rate? That’s what Joseph Gagnon of the Peterson Institute (who also helped execute the first QE) argued for in September 2012, when he wrote, “the Fed should promise to hold the prime mortgage rate below 3 percent for at least 12 months. It can do this by unlimited purchases of agency mortgage-backed securities.” (He reiterated that argument to me in 2013.) Set the price, and then commit to unlimited purchases. That’s good advice, and we could have done it with Treasuries as well.
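As a toy illustration of the difference, here’s a one-period sketch in Python. The linear relationship between purchases and the 10-year yield, the “elasticity” range, and the shock sizes are invented for illustration only -- they are not estimates of anything -- but they show which variable ends up absorbing the uncertainty under each rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: 10-year yield = base - elasticity * purchases + shock.
# The elasticity and the demand shock are unknown when the Fed has to commit.
# All numbers are invented for illustration; nothing here is calibrated.
base, target = 2.75, 1.50                   # percent
n = 100_000
elasticity = rng.uniform(0.005, 0.02, n)    # yield decline per $1bn purchased
shock = rng.normal(0.0, 0.25, n)            # flight-to-safety / risk-off noise

# Quantity rule: commit to a fixed purchase sized to hit the target on average.
q_fixed = (base - target) / elasticity.mean()
yield_qty = base - elasticity * q_fixed + shock

# Price rule: announce the target and buy whatever it takes to land on it.
purchases_prc = (base + shock - target) / elasticity
purchases_prc = np.clip(purchases_prc, 0, None)     # the Fed only buys, never sells
yield_prc = base - elasticity * purchases_prc + shock

print(f"quantity rule: yield mean {yield_qty.mean():.2f}%, std {yield_qty.std():.2f} pct points")
print(f"price rule:    yield mean {yield_prc.mean():.2f}%, std {yield_prc.std():.2f} pct points")
print(f"price rule:    purchases mean {purchases_prc.mean():.0f} $bn, std {purchases_prc.std():.0f} $bn")
```

Under the quantity rule the Fed knows exactly what it will buy but not where the yield lands; under the price rule the yield lands on target and the purchase amounts do the adjusting, which is the flavor of “set the price, commit to unlimited purchases.”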

What difference would this have made? The first is that it would be far easier to understand what the Federal Reserve was trying to do over time. What was the deal with the tapering? I’ve read a lot of commentary about it, but I still don’t really know. Do stocks matter, or flows? I’m reading a lot of guesswork. But if the Federal Reserve were to target specific long-term interest rates, it would be absolutely clear what it was communicating at each moment.

The second is that it might have been easier. People hear “trillions of dollars” and think of deficits instead of asset swaps; focusing on rates might have made it possible for people to be less worried about QE. The actual volume of purchases might also have been lower, because the markets are unlikely to go against the Fed on these issues.

And the third is that if low interest rates are the new normal, through secular stagnation or otherwise, these tools will need to be formalized. We should avoid the herky-jerky Federal Reserve policy of the past several years, and one way to do that is by looking to the past.

Policy used to be conducted this way. Providing evidence that there’s been a great loss of knowledge in macroeconomics, JW Mason recently wrote up this great 1955 article by Alvin Hansen (of secular stagnation fame), in which Hansen takes it for granted that economists believe intervention along the entirety of the rate structure is appropriate action.

He even finds Keynes arguing along these lines in The General Theory: “Perhaps a complex offer by the central bank to buy and sell at stated prices gilt-edged bonds of all maturities, in place of the single bank rate for short-term bills, is the most important practical improvement which can be made in the technique of monetary management.”

The normal economic argument against this is that all the action can be done with the short rate. But, of course, that is precisely the problem at the zero lower bound and in a period of persistently low interest rates.

Sadly for everyone who imagines a non-political Federal Reserve, the real argument is political. And it’s political in two ways. The first is that the Federal Reserve would be accused of planning the economy by setting long-term interest rates. So it essentially has to sneak around this argument by adjusting quantities. But, in a technical sense, they are the same policy. One is just opaque, which gives political cover but is harder for the market to understand.

And the second political dimension is that if the Federal Reserve acknowledges the power it has over interest rates, it also owns the recession in a very obvious way.

This has always been a tension. As Greta R. Krippner found in her excellent Capitalizing on Crisis, in 1982 Frank Morris of the Boston Fed argued against ending the Fed’s disaster tour with monetarism by saying, "I think it would be a big mistake to acknowledge that we were willing to peg interest rates again. The presence of an [M1] target has sheltered the central bank from a direct sense of responsibility for interest rates." His view was that the Fed could avoid ownership of the economy if it just adjusted quantities.

But the Federal Reserve did have ownership then, as it does now. It has tools it can use, and will need to use again. It’s important for it to use the right tools going forward.

It's Essential the Federal Reserve Discusses Inequality

Oct 28, 2014Mike Konczal

Janet Yellen gave a reasonable speech on inequality last week, and she barely managed to finish it before the right wing went nuts.

It’s attracted the standard set of criticisms, like people asserting that low rates give banks increasingly “wide spreads” on lending -- a claim made with no evidence, and one that never addresses the possibility that spreads have fallen overall. It’s worth noting that Bernanke gave similar speeches on inequality (though the right also went off the deep end when it came to Bernanke), and Jonathan Chait notes how aggressively Greenspan discussed controversial policies, to crickets from the right.

But I also just saw that Michael Strain has written a column arguing that “by focusing on income inequality [Yellen] has waded into politically choppy waters.” Putting the specifics of the speech to the side, it’s simply impossible to talk about the efficacy of monetary policy and full employment during the Great Recession without discussing inequality, or without discussing economic issues where inequality is in the background.

Here are five inequality-related issues, off the top of my head, that are important for monetary policy and full employment. The arguments may or may not be convincing (I’m not sure where I stand on some), but ruling these topics entirely out of bounds will just lead to a worse understanding of what the Federal Reserve needs to do.

The Not-Rich. The material conditions of the poorest and everyday Americans are an essential part of any story of inequality. If the poor are doing great, do we really care if the rich are doing even better? Yet in this recession everyday Americans are doing terribly, and it has macroeconomic consequences.

Between the end of the recession in 2009 and 2013, median wages fell an additional 5 percent. One channel of monetary policy is changing the relative attractiveness of saving, yet according to recent work by Zucman and Saez, the bottom 90 percent of Americans aren’t able to save any money right now. If that is the case, it’s that much harder to make monetary policy work.

Indeed, one effect of committing to low rates in the future is to make it more attractive to invest in lending where debt servicing is difficult -- for example, through things like subprime auto loans, which are booming (and unregulated under Dodd-Frank because of auto-dealership Republicans). Meanwhile, policy tools that we know flatten low-end inequality between the 10th and 50th percentiles -- like the minimum wage, which has fallen in value -- could potentially boost aggregate demand.

Expectations. The most influential theories about how monetary policy can work when we are at the zero lower bound, as we’ve been for the past several years, involve “expectations” of future inflation and wage growth.

One problem with changing people’s expectations of the future is that those expectations are closely linked to their experiences of the past. And if people firmly expect low or zero nominal income growth -- because everything around them screams inequality, and because income growth and inflation have been falling for decades -- strongly worded statements and press releases from Janet Yellen are going to have less effect.

The Rich. The debate around secular stagnation is ongoing. Here’s the Vox explainer. Larry Summers recently argued that the term emphasizes “the difficulty of maintaining sufficient demand to permit normal levels of output.” Why is this so difficult? “[R]ising inequality, lower capital costs, slowing population growth, foreign reserve accumulation, and greater costs of financial intermediation." There’s no sense in which you can try to understand the persistence of low interest rates and their effect on the recovery without considering growing inequality across the Western world.

Who Does the Economy Work For? To understand how well changes in the interest-sensitive components of investment might work, a major monetary channel, you need to have some idea of how the economy is evolving. And stories about how the economy works now are going to be tied to stories about inequality.

The Roosevelt Institute will have some exciting work by JW Mason on this soon, but if the economy is increasingly built around disgorging the cash to shareholders, we should question how this helps or impedes full output. What if low rates cause, say, the Olive Garden to focus less on building, investing, and hiring, and more on reworking its corporate structure so it can rent its buildings back from another corporate entity? Both are in theory interest-sensitive, but the first brings us closer to full output, and the second merely slices the pie a different way in order to give more to capital owners.

Alternatively, if you believe (dubious) stories about how the economy is experiencing trouble as a result of major shifts brought about by technology and low skills, then we have a different story about inequality and the weak recovery.

Inequality in Political and Market Power. We should also consider the political and economic power of industry, especially the financial sector. Regulations are an important component of keeping worries about financial instability in check, but a powerful financial sector can render those regulations useless.

But let’s look at another issue: monetary policy’s influence on underwater mortgage refinancing, a major demand booster in the wake of a housing collapse. As the Federal Reserve Bank of New York found, the spread between primary and secondary mortgage rates increased during the Great Recession, especially into 2012 as HARP was revamped and more aggressive zero-bound policies were adopted. The Fed is, obviously, cautious about attributing this to the banks’ pricing power, but it does look like the market power of finance was able to capture the lower rates and keep demand lower than it needed to be. The share of the top 0.1 percent of earners working in finance doubled over the past 30 years, and it’s hard not to see that as related to displays of market and political power like this.

These ideas haven’t had their tires kicked. This is a blog, after all. (As I noted, I’m not even sure if I find them all convincing.) They need to be modeled, debated, given some empirical handles, and so forth. But they are all stories that need to be addressed, and it’s impossible to do any of that if there’s massive outrage at even the suggestion that inequality matters.

The Phenomenology of Google's Self-Driving Cars

Oct 23, 2014Mike Konczal

(image via NYPL)

Guess what? I’m challenging you to a game of tennis in three days. Here’s an issue though: I don’t know anything about tennis and have never played it, and the same goes for you.

In order to prepare for the game, we are each going to do something very different. I’m going to practice playing with someone else who isn’t very good. You, meanwhile, are going to train with an expert. But you are only going to train by talking about tennis with the expert, and never actually play. The expert will tell you everything you need to know in order to win at tennis, but you won’t actually get any practice.

Chances are I’m going to win the game. Why? Because the task of playing tennis isn’t just reducible to learning a set of things to do in a certain order. There’s a level of knowledge and skills that become unconsciously incorporated into the body. As David Foster Wallace wrote about tennis, “The sort of thinking involved is the sort that can be done only by a living and highly conscious entity, and then it can really be done only unconsciously, i.e., by fusing talent with repetition to such an extent that the variables are combined and controlled without conscious thought.” Practicing doesn’t mean learning rules faster; it means your body knows instinctively where to put the tennis racket.

The same can be said of most skills, like learning how to play an instrument. Expert musicians instinctively know how the instrument works. And the same goes for driving. Drivers obviously learn certain rules (“stop at the stop sign”) and heuristics (“slow down during rain”), but much of driving is done unconsciously and reflexively. Indeed a driver who needs to think through procedurally how to deal with, say, a snowy off ramp will be more at risk of an accident than someone who instinctively knows what to do. A proficient driver is one who can spend their mental energy making more subtle and refined decisions based on determining what is salient about a specific situation, as past experiences unconsciously influence current experiences. Our bodies and minds aren’t just a series of logic statements but also a series of lived-through meanings.

This is my intro-level remembrance of Hubert Dreyfus’ argument against artificial intelligence via Merleau-Ponty's phenomenology (more via Wikipedia). It’s been a long time since I followed any of this, and I’m not able to keep up with the current debates. As I understand it, Dreyfus’ arguments were hated by computer scientists in the 1970s, then appreciated in the 1990s, and now computer scientists assume cheap computing power can use brute force and some probability theory to work around them.

But my vague memory of these debates is why I imagine driverless cars are going to hit a much bigger obstacle than most people expect. I was reminded of all this by a recent Slate article on Google's driverless cars from Lee Gomes:

[T]he Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway [...] But the maps have problems, starting with the fact that the car can’t travel a single inch without one. [...]

Because it can't tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. [...] Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: "generalized intelligence,” "situational awareness,” "everyday common sense." It's been the dream of artificial intelligence researchers since the advent of computers. And it remains just that.

Focus your attention on the issue that the car can’t tell the difference between a dangerous rock to avoid and a newspaper to drive through. As John Dewey found when he demolished the notion of a reflex arc, reflexes become instinctual, so attention is paid only when something new breaks the habitual response. Put another way, experienced human drivers don’t first see the rock and then decide to move; the readiness to move is just as much what makes them see the rock in the first place. The functionalist breakdown, necessary for the propositional logic of computer programming, is just an ex post justification for a whole, organic action. This is the "everyday common sense" alluded to in the piece.

Or let’s put it a different way. Imagine learning tennis by setting up one of those machines that shoots tennis balls at you, the same repetitive way. There would be a strict limit on how much you could learn, or on how much that one motion would translate into being able to play an entire game. Teaching cars to drive by essentially having them follow a map means they are playing tennis by just repeating the same ball toss, over and over again.

Again, I’m willing to entertain the argument that the pure, brute force of computing power will be enough -- stack enough processors on top of each other and they’ll eventually bang out an answer on what to do. But if the current approach requires telling cars absolutely everything that will be around them, instead of giving them some sort of computational ability to react to the road itself, including via experience, this will be a much harder problem. I hope it works, but maybe we can slow down the victory laps that are already calling for massive overhauls of public policy (like the idea that public buses are obsolete) until these cars encounter a situation they don't know in advance.

Does the USA Really Soak the Rich?

Oct 10, 2014Mike Konczal

There's a new argument about taxes: the United States is already far too progressive with taxation, it says, and if we want to build a better, egalitarian future we can't do it through a "soak the rich" agenda. It's the argument of this recent New York Times op-ed by Edward D. Kleinbard, and of a longer piece by political scientists Cathie Jo Martin and Alexander Hertel-Fernandez at Vox. I'm going to focus on the Vox piece because it is clearer about what they are arguing.

There, the researchers note that the countries “that have made the biggest strides in reducing economic inequality do not fund their governments through soak-the-rich, steeply progressive taxes.” They put up this graphic, based on OECD data, to make this point:

You can quickly see that the concept of "progressivity" is doing all the work here, and I believe the way they use that word is problematic. What does it mean for Sweden to be one of the least progressive tax states, and the United States the most progressive?

Let’s graph out two ways of soaking the rich. Here’s Rich Uncle Pennybags in America, and Rik Farbror Påse av Mynt in Sweden, as well as their respective tax bureaus:

When average people usually talk about soaking the rich, they are talking about the marginal tax rates the highest income earners pay. But as we can see, in Sweden the rich pay a much higher marginal tax rate. As Matt Bruenig at Demos notes, Sweden definitely taxes its rich much more (he also notes that what they do with those taxes is different than what Vox argues).

At this point many people would argue that our taxes are more progressive because the middle class in the United States is taxed less than the middle class in Sweden. But that is not what Martin and Hertel-Fernandez are arguing.

They are instead looking at the right side of the above graphic. They are measuring how much of tax revenue comes from the top decile (or, alternatively, the concentration coefficient of tax revenue) and calling that the progressivity of taxation ("how much more (or less) of the tax burden falls on the wealthiest households"). The fact that the United States gets so much more of its tax revenue from the rich when compared to Sweden means we have a much more progressive tax policy, one of the most progressive in the world. Congratulations?

The problem is, of course, that we get so much of our tax revenue from the rich because we have one of the highest rates of inequality among peer nations. How unequal a country is will be just as much of a driver of this measure of progressivity as the actual tax policies. To see how absurd this is, consider that even a flat tax applied to a very unequal income distribution will register as "progressive," since more of the revenue will come from the top of the distribution -- that's simply where all the money is. Yet how would that be progressive taxation?
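A quick simulation shows how mechanical this is. The lognormal income distributions and the 25 percent flat rate below are hypothetical, chosen only so that the tax schedule stays identical while inequality changes:

```python
import numpy as np

rng = np.random.default_rng(0)

def top_decile_tax_share(incomes, tax_rate=0.25):
    """Share of total revenue paid by the top 10 percent under a flat tax."""
    taxes = tax_rate * incomes                  # everyone pays the same rate
    cutoff = np.quantile(incomes, 0.90)
    return taxes[incomes >= cutoff].sum() / taxes.sum()

n = 1_000_000
# Two hypothetical countries with the same flat tax but different inequality;
# the lognormal sigma controls dispersion and both values are illustrative.
more_equal   = rng.lognormal(mean=10.5, sigma=0.5, size=n)
more_unequal = rng.lognormal(mean=10.5, sigma=1.0, size=n)

print(f"more equal country:   top decile pays {top_decile_tax_share(more_equal):.0%} of revenue")
print(f"more unequal country: top decile pays {top_decile_tax_share(more_unequal):.0%} of revenue")
```

Same flat tax, zero progressivity in the schedule, yet the share of revenue paid by the top decile nearly doubles once the distribution is more unequal -- because under a flat tax that share is just the top decile's income share.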

We can confirm this. Let’s take the OECD data that is likely where their metric of tax progressivity comes from, and plot it against the market distribution. This is the share of taxes that come from the top decile, versus how much market income the top decile takes home:

As you can see, they are related. (The same goes if you use Gini coefficients.)

Beyond the obvious one, there's a much deeper and more important relationship here. As Saez, Piketty, and Stantcheva find, the fall in top tax rates over the past 30 years is a major driver of the explosion of income inequality during that same period. Among other channels, lower marginal tax rates give high-end management a greater incentive to bargain for higher wages, and give corporate structures an incentive to pay them out. This is an important element in the creation of our recent inequality, and it shouldn't get lost amid odd definitions of the word "progressive," a word that always seems to create confusion.
