Mike Konczal

Roosevelt Institute Fellow

Recent Posts by Mike Konczal

  • Did the Federal Reserve Do QE Backwards?

    Oct 30, 2014 | Mike Konczal

    QE3 is over. Economists will debate the significance of it for some time to come. What sticks out to me now is that it might have been entirely backwards: what if the Fed had set the price instead of the quantity?

    To put this in context for those who don’t know the background, let’s talk about carbon cooking the planet. Going back to Weitzman in the 1970s (nice summary by E. Glen Weyl), economists have focused on the relative tradeoff of price versus quantity regulations. We could regulate carbon by changing the price, say through carbon taxes. We could also regulate it by changing the quantity, say by capping the amount of carbon in the air. In theory, these two choices have identical outcomes. But, of course, they don't. It depends on the risk involved in slight deviations from the goal. If carbon above a certain level is very costly to society, then it’s better to target the quantity rather than the price, hence setting a cap on carbon (and trading it) rather than just taxing it.
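    To see the mechanics, here is a minimal numerical sketch of the Weitzman logic in Python. Everything in it is an illustrative assumption on my part (the quadratic benefit and cost curves, the slopes, the size of the cost shock), not something taken from Weitzman's paper or from this post. A regulator commits to either a quantity cap or a tax before observing a shock to abatement costs; a steep marginal damage curve favors the cap, a flat one favors the tax.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative, made-up model: benefits of abatement B(q) = b0*q - 0.5*b1*q^2,
    # costs C(q) = 0.5*c1*q^2 + theta*q, where theta is a cost shock the
    # regulator does not observe when it sets policy.
    b0, c1 = 10.0, 1.0
    theta = rng.normal(0.0, 2.0, size=100_000)

    def welfare(q, b1):
        return (b0 * q - 0.5 * b1 * q**2) - (0.5 * c1 * q**2 + theta * q)

    for b1 in (0.2, 5.0):  # flat vs. steep marginal damage curve
        q_star = b0 / (b1 + c1)  # ex ante optimum, ignoring the shock

        # Quantity instrument: cap abatement at q_star regardless of theta.
        w_cap = welfare(q_star, b1).mean()

        # Price instrument: set a tax equal to marginal benefit at q_star;
        # firms then abate until marginal cost (c1*q + theta) equals the tax.
        tax = b0 - b1 * q_star
        q_tax = np.clip((tax - theta) / c1, 0.0, None)
        w_tax = welfare(q_tax, b1).mean()

        winner = "tax" if w_tax > w_cap else "cap"
        print(f"damage slope {b1}: E[welfare] tax={w_tax:.2f}, cap={w_cap:.2f} -> {winner} wins")
    ```

    Flip the slope of the damage curve and the better instrument flips with it, which is the "risk involved in slight deviations from the goal" point above.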

    This same debate on the tradeoff between price and quantity intervention is relevant for monetary policy, too. And here, I fear the Federal Reserve targeted the wrong one.

    Starting in December 2012, the Federal Reserve began buying $45 billion a month of long-term Treasuries. Part of the reason was to push down the interest rates on those Treasuries and boost the economy.

    But what if the Fed had done that backwards? What if it had picked a price for long-term securities, and then figured out how much it would have to buy to get there? Then it would have said, “we aim to set the 10-year Treasury rate at 1.5 percent for the rest of the year” instead of “we will buy $45 billion a month of long-term Treasuries.”
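    To make the two framings concrete, here is a toy back-of-the-envelope sketch with entirely invented numbers (the 2.5 percent no-intervention yield and the assumed 10 basis points of yield impact per $100 billion of purchases are placeholders, not estimates of anything). It simply runs the same linear relationship in both directions: announce a quantity and treat the yield as the residual, or announce a yield and back out the implied purchases.

    ```python
    # Toy sketch, invented numbers: a linear mapping from cumulative Fed purchases
    # of long-term Treasuries to the 10-year yield.
    BASE_YIELD = 2.5          # percent, assumed yield with no purchases
    IMPACT_PER_100BN = 0.10   # assumed yield decline (pct points) per $100bn bought

    def yield_after(purchases_bn: float) -> float:
        return BASE_YIELD - IMPACT_PER_100BN * purchases_bn / 100

    # Quantity targeting: "we will buy $45 billion a month" -- the yield is whatever results.
    print(f"Yield after 12 months at $45bn/month: {yield_after(45 * 12):.2f}%")

    # Price targeting: "we will set the 10-year at 1.5 percent" -- purchases are the residual.
    target = 1.5
    implied_bn = (BASE_YIELD - target) / IMPACT_PER_100BN * 100
    print(f"Purchases implied by a {target}% target: ${implied_bn:.0f}bn")
    ```

    In practice, a credible target could move the yield with far fewer purchases than the mechanical calculation implies, which is part of the argument below about markets being unlikely to fight the Fed.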

    This is what the Fed does with short-term interest rates. Taking a random example from 2006, it doesn’t say, “we’ll sell an extra amount in order to raise the interest rate.” Instead, it just declares, “the Board of Governors unanimously approved a 25-basis-point increase in the discount rate to 5-1/2 percent.” It announces the price.

    Remember, the Federal Reserve also did QE with mortgage-backed securities, buying $40 billion a month in order to bring down the mortgage rate. But what if it just set the mortgage rate? That’s what Joseph Gagnon of the Peterson Institute (who also helped execute the first QE) argued for in September 2012, when he wrote, “the Fed should promise to hold the prime mortgage rate below 3 percent for at least 12 months. It can do this by unlimited purchases of agency mortgage-backed securities.” (He reiterated that argument to me in 2013.) Set the price, and then commit to unlimited purchases. That’s good advice, and we could have done it with Treasuries as well.

    What difference would this have made? The first is that it would be far easier to understand what the Federal Reserve was trying to do over time. What was the deal with the tapering? I’ve read a lot of commentary about it, but I still don’t really know. Do stocks matter, or flows? I’m reading a lot of guesswork. But if the Federal Reserve were to target specific long-term interest rates, it would be absolutely clear what it was communicating at each moment.

    The second is that it might have been easier. People hear “trillions of dollars” and think of deficits instead of asset swaps; focusing on rates might have made it possible for people to be less worried about QE. The actual volume of purchases might also have been lower, because the markets are unlikely to go against the Fed on these issues.

    And the third is that if low interest rates are the new normal, through secular stagnation or otherwise, these tools will need to be formalized. We should look to avoid the herky-jerky nature of Federal Reserve policy in the past several years, and we can do this by looking to the past.

    Policy used to be conducted this way. Providing evidence that there’s been a great loss of knowledge in macroeconomics, JW Mason recently wrote up this great 1955 article by Alvin Hansen (of secular stagnation fame), in which Hansen takes it for granted that economists believe intervention along the entirety of the rate structure is appropriate action.

    He even finds Keynes arguing along these lines in The General Theory: “Perhaps a complex offer by the central bank to buy and sell at stated prices gilt-edged bonds of all maturities, in place of the single bank rate for short-term bills, is the most important practical improvement which can be made in the technique of monetary management.”

    The normal economic argument against this is that all the action can be done with the short rate. But, of course, that is precisely the problem at the zero lower bound and in a period of persistent low interest rates.

    Sadly for everyone who imagines a non-political Federal Reserve, the real argument is political. And it’s political in two ways. The first is that the Federal Reserve would be accused of planning the economy by setting long-term interest rates. So it essentially has to sneak around this argument by adjusting quantities. But, in a technical sense, they are the same policy. One is just opaque, which gives political cover but is harder for the market to understand.

    And the second political dimension is that if the Federal Reserve acknowledges the power it has over interest rates, it also owns the recession in a very obvious way.

    This has always been a tension. As Greta R. Krippner found in her excellent Capitalizing on Crisis, in 1982 Frank Morris of the Boston Fed argued against ending their disaster tour with monetarism by saying, "I think it would be a big mistake to acknowledge that we were willing to peg interest rates again. The presence of an [M1] target has sheltered the central bank from a direct sense of responsibility for interest rates." His view was that the Fed could avoid ownership of the economy if it just adjusted quantities.

    But the Federal Reserve did have ownership then, as it does now. It has tools it can use, and will need to use again. It’s important for it to use the right tools going forward.

  • It's Essential the Federal Reserve Discusses Inequality

    Oct 28, 2014 | Mike Konczal

    Janet Yellen gave a reasonable speech on inequality last week, and she barely managed to finish it before the right wing went nuts.

    It’s attracted the standard set of overall criticisms, like people asserting that low rates give banks increasingly “wide spreads” on lending -- a claim made with no evidence, and without addressing that spreads might have fallen overall. One might note that Bernanke also gave similar speeches on inequality (though the right also went off the deep end when it came to Bernanke), and Jonathan Chait notes how aggressively Greenspan discussed controversial policies to crickets on the right.

    But I also just saw that Michael Strain has written a column arguing that even “by focusing on income inequality [Yellen] has waded into politically choppy waters.” Putting the specifics of the speech to the side, it’s simply impossible to talk about the efficacy of monetary policy and full employment during the Great Recession without discussing inequality, or discussing economic issues where inequality is in the background.

    Here are five inequality-related issues off the top of my head that are important in monetary policy and full employment. The arguments may or may not be convincing (I’m not sure where I stand on some), but to rule these topics entirely out of bounds will just lead to a worse understanding of what the Federal Reserve needs to do.

    The Not-Rich. The material conditions of the poorest and everyday Americans are an essential part of any story of inequality. If the poor are doing great, do we really care if the rich are doing even better? Yet in this recession everyday Americans are doing terribly, and it has macroeconomic consequences.

    Between the end of the recession in 2009 and 2013, median wages fell an additional 5 percent. One element of monetary policy is changing the relative attractiveness of saving, yet according to recent work by Zucman and Saez, 90 percent of Americans aren’t able to save any money right now. If that is the case, it’s that much harder to make monetary policy work.

    Indeed, one effect of committing to low rates in the future is making it more attractive to invest where debt servicing is difficult. For example, through things like subprime auto loans, which are booming (and unregulated under Dodd-Frank because of auto-dealership Republicans). Meanwhile, policy tools that we know flatten low-end inequality between the 10th and 50th percentiles -- like the minimum wage, which has fallen in value -- could potentially boost aggregate demand.

    Expectations. The most influential theories about how monetary policy can work when we are at the zero lower bound, as we’ve been for the past several years, involve “expectations” of future inflation and wage growth.

    One problem with changing people’s expectations of the future is that those expectations are closely linked to their experiences of the past. And if people’s strong expectations of the future are low or zero nominal growth in incomes because everything around them screams inequality, because income growth and inflation rates have been falling for decades, strongly worded statements and press releases from Janet Yellen are going to have less effect.

    The Rich. The debate around secular stagnation is ongoing. Here’s the Vox explainer. Larry Summers recently argued that the term emphasizes “the difficulty of maintaining sufficient demand to permit normal levels of output.” Why is this so difficult? “[R]ising inequality, lower capital costs, slowing population growth, foreign reserve accumulation, and greater costs of financial intermediation." There’s no sense in which you can try to understand the persistence of low interest rates and their effect on the recovery without considering growing inequality across the Western world.

    Who Does the Economy Work For? To understand how well changes in the interest-sensitive components of investment might work, a major monetary channel, you need to have some idea of how the economy is evolving. And stories about how the economy works now are going to be tied to stories about inequality.

    The Roosevelt Institute will have some exciting work by JW Mason on this soon, but if the economy is increasingly built around disgorging the cash to shareholders, we should question how this helps or impedes full output. What if low rates cause, say, the Olive Garden to focus less on building, investing, and hiring, and more on reworking its corporate structure so it can rent its buildings back from another corporate entity? Both are in theory interest-sensitive, but the first brings us closer to full output, and the second merely slices the pie a different way in order to give more to capital owners.

    Alternatively, if you believe (dubious) stories about how the economy is experiencing trouble as a result of major shifts brought about by technology and low skills, then we have a different story about inequality and the weak recovery.

    Inequality in Political and Market Power. We should also consider the political and economic power of industry, especially the financial sector. Regulations are an important component of keeping worries about financial instability in check, but a powerful financial sector makes regulations useless.

    But let’s look at another issue: monetary policy’s influence on underwater mortgage financing, a major demand booster in the wake of a housing collapse. As the Federal Reserve Bank of New York found, the spread between primary and secondary rates increased during the Great Recession, especially into 2012 as HARP was revamped and more aggressive zero-bound policies were adopted. The Fed is, obviously, cautious about claiming pricing power from the banks, but it does look like the market power of finance was able to capture lower rates and keep demand lower than it needed to be. The share of the top 0.1 percent of earners working in finance doubled during the past 30 years, and it’s hard not to see that as related to displays of market and political power like this.

    These ideas haven’t had their tires kicked. This is a blog, after all. (As I noted, I’m not even sure if I find them all convincing.) They need to be modeled, debated, given some empirical handles, and so forth. But they are all stories that need to be addressed, and it’s impossible to do any of that if there’s massive outrage at even the suggestion that inequality matters.

  • The Phenomenology of Google's Self-Driving Cars

    Oct 23, 2014 | Mike Konczal

    (image via NYPL)

    Guess what? I’m challenging you to a game of tennis in three days. Here’s an issue though: I don’t know anything about tennis and have never played it, and the same goes for you.

    In order to prepare for the game, we are each going to do something very different. I’m going to practice playing with someone else who isn’t very good. You, meanwhile, are going to train with an expert. But you are only going to train by talking about tennis with the expert, and never actually play. The expert will tell you everything you need to know in order to win at tennis, but you won’t actually get any practice.

    Chances are I’m going to win the game. Why? Because the task of playing tennis isn’t just reducible to learning a set of things to do in a certain order. There’s a level of knowledge and skills that become unconsciously incorporated into the body. As David Foster Wallace wrote about tennis, “The sort of thinking involved is the sort that can be done only by a living and highly conscious entity, and then it can really be done only unconsciously, i.e., by fusing talent with repetition to such an extent that the variables are combined and controlled without conscious thought.” Practicing doesn’t mean learning rules faster; it means your body knows instinctively where to put the tennis racket.

    The same can be said of most skills, like learning how to play an instrument. Expert musicians instinctively know how the instrument works. And the same goes for driving. Drivers obviously learn certain rules (“stop at the stop sign”) and heuristics (“slow down during rain”), but much of driving is done unconsciously and reflexively. Indeed, a driver who needs to think through procedurally how to deal with, say, a snowy off-ramp will be more at risk of an accident than someone who instinctively knows what to do. A proficient driver is one who can spend their mental energy making more subtle and refined decisions based on determining what is salient about a specific situation, as past experiences unconsciously influence current experiences. Our bodies and minds aren’t just a series of logic statements but also a series of lived-through meanings.

    This is my intro-level remembrance of Hubert Dreyfus’ argument against artificial intelligence via Merleau-Ponty's phenomenology (more via Wikipedia). It’s been a long time since I followed any of this, and I’m not able to keep up with the current debates. As I understand it, Dreyfus’ arguments were hated by computer scientists in the 1970s, then appreciated in the 1990s, and now computer scientists assume cheap computing power can use brute force and some probability theory to work around it.

    But my vague memory of these debates is why I imagine driverless cars are going to hit a much bigger obstacle than most people expect. I was reminded of all this via a recent article on Slate about Google's driverless cars from Lee Gomes:

    [T]he Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway [...] But the maps have problems, starting with the fact that the car can’t travel a single inch without one. [...]

    Because it can't tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. [...] Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: "generalized intelligence,” "situational awareness,” "everyday common sense." It's been the dream of artificial intelligence researchers since the advent of computers. And it remains just that.

    Focus your attention on the issue that the car can’t tell the difference between a dangerous rock to avoid and a newspaper to drive through. As John Dewey found when he demolished the notion of a reflex arc, reflexes become instinctual, so attention is paid only when something new breaks the habitual response. Put another way, experienced human drivers don’t first see the rock and then decide to move; they just as much decide to move, and that is what forces them to see the rock. The functionalist breakdown, necessary to the propositional logic of computer programming, is just an ex post justification for a whole, organic action. This is the "everyday common sense" alluded to in the piece.

    Or let’s put it a different way. Imagine learning tennis by setting up one of those machines that shoots tennis balls at you, the same repetitive way. There would be a strict limit in how much you could learn, or how much that one motion would translate into you being able to play an entire game. But teaching cars to drive by essentially having them follow a map means that they are playing tennis by just repeating the same ball toss, over and over again.

    Again, I’m willing to sustain the argument that the pure, brute force of computing power will be enough - stack enough processors on top of each other and they’ll eventually bang out an answer on what to do. But if the current approach requires telling cars absolutely everything that will be around them, instead of some sort of computational ability to react to the road itself, including via experience, this will be a much harder problem. I hope it works, but maybe we can slow down the victory laps that are already calling for massive overhauls to our understanding of public policy (like the idea that public buses are obsolete) until these cars encounter a situation they don't know in advance.

  • Does the USA Really Soak the Rich?

    Oct 10, 2014 | Mike Konczal

    There's a new argument about taxes: the United States is already far too progressive with taxation, it says, and if we want to build a better, egalitarian future we can't do it through a "soak the rich" agenda. It's the argument of this recent New York Times editorial by Edward D. Kleinbard, and a longer piece by political scientists Cathie Jo Martin and Alexander Hertel-Fernandez at Vox. I'm going to focus on the Vox piece because it is clearer on what they are arguing.

    There, the researchers note that the countries “that have made the biggest strides in reducing economic inequality do not fund their governments through soak-the-rich, steeply progressive taxes.” They put up this graphic, based on OECD data, to make this point:

    You can quickly see that the concept of "progressivity" is doing all the work here, and I believe the way they are going to use that word will be problematic. What does it mean for Sweden to be one of the least progressive tax states, and the United States the most progressive?

    Let’s graph out two ways of soaking the rich. Here’s Rich Uncle Pennybags in America, and Rik Farbror Påse av Mynt in Sweden, as well as their respective tax bureaus:

    When average people usually talk about soaking the rich, they are talking about the marginal tax rates the highest income earners pay. But as we can see, in Sweden the rich pay a much higher marginal tax rate. As Matt Bruenig at Demos notes, Sweden definitely taxes its rich much more (he also notes that what they do with those taxes is different than what Vox argues).

    At this point many people would argue that our taxes are more progressive because the middle class in the United States is taxed less than the middle class in Sweden. But that is not what Jo Martin and Hertel-Fernandez are arguing.

    They are instead looking at the right side of the above graphic. They are measuring how much of tax revenue comes from the top decile (or, alternatively, the concentration coefficient of tax revenue), and calling that the progressivity of taxation ("how much more (or less) of the tax burden falls on the wealthiest households"). The fact that the United States gets so much more of its tax revenue from the rich when compared to Sweden means we have a much more progressive tax policy, one of the most progressive in the world. Congratulations?

    The problem is, of course, that we get so much of our tax revenue from the rich because we have one of the highest rates of inequality among peer nations. How unequal a country is will be just as much of a driver of the progressivity of taxation as the actual tax policies. To see how absurd this is, note that even a flat tax levied on a very unequal income distribution will register as “progressive,” since more revenue will come from the top of the income distribution simply because that’s where all the money is. Yet how would that be progressive taxation?
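    A quick arithmetic sketch makes the point. The decile income shares below are hypothetical, chosen only to stand in for a "more equal" and a "more unequal" country; they are not the OECD figures. Under a perfectly flat tax, each decile's share of revenue equals its share of income, so the "share of taxes paid by the top decile" measure moves one-for-one with pre-tax inequality even though the tax schedule has zero progressivity in the ordinary sense.

    ```python
    # Hypothetical decile shares of market income (bottom to top), each summing to 1.
    more_equal   = [0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.10, 0.12, 0.16, 0.23]
    more_unequal = [0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.09, 0.11, 0.16, 0.37]

    def top_decile_tax_share(income_shares, flat_rate=0.30):
        # Under a flat tax every decile pays flat_rate times its income,
        # so its share of total revenue equals its share of total income.
        revenue = [flat_rate * s for s in income_shares]
        return revenue[-1] / sum(revenue)

    print(f"More equal country:   {top_decile_tax_share(more_equal):.0%} of taxes from the top decile")
    print(f"More unequal country: {top_decile_tax_share(more_unequal):.0%} of taxes from the top decile")
    ```

    Same flat tax, very different "progressivity" by the share-of-revenue measure; the entire difference comes from the income distribution.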

    We can confirm this. Let’s take the OECD data that is likely where their metric of tax progressivity comes from, and plot it against the market distribution. This is the share of taxes that come from the top decile, versus how much market income the top decile takes home:

    As you can see, they are related. (The same goes if you use Gini coefficients.)

    Beyond this obvious mechanical relationship, there's a much deeper and more important one here. As Saez, Piketty, and Stantcheva find, the fall in top tax rates over the past 30 years is a major driver of the explosion of income inequality during that same period. Among other channels, lower marginal tax rates give high-end management a greater incentive to bargain for higher wages, and corporate structures a greater incentive to pay them out. This is an important element in the creation of our recent inequality, and it shouldn't get lost among odd definitions of the word "progressive," a word that always seems to create confusion.

  • New The Score Column: The Rise of Mass Incarceration

    Sep 29, 2014 | Mike Konczal

    I have a new column at The Score: Why Prisons Thrive Even When Budgets Shrink. Even as the era of big government was declared over, the incarceration rate quintupled in just 20 years, after having been stable for a century. Logically, three actors set the rate of incarceration; here's how they made this radical transformation of the state.
