Mike Konczal

Roosevelt Institute Fellow

Recent Posts by Mike Konczal

  • The Phenomenology of Google's Self-Driving Cars

    Oct 23, 2014 | Mike Konczal

    (image via NYPL)

    Guess what? I’m challenging you to a game of tennis in three days. Here’s an issue though: I don’t know anything about tennis and have never played it, and the same goes for you.

    In order to prepare for the game, we are each going to do something very different. I’m going to practice playing with someone else who isn’t very good. You, meanwhile, are going to train with an expert. But you are only going to train by talking about tennis with the expert, and never actually play. The expert will tell you everything you need to know in order to win at tennis, but you won’t actually get any practice.

    Chances are I’m going to win the game. Why? Because the task of playing tennis isn’t just reducible to learning a set of things to do in a certain order. There’s a level of knowledge and skills that become unconsciously incorporated into the body. As David Foster Wallace wrote about tennis, “The sort of thinking involved is the sort that can be done only by a living and highly conscious entity, and then it can really be done only unconsciously, i.e., by fusing talent with repetition to such an extent that the variables are combined and controlled without conscious thought.” Practicing doesn’t mean learning rules faster; it means your body knows instinctively where to put the tennis racket.

    The same can be said of most skills, like learning how to play an instrument. Expert musicians instinctively know how the instrument works. And the same goes for driving. Drivers obviously learn certain rules (“stop at the stop sign”) and heuristics (“slow down during rain”), but much of driving is done unconsciously and reflexively. Indeed a driver who needs to think through procedurally how to deal with, say, a snowy off ramp will be more at risk of an accident than someone who instinctively knows what to do. A proficient driver is one who can spend their mental energy making more subtle and refined decisions based on determining what is salient about a specific situation, as past experiences unconsciously influence current experiences. Our bodies and minds aren’t just a series of logic statements but also a series of lived-through meanings.

    This is my intro-level remembrance of Hubert Dreyfus’ argument against artificial intelligence via Merleau-Ponty's phenomenology (more via Wikipedia). It’s been a long time since I followed any of this, and I’m not able to keep up with the current debates. As I understand it, Dreyfus’ arguments were hated by computer scientists in the 1970s, then appreciated in the 1990s, and now computer scientists assume cheap computing power can use brute force and some probability theory to work around them.

    But my vague memory of these debates is why I imagine driverless cars are going to hit a much bigger obstacle than most people expect. I was reminded of all this via a recent article on Slate about Google's driverless cars from Lee Gomes:

    [T]he Google car was able to do so much more than its predecessors in large part because the company had the resources to do something no other robotic car research project ever could: develop an ingenious but extremely expensive mapping system. These maps contain the exact three-dimensional location of streetlights, stop signs, crosswalks, lane markings, and every other crucial aspect of a roadway [...] But the maps have problems, starting with the fact that the car can’t travel a single inch without one. [...]

    Because it can't tell the difference between a big rock and a crumbled-up piece of newspaper, it will try to drive around both if it encounters either sitting in the middle of the road. [...] Computer scientists have various names for the ability to synthesize and respond to this barrage of unpredictable information: "generalized intelligence,” "situational awareness,” "everyday common sense." It's been the dream of artificial intelligence researchers since the advent of computers. And it remains just that.

    Focus your attention on the issue that the car can’t tell the difference between a dangerous rock to avoid and a newspaper to drive through. As John Dewey found when he demolished the notion of a reflex arc, reflexes become instinctual, so attention is paid only when something new breaks the habitual response. Put another way, experienced human drivers don’t first see the rock and then decide to move; the decision to move is just as much what makes them see the rock. The functionalist breakdown, necessary to the propositional logic of computer programming, is just an ex post justification for a whole, organic action. This is the "everyday common sense" alluded to in the piece.

    Or let’s put it a different way. Imagine learning tennis by setting up one of those machines that shoots tennis balls at you, the same repetitive way. There would be a strict limit on how much you could learn, or how much that one motion would translate into your being able to play an entire game. But teaching cars to drive by essentially having them follow a map means that they are playing tennis by just repeating the same ball toss, over and over again.

    Again, I’m willing to entertain the argument that the pure, brute force of computing power will be enough - stack enough processors on top of each other and they’ll eventually bang out an answer on what to do. But if the current approach requires telling cars absolutely everything that will be around them, instead of some sort of computational ability to react to the road itself, including via experience, this will be a much harder problem. I hope it works, but maybe we can slow down the victory laps that are already calling for massive overhauls to our understanding of public policy (like the idea that public buses are obsolete) until these cars encounter a situation they don't know in advance.

    Follow or contact the Rortybomb blog:


  • Does the USA Really Soak the Rich?

    Oct 10, 2014 | Mike Konczal

    There's a new argument about taxes: the United States is already far too progressive with taxation, it says, and if we want to build a better, egalitarian future we can't do it through a "soak the rich" agenda. It's the argument of this recent New York Times editorial by Edward D. Kleinbard, and a longer piece by political scientists Cathie Jo Martin and Alexander Hertel-Fernandez at Vox. I'm going to focus on the Vox piece because it is clearer on what they are arguing.

    There, the researchers note that the countries “that have made the biggest strides in reducing economic inequality do not fund their governments through soak-the-rich, steeply progressive taxes.” They put up this graphic, based on OECD data, to make this point:

    You can quickly see that the concept of "progressivity" is doing all the work here, and I believe the way they are going to use that word will be problematic. What does it mean for Sweden to be one of the least progressive tax states, and the United States the most?

    Let’s graph out two ways of soaking the rich. Here’s Rich Uncle Pennybags in America, and Rik Farbror Påse av Mynt in Sweden, as well as their respective tax bureaus:

    When people talk about soaking the rich, they are usually talking about the marginal tax rates the highest income earners pay. But as we can see, in Sweden the rich pay a much higher marginal tax rate. As Matt Bruenig at Demos notes, Sweden definitely taxes its rich much more (he also notes that what they do with those taxes is different than what Vox argues).

    At this point many people would argue that our taxes are more progressive because the middle-class in the United States is taxed less than the middle-class in Sweden. But that is not what Jo Martin and Hertel-Fernandez are arguing.

    They are instead looking at the right side of the above graphic. They are measuring how much of tax revenue comes from the top decile (or, alternatively, the concentration coefficient of tax revenue), and calling that the progressivity of taxation ("how much more (or less) of the tax burden falls on the wealthiest households"). The fact that the United States gets so much more of its tax revenue from the rich when compared to Sweden means we have a much more progressive tax policy, one of the most progressive in the world. Congratulations?

    The problem is, of course, that we get so much of our tax revenue from the rich because we have one of the highest rates of inequality across peer nations. How unequal a country is will be just as much of a driver of the progressivity of taxation as the actual tax policies. To see how absurd this is: even flat taxes on a very unequal income distribution will mean that taxes are “progressive,” as more income will come from the top of the income distribution, just because that’s where all the money is. Yet how would that be progressive taxation?
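    The flat-tax point can be made concrete with a quick sketch (the decile incomes here are made-up numbers for illustration, not OECD data): under a perfectly flat tax, the top decile's share of tax revenue mechanically equals its share of income, so the "progressivity" this metric reports is just the inequality of the underlying distribution.

```python
# Toy example (hypothetical numbers): a flat 30% tax on a very unequal
# income distribution. By the "share of revenue from the top decile"
# metric, this tax looks highly "progressive" even though every
# household pays exactly the same rate.

incomes = [10, 12, 15, 18, 22, 27, 33, 42, 60, 261]  # ten deciles; top decile holds over half of all income
flat_rate = 0.30

taxes = [inc * flat_rate for inc in incomes]

top_income_share = incomes[-1] / sum(incomes)
top_tax_share = taxes[-1] / sum(taxes)

print(f"Top decile income share: {top_income_share:.1%}")
print(f"Top decile tax share:    {top_tax_share:.1%}")
# The two shares are identical: measured this way, "progressivity" is
# driven entirely by how unequal the incomes are, not by the tax code.
```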

    We can confirm this. Let’s take the OECD data that is likely where their metric of tax progressivity comes from, and plot it against the market distribution. This is the share of taxes that come from the top decile, versus how much market income the top decile takes home:

    As you can see, they are related. (The same goes if you use Gini coefficients.)

    Beyond the obvious one, there's a much deeper and more important relationship here. As Saez, Piketty and Stantcheva find, the fall in top tax rates over the past 30 years is a major driver of the explosion of income inequality during that same period. Among other channels, lower marginal tax rates give high-end management a greater incentive to bargain for higher wages, and corporate structures a greater incentive to pay them out. This is an important element in the creation of our recent inequality, and it shouldn't get lost among odd definitions of the word "progressive," a word that always seems to create confusion.


  • New The Score Column: The Rise of Mass Incarceration

    Sep 29, 2014 | Mike Konczal

    I have a new column at The Score: Why Prisons Thrive Even When Budgets Shrink. Even as the era of big government was declared over, the incarceration rate quintupled over just 20 years, after having been stable for a century. Logically, three actors set the rate of incarceration: here's how they made this radical transformation of the state.


  • How Much are Local Civil Asset Forfeiture Abuses Driven By the Feds? A Reply to Libertarians

    Sep 12, 2014 | Mike Konczal

    (Wonkish, as they say.)

    I wrote a piece in the aftermath of the Michael Brown shooting and subsequent protests in Ferguson noting that the police violence, rather than a federalized, militarized affair, should be understood as locally driven from the bottom-up. Others made similar points, including Jonathan Chait (“Why the Worst Governments in America Are Local Governments”) and Franklin Foer (“The Greatest Threat to Our Liberty Is Local Governments Run Amok”). Both are smart pieces.

    The Foer piece came in for a backlash on a technical point that I want to dig into, in part because I think it is illuminating and helps prove his point. Foer argued that “If there’s a signature policy of this age of unimpeded state and local government, it’s civil-asset forfeiture.” Civil-asset forfeiture is where prosecutors press charges against property for being illicit, a legal tool that is prone to abuse. (I’m going to assume you know the basics. This Sarah Stillman piece is fantastic if you don’t, or even if you do.)

    Two libertarian critics jumped at that line. Jonathan Blanks of the Cato Institute wrote “the rise of civil asset forfeiture is a direct result of federal involvement in local policing. In what are known as ‘equitable sharing’ agreements, federal law enforcement split forfeiture proceeds with state and local law authorities.”

    Equitable sharing is a system where local prosecutors can choose to send their cases to the federal level and, if successful, up to 80 percent of the forfeited funds go back to local law enforcement. So even in states where the law lets law enforcement keep less than 80 percent of funds to try and prevent corruption (by handing the money to, say, roads or schools), “federal equitable sharing rules mandate those proceeds go directly to the law enforcement agencies,” circumventing state laws meant to prevent “policing for profit.”
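    The incentive at work can be sketched with a toy calculation (the seizure amount and state-law share are hypothetical; the up-to-80-percent federal return is the figure discussed above): a department in a state that routes half of forfeiture proceeds to schools can keep far more simply by federalizing the case.

```python
# Hypothetical comparison of what a local police department keeps from
# a single seizure under a state law that returns only 50% of proceeds
# to law enforcement, versus routing the same case through federal
# equitable sharing, which returns up to 80% regardless of state law.

seizure = 100_000                # hypothetical seizure amount, dollars
state_return_rate = 0.50         # hypothetical state-law share to police
federal_return_rate = 0.80       # equitable sharing returns up to 80%

kept_under_state_law = seizure * state_return_rate
kept_via_equitable_sharing = seizure * federal_return_rate

print(f"Kept under state law:       ${kept_under_state_law:,.0f}")
print(f"Kept via equitable sharing: ${kept_via_equitable_sharing:,.0f}")
# Federalizing the case raises the department's take by 60% here,
# which is the incentive the libertarian critics are pointing at.
```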

    Lucy Steigerwald at Vice addresses all three posts, and makes a similar point about Foer. “Foer mentions the importance of civil asset forfeiture while skirting around the fact that forfeiture laws incentivize making drug cases into federal ones, so as to get around states with higher burdens of proof for taking property...Include a DEA agent in your drug bust—making it a federal case—and suddenly you get up to 80 percent of the profits from the seized cash or goods. In short, it’s a hell of a lot easier for local police to steal your shit thanks to federal law.”

    Equitable sharing, like all law in this realm, needs to be gutted yesterday, and I’m sure there’s major agreement on across-the-board reforms. But I think there are three serious problems with viewing federal equitable sharing as the main driver of state and local forfeitures.

    Legibility, Abuse, Innovation

    The first is that we are talking about equitable sharing in part because it’s the only part of the law that we are capable of measuring. There’s a reason that virtually every story about civil asset forfeiture highlights equitable sharing [1]. It’s because it’s one of the few places where there are good statistics on how civil asset forfeiture is carried out.

    As the Institute for Justice found when it tried to create a summary of the extent of the use of civil asset forfeiture, only 29 states have a requirement to record the use of civil asset forfeiture at all. But most are under no obligation to share that information, much less make it accessible. It took two years of FOIA requests, and even then 8 of those 29 states didn’t bother responding, and two provided unusable data. There's problematic double-counting and other issues with the data that is available. As they concluded, “Thus, in most states, we know very little about the use of asset forfeiture” at the county and state level.

    We do know about it at the federal level however. You can look up the annual reports of the federal Department of Justice’s Assets Forfeiture Fund (AFF) and the Treasury Forfeiture Fund (TFF) of the U.S. Department of the Treasury. There you can see the expansion of the program over time.

    You simply can’t do this in any way at the county or state levels. You can’t see statistics to see if equitable sharing is a majority of forfeiture cases - though, importantly, equitable sharing was the minority of funds in the few states the Institute for Justice was able to measure, and local forfeitures were growing rapidly - or the relationship between the two. It’s impossible to analyze the number of forfeiture cases (as opposed to the amount seized), which is what you’d want to measure to see the increased aggressiveness in its use on small cases.

    This goes to Foer’s point that federal abuses at least receive some daylight, compared to the black boxes of county prosecutor’s offices. This does, in turn, point the flashlight towards the Feds, and gives the overall procedure a Federal focus. But this is a function of how well locals have fought off accountability.

    The second point is that the states already have laws that are more aggressive than the Feds’. A simple graph will suffice (source). The Feds return 80 percent of forfeited assets to law enforcement. What do the states return?

    Only 15 states have laws that are below the Feds’ return threshold. Far, far more states already have a more expansive “policing for profit” regime set in at the state level than what is available at the federal level. It makes sense that for those 15 states equitable sharing changes the incentives [2], of course, and the logic extends to the necessary criteria to make a seizure. But the states, driven no doubt by police, prosecutors and tough-on-crime lawmakers, have written very aggressive laws in this manner. They don't need the Feds to police for profit; if anything the Feds would get in the way.

    The third is that the innovative expansion of civil asset forfeiture is driven at the local level just as much as the federal level. This is the case if only because equitable sharing can only go into effect if there’s a federal crime being committed. So aggressive forfeiture of the cars of drunk drivers or those who hire sex workers (even if it is your wife’s car) is a local innovation, because there’s no federal law to advance it.

    There’s a lot of overlap for reform across the political spectrum here, but seeing the states as merely the pawns of the federal government when it comes to forfeiture abuse is problematic. Ironically, we see this precisely because we can’t see what the states are doing, but the hints we do know point to awful abuses, driven by the profit motive from the bottom-up.

    [1]  To take two prominent, excellent recent examples. Stillman at the New Yorker: “through a program called Equitable Sharing…At the Justice Department, proceeds from forfeiture soared from twenty-seven million dollars in 1985 to five hundred and fifty-six million in 1993.”

    And Michael Sallah, Robert O’Harrow Jr., Steven Rich of the Washington Post: “There have been 61,998 cash seizures made on highways and elsewhere since 9/11 without search warrants or indictments through the Equitable Sharing Program, totaling more than $2.5 billion.”

    If either wanted to get these numbers at the state and local levels it would be impossible.

    [2] I understand why one would want to put an empirical point on it, and the law needs to be changed no matter what, but the core empirical work relating payouts to equitable sharing isn’t as aggressive as you’d imagine. Most of the critical results aren’t significant at a 5% level, and even then you are talking about a 25% increase in just equitable sharing (as opposed to the overall amount forfeited by locals, which we can’t measure) relative to a 100% change in state law payouts.

    Which makes sense - no prosecutor is going to be fired for bringing in too much money into the school district, if only because money is fungible on the back end.

    Follow or contact the Rortybomb blog:
     
      

     

    (Wonkish, as they say.)

    I wrote a piece in the aftermath of the Michael Brown shooting and subsequent protests in Ferguson noting that the police violence, rather than a federalized, militarized affair, should be understood as locally driven from the bottom-up. Others made similar points, including Jonathan Chait (“Why the Worst Governments in America Are Local Governments”) and Franklin Foer (“The Greatest Threat to Our Liberty Is Local Governments Run Amok”). Both are smart pieces.

    The Foer piece came into a backlash on a technical point that I want to dig into, in part because I think it is illuminating and helps proves his point. Foer argued that “If there’s a signature policy of this age of unimpeded state and local government, it’s civil-asset forfeiture.” Civil-asset forfeiture is where prosecutors press charges against property for being illicit, a legal tool that is prone to abuse. (I’m going to assume you know the basics. This Sarah Stillman piece is fantastic if you don’t, or even if you do.)

    Two libertarian critics jumped at that line. Jonathan Blanks of the Cato Institute wrote “the rise of civil asset forfeiture is a direct result of federal involvement in local policing. In what are known as ‘equitable sharing’ agreements, federal law enforcement split forfeiture proceeds with state and local law authorities.”

    Equitable sharing is a system where local prosecutors can choose to send their cases to the federal level and, if successful, up to 80 percent of the forfeited funds go back to local law enforcement. So even in states where the law lets law enforcement keep less than 80 percent of funds to try and prevent corruption (by handing the money to, say, roads or schools), “federal equitable sharing rules mandate those proceeds go directly to the law enforcement agencies, circumventing state laws to prevent “‘policing for profit.’”

    Lucy Steigerwald at Vice addresses all three posts, and make a similar point about Foer. “Foer mentions the importance of civil asset forfeiture while skirting around the fact that forfeiture laws incentivize making drug cases into federal ones, so as to get around states with higher burdens of proof for taking property...Include a DEA agent in your drug bust—making it a federal case—and suddenly you get up to 80 percent of the profits from the seized cash or goods. In short, it’s a hell of a lot easier for local police to steal your shit thanks to federal law.”

    Equitable sharing, like all law in this realm, needs to be gutted yesterday, and I’m sure there’s major agreement on across-the-board reforms. But I think there’s three serious problems with viewing federal equitable sharing as the main driver of state and local forfeitures.

    Legibility, Abuse, Innovation

    The first is that we are talking about equitable sharing in part because it’s only part of the law that we are capable of measuring. There’s a reason that virtually every story about civil asset forfeiture highlights equitable sharing [1]. It’s because it’s one of the few places where there are good statistics on how civil asset forfeiture is carried out.

    As the Institute for Justice found when they tried to create a summary of the extent of the use of civil asset forfeiture, only 29 states have a requirement to record the use of civil asset forfeiture at all. But most are under no obligation to share that information, much less make it accessible. It took two years of FOIA requests, and even then 8 of those 29 states didn’t bother responding, and two provided unusable data. There’s problematic double-counting, among other problems, in the data that is available. As they concluded, “Thus, in most states, we know very little about the use of asset forfeiture” at the county and state level.

    We do know about it at the federal level however. You can look up the annual reports of the federal Department of Justice’s Assets Forfeiture Fund (AFF) and the Treasury Forfeiture Fund (TFF) of the U.S. Department of the Treasury. There you can see the expansion of the program over time.

    You simply can’t do this in any way at the county or state levels. There are no statistics showing whether equitable sharing makes up a majority of forfeiture cases (though, importantly, equitable sharing was a minority of funds in the few states the Institute for Justice was able to measure, and local forfeitures were growing rapidly), or what the relationship between the two is. It’s impossible to analyze the number of forfeiture cases (as opposed to the amount seized), which is what you’d want to measure to see the increased aggressiveness of its use on small cases.

    This goes to Foer’s point that federal abuses at least receive some daylight, compared to the black boxes of county prosecutors’ offices. This does, in turn, point the flashlight towards the Feds, and gives the overall procedure a Federal focus. But this is a function of how well the locals have fought off accountability.

    The second point is that the states already have laws that are more aggressive than the Fed’s. A simple graph will suffice (source). The Feds return 80 percent of forfeited assets to law enforcement. What do the states return?

    Only 15 states have laws that are below the Fed’s return threshold. Far, far more states already have a more expansive “policing for profit” regime set at the state level than what is available at the Federal level. It makes sense that for those 15 states equitable sharing changes the incentives [2], of course, and the logic extends to the necessary criteria for making a seizure. But the states, driven no doubt by police, prosecutors and tough-on-crime lawmakers, have written very aggressive laws in this manner. They don't need the Feds to police for profit; if anything the Feds would get in the way.

    The third is that the innovative expansion of civil asset forfeiture is driven at the local level just as much as at the federal level, if only because equitable sharing can only go into effect when a federal crime is being committed. So aggressive forfeiture of the cars of drunk drivers, or of those who hire sex workers (even if it’s your wife’s car), is a local innovation, because there’s no federal law to advance it.

    There’s a lot of overlap for reform across the political spectrum here, but seeing the states as merely the pawns of the federal government when it comes to forfeiture abuse is problematic. Ironically, we see this precisely because we can’t see what the states are doing, but the hints we do know point to awful abuses, driven by the profit motive from the bottom-up.

    [1]  To take two prominent, excellent recent examples. Stillman at the New Yorker: “through a program called Equitable Sharing…At the Justice Department, proceeds from forfeiture soared from twenty-seven million dollars in 1985 to five hundred and fifty-six million in 1993.”

    And Michael Sallah, Robert O’Harrow Jr., Steven Rich of the Washington Post: “There have been 61,998 cash seizures made on highways and elsewhere since 9/11 without search warrants or indictments through the Equitable Sharing Program, totaling more than $2.5 billion.”

    If either wanted to get these numbers at the state and local levels it would be impossible.

    [2] I understand why one would want to put an empirical point on it, and the law needs to be changed no matter what, but the core empirical work relating payouts to equitable sharing isn’t as strong as you’d imagine. Most of the critical results aren’t significant at a 5% level, and even then you are talking about a 25% increase in just equitable sharing (as opposed to the overall amount forfeited by locals, which we can’t measure) relative to a 100% change in state law payouts.

    Which makes sense: no prosecutor is going to be fired for bringing too much money into the school district, if only because money is fungible on the back end.

    Follow or contact the Rortybomb blog:

  • New Piece on Where the ACA Should Go Next

    Sep 5, 2014Mike Konczal

    In light of the increasingly good news about the launch of the Affordable Care Act, I wanted to write about what experts think should be next on the health care front. Particularly with the implosion of the right-wing argument that there would be something like a death spiral, I wanted to flesh out what the left's critique would be at this point. Several people pointed me in the direction of the original bill that passed the House, the one that was abandoned after Scott Brown's upset victory in early 2010 in favor of passing the Senate bill, as a way forward.

    Here's the piece. Hope you check it out.

