Abolish Risk Assessment

In this episode we speak with three abolitionists who attended the 2018 Allied Media Conference in Detroit.

Chelsea Barabas and Rodrigo Ochigame of MIT outline the reformist logic of pretrial risk assessment, discuss the recent history of individualized actuarial fairness, and consider whether they see opportunities for data-driven abolitionist organizing.

Hamid Khan of the Stop LAPD Spying Coalition provides further historical context for the surveillance and algorithmic policing tools deployed by the technocratic stalker state, and for how they intersect with contemporary redlining and the suppression of dissent.

Image credit: Jackson O’Brasky

Subscribe via iTunes | Subscribe via RSS | Download the MP3

SHOW RUNDOWN

Welcome to Rustbelt Abolition Radio, my name is a María. Detroit’s 2018 Allied Media Conference hosted a number of abolitionist organizers, several of whom presented on data-driven research for the abolition of prisons and policing. In this episode we spoke with three of them –Rodrigo Ochigame, Chelsea Barabas, and Hamid Khan– to contextualize individualized risk assessment and the algorithmic policing tools that are used to track and trace people with the intent to cause harm and suppress dissent.

But before we begin, here’s Kaif Syed with some movement news you may have missed.

NEWS HEADLINES

On Feb 16th, after facing mounting pressure from activist groups, Michigan governor Gretchen Whitmer canceled the sale of a site for a future immigrant detention center in the city of Ionia. Groups such as No Detention Center in Michigan spoke out against the planned construction at an Ionia city council meeting and have held community meetings in opposition. The proposed jail was slated to be built and operated by the private company Immigration Centers of America, which currently runs a similar detention center in Virginia.

On Feb 2, protesters stormed a Brooklyn jail in response to the inhumane treatment of inmates. The Metropolitan Detention Center had been running on emergency electricity since Jan 27th, and inmates had been struggling with a lack of heat and light. The rally demanded that the inmates get heat and hot meals and be allowed contact with their families and lawyers. The rush into the jail was eventually suppressed by officers using pepper spray.

On Feb 26th, inmates at Coyote Ridge Corrections Center won concessions from prison officials after holding a large food strike. The strike, which entailed refusing prison-served meals, encompassed 1700 inmates at its peak and was held in protest of low food quality and other conditions. Concessions by prison officials included providing more protein, as well as non-food concessions like re-padding benches in the recreational area.

ABOLISH RISK ASSESSMENT

a María: I’m a María, here with Alejo Stark, and you’re listening to Rustbelt Abolition Radio: an abolitionist media and movement-building project based in Detroit, Michigan.

Alejo Stark: Today our guests are Chelsea Barabas and Rodrigo Ochigame, who brought a conversation on data and abolition to the 2018 Allied Media Conference.

a María: Chelsea’s a research scientist at MIT who works with interdisciplinary researchers and community organizers to unpack and transform mainstream narratives around criminal justice reform and push them towards abolition. Rodrigo is a student at MIT who examines risk-based thinking in capitalist institutions. He joined Chelsea and organizers from Chicago and Philadelphia at the 2018 Allied Media Conference to present on the implications of this work for fighting the carceral state. Welcome, and thank you for joining us.

Rodrigo Ochigame: Thanks for having us.

a María: At yesterday’s session, Chelsea, you introduced the liberal rhetoric that’s used to put a computer between people and their freedom. Can you describe that basic argument, and how risk assessment actually functions in relation to the histories of prisons and policing?

Chelsea Barabas: Our work specifically focuses on risk assessments and their expanding use in pretrial decisions in the US. Risk assessments have been around, though, for about a hundred years in some form or fashion. But over the last five to ten years we’ve seen a pretty rapid expansion in the area of pretrial reform, where people have started talking about them in the context of bail reform, where they’re posed as a solution to the growing pretrial jail population in the United States. So in the pretrial context, it’s specifically framed around characterizing pretrial release decisions as tasks of prediction. The idea being: somebody is arrested, and constitutionally, by default, the presumption is that that person should be released before their trial date. But there are a few very specific considerations that judges think about: whether you think that person is a flight risk who might flee and avoid their court date, or whether you think the person poses a danger to the community.

Chelsea Barabas: So if it’s a domestic violence case and you think something like a restraining order isn’t enough to protect the family, that’s one situation. If you think somebody might coerce witnesses, that might be another reason people think about detaining someone. Those concepts have been expanded and generalized in a lot of ways that have led to a much broader set of people being detained. So now people talk much more generally about the idea of failure to appear. The vast majority of people who fail to appear in court are not people who have fled to Canada or something. They are people who, for a number of much more mundane logistical reasons, don’t get to court on time. In this context, risk assessments are framed as a way of more accurately predicting who might fail to appear or who might be rearrested before their court date.

Chelsea Barabas: And it’s often framed as something that both conservative and liberal criminal justice reformers are interested in and have embraced, specifically in terms of an efficiency argument –trying to detain only the most risky people– but then also one version of an equity argument, which is: we know that judges are biased, they have implicit biases that lead to racial disparities, and if we limit their discretion and provide them a more data-driven perspective about who might be risky, then the decision is going to be more fair in general. So you see a really specific characterization of racial disparities in terms of implicit bias, as opposed to rationalized procedures and systems that systematize things that disparately impact people.

Chelsea Barabas: A lot of our work is trying to push past that, to also talk about whether risk assessments are fundamentally a worthwhile thing to pursue: even if we fixed the issues of bias and disparate predictive accuracy within them, would they be something we want to see in the world? That’s what we brought to the table yesterday in the workshop.

Alejo Stark: Rodrigo, you talked yesterday about the fundamental flaw in some of these risk assessment tools and what it is that they actually measure. So can you explain this again for us, and talk about the importance of understanding and utilizing this information as abolitionist organizers?

Rodrigo Ochigame: Yeah, yeah, absolutely. Even though these risk assessment tools are presented as validated, accurate, or scientific, and most of the conversation and controversy about them has been framed in terms of accuracy and bias, all of these tools are fundamentally flawed. They are unscientific, you could say, because even though they’re intended to predict the criminal behavior of individuals, the data on which the systems are trained are always data that largely reflect the penal practices of police departments and courts.

Rodrigo Ochigame: So they take data on arrests, convictions, and incarcerations, and run predictions on those. So presumably they would be predicting arrests, convictions, and incarcerations, but they don’t say they’re doing that. They say that they’re predicting criminal activity, or recidivism. Right? They always shift the agency and responsibility to the individual, so that they can make claims about their risk or the danger that they pose to your community. So in that sense they’re fundamentally flawed, and there’s no way to correct for that. There’s no magic in computer science that can possibly correct for that. So when somebody comes to propose risk assessment tools in your community and they claim that the tools are accurate or validated or objective, you can safely say that they are not.
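To make the label problem Ochigame describes concrete, here is a minimal sketch in Python, with invented data and hypothetical variable names: the only outcome these datasets record is system contact (a rearrest), so that is all a model can learn to predict, no matter how it is tuned.

```python
# Minimal sketch of the label problem: a "risk" model can only be
# trained on recorded outcomes (rearrest), never on offending itself.
# All data and variable names here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two neighborhoods with the same underlying rate of the behavior the
# tool claims to predict, but very different policing intensity.
neighborhood = rng.integers(0, 2, n)
offense = rng.random(n) < 0.10                       # identical everywhere
patrol_rate = np.where(neighborhood == 1, 0.9, 0.2)

# The only label available in the data: an offense that police were
# present to record.
rearrested = offense & (rng.random(n) < patrol_rate)

model = LogisticRegression().fit(neighborhood.reshape(-1, 1), rearrested)
print(model.predict_proba([[0], [1]])[:, 1])
```

Run on this toy data, the model assigns far higher "risk" to the heavily patrolled neighborhood even though the underlying rate is identical: it has learned to predict policing, not behavior.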

a María: Could you talk a little bit about this history of actuarial fairness?

Rodrigo Ochigame: Yeah, absolutely. The first experiments with actuarial risk assessment in criminal justice happened in the 1920s and thirties, and they were proposed by these white Chicago school sociologists to make parole decisions in Chicago and in different counties in Illinois.

Rodrigo Ochigame: And at that time, these methods of calculation –they wouldn’t have been called algorithms back then– were based on simpler statistical techniques, and the data generally included explicit racial variables. In the tools proposed now there are usually no explicit racial variables, but there are many other variables that correlate with race, so that they serve as proxies for race: arrest history, or neighborhood. We call them actuarial tools because they are derived from, or inspired by, insurance risk classification instruments. That’s where they were first used.

And in the insurance industry there has recently been some interesting historical work showing how, in the 1970s and eighties, there were disputes between civil rights and feminist activists and the private insurance industry in the United States. In life and health insurance, the activists were claiming –with very good reason– that private insurance was discriminatory, and what the industry did was advance a very large public campaign to try to convince Americans, with campaigns and advertisements, that risk classification in private insurance was inherently fair. So they invented this concept of actuarial fairness, framed it as a highly technical concept of fairness, and used it to naturalize private insurance as something that’s essential, natural, and inherently fair. And to a large extent they succeeded. Even now, in all these different controversies involving discrimination in statistical classification, insurance doesn’t really come up, and it doesn’t come up because the industry was so successful.

Now, in the conversation about risk assessment instruments in the prison industrial complex, we see a very similar tactic. In response to, for example, the 2016 ProPublica article and the public controversy that followed, the industry proposing the increased adoption of these instruments is saying: we can address these issues of accuracy and bias, and we can implement algorithms or systems that are fair. So it’s a very similar tactic to the one the private insurance industry tried in the seventies and eighties, with a large measure of success.
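A quick sketch of the proxy problem Ochigame describes, again with invented numbers: dropping the explicit race column does nothing when the remaining features carry almost the same information.

```python
# Hypothetical illustration: a "race-blind" dataset whose remaining
# features are strong proxies for race. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

race = rng.integers(0, 2, n)
# Under segregation, neighborhood tracks race ~90% of the time here.
neighborhood = np.where(rng.random(n) < 0.9, race, 1 - race)
# Uneven policing makes arrest history track race as well.
prior_arrests = rng.poisson(np.where(race == 1, 3.0, 0.5))

# Neither feature mentions race, yet both correlate with it strongly:
print(np.corrcoef(race, neighborhood)[0, 1])   # roughly 0.8
print(np.corrcoef(race, prior_arrests)[0, 1])  # strongly positive
```

Any model trained on these two columns can reconstruct most of what the deleted race column contained.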

Alejo Stark: This is sometimes pitched as a reform. What are the arguments that people tend to make, from a liberal position, as to why we should move toward risk assessments?

Rodrigo Ochigame: Another thing that’s important to note is that one of the trickier arguments that the people proposing a wider adoption of risk assessment instruments tend to make is that these tools can help reduce incarceration. Right? It’s basically a kind of selective incapacitation argument: you identify the most dangerous people and you imprison only them, and if you do so, you can reduce incarceration without increasing crime.

Rodrigo Ochigame: That’s the kind of argument they make. The important thing to note is that this very same argument was made by the neoliberal think tanks that started proposing risk assessment in the early eighties. There’s a quite important 1982 report by the military think tank RAND Corporation that made essentially the same argument: they said they were proposing their method of calculation, their method of risk assessment, as a solution to discrimination and racism in the US penal system, and that it could ultimately help reduce incarceration rates. So there’s this kind of seemingly progressive argument, and right now we’re hearing very much the same thing. We know that since the early 1980s, in every place where the selective incapacitation and risk assessment model was adopted, we have seen a dramatic increase in incarceration rates. It’s important to know that history: to know that this supposedly progressive argument doesn’t hold historically, and that the end of mass incarceration requires dramatic decarceral changes such as the elimination of cash bail, the abolition of mandatory minimums, and the dramatic reduction of prison admissions and sentence lengths. Risk assessment is being proposed as an alternative to those urgent changes, and it’s important to reject that false alternative, which historically has not worked.

a María: Yeah. So one of the things that came up yesterday was appropriating the veneer of data and data science for abolitionist organizing.

Rodrigo Ochigame: Yeah. Certainly the appeal of data and of numbers is one of the strategies that the prison industrial complex uses in order to suppress dissent. Just as in the insurance debates: when activists would make criticisms, the insurance industry would say, oh, you know, fairness is a technical matter, it’s beyond the grasp of activists. And that was essentially a way to try to delegitimize the activists’ claims. We see something very similar in the risk assessment conversation. I am less attracted to trying to use data and numbers to make different arguments, and more attracted to demystifying these technocratic arguments and trying to empower people who are working on the ground to be able to say: “Oh, when somebody comes to me and says that these tools are validated and accurate, I know what to respond.”

Chelsea Barabas: Yeah, though I’d acknowledge that data has a bunch of cultural capital right now [laughter]. I am interested in figuring out how you can use it to your own ends. Like, how can we wrest control of that toolkit so that you’ve got multi-textured stories that help you see things at, you know, multiple levels? I think that would be interesting.

a María: Before we close, is there anything else the two of you think would be important to demystify, or other avenues of exploration for people who want to understand and move on this kind of work?

Rodrigo Ochigame: I would just like to reiterate: because of the history of risk assessment and its entanglements with selective incapacitation and mass incarceration, we should reject these easy responses to the public controversy that’s going on.

Chelsea Barabas: Yeah, and maybe my last takeaway would just be don’t get distracted when engaging in these conversations. I think the goal should be to try to shift us away from them as soon as possible. I just would encourage people to do that and stay focused on the core issue.

Alejo Stark: Abolish risk assessments, right?

Chelsea Barabas: Yeah! [laughter]

Alejo Stark: Thank you. Thank you for speaking with us today.

Rodrigo Ochigame: Thank you so much.

Chelsea Barabas: Thanks for having us.

Rodrigo Ochigame: It’s an honor.
HAMID KHAN, STOP LAPD SPYING

a María: After the conference, we caught up with our next guest, Hamid Khan, by telephone.

Hamid Khan: My name is Hamid Khan. I’m a coordinator and organizer with the Stop LAPD Spying Coalition, which is based out of the Skid Row area of Los Angeles. The coalition has been around for almost nine years now, and one of the primary reasons for the coalition to come about was how counter-terrorism and counter-insurgency methodologies and tactics were, at a very fast pace, being incorporated into local policing. Nothing new –it’s been going on forever, particularly post-Vietnam– but given the advent of technology, given how fast information moves, given how vast the information-sharing environment is, this was becoming a major issue: how programs under the guise of national security were becoming day-to-day domestic policing tactics.

Typically, surveillance is seen through this very narrow lens of invasion of privacy. I say narrow because that is very much a privileged position –I would say even rooted in white privilege– because surveillance has been part and parcel of policing, and of the policing of the bodies of communities of color, particularly Black communities and indigenous communities. And it’s not just a way of gathering information; it’s a process, a very methodical process, of gathering information and monitoring and tracking and tracing people, and literally stalking communities with the intent to cause harm. So information gathering and surveillance has been central to these aspects of our existence. That was the primary purpose for the coalition to come about.

a María: I know that the Stop LAPD Spying Coalition has been actively analyzing predictive modeling. Tell me a little bit about what you found, what you all in the coalition have been thinking about.

Hamid Khan: I want to bring in a concept known as intelligence-led policing, which came out of England and has been a major mode of policing in the United States for about 30 years. The central pieces of intelligence-led policing are behavioral surveillance and data mining: observing not just actions but behaviors as well, and then computing things. All of this feeds into data mining.

And in a sense what it does is it quote-unquote allows an agency to produce the outcomes that it wants to achieve. I think that’s really critical. So intelligence-led policing becomes a key factor. Now let’s use that in the context of predictive analytics and predictive algorithms. Most of the time when we talk about predictive algorithms, the answer within our organizing remains limited to: the data is dirty. And of course it is, because the data comes from racist, institutional policing. All of that dirty or corrupt data will give out dirty and corrupt outputs: racism in, racism out, or in computer language, garbage in, garbage out. But I think that limits our understanding, because what this argument does is give the algorithm a free pass.

We have to look at the tools of computing themselves as well. Take the predictive algorithm: the tool is created for particular outcomes, and if the tool is created for policing, for example, with the outcome of predicting criminality, then in a sense it will produce that outcome. Criminality will be assigned to certain communities, and it will remain consistent.

We have to go back to the purpose of policing in our society, and I think that remains very critical, because there is this almost constant glorification of policing –particularly in the white imagination– in the context of public safety, in the context of protection. If we were to go back and draw out a whole trajectory and timeline of policing, looking at it through the lens of enslavement and genocide, of people of Color and immigrant and poor folks and Queer folks and Trans folks, then we start seeing that there was an intent for the police to exist. From slave patrols, to policing communities, to looking at young people of color and creating these notions of gang injunctions and gang databases and naming them super-predators, and on and on. So the practice, the tradition, the impact of the war on drugs and the war on crime and the war on gangs, and now the war on terror, needs to be seen as having an intent behind it. And that’s where policing comes in. So the algorithms themselves will be serving that fundamental intent: to control and contain and criminalize.

Data is being gathered from so many pickup points that it’s almost a menu for people to pick and choose from, to take and recreate our being, and to assign criminality to us.

The second part of our fight is: at what point do we stop gathering data? At what point do we look back? Professor Simone Browne has written very eloquently about the history of surveillance during slavery in her book Dark Matters. In the early 1700s there were lantern laws: if yours was an enslaved body and you were not walking with the master, you had to walk with a lantern, to self-identify as a threat to the system, as the other. And there are so many other examples if we keep building the timeline out. Not even going back 300 years: the red squads of the late 1800s, then COINTELPRO, and then, even more recently, starting in the 60s, the war on gangs, the war on drugs, broken windows policing, the war on terror’s suspicious activity reporting program, SWAT, various other things.

So there’s so much information just in this very limited scope –and I’m saying the limited scope of policing, of law enforcement only. But expand that to the information being gathered by public health departments, by transportation departments, by education departments, by various other agencies, and then look at the information being gathered by the private sector. In essence, what this provides is such a huge universe of quote-unquote dirty data, of data on us, that the idea of predictive analytics can never be unbiased. It is just not possible. What it does is mask institutional and structural racism, and the story gets limited to computer-driven models, as if computers are race-neutral: well, this is something that happened in the past, what are we supposed to do with it? So that’s the broad-stroke understanding of how we are looking at it, of what we are finding out.

Now, what are some of the practical implications? Let’s look at predictive policing. Predictive policing is a two-tier program. One tier is location-based, or community-based, which uses a predictive algorithm: PredPol. Then there’s another called Operation LASER in Los Angeles –LASER stands for Los Angeles Strategic Extraction and Restoration– which directly targets individuals as well as locations. How does that happen? For example, there’s some history of crime, a long-term history and a short-term history, in a certain location, and then cops are deployed. So these become management tools for quote-unquote efficiency and reduced deployment of resources. But what communities are they going back to? It not only becomes cyclical; it provides this masked opportunity to continue to lay siege on the communities, to continue to target people, to continue to do stop-and-frisk, to continue to criminalize whole neighborhoods, and to continue to police like that.
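The cycle Khan describes can be sketched in a few lines of Python, with invented numbers: patrols are allocated where past recorded crime is highest, and crime is recorded where patrols are, so the initial disparity sustains itself indefinitely.

```python
# Hypothetical sketch of the feedback loop in location-based predictive
# policing. Two areas have identical underlying incident rates, but
# area 0 starts with more recorded crime because it was over-policed.
true_rate = [10.0, 10.0]
recorded = [30.0, 10.0]

for year in range(5):
    total = sum(recorded)
    patrol_share = [r / total for r in recorded]   # "predictive" allocation
    # Incidents get recorded in proportion to patrol presence.
    recorded = [t * p * 2 for t, p in zip(true_rate, patrol_share)]
    print(year, [round(p, 2) for p in patrol_share])
# Output: the 0.75/0.25 split never corrects itself. The model keeps
# sending patrols where patrols have already been.
```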

Let’s look at the individual-based tier, the LASER program, which is based on individuals who may have had some history in the past. It’s a point system. For example, five points are assigned for some previous gun possession. Five points are assigned if you may be gang-affiliated. Five points are assigned if you may be on parole or probation. And one point is assigned each time you have been stopped. So people are being stopped –stop-and-frisk, or stopped for whatever reason– and even if they are not arrested, even if they’re not detained, even if only a field interview card is filled out, that attaches another point. What happens then is that these bulletins are released –they are known as chronic offender bulletins, which identify a person by their photo, by their address, by their biometrics, by their past history– and they’re distributed to the patrol cars in the community. And these are like most-wanted posters. So in essence you’re “most wanted” without being wanted for anything; that’s the whole dichotomy. And then the process takes place where people are traced and tracked and monitored and stopped and harassed. According to the LAPD themselves, their goal is to banish these individuals from the community. So now banishment becomes the goal, and people may very well be rebuilding their lives. We have cases directly in the city of Los Angeles, in South Central Los Angeles, where these things are going on.
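As a concrete sketch of the arithmetic Khan lays out (an illustration following his description, not the LAPD’s actual code), the chronic offender score accumulates from past flags and from every recorded stop, arrest or not:

```python
# Illustrative LASER-style chronic offender point score, following the
# weights Khan describes. This is a sketch, not LAPD's actual system.
def chronic_offender_score(prior_gun_possession: bool,
                           gang_affiliated_flag: bool,
                           on_parole_or_probation: bool,
                           num_stops: int) -> int:
    score = 0
    if prior_gun_possession:
        score += 5
    if gang_affiliated_flag:      # an allegation in a database suffices
        score += 5
    if on_parole_or_probation:
        score += 5
    # Every stop adds a point, even with no arrest or charge: a filled-in
    # field interview card is enough.
    score += num_stops
    return score

# Someone never convicted of anything, but stopped repeatedly in a
# heavily patrolled zone, can outscore someone with an actual record:
print(chronic_offender_score(False, True, False, 12))  # 17
print(chronic_offender_score(True, False, True, 1))    # 11
```

Note how the stop count couples this score back to the deployment loop above: more patrols mean more stops, more stops mean more points, and more points mean more attention from patrols.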

Now that’s the individual basis; let’s move to the next level: LASER zones are created –where would these individuals be? Within LASER zones are what they call anchor points. An anchor point could be a house, a small business, some location like a park. These areas are then deemed criminal areas, and what they do is bring in the city attorney and the adjudication branch and start using nuisance abatements. So in essence, we also have to look at how policing practices like broken windows were very much about gentrification and development.

Similarly, when broken windows policing was unleashed on Skid Row in downtown Los Angeles –which is home to the Stop LAPD Spying Coalition as well– under the guise of Bill Bratton’s Safer Cities Initiative in 2005 and 2006, within the first year or two some 18 to 25 thousand tickets were issued just to make people’s lives miserable: for loitering, for throwing ash on the ground, for sitting on the sidewalk, for jaywalking. And these are unhoused individuals living in tents, where there are no trash cans, where there are hardly any bathroom facilities, where there are high rates of mental health conditions among people out on the streets. So similarly, predictive policing is being used very effectively to expand gentrification and for the purposes of development, which leads to the displacement and eviction of individuals.

So these are some of the more direct impacts that I can lift up, and it’s all based on computer-generated things. Let’s talk about the bail issues; let’s talk about pre-sentencing guidelines. Supposedly the answer is that we will use predictive algorithms. There’s COMPAS, one program; of course IBM has a program; and then the Arnold Foundation has come up with one of these, under the guise of rehabilitation, under the guise of reducing the prison population.

But then what happens is that you are forever marked. And secondly, there’s electronic monitoring: a bracelet is put on your ankle as well, which creates an almost constant carceral state that you are in. And there’s a whole lot of money to be made. So profit, gentrification, displacement, criminalization of the community: it is structural racism. The story remains the same; the tools have just shifted.

a María: It seems important to recognize that people are being pre-individualized and pre-constructed as criminals under these algorithms, and to recognize that the abolitionist horizon is vast. What would be some concrete advice from the coalition for people to locate that fight as they organize?

Hamid Khan: We filed a lawsuit against the LAPD –this is the second public records lawsuit we have filed. The first one, which we won, concerned another national security program, the ‘Suspicious Activity Reporting’ program. In this one we’ve been demanding things like: where are the hotspots? What are the 500-by-500-foot hotspots? Who are the people in these chronic offender bulletins? And what we are seeing in downtown Los Angeles, for example, when we compared hotspots from June 2018 to June 2015 –these are actual LAPD-generated hotspots– is so interesting. The Los Angeles Community Action Network came out with a report on Skid Row called The Dirty Divide, and it is a dirty divide, between extreme affluence and extreme poverty. You would expect these hotspots to be where criminality was assigned through broken windows, in Skid Row. But they are mostly located in the financial district, mostly located along the boundaries where gentrification is happening in downtown Los Angeles, where a lot of these old buildings are being converted. So in a sense, a digital redlining is now being imposed on communities.

So that’s how this is taking place: effectively, a digital redlining. It may not be the same old bank redlining and real estate redlining –which are still going on– but it is now happening in the context of criminality as well, through digital redlining. Borders are created such that if somebody even walks across them, immediately the calls for service come in and the cops are there. So that’s one thing.

There are definitely alternatives on the books. The question is: what is our will to fight back, and how are we looking at it through the lens of abolition? The conversation needs to be rooted in the history of crime and how crime has been defined. It’s much bigger than the invasion of privacy; it’s much deeper than the invasion of privacy. It needs to be seen through this much bigger lens: this is the intent to cause harm, and it will remain fundamentally flawed. That’s why the fight is toward abolition. And when we talk about artificial intelligence and machine learning and predictive analytics, we cannot limit our understanding and our vision to fighting for quote-unquote good data or clean data, because there can never be clean or good data, given the range and the scale of this thing. We need to really go after the algorithm itself. There’s no such thing as a race-neutral algorithm, and there’s no such thing as an unbiased algorithm either.

MUSIC TRANSITION

CREDITS

Kaif Syed: Thanks for tuning in. You can listen to past episodes or read their transcripts on our website at www.rustbeltradio.org. This show was co-produced by the Rustbelt Abolition Radio crew: a María, Kaif Syed, and Alejo Stark. Original music by Bad Infinity.