Criminal Justice Data Analysis
Crime and police reform are currently among the most debated policy subjects in the United States, largely because the unnecessary deaths of several unarmed black Americans at the hands of law enforcement officers have led the public to demand serious changes to the American police system. At the same time, law enforcement in this country was already facing potentially major changes for a different reason: the rise of predictive policing technology, which generally uses predictive models to determine who is most likely to commit future crimes and where those crimes are likely to occur. Although systemic racial biases tend to manifest themselves to some degree in algorithms programmed by humans, it is possible that by combining data analysis strategies with certain reforms suggested by today's activists, local governments can decrease police funding by targeting their spending more efficiently and by reducing bail recidivism. However, these potential benefits must be weighed against the drawbacks of the data analysis techniques themselves, specifically their risk of deepening racial inequality in the criminal justice system.
Upon further inspection, however, it becomes apparent that predictive policing has underlying issues that should be addressed before it is used. First, as a researcher from the Human Rights Data Analysis Group’s Policing Project states, the data used in these predictive systems “are collected by a criminal justice system in which race makes a big difference in the probability of arrest — even for people who behave identically”. Because minorities are more likely than white people in identical circumstances to be arrested, arrest data overstate their likelihood of actually committing crimes, and models trained on those data inherit that distortion. Furthermore, when police officers are sent to a specific neighborhood and warned that it is filled with people likely to commit violent crimes, they may perceive a non-threatening situation as far more dangerous than it is and respond with unnecessary force. This already occurs in predominantly black areas, which many officers instinctively (but often inaccurately) consider unsafe; when law enforcement is handed what looks like statistical proof that an area is dangerous, the problem would most likely worsen. This would only exacerbate the pattern we are facing in the United States of officers in non-threatening situations killing or seriously injuring people, very frequently black men, who pose no real danger to the officers or anyone around them. So although predictive policing could allow significant reductions in police funding while still maintaining an effective law enforcement presence, local governments that are using these tactics or considering them need to fully consider how to address these drawbacks.
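The feedback loop described above can be made concrete with a toy simulation. In the sketch below, two hypothetical neighborhoods have identical true offense rates, but one starts with slightly more recorded arrests; patrols are then concentrated wherever the record shows more arrests, and recorded arrests scale with patrol presence. All numbers here are illustrative assumptions, not real data, and the allocation rule is a simplified stand-in for how hot-spot targeting behaves.

```python
# Toy simulation of the arrest-data feedback loop: equal underlying
# behavior, unequal records, patrols allocated from the records.
true_offense_rate = {"A": 0.10, "B": 0.10}   # identical by assumption
arrests = {"A": 55, "B": 45}                 # biased starting record
initial_share_a = arrests["A"] / sum(arrests.values())

for year in range(10):
    # "Predictive" allocation: concentrate 70% of patrols wherever the
    # historical record shows more arrests.
    hot = max(arrests, key=arrests.get)
    patrol_share = {n: (0.7 if n == hot else 0.3) for n in arrests}
    for n in arrests:
        # Observed arrests scale with patrol presence as well as actual
        # offending, so the over-patrolled area logs more arrests even
        # though underlying behavior is identical.
        arrests[n] += round(1000 * true_offense_rate[n] * patrol_share[n])

final_share_a = arrests["A"] / sum(arrests.values())
print(f"A's share of recorded arrests: {initial_share_a:.2f} -> {final_share_a:.2f}")
```

Even though the two neighborhoods behave identically, neighborhood A's share of recorded arrests grows from 55% toward the 70% patrol share, and each new year of data appears to confirm the original bias.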
Predictive models are also used to assess the risk that defendants will commit another crime if released on bail, and this method’s growth in popularity in the United States has come with considerable controversy. It is not difficult to see why. The public currently has no right to see what data go into any individual’s score, giving the algorithm an air of secrecy that is unnerving to defendants whose entire prison sentence could depend on their risk rating. Furthermore, in a study of Broward County defendants from 2013 to 2014, ProPublica found that black defendants were twice as likely as white defendants to be wrongly labeled as future criminals by the COMPAS model (meaning the algorithm predicted they would commit a crime when they did not), while white defendants were more likely to be wrongly labeled as low risk. When the researchers separated the effect of race from other typically important factors such as age, gender, and prior criminal history, black defendants were still 77% more likely than white defendants to be labeled at high risk of committing a future violent crime, and 45% more likely for crimes of any kind. Such scores can lead judges to assign higher bail amounts to black defendants in otherwise identical situations, resulting in higher incarceration rates for black defendants. While this is only one study, and there has not been enough similar research to reach concrete conclusions, the results suggest that bail recidivism prediction, like the predictive policing strategies discussed earlier, threatens to exacerbate the staggering racial inequality in the American criminal justice system rather than fix it. It has the potential to improve the system by informing decisions about bail and probation, but its opacity to the public and its especially inaccurate decisions for minorities need to be addressed before moving forward.
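The disparity ProPublica measured is a difference in false positive rates: among people who did not reoffend, what fraction were nonetheless labeled high risk, computed separately for each racial group? A minimal sketch of that check is below; the records are invented for illustration and deliberately constructed so the black group's false positive rate is double the white group's, mirroring the study's headline finding.

```python
# Invented example records: (group, predicted_high_risk, reoffended).
records = [
    ("black", True,  False), ("black", True,  True),
    ("black", True,  False), ("black", False, False),
    ("white", True,  True),  ("white", False, False),
    ("white", False, False), ("white", True,  False),
]

def false_positive_rate(group):
    # Among people in the group who did NOT reoffend, the fraction
    # that the model nonetheless labeled high risk.
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("black", "white"):
    print(g, round(false_positive_rate(g), 2))
```

Note that a model can show this kind of gap even when its overall accuracy is similar across groups, which is why auditing error rates per group, rather than accuracy alone, matters.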
The former issue can be addressed relatively easily by informing defendants and their lawyers what types of data (and what of their own data) are factored into the model, so that they can correct any mistakes and fully understand why they received the decision they did. Making the data fully public would also allow third parties to audit the algorithm for racial bias. The latter issue is obviously more complicated: the United States has struggled for decades, even centuries, with institutionalized racism, which is likely an enormous factor in black defendants’ higher risk scores. There will be no easy fix, and determining how and why racial biases influence the models, and how to correct for them, involves many complexities. But cooperation among local governments, law enforcement, and the public will be key to keeping everyone as informed as possible and reaching a collective solution.
In summary, despite the great potential for data analysis to improve the criminal justice system in the United States, its underlying problems remain unresolved, and research on the subject is not yet conclusive enough to declare it successful. But if the companies developing these technologies can effectively address the issues discussed above before implementing their models nationwide, our justice system could certainly change for the better.
Angwin, Julia, et al. "Machine Bias." ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. Accessed 18 June 2020.
Barry-Jester, Anna Maria, et al. "Should Prison Sentences Be Based On Crimes That Haven't Been Committed Yet?" FiveThirtyEight, 4 Aug. 2015, fivethirtyeight.com/features/prison-reform-risk-assessment/. Accessed 23 June 2020.
Eckhouse, Laurel. "Big data may be reinforcing racial bias in the criminal justice system." The Washington Post, 10 Feb. 2017, www.washingtonpost.com/opinions/big-data-may-be-reinforcing-racial-bias-in-the-criminal-justice-system/2017/02/10/d63de518-ee3a-11e6-9973-c5efb7ccfb0d_story.html. Accessed 22 June 2020.
Ferguson, Andrew. "How data-driven policing threatens human freedom." The Economist: Open Future, 4 June 2018, www.economist.com/open-future/2018/06/04/how-data-driven-policing-threatens-human-freedom. Accessed 17 June 2020.
Heath, Dan. Upstream: The Quest to Solve Problems before They Happen. New York, Avid Reader Press, 2020.