Criminal Justice Part 2 (Primary Research Study)

There is a troubling lack of transparency from the creators of the models discussed in Part 1 regarding the factors that are incorporated into the algorithms. Neither details about computational techniques nor anonymized data have been released to researchers (Eckhouse, 2017), meaning that the models’ accuracy and biases cannot be properly evaluated. Until they can be, the public is left to make its own decisions about whether it is ethical to have sentencing decisions informed by a computer model.


Given this understandable lack of public knowledge regarding data science in criminal justice, it is critical both to inform others of what is taking place in police departments nationwide and to survey their opinions on the technology in light of this new information. Research has shown that the more the public is aware of the procedures their local police are using, the more receptive they are to new ideas and the more comfortable they feel having conversations with law enforcement to make their voices heard (Hustedt, 2016). But too often, local and state governments have proceeded with these technologies without any feedback from the public, or even evidence that the technologies are effective.

Adults of voting age are the ones who can effect change through their votes, while limitations in contacting participants meant that only those located close to the researcher could be surveyed. Thus, I conducted this research to answer the question: given accurate, comprehensive information about the effectiveness and potential implicit biases of predictive policing and bail recidivism risk assessments, what does public feedback among adults in the Bay Area indicate about whether these technologies should be utilized by criminal justice departments?

San Francisco Bay Area

The purpose of this study was to explore the public’s level of familiarity with this technology and their perception of it given the facts. The methodology was a cross-sectional survey combining closed-ended and open-ended questions. The survey approach allowed me to collect answers to several short questions from many participants, while also giving them space to elaborate on their answers or offer a response that did not quite fit the pre-designated options. Closed-ended questions provided a simple way of comparing participants’ perspectives on certain topics, while the open-ended ones captured the reasoning and justification behind each person’s stance.

Over the course of my study, I collected primary, qualitative data. I recruited participants using social media platforms such as Nextdoor and Facebook and had them fill out an online response form. As previously mentioned, I considered adults the most important demographic for drawing conclusions, since minors are unable to vote, so all of my participants were over 18 years old. The survey included a background information document (essentially a summarized version of the introduction section above), as well as a consent form and letter of intent. Before moving on to the main questions, I collected each participant’s race, age, and area of residence. The remaining questions asked about each person’s attitudes towards law enforcement in general and knowledge of predictive policing, then narrowed down to specific legislation and attitudes towards the ethical implications of these laws. Specifically, participants were asked to rate their overall familiarity with predictive policing (on a five-option scale), then whether they believe predictive policing should be more strictly regulated. The latter question included a blank space for participants to elaborate on their answers if they wished. They were subsequently asked to rate their opinion of law enforcement and how much they believe predictive policing benefits or hurts communities, each on a 10-point scale. Finally, another blank space was included at the end of the survey so participants could explain any of their answers or add overall comments.

After completion of the study, I had 50 responses to my online survey. The proportion of individuals identifying with a minority race was somewhat lower than expected: 18% of participants selected Asian, 12% Hispanic/Latino, and 4% Black, compared to the most recent Census data for the Bay Area, which showed proportions of 23%, 24%, and 6%, respectively ("Bay Area Census", 2010). Since the Bay Area has grown more diverse since 2010, the true proportions of these minority populations are likely even higher today. Because it is especially important to survey relationships between minority communities and law enforcement, this disparity could be a significant limitation of my results.
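The disparity described above is a simple gap between sample and Census proportions. A minimal sketch of that arithmetic, using the figures from this paragraph (the variable names are illustrative, not part of the survey instrument):

```python
# Comparing survey sample demographics against Census benchmarks.
# Percentages are taken from the study and the 2010 Bay Area Census.

sample = {"Asian": 0.18, "Hispanic/Latino": 0.12, "Black": 0.04}
census = {"Asian": 0.23, "Hispanic/Latino": 0.24, "Black": 0.06}

for group in sample:
    gap = census[group] - sample[group]  # how underrepresented the group is
    print(f"{group}: sample {sample[group]:.0%}, "
          f"census {census[group]:.0%}, gap {gap:.0%}")
```

This makes explicit that Hispanic/Latino respondents show by far the largest gap (12 percentage points), which is why the paragraph above flags the disparity as a potential limitation.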

Potential limitations notwithstanding, the survey results showed telling patterns. A slim majority of participants (56%) reported a score of 6 to 10 on a 10-point scale for their overall opinion of law enforcement, meaning a majority holds a favorable opinion of police, all things considered. However, 76% of participants marked a score between 3 and 8, suggesting that few people hold especially strong opinions in either direction, a pattern that recurs throughout these findings. Regarding predictive policing specifically, the background document appears to have helped people understand it better: only 22% of people answered either “Familiar with the basics plus slightly more” or “Familiar with all aspects” before reading it, compared to 58% afterward (the remaining 42% answered “Somewhat familiar with the basics”). Even so, this is not an especially high percentage, which is slightly surprising given that the background document condensed months of my own research, and that algorithmic risk assessments were at issue in California’s Proposition 25 in the 2020 election, when voters could have researched the subject.
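Figures like the 56% and 76% above come from tallying how many responses fall in a score range on the 10-point scale. A minimal sketch of that tally; the `scores` list here is hypothetical illustration data, not the actual survey responses:

```python
# Range tally behind figures like "56% scored 6-10".
# `scores` is hypothetical illustration data, one entry per participant.

scores = [2, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 9]

def share_in_range(responses, low, high):
    """Fraction of responses falling between low and high, inclusive."""
    return sum(low <= s <= high for s in responses) / len(responses)

favorable = share_in_range(scores, 6, 10)  # favorable end of the scale
moderate = share_in_range(scores, 3, 8)    # non-extreme answers
print(f"favorable: {favorable:.0%}, moderate: {moderate:.0%}")
```

Computing both shares from the same responses is what reveals the pattern noted above: a group can lean favorable overall while still clustering in the middle of the scale.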


Given this relative lack of familiarity with the subject, it stands to reason that several participants would not have strong opinions about it, and the data support this. 14% of participants had no opinion on question (1), whether or not predictive policing should be more strictly regulated, and a majority (62%) answered between 4 and 7 on question (2), a 10-point scale rating how much they believe predictive policing benefits communities. Of those who selected Yes or No on (1), 77% answered Yes, believing that the government should enact stricter regulations on the usage of predictive policing. Nine of the 17 who used the open-ended space in (1) cited bias and prejudice in the law enforcement system as a strong factor in their opinions. Nearly all of those who elaborated had negative comments about predictive policing; only one person said it likely benefits the communities where it is used. Several said it could work better if relationships between police officers and communities were stronger, suggesting requiring police to live in the areas that they serve or having them build connections with citizens in those areas. Along the same lines, 74% of participants answered on the “predictive policing currently harms communities” end of the scale for (2) (answer choices 1 to 5). Nine of the 50 used the space I provided for additional feedback or further thoughts on predictive policing as a whole, and only three wrote more than two sentences. Most of the comments focused on the inherent bias in the criminal justice system, as well as the idea that technology is a tool to inform decisions rather than something to be obeyed at all times.


The pattern of participants being unenthusiastic about predictive policing is relatively consistent with the findings of Khan (2018), whose report states that most Southern California residents believe their local law enforcement is untrustworthy and that predictive policing is not effective. However, many people have limited knowledge of the technology: only 58% understood it to a significant degree in my study, and 63% in the aforementioned report. Hustedt (2016) points out that when the public is informed about how and why the algorithms are used, the technology is better received and even more effective. In areas where police officers took the time to have conversations with citizens and ask questions of them, communities were more willing to cooperate with law enforcement and with the installation of predictive policing.

In the same vein, participants frequently mentioned that building closer relationships between communities and law enforcement would be the best way to improve policing. Suggestions for how this could be done included “hav[ing] police live in the communities they serve”, having the “goal of the community partnering with the police”, and prioritizing “informed citizens”. Maxson et al. (2003) found that participating in community meetings, speaking with citizens, and generally increasing their visibility helped police officers raise their approval ratings; fostering deeper relationships between the two groups would thus likely both help predictive policing gain a footing in communities and ease some of the existing tension. For example, in Shreveport, Louisiana, Perry et al. (2013) analyzed police behavior and found that, as part of implementing predictive policing in the city, officers would simply talk with as many citizens as possible to build familiarity with the area and share their knowledge with the public. This was so valuable to the community that Shreveport personnel considered the improved relationships a bigger benefit of the predictive policing program than the prediction itself. Many of the negative opinions in my study were not particularly strong, so if this tactic can be replicated in other cities, these doubters may be swayed towards believing in the merits of the technology when it is used correctly.


That is what participants were perhaps most adamant about: predictive policing is a tool to be used by people who make decisions, not something that should make decisions instead of people. A strong majority of participants believed that the technology should be regulated more strictly, and several stated that the primary reason was that the input data is heavily biased. It follows that humans should strictly oversee the algorithms, because without regulation the prejudices held by law enforcement could become more pronounced. However, these models are not making decisions by themselves; rather, criminal justice systems are using them to inform their decisions. For instance, in Philadelphia, defendants are assigned risk scores that decision-makers may then use however they see fit (Ritter, 2013). The algorithms are just one factor contributing to criminal justice processes, but it bears repeating: people do not want to see the algorithms left unchecked.
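The “tool, not decider” principle participants describe can be made concrete. A minimal sketch; the threshold, labels, and function are hypothetical illustrations of the principle, not any real system’s logic:

```python
# Sketch: a model's risk score flags cases, but a human makes the call.
# The threshold and labels are hypothetical, for illustration only.

FLAG_THRESHOLD = 7  # scores at or above this prompt closer human review

def informs_decision(risk_score, human_judgment):
    """The score is one recorded input; the human judgment is always final."""
    flagged = risk_score >= FLAG_THRESHOLD  # the model's recommendation
    # Log the recommendation alongside the outcome so oversight can audit
    # how often, and in which cases, humans diverge from the model.
    return {"model_flagged": flagged, "outcome": human_judgment}

print(informs_decision(9, "release"))  # the human can override a high score
print(informs_decision(3, "detain"))   # or act despite no flag being raised
```

Keeping the model’s output and the human outcome as separate recorded fields is one way to satisfy both demands in the survey responses: the algorithm stays advisory, and its influence remains auditable rather than unchecked.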


The discussion above outlines two clear messages, both of which must be heeded by criminal justice systems looking to implement predictive policing or improve their existing systems. First, until more is known about the true level of bias and accuracy of the algorithms, the models must remain no more than tools used by human decision-makers. Public feedback shows that most people are hesitant to give full control to computers, and for good reason. Still, risk scores and maps of where crimes are most likely to occur are valuable tools for law enforcement, as they analyze historical data for trends so that police can allocate their resources more efficiently.

Second, since predictive policing should be used as part of the overall process, and to different extents by different cities depending on context, it is critical that each city’s law enforcement understands its area’s context. Police can gain a better understanding of this context, and improve their public approval, by becoming involved in their communities and maintaining a public presence there (e.g., speaking with citizens regularly and holding meetings to explain procedures like predictive policing). By understanding the dynamics of the areas they work in, departments can implement predictive policing in the context of each area’s crime rates, locations, community relationships, and more. Ideally, these conversations would go two ways, with police asking questions of the citizens in their areas and informing them of how predictive policing works, improving public familiarity with the topic. Once the public gains a more advanced knowledge of how it works, whether through more research on its effectiveness and biases or simply through better communication between the government and the public, it will be easier to determine whether it is appropriate to move forward with the technology.

Given that this study was conducted in the Bay Area due to physical limitations in contacting participants, surveys involving citizens of other regions and different demographic proportions would likely return different results. Obtaining a representative picture of the nation’s opinions would go further towards providing actionable guidance than relying on the Bay Area’s opinions alone. This is such a fast-moving and fast-growing field that keeping tabs on all the developments is difficult, but at the same time, law enforcement affects everyone’s life, and taking the time to understand predictive policing is worthwhile for anyone who wants to provide valuable feedback on how it should be utilized. Thus, the ability of researchers, police, and communities to have productive conversations with each other on the matter is critical. As technology and predictive algorithms grow more advanced, it appears inevitable that their influence over criminal justice will follow suit. And although the corporations that publish the models currently allow little transparency, this may change with increased pressure on them to release the factors that are input into the algorithms.

In summary, there remains quite a lot of work to be done in terms of mending police and community relations as well as improving familiarity with predictive policing before the technology can fully be implemented nationwide. Public feedback indicates that both are necessary, but also that the lack of strong opinions towards predictive policing leaves a window for the technology to make a real impact in improving society if it can be analyzed and applied properly in the right context.


Eckhouse, L. (2017, February 10). Big data may be reinforcing racial bias in the criminal justice system. The Washington Post.

Hustedt, C. (2016, November). A Public Value Perspective on Predictive Policing in the US.

Khan, H. (2018, May 8). Dismantling Predictive Policing in Los Angeles. Stop LAPD Spying. Retrieved September 27, 2020.

Maxson, C., Hennigan, K., & Sloane, D. (2003). Factors That Influence Public Opinion of the Police. National Institute of Justice.

Perry, W. L., McInnis, B., Price, C. C., Smith, S. C., & Hollywood, J. S. (2013). Predictive Policing: The Role of Crime Forecasting in Law Enforcement Operations. RAND Corporation.

Ritter, N. (2013, February). Predicting Recidivism Risk: New Tool in Philadelphia Shows Great Promise. NIJ Journal, 271, 4-13.

San Francisco Bay Area. (2010). Bay Area Census. Retrieved March 1, 2021.


