“How AI Could Reinforce Biases In The Criminal Justice System” — CNBC, 2019, 8:33 — https://www.youtube.com/watch?v=ZMsSc_utZ40
How will algorithmic policing and crime prediction technologies affect racial biases in the criminal justice system? This video examines the growing popularity of such technologies in police departments and courtrooms throughout the United States. Proponents believe algorithms will reduce racial biases by basing policing and sentencing decisions on “objective” data. Critics counter that the technology is inherently biased because the data on which it operates is skewed against people of color. We learn about several ways algorithms have been used across different sectors of the criminal justice system, and about the ensuing debate over their effectiveness. Many people seem to put their faith in the simple narrative that technology will liberate society from oppressive biases. Meaningful change, however, appears to be more difficult to achieve than these technologies promise.
What are your thoughts on computer-aided decision making in the criminal justice system? Should the system continue to embrace these technologies? If so, what can be done to make the technology more effective or objective? If not, what are better strategies to confront bias?
From the video’s description: Increasingly, algorithms and machine learning are being implemented at various touch points throughout the criminal justice system, from deciding where to deploy police officers to aiding in bail and sentencing decisions. The question is, will this tech make the system fairer for minorities and low-income residents, or will it simply amplify our human biases? We all know humans are imperfect. We’re subject to biases and stereotypes, and when these come into play in the criminal justice system, the most disadvantaged communities end up suffering. It’s easy to imagine that there’s a better way, that one day we’ll find a tool that can make neutral, dispassionate decisions about policing and punishment. Some think that day has already arrived. Around the country, police departments and courtrooms are turning to artificial intelligence algorithms to help them decide everything from where to deploy police officers to whether to release defendants on bail. Supporters believe that the technology will lead to increased objectivity, ultimately creating safer communities. Others, however, say that the data fed into these algorithms is encoded with human bias, meaning the tech will simply reinforce historical disparities. Learn more about the ways in which communities, police officers, and judges across the U.S. are using these algorithms to make decisions about public safety and people’s lives.