Popular Sociology


Automating Racism

“Are We Automating Racism?” — Vox, 2021, 22:53
https://www.youtube.com/watch?v=Ok5sKLXqynQ

This video covers multiple examples of how algorithms reflect social biases and may perpetuate social inequalities like racism. Image-cropping algorithms routinely prioritize white faces over Black faces, many sensors perform poorly on darker skin, and beauty filters adjust facial features in ways that make them appear whiter. The lesson is that machine learning ultimately reflects societal biases, even when we believe otherwise.

Overall, we like to believe in the idea of “tech neutrality”: that our technology is free from bias and can help liberate our society from human subjectivities. Tech neutrality, however, is a fiction, and because we believe it is real, we tend not to question it. This video dispels that belief by showing us the multitude of human decisions that go into the creation of algorithms and other technologies. Every choice we make when creating technology is a potential opportunity for our biases to contaminate the result. Likewise, the data sets on which our technology operates are saturated with white faces. There is also the problem of having algorithms make decisions based on data loaded with bias, like crime statistics and policing data. The way we label variables is problematic as well, such as defining “high-risk patients” based on their consumption of health care resources (in other words, risk becomes a proxy for cost). The overarching pattern is that the biases within our technology benefit white people while harming people of color.

From the video’s description: “Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other ‘isms’ plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?”