Those who praise predictive policing do so in part because they say it is not biased. Essentially, a computer system uses artificial intelligence to predict where a crime is likely to happen. The computer alerts officers, who go to that location. Since it is a computer making the call, proponents say it cannot be biased along racial lines, for instance. The computer doesn't hold some human prejudice against African American citizens or anyone else.
Critics, however, say these systems do just the opposite: they actually amplify bias.
Consider how the system gets the information it uses. That data comes from live officers on the ground, who report where they make arrests. The system tracks those numbers and directs officers to high-arrest areas.
While the computer may not be biased, those officers could be. Imagine an officer who assumes that African Americans use drugs more often and so makes most of his arrests in an African American neighborhood. He is operating from racial bias. The computer then takes in his data and pinpoints that neighborhood as a high-crime area.
The AI then sends officers back to the same area. It is simply feeding off of the officer's bias. Far more crime could be happening in a white neighborhood, but the computer will never know that if officers neglect to go there in the first place.
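The feedback loop critics describe can be illustrated with a toy simulation. Everything here is hypothetical — the neighborhood names, the numbers, and the simplifying assumption that officers make arrests roughly in proportion to how often they patrol (you can only find crime where you look):

```python
def reallocate(patrols, arrests_per_patrol, total_patrols=10.0):
    """Sketch of a system that sends patrols wherever past arrests were made.

    Hypothetical model: arrests scale with patrol presence, and next
    week's patrols are allocated in proportion to this week's arrests.
    """
    arrests = {area: patrols[area] * arrests_per_patrol[area]
               for area in patrols}
    total_arrests = sum(arrests.values())
    return {area: total_patrols * arrests[area] / total_arrests
            for area in arrests}

# Assume offending is equally common in both neighborhoods...
arrests_per_patrol = {"neighborhood_a": 0.5, "neighborhood_b": 0.5}
# ...but biased officers start out patrolling one of them 9-to-1.
patrols = {"neighborhood_a": 9.0, "neighborhood_b": 1.0}

for week in range(20):
    patrols = reallocate(patrols, arrests_per_patrol)

# Twenty weeks later, the 9-to-1 split is unchanged: the system has
# locked in the officers' initial bias as if it were a fact about crime.
print(patrols)  # → {'neighborhood_a': 9.0, 'neighborhood_b': 1.0}
```

In this sketch the two neighborhoods are identical; the only difference is where officers looked first. Because the system never observes crime where it does not patrol, the biased starting deployment becomes a permanent fixed point — which is the "feeding off the officer's bias" dynamic described above.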
Bias in police departments is, unfortunately, very real, and even computers cannot eliminate it. That bias can have a drastic impact on arrest patterns, and those who get arrested need to understand their legal rights and all of the defense options they have.