Predictive policing algorithms: what they are used for and why they come down hardest on the poorest



Police departments have been experimenting with predictive systems based on data analysis and artificial intelligence for two decades. These tools are widely established in the United States and China, but they are also present in countries such as the United Kingdom, Germany and Switzerland. Their goal is to identify crime hot spots so that police patrols can be deployed more efficiently. One of the perverse effects of these systems is that they tend to over-criminalize the least affluent neighborhoods: because the algorithms are usually fed arrest data, they call for more surveillance in the areas where those arrests take place, which in turn produces even more arrests.
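To see why this loop feeds on itself, consider a minimal simulation, a hypothetical sketch rather than any vendor's actual model: two districts with identical underlying crime, where patrols are concentrated in whichever district has more recorded arrests, and only crime that a patrol witnesses becomes a recorded arrest.

    import random

    random.seed(42)

    TRUE_CRIME_RATE = {"north": 0.10, "south": 0.10}  # identical underlying crime
    arrest_log = {"north": 12, "south": 10}           # a small historical imbalance

    for year in range(10):
        # "Predictive" step: flag the district with more recorded arrests...
        hot_spot = max(arrest_log, key=arrest_log.get)
        # ...and concentrate patrols there.
        patrols = {d: 80 if d == hot_spot else 20 for d in TRUE_CRIME_RATE}
        for district, n_patrols in patrols.items():
            # Only crime that a patrol is present to observe gets recorded.
            seen = sum(random.random() < TRUE_CRIME_RATE[district] for _ in range(n_patrols))
            arrest_log[district] += seen
        print(f"year {year}: hot spot = {hot_spot}, arrests = {arrest_log}")

Even though both districts have exactly the same true crime rate, the district with the small initial surplus of arrests is flagged as the hot spot year after year, accumulating patrols and arrests while its neighbor goes largely unwatched.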

Ending this vicious cycle is complicated. Some developers have tried to do so by also feeding the system with complaint data. This is the case of PredPol, one of the most popular tools in the United States. If victims’ reports are taken into account, they argue, a clearer picture of crime emerges, and the prejudices that may lead the police to make more arrests in certain districts (for example, in predominantly Black neighborhoods) are eliminated.

But those efforts are futile. Recent research concludes that algorithms working with that information make the same mistakes. “Our study shows that even systems based exclusively on complaints filed by victims can produce geographic biases that lead to significantly misplaced police patrols,” Nil-Jana Akpinar, a researcher at Carnegie Mellon University and co-author of the study, tells EL PAÍS.

The dream of predictive policing

The systems Akpinar analyzes have been with us for years. As early as 1998, 36% of US police departments claimed to have the data and technical capacity to generate digital crime maps, according to federal sources. Just a few years later, 70% said they used those maps to identify hot spots. “The most modern versions of these early preventive policing tools date back to 2008, when the Los Angeles Police Department (LAPD) began its own trials, followed by the New York Police Department (NYPD), using tools developed by Azavea, KeyStats and PredPol,” the researcher explains.

Various studies have documented the problems posed by applying predictive algorithms to police work. One of the first systems of its kind to come to light was launched by the city of Chicago in 2013: an algorithm identified potential criminals by analyzing arrest data and the network of relationships of both shooters and victims. The objective was to target preventive social service programs at the people the algorithm flagged as likely to commit crimes. It was a failure. Not only in terms of efficiency (it did not help bring crime down), but the Black population was also overrepresented on the lists, according to an independent study published a few years later.


In 2010, an investigation by the Department of Justice determined that the New Orleans Police Department (NOPD) needed to be rebuilt almost from scratch after serious irregularities were detected: it found evidence of several violations of federal law, including excessive use of force, unlawful detentions and discrimination based on race and sexual orientation. Just two years later, the NOPD quietly began working with Palantir, a secretive Silicon Valley company specializing in data analysis with ties to the CIA and the Pentagon. Its founder, Peter Thiel, who had started PayPal a few years earlier with Elon Musk, would later be one of the few tech moguls to openly support President Donald Trump during his term.

Palantir developed software for the NOPD designed to hunt down the leading members of the city’s most important drug gangs, as The Verge revealed. It is still not clear whether the tool broke the law (it drew up lists of “suspects” to be watched preemptively), but the data show that gun homicides declined in New Orleans.

To keep its methods and algorithms away from public scrutiny, the company signed the contract not with the police department but with a foundation created for that purpose. Palantir followed the same modus operandi with the New York Police Department (NYPD), as BuzzFeed revealed in 2017, and presumably with its counterparts in Los Angeles (LAPD).

This secrecy is no accident. Predictive algorithms used by the police have already been shown to reproduce biases such as racism and to harm the poorest. One of the best-known studies is that of Rashida Richardson, Jason M. Schultz and Kate Crawford, of the AI Now Institute, who stress that this type of algorithm always produces biased results. Their paper, published in 2019, shows that “illegal police practices,” referring to both corruption and biased judgments, “can significantly distort the data that is collected.” If the way the security forces collect and organize data is not reformed, they argue, “the risk that predictive policing systems are biased and that they affect justice and society increases.”

Trying to correct the biases

Hence the main companies in the sector try to correct these defects: they want to keep selling their tools without society turning against them. Akpinar and her colleagues Alexandra Chouldechova, also at Carnegie Mellon, and Maria De-Arteaga, at the University of Texas at Austin, concluded that it is not possible to eliminate the biases of these systems after developing a predictive algorithm of their own. They based it on the model used by PredPol, the most popular tool in the United States. “Although there is no official list of how many police departments have contracts with PredPol, dozens of American cities are known to use or have used the company’s tools at some point, including Los Angeles, San Francisco, Oakland and Richmond,” says Akpinar. There is also evidence, she adds, that the Kent police in England have worked with this tool.

The researchers trained their algorithm with crime statistics from Bogotá, one of the few cities in the world that publishes complaint data by district, and one that is currently working on implementing one of these predictive systems. When they cross-checked their model’s results against the real crime data, they found large errors. Areas with the highest crime rates but few complaints were identified as hot spots less often than areas with medium or low crime rates but many complaints. Some districts needed to have twice as many crimes as others for the system to flag them as hot spots.

“This is because the crime data do not accurately reflect the actual distribution of crime, as different neighborhoods and communities have different propensities to file a complaint with the police,” Akpinar stresses. In more detail, it is statistically common for a white person to report a Black person, or for a Black person to report another Black person, but far less common for a Black person to report a white person.
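A toy calculation (with invented figures, not the Bogotá numbers from the study) makes the distortion concrete: if hot spots are ranked by recorded complaints, a district where few victims come forward can look safer than one with less crime but a higher propensity to report.

    # Hypothetical figures for illustration only; not taken from the study.
    districts = {
        # district: (true crimes per year, share of victims who file a complaint)
        "A (high crime, low reporting)": (1000, 0.20),
        "B (less crime, high reporting)": (600, 0.50),
    }

    for name, (true_crimes, reporting_rate) in districts.items():
        complaints = true_crimes * reporting_rate
        print(f"{name}: {true_crimes} true crimes -> {complaints:.0f} recorded complaints")

    # A complaint-fed model ranks district B (300 complaints) above district A
    # (200 complaints) and sends the patrols there, even though A actually
    # suffers far more crime.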

What, then, is the solution? Is there any way to keep police predictive algorithms from further pigeonholing the most disadvantaged? “The only way is not to use them,” the researcher concludes.



