AI in Policing: can it deliver?
2018 was London’s worst year for homicides in almost a decade, with 135 murders, and the crisis shows no sign of slowing in 2019. There is debate across the political spectrum, with suggestions ranging from changes in the law to a greater emphasis on education, and Prime Minister Theresa May declaring that the UK “cannot simply arrest itself” out of the crisis.
There has been rapid growth in the use of technology across the public sector, including in many police forces. At the end of last year, the Home Office announced funding for innovative research projects that could help police spot suspects and reduce knife crime. There is clearly a need and a desire for technological advancement in policing, but with many existing projects mired in controversy, a fundamental question needs to be debated: how can technology make us safer without destroying our freedom?
This article explores some of the new and existing projects promising to drive innovation within the police, with a focus on the underlying ethical issues surrounding them.
A controversial database known as the “Gangs Matrix” was set up by the Metropolitan Police after the 2011 London riots as a way of identifying those at risk of committing gang-related violence. Based on a number of variables, including previous offences and social media activity, the database uses an algorithm to calculate an individual’s risk score.
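The actual variables and weightings behind the Gangs Matrix have not been made public, but a score of this kind is often a weighted combination of indicators. The sketch below is purely illustrative: the indicator names and weights are assumptions, not the Metropolitan Police’s real model.

```python
# Hypothetical sketch of a weighted risk score. The indicators and
# weights below are invented for illustration; the real Gangs Matrix
# variables and weightings are not public.

WEIGHTS = {
    "previous_offences": 0.5,   # assumed weight
    "social_media_flags": 0.3,  # assumed weight
    "known_associates": 0.2,    # assumed weight
}

def risk_score(person: dict) -> float:
    """Combine indicators (each normalised to 0-1) into a 0-100 score."""
    score = sum(WEIGHTS[k] * person.get(k, 0.0) for k in WEIGHTS)
    return round(100 * score, 1)

print(risk_score({
    "previous_offences": 0.8,
    "social_media_flags": 0.4,
    "known_associates": 0.1,
}))  # prints 54.0
```

Even in a toy model like this, the ethical problem is visible: whoever chooses the indicators and weights encodes their own assumptions about who looks risky.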
In theory, this should help the police use their resources efficiently to catch criminals. However, in a damning report, Amnesty International described it as “a racially biased database criminalising a generation of young black men”. The project, considered a complete failure by many, once again raises the ethical dilemma around applying algorithms to people, and how to avoid damaging biases in these models.
Nonetheless, police funding has been cut significantly in recent years, so forces need systems that help them operate more efficiently. To have any chance of succeeding, those systems must facilitate fairer, bias-free policing. It’s clear that this wasn’t the case with the Gangs Matrix, and police forces around the world are now exploring how to implement artificial intelligence free of discrimination.
Predicting at risk areas
Another route for technological advancement in policing is to predict not who is likely to commit crimes, but where crimes are likely to be committed. Could this lead to efficiency gains without the problems of bias and discrimination?
A Metropolitan Police detective believes he has found a way of predicting where deadly knife attacks are likely to take place. By analysing records of knife crime in London, he found that more than two thirds of killings in 2017-18 happened in neighbourhoods where someone had fallen victim to a knife attack the previous year. The study is one of the first to show such a strong correlation.
University of Cambridge’s Professor Lawrence Sherman, who was involved in the study, commented that by predicting neighbourhoods more likely to experience knife crime, police forces can become more effective at deploying officers and can localise the use of stop and search.
However, the professor is also concerned by the quality of data collection within the police. He commented: "Police IT is in urgent need of refinement - instead of just keeping case records for legal uses, the systems should be designed to detect crime patterns for prioritising targets".
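The core idea behind this kind of place-based prediction is simple: flag neighbourhoods that recorded an attack last year as priorities for this year. This is a minimal sketch of that idea, not the detective’s actual method; the neighbourhood names and the one-attack threshold are assumptions for illustration.

```python
from collections import Counter

# Illustrative sketch of place-based prioritisation (not the study's
# actual method): flag a neighbourhood for priority patrols if it
# recorded at least `threshold` knife attacks in the previous year.

last_year_attacks = [  # hypothetical neighbourhood names
    "Northfield", "Northfield", "Eastvale", "Riverside",
]

def priority_areas(attacks, threshold=1):
    """Return neighbourhoods whose attack count meets the threshold."""
    counts = Counter(attacks)
    return sorted(area for area, n in counts.items() if n >= threshold)

print(priority_areas(last_year_attacks))
# prints ['Eastvale', 'Northfield', 'Riverside']
```

As Professor Sherman’s comment suggests, a rule this simple is only as useful as the underlying records: if case data is kept purely for legal purposes rather than structured for pattern detection, even obvious geographic signals are hard to extract.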
Making AI work for the police
The prospect of predicting crime before it happens has police forces excited about artificial intelligence. West Midlands Police (WMP) is currently leading a programme that uses machine learning to analyse numerous local and national police databases, containing data such as crime reports and stop and search records.
Though we are still a long way from being able to accurately predict exactly when and where a crime will happen, WMP has been applauded for trying to build a predictive policing model that stands up ethically. Of course, concerns remain. So, how can AI help forces become more efficient while also addressing ethical and discrimination concerns?
Start with the basics. It’s evident that local predictive policing programmes are causing controversy. At a national, higher level, there are opportunities for AI to help the police become more efficient without contaminating systems with dirty, biased data.
Clean data is key. Artificial intelligence is only as intelligent as the data humans feed into it. Machines aren’t biased - but humans most definitely are. From the projects explored in this article, it’s clear that this is the single most important point for moving forward with predictive policing.
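One common way to check whether historical data will bake bias into a model is to compare how often different demographic groups were flagged in it. The sketch below shows one such check, a simple flag-rate ratio; the records, group labels, and what counts as an acceptable ratio are all illustrative assumptions.

```python
# Minimal sketch of a fairness check on historical data: compare the
# rate at which records from two groups were flagged. The data and
# group labels here are invented for illustration.

def flag_rate(records, group):
    """Fraction of a group's records that were flagged."""
    rows = [r for r in records if r["group"] == group]
    return sum(r["flagged"] for r in rows) / len(rows)

def disparate_impact(records, group_a, group_b):
    """Ratio of flag rates; values far below 1.0 suggest bias against group_a."""
    return flag_rate(records, group_a) / flag_rate(records, group_b)

records = [
    {"group": "A", "flagged": 1}, {"group": "A", "flagged": 1},
    {"group": "A", "flagged": 0}, {"group": "A", "flagged": 0},
    {"group": "B", "flagged": 1}, {"group": "B", "flagged": 0},
    {"group": "B", "flagged": 0}, {"group": "B", "flagged": 0},
]

print(round(disparate_impact(records, "B", "A"), 2))  # prints 0.5
```

A check like this does not fix biased data, but running it before training at least surfaces the disparities that projects like the Gangs Matrix appear to have inherited unexamined.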
Transparency is vital. Currently, it’s unlikely that any evidence gathered by AI would be admissible in court, purely because the systems are being tested secretively. The use of AI needs to be transparent with information widely available for the police, courts, lawyers and the public.
Developments in technology will continue to play a vital role for the police. However, there are clearly a number of ethical issues that need to be considered. Machine learning should make policing fairer and more accountable, but this just isn’t happening yet. If AI is going to make a real difference, the data it relies on needs to be clean and bias-free.
There’s a long way to go, but it’s an interesting thought: what if the crime-fighting tool of choice was… data?