One in three police forces in England and Wales is using predictive policing techniques to crunch data without proper consideration of the implications for civil liberties. The human rights group Liberty sent freedom of information requests to all 43 police forces in England and Wales, which reveal that 14, including the Metropolitan Police, West Yorkshire, Merseyside and the West Midlands, have rolled out such techniques or were about to do so "without proper consideration of our rights" or the "discriminatory impact" of such technologies.
Predictive policing aims to anticipate where crime will occur, when it will occur, and even the profile or identity of the potential perpetrator. According to Liberty, the programs are "far from… neutral", as they "incorporate human biases, exacerbate social inequalities and threaten our privacy and freedom of expression".
The report focuses on two techniques. The first, predictive mapping, identifies crime "hotspots" and directs police to patrol those areas. Liberty's report stresses that this sort of policing needs to end, as it relies on "problematic historical arrest data and encourages over-policing in marginalised communities".
The second technique is individual risk assessment, which estimates the likelihood of a person committing an offence based on factors such as previous criminality and their postcode. Those flagged are then made to attend rehabilitation programmes, among other approaches taken by the different police authorities. Liberty states that this "encourages discriminatory profiling", with decisions made without sufficient human oversight and with no adequate means of challenge.
"Predictive policing is sold as innovation, but the algorithms are driven by data already imbued with bias, firmly embedding discriminatory approaches in the system while adding a 'neutral' technological veneer that affords false legitimacy," commented Hannah Couchman, a policy officer for Liberty. "Life-changing decisions are being made about us that are impossible to challenge. In a democracy which should value policing by consent, red lines must be drawn on how we want our communities to be policed."
The report highlighted HART, the Harm Assessment Risk Tool used by Durham police, which uses "machine learning" to decide how likely a person is to commit an offence over the next two years. The program gives the person a risk score of low, medium or high based on an analysis of 34 pieces of data, 29 of which refer to the person's past criminal history. Other data relied upon include a person's postcode, which, Liberty argues, can act as a "proxy" for race: "This is why research on algorithms used in the criminal justice system in the US showed that even where race was not included in the data the algorithm used, the algorithm still learned characteristics in a way that is discriminatory."
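The proxy effect Liberty describes can be sketched in a few lines of Python. Everything below is synthetic and hypothetical: the population, probabilities and the "risk equals local arrest rate" model bear no relation to HART's actual 34 inputs. The sketch only illustrates the general mechanism, namely that a feature correlated with a protected group (here, postcode) can reproduce a group-level disparity even when the group itself is excluded from the model's inputs.

```python
# Illustrative sketch (NOT HART): how a proxy feature such as postcode
# can reintroduce bias even when a protected attribute is excluded.
# All data here is synthetic, with probabilities chosen for illustration.
import random

random.seed(0)

# Synthetic population: group membership correlates with postcode, and
# historical "arrest" labels reflect where policing was concentrated,
# not the true offending rate (assumed equal across groups).
people = []
for _ in range(1000):
    group = random.choice(["X", "Y"])
    in_a = random.random() < (0.8 if group == "X" else 0.2)
    postcode = "A" if in_a else "B"
    arrested = random.random() < (0.30 if postcode == "A" else 0.10)
    people.append({"group": group, "postcode": postcode, "arrested": arrested})

# "Model": predicted risk = historical arrest rate for the person's
# postcode. The protected attribute never enters the model.
def arrest_rate(postcode):
    subset = [p for p in people if p["postcode"] == postcode]
    return sum(p["arrested"] for p in subset) / len(subset)

risk = {pc: arrest_rate(pc) for pc in ("A", "B")}

# Average predicted risk per group: the disparity survives even though
# group was excluded, because postcode acts as a proxy for it.
for g in ("X", "Y"):
    members = [p for p in people if p["group"] == g]
    avg = sum(risk[p["postcode"]] for p in members) / len(members)
    print(g, round(avg, 3))
```

Running the sketch prints a noticeably higher average risk score for group X than for group Y, despite group membership never being a model input, which is the pattern the US research cited by Liberty reports.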
The HART program was also supplemented by data from the consumer credit agency Experian, which classified people into what Liberty called "spurious groups": for example, "a crowded kaleidoscope", which is, apparently, "a low income multicultural family working jobs with high turnover and living in cramped houses or overcrowded flats". That data set even linked names to stereotypes: according to Liberty, "people called Stacey are likely to fall under 'families with need' who receive a range of benefits".
The report highlights four key issues raised by predictive policing strategies. First, discrimination: programs make decisions through "complex software that few people understand", which adds "unwarranted legitimacy to biased policing strategies that disproportionately focus on BAME and lower income communities". Second, declining privacy and freedom of expression rights, as the use of "big data" allows large amounts of personal information to be accumulated into profiles which the authorities can monitor. Liberty calls this "a dangerous emerging narrative [which] requires us to justify our desire for privacy, rather than requiring the state – including the police – to provide a sound legal basis for the interference". Third, a lack of meaningful human oversight, because humans are simply not able to counter the "automation bias" prevalent in these systems. Finally, a lack of transparency: predictive policing operates as a "black box", with no public understanding of how the algorithms reach their decisions.
"A police officer may be hesitant to overrule an algorithm which indicates that someone is high risk, just in case that person goes on to commit a crime and responsibility for this falls to them; they simply fear getting it wrong," the group argues. "… [It] is incredibly difficult to design a process of human reasoning that can meaningfully run alongside a deeply complex mathematical process."
Liberty flags up American research from 2015 ("Big Data and Predictive Reasonable Suspicion", University of Pennsylvania) which warns that "without the requirement of some observable activity", the odds increase that predictive stops will "target innocent people, criminalize by association, and negatively impact individuals based on little more than a hunch supported by non-criminal facts".
"While it may seem a laudable aim to prevent crime before it ever occurs, this is best achieved by addressing underlying social issues through education, housing, employment and social care. The solution does not lie in policing by machine," Liberty concludes.