WE ARE A MAGAZINE ABOUT LAW AND JUSTICE | AND THE DIFFERENCE BETWEEN THE TWO
October 14 2024

One in three police forces using predictive policing techniques drawing on ‘problematic’ data

One in three police forces are using predictive policing techniques to crunch data without proper consideration of the implications for civil liberties. The human rights group Liberty has sent freedom of information requests to all 43 police forces in England and Wales, which reveal that 14, including the Metropolitan Police, West Yorkshire, Merseyside and the West Midlands, have rolled out such techniques or are about to do so ‘without proper consideration of our rights’ or the ‘discriminatory impact’ of such technologies.

Predictive policing aims to anticipate where and when crime will occur, and even the profile or identity of the potential perpetrator. According to Liberty, the programs are ‘far from… neutral’: they ‘incorporate human biases, exacerbate social inequalities and threaten our privacy and freedom of expression’.

The report focuses on two techniques. The first, predictive mapping, involves identifying crime hotspots and directing police to patrol those areas. Liberty’s report stresses that this sort of policing needs to end because it relies on ‘problematic historical arrest data’ and encourages the over-policing of marginalised communities.

The second type of predictive policing is the individual risk assessment program, which assesses the likelihood of a person committing an offence based on factors such as previous criminality and their postcode. Those flagged may then be made to attend rehabilitation programmes, one of a range of approaches taken by different police authorities. Liberty states that this ‘encourages discriminatory profiling’, with decisions being made without sufficient human oversight and with no adequate means of challenging them.

‘Predictive policing is sold as innovation, but the algorithms are driven by data already imbued with bias, firmly embedding discriminatory approaches in the system while adding a “neutral” technological veneer that affords false legitimacy,’ commented Hannah Couchman, a policy officer for Liberty. ‘Life-changing decisions are being made about us that are impossible to challenge. In a democracy which should value policing by consent, red lines must be drawn on how we want our communities to be policed.’

The report highlights HART, the Harm Assessment Risk Tool used by Durham police, which uses ‘machine learning’ to decide how likely a person is to commit an offence over the next two years. The program gives the person a risk score of low, medium or high based on an analysis of 34 pieces of data, 29 of which relate to the person’s past criminal history. Other data relied upon includes a person’s postcode which, Liberty argues, can act as a ‘proxy’ for race. ‘This is why research on algorithms used in the criminal justice system in the US showed that even where race was not included in the data the algorithm used, the algorithm still learned characteristics in a way that is discriminatory.’
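
To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of the kind of system the report describes: a classifier trained on historical records that outputs a low/medium/high risk band. It is not the actual HART model; the feature names, postcode areas, thresholds and data below are all invented, and a generic random forest stands in for whatever Durham’s tool actually uses. The point it illustrates is how a postcode column can act as a proxy even when race is never given to the model.

# Illustrative sketch only -- NOT the real HART model or its data.
import random
from sklearn.ensemble import RandomForestClassifier

random.seed(0)

POSTCODE_AREAS = ["DH1", "DH5", "DH7", "SR8"]  # hypothetical area codes

def synthetic_record():
    """Generate one fake historical record (all values are invented)."""
    prior_arrests = random.randint(0, 10)
    years_since_last_offence = random.randint(0, 15)
    age = random.randint(18, 70)
    postcode = random.randrange(len(POSTCODE_AREAS))
    # Fake label: past arrest data already reflects where policing was
    # concentrated, so records from some postcode areas are more likely
    # to carry an "offended" label regardless of individual behaviour.
    offended = int(prior_arrests > 4 or (postcode < 2 and random.random() < 0.4))
    return [prior_arrests, years_since_last_offence, age, postcode], offended

train = [synthetic_record() for _ in range(2000)]
X = [features for features, _ in train]
y = [label for _, label in train]

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

def risk_band(features):
    """Map the model's estimated probability of reoffending onto a band."""
    p = model.predict_proba([features])[0][1]
    return "high" if p > 0.7 else "medium" if p > 0.3 else "low"

# Two people with identical histories but different postcodes can land in
# different bands, because the postcode acts as a proxy in the training data.
print(risk_band([2, 3, 30, 0]))  # same history, postcode area DH1
print(risk_band([2, 3, 30, 3]))  # same history, postcode area SR8

Because the classifier only ever sees what the historical data encodes, any bias in where arrests were made in the past is reproduced in the risk bands it assigns in the future, which is the ‘feedback loop’ Liberty’s report warns about.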

The HART program was also supplemented by data from the consumer credit agency Experian, which classified people into what Liberty called ‘spurious groups’ – for example, ‘a crowded kaleidoscope’ which is, apparently, ‘a low income multicultural family working jobs with high turnover and living in cramped houses or overcrowded flats’. That data set even linked names to stereotypes: according to Liberty, ‘people called Stacey are likely to fall under “families with need” who receive a range of benefits’.

The report highlights four key issues raised by predictive policing strategies. First, discrimination, as a result of programs making decisions through ‘complex software that few people understand’, which adds ‘unwarranted legitimacy to biased policing strategies that disproportionately focus on BAME and lower income communities’. Secondly, declining privacy and freedom of expression rights, as the use of ‘big data’ allows large amounts of personal information to be accumulated into profiles which the authorities can monitor. Liberty calls this ‘a dangerous emerging narrative (which) requires us to justify our desire for privacy, rather than requiring the state – including the police – provide a sound legal basis for the interference’. Thirdly, a lack of human oversight, because humans are simply not able to counter the ‘automation bias’ prevalent in these systems. Finally, a lack of transparency: predictive policing operates as a ‘black box’, with no public understanding of how the algorithms reach their decisions.

‘A police officer may be hesitant to overrule an algorithm which indicates that someone is high risk, just in case that person goes on to commit a crime and responsibility for this falls to them – they simply fear getting it wrong,’ the group argues. ‘… [It] is incredibly difficult to design a process of human reasoning that can meaningfully run alongside a deeply complex mathematical process.’

Liberty flags up American research from 2015 (Big Data and Predictive Reasonable Suspicion, University of Pennsylvania) which found that ‘without the requirement of some observable activity’, the odds increase that predictive stops will ‘target innocent people, criminalize by association, and negatively impact individuals based on little more than a hunch supported by non-criminal facts’.

‘While it may seem a laudable aim to prevent crime before it ever occurs, this is best achieved by addressing underlying social issues through education, housing, employment and social care. The solution does not lie in policing by machine.’
Liberty
