The first major legal challenge to police use of Automated Facial Recognition (AFR) has begun at Cardiff High Court. The case, brought by former Liberal Democrat councillor Ed Bridges and civil liberties group Liberty, challenges the police’s use of the technology on the grounds that it breaches the right to privacy under Article 8 of the European Convention on Human Rights, as incorporated into UK law by the Human Rights Act 1998.
AFR works by scanning large groups of people in public spaces, creating a unique biometric data profile for each individual, and running that data profile through a database of pre-existing images on police ‘watch lists’ to see if they match. ‘By scanning and capturing the biometric data of all passers-by in their use of facial recognition technology, the police are violating our right to privacy’, wrote Bridges in an opinion piece for The Guardian last week.
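The matching step described above can be sketched in code. This is a simplified illustration only: it assumes faces have already been detected and encoded as fixed-length numeric profiles (embeddings), and the function names, toy data, and threshold value are invented for the example, not drawn from any police system.

```python
import math

def euclidean_distance(a, b):
    """Distance between two biometric profiles (smaller = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_against_watchlist(profile, watchlist, threshold=0.6):
    """Return the watch-list identity closest to `profile` if the distance
    falls under the threshold; otherwise None (no alert)."""
    best_name, best_dist = None, float("inf")
    for name, stored in watchlist.items():
        d = euclidean_distance(profile, stored)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist < threshold else None

# Toy watch list of two stored profiles, and one scanned passer-by.
watchlist = {"suspect_a": [0.1, 0.9, 0.3], "suspect_b": [0.8, 0.2, 0.5]}
passer_by = [0.12, 0.88, 0.31]
print(match_against_watchlist(passer_by, watchlist))  # suspect_a
```

The threshold is the crux in practice: set it too loose and innocent passers-by trigger alerts; too strict and genuine matches are missed.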
‘It is just like taking people’s DNA or fingerprints, without their knowledge or their consent,’ said Megan Goulding, a lawyer at Liberty. Bridges goes on to describe the technology as ‘a discriminatory tool that…will increase the over-policing of minority groups’.
The use of AFR for law enforcement is unregulated: no laws underpin or govern its use. Despite this, police forces throughout England and Wales have been deploying it at their discretion since 2015, and, although similar technologies have been found to exhibit considerable algorithmic bias relating to race and gender, the software used by police in the UK has never been demographically tested. Research conducted in 2018 on similar AFR technology created by IBM, Microsoft, and Face++ revealed that it was more than three times more likely to misidentify a black or ethnic-minority woman than a white man. Last year, in the first independent academic evaluation of police use of AFR, carried out by Cardiff University, South Wales Police commented that they were unable to conduct demographic tests ‘due to limited funds’.
Dan Squires QC, representing Ed Bridges, said: ‘What AFR enables the police to do is to monitor people’s activity in public in a way they have never done before.’
When the technology identifies an individual, the police will stop them and ask them to prove their identity. Ordinarily, the police do not have the power to compel any member of the public to identify themselves without ‘reasonable cause’, meaning the indiscriminate use of AFR has generated a major and lawless elevation of police power, one that will disproportionately affect those impacted by bias in the technology (potentially women and people of colour).
Further, Bridges has described the use of AFR as ‘dangerously inaccurate’. A freedom of information request published last year by civil liberties organisation Big Brother Watch found the use of AFR by South Wales Police to be 91% ineffective. In other words, more than nine out of 10 of those ‘identified’ by the technology were actually misidentified. Figures were no healthier for other police forces throughout the country: the Metropolitan Police’s use was revealed to be over 98% ineffective, and ineffectiveness throughout England and Wales overall was reported to be 90%.
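The ‘ineffectiveness’ figures above are a false-alert rate: the share of all AFR alerts that misidentified someone. The arithmetic can be checked with a short sketch; the alert counts below are invented for illustration, and only the resulting percentage mirrors the reported South Wales figure.

```python
def false_alert_rate(total_alerts, true_matches):
    """Share of AFR alerts that misidentified someone."""
    return (total_alerts - true_matches) / total_alerts

# Hypothetical deployment: 1,000 alerts, of which only 90 were real matches.
rate = false_alert_rate(1000, 90)
print(f"{rate:.0%}")  # 91%
```

Note that this rate depends on how rare watch-list faces are in the scanned crowd: even an accurate algorithm produces mostly false alerts when almost everyone scanned is not on the list.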
The former head of the National Counter Terrorism Security Office, Chris Phillips, has commented: ‘If there are hundreds of people walking the streets who should be in prison because there are outstanding warrants for their arrest, or dangerous criminals bent on harming others in public places, the proper use of AFR has a vital policing role… The police need guidance to ensure this vital anti-crime tool is used lawfully.’
Last week, San Francisco became the first major American city to ban government use of facial recognition technology, in response to similar challenges relating to privacy rights and software reliability.