UK police facial recognition: What you need to know

UK police have been using live facial recognition (LFR) technology for the best part of a decade, with the Met being the first force to deploy it at Notting Hill Carnival in 2016.

Since then, the use of the biometric surveillance and identification tool by the Met has ramped up considerably. While the initial deployments were sparse, happening only every few months, they are now run-of-the-mill, with facial recognition-linked cameras regularly deployed to events and busy areas of London.

Similarly, South Wales Police (SWP) – the only other force in England and Wales to have officially deployed the “live” version of facial recognition – used the technology much more extensively than the Met during its initial roll-outs through 2017, and is now also deploying it with much greater frequency.

From the police’s perspective, the main operational benefits of facial recognition are the ability to find people they would otherwise be unable to locate (whether for safeguarding or for apprehending offenders), and its value as a preventative measure to deter criminal conduct.

Almost immediately, however, the technology proved controversial. Out of the gate, police facial recognition was derided for having no firm legal basis, poor transparency and questionable accuracy (especially for women and those with darker skin tones), all while being rolled out with zero public or Parliamentary debate.

The Met’s choice to first deploy the technology at Carnival – the biggest Afro-Caribbean cultural event in Europe and the second-largest street carnival in the world outside of Brazil – also attracted criticisms of institutional racism.

In the case of SWP, its use of live facial recognition against activists protesting an arms fair in Cardiff eventually led to a legal challenge.

In August 2020, the Court of Appeal concluded that SWP’s use of the tech up until that point had been unlawful, because the force had failed to conduct an appropriate Data Protection Impact Assessment (DPIA) and comply with its Public Sector Equality Duty (PSED) to consider how its policies and practices could be discriminatory. 

Although the court also concluded that SWP had violated the privacy rights of the claimant, the judgement ultimately found that the problem lay in how police had approached and deployed the technology, rather than in the technology itself.

In this essential guide, learn about how the police have been approaching the technology, the ongoing concerns around its proportionality, necessity and efficacy, and the direction of travel set for 2024 and beyond.

What is facial recognition?

While LFR has received the most public attention and scrutiny, other facial recognition techniques have also started gaining popularity among UK law enforcement.

With LFR, the technology essentially acts as a biometric police checkpoint, with a facial recognition-linked camera scanning public spaces and crowds to identify people in real time by matching their faces against a database of images compiled by police.

Known as “watchlists”, these databases consist primarily of custody photos and can run to thousands of images for any given LFR deployment. They are deleted after each operation, along with any facial images captured during it.
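
Conceptually, the matching step behind LFR can be pictured as comparing numerical “embeddings” of faces – vectors produced by a face-recognition model – against the embeddings of the watchlist images. The sketch below is a purely illustrative toy in Python, with invented names, random stand-in embeddings and an arbitrary threshold; it is not based on any system actually used by UK police.

```python
# Toy illustration of matching one detected face against a watchlist of
# face embeddings. All data here is random and purely for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings (1.0 means identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)

# Hypothetical watchlist: in practice each embedding would come from a
# custody image run through a face-recognition model.
watchlist = {f"entry_{i}": rng.normal(size=128) for i in range(3)}

def check_against_watchlist(probe: np.ndarray, threshold: float = 0.6):
    """Return the best watchlist match above the threshold, or None (no alert)."""
    best_id, best_score = None, threshold
    for entry_id, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_id, best_score = entry_id, score
    return best_id

# Embedding of a face captured by the camera (a random stand-in here).
probe_embedding = rng.normal(size=128)
print(check_against_watchlist(probe_embedding))  # most likely prints None
```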

The second technique is retrospective facial recognition (RFR). While it works in a similar fashion to LFR by scanning faces and matching them against a watchlist, RFR can be applied to any already-captured images retroactively.

Unlike LFR, which is used overtly with specially equipped cameras atop a visibly marked police van, RFR use is much more covert, and can be applied to footage or images behind closed doors without any public knowledge the surveillance has taken place.
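
To illustrate the difference in kind, the toy sketch below applies the same sort of matching retrospectively to a stored archive of images rather than a live camera feed. Everything in it – the file names, the stand-in embedding function and the threshold – is invented for illustration and does not reflect any real police system.

```python
# Toy illustration of retrospective (after-the-fact) matching over an
# archive of stored images. All values are invented stand-ins.
from pathlib import Path
import numpy as np

rng = np.random.default_rng(1)

def embed_face(image_path: Path) -> np.ndarray:
    """Stand-in for a real face-recognition model; returns a random vector."""
    return rng.normal(size=128)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embedding of the person being searched for.
target_embedding = rng.normal(size=128)

# The key difference from live facial recognition: the inputs are images
# captured earlier (CCTV exports, doorbell clips, social media stills),
# searchable at any later time and with no visible deployment.
archive = [Path(f"frame_{i}.jpg") for i in range(5)]
matches = [
    p for p in archive
    if cosine_similarity(embed_face(p), target_embedding) > 0.6
]
print(matches)  # most likely an empty list with these random stand-ins
```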

Critics are particularly concerned by the increasing use of this technology, because the sheer number of image- and video-capturing devices in the modern world – from phones and social media to smart doorbell cameras and CCTV – is creating an abundance of material that can be fed into the software.

There is also concern about what its operation at scale means for human rights and privacy, as it smooths out the various points of friction that have traditionally been associated with conducting mass surveillance.

Operator-initiated facial recognition (OIFR), the newest iteration of the technology being rolled out to UK police, works via an app on officers’ phones that allows them to automatically compare photos they have taken in the field with a predetermined watchlist.

While national plans to equip officers with OIFR tools were only announced by UK police chiefs in November 2023, South Wales, Gwent and Cheshire police are already conducting joint trials of the tech.  

Why is facial recognition so controversial?

A major question hanging over the police’s use of facial recognition is whether it is actually necessary and proportionate in a democratic society, especially given the lack of public debate about its roll-out.

Before they can deploy any facial recognition technology, UK police forces must ensure their deployments are “authorised by law”, that the consequent interference with rights (such as the right to privacy) is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate. This must be assessed for each individual deployment of the tech.

For example, the Met’s legal mandate document – which sets out the complex patchwork of legislation that covers use of the technology – says the “authorising officers need to decide the use of LFR is necessary and not just desirable to enable the MPS to achieve its legitimate aim”.

Responding to questions about how the force decided each individual deployment was both necessary and proportionate, the Met has given the same answer to Computer Weekly on multiple occasions.

“The deployment was authorised on the basis of an intelligence case and operational necessity to deploy, in line with the Met’s LFR documents,” it said, adding in each case that “the proportionality of this deployment was assessed giving due regard to the intelligence case and operational necessity to deploy, whilst weighing up the impact on those added to the watchlist and those who could be expected to pass the LFR system”.

However, critics have questioned whether scanning tens of thousands of faces every time LFR is used is both a necessary and proportionate measure, particularly when other, less intrusive methods are already available to police.

While there are a number of legally recognised purposes (such as national security, prevention of disorder or public safety) that state authorities can use to intrude on people’s rights, proportionality and necessity tests are already well established in case law, and exist to ensure these authorities do not unduly interfere.

“In the case of police, they’re going to say ‘it’s prevention of disorder or crime, or public safety’, so they get past first base, but then one of the questions is, ‘is this necessary in a democratic society?’” said Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School.

“There’s a very rich case law about what that means, but the core test is you can’t use a hammer to crack a nut. Even though a machete might be perfectly good for achieving your task, if a pen knife will do, then you can only use the pen knife, and the use of a machete is unlawful because it’s disproportionate … the basic way of explaining it is that it has to go no further than necessary to achieve the specified goal.”

In the case of RFR, while it has its own separate legal mandate document, there are similarities in the need to establish the purpose and grounds of every search made with the software, as well as the proportionality and necessity of doing so in each case.

There is currently no legal mandate published for OIFR tools, but police chiefs have said this version of the tech won’t be rolled out to forces until sometime in 2024.

Is facial recognition biased or discriminatory?

Closely linked to necessity and proportionality is the question of who the cameras are ultimately aimed at, and why. This in turn raises questions about bias and discrimination, which, from the police and government perspective, can be solved via improved algorithmic accuracy.

When UK police first began deploying LFR, one of the major concerns was its inability to accurately identify women and people with darker skin tones, which led to a number of wrongful identifications over its first few years of use.

However, as the accuracy of the algorithms in use by UK police has improved, the concerns have shifted away from questions of algorithmic bias towards deeper questions of structural bias in policing, and how that bias is reflected in its technology practices.

Civil society groups maintain, for example, that the technology is “discriminatory and oppressive” given repeated findings of institutional racism and sexism in the police, and that it will only further entrench pre-existing patterns of discrimination.

Others have argued the point further, saying that accuracy is a red herring. Yeung, for example, has argued that even if LFR technology gets to the point where it is able to identify faces with 100% accuracy 100% of the time, “it would still be a seriously dangerous tool in the hands of the state”, because “it’s almost inevitable” that it would continue to entrench existing power discrepancies and criminal justice outcomes within society.

How do facial recognition watchlists work?

Watchlists are essentially images of people’s faces that facial recognition software uses to determine whether someone passing the camera is a match. While images can come from a range of sources, most are drawn from custody images stored in the Police National Database (PND).

Given the well-documented disproportionality in policing outcomes across different social groups in the UK, the concern is that – in using historic arrest data and custody images to direct where facial recognition should be deployed and who it’s looking for respectively – people from certain demographics or backgrounds then end up populating the watchlists.

“If you think about the disproportionality in stop and search, the numbers of black and brown people, young people, that are being stopped, searched and arrested, then that starts to be really worrying because you start to get disproportionality built into your watchlists,” London Assembly member and chair of the police committee, Caroline Russell, previously told Computer Weekly.

Further, in their appearances before a Lords committee in December 2023, senior officers from the Met and SWP confirmed that both forces use generic “crime categories” to determine targets for their LFR deployments.

This means watchlists are selected based on the crime type categories linked to images of people’s faces (which are mostly custody images), rather than based on intelligence about specific individuals that are deemed a threat.
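
As a purely hypothetical illustration of that distinction – with invented field names and data, not drawn from any real police system – the difference between the two selection approaches might look something like this:

```python
# Hypothetical sketch contrasting category-based and intelligence-led
# watchlist selection. The records and field names are invented.
from dataclasses import dataclass

@dataclass
class CustodyRecord:
    person_id: str
    crime_category: str          # e.g. "violent crime", "theft"
    named_in_intelligence: bool  # individually assessed as a current threat?

records = [
    CustodyRecord("A", "violent crime", False),
    CustodyRecord("B", "theft", True),
    CustodyRecord("C", "violent crime", True),
]

# Category-based selection: anyone whose custody image is tagged with a
# target crime category is swept onto the watchlist.
category_watchlist = [r.person_id for r in records if r.crime_category == "violent crime"]

# Intelligence-led selection: only individuals specifically identified as a threat.
intelligence_watchlist = [r.person_id for r in records if r.named_in_intelligence]

print(category_watchlist)      # ['A', 'C']
print(intelligence_watchlist)  # ['B', 'C']
```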

Another issue with the watchlists is that millions of the custody images held in the PND are retained unlawfully, meaning people never convicted of a crime could potentially be included.

In 2012, a High Court ruling found that the police’s retention of custody images was unlawful, because unconvicted people’s information was being kept in the same way as that of people who were ultimately convicted. The ruling also deemed the minimum six-year retention period to be disproportionate.

While the Met’s LFR Data Protection Impact Assessment (DPIA) says that “all images submitted for inclusion on a watchlist must be lawfully held by the MPS”, millions of custody images are still being unlawfully retained.

Writing to other chief constables to outline some of the issues around custody image retention in February 2022, the NPCC lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy and wider support and challenge in our use of these images for technologies such as facial recognition”.

In November 2023, the NPCC confirmed to Computer Weekly that it has launched a programme – not yet publicised – that will seek to establish a management regime for custody images, alongside a review of all data currently held by police forces in the UK. This will be implemented over a…
