Use of Live Facial Recognition by UK Police Violates Ethics, Human Rights: Here’s Why

A study has found that the use of live facial recognition (LFR) technology by UK police in public places violates ethical standards and human rights.

The study, conducted by the Minderoo Centre for Technology and Democracy at the University of Cambridge, recommended a ban on the use of LFR in public places, even though the police regard the technology as a valuable tool against terrorism.

Proponents inside UK law enforcement consider LFR a valuable innovation in fighting crime, likening it to the use of fingerprints in criminal investigation and enforcement.

But critics believe otherwise. They argue that the use of LFR in law enforcement could lead to human rights abuses, including violations of people's right to air their grievances and of the freedom of assembly.

Linking Cameras to Databases

The technology operates by linking cameras installed in public places to databases where photographs of people are stored.

Using LFR, the cameras capture images of people in streets, railway stations, parking lots, and parks, and check these images against the databases for a possible match.
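In broad terms, the matching step compares a numerical "embedding" of each captured face against the embeddings of faces stored in a watchlist database. The sketch below is a minimal illustration of that idea, assuming the embeddings have already been produced by some face-encoder model; the names, threshold, and data are hypothetical and are not drawn from any actual police system.

```python
import numpy as np

# Hypothetical example: each face is represented by a fixed-length embedding
# vector produced by some face-encoder model (not shown here).

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(captured: np.ndarray, watchlist: dict, threshold: float = 0.8):
    """Return watchlist entries whose similarity to the captured face exceeds
    the threshold; an empty list means no alert is raised."""
    hits = []
    for name, stored in watchlist.items():
        score = cosine_similarity(captured, stored)
        if score >= threshold:
            hits.append((name, score))
    return sorted(hits, key=lambda x: x[1], reverse=True)

# Toy data: random vectors stand in for real face embeddings.
rng = np.random.default_rng(0)
watchlist = {
    "person_a": rng.normal(size=128),
    "person_b": rng.normal(size=128),
}
# A noisy re-capture of person_a, as a camera might produce.
captured_face = watchlist["person_a"] + rng.normal(scale=0.1, size=128)

print(match_against_watchlist(captured_face, watchlist))
```

In a real deployment the threshold choice matters a great deal: set too low, it produces false alerts against innocent passers-by, which is one of the concerns the critics raise.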

The police think this technology is a big help to law enforcement agencies in identifying, tracking, and arresting criminals.

It will be the next big thing in the campaign against criminals, the police said.


Study Examines Three Cases

The researchers examined three cases where LFR was deployed: one by the Metropolitan Police and two by South Wales Police.

In an interview with The Guardian, police officers in these areas said that peace and security have improved with the use of LFR.

The study, however, disagreed with the police.

Evani Radiya-Dixit, the author of the report, said all three deployments of LFR failed to meet even minimum ethical and legal standards.

The study also found that, in using LFR, the police failed to incorporate established practices for the ethical and safe use of large bodies of data.

The problem goes well beyond concerns about bias, including racial bias, in LFR algorithms, the report said.

Repressive Tools

Among authoritarian governments, the use of LFR technology is part of their repressive toolbox.

In China, a facial recognition system built on a massive network of cameras requires every citizen to register.

Few suspected how pervasive Chinese surveillance tools were until a database leak in 2019.

According to the leaked database, the Chinese government's facial recognition system logs over 6.8 million records each day.

These records come from cameras installed in hotels, tourist spots, parks, and places of worship, according to a report published by CNET. The logs include detailed personal information, with some of the people recorded as young as nine years old.

Beijing has also turned the same technology against Uyghur Muslims, deploying its facial recognition system to commit grave human rights violations against the minority population in the country.

'Powerful And Intrusive Technology'

Pete Fussey of the University of Essex was commissioned to conduct an independent audit of the police's LFR trials.

After the audit, Fussey said LFR is "a powerful and intrusive technology" that could infringe on the rights of every individual.

He said the Court of Appeal had already ruled South Wales Police's use of LFR unlawful.

In the face of that court ruling, he said, it would be hard to argue in favor of continuing to use this technology.

