Police Reportedly Used Clearview AI Over a Million Times for Face Searches

AI has become instrumental in making work easier across many professions. In this case, law enforcement agencies have been using Clearview AI's facial recognition technology to aid in police work, according to the company's CEO.

Facial Recognition (Photo: Dowell/Getty Images)

Police Using Clearview AI

Hoan Ton-That, Clearview AI's CEO, said in an interview that police have run more than a million facial recognition searches through the company's database, which is said to contain 30 billion images scraped from social media sites.

Despite questions about Clearview AI's data-gathering methods, its services are used by approximately 2,400 law enforcement agencies, according to Gizmodo. Although many organizations have opposed the company's practices, it remains operational to this day.

If Clearview AI does indeed scrape photos from social media, then anyone who has posted a photo of themselves on any of those platforms can potentially be identified by law enforcement and even linked to a crime.

Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project, said it was "appalling" that a company could steal billions of photos, made worse by the fact that police are paying for its services instead of investigating it.

Miami Assistant Chief of Police Armando Aguilar shared that his agency uses the company's service around 450 times a year, for crimes ranging from shoplifting to murder. Aguilar added, however, that his department treats the results as an initial tip rather than final evidence.

While Aguilar clarified that his department does not make arrests based on an algorithm's output alone, Ton-That acknowledged that there have already been several wrongful arrests linked to Clearview AI, which he attributed to "poor policing."


The Dangers of Clearview AI

Many tech companies with far more resources are fully capable of building what Clearview AI has, but they have refrained from doing so because of the implications it carries. It is, after all, a means of invading people's privacy.

The AI tool can help law enforcement agencies identify criminals more quickly, but it can also be misused by people with bad intentions. Some cities recognize this risk, such as San Francisco, which has barred its police force from using facial recognition technology.

The New York Times reviewed the app's underlying code and found programming language that would allow it to be paired with augmented reality glasses. This means a user wearing such glasses could potentially identify every person they come across.

Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University, said the possibilities for weaponizing the tool are endless. It could be used to identify protesters and activists, or to learn the identity of a stranger someone found attractive in public.

Ton-That addressed the code that would allow the tool to be integrated with AR glasses, clarifying that the company had built a prototype but never intended to release it.

