Clearview AI, an American facial recognition company, faces a proposed fine of £17 million from the Information Commissioner's Office (ICO) for contravening the UK's data protection laws. The company describes itself as the "world's largest facial network" and lets its customers match facial images against a database of more than 10 billion images pulled from the internet. The ICO, the UK's data protection regulator, has provisionally ordered Clearview AI to stop processing the personal data of people in the UK and to delete it.
According to the ICO, the images in Clearview AI Inc's database "are likely to include the data of a substantial number of people from the UK and may have been gathered without people's knowledge from publicly available information online, including social media platforms." The company had made its service available to UK law enforcement agencies on a free trial basis, but in light of the data protection concerns that trial has been discontinued and the service is no longer used by UK agencies.
Clearview AI's facial recognition service works roughly as follows. A user takes a photograph of a person and uploads it to the tool. The software then searches its database of images scraped from across the web and links the uploaded photograph to matching pictures that appear online. Clearview sells access to this service to law enforcement agencies and other organisations. The problem is that the company scrapes people's images and data from websites and social media platforms without their permission, which crosses a line in the eyes of both the public and the law. A rough sketch of the matching step follows below.
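To make the matching step concrete, here is a minimal, purely illustrative sketch in Python. It is not Clearview's actual code or model; it assumes each scraped photo has already been reduced to a numeric embedding vector by some face-recognition model, and it compares a probe image's embedding against the database using cosine similarity. The URLs, function names, and threshold are hypothetical placeholders.

```python
# Illustrative sketch only: NOT Clearview AI's actual system.
# The general idea behind face matching: each face image is reduced to a
# numeric "embedding" vector by a face-recognition model, and a probe image
# is compared against a database of embeddings by similarity.

import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_matches(probe, database, threshold=0.8):
    """Return (source_url, score) pairs whose similarity exceeds the threshold,
    sorted from best match to worst."""
    scores = [(url, cosine_similarity(probe, emb)) for url, emb in database.items()]
    return sorted((s for s in scores if s[1] >= threshold),
                  key=lambda s: s[1], reverse=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Random vectors stand in for embeddings produced by a real model;
    # the URLs are hypothetical examples of scraped source pages.
    database = {f"https://example.com/photo_{i}": rng.normal(size=128) for i in range(5)}
    # A probe image whose embedding is close to one entry in the database.
    probe = database["https://example.com/photo_2"] + rng.normal(scale=0.05, size=128)
    print(find_matches(probe, database))
```

In a real pipeline the embeddings would come from a trained face-recognition network rather than random vectors, and the database would be indexed (for example with an approximate nearest-neighbour structure) so that billions of images can be searched quickly.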
The ICO set out the ways in which Clearview AI allegedly failed to comply with UK data protection law. The company did not process people's data fairly and transparently; it had no process in place to stop data being retained indefinitely; it lacked a lawful reason for collecting the data in the first place; it failed to meet the higher data protection standards required for biometric data; and, worst of all, it did not tell people how their data was being used or for what purposes.
The ICO plans to reach a final decision on Clearview AI's alleged breaches by mid-2022 and has invited the company to respond to the allegations before then. The UK Information Commissioner, Elizabeth Denham, said: "I have significant concerns that personal data was processed in a way that nobody in the UK will have expected. It is, therefore, only right that the ICO alerts people to the scale of this potential breach and the proposed action we're taking. UK data protection legislation does not stop the effective use of technology to fight crime, but to enjoy public trust and confidence in their products technology providers must ensure people's legal protections are respected and complied with."
Addressing Clearview AI's position in the UK, she added: "Clearview AI Inc's services are no longer being offered in the UK. However, the evidence we've gathered and analyzed suggests Clearview AI Inc were and may be continuing to process significant volumes of UK people's information without their knowledge. We, therefore, want to assure the UK public that we are considering these alleged breaches and taking them very seriously."