Highlights –
- The enforcement action follows a joint investigation carried out by the Information Commissioner’s Office and the Office of the Australian Information Commissioner.
- It was also found that the firm had asked members of the public for additional personal information when they enquired whether they were on its database.
The UK’s data protection watchdog, the Information Commissioner’s Office (ICO), has announced a penalty of £7.5 million (USD 9.4 million) for Clearview AI, a controversial facial recognition company, for breaching data protection laws. It has also issued an enforcement notice directing the firm to stop obtaining and using the publicly available internet data of UK residents and to delete their data from its systems.
According to the ICO’s findings, Clearview AI did not inform people in the UK that it was amassing their images from the web and social media to create a global database that could be used for facial recognition. Further, it failed to provide a lawful reason for collecting people’s information and had no process in place to stop the data from being retained indefinitely. Nor did the company meet the data protection standards required for biometric data under the General Data Protection Regulation.
It was also found that the firm had asked people for additional personal information, including photos, when members of the public enquired whether they were on its database.
The privacy watchdog also concluded that, given the high number of internet and social media users in the UK, Clearview AI’s database is “likely to include a substantial amount of data” from the country’s residents. Further, although the company no longer offers its services to UK organizations, it continues to do so in other countries, still using the personal data of UK residents.
The enforcement action follows a joint investigation carried out by the ICO and the Office of the Australian Information Commissioner (OAIC). The two watchdogs have been investigating Clearview AI since 2020, with the inquiry conducted in line with the Australian Privacy Act and the UK Data Protection Act. They examined how the firm used people’s images, scraped data from the internet, and processed biometric data for facial recognition.
Earlier this month, in a landmark agreement, Clearview AI agreed to cease sales to private companies and individuals in the United States. It also agreed to stop making the database available to the Illinois state government and local police departments for five years. However, the New York-based company continues to serve other law enforcement and federal agencies and government contractors outside of Illinois.
Experts’ Take
“Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20 billion images,” UK Information Commissioner John Edwards said.
“The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”
“People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement.”
“This international cooperation is essential to protect people’s privacy rights in 2022. That means working with regulators in other countries, as we did in this case with our Australian colleagues,” Edwards said.
Hoan Ton-That, Clearview AI’s chief executive, said, “I am deeply disappointed that the UK Information Commissioner has misinterpreted my technology and intentions … I would welcome the opportunity to engage in conversation with leaders and lawmakers so the true value of this technology, which has proven so essential to law enforcement, can continue to make communities safe.”