UK Privacy Watchdog Imposes 7.5M-Pound Fine on Clearview AI
ICO Orders Firm to Erase Illegally Obtained Citizen Data
The Information Commissioner's Office in the United Kingdom has imposed a penalty of 7.5 million pounds - or $9.4 million - against Clearview AI for using unlawfully obtained U.K. citizen facial images to power the company's AI database that reportedly helps law enforcement agencies with facial recognition.
The privacy watchdog has directed Clearview AI to delete images of all U.K. citizens and to stop scraping data from the open internet.
We also issued an enforcement notice, ordering Clearview AI Inc to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems.— ICO - Information Commissioner's Office (@ICOnews) May 23, 2022
The enforcement action follows the results of a joint investigation by the ICO and the Office of the Australian Information Commissioner, which was opened in July 2020. The joint inquiry examined Clearview AI's personal information-handling practices, focusing on the company's use of biometrics and scraped data for facial recognition purposes without individuals' knowledge.
In November 2021, after the conclusion of the joint investigation, both the ICO and the OAIC found that Clearview AI had violated the data privacy laws of their respective countries. But because each data protection authority operates under its own country's legislation, any outcomes are considered separately. Therefore, the fine imposed by the ICO is restricted to Clearview AI's operations in the U.K., according to the ICO's ruling.
Following that ruling in November, the ICO imposed a provisional fine of 17 million pounds - or $21.2 million - on Clearview AI. It said that Clearview AI had an opportunity to respond to the findings and that the ICO would carefully consider any response before making a final decision. "As a result, the proposed fine and preliminary enforcement notice may be subject to change or no further formal action," the ICO's statement said.
The ICO had expected to make its final decision by mid-2022, and it announced its final ruling on Monday.
In the final ruling, U.K. Information Commissioner John Edwards says, "Clearview AI Inc. has collected multiple images of people all over the world, including in the U.K., from a variety of websites and social media platforms, creating a database with more than 20 billion images. The company not only enables identification of those people, but effectively monitors their behavior and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the U.K. by both fining the company and issuing an enforcement notice.
"People expect that their personal information will be respected, regardless of where in the world their data is being used. That is why global companies need international enforcement. Working with colleagues around the world helped us take this action and protect people from such intrusive activity. This international cooperation is essential to protect people’s privacy rights in 2022."
How Clearview AI Works
Clearview AI describes itself as the "world's largest facial network." It has scraped and collected more than 20 billion images of people's faces and data from public sources, such as the open internet and social media platforms from around the globe, to create an online database, the ICO's ruling says.
The ICO says that Clearview AI provides a service in the form of a web-based intelligence platform that allows its customers, which include law enforcement agencies of several countries, to upload an image of a person to the company's app. This image is then referenced and matched against all the images in the database. Once the processing is complete, the app provides a list of images with characteristics similar to those in the photo and a link to the websites where it found those images, the ICO says.
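The matching step the ICO describes can be pictured as a nearest-neighbor search: each face image is reduced to a fixed-length numeric vector (an "embedding"), and a probe image is ranked against every stored vector by similarity. The toy sketch below illustrates only that general idea - the vectors, names and URLs are invented, and real systems derive embeddings from images with deep neural networks rather than hand-written numbers; nothing here reflects Clearview AI's actual implementation.

```python
import math

# Illustrative only: a toy nearest-neighbor search over hypothetical face
# "embeddings" (fixed-length numeric vectors). All data below is invented.

def cosine_similarity(a, b):
    """Similarity of two vectors: 1.0 for identical direction, lower otherwise."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Stand-in for a scraped database: embedding vector -> source URL of the image.
database = [
    ([0.9, 0.1, 0.3], "https://example.com/profile/alice"),
    ([0.2, 0.8, 0.5], "https://example.com/profile/bob"),
    ([0.88, 0.15, 0.28], "https://example.com/forum/alice-photo"),
]

def search(probe_embedding, top_k=2):
    """Rank all stored images by similarity to the uploaded probe image."""
    scored = [(cosine_similarity(probe_embedding, emb), url)
              for emb, url in database]
    scored.sort(reverse=True)  # most similar first
    return scored[:top_k]

# A probe close to the two "alice" vectors surfaces both of their source URLs.
print(search([0.9, 0.12, 0.29]))
```

At scale, a linear scan like this would be replaced by an approximate nearest-neighbor index, but the principle - return the closest stored images along with links to where they were scraped from - is the same as the service behavior the ICO describes.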
Counts of Breach
Although the U.K. has had its own governing laws in place since its departure from the European Union, the Data Protection Act 2018 remains in force and is known as the U.K.'s version of the EU's General Data Protection Regulation. Citing this law as the baseline, the ICO found that Clearview AI Inc. breached U.K. data protection laws by failing to do the following:
- Use the information of people in the U.K. in a way that is fair and transparent;
- Have a lawful reason for collecting people's information;
- Have a process in place to stop the data being retained indefinitely;
- Meet the higher data protection standards required for biometric data, which is classified as "special category data" under the GDPR and the Data Protection Act;
- Inform people in the U.K. about what was happening to their data;
- Refrain from asking people who inquired whether they were in its database for additional personal information, including photos, a practice that may have discouraged individuals from objecting to their data being processed.
The ICO concluded that given the high number of U.K. citizens who use the internet and social media, Clearview AI's database is "likely to include a substantial amount of data" from U.K. residents. While the company no longer offers services to U.K. organizations, it continues to operate in other countries, and those services may include using the personal data of U.K. residents, which is why the ICO has directed the company to delete all U.K. residents' data.
The ICO's action is just one of many GDPR cases in which the controversial AI company has been penalized, says Rie Aleksandra, a privacy officer at Fathom Analytics and a strategic compliance partner who specializes in GDPR, e-privacy and data protection.
Aleksandra says that Clearview AI was recently fined 20 million euros - or $21.4 million - by the Italian Data Protection Authority. CNIL, the French data protection authority, found the company in violation of two GDPR provisions but did not impose a fine. It only required Clearview AI "to stop unlawfully collecting and processing the personal data of data subjects on the French territory."
There have also been cases in both Sweden and Finland in which police departments have been fined and reprimanded for illegal and unlawful usage of Clearview's AI tool.
The Swedish Authority for Privacy Protection found that the Swedish Police Authority had processed personal data in violation of the Swedish Criminal Data Act when using Clearview AI to identify individuals, and it imposed an administrative fine of SEK 2,500,000 - approximately $255,000 - on the Police Authority.
The Office of the Data Protection Ombudsman of Finland reprimanded Finland's National Police Board for illegally using Clearview AI to process special categories of personal data during a facial recognition technology trial.
It says in a statement, "The National Bureau of Investigation unit specializing in the prevention of child sexual abuse had experimented with facial recognition technology in identifying potential victims. The decision to try the software had been made independently by the police unit, and the National Police Board was not aware of the trial."
No fines were issued, but the ombudsman directed the police board to notify all affected individuals and to erase all data that had been saved on Clearview AI's servers during this process.
In response to these fines and allegations, Hoan Ton-That, CEO of Clearview AI, told TechCrunch, "Clearview AI does not have a place of business in the EU. It does not have any customers in the EU, and does not undertake any activities that would otherwise mean it is subject to the GDPR. We only collect public data from the open internet and comply with all standards of privacy and law. My intentions and those of my company have always been to help communities and their people to live better, safer lives."
Settlement with ACLU Illinois
Earlier this month, Clearview AI reached a settlement with the Illinois ACLU that aligns the company with the state's Illinois Biometric Information Privacy Act. The ACLU statement says, "The central provision of the settlement restricts Clearview from selling its faceprint database not just in Illinois, but across the United States. ... Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years."
In response to this settlement, Ton-That said, "The court’s endorsement of the BIPA [Illinois Biometric Information Privacy Act] settlement is an achievement for Clearview AI's customers and our mission of providing justice to victims of crime across the country. Clearview AI intends to serve private sector clients with product offerings that are not affected by this agreement, focused on our core mission of enhancing security."
Under the settlement, Clearview AI can continue to offer its services to law enforcement, federal agencies and government contractors outside of Illinois.