How to infringe the privacy rights of millions of Australians with a few clicks… a cautionary tale by Big Brother and his friends

9 November 2021
Dudley Kneller, Partner, Melbourne
Antoine Pace, Partner, Melbourne

On 14 October 2021, the Australian Information Commissioner and Privacy Commissioner determined that, by using its facial recognition platform to crawl the web, scrape biometric information from various sources on the internet and disclose it through its software, Clearview AI had breached the privacy rights of millions of Australians.[1] Clearview AI was described by The New York Times in 2020 as ‘The Secretive Company That Might End Privacy as We Know It’.[2]

The decision highlights the far-reaching implications of automated data scraping and identification tools, to which regulators are increasingly having to respond, and demonstrates the adage that just because something can be done does not mean it should be done.

About Clearview AI’s facial recognition platform

Clearview AI’s facial recognition and identification platform (Clearview Platform) operates on an ‘as a Service’ basis and functions in five steps (a simplified, illustrative code sketch follows the list):

  • an automated image scraper crawls the web (including social media), collecting images of individuals’ facial features and associated data, including the source webpage URL and image title, and storing all such data (harvested images) in a database on Clearview AI’s servers (the database purportedly contains more than three billion images);
  • a vector creation engine generates a mathematical representation (vector) of each of the harvested images using a machine-learning algorithm, storing and associating those data in the same database as the scraped images (stored vectors);
  • a registered Clearview AI user uploads an individual’s image (target individual image) through Clearview AI’s app or portal, which the vector creation engine analyses to create a vector of the target individual image (target vector);
  • the comparison tool then compares the target vector with the stored vectors in the database, which are linked to the harvested images; and
  • if the tool identifies any sufficiently similar matches in the harvested images, these are then served to the registered user as ‘search results’ in the form of thumbnail images, which are linked to the source web pages.
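
By way of illustration only, the sketch below shows the general shape of the pipeline those five steps describe: embed each scraped image as a vector, store it alongside its source metadata, and answer a query by comparing the query image’s vector against every stored vector. It is not Clearview AI’s implementation; the embed_face stand-in (a random projection rather than a trained neural network), the 0.9 similarity threshold and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np

EMBED_DIM = 128                                   # length of each face "vector"
_rng = np.random.default_rng(seed=0)              # fixed seed: deterministic stand-in
_projection = None                                # lazily sized to the first image seen


def embed_face(image_pixels: np.ndarray) -> np.ndarray:
    """Hypothetical 'vector creation engine': maps a face image to a unit-length
    embedding. A real system would use a trained neural network; a random
    projection stands in here purely for illustration."""
    global _projection
    flat = image_pixels.astype(float).ravel()
    if _projection is None:
        _projection = _rng.standard_normal((flat.size, EMBED_DIM))
    vector = flat @ _projection
    return vector / np.linalg.norm(vector)


# Steps 1-2: scraped images plus their metadata are embedded and stored together.
database: list[tuple[np.ndarray, str, str]] = []  # (stored vector, source URL, image title)


def ingest(pixels: np.ndarray, source_url: str, title: str) -> None:
    database.append((embed_face(pixels), source_url, title))


# Steps 3-5: a registered user's uploaded image is embedded, compared against every
# stored vector, and sufficiently similar matches come back as "search results".
def search(target_pixels: np.ndarray, threshold: float = 0.9) -> list[dict]:
    target_vector = embed_face(target_pixels)
    hits = []
    for stored_vector, url, title in database:
        score = float(stored_vector @ target_vector)   # cosine similarity of unit vectors
        if score >= threshold:
            hits.append({"source_url": url, "title": title, "score": score})
    return sorted(hits, key=lambda h: h["score"], reverse=True)


# Toy usage: random pixel arrays stand in for scraped and uploaded photographs.
img = _rng.random((32, 32))
ingest(img, "https://example.org/profile", "example image")
print(search(img + 0.01 * _rng.random((32, 32))))
```

Even this toy version makes the Commissioner’s later concern concrete: once an image has been scraped and embedded, anyone with query access can link a face back to its source pages, while the individual concerned has no visibility of, or say in, any step of the process.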

Clearview AI says that it offers this service to its government customers, solely for law enforcement and national security purposes. Its website states that users of the Clearview Platform ‘receive high-quality leads with fewer resources expended’ and that the Clearview Platform helps law enforcement agencies to ‘accurately and rapidly identify suspects, persons of interest, and victims to help solve and prevent crimes’.[3]

Leaving aside the Orwellian implications of using the Clearview Platform for mass surveillance, which are themselves greatly concerning, the Clearview Platform is also capable of many other uses, such as those described in Clearview AI’s US and international patent applications. These include “to learn more about a person the user has just met, such as through business, dating, or other relationship”, and “to verify personal identification for the purpose of granting or denying access for a person, a facility, a venue, or a device”. These potential applications may appear attractive on their face; what the patent applications do not state, however, is that platforms like the Clearview Platform, and the data contained in them, could also be used for fraudulent purposes, including identity theft.

Clearview AI’s Australian activities in 2019 and 2020

In late 2019 and early 2020, Clearview AI offered free trials of the Clearview Platform to Australian law enforcement agencies. According to its press release, the OAIC is now investigating the law enforcement agencies’ trial use of the technology and whether they complied with their obligations under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks.

Clearview AI has since ceased trials with Australian law enforcement agencies and instituted a policy of refusing all requests for user accounts from Australia, effectively withdrawing from the Australian market (at least from a customer-facing perspective). However, according to the Commissioner, it “provided no evidence that it is taking steps to cease its large scale collection of Australians’ sensitive biometric information, or its disclosure of Australians’ … images to its registered users for profit”.

The decision

After a joint investigation commenced in 2020 with the UK Information Commissioner’s Office, the Australian Privacy Commissioner determined that Clearview AI breached the Australian Privacy Act 1988 (Cth) by:

  • collecting Australians’ sensitive information (i.e. their biometric data) without their consent;
  • collecting personal information by unfair means;
  • not taking reasonable steps to notify affected individuals of the collection of their personal information;
  • not taking reasonable steps to ensure that personal information it disclosed was accurate, having regard to the purpose of the disclosure; and
  • not taking reasonable steps to implement practices, procedures and systems to ensure compliance with the Australian Privacy Principles.

The Commissioner found that Clearview AI’s practices “fall well short of Australians’ expectations for the protection of their personal information”, that “covert collection of this kind of sensitive information is unreasonably intrusive and unfair”, and that it “carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database”.

Tellingly, the Commissioner pointed out that biometric information of this kind (being facial features of an individual) is unlike a driver’s licence or other means of identification because facial features cannot be wiped or reissued.

In assessing the reasonableness of Clearview AI’s practices, the Commissioner did not accept that the impact on individuals’ privacy was necessary, legitimate and proportionate having regard to any public interest benefits. The Commissioner commented that the covert and indiscriminate collection of harvested images and associated vectors was unreasonably intrusive, and concluded that Clearview AI had interfered with the privacy of Australian individuals by collecting those images and vectors by unfair means.

The Commissioner then ordered Clearview AI to:

  • cease collecting images and vectors for the Clearview Platform from individuals in Australia;
  • destroy all harvested images and vectors, target individual images and vectors, and other relevant information it has collected from individuals in Australia in breach of the Privacy Act; and
  • confirm to the Commissioner, within 90 days, that such collection has ceased and that such data have been destroyed.

Key takeaway

This case sends a strong regulatory message to technology platforms seeking to capitalise on recent advances in AI technologies. Its timing could hardly be better, as the Australian Attorney-General launches the next stage of the review of the Privacy Act while also seeking feedback on the recently released Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021 (Online Privacy Bill). While the review and the Online Privacy Bill have been in the offing for a while, we appear to be seeing a definite shift in the regulator’s emphasis from education and awareness to enforcement. Technology platforms will no doubt want to get ahead of the curve and avoid the genuine risks of over-regulation. Facebook’s formal announcement in the last few days that it is shutting down the Face Recognition technology within its platform gives some insight into the genuine concerns within the sector.[4]

Returning to the Clearview decision: in its press release, the OAIC noted that the UK’s Information Commissioner’s Office (ICO) is considering its next steps and any formal regulatory action that may be appropriate under the UK’s data protection laws.[5] The UK regulator, along with its European counterparts, has demonstrated that it is well beyond the education and awareness phase and is quite comfortable enforcing privacy obligations broadly across the technology sector. Will this case see Australia finally follow suit, or will the regulator continue to ‘educate’ and ‘raise awareness’ around privacy obligations? We will likely know the answer within the next 12 months.

The Commissioner’s full determination can be found on the OAIC website. [6]



Authored by:

Antoine Pace, Partner
Dudley Kneller, Partner

 


[1] See Determination at https://www.oaic.gov.au/__data/assets/pdf_file/0016/11284/Commissioner-initiated-investigation-into-Clearview-AI,-Inc.-Privacy-2021-AICmr-54-14-October-2021.pdf

[2] See New York Times article ‘The Secretive Company That Might End Privacy as We Know It’, 18 January 2020 – https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html

[3] See Clearview website at https://www.clearview.ai/law-enforcement – as at 5 November 2021

[4] See https://about.fb.com/news/2021/11/update-on-use-of-face-recognition/

[5] See OAIC press release at https://www.oaic.gov.au/updates/news-and-media/clearview-ai-breached-australians-privacy

[6] See https://www.oaic.gov.au/__data/assets/pdf_file/0016/11284/Commissioner-initiated-investigation-into-Clearview-AI,-Inc.-Privacy-2021-AICmr-54-14-October-2021.pdf

This update does not constitute legal advice and should not be relied upon as such. It is intended only to provide a summary and general overview on matters of interest and it is not intended to be comprehensive. You should seek legal or other professional advice before acting or relying on any of the content.
