On 14 October 2021, the Australian Information Commissioner and Privacy Commissioner determined that, by using its facial recognition platform to crawl the web, scrape biometric information from various sources on the internet, and disclose it through its software, Clearview AI had breached the privacy rights of millions of Australians. Clearview AI was described by The New York Times in 2020 as ‘The Secretive Company That Might End Privacy as We Know It’.
This decision discusses the far-reaching implications of automated data scraping and identification tools, to which regulators are increasingly having to respond, and demonstrates the adage that just because something can be done does not mean it should be done.
Clearview AI’s facial recognition and identification platform (Clearview Platform) operates on an ‘as a Service’ basis, and functions in five steps:
Clearview AI says that it offers this service to its government customers solely for law enforcement and national security purposes. Its website states that users of the Clearview Platform ‘receive high-quality leads with fewer resources expended’ and that the Clearview Platform helps law enforcement agencies to ‘accurately and rapidly identify suspects, persons of interest, and victims to help solve and prevent crimes’.
Leaving aside the Orwellian implications of the use of the Clearview Platform for mass surveillance, which are themselves greatly concerning, the Clearview Platform is also capable of many other uses, such as those described in Clearview AI’s US and international patent applications. These include ‘to learn more about a person the user has just met, such as through business, dating, or other relationship’, and ‘to verify personal identification for the purpose of granting or denying access for a person, a facility, a venue, or a device’. These potential applications may appear attractive on their face; however, the patent applications do not mention that platforms like the Clearview Platform, and the data contained in them, could also be used for fraudulent purposes, including identity theft.
In late 2019 and early 2020, Clearview AI offered free trials of the Clearview Platform to Australian law enforcement agencies. According to its press release, the OAIC is investigating the law enforcement agencies’ trial use of the technology and whether they complied with their obligations under the Australian Government Agencies Privacy Code to assess and mitigate privacy risks.
Clearview AI ceased trials with Australian law enforcement agencies and instituted a policy of refusing all requests for user accounts from Australia, effectively withdrawing from the Australian market (from a customer-facing perspective). However, according to the Commissioner, it “provided no evidence that it is taking steps to cease its large scale collection of Australians’ sensitive biometric information, or its disclosure of Australians’ … images to its registered users for profit”.
After a joint investigation commenced in 2020 with the UK Information Commissioner’s Office, the Australian Privacy Commissioner determined that Clearview AI breached the Australian Privacy Act 1988 (Cth) by:
The Commissioner found that Clearview AI’s practices “fall well short of Australians’ expectations for the protection of their personal information”, that “covert collection of this kind of sensitive information is unreasonably intrusive and unfair”, and that it “carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database”.
Tellingly, the Commissioner pointed out that biometric information of this kind (being the facial features of an individual) is unlike a driver’s licence or other means of identification, because facial features cannot be wiped or reissued.
When discussing the reasonableness of Clearview AI’s practices, the Commissioner did not accept that the impact on individuals’ privacy was necessary, legitimate and proportionate having regard to any public interest benefits. The Commissioner commented that the covert and indiscriminate collection of harvested images and associated vectors was unreasonably intrusive, and concluded that Clearview AI had interfered with the privacy of Australian individuals by collecting those images and vectors by unfair means.
The Commissioner then ordered Clearview AI to:
This case sends a strong regulatory message to technology platforms seeking to capitalise on recent advances in AI technologies. Its timing could not have been better: the Australian Attorney-General has launched the next stage of the review of the Privacy Act while at the same time seeking feedback on the release of the Privacy Legislation Amendment (Enhancing Online Privacy and Other Measures) Bill 2021 (Online Privacy Bill). Whilst the review and the Online Privacy Bill have been in the offing for a while, we appear to be seeing a definite shift in the regulator’s emphasis from education and awareness to enforcement. Technology platforms will no doubt want to get ahead of the curve and avoid the genuine risks of over-regulation. Facebook’s announcement in recent days that it is shutting down the face recognition technology within its platform gives us some insight into the genuine concerns within the sector.
Returning to the Clearview decision, the OAIC noted in its press release that the UK Information Commissioner’s Office (ICO) is considering its next steps and any formal regulatory action that may be appropriate under the UK’s data protection laws. The UK regulator, along with its European counterparts, has demonstrated that it is well beyond the education and awareness phase and is quite comfortable enforcing privacy obligations broadly across the technology sector. Will this case see Australia finally follow suit, or will the regulator continue to ‘educate’ and ‘raise awareness’ around privacy obligations? We will likely know the answer to that question within the next 12 months.
The Commissioner’s full determination can be found on the OAIC website. 
Antoine Pace, Partner
Dudley Kneller, Partner
 See New York Times article ‘The Secretive Company That Might End Privacy as We Know It’, 18 January 2020 – https://www.nytimes.com/2020/01/18/technology/clearview-privacy-facial-recognition.html
 See OAIC press release at https://www.oaic.gov.au/updates/news-and-media/clearview-ai-breached-australians-privacy