The UK’s data protection watchdog has warned against reckless and inappropriate use of live facial recognition (LFR) in public places.
Information Commissioner Elizabeth Denham, who published an opinion today on the use of this biometric surveillance in public to set out what are, in effect, the “rules of engagement,” also noted that her office has already investigated a number of proposed applications of the technology, finding problems in all cases.
“I am deeply concerned about the potential for live facial recognition (LFR) technology to be used inappropriately, excessively or even recklessly. When sensitive personal data is collected on a mass scale without people’s knowledge, choice or control, the impacts could be significant,” she warned in a blog post.
“Uses we have seen included addressing public safety concerns and creating biometric profiles to target people with personalized advertising, among other things.
“It is telling that none of the organizations involved in our completed investigations were able to fully justify the processing and none of the systems that went live fully complied with the requirements of data protection law. All of the organizations chose to stop, or not proceed with, the use of LFR.”
“Unlike CCTV, LFR and its algorithms can automatically identify who you are and infer sensitive details about you. It can be used to instantly profile you to serve up personalized adverts or match your image against known shoplifters as you do your weekly grocery shop,” added Denham.
“In future, there is the potential to overlay CCTV cameras with LFR, and even to combine it with social media data or other ‘big data’ systems: LFR is supercharged CCTV.”
The use of biometric technologies to remotely identify people raises major human rights concerns, including privacy and the risk of discrimination.
All over Europe there are campaigns, such as Reclaim Your Face, calling for a ban on biometric mass surveillance.
In another targeted action, Privacy International and others filed legal challenges against the controversial US facial recognition company Clearview AI back in May, seeking to stop its operations in Europe altogether. (Some regional police forces have already tapped the technology, including in Sweden, where the police were fined earlier this year by the national data protection authority for unlawful use of it.)
But while there is substantial public opposition to biometric surveillance in Europe, the region’s lawmakers have so far, at best, only fiddled around the edges of the controversial issue.
An EU-wide regulation presented in April that proposes a risk-based framework for artificial intelligence applications contained only a partial ban on biometric surveillance by law enforcement authorities in public places – with far-reaching exceptions that have received much criticism.
There have also been calls from MEPs across the political spectrum for a total ban on the use of technologies such as live facial recognition in public. The EU’s chief data protection supervisor has likewise urged legislators to at least temporarily ban the use of biometric surveillance in public.
The EU’s planned AI regulation will not apply in the UK anyway, as the country is now outside the bloc. And it remains to be seen whether the UK government will attempt to weaken the national data protection regime.
For example, a recent government-commissioned report looking at how the UK could reshape its regulatory regime post-Brexit suggested replacing the UK GDPR with a new “UK framework”, proposing changes around “data for innovation and in the public interest”, as it puts it, and advocating revisions focused on AI and “growth sectors”. Whether the UK’s data protection regime will be set ablaze in a post-Brexit regulatory “bonfire” is a key concern for rights watchers.
(The report by the Task Force on Innovation, Growth and Regulatory Reform advocates, for example, scrapping Article 22 of the GDPR entirely, which gives people the right not to be subject to decisions based solely on automated processing, and suggests replacing it with “a focus” on “whether automated profiling meets a legitimate or public interest test”, with guidance on that provided by the Information Commissioner’s Office (ICO). It should also be noted that the government is in the process of recruiting Denham’s successor; and the digital minister has said he wants her replacement to “take a bold new approach” that “no longer sees data as a threat, but as the great opportunity of our time”.)
For now, those wanting to deploy LFR in the UK must comply with the provisions of the UK’s Data Protection Act 2018 and the UK General Data Protection Regulation (the country’s transposition of the EU GDPR into national law prior to Brexit), per the ICO opinion, including the data protection principles set out in Article 5 of the UK GDPR: lawfulness, fairness, transparency, purpose limitation, data minimization, storage limitation, security and accountability.
Controllers must also enable individuals to exercise their rights, the opinion says.
“Organizations will need to demonstrate high standards of governance and accountability from the outset, including being able to justify that the use of LFR is fair, necessary and proportionate in each specific context in which it is deployed. They need to demonstrate that less intrusive techniques won’t work,” wrote Denham. “These are important standards that require robust assessment.
“Organizations will also need to understand and assess the risks of using a potentially intrusive technology and its impact on people’s privacy and their lives. For example, how issues around accuracy and bias could lead to misidentification and the damage or detriment that comes with that.”
The timing of the publication of the ICO’s opinion on LFR is interesting in light of wider concerns about the UK’s direction of travel on data protection and privacy.
If, for example, the government intends to recruit a new, more “pliant” ICO, one inclined to rip up the rulebook on privacy and AI, including in areas like biometric surveillance, it will at least be rather awkward to do so with an opinion from the prior commissioner on the public record detailing the dangers of reckless and inappropriate use of LFR.
The next information commissioner certainly won’t be able to say they weren’t clearly warned that biometric data is particularly sensitive, and can be used to estimate or infer other characteristics, such as age, sex, gender or ethnicity.
Nor that the UK’s superior courts have previously concluded that “like fingerprints and DNA [a facial biometric template] is information of an ‘intrinsically private’ character”, as the ICO opinion notes, while underlining that LFR can mean this highly sensitive data is collected without the person concerned even being aware it is happening.
Denham’s opinion also hammers home the need for public trust and confidence if any technology is to succeed, warning: “The public must be able to trust that its use is lawful, fair, transparent and meets the other standards set out in data protection law.”
The ICO has previously published an opinion on the use of LFR by police forces, which it says also sets “a high threshold for its use”. (And some UK police forces, including the Met in London, have been among the early adopters of facial recognition technology, which has in turn landed some in legal hot water over issues such as bias.)
Disappointingly for human rights campaigners, however, the ICO opinion stops short of recommending a total ban on the use of biometric surveillance in public by private companies or public organizations, with the commissioner arguing that while the technology carries risks, there could also be use cases where it delivers a high benefit (such as in the search for a missing child).
“It is not my role to endorse or ban a technology but, while this technology is developing and not widely deployed, we have an opportunity to ensure it does not expand without due regard for data protection,” she wrote, arguing instead that “data protection and people’s privacy must be at the heart of any decisions to deploy LFR”.
Denham added that current UK law “sets a high bar to justify the use of LFR and its algorithms in places where we shop, socialize or gather”.
“With any new technology, building public trust and confidence in the way people’s information is used is crucial to realizing its full benefits,” she reiterated, noting how a lack of trust in the US has led some cities to ban the use of LFR in certain contexts, and some companies to pause their services until the rules are clearer.
“Without trust, the benefits of technology are lost,” she also warned.
There is a red line here that the UK government may be forgetting in its unseemly haste to (potentially) gut the UK’s data protection regime in the name of flimsy “innovation”. If it tries to “free” national data protection rules from the EU’s core principles (lawfulness, fairness, proportionality, transparency, accountability and so on), it risks falling out of regulatory alignment with the EU, which would then force the European Commission to tear up the EU-UK data adequacy deal (on which the ink is still drying).
The EU’s data adequacy agreement with the UK relies on the UK providing essentially equivalent protections for people’s data. Without that coveted adequacy status, UK companies would immediately face far greater legal hurdles to processing EU citizens’ data (as US companies now do, following the demise of Safe Harbor and Privacy Shield). There could even be situations where EU data protection agencies order EU-UK data flows to be cut off altogether…
Obviously, such a scenario would be terrible for UK business and “innovation”, even before you consider the wider issue of public trust in technologies, and whether the British public itself wants its privacy rights set on fire.
With all that in mind, you really have to wonder whether anyone in the UK government has thought this “regulatory reform” through. For now, at least, the ICO is still able to do the thinking for them.