Facial Recognition Technology: Is it time to face the music?

On 19 February 2020, the EU Commission released a White Paper outlining its vision for the development and regulation of artificial intelligence (“AI”) in the European Union over the next ten years.[1]

The White Paper sets out the EU’s strategy to become a world leader in AI and, in particular, the central role AI is expected to play in helping to achieve the EU’s Sustainable Development Goals in the coming years. However, it also expresses concerns about the proliferation of “high risk” AI, such as automated facial recognition technology (“FRT”), and the potential for this, in the absence of an EU-wide regulatory framework, to fragment the internal market. As recently as January 2020, the EU Commission was considering the introduction of a temporary ban of three to five years on the use of FRT in public places, to allow time to develop an EU-wide approach. However, the Commission appears to have reconsidered this position, as the proposed ban does not feature in the White Paper.[1]

What is FRT and where is it used?

FRT is a technology based on an algorithm that identifies individuals by automatically analysing key facial features and generating a unique biometric template of those features. This template can then be compared against similar templates generated from a collection of known faces in a database or “watch list” in order to determine possible matches. FRT can analyse facial features from still images or from live camera feeds, and may be deployed through real-time CCTV feeds to locate a particular individual.
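By way of illustration only, the comparison step described above can be sketched in a few lines of Python. The sketch below assumes that faces have already been converted into fixed-length numerical templates by a separate face-embedding model; the cosine similarity measure, the 0.6 threshold and the function and variable names are illustrative assumptions rather than a description of any particular FRT system.

  import numpy as np

  def best_match(probe, watch_list, threshold=0.6):
      """Return the watch list identity most similar to `probe`, or None.

      `probe` and each watch list value are fixed-length vectors (biometric
      templates) produced by a separate face-embedding model, not shown here.
      """
      best_name, best_score = None, threshold
      for name, template in watch_list.items():
          # Cosine similarity: close to 1.0 for near-identical templates.
          score = np.dot(probe, template) / (np.linalg.norm(probe) * np.linalg.norm(template))
          if score > best_score:
              best_name, best_score = name, score
      return best_name

  # Toy example using random 128-dimensional "templates" (no real faces involved).
  rng = np.random.default_rng(0)
  watch_list = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
  print(best_match(rng.normal(size=128), watch_list))  # most likely None: no match

In a live CCTV deployment, essentially the same comparison would be repeated for every face detected in each frame, which is why the choice of threshold and the quality of the templates largely determine the rate of false positives discussed below.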

Concerns regarding FRT have arisen in recent years amid reports that the automated technology has repeatedly displayed racial and gender bias when deployed in crowds, often returning higher false positive rates for women and certain ethnic groups (“algorithmic bias”). Further, the use of FRT on a large scale by law enforcement bodies, or even by private security service providers, may interfere with a number of fundamental rights, such as the right to privacy and the rights of freedom of expression, association and assembly. To date, FRT has more typically been used by public authorities responsible for law enforcement and involves a high level of human oversight; however, it may also be used by private entities in specific circumstances.

In July 2019, Danish football club Brøndby IF was authorised by the Danish Data Protection Agency (the Datatilsynet), for reasons of substantial public interest, to begin deploying live FRT in its stadium to identify blacklisted fans and prevent them from entering during high profile matches.

Since February 2020, the London Metropolitan Police has deployed live FRT, following a number of trials of the technology in different parts of the city. The system can match passers-by against a watch list of approximately 5,000 people who are wanted by the police or are missing persons in the area. Similarly, the South Wales Police have deployed live FRT during large events in Cardiff since 2017. The deployment during the UEFA Champions League Final that year resulted in a high incidence of false positives: of the 2,470 matches made, only 173 were “true” matches – an accuracy rate of only 7%. Nevertheless, the High Court of England and Wales has since found that the use of FRT by the South Wales Police complies with data protection law.[2]
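For clarity, the 7% figure is simply the proportion of the system’s alerts that turned out to be correct matches, as the short calculation below illustrates (figures taken from the deployment described above):

  # South Wales Police, 2017 UEFA Champions League Final deployment
  true_matches = 173
  total_alerts = 2470
  false_positives = total_alerts - true_matches  # 2,297 incorrect alerts
  print(f"Alerts that were true matches: {true_matches / total_alerts:.1%}")  # ~7.0%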

Applicable Law

Biometric data

Where a facial image is analysed by means of specific technical processing for the purpose of uniquely identifying a natural person, this will be considered to be the processing of biometric data.

Article 4(14) of the General Data Protection Regulation (“GDPR”) defines biometric data as personal data resulting from specific technical processing relating to the physical, physiological or behavioural characteristics of a natural person, which allow or confirm the unique identification of that natural person, such as facial images or dactyloscopic data (fingerprint data).

The processing of images or footage of a data subject will not be considered to involve the processing of biometric data unless it allows for the unique identification or authentication of the data subject.[3]

Restrictions on processing biometric data

Biometric data is a special category of personal data and, where processed for the purpose of uniquely identifying a natural person, its processing is generally prohibited under Article 9(1) of the GDPR. Processing biometric data may nevertheless be permissible where one of the exceptions listed in Article 9(2) of the GDPR applies, including processing:

  • with explicit consent of the data subject (Article 9(2)(a));
  • to protect the vital interests of the data subject or another natural person (Article 9(2)(c));
  • personal data that has manifestly been made public by the data subject (Article 9(2)(e)); and
  • necessary for reasons of substantial public interest (subject to suitable and specific measures to safeguard the fundamental rights and interests of the data subject) (Article 9(2)(g)).

The use of FRT for identification purposes by private controllers will most likely require reliance on the explicit consent of the data subject. Any controller considering relying on consent as a basis for using FRT must ensure that data subjects can withdraw that consent at any time.

Although other exceptions may be applicable, the EDPB has rejected the suggestion that the exception under Article 9(2)(e) (processing data that have been manifestly made public) could be relied upon to process biometric data in the context of live video surveillance, stating that the “mere fact of entering into the range of the camera does not imply that the data subject intends to make public special categories of data relating to him or her”.[4]

It is important to remember that Article 6 of the GDPR must be complied with, in addition to Article 9, when processing biometric data. In other words, a controller must establish a clear legal basis for the processing under Article 6 before going on to consider which exception under Article 9(2) may be relied upon.

FRT is currently more commonly deployed by criminal law enforcement bodies. In such cases, the processing activities will be subject to the Law Enforcement Directive (“LED”), as transposed into Member State law, rather than the GDPR.[5]

A data protection impact assessment (“DPIA”) will be required where a controller is considering the use of FRT, as FRT is likely to result in a high risk to the rights and freedoms of data subjects. If the DPIA identifies a high risk to data subjects which cannot be mitigated by the controller, the controller must consult its competent data protection supervisory authority before proceeding, for both GDPR and LED purposes.

Developing an “ecosystem of trust”

One of the key strategies identified in the Commission White Paper to promote the successful development of AI in the EU is the establishment of an “ecosystem of trust” at an early stage, to ensure that Member States and citizens embrace the benefits that AI can offer. According to the White Paper, the use of AI for remote biometric identification (including FRT) and for other “intrusive” surveillance technologies would always be considered “high-risk” and would therefore require a number of additional safeguards to secure this trust, including:

  • ensuring AI systems are trained on data sets that are sufficiently broad and representative to avoid dangerous situations and discrimination;
  • taking into account the complexity of the decision-making processes of AI systems (sometimes called the black box effect), it will be essential to keep records of the programming of the algorithm, the training methodologies applied to the AI system and the data used to train the AI system;
  • providing data subjects with adequate information about a high risk AI system’s capabilities and limitations, and clearly informing individuals when they are interacting with an AI system;
  • ensuring that all risks that an AI system may generate are routinely considered and ensuring that the AI is as technically accurate as possible;
  • ensuring appropriate oversight from natural persons is in place to ensure that AI systems operate in an ethical and trustworthy manner; and
  • allowing the use of FRT and other biometric identification only when in accordance with the GDPR and the LED and where such use is duly justified, proportionate and subject to adequate safeguards.

Also contributed by Siobhán Power


  1. “On Artificial Intelligence – A European approach to excellence and trust” COM(2020) 65 final.
  2. R (Bridges) v Chief Constable of South Wales [2019] EWHC 2341 (Admin).
  3. Recital 51 of the GDPR.
  4. EDPB, Guidelines 3/2019 on processing of personal data through video devices, Version 2.0, adopted on 29 January 2020.
  5. Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016.

This document has been prepared by McCann FitzGerald LLP for general guidance only and should not be regarded as a substitute for professional advice. Such advice should always be taken before acting on any of the matters discussed.