Posted: 27/01/2025
Biometric recognition technologies offer enhanced security, efficiency and convenience, but their use also brings enhanced data protection obligations. Organisations using these systems, particularly educational institutions, must be alert to the risks the systems pose and take appropriate steps to mitigate them.
Many organisations use systems that involve the processing of biometric data, such as fingerprint recognition, facial recognition, iris recognition and voice recognition.
Where biometric data are used to identify individuals, they are ‘special category data’ under the UK General Data Protection Regulation (the UK GDPR), and the data controller must satisfy additional requirements to demonstrate compliance with their data protection obligations.
Biometric recognition systems introduce specific risks. They work by comparing two sets of biometric features: a stored biometric reference and a newly captured biometric sample. The two will never be exactly identical, even when they come from the same person, so the system instead estimates the probability that they match. That reliance on probability creates the potential for two types of error: false biometric acceptance (false positives) and false biometric rejection (false negatives).
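To make that trade-off concrete, here is a minimal, illustrative sketch in Python. It is not any vendor's actual algorithm: it assumes biometric features have been reduced to numeric vectors compared by cosine similarity against a tunable threshold, and every name and figure in it is hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity score in [-1, 1]; higher means a closer match."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches(reference: np.ndarray, sample: np.ndarray, threshold: float = 0.95) -> bool:
    """Accept the sample as the enrolled person if similarity clears the threshold.

    Because the stored reference and a fresh sample are never identical,
    any threshold embodies a trade-off:
      - too low  -> more false acceptances (false positives)
      - too high -> more false rejections (false negatives)
    """
    return cosine_similarity(reference, sample) >= threshold

rng = np.random.default_rng(0)
enrolled = np.array([0.12, 0.85, 0.33, 0.56])        # hypothetical stored reference
good_scan = enrolled + rng.normal(0, 0.01, size=4)   # small natural variation
noisy_scan = enrolled + rng.normal(0, 0.40, size=4)  # poor-quality capture

print(matches(enrolled, good_scan))   # True: correct acceptance
print(matches(enrolled, noisy_scan))  # may be False: a false rejection
```

However the threshold is set, some error rate remains; the question for a data controller is whether that rate, and its consequences for the individuals concerned, has been properly assessed.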
Just like human decision-making, automated processes are prone to biases. If the technology has been trained on a data set that is not representative of the context in which it is to be used, for example, it is likely to be less effective at identifying certain groups. This can result in bias in the algorithm and, ultimately, in discrimination against those groups. Some biometric recognition systems have been found to be biased on the basis of age, race and gender.
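One practical check for this kind of bias, sketched below in Python with entirely made-up data, is to disaggregate the system's error rates by demographic group during testing: materially higher false rejection or false acceptance rates for one group are precisely the skew described above.

```python
from collections import defaultdict

# Hypothetical test outcomes: (group, was it a genuine match?, did the system accept?)
results = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", False, False),
    ("group_b", True, False), ("group_b", True, True), ("group_b", False, True),
]

tallies = defaultdict(lambda: {"fr": 0, "genuine": 0, "fa": 0, "impostor": 0})
for group, genuine, accepted in results:
    t = tallies[group]
    if genuine:
        t["genuine"] += 1
        t["fr"] += not accepted   # genuine user wrongly rejected
    else:
        t["impostor"] += 1
        t["fa"] += accepted       # impostor wrongly accepted

for group, t in tallies.items():
    frr = t["fr"] / t["genuine"] if t["genuine"] else 0.0
    far = t["fa"] / t["impostor"] if t["impostor"] else 0.0
    print(f"{group}: false rejection rate {frr:.0%}, false acceptance rate {far:.0%}")
```

A disparity of this kind, identified before deployment, is exactly the sort of risk a data protection impact assessment is designed to surface.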
In July 2024, the ICO issued a reprimand to a school in Essex for its use of facial recognition technology to take cashless payments from pupils in its canteen.
Chelmer Valley High School, in Chelmsford, was reprimanded for beginning to use the technology without first carrying out a data protection impact assessment (DPIA) and for failing to properly obtain permission to process pupils’ biometric data. The ICO noted that the school had also failed to seek opinions from its data protection officer or to consult with parents and pupils before implementing the technology.
By implementing facial recognition technology without first carrying out a DPIA, the school failed to make any assessment of the risks that might be presented to its pupils by the technology. This type of processing presented a risk to pupils’ rights and freedoms, particularly around bias and discrimination.
The school had sent parents a slip to return if they did not want their child to use the facial recognition technology. ‘Opt-out’ consent, such as this, is not a valid form of consent for the purposes of UK data protection law. By failing to properly obtain valid consent, the school had failed to give its pupils the opportunity to decide whether they did or did not want their data to be used in this way. Children are often less aware of the risks that may be involved when their personal data are collected and processed, so these risks must be explained to them in a way that they can understand.
In addition, most of the pupils were old enough to provide their own consent, so seeking consent from their parents or carers deprived those pupils of the ability to exercise their own rights over their personal data.
Importantly, the ICO has not said that biometric recognition systems, such as facial or fingerprint recognition technologies, should never be used in schools or other educational institutions. Rather, institutions should take proper steps to ensure that risks are assessed and addressed, and valid consent obtained, before these systems are utilised.
Here is a summary of the steps the school should have taken, which applies equally to the use of any system that processes pupils’ biometric data.