The digital panopticon – who watches the watchers?

Recent statements by the Home Secretary signalling a significant nationwide expansion of live facial recognition (LFR) technology indicate that biometric surveillance may be moving from limited pilot deployments to a more permanent policing infrastructure across England and Wales.

Under the proposed model, every police force would have access to LFR as a standard operational tool. Nor is deployment confined to the police: private sector security providers also use LFR. This raises an important question: is the current legal framework adequate to support the scale and sophistication of this technology?

In the United States, for example, Immigration and Customs Enforcement (ICE) has demonstrated an ability to leverage agentic AI to combine facial recognition, automated identity matching, and large‑scale data mining across numerous commercial and governmental datasets with comparatively limited statutory constraint. This highlights a model of surveillance that would be difficult to reconcile with European expectations around necessity, proportionality, purpose limitation and fundamental rights. The UK’s anticipated expansion of LFR therefore raises legitimate concerns as to whether the currently fragmented legal regime can provide sufficient safeguards to prevent similar forms of capability creep.

Existing framework

At present, the regulation of LFR is derived from a patchwork of instruments. The primary sources of law include:

  • the Data Protection Act 2018 (DPA 2018), Part 3 of which implements the Law Enforcement Directive and governs sensitive processing by competent authorities;
  • the UK GDPR (engaged in non-law enforcement contexts);
  • the Surveillance Camera Code of Practice, issued under the Protection of Freedoms Act 2012.

While these instruments impose requirements around data minimisation, necessity, proportionality, transparency, and governance, they do not constitute a bespoke statutory regime for LFR. As a result, operational guidance is often interpreted at force level, leading to variability in practice.

The leading authority is still R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, which examined pilot deployments by the police in 2017 and 2018.

The Court of Appeal held that, although LFR was not inherently unlawful, the deployment in that case breached several requirements:

  • watchlist composition: the police exercised an excessively broad discretion in determining who could be included, extending to individuals of possible, but unclear, interest;
  • data protection impact assessment (DPIA): the DPIA failed to fully recognise that the mere capture and analysis of biometric data from all passers-by constituted the processing of special category data;
  • public sector equality duty: concerns were raised regarding the absence of sufficient engagement with potential algorithmic or demographic bias under the Equality Act 2010.

Although the case pre-dated the UK GDPR, the parties agreed that it should be analysed under the GDPR.

Evolving risks in 2026

Since Bridges, the technological environment has changed markedly. The accuracy and speed of biometric systems have improved, while their interoperability with large language models (LLMs) and other analytical tools enables far more comprehensive data correlation than in 2018.

This raises several modern risk considerations:

  • enhanced profiling capability – biometric data, when combined with behavioural or contextual datasets, could facilitate deeper forms of identity inference;
  • operational scale – nationwide deployment would normalise continuous biometric scanning in everyday public spaces, creating the potential for what is effectively mass surveillance;
  • impact on civil liberties – human rights organisations have highlighted risks to freedom of expression, legitimate protest, and freedom of movement, all protected under Articles 8, 10 and 11 of the European Convention on Human Rights, given domestic effect by the Human Rights Act 1998;
  • accuracy and bias – while the technology has improved, concerns persist that demographic bias may produce disproportionate policing outcomes, aligning with the Court of Appeal’s observations in Bridges.

Although proposals emphasise data protection impact assessments, human‑in‑the‑loop review and improved accuracy of output, these safeguards may not fully mitigate the systemic risks identified above.

Adequacy of the current framework

Against this backdrop, the present legal regime appears limited in several respects:

  • it does not provide a statutory basis specifically authorising LFR deployment;
  • it lacks clear, uniform criteria for permissible use cases or for excluding contexts where LFR would be inappropriate (eg political demonstrations);
  • individuals have limited routes for challenge or redress in cases of misidentification or wrongful intervention;
  • there is no statutory requirement for independent oversight or for the reporting of system failures.

The move towards a nationwide deployment of live facial recognition marks a decisive shift in the UK’s surveillance landscape, yet it is unfolding atop a legal framework not designed for technologies capable of real‑time, potentially ‘always on’ biometric analysis. The existing patchwork of data protection law and non‑binding guidance offers important principles but lacks the statutory safeguards needed to prevent capability creep and protect fundamental rights.

Until a bespoke legislative regime is established, a rigorous, independently scrutinised data protection impact assessment remains the most effective tool for imposing accountability and tempering the expansion of LFR with appropriate safeguards.

We can help you develop suitable compliance frameworks, enabling innovation while minimising regulatory and operational risk. If you are developing, procuring or deploying biometric or AI‑enabled technologies, please do get in touch with us.
