News and Publications

EU AI Act – and a UK AI Regulation Bill?

Posted: 15/12/2023

AI legislation has been much in the news, with the European Parliament and Council of the EU reaching long-awaited political agreement on the proposed European Union Artificial Intelligence Act on Friday 8 December. The last few weeks also saw the UK Artificial Intelligence (Regulation) Bill being introduced as a Private Members’ Bill to the House of Lords. So what is the position, and will the UK look to move away from its principles-based approach towards legislation too?

EU AI Act – a summary

The final text of the agreed European Union Artificial Intelligence Act is not yet available. However, some new elements are expected to include the following:

  • transparency requirements for ‘general purpose AI models’ which are trained on large volumes of data and would include ChatGPT. This will be a two-tier approach: all models will be required to make technical documentation available, with additional obligations for ‘high impact’ models which pose a ‘systemic risk’. This is significant because the approach focuses on the type of technology used; 
  • a requirement for high-risk AI applications to carry out a ‘fundamental rights impact assessment’ prior to implementation; and
  • narrow exceptions permitting the use of ‘remote biometric identification’ or ‘automated facial recognition’ systems for specific law enforcement purposes.

The act is likely to become law in the summer of 2024 and to come into force progressively over the following two years.

UK approach

In March 2023 the UK set out its ‘pro-innovation’ approach to AI regulation in the government’s AI white paper. As mentioned in an earlier update, the white paper suggested there would be no new legislation regulating the technology. Indications were that the government would take a principles-based approach, relying on existing legislation in areas like intellectual property as well as sector-based guidance from regulators, including the Medicines and Healthcare products Regulatory Agency and the Competition and Markets Authority.

Since then, there has been a rapid succession of developments. On 30 October, the US issued an executive order on ‘safe, secure and trustworthy artificial intelligence’. On 1 and 2 November, the UK held the AI Safety Summit, focusing on a responsible approach to seizing the opportunities of AI through an international effort to research, understand and mitigate the risks posed by AI technologies. In addition, the G7 announced agreement on the International Guiding Principles for Organisations Developing Advanced AI Systems, promoting the responsible and ethical use of AI.

Possibility of a UK AI Act?

The draft Artificial Intelligence (Regulation) Bill has had its first reading in the House of Lords. Private Members’ Bills usually give members who are not government ministers an opportunity to put forward legislative proposals and respond to issues of public concern; this one is unusual in that it addresses an area of specific focus for the government. Such bills are rarely successful, but against the current regulatory backdrop it is one to watch: it could signal a departure from the principles-based approach and prompt the government to consider legislation.

The bill aims to regulate AI technology through the introduction of a new AI regulatory body and the crystallisation of the white paper’s AI principles in legislative form. The provisions contained in the bill appear to go some way to implementing the G7’s 11 guiding principles, such as through effective identification and mitigation of risks associated with AI systems, fostering a culture of transparency with regard to the training, deployment and use of AI, and introducing effective measures to protect personal data and intellectual property rights. Some key provisions relate to:

  • creation of a dedicated AI authority within the UK;
  • setting out regulatory obligations (which mirror the UK white paper’s principles):
    • on the AI authority to have regard to the principles of safety, security, and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress;
    • that businesses deploying AI systems should be transparent and compliant with applicable laws, including data protection and intellectual property; and
    • for AI applications to be inclusive, non-discriminatory, accessible and reusable;
  • regulatory sandboxes;
  • designated AI officers – a new role with oversight of AI compliance, conceptually similar to data protection officers; and
  • transparency, including requirements on AI developers to provide warnings and labelling, and to provide the AI authority with a record of all third-party data and intellectual property used to train AI systems. These are onerous requirements, and similar requirements were hotly debated during the passage of the EU AI Act.

However, speaking before the Science, Innovation and Technology Committee, the Secretary of State for Science, Innovation and Technology, Michelle Donelan, said that with such a fast-moving technology the UK government will wait until the time is right to legislate on AI, and that we should expect to hear more in the new year. One area of focus is whether the UK chooses to create a new central regulator for AI (which would mirror the EU’s AI Office), or instead to upskill existing regulators to avoid duplication, recognising that AI is present in every sector. If it does not create a central AI regulator, the government will of course need to consider how to ensure that gaps in oversight do not occur.


It will be necessary to ‘watch this space’ on both the EU and UK fronts, for the final text of the EU AI Act (and a date for implementation) and for further responses from the UK government regarding AI regulation.

All businesses should in any case be preparing to assess the impact of the EU AI Act and the UK regulatory position on any use of AI within their organisation, and should:

  • identify what AI solutions are being used in their organisation, and for what intended purposes;
  • consider where they are in the supply chain in relation to AI, as this will be important in understanding compliance obligations and risks;
  • understand how the AI solutions have been trained and whether there are any obvious gaps in training data which could result in inherent system bias. This will also be necessary to ensure explainability, fairness and the safeguarding of rights of individuals;
  • consider and document appropriate risk assessments, using existing compliance frameworks if they are helpful. This will need to take into account:
    • the EU AI Act's risk classifications and associated compliance obligations, as well as the UK white paper principles;
    • risks that could potentially exist through the use of outputs from any AI system, including how this can be mitigated by improving inputs or introducing additional safeguards to ensure fairness;
    • that the approach to compliance with AI principles and regulation must be looked at side by side with other factors such as data protection compliance and mandatory impact assessments under the UK and EU GDPR (and other privacy laws if relevant), the need to update privacy or other notices to individuals, and a review of intellectual property and liability issues relevant to use of AI systems.


Penningtons Manches Cooper LLP

Penningtons Manches Cooper LLP is a limited liability partnership registered in England and Wales with registered number OC311575 and is authorised and regulated by the Solicitors Regulation Authority under number 419867.