
The state of AI regulation in the UK and EU

Posted: 03/08/2023


On 14 June 2023, the European Parliament voted overwhelmingly to adopt its negotiating position on the proposed European Union Artificial Intelligence Act (AI Act). The act aims to regulate in considerable detail the use and development of artificial intelligence (AI) across the EU. The next stage is for trilogue discussions to commence between the European Commission, Parliament and Council to agree the final text of the AI Act, with a view to it becoming law by the end of the year.

The AI Act in its current form represents a markedly different approach from the UK’s current proposals, which were set out in the government’s white paper in March 2023. Even in draft form, the AI Act has attracted significant criticism and opposition from the AI community, including from OpenAI, which has lobbied against its proposed approach to regulation. In taking these early positions, the UK and EU are both motivated by the prospect of leading the global market in AI, through fostering innovation as well as through world-leading regulation.

In practice, developers looking to a global market should assess their compliance with each regime and consider how to balance the distinct approaches to AI regulation in the UK and EU, with an eye also to further guidance or regulation that might come from important markets such as the USA. Common themes across international regulations will be welcomed by those already using, or seeking to use, AI solutions, as they ease the potential compliance burden.

Common objectives

Even though the UK and EU have taken different approaches, as this article explores in more detail below, their overall objectives are similar: to balance the risks to users and the general public whilst supporting investment and innovation in the field of AI.

The white paper on AI, which follows the 2022 policy paper, Establishing a pro-innovation approach to AI regulation, outlines the UK’s pro-innovation approach and states that ‘while we should capitalise on the benefits of these technologies, we should also not overlook the new risks that may arise from their use’. Similarly, the objectives of the AI Act are to:

  • ensure that AI systems placed on the EU market and used in the EU are safe and respect existing law on fundamental rights and EU values;
  • ensure legal certainty to facilitate investment and innovation in AI;
  • enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; and
  • facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.

What brings a system into scope?

UK - the white paper approach
Unlike the EU, with its proposed AI Act, the UK has indicated in the white paper that, at least at this stage, there are no plans to introduce new legislation dealing specifically with AI.

Instead, the white paper proposes five principles of AI governance:

  • Safety, security and robustness – AI systems must be able to function in a robust, secure and safe way throughout their life cycle.
  • Appropriate transparency and ‘explainability’ – AI systems must have an appropriate level of transparency and ‘explainability’ so that regulators have sufficient information about AI systems and their associated inputs and outputs to give meaningful effect to the other principles.
  • Fairness – AI systems should not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals, or create unfair market outcomes.
  • Accountability and governance – Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI life cycle.
  • Contestability and redress – Where appropriate, users, impacted third parties and actors in the AI life cycle should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.

The UK’s existing regulators, including the Medicines and Healthcare products Regulatory Agency and the Competition and Markets Authority, are to produce context-specific guidance for their respective industries. This sector-led approach is intended to exploit the regulators’ industry-specific expertise, allowing each to tailor the implementation of the principles to its own sector.

There is some concern that this approach creates potential overlap between regulators’ jurisdictions, with consequent conflict between their guidance, or that there may be gaps where guidance is needed but not provided by any regulator. The white paper does propose a central function to oversee coherence and consistency across the sector regulators, but the relative power and funding afforded to that central function is yet to be clarified. The UK is therefore seeking to take a less rigid, principles-based approach to regulation, to promote rather than hold back AI innovation, in the hope that regulators will be able to respond quickly and proportionately to future technological advances.

EU - the AI Act approach
The AI Act, on the other hand, proposes to regulate the use and development of AI through the adoption of a ‘risk-based’, top-down legislative approach and the introduction of a new central European AI Board (similar to the European Data Protection Board) that will oversee the implementation of the AI Act by competent authorities in member states.

The EU’s ‘risk-based’ approach allocates AI systems to one of four risk categories, which then determines the regulatory treatment that applies. Some AI will be banned entirely where the risk associated with it is deemed ‘unacceptable’, whereas high risk AI technology will be subject to significant restrictions on its use and implementation. The risk categories are outlined below, followed by a short illustrative sketch of how an organisation might record this classification:

  • Unacceptable risk: A small number of uses are prohibited outright where the risk of the technology is considered unacceptable. These include social scoring by public authorities, real-time biometric identification in public spaces and predictive policing systems based on profiling, location or past criminality.
  • High risk: High risk uses are the most heavily regulated of the permitted uses of AI systems. Examples include uses in critical infrastructure that could put the health or fundamental rights of citizens at risk, and public service uses, such as credit scoring that could ultimately deny a citizen a loan. The category also covers AI in products such as medical devices, vehicles, toys and marine equipment, so a broad range of AI applications could be categorised as high risk. If an AI system is identified as high risk, strict obligations must be complied with before it can be put on the market. There must be:
    • adequate risk assessment and mitigation systems;
    • high quality datasets feeding the system to minimise risks and discriminatory outcomes;
    • logging of activity to ensure traceability of results;
    • detailed documentation providing all information necessary on the system and its purpose for authorities to assess its compliance;
    • clear and adequate information to the user; and
    • appropriate human oversight measures to minimise risk.
  • Limited risk: This category covers AI systems subject to specific transparency obligations. For example, when using AI systems such as chatbots, users should be made aware that they are interacting with a machine.
  • Minimal risk: These are uses of AI that do not fall into the above categories and are therefore not subject to additional obligations under the AI Act.
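
For organisations triaging their own systems, the tiered structure lends itself to a simple internal classification exercise. The Python sketch below is purely illustrative: the tier names, the example mappings and the triage function are assumptions made for this article, and any real classification would turn on the detailed annexes to the AI Act rather than on a lookup of use case descriptions.

from enum import Enum

class RiskTier(Enum):
    # The four risk tiers proposed by the draft EU AI Act
    UNACCEPTABLE = "prohibited outright"
    HIGH = "permitted, subject to strict pre-market obligations"
    LIMITED = "permitted, subject to transparency obligations"
    MINIMAL = "permitted, with no additional obligations under the act"

# Illustrative, non-exhaustive mapping of the example use cases discussed
# above; a genuine assessment would follow the act's annexes.
EXAMPLE_CLASSIFICATIONS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "real-time biometric identification in public spaces": RiskTier.UNACCEPTABLE,
    "credit scoring for consumer loans": RiskTier.HIGH,
    "AI safety component in a medical device": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Default to minimal where a use case is not a known example
    return EXAMPLE_CLASSIFICATIONS.get(use_case, RiskTier.MINIMAL)

for use_case, tier in EXAMPLE_CLASSIFICATIONS.items():
    print(f"{use_case}: {tier.name} ({tier.value})")

Even a rough triage of this kind helps to surface early the systems that would attract the heaviest compliance obligations under the EU regime.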

The AI Act also proposes to establish a publicly available database for registering AI systems classified as high risk. The EU envisages that both regulators and users will use the database to verify that such systems comply with the AI Act’s requirements for high risk systems. Developers of such systems will be required to supply information about their AI systems and a conformity assessment, and to self-report any malfunctions or incidents causing breaches of fundamental rights. The disclosure and compliance obligations on developers under this regime will clearly be extensive and will demand considerable transparency.

Allocation of liability

In respect of the allocation of liability for AI systems, the UK white paper again does not prescribe any particular approach. It acknowledges that liability is an important consideration, but goes on to state that it is ‘too soon to make decisions about liability’.

The white paper indicates that the government will consult with experts, technicians and lawyers to consider how existing frameworks can be adapted for application to AI systems but, at least initially, allocation of liability will be left to regulator guidance adopting a context-based approach in each sector. This is in keeping with the UK’s non-legislative approach to regulation.

The EU, however, has made proposals for allocating liability for AI systems. In September 2022, it proposed two instruments: a new AI Liability Directive and an amendment to the existing Product Liability Directive. Both work in conjunction with the AI Act, providing individual rights and remedies for those who have suffered harm or damage as a result of AI systems that do not comply with the AI Act, and so should be considered part of the EU’s overall approach to AI regulation. The EU’s approach attempts to bring liability and responsibility for damage caused by AI into line with other products and technology regulated by the EU. The directives remain in draft form, but could provide useful clarity on attributing liability for AI.

Copyright considerations

One area in which the white paper received particular backlash from stakeholders was copyright. The paper is silent on this issue and instead references Sir Patrick Vallance’s view that existing intellectual property law is broadly sufficient to deal with copyright protection for computer-generated works, and his support for the proposal to introduce a broad exemption for text and data mining for any use, including commercial exploitation. That exemption would have removed copyright holders’ ability to charge licence fees for commercial usage and was, unsurprisingly, widely opposed by the creative and academic industries.

The government has since rowed back from the proposed exemption and has instead pledged to work with AI developers and rightsholders to produce a code of practice supporting the use of, and access to, copyrighted works to train AI systems. This position also reintroduces the concept of rightsholders granting licences to AI developers for access to such materials for use in training AI systems, which is good news for rightsholders.

The proposed AI Act, by contrast, introduces some particularly stringent compliance requirements in respect of copyright that significantly increase developers’ disclosure obligations. The EU already has a text and data mining exception covering commercial purposes (in certain circumstances), so no such concept is included in the AI Act, but significant amendments have been made to the original draft of the act to respond to the advances in AI since it was first published.

In particular, AI developers, and specifically developers of generative AI, would be required by the AI Act to ‘document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law’. This has been supported by the publishing industry but opposed as technically impossible by AI developers. Failure to comply with these disclosure requirements could result in fines of up to €10 million or 2% of annual turnover, whichever is higher; for any developer with annual turnover above €500 million, the percentage cap would therefore apply. There are a number of potentially ambiguous terms in this wording, such as what counts as copyright-protected training data, but the proposal is a clear move by the EU to improve the transparency of AI systems. It may also enable copyright owners to assess more clearly whether their copyright-protected works have been used to train AI, thus increasing accountability.

Some considerations for businesses

The proposed approaches to regulating AI in the EU and UK are both still at an early stage of development. The challenge of regulating this field is evidenced by the fact that, since the first draft of the AI Act was published, thousands of amendments have been made to the text to keep pace with advances in AI. OpenAI did in fact lobby the EU to amend its original categorisation of generative AI systems as high risk where they generated text or imagery that could ‘falsely appear to a person to be human generated and authentic’.

These issues highlight the potential difficulties that a prescriptive approach to regulation could face, particularly given how significantly the field of AI can advance in a short period of time. This may be why, at least at this stage, the white paper is ambiguous by design, allowing industry to lead and shape AI regulation in the UK. It is clear that both regimes will evolve over time.

Given the already widespread interest in, and adoption of, AI, developers and businesses looking to benefit from AI solutions in the UK and the EU must begin to consider how to develop robust policies and frameworks capable of balancing compliance and innovation in a way that satisfies the requirements of each relevant jurisdiction.

Any organisation looking to deploy AI solutions would be well advised to:

  • identify what AI solutions are being used in their organisation, and for what intended purposes (an illustrative inventory record is sketched after this list);
  • understand how the AI solutions have been trained and whether there are any obvious gaps in training data which could result in inherent system bias. This will also be necessary to ensure explainability, fairness and the safeguarding of rights of individuals;
  • consider and document appropriate risk assessments, using existing compliance frameworks if helpful. Such compliance documentation is likely to also be useful in satisfying the UK white paper principles, given that the overall objectives behind the two regimes are similar. This will need to take into account:
    • the AI Act's risk classifications and associated compliance obligations, as well as the white paper principles;
    • risks that could potentially exist through the use of outputs from any AI system, including how this can be mitigated by improving inputs or introducing additional safeguards to ensure fairness;
    • that the approach to compliance with AI principles and regulation must be looked at alongside other factors, such as data protection compliance and mandatory impact assessments under the UK and EU GDPR (and other privacy laws where relevant), the need to update privacy or other notices to individuals, and a review of intellectual property and liability issues relevant to the use of AI systems.
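
As a practical starting point for the first of these recommendations, a simple internal register of AI systems can capture the inventory, training data provenance and risk assessment in one place. The Python sketch below is a minimal illustration; the record structure and all field names are assumptions made for this article, not any prescribed statutory form.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    # Illustrative inventory entry for one AI solution in use
    name: str
    intended_purpose: str
    training_data_summary: str            # provenance and known gaps or bias risks
    eu_risk_tier: str                     # e.g. "high", "limited", "minimal"
    uk_principles_assessed: list[str] = field(default_factory=list)
    mitigations: list[str] = field(default_factory=list)
    dpia_completed: bool = False          # data protection impact assessment done?
    last_reviewed: date | None = None

# Example entry for a hypothetical credit scoring tool
record = AISystemRecord(
    name="LoanScore",
    intended_purpose="consumer credit scoring",
    training_data_summary="historic lending data; reviewed for demographic gaps",
    eu_risk_tier="high",
    uk_principles_assessed=["fairness", "accountability and governance"],
    mitigations=["human review of declined applications", "activity logging"],
    dpia_completed=True,
    last_reviewed=date(2023, 8, 1),
)
print(record)

Maintained over time, a register of this kind also provides the documentary trail that the AI Act’s conformity requirements and the white paper’s accountability principle are each likely to demand.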
