On 14 June 2023, the European Parliament voted overwhelmingly to adopt its negotiating position on the proposed European Union Artificial Intelligence Act (AI Act). The act aims to regulate, in considerable detail, the use and development of artificial intelligence (AI) across the EU. The next stage is for trilogue discussions to commence between the European Commission, Parliament and Council to agree the text of the AI Act, with a view to it becoming law by the end of the year.
The EU AI Act in its current form represents a different approach to the UK’s current proposals, which were set out in the government’s white paper in March 2023. In its draft form, the AI Act had already received significant criticism and opposition from the AI community, including OpenAI, which has lobbied against its proposed approach to regulation. With these early approaches, the UK and EU are both motivated by the prospect of leading the global market in AI through fostering innovation, as well as world-leading regulation.
In practice, developers looking to a global market should assess their compliance with each regime and consider how to balance the distinct approaches to AI regulation in the UK and EU, with an eye also to further guidance or regulation that may come from important markets such as the USA. Common themes across international regulations will be welcomed by those already using, or seeking to use, AI solutions, as they ease potential compliance burdens.
Even though the UK and EU have taken different approaches, as this article will explore in more detail, their overall objectives are similar in seeking to balance risks to users and the general public, whilst supporting investment and innovation in the field of AI.
The white paper on AI, which follows the 2022 policy paper, Establishing a pro-innovation approach to AI regulation, outlines the UK's pro-innovation approach and states that 'while we should capitalise on the benefits of these technologies, we should also not overlook the new risks that may arise from their use'. Similarly, the objectives of the AI Act are to:

- ensure that AI systems placed on the EU market are safe and respect existing law on fundamental rights and EU values
- ensure legal certainty in order to facilitate investment and innovation in AI
- enhance governance and the effective enforcement of existing law applicable to AI systems
- facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation
UK - the white paper approach
Unlike the new AI Act introduced by the EU, the UK’s white paper indicates that, at least at this stage, there are no plans to introduce new legislation to deal specifically with AI.
Instead, the white paper proposes five principles of AI governance.
The five principles are as follows:

- safety, security and robustness
- appropriate transparency and explainability
- fairness
- accountability and governance
- contestability and redress
The UK’s existing network of regulators, including the Medicines and Healthcare products Regulatory Agency and the Competition and Markets Authority, is to produce context-specific guidance for the industries each regulates. This sector-led approach is intended to exploit the regulators’ industry-specific expertise so that implementation of the principles is tailored to each sector.
There is some concern that this approach creates potential overlap between regulators’ jurisdictions, and therefore conflict between their guidance, or gaps where guidance is needed but not provided by any regulator. The white paper does propose a central regulatory function to oversee cohesion and consistency across the sector regulators, but the relative power and funding afforded to that central function are yet to be clarified. The UK is therefore taking a less rigid, principles-based approach to regulation in order to promote, rather than hold back, AI innovation, in the hope that this will enable regulators to respond quickly and proportionately to future technological advances.
EU - the AI Act approach
The AI Act, on the other hand, proposes to regulate the use and development of AI through the adoption of a ‘risk-based’, top-down legislative approach and the introduction of a new central European AI Board (similar to the European Data Protection Board) that will oversee the implementation of the AI Act by competent authorities in member states.
The EU ‘risk-based’ approach to regulation allocates AI systems to one of four risk categories, which then determine the regulatory requirements that apply. Some AI will be banned entirely where the risk associated with it is deemed to be ‘unacceptable’, whereas certain high risk AI technology may be subject to increased restrictions on its use and implementation. The risk categories are outlined below:

- unacceptable risk: AI systems considered a clear threat to safety or fundamental rights (such as social scoring by public authorities) will be prohibited
- high risk: AI systems used in sensitive areas (such as critical infrastructure, education, employment and law enforcement) will be permitted subject to strict requirements and conformity assessment
- limited risk: AI systems such as chatbots will be subject to transparency obligations, so that users know they are interacting with AI
- minimal risk: all other AI systems, which remain subject only to existing law
The AI Act proposes to establish a publicly available database for registering AI systems that are classified as high risk. The EU envisions that both regulators and users will use the database to verify that such AI systems are compliant with the requirements for high risk systems in the AI Act. Developers of such systems will be required to supply information about their AI systems, as well as a conformity assessment, and to self-report any malfunctions or incidents causing breaches of fundamental rights. It is clear that the disclosure and compliance obligations of developers under this regime will be extensive and require considerable transparency to ensure compliance.
In respect of the allocation of liability for AI systems, the UK white paper again does not prescribe any particular approach. It states that the government recognises that liability is an important consideration, but that it is ‘too soon to make decisions about liability’.
The white paper indicates that the government will consult with experts, technicians and lawyers to consider how existing frameworks can be adapted for application to AI systems but, at least initially, allocation of liability will be left to regulator guidance adopting a context-based approach in each sector. This is in keeping with the UK’s non-legislative approach to regulation.
The EU has however made a proposal for allocating liability for AI systems. In September 2022, the EU proposed two instruments: a new AI Liability Directive, and an amendment to the existing Product Liability Directive. Both directives work in conjunction with the AI Act, in that the directives provide individual rights and remedies for those who have suffered harm or damage as a result of AI systems that are not compliant with the AI Act, so these directives should be considered as part of the EU approach to AI regulation. The EU’s approach attempts to bring AI liability and responsibility for damage into line with other products and technology regulated by the EU. The directives are in draft form, but could provide useful clarity with regard to attributing liability for AI.
One area in which the white paper received particular backlash from stakeholders is copyright. The white paper is silent on this issue and instead references Sir Patrick Vallance’s review, which concluded that existing intellectual property law was broadly sufficient to deal with copyright protection for computer-generated works, and which supported the proposal to introduce a broad exemption for text and data mining for any use, including commercial exploitation. The proposed text and data mining exemption would have removed copyright holders’ rights to charge licence fees for commercial usage, and was widely opposed by the creative and academic industries.
The government has since rowed back from this proposed exemption, and has instead pledged to work with AI developers and rightsholders to produce a code of practice to support the use of, and access to, copyrighted works to train AI systems. This position also reintroduces the concept of rightsholders granting licences to AI developers for access to such materials for use in training AI systems, which is good news for rightsholders.
The proposed AI Act, however, introduces some particularly stringent compliance requirements in respect of copyright that significantly increase developers’ disclosure obligations. The EU already has a text and data mining exception for commercial purposes (in certain circumstances), so such a concept is not included in the AI Act, but significant amendments have been made to the original draft of the AI Act to respond to the advances in AI since it was published.
In particular, AI developers, and specifically developers of generative AI, would be required by the AI Act to ‘document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law’. This has been supported by the publishing industry but opposed as technically impossible by AI developers. Failure to comply with these disclosure requirements could result in fines of up to €10 million or 2% of annual turnover, whichever is higher. There are a number of potentially ambiguous terms in this wording, such as what counts as copyright-protected training data, but this proposal is a clear move by the EU to improve the transparency of AI systems. It may also enable copyright owners to assess more clearly whether their copyright-protected works have been used to train AI, thus increasing accountability.
The proposed approaches to regulating AI in both the EU and UK are still in the early stages of development. The challenges of seeking to introduce regulation were evidenced by the fact that since the first draft of the AI Act was published, thousands of amendments were made to the text to keep pace with advances in AI. OpenAI in fact lobbied the EU to amend its original categorisation of generative AI systems as high risk if they generated text or imagery that could ‘falsely appear to a person to be human generated and authentic’.
These issues highlight the potential difficulties that a prescriptive approach to regulation could face, particularly in light of how significantly the field of AI can advance in a short period of time. This may be a reason why, at least at this stage, the white paper provides ambiguity by design to allow the industry to lead and shape AI regulation in the UK. It is clear that the respective regimes will evolve over time.
Given the already widespread interest in, and adoption of, AI, developers and businesses looking to benefit from AI solutions in both the UK and the EU must begin to consider how to develop robust policies and frameworks capable of balancing compliance and innovation in a way that satisfies the requirements of each relevant jurisdiction.
Any organisation looking to deploy AI solutions is recommended to:
This article was co-written with Alison Ross, trainee solicitor in the IP, IT and commercial team.