
Exploring generative AI: ChatGPT, Dall-E, and the copyright conundrum in the UK

Posted: 30/03/2023


Background to generative AI

Artificial intelligence (AI) has rapidly emerged as a transformative technology in recent years, with the potential to revolutionise a range of industries and aspects of our daily lives. AI refers to the development of computer systems that can perform tasks that would normally require human intelligence, such as perception, reasoning, learning, and decision-making. AI is already being used in a variety of industries, including healthcare, finance, transportation, and manufacturing, among others. While AI has the potential to bring many benefits, it is important to also consider the ethical and social implications of its use. As AI continues to evolve, it will be important to ensure that it is used in ways that benefit society as a whole and that appropriate safeguards are in place to protect privacy and prevent discrimination.

Applications and benefits

The above paragraph was drafted entirely by OpenAI’s ChatGPT. This AI-driven chatbot provides answers to user prompts ranging from requests for cooking recipes to marketing content and, as the introduction to this article shows, it is demonstrably competent in delivering results. It can even be used to code websites from very low-detail prompts, as demonstrated by the OpenAI team, which showcased the new GPT-4’s capabilities by sketching a plan for a website on the back of a napkin, photographing it, and inputting the image into ChatGPT, which then produced the code for the desired website.
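By way of illustration, interacting with ChatGPT programmatically is itself straightforward. The following is a minimal sketch only, assuming the official ‘openai’ Python package as it existed at the time of writing (pre-1.0) and an API key; the model name and prompt are illustrative, not prescriptive:

    # Minimal sketch of sending a prompt to ChatGPT programmatically.
    # Assumes the official 'openai' Python package as it existed at the
    # time of writing (pre-1.0) and an API key in the environment; the
    # model name and prompt are illustrative only.
    import os

    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]  # never hard-code keys

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative; GPT-4 access may be gated
        messages=[
            {"role": "user",
             "content": "Draft a short marketing tagline for a bakery."},
        ],
    )

    # The generated text comes back as a chat message from the model.
    print(response["choices"][0]["message"]["content"])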

Current applications of AI already stretch beyond chatbots and permeate most industries. AI is used to research new treatments and drugs, to power stock market trading algorithms and personalised financial advice, to detect fraud, and to analyse customer data to improve marketing campaigns. The benefits of AI in such applications and beyond vary, but AI-driven processes can generally increase efficiency and accuracy over human performance. AI also allows significant amounts of information to be analysed at speeds far exceeding human capability, as is evident in AI drug research, for example.

Whilst the benefits of AI are extensive, significant ethical and legal challenges nevertheless accompany the technology and must be considered as it continues to improve and advance. For this reason, it is important that, at least in the near future, AI is monitored by humans. The focus of this article is on the legal issues related to content-generating AI such as ChatGPT and Dall-E.

Potential legal issues

The technology most prominent in the public consciousness today is generative AI, including ChatGPT, but also Dall-E (also created by OpenAI), which generates images from user-provided descriptions. Each produces its own content which, provided the copyright originality requirement is met (itself a separate discussion altogether), introduces IP issues surrounding the ownership of the generated content. S.9(3) of the UK Copyright, Designs and Patents Act 1988 (CDPA) outlines the UK’s regime on copyright ownership for computer-generated works:

‘(3) In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’

The wording here does not adequately resolve the issue of ownership, as it is not clear who ‘the person by whom the arrangements necessary for the creation of the work are undertaken’ is in a scenario in which a user provides prompts for the AI generator to produce outputs. There has also been little case law to assess the meaning of the legislation as it applies to AI.

Nova Productions Ltd v Mazooma Games Ltd [2007] EWCA Civ 219 did consider this issue to some extent, although the facts are not entirely analogous to an AI-generated scenario. Nevertheless, the finding that the creators of the game were the authors, and copyright owners, of various screenshots made by a player playing the game is useful in understanding the direction in which s.9(3) may be interpreted. It may therefore suggest that the copyright owner of AI-generated images and literary content is the creator of the AI technology, rather than the user. Additionally, provided that the user’s input is itself copyrightable, the user will own the copyright in the prompts used to generate such content.

ChatGPT’s terms of use indicate that OpenAI assigns to the user ‘all its right, title and interest’ in the content generated by the chatbot, which indicates that, at least in the case of ChatGPT, the user in fact owns the output. This is clouded further by the terms of use, as OpenAI acknowledges that the output of ChatGPT may not be unique, and therefore multiple users may receive the same or similar results, which raises issues of copyright enforceability between recipients of AI-generated content. However, OpenAI’s position is not indicative of the approach to ownership of output across the board, nor is the existing law anywhere near settled on this point. The position will continue to change as AI and its uses advance.

Equally, the training that such AI technology undergoes to deliver these outputs poses interesting questions related to copyright ownership. AI like ChatGPT uses a technique called ‘deep learning’ that mirrors how humans accumulate knowledge in order to learn and acquire skills, only at a far greater speed and scale. Deep learning utilises algorithms that repeatedly perform certain tasks, each time improving the result; for example, responding to questions about science or generating images. To improve its results, the algorithm must be fed significant amounts of information from which to learn. In these examples, the algorithm would be given access to considerable amounts of scientific information, data and research, or to artworks and photography.
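To make the iterative loop described above concrete, the following toy sketch repeatedly shows a two-parameter model some examples and nudges its parameters to reduce its error. It illustrates the principle only, and is not how ChatGPT or Dall-E are actually built:

    # Toy illustration of iterative training: fitting y = 2x + 1 by
    # gradient descent. Deep learning models work the same way in
    # principle, but with millions of parameters and vast datasets.
    data = [(x, 2 * x + 1) for x in range(10)]  # the 'training data'

    w, b = 0.0, 0.0        # model parameters, initially uninformed
    learning_rate = 0.01

    for epoch in range(1000):            # repeatedly perform the task...
        grad_w = grad_b = 0.0
        for x, y in data:
            error = (w * x + b) - y      # how wrong the model currently is
            grad_w += 2 * error * x
            grad_b += 2 * error
        # ...each time nudging the parameters to improve the result
        w -= learning_rate * grad_w / len(data)
        b -= learning_rate * grad_b / len(data)

    print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1

The quality of what the model learns depends entirely on the data it is fed, which is why access to large volumes of existing content, and the rights attaching to that content, matters so much.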

The questions arise because many copyright owners require commercial users to enter into fee-paying licences to use the kinds of content on which deep learning algorithms are trained. Training AI technology like ChatGPT or Dall-E would constitute a commercial purpose, which poses the question of whether companies like OpenAI are infringing copyright owners’ rights when using these resources without such licences.

This has already led to litigation around the globe. A class action has commenced in the US against GitHub alleging, amongst other issues, that GitHub Copilot was trained on, and gives outputs derived from, code posted on GitHub by creators under 11 open-source licences, and that attribution requirements, including the author’s name and copyright notice, have not been complied with.

In the UK, stock image company Getty Images announced that it is bringing an action against Stability AI over the practices used in training its AI model. In this instance, Getty Images argues that the AI unlawfully copied and processed millions of copyright-protected images without a licence in order to improve its outputs. The outcome of these cases could be pivotal to the use and implementation of AI, and may shape how copyright is licensed in the future.

Regulatory developments

How such issues will be dealt with, beyond the courts, is very much uncertain. There is currently no legislation in the UK that directly regulates AI, and the government’s current stance does not suggest that there will be large swathes of regulation in the near future. In 2022 the government published a policy paper outlining its approach to AI, indicating that, whilst data security and safety relating to AI would be reviewed, it would generally favour supporting innovation in the field over a regulation-heavy regime.

The government also indicated that the law relating to computer generated works would not change in the immediate future, meaning that the existing position concerning copyright protection will be retained. It is likely therefore that the UK will not make major changes to legislation governing AI and copyright until it is forced to do so, perhaps as a result of case law such as the Getty Images case. With AI capabilities accelerating, it may be that 2023 is the year that such regulatory change will in fact be triggered.

Actions for businesses

Whilst legislation is not yet keeping pace with generative AI, its use by businesses and their employees is accelerating, particularly in industries such as marketing where the production of content is paramount. Although businesses lack the traditional crutch of regulation to guide them, there are still steps they can take to utilise generative AI effectively as a tool that improves efficiency, output and accuracy, whilst minimising risks.

It is crucial that businesses have an accurate and up-to-date understanding of any generative AI used by their staff. This can be achieved by producing an ‘approved list’ of AI tools that have been vetted for use within the organisation. Such vetting would include analysing each tool’s terms of use to understand whether use of its output would constitute infringement of any IP rights, and could also include an analysis of the tool’s outputs for accuracy and reliability. This approach provides a guide as to how the tools can be used and can reduce the potential risk of liability for IP infringement.

The use of AI tools should be continually monitored, and the AI strategy generally kept under review. The capabilities of generative AI are changing rapidly and so too will the contractual terms of use and (eventually) the law in this area, and businesses need to be prepared.

