Artificial intelligence (AI) has rapidly emerged as a transformative technology in recent years, with the potential to revolutionise a range of industries and aspects of our daily lives. AI refers to the development of computer systems that can perform tasks that would normally require human intelligence, such as perception, reasoning, learning, and decision-making. AI is already being used in a variety of industries, including healthcare, finance, transportation, and manufacturing, among others. While AI has the potential to bring many benefits, it is important to also consider the ethical and social implications of its use. As AI continues to evolve, it will be important to ensure that it is used in ways that benefit society as a whole and that appropriate safeguards are in place to protect privacy and prevent discrimination.
The above paragraph was drafted entirely by OpenAI’s ChatGPT. This AI-driven chatbot provides answers to user prompts ranging from requests for cooking recipes to marketing content and, as the introduction to this article shows, it is demonstrably competent in delivering results. It can even be used to code websites from very sparse prompts, as demonstrated by the OpenAI team, which showcased GPT-4’s capabilities by sketching a plan for a website on the back of a napkin, photographing it, and inputting the image into the model, which then produced the code for the desired website.
Current applications of AI already stretch beyond chatbots and permeate most industries. AI is used to research new treatments and drugs, to produce stock market trading algorithms and personalised financial advice, for fraud detection, and for customer data analysis to improve marketing campaigns. The benefits of AI in such applications vary, but AI-driven processes can generally exceed human performance in efficiency and accuracy. AI also allows vast amounts of information to be analysed at speeds far beyond human capability, as is evident in AI-assisted drug research, for example.
Whilst the benefits of AI are extensive, there are nevertheless significant ethical and legal challenges accompanying the technology that must be considered as AI continues to improve and advance. For this reason, it is important that, at least in the near future, AI is monitored by humans. The focus of this article is on the legal issues related to content-generating AI such as ChatGPT and DALL-E.
The technology most prominent in the public consciousness today is generative AI, including ChatGPT, but also DALL-E (also created by OpenAI), which generates images from user-supplied descriptions. Each produces its own content which, provided the copyright originality requirement is met (itself a separate discussion altogether), raises IP issues surrounding the ownership of the generated content. Section 9(3) of the UK Copyright, Designs and Patents Act 1988 (CDPA) sets out the UK’s regime on copyright ownership for computer-generated works:
‘(3) In the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken.’
The wording here does not adequately resolve the issue of ownership: it is not clear who ‘the person by whom the arrangements necessary for the creation of the work are undertaken’ is in a scenario in which a user provides prompts for the AI generator to produce outputs. There has also been little case law to assess the meaning of the legislation as it applies to AI.
Nova Productions Ltd v Mazooma Games Ltd [2007] EWCA Civ 219 did consider this issue to some extent, although its facts are not entirely analogous to an AI-generated scenario. Nevertheless, the finding that the creators of the game, rather than the player whose play produced various in-game screenshots, were the authors and copyright owners of those screenshots is useful in understanding the direction in which s.9(3) may be interpreted. It may therefore suggest that the copyright owner of AI-generated images and literary content is the creator of the AI technology, rather than the user. Additionally, provided that the user-generated input is itself copyrightable, the user will own the copyright in the prompts used to generate such content.
Equally, the training that such AI technology undergoes to deliver these outputs poses interesting questions about copyright ownership. AI like ChatGPT uses a technique called ‘deep learning’ that mirrors how humans accumulate knowledge in order to learn and acquire skills, only at a far greater speed. Deep learning uses algorithms that repeatedly perform certain tasks, improving the result each time; for example, answering questions about science or generating images. To improve its outputs, the algorithm must be fed significant amounts of information from which to learn. In this example, the algorithm would be given access to considerable amounts of scientific information, data and research, or, for image generation, to artworks and photography.
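The ‘repeat the task, measure the error, improve’ loop described above can be illustrated with a deliberately tiny sketch. Real systems such as ChatGPT train neural networks with billions of parameters on vast datasets; here a single-parameter model fitted by gradient descent stands in for the same idea. All names and data below are illustrative, not OpenAI’s actual training code.

```python
# A minimal sketch of the iterative loop underlying deep learning:
# perform a task, measure the error, and nudge the model to reduce it.

# Toy "training data": inputs paired with desired outputs (here y = 3x).
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0), (4.0, 12.0)]

weight = 0.0          # the model's single learnable parameter
learning_rate = 0.01  # how strongly each error nudges the weight

for epoch in range(200):                     # repeat the task many times...
    for x, target in data:
        prediction = weight * x              # perform the task
        error = prediction - target          # measure how wrong it was
        weight -= learning_rate * error * x  # ...improving each time

print(round(weight, 2))  # converges towards 3.0
```

The pattern is the same whether the ‘data’ is four number pairs or millions of copyright-protected images and texts, which is precisely why the training stage is where the licensing questions discussed below arise.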
These questions arise because many copyright owners require commercial users to enter into fee-paying licences before using the kinds of content that deep learning algorithms consume. Training AI technology like ChatGPT or DALL-E would constitute a commercial purpose, so the question is whether companies like OpenAI are infringing copyright owners’ rights when using these resources.
This has already led to litigation around the globe. A class action has commenced in the US against GitHub alleging, amongst other issues, that GitHub Copilot was trained on, and produces outputs using, code posted by creators on GitHub under 11 open-source licences, and that attribution requirements, including the author’s name and copyright notice, have not been complied with.
In the UK, stock image company Getty Images announced that it is bringing an action against Stability AI over the practices used to train its AI model. Getty Images argues that the AI unlawfully copied and processed millions of copyright-protected images without a licence in order to improve its outputs. The outcome of these cases could be pivotal to the use and implementation of AI, and may shape how copyright is licensed in the future.
How such issues will be dealt with, beyond the courts, remains very much uncertain. There is currently no legislation in the UK that directly regulates AI, and the government’s stance does not suggest that extensive regulation will arrive in the near future. In 2022 the government published a policy paper outlining its approach to AI, indicating that, whilst data security and safety relating to AI would be reviewed, it would generally favour innovation in the field over a regulation-heavy regime.
The government also indicated that the law relating to computer generated works would not change in the immediate future, meaning that the existing position concerning copyright protection will be retained. It is likely therefore that the UK will not make major changes to legislation governing AI and copyright until it is forced to do so, perhaps as a result of case law such as the Getty Images case. With AI capabilities accelerating, it may be that 2023 is the year that such regulatory change will in fact be triggered.
Whilst legislation is not yet keeping pace with generative AI, its use by businesses and their employees is accelerating, particularly in industries such as marketing where the production of content is paramount. Even without settled regulation to guide them, there are steps businesses can take to use generative AI effectively as a tool that improves efficiency, output and accuracy, whilst minimising risk.