News and Publications

Uncovering the risks of ChatGPT: a litigator’s perspective

Posted: 18/05/2023


Launched by OpenAI in November 2022, ChatGPT is an artificial intelligence chatbot with impressive abilities. In January 2023, it became the fastest-growing consumer application ever, having amassed over 100 million users. The newest version of the underlying technology, GPT-4, was launched in mid-March 2023.

The chatbot’s popularity stems from its ability to produce lengthy, articulate and (often) accurate responses and analyses across a seemingly infinite range of subject areas. Simply ask it to answer a technical question, produce a piece of marketing material, draft a contract or letter, or even write a poem, and you will receive a humanlike response within seconds. If the outcome is not quite what you wanted, just feed it further instructions (such as ‘more technical’, ‘longer’ or ‘make it rhyme’) and the bot will refine its previous response.

ChatGPT was initially trained on an expansive range of data sources, from websites to newspapers and books, to build its knowledge and understanding of language use. But, unlike similar earlier technologies, the chatbot also continually improves the quality of its responses by learning more about context and language from its interactions with users.

Companies from Apple to Coca-Cola have announced that they are experimenting with the technology, leading investors to consider the potential impact of generative AI on company profitability. Nvidia, whose chips power the chatbot, has seen its share price roughly double this year. On the other side of the coin, study materials company Chegg has suffered a huge hit to its share price in 2023, with shares falling 48% last week.

The benefits of this technology will be huge. Its ability to churn out technical and inventive responses is quite extraordinary. But what are the risks of ChatGPT from a litigator’s perspective?

‘The risks of using ChatGPT from a litigator’s perspective include potential inaccuracies, privacy and confidentiality concerns, bias and discrimination, ethical concerns, and potential liability. It is important to be aware of these risks and mitigate them.’

That’s what the chatbot itself had to say on the matter. It is not a bad summary but, as flagged in the last sentence, it is necessary to delve deeper into the legal risks posed and assess how best to mitigate them.

Bias

It is well known that humans are susceptible to bias, however unintentional or unconscious. These biases ultimately stem from exposure to a limited pool of facts and events. The same risk applies to ChatGPT. The chatbot forms its understanding of facts from the enormous quantities of data on which it has been trained, and specific leanings and biases within that data can undoubtedly lead it to produce biased responses.

Right-wing commentators have reportedly criticised the ‘liberal bias’ built into ChatGPT’s responses. Whatever your political leaning, it is important to bear in mind that responses may not be devoid of opinions the chatbot has derived from its data sources.

This potential for bias is one for litigators’ radars. It is easy to imagine a scenario where a user takes a generated answer at face value and circulates its content as fact. If answers contain political or social biases, the user could find themselves on the receiving end of discrimination or defamation claims.

Inaccuracy

Linked to the bias risk is that of inaccuracy. A quick Google search pulls up countless instances where ChatGPT has produced responses that are downright wrong. It has incorrectly stated the size of countries, seemingly invented legal cases and provided links to non-existent academic articles.

Users who take these answers at face value and act in reliance on them are in grave danger of litigation. A blind assumption of ChatGPT’s accuracy could lead to false information being given to clients, customers or even the wider public.

OpenAI is not always able to pinpoint exactly why inaccurate results are sometimes produced. However, a serious limitation on the bot’s accuracy lies in the fact that it was trained on data produced before September 2021, so its knowledge is effectively a snapshot of the data that existed at that time. For example, when asked about the last earthquake in Turkey, ChatGPT’s responses assume that the user is referring to the 2020 earthquake, rather than the most recent one in February 2023.

This limitation should be kept in mind by those using the product to obtain information on current events or recent developments. Take lawyers, for example. We asked ChatGPT about the employment law case of The Harpur Trust v Brazel and the bot produced an accurate summary of the Court of Appeal’s judgment, but completely ignored the fact that the Supreme Court has since heard the appeal. For lawyers and professionals in other fields, blind reliance could lead to reputational damage and a risk of professional negligence litigation.

One example of just that comes from Australia, where a mayor revealed plans last month to bring a defamation claim after ChatGPT wrongly stated that he had been imprisoned for bribery, when in reality he had been a whistleblower in an alleged bribery scandal and was never charged with a crime. The case suggests that there may well be future instances in which public figures are defamed by incorrect ChatGPT-generated text.

Copyright

Copyright is an intellectual property (IP) right over literary, musical or artistic works (these can range from a song lyric to text on a website). Although ChatGPT produces its own responses, the breadth of its sources creates a risk that copyright material is reproduced in its answers.

As an AI product, ChatGPT itself is incapable of owning copyright. Under the ChatGPT terms of use, any IP rights in the responses are assigned to the user. However, users are unfortunately given no indication of whether a response draws on pre-existing written material to an extent that infringes any IP rights. The usage requirements section of the terms makes it clear that this responsibility falls on the user.

This is extremely important for users to note. From a legal perspective, it would be unwise to circulate or republish ChatGPT responses, as doing so runs the risk of copyright infringement proceedings. Instead, the responses should be used as inspiration, or as a starting point for research.

In February, Getty Images filed a lawsuit in Delaware against AI company Stability AI Inc. Getty Images claims that the AI tool, which produces images based on text prompts, misused over 12 million of its images. It would not be surprising to see similar claims emerging in the ChatGPT world.

Microsoft, GitHub and OpenAI are facing a US class action over the creation of AI-powered coding assistant GitHub Copilot. The companies are accused of violating copyright law after the coding assistant was found to reproduce long sections of licensed code without crediting the owners.

The copyright concern has recently entered the music world. Universal Music Group has told streaming platforms, including Spotify and Apple, to block artificial intelligence services from taking lyrics from their copyrighted songs.

For a more detailed analysis of the copyright issues that generative AI throws up, read our commercial team’s recent article: Exploring generative AI: ChatGPT, DALL-E and the copyright conundrum.

Privacy and confidentiality

The risk of dissemination of confidential data must also be noted. Take an example where a business user inputs a client’s personal data into the chatbot to obtain a personalised solution to an issue. The bot will then use this information to improve its understanding of context and language. There is a consequential risk that, in the absence of the data subject’s consent, the personal data will be repurposed in responses to future users’ questions.

Italy has become the first Western country to impose a temporary ban on the use of ChatGPT following a recent cyber security breach, which involved users being shown excerpts of other people’s ChatGPT conversations and their financial information. This knee-jerk ban marks the first regulatory action taken in relation to the chatbot. The Italian authorities have launched an investigation to determine whether there is a legal basis for the technology’s mass collection and storage of data.

Ethical concerns

We know that AI brings with it potential ethical concerns, which is why the EU’s proposed AI Regulation has ‘trustworthy AI’ at its heart. With the UK taking a less restrictive approach to AI, ChatGPT users could be exposed in several ways. They could inadvertently find themselves in unethical positions (eg discriminating against others or breaching someone else’s privacy or legal rights); they could be vulnerable to the advice that ChatGPT gives (there has already been a reported case of an AI chatbot convincing a man to take his own life); or the user may themselves intend to use ChatGPT for unethical purposes.

More than 1,000 tech researchers and executives have signed an open letter calling for a ‘pause’ on the availability of powerful AI technology such as ChatGPT. Signatories include Elon Musk, as well as the co-founders of Apple, Pinterest and Skype. The letter warns of the dangers of the ‘out-of-control’ race between AI labs and raises concern about the impact on employment and public discourse.

We are yet to see a negative impact on employment, but labour rights groups such as the TUC have voiced concerns about the acceleration of these types of technology, especially given the lack of regulation and the potential risks of discrimination, as outlined earlier in this article.

There is also a risk of the chatbot being used for illegal and unethical purposes. Analysts have warned that the technology can assist threat actors seeking to hack networks, by enabling those with little coding experience to write malware and by automating parts of the hacking process.

Conclusion

ChatGPT offers a seemingly endless list of opportunities. Across industries, the technology is set to improve the way we work, the material we produce and the quality and accuracy of our content.

However, although legal challenges involving ChatGPT are yet to flood the headlines, the risks are clear and should not be underestimated. As with any novel technology, we must navigate these exciting (but uncertain) opportunities with caution and continue to monitor AI with our eyes wide open.

This article was co-written with Katie Barnes-Monaghan, trainee solicitor in the commercial dispute resolution team.
