AI, art and global approaches to copyright law: US Supreme Court declines to review the case of Thaler v Perlmutter
In March 2026, the US Supreme Court declined to hear Stephen Thaler’s petition for certiorari (a request for an appeal), leaving in place the decision of the US Court of Appeals for the DC Circuit that AI alone cannot create a work eligible for copyright protection under the Copyright Act of 1976.
The refusal to review the case reinforces the long-standing requirement of human authorship at a time when AI-driven creativity continues to test traditional legal frameworks.
In 2018, Dr Stephen Thaler made headlines when he filed a US copyright application for A Recent Entrance to Paradise, an artwork generated entirely by an AI system. The application listed the AI as the sole author. The US Copyright Office rejected the application on the basis that copyright subsists only in works created by human authors and that non-human entities cannot be recognised as authors or co‑authors. Readers may recall that a similar principle was applied, albeit in very different circumstances, in the ‘monkey selfie’ case.
What followed was a series of administrative challenges and appeals, culminating in the DC Circuit’s March 2025 judgment, which re-affirmed the District Court’s 2023 position that ‘human authorship is a bedrock requirement of copyright’. The court emphasised that copyright protects original human expression and that the statute does not extend to autonomously generated material. Thaler’s subsequent petition to the Supreme Court, arguing that the refusal to recognise AI authorship would chill creative experimentation, was denied on 2 March 2026. Although this brings the litigation to an end, significant questions remain about the boundary between human‑directed and AI‑generated creativity.
Many commentators have welcomed the Supreme Court’s decision to let the lower court ruling stand, viewing it as a necessary safeguard for human creativity and the commercial value of human-authored works. The ruling also serves as a caution for businesses and creators: however sophisticated and compelling AI outputs may be, purely AI-generated material remains outside the scope of copyright protection. Organisations may therefore need to ensure that human creative contributions are sufficiently documented and embedded in their workflows to secure protectable rights.
This direction of travel is likely to influence global approaches to AI and authorship. In the UK, the position is broadly aligned in requiring human authorship for standard works. However, the UK retains a unique statutory category of ‘computer‑generated works’ under section 9(3) of the Copyright, Designs and Patents Act 1988, which attributes authorship to the person who makes the ‘arrangements necessary’ for the creation of such works. That person might typically be the programmer of the system.
This provision has never been tested in the context of modern generative AI, and its scope, reliability and enforceability remain uncertain. The primary attribution of authorship under the 1988 Act (section 9(1)) is to ‘the person who creates [the work]’. It is therefore possible that a user of generative AI who contributes sufficient originality and direction could be deemed the author, on the basis that they have used the AI as a tool to undertake the arrangements necessary for the work’s creation. The broader debate has been highly charged, with creative industry figures warning against reforms that would dilute protection for human creators.
Recent case law illustrates the complexity of the landscape, with other cases focusing on whether the training of an AI tool infringes the copyright in materials used for that purpose. In November 2025, the High Court delivered judgment in Getty Images v Stability AI, finding in favour of Getty on trade mark infringement but dismissing its secondary copyright infringement claim. The court held that Getty had not established that its images were used in the relevant training of Stability AI’s models, highlighting the evidential challenges rights‑holders face in proving the use of specific works in large-scale datasets.
A recent report by the House of Lords Communications and Digital Committee has since concluded that the UK should not introduce new copyright exceptions for AI training. Instead, it recommends developing a ‘fair and inclusive’ licensing market to ensure that creators are compensated when their works contribute to AI‑generated outputs. Both the US and UK appear committed to maintaining the current legal framework, with any legislative intervention likely to be incremental and contested.
AI is becoming deeply embedded in creative practice, expanding what artists, designers and businesses can achieve. Sir Wayne McGregor, resident choreographer of The Royal Ballet, exemplifies this trend. His recent exhibition at Somerset House, Infinite Bodies, showcased AI‑driven visual representations of movement and dance, demonstrating how technology can augment artistic expression.
As creators explore new mediums, it is increasingly important to understand both the risks and the legal direction of travel. The combination of rapid technological change and the need for case law to evolve with it raises questions about how long the current approach will endure. For now, however, the position is clear: fully autonomous AI-generated works are not eligible for copyright protection, and the focus has shifted to how human involvement in AI-assisted creation can be structured, evidenced and protected.
Comparison of US/UK copyright protections
| Issue | United States | United Kingdom (England & Wales) |
| --- | --- | --- |
| Core requirement for protection | Human authorship is essential. AI‑generated material without meaningful human creative input is not protectable. | Human authorship is also required for standard works, but the UK uniquely recognises computer‑generated works under section 9(3) of the CDPA 1988. |
| Status of AI as an author | AI systems cannot be authors or co‑authors. The Copyright Office rejects registrations listing AI as an author or joint author. | AI cannot be an author. For computer‑generated works, authorship is attributed to the person who made the ‘arrangements necessary’ for creation, though this has never been tested with modern generative AI. |
| Threshold for human input | Requires ‘sufficient human creativity’ and ‘originating expression’. Prompting alone may be insufficient unless it reflects creative choices. | No clear judicial test. For computer‑generated works, the threshold for ‘arrangements necessary’ is uncertain and may not guarantee strong or enforceable rights. |
| Treatment of fully autonomous AI output | Not eligible for copyright protection. No fallback category. | Potentially protected under section 9(3), but the scope, duration (50 years), and enforceability remain unclear and untested. |
| Approach to AI training data | Litigation is ongoing. Courts focus on whether specific copyrighted works were used in training. No statutory exception for training. | No new exceptions for AI training. Rights‑holders must prove their works were used. The government and the Lords Committee favour licensing markets over legislative reform. |
| Recent key case | Thaler v Perlmutter (2026 cert denial) confirms AI‑only works cannot be copyrighted. | Getty Images v Stability AI (2025) highlights evidential hurdles in proving training data infringement. |
| Policy direction | Strong judicial emphasis on human creativity and authorship. Legislative change unlikely in the near term. | Government signals no immediate reform. Emphasis on licensing solutions and maintaining existing copyright framework. |
| Commercial implications | Businesses must ensure human creative contribution is documented to secure protection. Purely AI-generated assets carry no copyright. | Businesses may rely on section 9(3) but should treat it cautiously. Hybrid human/AI workflows offer more reliable protection. |
This article was co-written by Lucy Aylmer, paralegal.