By: Stephen Morrissey, Christy Mellifont and Katie Kyung

GEMA v OpenAI


In our Technology Disputes in Focus series, WK’s Cyber Privacy & Technology team provides updates on the latest developments in technology disputes.

Following a recent German decision involving OpenAI, this second edition explores potential liability for AI providers for copyright infringement and compares international and domestic responses to the issue.

GEMA v OpenAI

On 11 November 2025, a German court found that OpenAI had breached German copyright law, ruling that both the use of copyrighted works to train large language models (LLMs) and the reproduction of those works in AI-generated outputs constituted copyright infringement. The case highlights the growing legal focus on how artificial intelligence systems interact with protected works. The decision also clarifies that the use of copyrighted material in AI training and outputs can raise liability concerns under German law, signalling potential compliance challenges for technology providers operating in Europe and elsewhere (including Australia).[1]

The case was brought by GEMA, Germany’s music rights society, which argued that OpenAI unlawfully used German song lyrics to train its models without obtaining licences. According to GEMA, this enabled ChatGPT to reproduce substantial portions of protected works, amounting to an “unauthorised reproduction” under German copyright law (sections 15, 16, and 19a of the Urheberrechtsgesetz (UrhG)).

OpenAI maintained that its models do not store specific data but rather generate outputs based on statistical patterns, asserting that any reproduction falls under the EU text-and-data-mining (TDM) exceptions and that responsibility for outputs rests with users. The TDM exception is a legal provision under copyright law that allows certain uses of copyrighted works, without prior permission, for the purpose of analysing large amounts of text or data.

The Court’s Decision

The Munich Regional Court largely upheld GEMA’s claims for injunctive relief, disclosure, and damages, on the basis of three key factors:

1. Memorisation  

GEMA relied on research showing that copyrighted lyrics can become embedded in an AI model’s parameters and remain retrievable, a phenomenon referred to as memorisation. The Court considered this to be a form of unauthorised reproduction, finding that memorisation was an intentional outcome of the training process and equivalent to copying.

2. TDM Exceptions

The Court dismissed OpenAI’s reliance on the TDM exceptions, stressing that the TDM provisions only allow temporary copies for analytical purposes, not the permanent embedding of works in model parameters or their reproduction in outputs. OpenAI’s commercial deployment of ChatGPT, its ability to generate substantial portions of lyrics, and GEMA’s explicit offer to license the works placed OpenAI’s conduct squarely outside the scope of these exceptions.

3. Responsibility

The Court also rejected OpenAI’s attempt to shift liability to end users, holding it responsible for copyright compliance. As OpenAI is responsible for selecting the training data, designing its models and operating ChatGPT, it is accountable for any unauthorised reproduction of protected works. OpenAI’s argument that liability rests with users who enter prompts was also rejected, as prompts merely trigger the model’s internal processes and have no influence over how copyrighted material is embedded or generated.

Subject to any appeal by OpenAI, the ruling establishes that embedding protected works in model parameters during LLM training, and reproducing them in outputs, constitutes an infringement.

Australia’s Copyright Laws

In the US, the Anthropic settlement was a major copyright development, with Anthropic agreeing to pay US$1.5 billion (approximately AUD 2.27 billion) to resolve claims that it trained its language models, without permission, on books obtained from pirate sites such as LibGen and PiLiMi.

Closer to home, there is also a long list of Australian authors, including Trent Dalton, Helen Garner and Charlotte Wood, whose works have been used to train AI language models.

Australian courts have yet to issue rulings that directly address AI training on copyrighted works or AI-generated outputs, leaving significant legal uncertainty. The Copyright Act 1968 (Cth) (Copyright Act) still assumes human authorship, creating ambiguity around both the ownership of AI-generated content and the legality of using copyrighted material in training models.

The Australian Government has been actively reviewing its copyright laws to respond to the challenges posed by artificial intelligence.

Select Committee on Adopting Artificial Intelligence

In November 2024, the Select Committee on Adopting Artificial Intelligence highlighted concerns from creators about the use of their copyrighted works by AI without consent. Conversely, voices within the tech industry have advocated for clearer legal frameworks and potential exceptions, such as for text and data mining. The Committee’s report recommended that AI developers be transparent about, and properly license, any copyrighted material used for training, and urged careful consultation before introducing new copyright exceptions for AI to ensure a balance between innovation and creator rights.

Productivity Commission Report

In August 2025, the Productivity Commission published an interim report, Harnessing data and digital technology, which identified the challenges posed to Australia’s copyright laws by the rise of artificial intelligence, particularly the training of large AI models on copyrighted datasets. The Commission suggested that, rather than introducing new, AI-specific regulation, Australia should adapt its existing copyright framework, possibly by making licensing easier or by introducing a TDM exception. The report sought feedback on whether such reforms are needed, especially regarding how to balance the interests of AI developers and copyright holders, and whether clearer guidance or new exceptions would help ensure both innovation and fair compensation for creators.

Australian Copyright Law Reform

Following the release of the Productivity Commission’s interim report, the Australian Government announced on 27 October 2025 that it would rule out a TDM exception for AI training under the Copyright Act. Unlike other jurisdictions, including the UK, EU, US, Japan and Singapore, which have adopted various forms of TDM exception, Australia has chosen to maintain the requirement for licensing and creator consent. In making this decision, the government acknowledged the economic opportunities presented by AI but made clear that these must not come at the expense of creators’ rights or the integrity of Australia’s copyright framework.

Implications for technology providers and insurers

As artificial intelligence continues to develop and become more prevalent, technology providers and their insurers are facing increased risks arising from its use (including complex compliance challenges and potential infringement claims across borders).

For the government, the challenge now lies in finding a sustainable balance, one which fosters innovation and harnesses the potential of AI, while ensuring that artists, authors, and musicians are recognised and compensated for their work. Conversely, for technology providers, the next few years could bring heightened exposure and uncertainty, with legislative reforms and court rulings potentially reshaping liability frameworks.

The interplay between AI innovation and copyright protection is far from settled. Watch this space closely, as the coming years could redefine liability frameworks and compliance obligations for technology providers worldwide.


Key Contacts & Updates

Wotton Kearney’s Technology Disputes Team uniquely combines expertise in insurance, technology and disputes to provide end-to-end support for insurers, technology providers, corporates and government entities.

As the only dedicated technology disputes practice in Australia, the team provides a full-service technology capability spanning regulatory risks and investigations, recoveries, disputes and third-party claims under one roof.

For more information, please contact:

Alternatively, please complete the form below to subscribe to Cyber, Privacy and Technology updates.


[1] GEMA v OpenAI (Regional Court of Munich I, 11 November 2025, Case No. 42 O 14139/24).