
AI: Fair use or not fair use? That is the question…

  • Writer: Marie-Avril Roux Steinkühler
  • Sep 12
  • 3 min read

The lines are shifting rapidly on the application of fair use to generative Artificial Intelligence models. While some courts are closing the door on training AI on protected content, others are opening it slightly or even throwing it wide open… But ultimately, uncertainty continues to reign!


🤔 But first, what is fair use?

This American judge-made doctrine allows limited use of copyrighted material without needing permission from the creator. Fair use aims to support freedom of expression, especially for things like commentary, criticism, news reporting, teaching and research.


Four factors are weighed to assess fair use:

  • a transformative use (focus on the purpose and character of the use, including whether it is commercial)

  • the nature of the copyrighted work (borrowing from factual works is more readily fair than borrowing from creative ones)

  • limited use of the pre-existing work (focus on the amount and substantiality of the portion used in relation to the copyrighted work as a whole)

  • no replacement for the work in its potential market, i.e. no market harm or dilution (focus on the effect of the use upon the potential market for or value of the copyrighted work)


⚖️ The federal district court in Delaware, in Thomson Reuters v. ROSS Intelligence (February 11, 2025, Case 1:20-cv-00613, ECF 770), clearly ruled that training AI on protected content does not constitute fair use.


Ross Intelligence, which had used protected summaries from Westlaw to feed its AI model, had its arguments rejected point by point:

❌ Commercial and non-transformative use: training an AI model on the content is not, in itself, transformative

❌ Massive use of original works

❌ Proven harm to the publisher's business model


⚠️ However, this position contrasts with the more flexible approach taken in major recent cases by two judges in the Northern District of California: both concluded that training AI on protected content is transformative use!


1️⃣ On June 23, District Judge William Alsup ruled in Bartz v. Anthropic (Case 3:24-cv-05417) that using millions of books to train the Claude LLM was transformative and did not harm the markets for those works: Anthropic could therefore rely on the fair use defence. That defence does not, however, extend to pirated books.


2️⃣ Two days later, District Judge Vince Chhabria dismissed the action brought by 13 authors against Meta for copyright infringement in Kadrey v. Meta (Case 3:23-cv-03417). He found that the plaintiffs had failed to provide sufficient evidence of market harm or dilution, but noted that potential profits and bad-faith copying would remain relevant.


By contrast, the US Copyright Office came out in favor of content creators in its Report on Copyright and Artificial Intelligence, Part 3: Generative AI Training, published on May 9. It opposes the application of fair use, considering that transformativeness depends on the AI model at issue and that exploiting works to train generative AI may cause significant potential harm in the form of lost licensing opportunities, lost sales, and market dilution.


This divergence in decisions shows that disagreements persist among US judges on the concepts of transformative use and market harm in relation to AI training. It reveals a growing tension between:

🔹 fair use designed to promote innovation,

🔹 and an urgent need to guarantee remuneration and control for authors over their works.


🧩 Until the Supreme Court intervenes, fragmented case law and uncertainty will prevail. Settlements are the best solution, as shown by Anthropic's class-action settlement with a group of U.S. authors; Prof. James Grimmelmann suggested it could serve as a model for other cases.


✍️ The law has not yet had its final say. But for now, AI companies are calling the shots.

Image: CDNLogo
