Anthropic Closer to $1.5 Billion Piracy Settlement
A US judge granted preliminary approval for Anthropic to pay a group of authors $1.5 billion for downloading pirated books to train its Claude AI chatbot.
The settlement could mark a significant milestone in AI copyright litigation.
The class-action lawsuit was filed in federal court in California in 2024, alleging Anthropic used thousands of books to train Claude. Claimants said the AI company knowingly downloaded pirated copies of books from online databases, including Library Genesis and Pirate Library Mirror.
Anthropic had earlier agreed to pay $1.5 billion to settle the case, but the federal judge initially declined to approve the arrangement and asked the parties to answer several questions before deciding. According to the plaintiffs, the judge has since commended the fairness and thoughtfulness of the settlement structure.
The lawsuit is led by Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, and also covers thousands of unnamed authors. The plaintiffs said the preliminary approval brings them one step closer to real accountability for Anthropic and puts all AI companies on notice that they cannot shortcut the law or override creators’ rights.
Under the settlement, Anthropic will also destroy the illegally downloaded materials and certify that they were not used to train commercial products. The claimants said the judge described the settlement as fair and reasonable; he will consider final approval at a future hearing.
The preliminary approval does not alter a court ruling made in June that AI training on copyrighted material constitutes transformative fair use; that ruling treated the legality of downloading the works as a question separate from their use in training. But the settlement could serve as a warning to other AI players, who also face lawsuits over using published material to train their models.
Anthropic said it is pleased that the court has granted preliminary approval of the settlement. “The decision will allow us to focus on developing safe AI systems that help people and organisations extend their capabilities, advance scientific discovery and solve complex problems,” the company said.