Anthropic’s Deal with Book Authors Won’t Be Enough to Curb Abusive AI Practices

Eileen Pomeroy argues that while the $1.5 billion Anthropic settlement offers limited compensation to authors, it fails to address broader copyright abuses in AI training—and highlights the urgent need for the news industry to organize collectively for fair remuneration.


A $1.5 billion class action settlement between book authors and Anthropic, the maker of the popular large language model Claude, has been widely praised as a positive step toward compensating creators for the use of their works in AI systems. But details of the deal show it will do little to curb the copyright infringement practices of Big Tech and other AI companies — practices that affect creators and other potential claimants, including news publishers and local media.

Monetarily, the settlement represents a modest burden for Amazon-backed Anthropic compared with a trial that could have awarded compensation to every member of the class. The settlement amount — roughly $3,000 per work for about 500,000 of the seven million pirated books at issue — pales in comparison to the corporation’s latest market valuation. In September, Anthropic raised $13 billion in new capital at a valuation of $183 billion, more than 100 times the settlement payout.

Most important for authors and other creators, Bartz v. Anthropic allowed the corporation to sidestep core copyright concerns by characterizing AI training on copyrighted works as “fair use.” Federal Judge William Alsup sided with Anthropic at the summary judgment stage earlier in the case, ruling that training on legally obtained works is transformative “fair use” and therefore legal under current copyright law. This effectively prevented the book authors from receiving royalties for their works’ contributions to the future profits of large language models.

Such a ruling narrowed the case for a subsequent settlement, leaving only one issue to be resolved: piracy. Despite Anthropic’s partial win at summary judgment, the ruling established that Anthropic — and potentially other AI providers — can be held liable for using pirated works in model training, a practice AI companies have admitted to. As such, this case not only makes clear that data acquisition methods matter but may also chart a path to compensation for other authors whose rights have been infringed in similar ways.

There is much uncertainty about how the settlement will be implemented in practice. Judge Alsup voiced significant concerns over its opaque details and the complex procedures facing copyright owners, such as dividing settlement payments among multiple claimants, which nearly led him to reject the deal entirely.

On the other hand, the complicated nature of the retrospective settlement may also encourage other AI companies to pursue proactive licensing to avoid legal costs later, which would give creators some upfront compensation. 

Even for the most prolific authors, the settlement will not amount to anything approaching a windfall. If a given book qualifies for the $3,000 payout, the author will likely have to split it 50/50 with their publisher, and that’s after legal fees and other costs are deducted.

Though this case concerns book authors, its implications for training practices and compensation extend to other copyright holders, especially the news community. For now, Bartz v. Anthropic, with its narrow focus on pirated works, will have limited direct impact on news organizations, said Matt Pearce, Director of Public Policy at Rebuild Local News. For local journalists and nonprofit newsrooms, pursuing compensation is a daunting challenge, not least because they increasingly depend on funding from philanthropic foundations and hence must keep their work publicly accessible, he said.

The news industry has complex economic ties with Big Tech, whose platforms serve as critical infrastructure for delivering and monetizing content, said Pearce. Publishers must also balance copyright claims against keeping their content publicly accessible, including within AI services, he added. This makes it hard for newsrooms to present a united litigation front like the class action Anthropic faced.

Thus far, most media outlets have negotiated directly with individual AI companies, resulting in a chaotic legal landscape. In the case of OpenAI alone, for instance, The Washington Post, The Guardian, and The Atlantic have all signed deals with the company, while The New York Times and Ziff Davis have filed lawsuits against it.

Ultimately, the Anthropic settlement underscores that the news community needs to unite and organize as the book authors did, Pearce said. “There’s a challenging question for us to think about: What would an equivalent settlement look like for the local news world, and would it be more onerous than what the authors are trying to deal with?”

This article originally appeared in The Corner Newsletter: November 4th, 2025.