A federal judge has delayed final approval of Anthropic’s $1.5 billion copyright settlement, the largest of its kind in U.S. history, after authors and class members raised objections over the terms. On Thursday, U.S. District Judge Araceli Martinez-Olguin declined to approve the agreement without further clarification, signaling that the legal process is far from over.

The settlement stems from allegations that Anthropic, an AI company, used copyrighted books without permission to train its AI models. Plaintiffs, including authors and publishers, had negotiated the $1.5 billion deal to resolve claims of widespread copyright infringement. However, objections filed by class members and authors have thrown a wrench into the approval process, forcing the judge to seek more details before moving forward.

Judge seeks clarity on objections

Judge Martinez-Olguin ordered attorneys representing the class to address key concerns raised by objectors, who argue that legal fees are disproportionately high while payments to class members are minimal. In one objection reviewed by Ars Technica, objectors called the proposed class payouts a “pittance,” arguing that the compensation does not fairly reflect the harm caused by AI training practices.

The objections also allege that the legal team representing the class has attempted to exclude certain authors from participating in settlement discussions. This claim has added another layer of complexity to the case, as the judge examines whether all affected parties have been given a fair opportunity to voice their concerns.

If approved, the $1.5 billion settlement would set a precedent as the largest copyright settlement in U.S. history, surpassing previous agreements in high-profile cases involving digital piracy and unauthorized content use. The case highlights growing tensions between AI developers and content creators over the use of copyrighted material in training datasets.

Anthropic has not commented publicly on the objections, but the company has previously stated that it is committed to resolving the dispute through fair and transparent means. The legal team for the class plaintiffs has not responded to requests for comment on the specific criticisms raised by objectors.

What happens next

The judge has asked authors and class members to submit additional information addressing the objections by a specified deadline. A hearing has been scheduled to review the responses, after which the judge will decide whether to approve the settlement, reject it entirely, or request further modifications. Legal experts say the outcome could influence future AI copyright cases, setting a benchmark for how similar disputes are resolved.

The case underscores the broader challenges facing the AI industry as it grapples with intellectual property laws that were written long before generative AI existed. Courts are now tasked with interpreting these laws in the context of rapidly evolving technology, a process that is likely to continue for years to come.

The delay in approval means Anthropic’s settlement remains in legal limbo, leaving plaintiffs and objectors in a state of uncertainty as they await the judge’s next move.

What You Need to Know

  • Source: Ars Technica
  • Published: May 15, 2026 at 21:51 UTC
  • Category: Technology




🇧🇷 Summary (translated from Portuguese)

Brazil is moving ever closer to the epicenter of global debates over copyright and artificial intelligence, with a judicial setback that could redefine how AI companies negotiate with content creators. A federal judge in the United States has temporarily halted approval of a $1.5 billion settlement between Anthropic and authors who alleged copyright infringement over texts used to train its language models. The decision came after objections from writers and class members, who dispute not only the compensation amounts but also the high fees for the attorneys involved in the deal.

The case resonates in Brazil, where legislation on copyright and the use of works in AI systems is still maturing but already faces pressure from creators and digital platforms. Similar debates are surfacing in Brazilian legal and academic forums, with authors fearing their works will be exploited without consent or fair compensation, while technology companies argue they need access to large volumes of data to develop advanced AI. The pause in the Anthropic settlement serves as a warning for Brazil: there is an urgent need to regulate how protected works may be used in AI training, avoiding billion-dollar legal disputes and guaranteeing creators' rights.

The outcome of the U.S. case may now directly influence the local debate, pressuring Brazilian lawmakers and regulators to advance clear legal frameworks or risk further court battles down the road.


🇪🇸 Summary (translated from Spanish)

A federal judge has halted Anthropic's multibillion-dollar copyright settlement, valued at $1.5 billion, following objections from authors and affected parties over attorneys' fees and the payments to the represented class. The decision, which delays a possible resolution of the case, reflects growing tensions between the technology industry and content creators in the age of artificial intelligence.

The controversy comes at a key moment for the sector, as giants like Anthropic, the company behind the Claude language model, negotiate massive settlements to compensate authors whose texts have been used to train their systems. The judge's suspension underscores the importance of ensuring transparency and fairness in these agreements, especially for Spanish speakers, whose cultural and literary output could also be affected by future AI models. The case also highlights how courts are being called upon to define the ethical and economic limits of technological innovation, with direct consequences for creators, companies, and users worldwide.