A federal judge in Oregon has levied the largest penalty yet for AI misuse in legal filings, fining two lawyers $110,000 after they submitted 23 fabricated citations and eight invented quotations generated by artificial intelligence. The sanctions, imposed by U.S. District Judge Ann Aiken in Portland, rank among the most forceful judicial responses yet to the surge in lawyers relying on AI tools to draft court documents. The case was subsequently dismissed, and the sanctioned lawyers now face professional scrutiny.

Lawyers across the country are facing consequences for using AI without proper oversight. In Alabama, a family lost a trust dispute last month after their attorney filed citations to cases that do not exist. The Alabama Supreme Court dismissed the appeal, calling the conduct egregious, and barred the lawyer from filing in that court without co-counsel approval. The ruling underscores the risks of unchecked AI deployment in legal practice.

In Manhattan, a judge recently ruled that a defendant who used a general-purpose AI chatbot to help prepare his case had waived his attorney-client privilege. The decision sends a clear warning to litigants and attorneys who treat AI tools as infallible aids. Courts are increasingly intolerant of errors that stem from unvalidated AI outputs, signaling a broader reckoning for the legal profession.

Legal experts warn that AI hallucinations—when large language models generate false or misleading information—pose a growing threat to legal integrity. The Oregon sanctions follow a pattern of recent cases where AI-generated errors led to sanctions, dismissed claims, or professional reprimands. Lawyers are now urged to verify every citation and quotation before filing, a standard that some say will slow down legal work but is necessary to prevent miscarriages of justice.

The American Bar Association has not yet issued new guidelines, but state bars are beginning to address the issue. The Oregon State Bar is reviewing the sanctions case, while other state bar associations are discussing whether existing ethical rules adequately cover AI misuse. The legal community is divided: some argue for stricter regulations, while others say current rules are sufficient if properly enforced.

As AI tools become more embedded in legal workflows, courts are setting precedents that will shape future use. The Oregon case, in particular, may serve as a benchmark for how judges handle AI-related misconduct. Lawyers who fail to implement safeguards risk not only financial penalties but also reputational damage and potential disbarment. The message is clear: AI can assist in legal work, but it cannot replace human judgment and due diligence.

Legal technology consultants say firms must adopt AI governance policies that include human review of all AI-generated content before submission. Failure to do so could lead to more sanctions, dismissed cases, and erosion of public trust in the legal system. The coming months will reveal whether the legal profession can adapt quickly enough to the challenges posed by generative AI.

What You Need to Know

  • Source: Fortune
  • Published: May 16, 2026 at 10:30 UTC
  • Category: Business
  • Topics: #fortune · #business · #economy · #machine-learning


All reporting rights belong to the respective author(s) at Fortune. GlobalBR News summarizes publicly available content to help readers discover the most relevant global news.



🇧🇷 Summary in Portuguese (translated)

Two Oregon lawyers were fined a striking $110,000 for submitting forged legal citations generated by artificial intelligence, in a case that exposes the risks of irresponsible use of emerging technologies in the judicial system. The unprecedented U.S. ruling calls into question legal professionals' confidence in AI tools and reinforces the need for stricter regulation of the reliability of automated systems.

The episode is not isolated: judges and courts across the country have been cracking down on the fraudulent use of AI after recurring cases of "hallucinations" (instances in which software invents information) resulted in procedural harm. In Brazil, where the judiciary is already discussing technological solutions such as predictive analysis of rulings and the automation of routine tasks, the American precedent serves as a warning to avoid similar mistakes. Brazilian experts stress that, without adequate oversight, the accelerated adoption of AI in the legal field could compromise the integrity of proceedings and legal certainty.

The decision is likely to intensify the debate over ethics and transparency in the use of artificial intelligence in law, with advocates of stricter regulation pushing for rules that require mandatory human validation of AI-generated citations.


🇪🇸 Summary in Spanish (translated)

An Oregon judge has just imposed a $110,000 fine on two lawyers for submitting false citations generated by artificial intelligence, a ruling that sets a precedent in the fight against the fraudulent use of these tools in the legal field. The case implicates the lawyers' firm, which included 23 invented references in its filings, forcing the court to step in and send a forceful message.

This episode underscores growing concern about the reliability of AI systems in regulated sectors such as law, where precision is paramount. For Spanish speakers, especially those who interact with automated systems in their daily lives, the case serves as a warning: although the technology is advancing, its misuse can carry legal and reputational consequences. The ruling reinforces calls for transparency and human oversight in critical processes, a debate that transcends borders and affects professionals across all disciplines.