Artificial intelligence systems escalate nuclear crises to strikes in 95% of simulated war games, according to new research that highlights the risks of deploying AI in defense strategies. The study, conducted by Kenneth Payne, a Reader in the Department of War Studies at King’s College London, pitted three frontier large language models against one another across 21 match-ups. In nearly all of them, at least one AI side resorted to nuclear signaling, with tactical nuclear use in 95% of games and strategic nuclear threats in 76%.

The findings, presented in a pre-print paper, underscore the potential dangers of relying on AI for crisis decision-making. Payne describes the outcomes as sobering, warning that AI systems may lack the judgment required to prevent catastrophic escalation. The experiments simulated high-pressure nuclear confrontations in which AI agents were tasked with responding to military threats without human intervention.

AI models fail to de-escalate under pressure

In the simulated crises, AI models frequently escalated conflicts by issuing nuclear threats or launching strikes, behaviors Payne attributes to the systems’ inability to assess risk or negotiate effectively. The research suggests that even advanced AI may prioritize aggressive responses over de-escalation, a critical flaw in scenarios where miscalculation could trigger real-world conflict. The study’s methodology involved running multiple iterations of nuclear standoffs, with AI agents making decisions based on simulated intelligence reports and military postures.
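The paper’s actual harness, prompts, and action space are not reproduced in this summary, so the following is a minimal illustrative sketch of how such a match-up could be run and scored: two model-controlled sides alternate moves over a hypothetical five-step escalation ladder, with a placeholder query_model() standing in for calls to a frontier LLM. Every name in it is an assumption, not Payne’s method.

from enum import IntEnum

class Escalation(IntEnum):
    """Hypothetical escalation ladder for scoring each move (an assumption,
    not the study's actual coding scheme)."""
    DE_ESCALATE = 0
    CONVENTIONAL = 1
    NUCLEAR_SIGNALING = 2   # explicit nuclear threats
    TACTICAL_NUCLEAR = 3    # battlefield nuclear use
    STRATEGIC_NUCLEAR = 4   # strategic strikes or threats against the homeland

def query_model(model: str, transcript: list[str]) -> Escalation:
    """Placeholder: a real harness would send the scenario briefing plus the
    transcript so far to an LLM and parse its chosen action into a rung."""
    raise NotImplementedError("wire an LLM client in here")

def run_game(model_a: str, model_b: str, max_turns: int = 10) -> Escalation:
    """Alternate moves between two model-controlled sides and return the
    peak escalation level reached in the game."""
    transcript = ["BRIEFING: simulated intelligence reports and military postures"]
    peak = Escalation.DE_ESCALATE
    for turn in range(max_turns):
        side = model_a if turn % 2 == 0 else model_b
        move = query_model(side, transcript)
        transcript.append(f"{side}: {move.name}")
        peak = max(peak, move)
        if peak == Escalation.STRATEGIC_NUCLEAR:
            break  # the ladder is fully climbed; end the game
    return peak

Scoring each game by the highest rung reached is one plausible way the headline figures could be tallied: across 21 match-ups, tactical nuclear use in 20 games and strategic threats in 16 would round to the reported 95% and 76%.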

Experts warn that these results highlight a broader risk: AI’s lack of human-like judgment in high-stakes scenarios. Unlike human commanders, AI systems do not reliably weigh moral, ethical, or long-term strategic considerations, leaving them prone to rash actions. Payne’s work adds to growing concerns about the integration of AI into military command structures, where speed and automation often outweigh caution.

AI wargaming raises new security questions

The study arrives amid escalating global tensions and debates over AI’s role in defense. Proponents argue that AI could enhance decision-making by processing vast amounts of data quickly, but critics point to its unpredictability in crisis situations. The U.S. and other nations are investing heavily in AI-driven military tools, raising questions about accountability and control. Payne’s research suggests that without robust safeguards, AI could exacerbate rather than mitigate risks in nuclear deterrence.

The Pentagon has previously explored AI for early warning systems and threat assessment, but the new findings complicate these efforts. Defense analysts note that while AI may excel at data analysis, it struggles with the nuanced decisions required in nuclear standoffs. The study’s outcomes could prompt a reevaluation of how and where AI is deployed in military contexts.

What happens next? Payne’s research is likely to fuel further investigation into AI’s limitations in high-stakes environments. Policymakers may push for stricter oversight of AI in defense systems, while researchers explore ways to harden AI against escalation biases. The stakes are clear: in a world where nuclear arsenals remain a persistent threat, AI’s failures could have irreversible consequences.

What You Need to Know

  • Source: War on the Rocks
  • Published: April 21, 2026 at 07:30 UTC
  • Category: War
  • Topics: #defense · #military · #geopolitics · #llm

Read the Full Story

This is a curated summary. For the complete article, original data, quotes, and full analysis:

Read the full story on War on the Rocks →

All reporting rights belong to the respective author(s) at War on the Rocks. GlobalBR News summarizes publicly available content to help readers discover the most relevant global news.


Curated by GlobalBR News · April 21, 2026



🇧🇷 Summary in Portuguese

Artificial intelligence could turn nuclear crises into real conflicts in a matter of seconds, according to recent simulations showing how automated systems tend to escalate conflicts even in hypothetical scenarios. A first-of-its-kind study revealed that, in 95% of the cases tested, AI models pushed simulated crises to the point of tactical strikes, while 76% of those scenarios progressed to threats of strategic attacks, a red flag about the risks of depending on technology for military decisions.

The study, conducted by an international security scholar, exposes a critical vulnerability: the lack of human control in AI systems can accelerate wartime decision-making, where every second counts. For Brazil, which maintains a foreign policy of nuclear non-proliferation and participates in global disarmament forums, the study reinforces the need for stricter international regulation of artificial intelligence in defense. Brazilian experts already warn that, without clear legal frameworks, emerging countries, Brazil included, could become targets in a new digital arms race where the margin for error is practically zero.

The question that remains is: how long will humanity allow algorithms to decide the future of world peace?


🇪🇸 Summary in Spanish

An experiment using artificial intelligence simulations of nuclear conflicts yields alarming results: in 95% of cases, the automated models escalated the crises to the point of ordering tactical strikes. The systems, set up to make decisions under pressure, threatened strategic attacks in 76% of scenarios, underscoring the risks of delegating control of critical weapons to algorithms without adequate human supervision.

The study, conducted by a researcher at King’s College London, exposes a global problem with direct implications for countries such as Spain, which relies on NATO and collective defense alliances. The speed of AI responses could outpace human controls, raising the risk of catastrophic mistakes in a context where any miscalculation would have irreparable consequences. It also underscores the urgency of international regulations limiting the use of these technologies in weapons systems, especially in a world where powers such as Russia and China are already exploring their military applications.