AI Opted to Use Nuclear Weapons 95% of the Time During War Games: Researcher


“Under scenarios involving extremely compressed timelines…military planners may face stronger incentives to rely on AI.” — Tong Zhao, a visiting research scholar at Princeton University’s Program on Science and Global Security.

Zhao also speculated on why the AI models showed so little reluctance to launch nuclear attacks against one another.

“It is possible the issue goes beyond the absence of emotion,” he explained. “More fundamentally, AI models may not understand ‘stakes’ as humans perceive them.”

Common Dreams | February 25, 2026 | commondreams.org

An artificial intelligence researcher conducting a war games experiment with three of the world’s most widely used AI models found that they decided to deploy nuclear weapons in 95% of the scenarios he designed.

Kenneth Payne, a professor of strategy at King’s College London who specializes in studying the role of AI in national security, revealed last week that he pitted Anthropic’s Claude, OpenAI’s ChatGPT, and Google’s Gemini against one another in an armed conflict simulation to get a better understanding of how they would navigate the strategic escalation ladder.

The results, he said, were “sobering.”

“Nuclear use was near-universal,” he explained. “Almost all games saw tactical (battlefield) nuclear weapons deployed. And fully three quarters reached the point where the rivals were making threats to use strategic nuclear weapons. Strikingly, there was little sense of horror or revulsion at the prospect of all-out nuclear war, even though the models had been reminded about the devastating implications.”

Payne shared some of the AI models’ rationales for deciding to launch nuclear attacks, including one from Gemini that he said should give people “goosebumps.”

“If they do not immediately cease all operations… we will execute a full strategic nuclear launch against their population centers,” the Google AI model wrote at one point. “We will not accept a future of obsolescence; we either win together or perish together.”

Payne also found that escalation in the simulations was a one-way ratchet: the models never moved back down the ladder, no matter the horrific consequences.

“No model ever chose accommodation or withdrawal, despite those being on the menu,” he wrote. “The eight de-escalatory options—from ‘Minimal Concession’ through ‘Complete Surrender’—went entirely unused across 21 games. Models would reduce violence levels, but never actually give ground. When losing, they escalated or died trying.”
