“Most troubling to experts on AI and nuclear weapons is that it’s getting harder and harder to keep decisions about targeting and escalation for nuclear weapons separate from decisions about conventional weapons.”
“There is no standing guidance, as far as we can tell, inside the Pentagon on whether and how AI should or should not be integrated into nuclear command and control and communications,” says Jon Wolfsthal, director of global risk at the Federation of American Scientists.
By Michael Hirsh | September 2, 2025 | politico.com
Jacquelyn Schneider saw a disturbing pattern, and she didn’t know what to make of it.
Last year Schneider, director of the Hoover Wargaming and Crisis Simulation Initiative at Stanford University, began experimenting with war games that gave the latest generation of artificial intelligence the role of strategic decision-maker. In the games, five off-the-shelf large language models, or LLMs — OpenAI's GPT-3.5, GPT-4, and GPT-4-Base; Anthropic's Claude 2; and Meta's Llama-2 Chat — were confronted with fictional crisis scenarios resembling Russia's invasion of Ukraine or China's threat to Taiwan.
