“AI offers an illusion of cool exactitude, especially in comparison to error-prone, potentially unstable humans. But today’s most advanced AIs are black boxes; we don’t entirely understand how they work. In complex, high-stakes adversarial situations, AI’s notions about what constitutes winning may be impenetrable, if not altogether alien. At the deepest, most important level, an AI may not understand what Ronald Reagan and Mikhail Gorbachev meant when they said, ‘A nuclear war cannot be won and must never be fought.’”
By Ross Andersen | The Atlantic | May 4, 2023
The temptation to automate command and control will be great. The danger is greater.
No technology since the atomic bomb has inspired the apocalyptic imagination like artificial intelligence. Ever since ChatGPT began exhibiting glints of logical reasoning in November 2022, the internet has been awash in doomsday scenarios. Many are self-consciously fanciful—they’re meant to jar us into envisioning how badly things could go wrong if an emerging intelligence comes to understand the world, and its own goals, even a little differently from how its human creators do. One scenario, however, requires less imagination, because the first steps toward it are arguably already being taken—the gradual integration of AI into the most destructive technologies we possess today.
The world’s major military powers have begun a race to wire AI into warfare. For the moment, that mostly means giving algorithms control over individual weapons or drone swarms. No one is inviting AI to formulate grand strategy, or join a meeting of the Joint Chiefs of Staff. But the same seductive logic that accelerated the nuclear arms race could, over a period of years, propel AI up the chain of command. How fast depends, in part, on how fast the technology advances, and it appears to be advancing quickly. How far depends on our foresight as humans, and on our ability to act with collective restraint.
Jacquelyn Schneider, the director of the Wargaming and Crisis Simulation Initiative at Stanford’s Hoover Institution, recently told me about a game she devised in 2018. It models a fast-unfolding nuclear conflict and has been played 115 times by the kinds of people whose responses are of supreme interest: former heads of state, foreign ministers, senior NATO officers. Because nuclear brinkmanship has, thankfully, been rare throughout history, Schneider’s game gives us one of the clearest glimpses into the decisions that people might make in situations with the highest imaginable human stakes.
It goes something like this: The U.S. president and his Cabinet have just been hustled into the basement of the West Wing to receive a dire briefing. A territorial conflict has turned hot, and the enemy is mulling a nuclear first strike against the United States. The atmosphere in the Situation Room is charged. The hawks advise immediate preparations for a retaliatory strike, but the Cabinet soon learns of a disturbing wrinkle. The enemy has developed a new cyberweapon, and fresh intelligence suggests that it can penetrate the communication system that connects the president to his nuclear forces. Any launch commands that he sends may not reach the officers responsible for carrying them out.
There are no good options in this scenario. Some players delegate launch authority to officers at missile sites, who must make their own judgments about whether a nuclear counterstrike is warranted—a scary proposition. But Schneider told me she was most unsettled by a different strategy, pursued with surprising regularity. In many games, she said, players who feared a total breakdown of command and control wanted to automate their nuclear launch capability completely. They advocated the empowerment of algorithms to determine when a nuclear counterstrike was appropriate. AI alone would decide whether to enter into a nuclear exchange.
Schneider’s game is, by design, short and stressful. Players’ automation directives were not typically spelled out with an engineer’s precision—how exactly would this be done? Could any automated system even be put in place before the culmination of the crisis?—but the impulse is telling nonetheless. “There is a wishful thinking about this technology,” Schneider said, “and my concern is that there will be this desire to use AI to decrease uncertainty by [leaders] who don’t understand the uncertainty of the algorithms themselves.”