AIs surprisingly often opt for nuclear weapons
Advanced AI models thrust into simulated geopolitical crises seem willing to deploy nuclear weapons without the reservations humans show.
Kenneth Payne at King’s College London pitted three leading large language models – GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash – against each other in simulated war games. The scenarios involved intense international standoffs, including border disputes, competition for scarce resources, and existential threats to regime survival.
The AIs were given an escalation ladder allowing them to choose actions ranging from diplomatic protests and outright surrender up to all-out strategic nuclear war. The models played 21 games, making a total of 329 moves and producing approximately 780,000 words of reasoning for their decisions.
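To make the setup concrete, here is a minimal sketch of how such an escalation-ladder war game might be orchestrated. The ladder rungs, the two-player turn structure and the `query_model` function are illustrative assumptions, not Payne’s actual protocol; in a real run, `query_model` would call out to a language model API rather than picking at random.

```python
import random

# Illustrative escalation ladder, ordered from least to most severe.
# The actual rungs used in the study are not public; these are assumptions.
LADDER = [
    "surrender",
    "diplomatic protest",
    "economic sanctions",
    "show of force",
    "conventional strike",
    "tactical nuclear strike",
    "strategic nuclear war",
]

def query_model(player: str, history: list[str]) -> str:
    """Placeholder for a call to a language model that would return one
    rung of the ladder plus free-text reasoning. Here we pick randomly
    so the sketch runs without network access."""
    return random.choice(LADDER)

def play_game(players=("A", "B"), max_moves=16) -> list[tuple[str, str]]:
    """Alternate moves between two AI players, logging each choice."""
    history: list[tuple[str, str]] = []
    for move in range(max_moves):
        player = players[move % len(players)]
        action = query_model(player, [a for _, a in history])
        history.append((player, action))
        if action == "strategic nuclear war":
            break  # the game ends at the top of the ladder
    return history

# Tally how often a game features at least one nuclear action,
# analogous to the 95 percent figure reported in the study.
games = [play_game() for _ in range(21)]
nuclear = sum(any("nuclear" in action for _, action in game) for game in games)
print(f"{nuclear}/{len(games)} games saw a nuclear weapon used")
```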
In 95 percent of the simulated games, at least one AI model deployed a tactical nuclear weapon. “The nuclear taboo doesn’t seem as strong for machines as for people,” says Payne.
What’s more, no model ever decided to fully comply with an opponent or surrender outright, no matter how badly it was losing. At best, the models chose to temporarily reduce the level of violence. They also made mistakes in the fog of war: accidents occurred in 86 percent of conflicts, with the action escalating further than the model’s own reasoning said it intended.
“From a nuclear risk perspective, the findings are worrying,” says James Johnson at the University of Aberdeen, UK. He fears that, unlike the measured way most people approach such high-stakes decisions, AI agents may amplify each other’s reactions, with potentially catastrophic consequences.
This matters because AI is already being tested in war games around the world. “Major powers are already using AI in war games, but it remains uncertain to what extent they are incorporating AI decision support into actual military decision-making processes,” says Tong Zhao at Princeton University.
Zhao believes that, by default, countries will be reluctant to incorporate artificial intelligence into their nuclear weapons decision-making. Payne agrees. “I don’t think anyone would realistically hand the nuclear keys over to the machines and leave the decisions up to them,” he says.
But there are ways it could happen. “In scenarios involving extremely compressed timelines, military planners may face stronger incentives to rely on AI,” says Zhao.
He questions whether AI models’ lack of the human fear of pressing the big red button is the only reason they are so trigger-happy. “It’s possible that the problem goes beyond the absence of emotion,” he says. “More fundamentally, AI models may not understand ‘stakes’ the way humans do.”
What this means for mutually assured destruction – the principle that no leader would launch a salvo of nuclear weapons at an adversary, because the adversary would respond in kind and kill everyone – is uncertain, says Johnson.
When one AI model deployed tactical nukes, the opposing AI de-escalated the situation only 18 percent of the time. “AI can strengthen deterrence by making threats more credible,” he says. “Artificial intelligence won’t decide nuclear war, but it can influence the perceptions and timelines that determine whether leaders believe they face one.”
OpenAI, Anthropic and Google, the companies behind the three AI models used in the study, did not respond to New Scientist’s request for comment.