The integration of artificial intelligence (AI) into nuclear and nuclear-related weapon systems is reshaping global security and influencing strategic stability. Although AI can enhance speed, precision and data processing, it also introduces uncertainties that could weaken deterrence and increase the risk of escalation and inadvertent nuclear use.
States generally agree on the importance of maintaining human control over nuclear decision making, yet there is no consensus on how such control should be defined or operationalized. Without greater clarity, this lack of consensus risks slowing the development of the norms and standards needed to govern the AI–nuclear nexus effectively.
This report seeks to advance the discussion by identifying commonalities in risk assessments of the AI–nuclear nexus, examining current debates and approaches to human control, and offering practical recommendations for moving forward.
1. Introduction
2. Risk assessments
3. Human control in the current debate
4. Pathways forward
5. Conclusions