
12. Artificial intelligence and international peace and security

Contents

I. Introduction

II. Governing the challenges presented by military artificial intelligence

III. Governing the challenges presented by civilian artificial intelligence

IV. Other important developments in the governance of artificial intelligence

V. Conclusions

Advances in artificial intelligence (AI) are poised to bring enormous benefits, but they could also create new threats to international peace and security, or exacerbate existing ones. In recent years, many states have increasingly acknowledged the need to manage these complex risks—stemming from both civilian and military AI—through the establishment of new forums and initiatives. These states deepened their engagement with ongoing initiatives in 2024. The extent to which the various initiatives will evolve as complementary or competing processes remains an open question.

 

Military AI

For the past decade, the international policy conversation on military uses of AI has mostly focused on autonomous weapon systems (AWS), commonly characterized as weapon systems that, once activated, can select and engage targets without human intervention. Since 2023, however, the conversation has expanded to other military applications of AI, in areas such as targeting, planning and intelligence analysis, through what are commonly referred to as AI-enabled decision support systems. Reported uses of AI in current armed conflicts, especially in Gaza and Ukraine, illustrate that military AI is a pressing matter for policymakers.

 

Three topics were at the centre of discussions at the 2024 meetings of the Group of Governmental Experts on lethal autonomous weapon systems (LAWS): the characteristics and definitions of LAWS, the application of international humanitarian law (IHL), and measures to ensure compliance with IHL and mitigate risks.

 

Civilian AI

Civilian AI developments could also pose risks to peace and security. Some AI models could help malicious actors to access critical knowledge to develop and use prohibited weapons. Moreover, AI provides a capability uplift and lowers the barrier for cybercriminals and hackers to carry out harmful operations. In addition, generative AI tools can be misused to spread disinformation. States sought to mitigate these risks across various forums in 2024. Notable multilateral efforts included United Nations-led processes on technology governance and the AI Safety Summit.

Jules Palayer and Laura Bruun