Bias in Military Artificial Intelligence and Compliance with International Humanitarian Law

States involved in policy debates on military artificial intelligence (AI) are increasingly expressing concerns about bias in military AI systems. Yet these concerns are rarely discussed in depth, much less through a legal lens.

Drawing on insights from an expert workshop convened by SIPRI, this report explores the implications of bias in military AI for compliance with international humanitarian law (IHL). The report first unpacks what ‘bias in military AI’ refers to and what causes it. Focusing on bias in AI-enabled autonomous weapon systems and AI-enabled decision support systems used for targeting, it then examines the implications of bias for compliance with IHL, particularly the principles of distinction, proportionality and precautions in attack. Next, it outlines technical, operational and institutional measures to address bias and strengthen IHL compliance. It closes with key findings and recommendations for states involved in military AI policy debates.

Table of contents

1. Introduction

2. Bias in military AI: characterization, causes and concerns 

3. Bias in military AI and compliance with the rules regulating the conduct of hostilities

4. Measures to address bias in military AI and ensure respect for IHL 

5. Key findings and recommendations 

About the authors

Laura Bruun is a Researcher in the SIPRI Governance of Artificial Intelligence Programme.
Dr Marta Bo is an Associate Senior Researcher within SIPRI’s Armament and Disarmament research area.