
Responsible Artificial Intelligence Research and Innovation for International Peace and Security

November 2020
Stockholm
SIPRI

In 2018 the United Nations Secretary-General identified responsible research and innovation (RRI) in science and technology as an approach through which academia, the private sector and governments can work together to mitigate the risks posed by new technologies.

This report explores how RRI could help to address the humanitarian and strategic risks that may arise from the development, diffusion and military use of artificial intelligence (AI), and thereby support arms control objectives related to the military use of AI.

The report makes recommendations on how the arms control community could build on existing responsible AI initiatives and on export control and compliance systems to engage academia and the private sector in governing the risks to international peace and security posed by the military use of AI.

Table of contents

1. Introduction   

2. Addressing the risks posed by the military use of AI    

3. Responsible research and innovation as a means to govern the development, diffusion and use of AI technology

4. Building on existing efforts to promote responsible research and innovation in AI

5. Key findings and recommendations  

ABOUT THE AUTHORS

Dr Vincent Boulanin is Director of the Governance of Artificial Intelligence Programme at SIPRI.
Kolja Brockmann is a Senior Researcher in the SIPRI Dual-Use and Arms Trade Control Programme.
Luke Richards was a Research Assistant at SIPRI working on emerging military and security technologies.