
SIPRI Webinar: Governing the Peace and Security Risks of AI Agent Interactions

Image: Two wire-frame hands in a handshake (Shutterstock)

The Stockholm International Peace Research Institute (SIPRI) is pleased to invite you to a public webinar on the peace and security implications of AI agents, following the publication of a new SIPRI Essay on this topic. 

Cutting-edge artificial intelligence (AI) models are increasingly agentic: able to act autonomously and to work together with other AI systems. These ‘AI agents’ could bring significant benefits, such as accelerating scientific research, but they also carry serious risks.

AI agents, when deployed and interacting at scale, may behave in ways that are hard to predict and control. They may also be vulnerable to adversarial attacks and malicious uses. This has implications for AI governance as well as for international peace and security. The international policy community needs to recognize and respond quickly to the risks that emerge from interacting AI agents. Agentic AI is still in its infancy, but the window of opportunity for ensuring that these systems are deployed responsibly may soon close.

This 60-minute webinar will convene a panel of experts for a moderated discussion on the implications of AI agent interactions for international peace and security. Panellists will discuss what AI agents are, how their interactions might create new pathways of risk, and what kinds of governance responses are needed.

Speakers

Alan Chan, Centre for the Governance of AI

Joel Z. Leibo, Google DeepMind

Sarah Shoker, Berkeley Risk and Security Lab

Moderator

Dr Vincent Boulanin, Director of the SIPRI Governance of AI Programme

Registration information

Register for the webinar via Zoom.

Event contact (SIPRI)