Social media: A tool for peace or conflict?

Human rights activists have used social media technology to organize peaceful protests and defend democracy for more than a decade. More recently, peacebuilders have discovered that it can also be a tool to better understand conflict dynamics and to counter extremism. Yet the potential of social media as a megaphone for promoting human rights, democracy and peace is overshadowed by its dismal record of being used to drive radicalization and violence through disinformation campaigns. This ‘online frontline’ will persist unless regulators, social media firms and citizens revisit current policies and practices.

At the 2021 Stockholm Forum on Peace and Development, researchers, policymakers, tech companies and civil society organizations had an opportunity to explore how social media can be harnessed for peacebuilding purposes and to assess policy responses to harmful online disinformation campaigns. This Topical Backgrounder is inspired by these discussions, particularly on the Janus-faced nature of social media. It makes four recommendations—one each for peacebuilding practitioners, policymakers, social media companies and citizens—to protect peace, democratic institutions and people’s welfare:

  • Peacebuilding practitioners should systematize the use of social media technology for conflict stakeholder analysis, early warning, counter-messaging and the defence of democracy and human rights;
  • Policymakers should stem harmful social media disinformation campaigns by creating effective oversight and strict data management guidelines;
  • Tech companies should redesign their social media tools to prevent them from being employed for harmful political ends and from favouring conflict over consensus; and
  • Citizens should improve their resilience to disinformation, but also demand insight into the information collected about them by social media firms, how it is used and by whom.

Social media as a peacebuilding instrument

Participants in the Stockholm Forum sessions highlighted four uses of social media technology in peacebuilding research and practice: conflict stakeholder analysis; early warning; counter-messaging; and social mobilization for peaceful protest and democracy. However, none of the four uses has yet fulfilled its potential.

Conflict stakeholder analysis

Peace and conflict researchers increasingly examine social media content to map conflict actors, trace the links between them and identify their local support networks. This has greatly improved the understanding of Nigeria’s Boko Haram, for example, which has relied on social media for its messaging since 2009.
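
To make the mechanics concrete, the sketch below shows one common way such mapping is done: build a graph of who mentions whom, then use centrality and community detection as rough proxies for influential actors and their support networks. It is a minimal illustration assuming Python with the networkx library; the actor names and interaction records are entirely invented.

```python
# Minimal sketch of conflict stakeholder mapping from social media mentions.
# Assumes the networkx library; all actors and interactions are hypothetical.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Each record is an (author, mentioned_actor) pair extracted from posts.
interactions = [
    ("actor_a", "actor_b"), ("actor_a", "actor_c"),
    ("actor_b", "actor_c"), ("actor_d", "actor_a"),
    ("actor_e", "actor_d"), ("actor_e", "actor_a"),
]

G = nx.DiGraph()
for author, mentioned in interactions:
    if G.has_edge(author, mentioned):
        G[author][mentioned]["weight"] += 1  # repeated interactions add weight
    else:
        G.add_edge(author, mentioned, weight=1)

# Degree centrality approximates which actors sit at the centre of the
# conversation; community detection hints at local support networks.
centrality = nx.degree_centrality(G)
communities = greedy_modularity_communities(G.to_undirected())

for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor}: centrality {score:.2f}")
print("candidate support networks:", [sorted(c) for c in communities])
```

In practice, researchers feed in collected platform data with far richer metadata (timestamps, languages, shares), but the graph-plus-centrality pattern is the same.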

Early warning

Researchers also monitor social media content to gain better insights into local grievances—a key driver of violence. In sub-Saharan Africa, for example, local grievances have provided a fertile ground for the expansion of extremist groups. Tracking such grievances online in real time can feed into early warning systems for conflict.
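
As a rough illustration of the idea, the sketch below flags days on which grievance-related chatter spikes above its recent average. It is a toy example: the keyword list, posts and thresholds are invented, and operational early warning systems rely on curated local-language lexicons, trained classifiers and many additional signals.

```python
# Toy sketch of lexicon-based grievance tracking for conflict early warning.
# Keywords, posts and thresholds are illustrative assumptions only.
from collections import deque

GRIEVANCE_TERMS = {"eviction", "corruption", "unpaid", "discrimination", "land"}

def grievance_score(post: str) -> int:
    """Count grievance keywords in a post (a crude proxy for a classifier)."""
    return sum(1 for word in post.lower().split() if word in GRIEVANCE_TERMS)

def monitor(stream, window=7, spike_factor=2.0):
    """Flag any day whose grievance volume exceeds twice the recent average."""
    history = deque(maxlen=window)
    for day, posts in stream:
        volume = sum(grievance_score(p) for p in posts)
        baseline = sum(history) / len(history) if history else 0.0
        if history and volume > spike_factor * baseline:
            print(f"{day}: early-warning flag (volume {volume}, baseline {baseline:.1f})")
        history.append(volume)

# Hypothetical daily batches of posts.
monitor([
    ("day 1", ["market prices stable today"]),
    ("day 2", ["anger over land eviction", "unpaid wages again"]),
    ("day 3", ["eviction protests spread", "corruption in land office", "unpaid teachers strike"]),
])
```

A real deployment would stream posts from platform data feeds and route flags to analysts rather than printing them, but spike-over-baseline logic of this kind sits at the core of many monitoring dashboards.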

Counter-messaging

Young peacebuilders use social media platforms to develop viable counter-messages to extremists. These are more likely to be successful if grounded in local (sometimes high-risk) in-person activities or activism.

Social mobilization for peaceful protest and democracy

Social media technology has also created opportunities for people to mobilize politically in defence of democracy and human rights. In 2009 in Moldova, for example, young people relied on Twitter to oppose the country’s communist leadership. In Iran, citizens used Twitter to organize protests against the results of the 2009 presidential election, leading to calls for Twitter to be considered for the Nobel Peace Prize. During the Arab Spring in 2011, protestors in Egypt and Tunisia took to social media platforms to organize, spread their message internationally and ultimately overthrow dictatorial regimes. Particularly in repressive regimes, social media has been a communication channel for people to stand up for human rights or share evidence of human rights abuses, thereby preventing government monopolization of information. It is hence no coincidence that social media giants, such as Facebook, Twitter and YouTube, are blocked in China, Iran and North Korea.

All four uses of social media could be employed far more strategically to reap benefits for peacebuilders and human rights activists. To date, much of the hope attached to social media as a tool for human rights, democracy and peace after the 2009 ‘Twitter revolutions’ has subsided or been replaced by concern about its potential to contribute to conflict.

Social media as a driver of conflict

In the worst cases, social media platforms have been used to suppress internal dissent, meddle in democratic elections, incite armed violence, recruit members of terrorist organizations or contribute to crimes against humanity, as in the persecution of the Rohingya in Myanmar. In 2020 there was evidence of social media manipulation in 81 countries and of firms offering ‘computational propaganda’ campaigns to political actors in 48 countries. While propaganda is not new, the 2021 Stockholm Forum highlighted some of the reasons why propaganda on social media presents distinct challenges compared with traditional media, drives conflict, sometimes unintentionally, or undermines peacebuilding efforts.

From news editors to tech companies

The rise of news distribution and consumption via social media platforms has shifted the gatekeeping power for information dissemination from editors and journalists—bound by professional codes of ethics, principles of limiting harm and editorial lines—to tech companies owing allegiance primarily to their shareholders. Professional news outlets across the globe now ‘compete with content producers—domestic and international—who produce junk news that is sensational, conspiratorial, extremist, and inflammatory commentary packaged as news’. Social media providers are currently protected against liability for user content and have shied away from becoming ‘arbiters of truth’.

Creation of echo chambers

To maximize profit by growing user engagement and participation, social media companies have created sophisticated tools that filter information and place people in virtual echo chambers. These chambers confirm, and can even radicalize, users’ world views. Currently, the algorithms underlying social media platforms’ business model amplify the angry and divisive voices that drive engagement, pushing users towards ever more extreme content.
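
A toy ranking example, using entirely invented engagement and divisiveness scores, illustrates the dynamic: sorting purely by predicted engagement puts the most divisive items at the top of the feed, while even a simple penalty on divisiveness reorders it (compare the context-sensitive experiments discussed below).

```python
# Toy illustration (not any platform's actual code) of why ranking purely by
# predicted engagement pushes divisive content up the feed. All values invented.
posts = [
    # (title, predicted_engagement, divisiveness)
    ("local council publishes budget", 0.10, 0.10),
    ("outrage: 'they are destroying our country!'", 0.80, 0.90),
    ("fact-check of viral rumour", 0.15, 0.20),
    ("us-vs-them conspiracy thread", 0.70, 0.95),
]

# Engagement-only ranking: the objective that maximizes time on platform.
by_engagement = sorted(posts, key=lambda p: -p[1])
print("engagement-ranked feed:", [p[0] for p in by_engagement])

# A context-sensitive variant: penalizing divisiveness lets quality content
# outrank outrage (here the budget report and fact-check rise to the top).
by_adjusted = sorted(posts, key=lambda p: -(p[1] - p[2]))
print("adjusted feed:", [p[0] for p in by_adjusted])
```

The point is not the specific weights but the objective: as long as the score being maximized is engagement alone, divisive content wins by construction.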

Voter manipulation and offline violence

The pigeonholing of information shapes not only people’s world views but also their behaviour. The violent storming of the United States Capitol Building in January 2021 was, in part, motivated by the widespread false claim that the 2020 election was rigged. Stockholm Forum speaker Aws Al-Saadi, the founder and CEO of Tech4Peace—a large fact-checking network in the Middle East and North Africa—explained how online rumours can kill: fake online news about specific people in Iraq has sometimes prompted others to take justice into their own hands. Maria Ressa, CEO of online news outlet Rappler in the Philippines, argued: ‘Social media has become a behaviour modification system. And we’ve become Pavlov’s dogs that are experimented on in real time. And the consequences are disastrous.’

Interference in conflict dynamics

It is also increasingly clear that even well-meaning global social media campaigns can interfere in conflict dynamics. A recent article explained how, after an information leak, the international #BringBackOurGirls social media campaign to free the schoolgirls kidnapped by Boko Haram in Chibok in 2014 hindered rescue attempts and may have encouraged the group’s growing reliance on gender-based violence and kidnapping for international attention and ransom.

Risks to peacebuilders and humanitarian efforts

Another prominent theme in several Stockholm Forum discussions was the risk that adverse social media reactions pose to peacebuilders and humanitarian efforts. For example, individuals working on projects with colleagues from countries considered to be adversaries (Armenia/Turkey; Armenia/Azerbaijan) cancelled their participation in these collaborations after suffering personal attacks on social media. Systematic online campaigns to defame humanitarian organizations are another example. Between 2013 and 2017, hundreds of humanitarian White Helmet volunteers were killed in Syria after manufactured social media claims that they were terrorists with links to al-Qaeda and the Islamic State.

Policy responses to social media disinformation campaigns

Policy efforts to stem social media disinformation fall into three categories: punitive approaches; voluntary codes of conduct; and resilience building (for example through task forces that identify influence campaigns, fact-checking initiatives and digital literacy campaigns). Punitive approaches criminalize the creation of disinformation. They have been favoured by non-democratic countries, which have frequently used them to censor the media or arrest journalists and opposition activists (for example in Belarus, Egypt and Kazakhstan). Voluntary codes of conduct and investment in resilience building and digital literacy have been the policies preferred by democracies that value the protection of free speech (for example Australia, Canada, European Union (EU) member states and the USA).

Multilateral initiatives

Two EU codes of conduct stand out among multilateral initiatives to stem harmful social media use and disinformation. Agreed in 2016 between the European Commission, Facebook, Twitter, Microsoft and YouTube, the EU Code of Conduct on Countering Illegal Hate Speech Online seeks to ensure that ‘online platforms do not offer opportunities for illegal online hate speech to spread virally’ and to counter terrorist propaganda by ‘expeditiously’ removing illegal online hate speech. Since then, numerous other social media providers have signed up. The most recent progress report showed that companies are, on average, ‘assessing 90% of flagged content within 24 hours’ and ‘removing 71% of the content deemed illegal hate speech’.

In May 2021, the European Commission issued guidance to strengthen the implementation and monitoring of its 2018 Code of Practice on Disinformation—a self-regulatory instrument that commits online platforms and advertisers to countering the spread of online disinformation. The strengthened code of practice contains stronger measures to disincentivize the purveyors of disinformation, increase the transparency of political advertising, tackle manipulative behaviour and empower users, and it calls for improved collaboration with fact-checkers and access to data for researchers. A critical next step is to embed it in the EU’s Digital Services Act. Daniel Braun, deputy chief of staff to Vera Jourova, the European Commission Vice-President for Values and Transparency, explained during the Stockholm Forum that the aim ‘is not to regulate content, but rather to ensure that the platforms put in place resources and processes needed to protect public health, democracy, and fundamental rights’.

Social media company responses

Content moderation. In line with voluntary codes of conduct, social media companies have removed content, monitored conflict situations, reduced the visibility of certain content or limited the re-sharing of news, and created early-warning systems in partnership with local fact-checking organizations. Between January 2019 and November 2020, for example, Facebook took down more than 10 893 accounts and 12 588 pages. To monitor conflict situations across the world, the company invested in local language technologies to help flag hate speech. The most recent estimates by Facebook Director of Human Rights Miranda Sissons suggest that hate speech has fallen to approximately 8 per 1000 messages.

Corporate human rights policies. In March 2021, Facebook adopted a human rights policy meant to adhere to the United Nations Guiding Principles on Business and Human Rights. The policy commits Facebook to publishing an annual report on its human rights interventions, creating a fund for offline assistance to human rights defenders and journalists, removing verified misinformation and rumours, partnering with human rights organizations, and continuing to advance early-warning technology that prioritizes at-risk countries. Whether other companies follow suit remains to be seen.

Towards context-sensitive algorithms? Tests of ways to break through current information bubbles and polarizing content are also under way. Particularly after the 2020 US election and the storming of the US Capitol Building, Facebook experimented with algorithms that favour rational voices and quality sources over polarization and division. However, any changes to the algorithms in the aftermath of the election were temporary, and Facebook did not disclose the results of its experiment: although employees reported ‘nicer news feeds’ and a spike in the visibility of mainstream media publishers, the impacts of the revised algorithms remain unpublished. Miranda Sissons said at the Stockholm Forum that Facebook is ‘actively seeking to invest in and develop the technology that limits the distribution of hateful or policy violating content or content that otherwise defies human rights principles.’

Civil society organizations’ responses

Civil society organizations have relied on building partnerships to stem disinformation. In the wake of the Covid-19 infodemic, for example, local fact-checking organizations and local health organizations partnered with the World Health Organization, the UN and the International Federation of Red Cross and Red Crescent Societies to launch an initiative to combat dangerous misinformation in Africa. On other occasions, human rights groups have partnered with network analysis companies to monitor digital threats in conflict environments. The collaboration between The Syria Campaign and Graphika, for example, uncovered a concerted disinformation campaign to discredit frontline humanitarian actors and the evidence they collected after Syria’s April 2017 sarin chemical attack.

Outlook and recommendations

Peacebuilders have discovered that social media platforms can be used to research conflict actors, their strategies and grievances. Nevertheless, social media users’ track record of employing the technology to incite polarization, extremism or violence casts a deep shadow over social media’s potential as a peacebuilding tool. Legislators in democracies and global tech firms are responding to the harmful use of social media technology and systematic disinformation campaigns by adopting codes of conduct, strengthening monitoring and oversight, and collaborating with non-governmental organizations (NGOs) and civil society actors. Long-term investment by national governments is also required to build trust in traditional media and to strengthen civil society’s capacity to distinguish fact from fiction. However, progress will be limited if disinformation remains a source of control by autocratic governments and a source of revenue for social media providers. It will also be futile if social media companies’ understanding of how technology interferes in local conflict dynamics remains weak.

Although codes of conduct and investment in resilience through digital literacy programmes are promising, self-regulation has had limited effects. To protect peace and stability, democratic institutions, as well as the health and welfare of societies or specific communities, it is crucial for:

  • peacebuilding practitioners to use social media technology much more strategically and systematically for analysis of conflict actors, early warning, counter-messaging and the defence of democracy and human rights;
  • policymakers to create more effective oversight and data management guidelines to stem systematic disinformation campaigns;
  • social media platforms to redesign their tools to prevent them from being employed toward harmful political ends and from favouring conflict over consensus; and
  • citizens, civil society groups and researchers to increase their resilience to disinformation, but also demand insight into the information collected about them by social media firms, how it is used and by whom.

The role of new technologies in peacebuilding was a thematic focus of the 2021 Stockholm Forum on Peace and Development. This Topical Backgrounder is inspired by several Forum sessions, including ‘New Frontiers in Peacebuilding: The Role of Social Media’ and ‘Using Social Media to Build Peace and Inclusivity and to Counter Hate Speech’.

ABOUT THE AUTHOR(S)

Dr Simone Bunse is a Senior Researcher in the SIPRI Food, Peace and Security Programme.