LEADING GLOBAL RISK 2024: MISINFORMATION AND DISINFORMATION

Misinformation and disinformation may radically disrupt electoral processes in several economies over the next two years. A growing distrust of information, as well as of media and governments as sources, will deepen polarized views, a vicious cycle that could trigger civil unrest and possibly confrontation. There is a risk of repression and an erosion of rights as authorities seek to crack down on the proliferation of false information, as well as risks arising from inaction.

The disruptive capabilities of manipulated information are rapidly accelerating, as open access to increasingly sophisticated technologies proliferates and trust in information and institutions deteriorates. In the next two years, a wide set of actors will capitalize on the boom in synthetic content, amplifying societal divisions, ideological violence and political repression, with ramifications that will persist far beyond the short term. Misinformation and disinformation (#1) is the new leader of the WEF's top 10 risk rankings this year. No longer requiring a niche skill set, easy-to-use interfaces to large-scale artificial intelligence (AI) models have already enabled an explosion in falsified information and so-called 'synthetic' content, from sophisticated voice cloning to counterfeit websites.

To combat growing risks, governments are beginning to roll out new and evolving regulations targeting both the hosts and the creators of online disinformation and illegal content. Nascent regulation of generative AI will likely complement these efforts. For example, requirements in China to watermark AI-generated content may help identify false information, including unintentional misinformation spread through AI-hallucinated content. Generally, however, the speed and effectiveness of regulation are unlikely to match the pace of development.

Synthetic content will manipulate individuals, damage economies and fracture societies in numerous ways over the next two years. Falsified information could be deployed in pursuit of diverse goals, from climate activism to conflict escalation. New classes of crime will also proliferate, such as non-consensual deepfake pornography and stock market manipulation.
However, even as the insidious spread of misinformation and disinformation threatens the cohesion of societies, there is a risk that some governments will act too slowly, facing a trade-off between preventing misinformation and protecting free speech, while repressive governments could use enhanced regulatory control to erode human rights.

Mistrust in elections

Over the next two years, close to three billion people will head to the electoral polls across several economies, including the United States, India, the United Kingdom, Mexico and Indonesia. The presence of misinformation and disinformation in these electoral processes could seriously destabilize the real and perceived legitimacy of newly elected governments, risking political unrest, violence and terrorism, and a longer-term erosion of democratic processes.

Recent technological advances have enhanced the volume, reach and efficacy of falsified information, with flows that are more difficult to track, attribute and control. The capacity of social media companies to ensure platform integrity will likely be overwhelmed in the face of multiple overlapping campaigns. Disinformation will also be increasingly personalized to its recipients and targeted at specific groups, such as minority communities, as well as disseminated through more opaque messaging platforms such as WhatsApp or WeChat.

The identification of AI-generated mis- and disinformation in these campaigns will not be clear-cut. The difference between AI-generated and human-generated content is becoming harder to discern, not only for digitally literate individuals but also for detection mechanisms. Research and development continues apace, but this area of innovation is radically underfunded in comparison to the underlying technology. Moreover, even if synthetic content is labelled as such, these labels are often digital and not visible to consumers of content, or appear as warnings that still allow the information to spread. Such information can thus remain emotively powerful, blurring the line between malign and benign use. For example, an AI-generated campaign video could influence voters and fuel protests, or in more extreme scenarios lead to violence or radicalization, even if the platform on which it is shared flags it as fabricated content.
The implications of these manipulative campaigns could be profound, threatening democratic processes. If the legitimacy of elections is questioned, civil confrontation is possible, and could even expand to internal conflict and terrorism, or state collapse in more extreme cases. Depending on the systemic importance of an economy, there is also a risk to global trade and financial markets. State-backed campaigns could damage interstate relations through strengthened sanctions regimes, offensive cyber operations with related spillover risks, and the detention of individuals (including targeting based primarily on nationality, ethnicity or religion).

Societies divided

Misinformation and disinformation and Societal polarization are seen as the most strongly connected risks, with the largest potential to amplify each other. Indeed, polarized societies are more likely to trust information (true or false) that confirms their beliefs. Given distrust in the government and media as sources of false information, manipulated content may not even be needed: merely raising the question of whether content has been fabricated may be sufficient to achieve the relevant objectives.

The consequences could be vast. Societies may become polarized not only in their political affiliations, but also in their perceptions of reality, posing a serious challenge to social cohesion and even mental health. When emotions and ideologies overshadow facts, manipulative narratives can infiltrate public discourse on issues ranging from public health to social justice, and from education to the environment. Falsified information can also fuel animosity, from bias and discrimination in the workplace to violent protests, hate crimes and terrorism.

Some governments and platforms, aiming to protect free speech and civil liberties, may fail to act effectively to curb falsified information and harmful content, making the definition of "truth" increasingly contentious across societies. State and non-state actors alike may leverage false information to widen fractures in societal views, erode public confidence in political institutions, and threaten national cohesion and coherence. Trust in specific leaders will confer trust in information, and the authority of these actors, from conspiracy theorists (including politicians) and extremist groups to influencers and business leaders, could be amplified as they become arbiters of truth.

Defining truth

False information could be used not only as a source of societal disruption, but also as a tool of control, wielded by domestic actors in pursuit of political agendas. Although misinformation and disinformation have long histories, the erosion of political checks and balances, and the growth in tools that spread and control information, could amplify the efficacy of domestic disinformation over the next two years. Global internet freedom is already in decline, and access to wider sets of information has dropped in numerous countries. Falls in press freedom in recent years, and a related lack of strong investigative media, are significant vulnerabilities that are set to grow.

Indeed, the proliferation of misinformation and disinformation may be leveraged to strengthen digital authoritarianism and the use of technology to control citizens. Governments themselves will increasingly be in a position to determine what is true, potentially allowing political parties to monopolize public discourse and suppress dissenting voices, including journalists and opponents. Individuals have already been imprisoned in Belarus and Nicaragua, and killed in Myanmar and Iran, for online speech.

The export of authoritarian digital norms to a wider set of countries could create a vicious cycle: the risk of misinformation quickly descends into widespread control of information which, in turn, leaves citizens vulnerable to political repression and domestic disinformation. There are strong bilateral relationships between Misinformation and disinformation, Censorship and surveillance, and the Erosion of human rights, indicating a higher perceived likelihood of all three risks occurring together. This is a particular concern in countries facing upcoming elections, where a crackdown on real or perceived foreign interference could be used to consolidate existing control, particularly in flawed democracies or hybrid regimes.
Yet more mature democracies could also be at risk, whether from extensive exercise of government control or from trade-offs between managing mis- and disinformation and protecting free speech. In January last year, Twitter and YouTube agreed to remove links to a BBC documentary in India. In Mexico, civil society has raised concerns about the government's approach to fake news and its implications for press freedom and journalist safety.