As someone who studies the impact of misinformation on society, I often wish the young entrepreneurs of Silicon Valley who enabled communication at speed had been forced to run a 9/11 scenario with their technologies before they deployed them commercially.
One of the most iconic images from that day shows a large cluster of New Yorkers staring upward. The power of the photograph is that we know the horror they're witnessing. It is easy to imagine that, today, almost everyone in that scene would be holding a smartphone. Some would be filming what they saw and posting it to Twitter and Facebook. Powered by social media, rumors and misinformation would run rampant. Hate-filled posts aimed at the Muslim community would proliferate, the speculation and outrage boosted by algorithms responding to unprecedented levels of shares, comments and likes. Foreign agents of disinformation would amplify the division, driving wedges between communities and sowing chaos. Meanwhile those stranded at the tops of the towers would be livestreaming their final moments.
Stress testing technology against the worst moments in history might have illuminated what social scientists and propagandists have long known: humans are wired to respond to emotional triggers and to share misinformation when it reinforces existing beliefs and prejudices. Instead the designers of the social platforms fervently believed that connection would drive tolerance and counteract hate. They failed to see that technology would not fundamentally change who we are; it could only map onto existing human characteristics.
Online misinformation has been around since the mid-1990s. But in 2016 several events made it broadly clear that darker forces had emerged: automation, microtargeting and coordination were fueling information campaigns designed to manipulate public opinion at scale. Journalists in the Philippines started raising flags as Rodrigo Duterte rose to power, buoyed by intensive Facebook activity. This was followed by unexpected results in the Brexit referendum in June and then the U.S. presidential election in November, all of which spurred researchers to systematically investigate the ways in which information was being used as a weapon.
During the past three years the discussion around the causes of our polluted information ecosystem has focused almost entirely on actions taken (or not taken) by the technology companies. But this fixation is too simplistic. A complex web of societal shifts is making people more susceptible to misinformation and conspiracy. Trust in institutions is falling because of political and economic upheaval, most notably through ever widening income inequality. The effects of climate change are becoming more pronounced. Global migration trends spark concern that communities will change irrevocably. The rise of automation makes people fear for their jobs and their privacy.
Bad actors who want to deepen existing tensions understand these societal trends, designing content that they hope will so anger or excite targeted users that the audience itself becomes the messenger. The goal is to get recipients to spend their own social capital to reinforce and lend credibility to the original message.
Most of this content is designed not to persuade people in any particular direction but to cause confusion, to overwhelm and to undermine trust in democratic institutions, from the electoral system to journalism. And although much is being made of preparing the U.S. electorate for the 2020 election, misleading and conspiratorial content did not begin with the 2016 presidential race, and it will not end after this one. As tools designed to manipulate and amplify content become cheaper and more accessible, it will be even easier to weaponize users as unwitting agents of disinformation.