Part one in a series on the worsening spread of false information online looks at the challenge of balancing freedom of expression with efforts to curb misinformation. Columbia University’s Anya Schiffrin reports.
2016 was a watershed year for misinformation. Much of the world looked on with fear and horror as the UK voted in favour of Brexit, Donald J. Trump was elected 45th President of the United States, and Cambridge Analytica was later revealed to have exploited Facebook users’ data to sway those votes.
According to the media think-tank Data & Society, a “weaponisation of the digital influence machine” had taken place.
The horror gave rise to a flurry of ‘solutions’ to the problem of misinformation. But people wanted to take action before they knew precisely what the problem was and what would work. The intervening years have allowed us to assess which of those responses were most effective and what each has to offer, and to watch how the world moves in a decentralised way to combat problems like these as they arise.
Among the many solutions proposed were ideas such as fact-checking units in newsrooms, greater support for quality journalism, adjusting social media algorithms to promote quality journalism, regulation to crack down on platforms sharing misinformation, and content moderation.
At least nine different ways of tackling the problem, with countless variations, have been floated and implemented since 2016. The proliferation of solutions reflects the perspectives of the people proposing them. Journalists, for example, tend to think the solution to every problem is more journalism: some believed that if they had written more, and better, articles about the tech companies and misinformation, the problem would never have arisen.
Europeans tend to believe in regulation, so their proposals were more regulatory. The US, meanwhile, has an aversion to government regulation, so its response was that the tech companies should solve their own problems. Everybody believes that what they do is the right thing, which is part of why we saw such different solutions.
We’ve also learned since then that different financial interests are involved. Social media platforms make money from engagement, so they have no incentive to highlight worthy information that doesn’t travel. They profit from outrage, fear and anger, and, at least in the short term, have no incentive to clean up their platforms.
Then there is the exposure effect, described back in 1968 by psychologist Robert Zajonc: repeated exposure to an idea breeds support for it. Organisations do what they have always done, and the familiarity makes them believe it is the right course of action. This can lead to escalating commitment to a path once it has been chosen, regardless of whether it turns out to be the right approach.
The solutions posited to misinformation since 2016 can be grouped into two camps: supply and demand. The supply side of journalism is where news is produced and disseminated; the demand side is where it is consumed and reaches the audience.
Demand-side solutions seem easier to implement in that they don’t require controversial government regulation. Charitable foundations are investing money in demand-side solutions, and journalists are already making efforts to build trust and promote their profession.
Demand-side solutions are also more attractive to the social media platforms, because they shift the onus from the platforms onto journalists and consumers. However, demand-side solutions are difficult to scale and slow to work (if they work at all) – evidence of their success is mixed.
For instance, bolstering media literacy is a broad solution encompassing many groups and efforts. It aims to raise awareness of how journalism works and contributes to civic engagement, but it is chronically underfunded and has not been adopted as widely as needed. Finland is held up as the admirable exception.
On the supply side, journalists acting as watchdogs can help show the way forward. Journalists such as The Guardian’s Carole Cadwalladr, who helped to expose the Cambridge Analytica scandal, have formed the Real Facebook Oversight Board to hold Facebook to account and ensure the social media giant is not undermining democracy and free and fair elections.
A proposed supply-side solution that goes hand-in-hand with watchdog journalism is to employ fact-checkers and task them with labelling misinformation, encouraging readers to reject it.
The idea has intuitive appeal, but its execution is complicated. Establishing the credibility of fact-checkers is difficult, and it is a challenge to ensure the readers exposed to the misinformation see the later clarification.
Corrections may also aggravate the problem: because of the exposure effect, audiences who see a claim twice may believe it more, and corrections may simply deepen distrust in the media. More recent studies, however, suggest that exposure to fact checks may reduce belief in misinformation and that warning labels may reduce the likelihood of sharing false information.
More prevalent fact-checking also helps create and support a culture of truth and of signalling trustworthiness, and may build relationships among journalists. Global standards for truth could help advertisers make better decisions about whom they support, and rebuild trust in the media.
Controlling what information is shown, through algorithms and content moderation, has been floated as a way to eliminate misinformation efficiently and at scale. Artificial intelligence on its own cannot reliably distinguish between true, false and illegal information, so human moderators are still required.
However, moderators often lack historical context and understanding. The job takes a toll, and humans get tired and burn out. Existing approaches have also failed to keep up with misinformation phenomena – QAnon, for instance, took root despite Facebook’s moderation practices.
It’s a truism to say the media landscape is changing quickly, but it is also true. In the last year alone, Facebook launched its ‘supreme court’ to review de-platforming decisions, Germany updated its NetzDG rules governing social media, several African countries passed fake-news laws, Australia forced Google and Facebook to fund journalism, and several other countries moved forward with digital services legislation.
The problem is urgent, but solutions remain decentralised. Balancing freedom of expression with targeting misinformation will be an ongoing challenge. Each country will continue to have different values and approaches to tackling the problem, meaning the proliferation of solutions will only persist.
Anya Schiffrin is a senior lecturer at Columbia University’s School of International and Public Affairs in New York.