More needs to be done by social media platforms – or, failing that, government regulators – to create protective infrastructure on the web.
That’s what two English experts on violent extremism online told a university conference on democracy and social media. The ‘New Ec(h)o systems: Democracy in the age of social media’ conference was hosted in Dunedin and via Zoom by the University of Otago’s National Centre for Peace and Conflict Studies.
Sanjana Hattotuwa, a PhD candidate in online extremism at the university, chaired the event. Attendees included Chief Censor David Shanks, government officials and experts on disinformation.
In a session on Wednesday, Moonshot CVE manager Clark Hogan-Taylor and Build Up co-founder Helen Puig Larrauri spoke from the United Kingdom about countering extremism through stronger safeguards on the internet.
Larrauri, who has worked with the United Nations and NGOs in conflict management, used the metaphor of peace-building to explain the path forward.
“One of the first things that happens when anybody working in peace-building goes into a conflict environment is that we look to stop what is the most evident physical violence. This is often through the brokering of ceasefires,” she said.
“When I look at social media and some of the activities that are happening on social media, it’s not unlike trying to broker a ceasefire. When we look at content moderation or at deplatforming certain people, what we’re doing is we’re trying to stop the most evident violence that is happening on social media. But one of the challenges with ceasefires in an offline environment is that what you’re doing is burying the signals of what is a much deeper societal conflict.”
Larrauri said that these steps were still necessary, but they couldn’t be the be-all, end-all of the process.
“As peace builders, we always move a level down. It’s not just about whether we have ethical rules on a platform and whether we educate people about digital literacy. This is really about the structures that undergird this conflict.”
Hogan-Taylor highlighted those structures in more detail in his own presentation and spoke about how Moonshot counteracts some of them. He also said new safeguards were needed to defeat online harm and hate.
“There will always be dark corners of the internet. Exactly the same way there are streets that you are always going to avoid, there will always be message boards and platforms and whole sections of the internet that you will always avoid and that, if you visited them, would put you at risk of harm,” he said.
“We spend a lot of time on both mainstream and niche online spaces and, if I can give my personal view here, the mainstream frightens me way more than the niche. Simply, really, by virtue of the numbers. The extent to which it’s possible to cause harm – whether through widespread manipulation of people’s opinions, bullying, harassment, death threats or organised pile-ons that destroy someone’s career and their mental health – the extent to which it’s possible to do those things is directly related to the number of users on a platform.
“Of course, violent right-wing Telegram channels and anarchic image boards and incel forums need to be understood. And in the cases where they’re inciting violence and suicidal ideation, they need to be actively monitored and reported. But users on a platform that doesn’t contain the mechanics of virality can only do so much harm at scale. So if we want to talk about combatting harm and hate at scale, I would start with the biggest, mainstream platforms.”
Understanding the ‘wild west’ of the internet
As it stands, Hogan-Taylor said, there are few safeguards against that harm and hate on mainstream platforms.
“We’re living through the wild west of the internet. In Google, Facebook, Twitter, Instagram, Amazon and the rest, we have a new kind of critical infrastructure, not unlike the railroads and the telephone lines of the mid-1800s. Whether we like it or not, people are already dependent on this infrastructure, many of us for work. Businesses live and die by whether they’re discoverable on Google, are listed on Amazon or if they can reach their customers on Twitter – and all of this happened before we figured out how to make them safe,” he said.
“I think the analogy of the wild west still stands not because the internet is lawless … but because we’ve become rapidly dependent on this new critical infrastructure, which has in many ways transformed the world for better, but we’ve done it before we figured out how to make it safe.
“If we look at some actual critical infrastructure in public spaces and some assumptions that we take for granted: Nobody complains that the kids’ play area in your park has a spongy floor or that there’s no broken glass, or that motorways have crash barriers, or that 87 percent of countries worldwide mandate that we use seatbelts in cars.
“The postal service is not allowed to open our mail, read our letters and then send us adverts based on what we wrote in our letters. No one complains about these things. Public spaces and critical infrastructure contain design choices put there, often by regulation, to protect us.”
The same concepts should be applied to the internet, Hogan-Taylor said.
“We’re beginning to see huge tech companies try similar ideas. Twitter, as you may have noticed, now suggests that you read an article before you retweet it. Facebook have tried a variety of things. We take it completely for granted that all mainstream platforms regularly remove huge amounts of harmful content. Now, we all know that’s not enough. It’s not happening quickly enough. We see this every single day,” he said.
“And of course, by building and profiting from unregulated critical infrastructure, they have facilitated widespread harm. But nonetheless, I feel like it’s inexorable that our big digital public spaces will eventually be similar in their protections to our physical public spaces. Either the platforms get ahead of it and they fix it themselves, or they will be regulated into shape. [But] what do we do in the meantime?”
That’s where Moonshot’s work comes in.
“One upside of being in the wild west is that the same techniques available to those who would do harm online are available to those of us who are trying to reduce it. At Moonshot, a lot of what we do comes down to reaching people at risk of harm and offering them an alternative path – some sort of way out. And, in doing so, we are effectively supplying the guardrails that should be and hopefully one day will be an inherent part of these platforms’ infrastructure,” he said.
“You can use commercially-available tools to reach an audience of people defined not by their age, or ethnic background or their religious beliefs or anything else but purely by what they’re looking for online. You can reach them with narratives that deliberately seek to undermine the harmful narrative they were searching for. Or, if they’re searching for known pieces of disinformation, you can reach them with evidence to the contrary or tools to teach them critical thinking.”
Hogan-Taylor said the increasing concentration of violent extremist groups in hard-to-access parts of the internet, like private groups or Telegram channels, wasn’t as significant a challenge as it might seem. That’s because law enforcement was likely still able to monitor these groups, and the main aim for counter-extremism groups like Moonshot was to reach the people vulnerable to radicalisation, not necessarily those who are all the way in.
“The thing about working on violent extremism in particular is they will always need to recruit. They may flock to more secure, private spaces for some conversations. That certainly makes our life harder. But they will always need to recruit,” he said.
“Also, people trying to join these groups will usually still start with search. They may end up in a very closed, difficult-to-access place, but they will start with search and we want to reach them at that point.”