Facebook is shirking its responsibilities in the wake of the Christchurch terror attacks, a Muslim leader says. David Williams reports
In 2015, Aliya Danzeisen found herself in Sydney remonstrating with Facebook’s global policy chief about a hateful anti-Islam page, seemingly from New Zealand.
As the Islamic Women’s Council’s assistant national coordinator, Danzeisen was attending a conference on countering terrorist propaganda, during a time when Islamic State was expanding its territory and carrying out horrific killings.
Danzeisen was concerned about the hate and abuse being suffered by peaceful Muslims in New Zealand, including a Facebook page that said Islam was a “disease” that “needs to be stopped”. Comments included “they all need to be put in a cage and set alight” and “bury them in pig sties”.
She had complained to Facebook but was told the page didn’t violate the social media giant’s community standards. Her concern was so great she also complained to police, who told her the page didn’t originate in New Zealand and they didn’t have jurisdiction to deal with it.
At the Sydney conference, Danzeisen stood up during a panel discussion involving Twitter’s global public policy head Colin Crowell. “Basically you’ve created a product that people are using that’s causing harm,” she told him. “And you have the responsibility to monitor it, to not cause us harm. If you’re producing this product, and you’re gaining money from it, then you should employ people to monitor it.”
She explained the anti-Islamic Facebook page, which had ‘NZ’ in the title, was causing huge concern within the Muslim community, but given Facebook and the police had washed their hands of it, she didn’t know what to do.
“Facebook is here,” Crowell said, pointing to Monika Bickert, the vice president of content policy. “You should talk to Facebook.”
She did. By the next day, Facebook had removed a similar Australian page and deleted posts from the NZ page, which, overall, Bickert said, didn’t violate its standards. Bickert wrote to Danzeisen: “I do want to emphasize (I know you know this, but I just have to repeat it!) that just because we allow something on FB doesn’t mean we agree with it.”
Fast-forward to last month, 11 months after the Christchurch terrorist attack, when Facebook published a white paper about online content regulation. It calls for governments, companies, and civil society to work together on new “regulatory frameworks”, while taking care not to “stifle expression, slow innovation, and create the wrong incentives”.
The paper’s author was Monika Bickert.
Danzeisen, who holds the government engagement role for the Islamic Women’s Council, says tech giant Facebook, and other social media platforms, have made some progress since the shootings on March 15 last year, but not enough. She attended a meeting in Christchurch last week, at which Facebook representative Mia Garlick gave an update on measures it’s taking. (Danzeisen says Facebook contributed $15,000 to the council’s last national conference.)
It hasn’t mollified Danzeisen, who says the white paper seems aimed at pushing what should be the company’s work onto governments.
“I was disappointed because they still don’t seem to get it as a company,” she says. She notes Facebook founder and chief executive Mark Zuckerberg’s reluctance to regulate political ads, even if they’re false. “He doesn’t really understand the impacts that these kinds of things have on communities, and on people – and on especially young people.”
Danzeisen says Facebook asking for government regulation on hate speech is like an obese person asking for law changes to protect them from fast food and fizzy drinks. “They [Facebook] have an ability to regulate themselves – what prevents them? Why should they impose it on the taxpayers to monitor them when they actually can do it themselves, today?”
Might a harder line by Facebook on racists and xenophobes drive them underground? She says: “If they go underground at least they’re not using this product.”
Fewer posts and pages might reduce Facebook’s profit, she says, but what about the damage, and the financial impact, on New Zealand from this type of content? She notes the mosque attack was streamed live on Facebook.
Fundamentally, Danzeisen says if hate speech and threats are on Facebook it’s the company’s responsibility to remove them, especially when the company is making money off them. Why should the government be the ‘bad guy’?
One measure that might help, she says, is identifying the country from which a page is created or, if a person is using a VPN to mask their location, flagging that the location is unclear. That way, people would know whether an online threat is being made from within the same country.
“Good products issue warnings or they correct the issue.”
However, if social media companies are unwilling to regulate themselves then governments will need to step in to protect the population, she says. “These things have personal consequences.”
(Sean Lyons, the engagement director for statutory online safety organisation Netsafe, says: “You’re generally in breach of terms and conditions of those platforms way before you’re in breach of any kind of legislation.” The frustrating thing, he says, can be having to report each harmful post.)
Adding friction
Garlick, Facebook’s policy director for Australia and New Zealand, tells Newsroom that after the Christchurch terror attack it restricted who can stream live, expanded its dangerous organisations policy, continued to invest in proactive detection of harmful and violent content, as well as human review, and invested in research to ensure the issues are understood.
“From our perspective we are trying to add as much friction as possible to people to prevent the sharing of harmful content on our services.”
Writing in the Financial Times last month, ahead of the white paper’s release, Garlick’s boss, Zuckerberg, said: “I believe good regulation may hurt Facebook’s business in the near term but it will be better for everyone, including us, over the long term.”
But does Facebook genuinely want regulation?
Garlick: “We absolutely agree with the concerns that both governments and communities have as to whether we are doing what we say we’re doing, namely having policies specific to harmful behaviour, investing in technology and tools to identify that content and removing it. We recognise as well that we need to be accountable to the public and to governments around the work that we’re doing.”
Facebook’s not waiting for regulation, she says. It is working on a “prevalence metric”, for accountability, and has established a data transparency advisory group. “From our perspective there’s a holistic approach to try and make sure that we are giving people that confidence and reassurance that we are very committed to reducing harmful content on the platform.”
Keeping up with bad actors
The mega-company’s community standards enforcement report, published in November, details its response to hate speech. Seven million pieces of content were “actioned” between July and September last year – more than double the amount over the same period the previous year. Its “proactive rate” of intervention lifted from 53 percent to 80 percent.
Gullnaz Baig, Facebook’s policy manager for dangerous organisations, says as “bad actors” evolve in the way they abuse the platform, the company will continue to evolve. She notes Facebook’s investment in The Global Network on Extremism and Technology as a contribution to research on the issue.
Garlick says the social media giant is committed to reducing people’s exposure to this type of harmful content. But is it taking the issue seriously?
According to one count, 510,000 comments are posted on Facebook every minute. Yet Facebook’s worldwide safety and security team has 35,000 staff (more than triple the number employed in 2017), only 15,000 of whom are dedicated content reviewers. Garlick confirms none are based in Australia or New Zealand.
Instead of calling for regulation, why doesn’t Facebook just do a better job monitoring its own product?
“This is why we’re investing so much in meaningful metrics,” Garlick says, pointing to its work on the prevalence of harmful content. “You can say to Facebook, how much content are you removing from the platform in particular categories, but that doesn’t necessarily tell you whether we’re actually doing a really good job.”
Speaking at Parliament just days after the attacks, Prime Minister Jacinda Ardern said of social media platforms: “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.”
Last month, however, at the Munich Security Conference, Zuckerberg argued for a third way – neither publisher nor postman – that Facebook was “somewhere in between”.
Garlick says when people ask if Facebook is a publisher, it’s often because they want the company to accept responsibility for managing and moderating its content. (Well, no, some would argue it’s because publishers are legally responsible for what they publish.)
“There’s no finite point to this,” Garlick says. “This is an ongoing piece of work.”
Facebook trumpets the billions of dollars it’s spending on proactive detection technology, research, and growing its counterterrorism team. This from a company worth $US530 billion, with shares selling for about $US190 each, and net income last year of $US18.5 billion.
Recent terror threat
Talk of online threats comes as 19-year-old Sam Brittenden appears in a Christchurch court, charged with failing to assist with a search warrant. Police were investigating a recent terror threat against the Al Noor mosque which, along with the Linwood Islamic Centre, was attacked by a gunman almost a year ago, killing 51 worshippers and wounding dozens more.
International research suggests a worrying nexus between online hate speech and violent acts. Hate groups use social media to organise themselves and recruit new members. Algorithms designed to maximise engagement can send malleable people down an extremist rabbit-hole, radicalising them.
In December, Netsafe released details of a survey which said 15 percent of adults in this country had personally experienced hate speech online in the past 12 months. Two-thirds of Muslim participants said they’d been exposed to online hate speech. About 70 percent of those surveyed thought online hate speech was spreading.
(Netsafe received almost 600 inquiries about the Christchurch attack – mainly about livestream video of the attack and the alleged gunman’s manifesto, but also from people who received or witnessed hateful messages inspired by the attack.)
The picture in New Zealand is changing. Gun laws have been tightened. Police have promised to flag hate crimes, and the Government is considering strengthening laws on hate speech. (Justice Minister Andrew Little and Prime Minister Jacinda Ardern didn’t respond to requests for comment.)
In a submission to the Department of Internal Affairs, InternetNZ, a non-profit lobby group, urged the Government to slow down proposed law changes to counter violent extremism online, including allowing the department to issue take-down notices and defining livestreaming as a “publication”. It was particularly worried about internet filtering, which it saw as interfering with free expression. (No one from InternetNZ was available for comment.)
There are also concerns streaming services might leave the country rather than deal with an onerous regulatory regime.
“I believe that you do need a body like the Broadcasting Standards Authority but one that is focused on the internet.” – Anjum Rahman
The New Zealand-initiated Christchurch Call, an agreement to eliminate terrorist and violent extremist content online, is now supported by 53 countries and international organisations, and eight tech companies, including Facebook.
Anjum Rahman, the Islamic Women’s Council’s acting national coordinator, spoke last September at a leaders’ dialogue event in New York related to the Christchurch Call. She called for leaders to realise the power differential between marginalised communities and dominant communities, and the need for diversity at senior levels of governments, tech companies, and within civil society and the tech community.
She tells Newsroom that because Facebook curates people’s feeds, forcing groups to pay to increase their posts’ reach, the company is more than just an intermediary. “I think there is a basis for regulation,” she says.
“I believe that you do need a body like the Broadcasting Standards Authority but one that is focused on the internet – and it could be through Netsafe or something else – that makes these decisions in a transparent manner that can be contested the way a Broadcasting Standards Authority decision could be.”
If such a body is created, there needs to be adequate representation from communities that are targets of hate, she says. “You cannot have regulation just from the point of view of people who’ve never had to experience what it’s like.”
The tech companies might argue they operate across too many countries, and deal with too much information, making it almost impossible to monitor hateful content. Rahman responds: “And yet they found a way to not have these videos go up.”
(Tech companies have been sharing “hashes” – unique digital fingerprints – to block terrorist and extremist videos. A synagogue shooting in Germany last October was streamed on Amazon’s Twitch service, while, last month, in Thailand, a soldier who shot dead at least 29 people in a mall posted photos and videos on his Facebook page for almost five hours.)
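(For readers unfamiliar with hash-sharing, here is a minimal sketch of the idea: each platform computes a fingerprint of an uploaded file and checks it against a jointly maintained blocklist. The names and digest below are illustrative only, and real systems such as the GIFCT shared database rely on perceptual hashes that can match re-encoded or cropped copies, not the exact cryptographic hash used here.)

```python
import hashlib

# Hypothetical shared blocklist of hex digests distributed between platforms.
# Real hash-sharing schemes use perceptual hashes that survive re-encoding;
# an exact SHA-256 match is used here purely to illustrate the concept.
SHARED_HASH_LIST = {
    "9f2c8a0d3e7b51c6a4f0e2d8b91c7a35f6e0d4c2b8a19e7f3c5d6b0a2e4f8c1d",
}

def fingerprint(file_bytes: bytes) -> str:
    """Return a hex digest acting as the file's 'digital fingerprint'."""
    return hashlib.sha256(file_bytes).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block the upload if its fingerprint matches a shared hash."""
    return fingerprint(upload) in SHARED_HASH_LIST
```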
Facebook’s white paper says regulation might lead to shortcuts, as companies let some problems slip to meet imposed requirements.
Rahman says: “That shows whether they are entering this space in good faith or not. Where is your social responsibility? We need to start dealing with this as more than profit-making … Actually you have some social responsibilities.”
Life, liberty and free speech
Rahman says the Christchurch Call is still pretty new, but it has been effective in getting countries to sign up and engaging tech companies. “They’ve started with the very extreme end … in terms of those online live streaming videos and hash-sharing.”
Where it gets bogged down, she says, is the step down from extreme violence, where consensus and the willingness of countries and organisations to act are harder to find. That debate can descend to individual words and particular sentences contained in agreements.
Signing up to the Christchurch Call means parties agree to certain conditions, including a commitment to human rights. The issue then, Rahman says, becomes regulating the platforms that don’t sign up.
The Christchurch Call has been outward facing, given the fraught nature of getting countries and tech companies to agree. “I understand that that has been difficult – or at least it’s a very delicate process, as the diplomats like to say.”
Therefore, the wider public in this country hasn’t had a deep debate on the same issues – the delicate balance between hate speech and free speech. It’s important that debate happens, Rahman says, especially when it comes to harmful comments against groups of people, based on religion, ethnicity or sexuality. “These decisions that are being, or will be made, they’ll affect us all.”
It will be interesting to see what role Facebook, in particular, which has called for regulation, plays in that discussion. Danzeisen says it was a blow to see Bickert’s name on the recent white paper, given she’d discussed jurisdictional issues and hate speech with her in 2015.
“It would be nice if their global policy person would actually acknowledge some of the harm that’s been caused by this stuff,” she says. “Speech is not higher than life and liberty. People should feel free also to be able to move around, and not walk about in fear … especially when you don’t know where the person is.”
Facebook should do more without being pushed to do it, she says. “You can see that they’re making some effort but it’s clear that there’s more they can be doing.”