As the Government decides how to take on social media giants following the Christchurch terror attack, it is looking around the world for other examples – and potential allies. Sam Sachdeva takes a look at how other countries are tackling the issue, and what we can learn from their successes and mistakes.

In the wake of the Christchurch mosque shootings, the public debate over greater regulation of social media networks has moved from if to when it will happen.

Prime Minister Jacinda Ardern has been outspoken about the need for tech giants like Facebook to be held to account – not just within New Zealand, but around the world.

“My strong view is that if we wish to establish a step-change in behaviour that we need to take a global approach,” Ardern told media on Monday.

“It’s all well and good to have domestic legislation that we think’s going to do the trick, but in my view it would be all the more strengthened if we had the international community asking the same thing. That’s something that I’m interested in.”

While a number of countries have started to plan or take action, as Ardern noted, “none are exactly the same”.

So what can we learn from the work done by other nations so far?

Keeping the public safe online

In a white paper on “online harms” released this month, the British Government set out a proposal for new regulations requiring companies to ensure users were safe online and illegal content was swiftly dealt with.

The paper suggests creating a “statutory duty of care” owed by companies to their users, with an independent regulator set up to oversee the new rules.

The regulator would have a wide range of enforcement powers, potentially including the ability to issue “substantial fines”, block non-compliant services, and hold individual members of a company’s senior management liable for any breaches.

The document says companies would be required to show what they were doing to tackle the dissemination of illegal content and activity for the most serious offending, such as terrorism and child abuse.

The regulator would also set out “an expedient timeframe” for companies to remove terrorist content, such as the video of the Christchurch shootings.

The UK’s proposal comes on the heels of draft regulations released by the European Union in September last year, which would require internet companies to remove or disable illegal terrorist content within an hour and deploy filters to ensure it is not re-uploaded.

A “systematic failure” to comply with the rules could lead to a company being fined up to four percent of its global turnover for the most recent financial year – a penalty which would reach over NZ$1 billion for Facebook.

The definition of terrorist content is deliberately wide, including information used “to incite and glorify the commission of terrorist offences, encouraging the contribution to and providing instructions for committing terrorist offences as well as promoting participation in terrorist groups”.

EU states would be required to designate a “competent authority” to issue removal orders to companies and impose penalties for those who failed to comply.

While the legislation is still working its way through the European Parliament, Australia has already passed its Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act, drafted following the Christchurch attack.

Australia’s Parliament moved quickly to legislate against violent online content in the wake of the Christchurch attack – but that pace has led to criticism about the effectiveness and unintended consequences of the law. Photo: Philippa Wood.

The law requires service providers and hosting services to “remove abhorrent violent material expeditiously”. Exactly what counts as expeditious is not defined, although Australian Attorney-General Christian Porter suggested “well over an hour” would be unacceptable. Penalties range up to AU$2.1 million or three years’ imprisonment for individuals, and up to AU$10.5 million or 10 percent of annual turnover for corporates.

The rushed legislation attracted the attention of two United Nations human rights experts, who wrote to the Australian Government raising concerns about its impact on freedom of expression and the lack of consultation with the public and civil society.

“We are mindful that depictions of egregious violence on the internet provoke legitimate concerns about public order and safety…

“However, it is precisely the gravity of these matters and their potential impact on freedom of expression that demand a thorough and comprehensive review,” the pair wrote.

The Australian Law Council raised concerns about potential unintended consequences, with council president Arthur Moses saying the law could stop whistleblowers from using social media to “shine a light on atrocities being committed around the world”.

Germany leading on social media

Germany is perhaps the most advanced country when it comes to social media regulation – something acknowledged by Ardern, who said its actions appeared to have caused Facebook to change its staffing levels.

The European heavyweight, historically strict on hate speech given its Nazi past, enacted the Netzwerkdurchsetzungsgesetz, or “NetzDG” law – also known as the Facebook Act – requiring “manifestly unlawful” posts such as hate speech to be removed from social media platforms within 24 hours, with fines of up to 50 million euros for non-compliance.

Less blatant violations must be reviewed within seven days by the social networks, which must also provide six-monthly reports on the complaints they have received and how they have been addressed.

Facebook’s most recent NetzDG report said it had received 500 complaints related to over 1000 pieces of content in the last half of 2018, with 63 people across three teams working on the process.

More than 30 percent of the complaints led to content being deleted or blocked, with over two-thirds of those occurring within 24 hours.

Diminishing hate, or amplifying it?

But Germany’s law has not been without its critics.

In early 2018, the leaders of the far-right Alternative für Deutschland (AfD) party fell foul of the new rules after accusing Cologne police of supporting “barbaric, gang-raping Muslim hordes of men”.

Party leader Alice Weidel and her deputy Beatrix von Storch then appeared in an ad with red tape over their faces, implying their free speech had been gagged – fuelling fears that the crackdown had “seemingly amplified the voices it was trying to diminish”, as The Atlantic put it.

Despite opposing the new laws, the AfD has also used them to its advantage: Weidel successfully took legal action against Facebook over a user who called her a “Nazi swine”.

The Hamburg court which ruled in Weidel’s favour raised further questions about whether it was enough for Facebook to “geoblock” the offending comment for German users, leading to concerns about whether the company would effectively be forced to apply Germany’s laws internationally.

From the left, Green MP Konstantin von Notz told The Guardian the law had been rushed through Parliament with major flaws, and delegated responsibility for legal decisions to tech companies.

Another concern – and one which may be on the minds of Kiwi politicians and officials contemplating changes – was Russia co-opting the framework of the German legislation for its own law.

“Our worst fears have been realized – the German law on online hate speech is now serving as a model for non-democratic states to limit internet debate,” Reporters without Borders Germany executive director Christian Mihr said.

New Zealand is still some way from formulating its own plan, with the Government focusing on the firearms reforms as its immediate priority following the attack.

But the lessons from overseas would seem to suggest that a carefully considered approach to social media, rather than hasty action, may be best – and striking the right balance between freedom of expression and tackling extremist content will be tricky, to say the least.

Sam Sachdeva is Newsroom's national affairs editor, covering foreign affairs and trade, housing, and other issues of national significance.
