Stakeholders in the tech industry are broadly supportive of the Government’s efforts to counter violent extremism online but split on a handful of key propositions, Marc Daalder reports

Internet Service Providers (ISPs) like Spark NZ and Vodafone have signalled their approval of new legislation that the Government will introduce in March in order to modernise New Zealand’s censorship regime. Newsroom exclusively reported on Thursday that new measures will ban livestreaming of objectionable content; permit Government agencies to issue takedown orders and fine non-compliant websites; and create a regulatory framework for internet filtering.

That latter proposal has non-profit civil society group InternetNZ concerned, with chief executive Jordan Carter telling Newsroom, “filtering is not an effective technical solution, so it cannot solve the problem of violent extremism online”.

Tech industry companies also disagree about how quickly the work should progress, with ISPs urging the Government to put some form of crisis response system in place before another potential incident, while Carter warns the timeline leaves insufficient time for consultation.

InternetNZ chief executive Jordan Carter (left) says New Zealand must think carefully before blocking any sections of the internet. File photo: Lynn Grieveson.

Stakeholders broadly supportive of measures

Azad Khan, spokesperson for the Foundation Against Islamophobia and Racism, hailed the Government’s proposals as an important step in the right direction. “What the Government is proposing has a lot of positives. Yes, it is great that terrorist livestreaming will be illegal – and rightly so,” he said.

Khan noted that the Government needed to be robust in enforcing the new laws, particularly against white supremacists.

Dr Mustafa Farouk, president of the Federation of Islamic Associations of New Zealand, also said the measures were “a very good first step. The fact that they are able to reach this level of rules around livestreaming I think is a very good step.” However, Farouk said, more action was needed on smaller-scale harms such as hate speech and hate crimes.

“Moving forward, we would like to see speech like anti-Semitism, Islamophobia to also be included,” he said.

A spokesperson for the Department of Internal Affairs (DIA) said the department was “pleased to hear that key stakeholders, including representatives of the technology industry and civil society, are broadly supportive of the approach to prevent harm and counter violent extremism online. We have worked hard to ensure that proposed changes to legislation have been informed by consultation with these interested parties.”

“New Zealand needs to be able to counter violent extremist content online with a strong and agile response. The Government is deeply committed to preventing harm to New Zealanders caused by violent extremist content online and our work complements other work being conducted across government.”

ISPs want Government to call the shots

Since the March 15 terror attack, ISPs have repeatedly asked the Government to create a framework for ordering the blocking or filtering of websites. In the immediate aftermath of the attack, ISPs butted heads with DIA officials, who wanted content blocked but didn’t have the statutory authority to demand it.

The list of URLs to be blocked was hosted in a Google spreadsheet and, on at least one occasion, an email full of website addresses slated for blocking was deleted by a spam filter.

A December Cabinet paper acknowledges the impromptu nature of the digital response to Christchurch. “While these efforts were effective,” it states, “the experience highlighted the inefficiencies and ambiguities in our censorship system for responding to objectionable online content, such as that depicting an act of violent extremism or terrorism.”

Moreover, ISPs don’t want to find themselves in the position of censors – they would much rather follow a Government order to block a site than have to choose to block it of their own accord.

“Over the past year Spark has taken steps to block a small number of websites hosting violent extremist content because we thought it was the right thing to do. But we’ve been very clear that we don’t believe broadband providers are the right parties to make decisions about which sites should be blocked and which shouldn’t, so we welcome the Government’s work in this area and will continue to play our part in supporting efforts to counter violent extremism,” a Spark spokesperson said.

Rich Llewellyn, Head of External Affairs at Vodafone NZ, told Newsroom: “The Christchurch 15 March attack was unimaginable, and in extreme circumstances Vodafone took immediate steps to limit widespread access to abhorrent content related to the attack, by blocking websites who were hosting the content. Since then, however, we have been consistent in arguing that it’s inappropriate for ISPs to be put in the position of deciding what New Zealanders can or cannot access in the longer term.”

New filtering regime comes in for criticism

Meanwhile, InternetNZ has come out strongly against the proposed filtering regime. Under the proposal, the Government would be granted the authority to establish internet filters in the future if one were required. This would bring the existing Digital Child Exploitation Filtering System (DCEFS) into a defined regulatory framework and open the door to further filters.

The Cabinet paper acknowledges that “filtering is not a silver bullet” and “should constitute the final step in enforcement after all other options are exhausted”.

The paper states that the DCEFS and ISP actions directly after Christchurch “are an ad-hoc solution to a long-term problem. If we see internet filtering as a legitimate policy response, an authorising framework is needed. Any new or even existing proposal should have a robust regulatory basis that provides for executive authorisation and public discussion, given the incursion on a free and open internet any filter may represent.”

Newsroom reported in October that the Government was exploring the possibility of a filter for violent extremist content and the Cabinet paper confirms that Minister for Internal Affairs Tracey Martin will “direct officials to commence work on a potential filter for terrorist and violent extremist content, including targeted consultation. I will report back to Cabinet in late 2020 on the progress of this workstream.”

Notably, the paper doesn’t specify whether a potential terrorism filter would be voluntary for ISPs to sign up to (like the DCEFS) or mandatory, for which there is no precedent in New Zealand.

Worries about efficacy

“We do not think filtering at the ISP level is a viable option (whether optional or mandatory for the ISP),” Carter told Newsroom. “Motivated users can find ways to get around a filter. Internet filtering can also introduce security vulnerabilities into a network and, as currently scoped, will not prevent harms occurring on platforms. The risk of overreach and therefore blocking legal and legitimate content is also high.”

“The evidence and analysis in the Cabinet papers do not justify the introduction of a filter.”

Martin Cocker, CEO of Netsafe, agreed that a filter on its own would have limited effectiveness. “You can go get that video today,” he said of the footage of the Christchurch shooting. “If you want that video, it’s on the internet, right? If people are going to go seek it out, filters aren’t going to stop them. They may disrupt them, make it more difficult, but they’re not really going to stop them.”

Even Llewellyn agreed. “Site blocking by an ISP is a relatively blunt tool,” he said.

“The reality is that sophisticated users can always find ways to circumnavigate filters, for example using VPNs. So while we will comply with any filter that is put in place to protect our customers, ISPs cannot prevent those who may choose to use such workarounds to access illegal content.”
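To illustrate why ISP-level blocking is such a blunt tool, the sketch below is a minimal, hypothetical example of hostname-based blocklist checking. It is not a description of the DCEFS or of any ISP’s actual system, and the blocklist entries and site names are invented for illustration. The point is simply that a filter can only match what it can see: a mirror site not on the list, or traffic tunnelled through a VPN, never triggers the check.

```python
# Hypothetical sketch of hostname-based blocking -- not the DCEFS or any
# ISP's actual system. It illustrates why such filters are "blunt": they
# match the hostname a user requests, so a mirror site not on the list,
# or traffic tunnelled through a VPN, is never caught by the check.

from urllib.parse import urlparse

# Illustrative blocklist; under the proposed framework a real list would
# be supplied and maintained by a government agency.
BLOCKLIST = {"example-extremist-site.invalid"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's hostname appears on the blocklist."""
    hostname = urlparse(url).hostname or ""
    return hostname.lower() in BLOCKLIST

print(is_blocked("https://example-extremist-site.invalid/video"))  # True: request refused
print(is_blocked("https://mirror-not-on-the-list.invalid/video"))  # False: a re-hosted copy slips through

# Traffic sent through a VPN is encrypted end-to-end, so the ISP never
# sees the destination hostname and a check like this is never reached.
```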

Benefits of filtering

However, Cocker raised the point that filtering can stop people from stumbling across content unintentionally. “You can protect people from accidental exposure. There’s a whole lot of links on Twitter and if you click on a link, you drop down into a site where you can see that video. Or that video being posted to Instagram and displayed to people. You can disrupt that using these tools and that seems like a reasonable thing to do.”

Farouk also highlighted the issue of accidental viewing. “If we were all adults, we could sit down and look at this material and make informed decisions. But the fact that this is accessible to all ages is the reason why this [measure] is important,” he said.

DIA defended the measures along similar lines, with a spokesperson saying it “recognises that web filtering has limitations. It is one of a range of tools that can assist in mitigating the harm of extremist content online, particularly for young people.”

Filtering can also be useful as a crisis response tool, kicking in for a short period until an emergency has abated. Cocker says it’s really an issue of “virality”. “We should allow filtering to be used as a way to combat the spread of that content and to block out actors on the internet that are deliberately trying to harm people,” he said.

“Those periods last for a week. The Christchurch video was viral for less than a week and part of the reason for that was because it was disrupted by major players like Google and Facebook and by ISPs doing filtering. Once you’ve disrupted the virality, then you can turn it back off.”

Facebook said it registered 1.5 million attempts to upload the Christchurch video in the 24 hours after the attack, but that those numbers subsequently died down.

Newsroom’s explainer on website blocking provides more information about the options available to the Government in this arena.

Timing also an issue

Besides filtering, stakeholders also clashed over the timing of the legislation. Vodafone urged the Government to act fast to put a system in place.

“While we believe the approach is sensible overall, we believe the Government needs to move faster to take ownership of what content is blocked online,” Llewellyn said.

Meanwhile, Carter cautioned that the process was moving too fast.

“While we welcome work to address the abuse of the Internet by violent extremists, we see some of the measures put forward as raising a range of potential impacts and complications. We are concerned that the planned introduction of legislation (in March) will not provide sufficient time to consider these properly,” he said.

A DIA spokesperson said “the New Zealand crisis response process is at final draft and has been developed in consultation with representatives from industry, government and non-government agencies. The process has been developed with lessons learned from the online response to the Christchurch attack and others that have followed.”

“The crisis response process is voluntary and is to provide a clear framework and process. This is to enable effective coordination of response for those that can assist in mitigating the harm of significantly harmful and potentially viral content online.”

A draft copy of the legislation has been circulated to stakeholders, including the ISPs and InternetNZ, but has not been released to Newsroom. In the Cabinet paper, Martin said she wants the law on the books by August.


Marc Daalder is a senior political reporter based in Wellington who covers climate change, health, energy and violent extremism. Twitter/Bluesky: @marcdaalder
