Outgoing Chief Censor David Shanks tells Newsroom in an exit interview that he’s had to view the Christchurch terror video “many times” as part of his work, Marc Daalder reports
The man who made international headlines for banning the video of the March 15 terror attack and the terrorist’s manifesto will step down on Friday.
David Shanks’ second term as Chief Censor ends on May 7, after a total of five years in the job. While he knew going into the Office of Film and Literature Classification that it would be challenging, he told Newsroom in a retrospective interview that the last half-decade had been full of surprises.
“Coming into the role, I think it’s fair to say I didn’t know what I was letting myself in for,” he said. “I certainly couldn’t foresee that the Office would be confronted by, for example, publications directly relating to and recording the worst terrorist atrocity that the country has ever experienced.”
The specifics were unexpected. The general trend of online content playing a bigger role in his work wasn’t.
“What I did have a sense about coming in was that this potentially at least was an area which was going to be increasingly important and complex. Even back in 2017, it was increasingly obvious and clear that there was some issues and challenges and problems with the internet.”
The classification office, in its current incarnation, dates back to the 1990s when it was formed out of three separate censorship bodies. It’s not a body designed for the internet age, although its establishing legislation got a significant update last year.
In order to cope with the deluge of harmful online content, Shanks has had to get creative with the tools that already existed in the legislation. The Office has a call-in power to proactively classify content that hasn't been submitted for rating by commercial companies or referred by police. That power was rarely used in the 1990s but is invoked more frequently today, when most harmful online content isn't sent to the office by its creators for classification.
“I think what you’re seeing now is an appreciation that relying on a reactive law enforcement – identify, detect, submit, classify – mode… given the volumes on the internet, with this sort of material? Given the fact that 500 hours of content goes up on YouTube every minute and some proportion of that is going to be in some of these sorts of categories? You’ve got to be thinking in different ways about how you get more proactive and more intentional and more structured about your approach.”
The Office has always been tasked with rating or censoring objectionable content – usually child sexual abuse material – but the commercial function of classifying books, films and television shows tended to dominate its work.
Even when Shanks took office in 2017, the commercial work took up a majority of the office’s time and effort. But that has now changed.
“That has been steadily shifting. More and more, we are spending time and effort on, not necessarily objectionable material but material on that harmful spectrum.”
One of the challenges with classification in the internet era is that there’s a lot of material that isn’t classifiable but is still relevant to the office’s work. Last year, Shanks released a report on misinformation in New Zealand, which he acknowledged at the time fell completely outside of his purview from a censorship perspective.
“Actually we were deliberately looking at a whole sector of content that we know we can’t classify. But what we can see is this is precondition material that leads to harm, extremism, extreme criminal events that do fall into our area for classification,” he said.
“You can have us just operating at the bottom of the cliff, watching the funnel operating with more and more of that potentially being generated, or we can again get more proactive in terms of understanding and thinking about what that looks like further up the conveyor belt and starting to have a discussion across the system about what we do about that.”
Shanks said he hopes the Government's wide-ranging media content regulatory review will keep that strategic view in mind. It's not enough for each agency working on content regulation, countering violent extremism or preventing online harm to work in its own silo.
“You simply have to have a joined-up, strategic approach in these spaces. I think the issues with harmful and/or illegal content on the internet, or mis- and disinformation or online hatred – all of these things are connected, it’s very difficult to draw bright lines between any of them.
“But they’re a serious problem and getting more and more serious. The whole trend line of the entire five years that I’ve been in the role, it started out bad and has got worse and worse.”
Until Shanks’ replacement is appointed, Deputy Chief Censor Rupert Ablett-Hampson will take over the role in an interim capacity. The outgoing censor won’t say yet what’s next for him, beyond hinting at “an exciting prospect”. He said he wants to continue working in this space, despite the toll it takes – and that toll can be significant.
“How often do I have to view objectionable content? It really varies. In a response phase to something like the March 15 attacks, that is footage and material that I’ve had to view many times. And I’ve had to view many variations and associated products arising from that event,” he said.
That’s the reality for many of the staff at the office as well. In attempting to cultivate a healthy workplace culture around inherently unhealthy work, Shanks said he compares the job to being part of an asbestos removal crew.
“It’s actually part and parcel of our job and our role that we are exposed to this dangerous material. Stuff that will harm us. And so we need to think about it in terms of protective equipment, the resources we can call in, counselling and how we support each other, to make sure that we’ve got policies and processes to keep ourselves safe.”
The most challenging material for Shanks to grapple with, however, hasn’t been that submitted by law enforcement for classification.
“It’s stuff that I’ve been exposed to on particular forums that I’m checking out where extreme violence or material is propagated. Or, in fact, sites that I’ve gone to because a young person said, ‘Hey, I saw this thing when I was 13 years old and I’ve never forgotten it’,” he said.
“I’ve literally operated on a kind of semi-investigatory role in terms of some of that stuff. And that’s been some of the worst stuff I’ve ever seen – which, to me, just reinforces the importance of figuring out what we’re doing here and how we can improve that picture. Because if I, a grown man with counselling and supports in place and who has signed up to the role have been impacted by this, this hard, by something that someone has seen as a child, we’re in a not-okay situation.”