Revelations that police trialled facial recognition software that uses a database of images unlawfully scraped from social media sites have raised concerns among civil society groups, Marc Daalder reports
Tech policy experts and civil society leaders have condemned the New Zealand Police for trialling facial recognition software built on a database of images scraped from social networks in breach of those websites’ terms of service.
Radio New Zealand first reported on Wednesday morning that police in the high tech crime unit conducted a trial of Clearview AI’s facial recognition technology without notifying the Privacy Commissioner, or even the Police Commissioner. Newsroom understands this occurred after police officials read a New York Times exposé on Clearview, which reported that the company had scraped billions of images from social media sites in violation of those sites’ terms of service.
If there’s a picture of you on social media – even if your aunt or mum uploaded it – then there’s a good chance your face is in the Clearview database.
Ultimately, police concluded that the technology did not meet their needs. However, RNZ reported on Friday morning that police conducted 150 searches of volunteers during the trial, as well as another 30 searches relating to five suspects. Those searches returned only one successful match.
RNZ also reported that emails released under the Official Information Act show police used the system for racial profiling. They searched specifically for wanted people who police said looked “to be of Māori or Polynesian ethnicity”, as well as “Irish roof contractors”.
Police national manager of criminal investigations Tom Fitzgerald told RNZ that officials wanted to see whether the system would struggle to identify non-Europeans.
Speaking to Newsroom on Wednesday, InternetNZ chief executive Jordan Carter questioned whether an American tool like Clearview could be effectively implemented in New Zealand.
“It’s not the way you do a good trial. What this does is it scrapes up data about what looks like billions of people off the public internet and uses machine learning and artificial intelligence in order to allow that to then search people. And the problem with that, it’s got all the limitations of AI in terms of systematic biases that get built into it. Any system like that could be riskily used as part of automatic decision-making that ends up harassing communities, unintentionally, but as a practical effect of the limitations of the technology,” he said.
“It’s clear that these automated systems present risks, where they tend to misidentify people. And they misidentify people in larger proportions based on the smaller the sample size. You can be sure that the diversity in the New Zealand population isn’t something that an American tool would have been seeing while it was being trained as a search engine.”
Andrew Chen, a research fellow at the University of Auckland’s Koi Tū – Centre for Informed Futures, agreed on Wednesday that this approach had drawbacks for New Zealand. “I think more likely than not, a tool that’s developed on a US population is not necessarily going to work in New Zealand,” he said.
“One of the things we don’t know in this case is who they were targeting,” Green Party MP and former human rights lawyer Golriz Ghahraman told Newsroom prior to the latest revelations about racial profiling.
“We know that oftentimes, in search and surveillance, police do target certain demographics. We don’t know if Māori/Pacific populations were targeted again. And if they were, separate to a privacy breach, there’s a discrimination issue.”
It remains unclear whether the trial led to any investigative or prosecutorial outcome. If it had, those cases could be jeopardised, Ghahraman said.
“Certainly if they went on to use information gathered in this way for policing, then it looks like they may have breached rights. I mean, they didn’t even collect the information and instead it looks like it was unlawfully collected. That’s never admissible, so that’s bad policing to begin with,” Ghahraman said.
“Police, of all people, are very aware that there are all sorts of legal frameworks around their ability to collect data, use it, share it and we don’t have any of that information about what they were doing in terms of the facial recognition data that they were collecting. More often than not, especially if it’s collecting information, will have to be with some form of judicial oversight, in every instance.
“This goes so far beyond anything within their current framework, to the point of buying mass data from a private company that may have been collecting it unlawfully themselves. If we take it steps further, are we actually infecting our fair trial processes and rights with this kind of information? It’s a technology that’s so new, we don’t even know if it’s accurate.”
Ghahraman is not the only person concerned by police use of Clearview. Chen told Newsroom the Clearview database represents a breach of privacy for those whose images may be stored in it.
“There’s been a lot of coverage about how Clearview got their data sources, but the long and the short of it is they did it illegally, because they breached the terms and conditions of all of these social media websites,” he said.
Carter also said he was concerned by the potential breach of New Zealanders’ rights.
“The harvesting that’s done for these purposes is all done on data that people published without knowing it would be used like that. People build their Facebook profiles over years. Facial recognition wasn’t a thing at the start of that. Is it reasonable and ethical for people to trawl back through that and build tools to identify people and make connections that people might want to be reasonably private about?” he said.
Clearview has also been linked to the far right. In April, the Huffington Post reported on the company’s broad ties to a range of alt-right figures. Even the employee who pitched the technology to New Zealand Police, Marko Jukic, had written that Jews needed to be “segregated” from non-Jewish Europeans.
Another person involved with Clearview, former Breitbart writer and Holocaust denier Charles Johnson, said the technology would be used “to ID all the illegal immigrants for the deportation squads”, according to the Huffington Post.
It remains unclear whether the police trial is ongoing, or whether it ended when the Privacy Commissioner was made aware or even earlier. Ghahraman told Newsroom she was seeking clarification.