Columnist Rod Oram sifted through a fresh and local report on what Artificial Intelligence might mean for New Zealand. He found we’re behind the curve and ready for neither the risks nor the opportunities.

The sector’s first comprehensive report on itself undersells the vast opportunities of Artificial Intelligence for New Zealand, while glossing over the greatest risks. AI is already being misused by some. With the technology developing at warp speed, it is beginning to challenge us on what it means to be human.

Instead, the AI Forum of New Zealand takes stock of where we are with AI, places us in an international context, takes a near-term view of where we should be and makes worthwhile recommendations on how we can get there.

It defines AI as:

“Advanced digital technologies that enable machines to reproduce or surpass abilities that would require intelligence if humans were to perform them. This includes technologies that enable machines to learn and adapt, to sense and interact, to reason and plan, to optimise procedures and parameters, to operate autonomously, to be creative and to extract knowledge from large amounts of data.”

Many advanced technologies already allow us to do more than we’ve long dreamed of. But we’re still more or less in control, even if unintended uses and consequences get bigger and more frequent by the day.

Who would have thought our innocent chat on Facebook would help the Russians influence the 2016 US Presidential election? Yet that was a far-from-isolated incident, as Cathy O’Neil, an American mathematician and ace writer of algorithms, exposes in her 2016 book Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy.

A former Wall Street quant, she showed before the election how many aspects of our lives are deeply affected by mathematical models that companies, governments and other organisations use.

The problem is not modelling itself, but the occasions when there is inadequate human control of the quality and appropriateness of the data, and a failure to use feedback loops to make sure the data and its conclusions are reliable and fair. Sometimes that lapse is inadvertent; other times it is deliberate.

Should we have the will to do so, we can still rein in all manner of technologies and try to repair at least some of the damage they’ve done. But AI is getting so powerful, so quickly it has the potential to elude human control. This prospect sharply divides tech giants of our time.

Last July, Elon Musk commented that AI is an “existential threat for human civilisation.” Mark Zuckerberg, Facebook’s CEO and founder, retorted on Twitter that comments like this are “pretty irresponsible,” prompting Bill Gates and Stephen Hawking to weigh in on Musk’s side.

The debate was raging long before Zuckerberg tweeted. An excellent exploration of the issue is a long piece in the New Yorker in 2015 entitled The Doomsday Invention: Will artificial intelligence bring us utopia or destruction?

The article is a profile of Nick Bostrom, a leader in this debate. A Swedish philosopher at Oxford University, he is the founding director of its Future of Humanity Institute and runs the Programme on the Impacts of Future Technology at its Martin School, which researches “the most pressing global challenges and opportunities of the 21st century.”

Still, there’s value in the AI Forum’s report Artificial Intelligence: Shaping a Future New Zealand. As part of its research, it surveyed major companies, finding that only 36 percent of them were having board-level discussions about AI. Overall, some 20 percent were using some form of AI system to, for example, mine big data, automate processes, enhance customer service and improve cyber security. The main tools are machine-learning algorithms that improve business processes. Some 52 percent of AI users say it is already a “game changer” for them, or will be.

The Forum acknowledges, though, these are large organisations and nearly half of them are in telecommunications and media. As early adopters, they are unrepresentative of the economy.

The research estimated that AI has the potential to increase our GDP by up to $54 billion by 2035 across 18 industry classifications. And the AI field is burgeoning: the Forum has identified an ecosystem of some 140 organisations already working with or investing in AI here. Air New Zealand and Xero are two of the leading practitioners.

The report also argues that AI frees up people to work on complex, higher-value tasks, and urges government and employers to build a robust system for training and re-skilling so people remain well-employed. Overall, it believes that AI-driven job displacement will account for only 10 percent of normal job creation and destruction over the next 40 years, by which time AI will be pervasive across our lives.

It does address briefly the profound issues AI triggers, noting that “AI raises many new ethical concerns relating to bias, transparency and accountability” and that “AI will have long term implications for core legal principles like legal responsibility, agency and causation.”

It also notes that the NZ Law Foundation last year began a three-year research project on these issues; and it calls for establishing “an AI ethics and society working group”.

At the launch of the report on Thursday, Clare Curran, Minister for Government Digital Services, said an action plan and ethical framework were urgently needed to “give people the tools to participate in conversations about Artificial Intelligence and its implications in our society and economy.” As a first step, she said the government would formalise its relationship with Otago University’s NZ Law Foundation Centre for Law and Policy in Emerging Technologies.

Some sectors will benefit greatly from Artificial Intelligence, while others will gain little.

The report judges that adoption of AI by the government “is disconnected and sparsely deployed.” We’re ranked 9th in the OECD on “government AI readiness.”

This, though, only parallels the still nascent adoption by companies. For example, “export earners including agriculture and tourism are starting to show early signs of vulnerability to overseas technology enabled competition and disruption.”

This was a common theme across the economy. “Throughout the research, participants were concerned that many businesses are simply being complacent about both the opportunities and the potential broader challenges of AI.”

We’re in the main pack of laggard nations. The handful of leaders includes Canada, the UK, France, Singapore and Estonia. China is by far the most ambitious. The report says its strategy is:

• By 2020 keep pace with the most advanced levels of AI technology globally.
• By 2025 achieve major breakthroughs in basic theories of AI to enable development of world leading technology and applications.
• By 2030 become the world’s primary AI innovation centre through the maturity of its AI research and technologies, and through its overall economic competitiveness.

To those ends it is investing vastly more in AI than any other country, while benefiting from its great economic scale and its uniquely tight control of internet technologies and their use.

While US tech companies have plenty of AI firepower to counter the Chinese, President Donald Trump has axed some of the support President Obama had given them. Even more bizarrely, he is favouring low technology over high by offering to scale back his tariffs on Chinese steel and aluminium imports into the US if Beijing cuts its heavy investment in AI, semiconductors, electric cars and aircraft.

Fortunately, the report from the AI Forum New Zealand makes these global trends part of its well-argued pitch for a national AI strategy. It makes 14 recommendations across six themes to further AI here: developing a coordinated national strategy, creating awareness and discussion, assisting adoption, increasing trusted data availability, growing the talent pool and “adapting to AI effects on law, ethics and society.”

This is a good start. But the Forum would make the case even more compelling if it attempted to describe the ways advanced AI in the decades ahead will help us work on the massively complex challenges we face, such as restoring the ecosystems of our lands and oceans. We’ll only achieve those and other great goals, however, if the Forum helps foster deep debate and understanding across society, so we are a leading nation in the use, acceptance and control of AI.

My recent brush with AI

When Brian Chen, a tech writer for the New York Times, downloaded all the data Facebook held on him, “it was like opening a Pandora’s box,” he wrote recently. Among the revelations were a list of some 500 advertisers who had his contact details and the number to ring his apartment entry buzzer. He was similarly surprised when he downloaded the data Google held on him.

Following the handy guide in his article, I downloaded my data held by Facebook and Google. I’m a very sparing user of Facebook, so it held less than 1 gigabyte of data. But it included a list of advertisers who knew something about me. I recognised very few of them, and had bought nothing from the ones I did know. They had wasted their money, but Facebook was the richer for it.

Google, however, held 28GB of my data, accumulated over the past decade in which I’ve migrated my emails and many other business tools to its platform. I’ve barely begun to discover what Google knows about me.

But among other things, it knows today exactly which YouTube videos I watched in 2009. More crucially, I keep location services switched off until I need something specific, such as a Google Maps journey. I use a lot of those, so I assume Google has an extensive record of my journeys at home and overseas, and likewise a complete record of my emails and my calendar events, so it knows whom I’ve met and corresponded with.

I would like to know much more about how Facebook and Google use the data they hold on me. If any member of the AI Forum of New Zealand would like to help me mine my data and understand that, I’m very keen to hear from them.

If I were paranoid, I’d suggest they contact me on an encrypted service such as WhatsApp. Or if I were just trying to avoid Facebook and Google, I could suggest they call or text me on 021 444 839, but I assume Spark collects and mines my data in ways I don’t know. Safest of all, they could send me a letter care of Newsroom.

The reality is email will have to do, at Rod.Oram@NZ2050.com, which is a Gmail address.

Disclosure: I’ve attended some meetings of the AI Forum’s advisory panel.
