Terminator-like robot soldiers on the battlefield. ‘Deep fake’ videos that “can pretty much make anyone appear to say or do anything”. Social media manipulation where what happened in the 2016 US presidential election doesn’t even scratch the surface of what lies ahead.
William A Carter, Deputy Director of the Technology Policy Program at the Center for Strategic & International Studies in Washington DC, the world’s leading think tank for defence and national security, painted a sobering, sometimes alarming, picture of ‘Emerging Technologies and US National Security’ in a presentation at Victoria University of Wellington.
Talking to an invited audience of researchers and government officials at the University’s Centre for Strategic Studies, Carter said “it’s pretty widely accepted now that artificial intelligence [AI] is going to be a transformative technology for national security, on par with, if not exceeding, aircraft, nuclear weapons, computers and some of the other major technological revolutions of the past. It’s going to bring a whole suite of new capabilities, bringing previously unseen speed, precision, anticipation and pre-emption, and novel strategies and tactics, to every domain of warfare and every part of the national security spectrum”.
He said: “The key dynamic a lot of folks are really focused on now is this question of an AI arms race. To anyone who thinks we can avoid an AI arms race – we’re already in one. It was inevitable, because the potential advantages that AI promises are too dramatic for countries to ignore.
“That doesn’t mean you shouldn’t work to manage it. I really like the analogy to nuclear weapons. Nuclear weapons were developed and they led to a major arms race but they also led to concerted efforts to develop counter-measures and mitigation strategies (global regimes to counter proliferation, arms control, missile defence systems) to manage the risk of these things. We need to think similarly about AI.”
China and the US are the main contestants in the AI arms race, said Carter, but not the only ones, with Russia, Israel and the UK among other countries “investing significantly and doing really interesting stuff”.
International partnerships around emerging technologies are going to be key, he said, including for joint research and development; establishing global governance regimes and norms around national security applications; and data sharing, consolidation and verification.
“One of my big worries is if every country tries to go it alone China will ultimately win. One of the reasons is the US is being dragged kicking and screaming into the AI age and China is running towards it with open arms. We are so obsessed with our fears of AI that it blinds us to the potential of the technology, whereas in China the excitement about new technologies is palpable. It’s seen as an opportunity and a force for good. And that means they invest in it and embrace it while we fight it every step we can.”
Chinese innovation is inefficient, said Carter. However, “you can be really inefficient but if you throw enough money at a problem for long enough, no matter how inefficient you are, you’ll get there. If you look at the contrast, the US, the European powers, most of the Five Eyes [intelligence alliance, including New Zealand] are really underinvesting”.
Even if China doesn’t win the technology race, it could win the global leadership one “if we don’t define the way we want to think about governance, ethics, norms, the principles of this technology. I just don’t think we’re doing what we need to do in the US right now. This is something where there’s an opportunity for our allies to force our hand; if countries like the Five Eyes, like the European powers, like our major allies in Asia, begin governance discussions around how we should think about principles around AI, the US is going to have to participate. But everyone is sitting back and waiting for us to start the conversation and I don’t see our current government getting there”.
Openness and a willingness to partner have traditionally been strengths of the US, said Carter, “and it’s one we are very much turning our backs on at a terrible moment in history”.
The good news is robot soldiers are a long way off. “The technology is just not ready yet. I also think we’re just not ready to deal with that. Even if we had the technology, we don’t have the concept of operations, we don’t have the management and command frameworks to actually make that work.”
And even when AI soldiers do come, “the artificial intelligence we have now is not very intelligent and is nowhere near the point of being able to rebel against us or redefine its objectives in a way that’s harmful to us”.
The idea “we need controls in place because the machines are going to become self-aware and rise up against us is just not a real concern”, said Carter (adding, less reassuringly: “At least not for a number of decades”).
Nonetheless, he thinks AI soldiers open up a lot of potential.
“You have to think of it like a subordinate commander. Military commanders delegate to junior officers all the time. They give them a defined goal, a set of rules of engagement and then allow them to go out and make decisions in order to execute the mission. AI is another way to do that.”
Of more immediate concern are ‘deep fake’ videos and other new technological capabilities, including social media manipulation.
At the moment, ‘deep fake’ technology is “primarily used to make fake celebrity porn. But it’s very clear the potential has been recognised by intelligence and military services around the world and they’re going to start using this for information warfare and political manipulation”.
We have already seen the power of AI in information warfare, said Carter. “Even just a taste of it is terrifying and it’s only going to grow. I’m really worried about 2018 [in the US]. Although I suspect the Russians really aren’t going to do very much in 2018, partly because it’s the midterm election and partly because for their purposes it is more fun and also more effective to let us freak out and assume they are there while they don’t actually do anything. But I think 2020 is going to get ugly and I think they are going to be expanding their use of these techniques and tactics not just against the US but targeting European and other powers.”
Emerging technologies, said Carter, have transformed what we once meant by ‘the fog of war’. No longer is it a matter of confusion caused by the chaos and shortage of information in warfare. “Availability of information is not going to be a problem. But confidence in the information we have, the ability to trust the data we’re receiving and trust the insights we’re deriving from that data is going to be much more complex.
“Andrew Moore, who’s Dean of Computer Science at Carnegie Mellon University in the US, talks about how there’s a very real possibility that within all our lifetimes we will reach the point where essentially anything can be theoretically knowable from anywhere. There will be so many sensors in the world, so much connectivity, and we’ll have the capability to derive insights from that data and answer questions based on that data. But the flipside of that is we are also going to have an unprecedented ability to deceive each other, to lie, to manipulate information, to manipulate perception”.
William A Carter was brought to New Zealand by the University of Waikato for a symposium co-hosted by its New Zealand Institute for Security and Crime Science and its Political Science and Public Policy Programme.