Synthetic media and “deepfakes” pose a new threat to democratic processes if used in the wrong way, write Brainbox Institute co-directors Curtis Barnes and Tom Barraclough.
“Change reality according to our client’s wishes.”
So boasts the Archimedes Group, an Israeli company recently banned from Facebook and its subsidiaries. Accused of manipulating users and influencing online political discussions, the company claims to use every tool and every advantage towards the singular goal of “winning campaigns worldwide”. Rapid growth in digital audiovisual technology means the suite of available tools is constantly expanding.
In the near future, the online world is likely to be swamped by images, video and audio clips that make it look as if something happened when it did not. Information that appears to the average consumer as though it were created by capturing real events, but which in fact was not so much “captured” as “constructed” by means of advanced computation on everyday computer hardware.
Not merely cutting and pasting images together, but producing new informational artefacts through artificial intelligence. People talking, moving, saying and doing things they never did. Information that even the world’s top experts cannot reliably identify as authentic or fake.
Synthesised voices, “deepfake” videos, mixed reality – these things and more are part of the next wave of synthetic media, and it is unlikely that citizens are ready.
An artificial PM
How many know, for instance, that from around five minutes of video footage I could create a video of the Prime Minister doing something she never did? Or that from 30 minutes of audio, I could generate a model of her voice and have her say whatever I like?
This may not have been exactly what Parliament’s justice committee was envisioning when it launched its inquiry into electoral interference, but it is perhaps the most important new development it ought to be aware of. While most New Zealanders have not yet encountered the products of these new technologies, the conversation overseas reflects deep concern.
In the United States, multiple new laws have been proposed to deter the creation of malicious synthetic media. Meanwhile, the US Defense Advanced Research Projects Agency (DARPA) has been given more than US$60 million to produce digital countermeasures.
A good global test for whether something is serious is whether DARPA is funded to try to solve it. The problem is that digital forensics technologies may not be up to the task. Alongside this, there is a massive global shortfall of experts with the skills to detect false audiovisual information.
And that’s only with regard to what already exists. As these technologies continue to improve and proliferate, the potential grows for a saturation of audiovisual disinformation.
A few months ago in Gabon, a video of the nation’s president – who had not been seen in public for several months – helped trigger an attempted military coup. Opposition figures claimed the video was a so-called deepfake: a synthetic video created using deep learning neural networks. To this day, even the head of DARPA’s Media Forensics (MediFor) program cannot say for certain.
Even in New Zealand, things like audio recordings of politicians speaking can have massive political impact, and the technology now exists for relatively low-skilled people to create such recordings with little cost and effort. A well-resourced organisation – like a nation state or a large corporation – could create fakes that are not only incredibly persuasive, but which are more or less beyond detection.
It is against this backdrop that calls for intervention grow, including legal intervention. Some say the technologies should be banned. The problem is, what technologies? The way these videos and audio clips are produced is relatively unremarkable, despite their potential for misuse.
They are essentially the same techniques that bring dead actors back to our screens, or that swap our faces on our smartphones, or change our voices. These are technologies consumers love. They are used for communication, entertainment, art, and information exchange. They are media of expression, to which citizens still enjoy a legal right.
We have spent the past ten months investigating the many issues where deepfakes, synthetic media and the law intersect. Political interference and strategic misuse are only a subset of these, but an important one. They bring a new element to the ongoing debate over fake news and disinformation.
The Perception Inception report, released this week, is one of the first in-depth analyses of how law affects the use of these technologies – what it can do to help, and what it can’t.
So much has changed in the time the project has taken. When we began, the threat was a hunch. Now it is almost a certainty.
We have seen fake social media accounts, with profile pictures generated by freely available artificial intelligence applications, used to gain access to private information and even to affect the value of multinationals on the stock exchange. Some of these fake personas have LinkedIn accounts with connections and even endorsements, and websites full of written content likely generated by AI tools. We’ve seen major digital effects companies release tools that let consumers edit videos automatically, “rubbing out” things they don’t like and having the gaps filled in automatically to match the rest of the video.
Right now, a person can go online and create a digital copy of their voice for free; soon, they will be able to sit in front of a webcam and broadcast themselves to the world as almost any public figure they want. The technology exists – it just isn’t widely available yet.
There is no real lesson to this beyond the need for every one of us to be personally prepared. Most of us know we shouldn’t believe everything we read on the internet. Now, we must come to terms with the fact that we shouldn’t believe everything we see and hear on the internet either.
And all those photographs, videos and audio clips we are sharing on social media are now valuable training data from which fake information can be produced, even of ordinary people like us.
Barnes and Barraclough’s report on deepfakes and the synthetic media of tomorrow is now publicly available. The research was generously supported by the New Zealand Law Foundation.