It’s our wallets, not our lives, that face the most immediate danger from artificial intelligence, and the same technology used to commit crime could be used to fight it.

A recent report forecasting how artificial intelligence (AI) could be used maliciously predicts we will see cybercrime perpetrated by AI within five years.

The report, co-authored by universities and think tanks including OpenAI, the research group co-founded by Elon Musk, warns that AI researchers should take the dual-use nature of their work seriously and consider how it could be misused.

Published in February, before the Cambridge Analytica scandal broke, the report suggests new threats will emerge, such as political interference in which the automation of mass data analysis is used to create targeted propaganda.

Another risk the report identifies is a rapid increase in existing threats, as machines automate tasks such as scamming people out of money.

AI could dramatically increase a cyber criminal’s return on investment. The report says: “The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labour, intelligence and expertise. A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets.”

Rush Digital is an Auckland company working with AI. Its founder Danu Abeysuriya said AI could give criminals the ability to “parallelise criminal activity”.

“The thing with AI is it doesn’t forget, and it scales. You could have the same hypothetical person talking to 900, 9000, 9 million people – if you had the resources at the same time.”

Conversations between a fictitious Nigerian prince and the target of a forward fee scam, where people are asked to send money in order to receive a large pay-out, could be conducted via email or text, with a chat bot posing as the prince.

“Natural language processing or chat bots, they’re going to get better and better and better and more believable. Everyone is pushing to make them more believable so that when you contact your insurance company to ask some questions about the policy they don’t need to have a staff person. It could be a bot that is talking back to you. The consequence of that is that things like a forward fee scam can now be completely automated. Before they required a human.”

In the future – as video-chat proliferates and the level of avatar realism improves – these conversations could be conducted with a digital avatar posing as the prince.

“In 10 years it’s quite plausible that 95 percent of people won’t be able to tell the difference between a real and digital avatar.”

Another area where Abeysuriya sees potential for AI to be used maliciously is credit card fraud, where security is designed for human capacity, not AI capacity.

“An AI system could brute-force one of those credit cards in milliseconds. Basically, if you gave it enough computing power it could randomly pick out a pattern for credit cards and start generating CVVs and then testing those across different payment networks, so a card isn’t blocked. The act of stealing a card probably could turn into the act of randomly guessing a correct card.”

Abeysuriya said his team considers risk carefully when it embarks on an AI project.

“The stuff we are building right now, the concern we have is not that it’s going to pick up a gun and kill anybody. The concerns that we have are more around data privacy because our systems are using computer vision or collecting customer data. A lot of our discussions are around local processing, so anonymising data before it even hits the cloud. That’s the strategy we generally prefer.”
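
By way of illustration, a minimal Python sketch of that local anonymising step might look like the snippet below. It is not Rush Digital’s code; the field names, the keyed-hash approach and the device secret are all assumptions, but it shows the general idea of stripping identifying details before anything reaches the cloud.

```python
import hashlib
import hmac

# Assumption: a secret kept only on the local device, never uploaded with the data.
LOCAL_KEY = b"device-secret-key"

def anonymise(record: dict) -> dict:
    """Replace directly identifying fields with keyed hashes so the record
    leaves the device in pseudonymised form before it reaches the cloud."""
    cleaned = dict(record)
    for field in ("name", "email", "card_number"):  # assumed field names
        if field in cleaned:
            digest = hmac.new(LOCAL_KEY, str(cleaned[field]).encode(), hashlib.sha256)
            cleaned[field] = digest.hexdigest()
    return cleaned

if __name__ == "__main__":
    raw = {"name": "Jane Doe", "email": "jane@example.com", "visit_count": 3}
    # Identifying fields are hashed; non-identifying counts stay usable for analysis.
    print(anonymise(raw))
```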

Data is key to machine learning. Abeysuriya thinks governments and NGOs have the upper hand over criminals because they have more data, and the data they have is more useful. As he puts it, millions of photographs of cats won’t help criminals hack bank accounts.

“Defensively AI systems stand a good chance of doing a better job defending against criminal activity than perpetrating it, but it’s still early days,” he said.

There are examples where AI has been used to catch criminals.

Sweetie was a computer animation posing as a 10-year-old Filipina girl. She was created by a Dutch human rights group and used in a paedophile sting operation.

More than 20,000 users communicated with Sweetie during her 10-week life. A thousand who offered to pay money for her to remove her clothes were identified and their names were passed to authorities.

In New Zealand an AI chat bot called re:Scam launched by non-profit online safety organisation Netsafe didn’t catch criminals, but it did waste the equivalent of five years of online scammers’ lives.

Users could forward scam emails to the bot which then took over the conversation with the scammer in the most infuriating way possible.

Netsafe’s Director of Technology and Partnerships, Sean Lyons, said he was surprised how persistent some scammers were, even when dealing with a bot designed to frustrate them.

“There was one bank of responses where re:Scam sent a bank account number to the scammer one number at a time. That one was particularly frustrating for the scammers.”

Other responses included emails with information supposedly contained in non-existent attachments, and comments about the user struggling to follow the scammer’s instructions because pain medication was impairing their cognitive function.
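
As a purely illustrative sketch of how that digit-by-digit stalling tactic could be scripted (this is not Netsafe’s re:Scam code; the message wording and the generator approach are assumptions), a few lines of Python are enough:

```python
import random

def drip_feed_account_number(digits: int = 12):
    """Yield one reply per email, revealing a made-up account number a
    single digit at a time to keep a scammer busy."""
    fake_digits = [str(random.randint(0, 9)) for _ in range(digits)]
    for position, digit in enumerate(fake_digits, start=1):
        yield (
            f"So sorry, my connection dropped again. Digit {position} of my "
            f"account number is {digit}. I will send the next one shortly."
        )

if __name__ == "__main__":
    # Each iteration corresponds to one reply sent to the scammer.
    for reply in drip_feed_account_number(4):
        print(reply)
```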

Not all conversations re:Scam had were with humans. Lyons said some conversations were obviously with scam bots.

“There were some conversations we axed in the end because clearly it was just going around and around. There was little point in one bot talking to another bot for an extended period of time.”

Re:Scam became a victim of its own success. Demand for the service was so high that the resources initially allocated to run the bot for two to three months ran out, and it was shut down. Lyons hopes it will be relaunched at some stage, “bigger and better than before”.

Lyons said that, in general, AI strengthens our defence against cybercrime; however, he also sees the risk of it being used by cyber criminals.

“There’s enormous potential for entire services to be set up in order to try and part people with their money, part people with their identities, represent you on your share account, use your money to trade in all sorts of illegal commodities, use you either as a financial or product mule – almost without your knowledge.”

He said developers have ethical and moral responsibilities when they create technology, but at the same time he would hate to see innovation cease.

For Rush Digital’s Abeysuriya, AI’s role in cybercrime is an arms race.

“One side gets more sophisticated; the other side needs to get more sophisticated. It’s a constant cat and mouse.”

He acknowledges the short-term risk of AI being used by scammers, but is more concerned with its long-term threats, where human life is endangered.

“There’s an automated gun platform in Israel that terrifies me. It can make kill or incapacitate decisions.”

Currently a human is required to approve those decisions, but Abeysuriya worries that all it will take is a software bug – or for the technology to fall into the hands of a rogue state – for this safeguard to be overridden.

“If we are complacent about it, we’ll end up with an autonomous system that is going to be a pain in the arse to shut down.

“Humans are pretty good at building weapons, and really, really good at killing each other unfortunately – and AI is a mimicry of us.”
