If a robot or an algorithm causes the death of a person, who is legally responsible?

In an AI Day presentation, Hudson Gavin Martin law partner Anchali Anandanayagam explored the legal and regulatory questions that will arise as artificial intelligence (AI) becomes commonplace.

“There is a core assumption behind our laws: there’s always a human actor. The perpetrator of a crime, or the creator of a beautiful piece of art, will always be human.”

Using the example of a hypothetical artificially intelligent robot, Anandanayagam laid out two possible paths. The first was to always trace the actions of the AI back to a person who had created it. The other was to treat the AI as a legal person.

Who made the rogue AI?

Working on the assumption that anything damaging an AI does can be traced back to a person, or a group of people, seems like an easy option.

“It’s been suggested that we need to have secure tamperproof records for each AI so that when something does go wrong and that impacts on a human, culpability can be found by understanding how, why and on whose behalf that decision was made.”
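
To make that idea concrete, here is a minimal sketch, in Python, of what a tamper-evident decision record could look like: each entry is chained to the previous one by a hash, so any later alteration of how, why, or on whose behalf a decision was made breaks the chain. The class and field names here are illustrative assumptions, not anything proposed in the talk.

```python
# Illustrative sketch only: a hash-chained log of AI decisions.
# Names (DecisionLog, record, verify) are hypothetical, not from the talk.
import hashlib
import json


class DecisionLog:
    def __init__(self):
        self.entries = []

    def record(self, actor, inputs, decision, rationale):
        # Chain each entry to the hash of the one before it.
        prev_hash = self.entries[-1]["hash"] if self.entries else "genesis"
        body = {
            "actor": actor,          # on whose behalf the decision was made
            "inputs": inputs,        # the data the system acted on
            "decision": decision,    # what it decided
            "rationale": rationale,  # why, as far as the system can explain
            "prev_hash": prev_hash,
        }
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append({**body, "hash": digest})

    def verify(self):
        # Recompute every hash; tampering with any earlier entry shows up here.
        prev_hash = "genesis"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if body["prev_hash"] != prev_hash:
                return False
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if digest != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True
```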

In practice it might not be as easy. The inner workings of algorithms are often kept purposely murky.

“Will we need to legislate the AI manufacturers to be more transparent about the decision-making algorithms and the data input processes?”

In the case of the hypothetical robot lawyer that gives incorrect legal advice, if the technology malfunctions, finding the manufacturer to blame is logical.

If the hardware was not at fault, it becomes harder to assign responsibility to a person or group, said Anandanayagam.

“Is it the builder of the decision-making algorithm … is it actually the regulators who allowed this technology to be used in this way or is it the lawyer or law firm that adopted the technology, programmed it to meet their preferences and put it up?”

Or, Anandanayagam asked, is the AI itself responsible for giving incorrect advice, because its decisions and behaviour are based on its own learning?

“Should things like the professional client care and contact rules and ethical rules that apply to lawyers also apply to AI?”

Are you even a person, bro?

The second option is to give AI its own legal identity. This raises another set of questions.

Would an AI have the same legal status as a person, or would a new category need to be made?

Anandanayagam pointed out that companies already have their own legal personality.

“But even though companies have their legal personality, we’ve decided as a society that actually companies are ultimately directed by humans, and that’s why directors of companies can be held responsible for the actions of the company and also face jail time.”

Criminal liability raised another question. The current legal system is based on the principle that a physical act doesn’t make somebody guilty unless the mind is also guilty, the concept known in law as mens rea.

“Can AI have a guilty mind?”

If it can, and can be held liable for legal sanctions, should it also have legal rights, and to whom would those rights belong?

Anandanayagam also raised the prospect of somebody deliberately using an AI to inflict harm, and in doing so shielding themselves from the consequences of their conduct.

As we develop our robot overlords, or robot slaves, she wants technologists to keep these questions in mind.

“I don’t have the answers to these questions that I posed today, but I do think we need to make some headway, not just in thinking and talking about this challenge, but also in making some decisions about how we as a society want to address it.

“I think in New Zealand we have a real opportunity to be global leaders in this space.”
