The European Union is proposing a four-tier structure for responding to the risks posed by artificial intelligence technologies – the first comprehensive legal framework of its kind in the world.
“At the top tier is what is known as ‘unacceptable risks’, and the EU has identified certain categories of AI systems that it deems unacceptable and therefore they are banned outright,” explains Richard Massey, a senior associate at Bell Gully specialising in consumer law and emerging regulation.
“That includes, for example, real-time facial recognition technology in public places.”
The next tier down concerns “high-risk systems” which, instead of being banned outright, must go through assessments to show they conform with existing EU product standards. The two lowest tiers require more general standards around transparency and data governance.
Massey says New Zealand businesses should be paying attention to the EU’s AI policy. The EU law as currently drafted, he explains, has extraterritorial scope, so all AI systems where the output produced by the system is intended to be used in the EU – even if they originate outside the EU – would have to comply.
Governments in various other countries are also starting to develop proposals for the regulation of AI. The United States has issued an AI Bill of Rights which sets out core principles for safe use of AI systems, such as avoiding bias in algorithms. The UK has proposed a “light touch” approach, Massey says, and for now is looking at addressing AI-specific concerns within the scope of existing laws and regulations.
While New Zealand doesn’t currently have any specific AI laws, that doesn’t mean there aren’t broadly applicable rules that New Zealand businesses should be aware of, Massey says. For example, AI systems that use personal information will need to comply with the Privacy Act, including obligations around the collection, storage and use of such data.
There are also consumer law protections that may be relevant under the Fair Trading Act, Massey says, including the need to ensure that content produced by AI systems, and the way in which the systems are marketed, is accurate and doesn’t mislead or deceive consumers.
In addition, developers of AI systems such as large language models will need to ensure that the information used to train the system and generate content complies with intellectual property laws such as copyright, which has been the subject of various recent claims overseas.
One of the difficulties in designing appropriate AI regulation is the need to keep pace with advancements in AI systems.
Massey says the speed of AI development is one of the “real challenges” for governments seeking to regulate it. “It is an unparalleled example of fast-moving technology, and that creates something of a conundrum for regulators. What might work now in terms of regulatory obligations may well not be fit for purpose in five to ten years’ time.”
As a result, laws which are based on current AI systems may need regular updating as the technology develops, says Massey. “That will be one of the difficulties the EU faces with its more prescriptive regime.”
Bell Gully is a foundation partner of newsroom.co.nz