Laura Littlewood, a technology and privacy partner at Bell Gully, says corporate clients’ approach to the use of generative AI has been “starting small, thinking about some really specific ways that it might enhance things that they’re doing in their business”.
Those smaller projects also sit within a wider AI vision or strategy for their businesses in the longer term, she says.
Projects might focus on enhancing internal efficiency – through analytics, for example – or on finding new ways to engage externally with customers.
But the relatively new technology still has its bugs.
“Accuracy is still a really challenging area for generative AI,” she remarks. “It’s been said that the first day of an AI tool is its worst day.”
Having human oversight of, and engagement with, the development of the tool is vital, Littlewood explains. It is also best practice to include disclaimers about the reliability of generated outputs and to be transparent about the use of AI technology.
Littlewood says while large international players still dominate the field, there are hybrid solutions coming out of New Zealand – such as hosting a large language model within a private cloud so that it can be trained on proprietary and confidential information without that information becoming publicly available.
She is also seeing “white label” solutions, such as those built on ChatGPT, where a consumer interacts with an AI platform that is seamlessly integrated with a third party. “It looks and feels like the company that they have already been engaging with.”
AI projects are often international, and much of the firm’s advice has been given in that context. “That’s given us some interesting insights into how the international requirements for AI apply in practical terms for different products,” she says.
Companies should seek legal input early in their AI adoption journey, not only to ensure compliance with current New Zealand legislation, but also to consider the “high watermark” of best practice set by international legal frameworks and to anticipate how the law might change in the future.
Littlewood recommends establishing a legal framework for responsibility and liability for an AI tool. This includes ensuring contracts are very clear on issues around intellectual property rights and reliability.
Contracts with the large language model provider will be relevant, she says, but so too will those with other service providers that might be called on, such as IT developers or data scientists.
Littlewood says the regulatory framework surrounding AI usage can be regarded less as an obstacle and more as a starting point for businesses to innovate.
She “absolutely” expects to see that innovation coming out of New Zealand.
Littlewood also draws attention to a well-known adage about the adoption of new technology: “We often overestimate the short-term implications of technology and underestimate the long-term implications.”