How would you feel about telling your problems to an AI doctor, or taking advice from a “robo-lawyer”? Would you be happy working alongside a robot co-worker? What about handing the care of your child or elderly relative over to a machine?
Big questions on the effects of Artificial Intelligence were starting to receive attention even before the Covid crisis forced us to think urgently about alternative work arrangements, and alternative ways to deliver essential services. For many of us, some of these possibilities will generate pretty strong responses, while others won’t bother us much at all. Maybe it will depend on other details. But which details?
At Otago University, with funding provided by The New Zealand Law Foundation, we’ve been running an interdisciplinary project to study these sorts of questions. It forms part of a wider programme of research into the growing role of artificial intelligence (AI) in various aspects of work and employment. We’ve just released our report setting out the key findings.
It covers a lot of ground. We begin with a survey of the AI systems likely to have the biggest impact on human jobs, ranging from autonomous vehicles and robots to decision-support tools and language-processing systems like chatbots.
We consider big questions about the intrinsic value of work and leisure, and ask whether AI could finally deliver on the century-old promise of the shorter working week.
And we discuss how AI methods are changing the experience of workers right now, in recruitment and job interviewing, in staff management and evaluation, and in new forms of workplace collaboration between people and machines.
A major part of our focus, though, was on the professions, and the extent to which AI might assist, disrupt or even displace them. To learn more about this, we consulted professionals and experts from many fields, exploring a wide variety of AI methods and applications. While we didn’t hear many easy answers, we did come away with some useful proposals for how to get the best from this technology, while avoiding the worst of the downsides.
Improved access to professional services.
A strong feeling that emerged from our discussions is that AI will help democratise access to professional services. Currently, the demand for services in areas like law, medicine and education far outstrips supply. This point was forcefully made in Richard and Daniel Susskind’s 2015 book ‘The Future of the Professions’, and our workshop participants reiterated it strongly: technology is likely to help bring essential professional services to a wider group of people.
Until now, AI systems have mostly been suited to back-office tasks that are routine and repetitive. In medicine, that could mean interpreting medical images and suggesting diagnoses; in law, drafting contracts and assisting with discovery. Increasingly, though, they are being used in client- or patient-facing contexts. A good New Zealand example in the field of law is the CitizenAI project, which has developed AI-powered chatbots that provide information on legal issues relating to employment (‘WorkBot’), tenancy (‘RentBot’) and prisoner rights (‘LagBot’).
Initiatives like CitizenAI give us direct access to legal information. Whether the other forms of labour-saving AI actually open up access depends on how they’re used. They could certainly bring down the cost of some services and free up time for over-worked professionals. But whether these savings are passed on to clients or patients is another matter. An alternative scenario would see them used to cut staff numbers, or to boost the profits of large firms, rather than to allow more people to access their services. There are also worries that we could end up with a two-tier system, where those who can afford it get access to dedicated human professionals, while those who can’t have to make do with the chatbot.
We need to get the usual things right when deploying AI systems in the professions.
If we’re to trust AI in these sorts of roles, we need to be confident that it actually works. That means its performance needs to be evaluated for accuracy – something that’s particularly important in high-stakes contexts, like medical tests. Just as we were finishing our report, the European Commission published a draft law that would create an EU-wide approach to AI. As well as banning some uses altogether, the new law would require accuracy testing and scores for all “high-risk” AI.
In New Zealand, our forthcoming Therapeutic Products Bill should provide for pre-market scrutiny of AI that will be used in many medical contexts, but we don’t have anything in the pipeline that’s as broad as what the EU’s proposing. If AI takes over more human professional roles and tasks, that’s something we’re going to have to consider.
We also need to keep in mind that accuracy and error rates won’t always affect everyone equally. Some have warned, for example, that an AI tool for detecting malignant skin lesions may struggle with images of darker skin. This wouldn’t be because such lesions are inherently harder to spot, but because the AI had been trained on a data set of predominantly white-skinned people. The same problem arose with the NZ Police’s trial of facial recognition software last year: it was hopeless at recognising Māori and Pasifika faces, because it simply hadn’t been trained on them. Before rolling out an AI tool in a particular population, we’ll need to be sure that it’s fit for purpose in that context.
There are also concerns about explainability. If the AI makes a mistake, or gives us a result that we just don’t like, will we be able to see its workings and figure out why? Is there a way to render the infamous ‘black box’ more transparent? This might seem less worrying when the AI system only makes a recommendation to a human doctor or lawyer, with the human making the final decision. But research has shown that, if an AI system performs quite reliably over a long period, it is easy for the human user to begin trusting its decisions and to stop noticing its mistakes. The problem here is with human vigilance, rather than machine performance – but procedures for addressing it must nonetheless be found.
A dehumanised future?
Even if we can be assured about matters like accuracy, transparency and bias, we might still wonder whether we’d actually want AI doctors or lawyers, teachers or carers. Take healthcare as an example. While we may not care too much about the ‘back office’ work – interpreting test results or x-rays, for instance – we may well want the face-to-face experience of speaking to a human doctor, nurse or counsellor. Concerns about empathy, compassion and emotional connection are common in this area. And then there are ethical duties, and responsibilities around cultural awareness. Many professional roles might involve a lot more than just imparting expert knowledge.
It’s tempting, then, to suggest that designers of AI systems should stick to the ‘technical’ parts, leaving properly ‘human’ roles to human professionals. We should be wary, though, of assuming too much here. For one thing, it’s by no means true that all human doctors have great interpersonal skills. Doc Martin and Gregory House may be caricatures, but we’ve probably all encountered healthcare professionals who are at least a bit like that, at least some of the time. And among the more genuinely empathetic and emotionally invested, stress and burnout are real concerns. It’s asking a lot to expect someone to be caring and empathetic as well as calm and reassuring, on top of being unfailingly technically excellent.
We should also be wary of assuming too much about what patients will actually want. Take elder care for example. Our first reaction to older people being looked after by machines might be to recoil at the lack of a human touch. But we should be open to the possibility that some people, some of the time, might prefer that option. Maybe it will be more empowering, more dignified, to be helped out of the bath by a care robot we can control, rather than having to wait for a human attendant. Maybe an AI that checks we’ve remembered to take our medication will assist us in continuing to live independently.
Different people will have different intuitions and comfort levels about all of this, and many of our comfort levels will probably shift (in one direction or the other) as we see more of these sorts of systems in action. It’s also possible that AI could get a lot better at emotional awareness and engagement than we tend to imagine.
The one thing we do feel confident in saying is that AI should be open and honest. AI systems interacting with clients should identify themselves clearly as AI systems, and not pretend to be human. We should know when we’re interacting with a human or a piece of software, and be able to make choices about what to do or say on that basis.
As AI systems become more humanlike in their abilities, this principle will become increasingly important. California has already passed a law requiring this in some contexts, and the European Commission is proposing something similar for that jurisdiction, at least for ‘high-risk’ AI. We believe that New Zealand citizens should be able to expect the same level of transparency and honesty.
Read the full research report: