If you asked people about Artificial Intelligence, most, if they had an idea of what it was, would likely respond with concerns about the rise of the robots, job losses, sentient machine armies, privacy and transparency. You only need to look at the comments about Boston Dynamics' back-flipping robot this week to see how quickly we get dystopian about the prospect of advances in this area.
Dave Heiner, Vice President and Deputy General Counsel of the Regulatory Affairs team at Microsoft, sees things a bit differently. His main concern is that we're not deploying it fast enough. It's a surprising statement from a man whose job title contains the words 'regulatory affairs', but he's a self-proclaimed AI convert. He has spent the last few years contemplating all manner of issues in relation to AI, including data privacy, ethics, transparency and trust, and has been advising Microsoft – as well as governments and organisations on Microsoft's behalf – on AI policy frameworks and regulation.
To many people outside the world of technology, cloud computing and enterprise IT solutions, Microsoft means spreadsheets, Word docs, Windows and Bill Gates. But this past week Microsoft has been Down Under, hosting a summit in Sydney and showcases in Melbourne and Auckland, specifically focused on AI.
Having spent a few days in the company of their AI converts and evangelists, I can tell you: a) I gained a whole new level of appreciation for the dreaded PowerPoint and b) Microsoft is at the forefront of exploration and deployment in this field and as large as the perils loom, so too do the possibilities.
Artificial Intelligence is best understood as something of a catch-all right now for a range of buzzwords that includes: bots, machine learning, cognitive computing, deep learning, neural networks, natural language processing, inference algorithms and recommendation engines. These are not all one and the same but they are all loosely part of the AI family.
Anthony Scriffignano, Chief Data Scientist of Dun & Bradstreet, describes it as a 'collection of things designed to either mimic behaviour, mimic thinking, behave intelligently, behave rationally or behave empathetically.'
Following a period when computing power couldn't provide what was required to accelerate the development of AI (known as the AI winter), we're now arriving at the beginning of its golden age. We have near-infinite computing capability – the cloud, on demand and at immense scale. Coupled with vast amounts of data (yours, mine and everyone else's) and the algorithms to make sense of that data, we have the holy trinity required to advance artificial intelligence at a rapid rate.
Microsoft's view of AI is human-centric. Their Chief Storyteller (yes, that's his real job title), Steve Clayton, describes it as the 'amplification of human ingenuity with intelligent technology'. It's obviously in Microsoft's best interests to assuage fears about AI by humanising it and taking a lead role in the policy and regulatory discussions around it. The commercial implications are enormous: the company named AI as one of its top priorities in a financial filing in August, ditching 'mobile first' and putting AI into its corporate vision statement.
But Microsoft CEO Satya Nadella has recently called for the field of AI to be an open ecosystem, and the company has voluntarily established an AI ethics board, called Aether, made up of top Microsoft execs. It has a big focus on using AI as a weapon in the fight for a more accessible world. There does seem to be a commitment to advancing this field for a wider good rather than purely for profit.
Perhaps the clearest example of this is the work Jenny Lay-Flurrie and her team are doing. Lay-Flurrie is Microsoft's Chief Accessibility Officer. Only halfway through one of her addresses at the summit does she reveal she is severely and profoundly deaf. As PowerPoint auto-captions her talk for the benefit of other hearing-impaired people, Lay-Flurrie uses her time on stage to talk about disability in the workforce as a strength and a talent, and to introduce a visually impaired solution architect from Microsoft to demonstrate their newest accessibility product, Seeing AI. Given it was a visually impaired Microsoft employee who came up with the idea for the app, there is definitely walk to her talk.
Seeing AI is an app that uses AI to help the blind and visually impaired. It works by describing people, objects and even text to "narrate the world around you." The app can recognise friends and describe their emotions. With strangers, it can describe their gender, estimated age and what they're wearing. It will also recognise short text snippets and full documents and read them back to you. Like many tools powered by AI, it's not perfect, but it will only improve over time as it gains access to more data and gets more use.
This is also the case with virtual assistants like Nadia – a VA developed for the Australian National Disability Insurance Scheme by New Zealand company Soul Machines.
If you've seen Ken Loach's devastating film 'I, Daniel Blake' you'll be familiar with the scene in which the title character is trying to fill out an online application to access a benefit after many failed attempts at getting help from a person. I couldn't help but think of that when hearing from the Department of Human Services chief information officer, Gary Sterrenberg, at the summit. Sterrenberg is guiding the department, and the people who rely on it, through enormous change, and Nadia is one of four bots he has under way to help improve access to services.
It would be easy to be cynical about Nadia and her kind – soulless robots designed to make the task of eliminating people from the workforce a lot easier, but Nadia was born out of the department’s disability innovation panel, a group of people all living with disability. She was not the brainchild of cost cutters and efficiency experts but of people who regularly face technology that does not reflect the diversity of everyone who uses it and who rely on others to help interpret and access the world around them.
As we digitise more vital services – and as age, socio-economic status and literacy, as well as disability, become barriers to access – questions about how we scale support and interpretation services become more pertinent. Nadia is part of the solution to these problems, and it's in examples like these that the potential of AI becomes much clearer, and harnessing it becomes more purposeful.
AI is here to stay. It was, and still is, a creation of humans, and while our lives and work will very likely be disrupted by it, they will also be enhanced. It's through this lens that you can understand Heiner's concern that we're not deploying AI fast enough and that we tend to see the downsides more than the upsides. Chatting with him, we agreed that AI has a branding problem: 'artificial' carries negative connotations, and Heiner thinks the right term is 'computational intelligence'. Perhaps if that had caught on, we'd be less inclined to fear the machines and instead regard what we have created in them with a little more awe and imagination.
Anna Connell attended the Microsoft Summit in Sydney courtesy of Microsoft.