
How Will We Learn to Trust Learning Machines?

By James Higgs - 15 February 2017


The other evening, as my flight was descending into London in very heavy rain, it occurred to me that my life was entirely in the hands of machines. Despite the technologies involved being among the most tested and reliable in the world, it is hard in such situations to remain rationally confident of one’s safety. Yet complex as these systems are, they are all understandable, at least in principle, by non-specialists.

It should be clear to anyone paying attention that Machine Learning has taken huge leaps in the last couple of years. If software is eating the world, it’s now virtually certain that learning machines will seem like all-consuming, ravenous beasts by comparison. 

While enormous and astounding technical leaps are being taken regularly, there is very little consideration for how humans will interact with these advances, much less come to trust them, and this threatens to hamper their usefulness in many fields. 

As with the emergence of conventional software, Machine Learning is being exposed to the general public starting with the most obvious applications. In general, these are applications that already lend themselves to a solid statistical model; in other words, applications that can demonstrably outperform humans or conventional software.

Machine Learning and Health


One of the most obvious of these areas is the health space. It’s a sector that is ripe with potential, not least because the existing software is generally so bad – to the point that it often increases workloads – but also because there are already enormous amounts of data, outcomes are measurable, and even small improvements over human performance can result in many lives being saved.

Medicine already relies on statistical models to judge efficacy, so machines that can provably outperform humans are inevitably entering the treatment mix. For example, just last month, researchers announced that they had made an AI that can match trained doctors at identifying skin cancers. But it will take more than statistics to make doctors and patients confident in the results.

Ethically, any publicly available version of this technology would be obliged to tell the user that a machine had made the diagnosis, and our experience of working in health tells us that many people would want a human consultation to confirm such a result, even if they knew that the human could not outperform the machine. We have enough trouble accepting the bad news that a set of digital scales gives us, so it’s difficult to imagine us unquestioningly accepting a very serious diagnosis like that from an app.

We’ve also found that doctors themselves are reluctant to accept the output of learning machines, and we believe this is largely because many of them have been forced to use such awful software in the past. If we can’t build clinicians’ trust in conventional software, we can’t expect enthusiastic adoption of a fundamentally new form of computing. We need to start by repairing the reputation of IT in health by building software that doctors actually want to use, and health providers can start that work now, even if they’re not yet at the point of adopting machine learning.

Humans have struggled to accept conventional automation technology for decades, even centuries, and the industry is not doing enough to confront this issue with Machine Learning. Unlike conventional automation, learning machines offer no way to understand how they make their decisions. Machine Learning is not just a better autonomous technology; it is a different kind altogether.

Trust Issues


The standard of public debate around technology is nowhere near high enough for the multifaceted philosophical questions that AI will force us to confront in the coming years, and this is in large part an issue of trust.

When the perception (and sometimes the reality) is that tech companies are avoiding tax and other responsibilities, or deliberately yet silently eroding privacy, is it any wonder that people are reluctant to believe their assurances about these new technologies? And politicians, many of whom seemingly struggle to understand decades-old technology like encryption, will need help if they are to write meaningful laws to govern these new technologies.

What we have learned in our work with Machine Learning is that considered interactions can make all the difference. A machine can make a staggeringly complicated calculation quicker than we can blink, and even though we know this intellectually, we prefer a slight delay and even some indication that the machine is “thinking” to an instantaneous result.
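As a minimal sketch of this interaction pattern (the function and parameter names here are illustrative, not drawn from any actual product): a prediction call can be wrapped so it never returns faster than a floor duration, giving the interface time to show a “thinking” state.

```python
import time

def predict_with_thinking_time(predict, features, min_seconds=1.5):
    """Run a (possibly near-instant) model prediction, but never
    return sooner than min_seconds, so the UI has time to show a
    'thinking' indicator. Names are illustrative assumptions."""
    start = time.monotonic()
    result = predict(features)
    elapsed = time.monotonic() - start
    if elapsed < min_seconds:
        # Pad out the remaining time rather than returning instantly.
        time.sleep(min_seconds - elapsed)
    return result
```

The design choice here is deliberate: the padding costs nothing technically, but it changes how the result is perceived.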

Even when a machine produces provably better than human performance, many humans still want to know its “reasoning”. Maybe this is because AIs often produce completely counterintuitive results. One of the most stunning events of last year (a year full of stunning events!) was move 37 by DeepMind’s AlphaGo in the second game of its five-game series against Lee Sedol. AlphaGo had calculated that there was only a one-in-ten-thousand chance that a human would play the same move, and yet it was enough to effectively seal the victory against the world’s best player.

No lives are at stake in a game of Go, but as we see Machine Learning adopted in areas like health and mobility, the technical challenge is to provide some sense of the reasoning process, even if the AI itself is a black box that can no more be interrogated for an explanation of its precise reasoning than can a human brain.
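One widely used way to give users some sense of the reasoning, sketched here under the simplifying assumption of a linear model (all names are illustrative): rank each input by the size of its contribution to the score and surface only the top few.

```python
def top_contributions(names, weights, features, k=3):
    """Rank inputs by the magnitude of their contribution
    (weight x value) to a linear model's score -- a crude proxy
    'explanation' of why a prediction came out the way it did."""
    contribs = [(name, w * x)
                for name, w, x in zip(names, weights, features)]
    # Largest absolute contributions first; show the top k to the user.
    return sorted(contribs, key=lambda c: abs(c[1]), reverse=True)[:k]
```

For genuinely black-box models, techniques in the same spirit – fitting a simple local surrogate around a single prediction – are a common way to approximate this kind of summary without claiming to expose the model’s true internals.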

Maybe the time when we truly trust learning machines to make life or death decisions for us is a long way off, but we’re unlikely to get there if we don’t consider how humans interact with them at a very deep level. It’s not enough to simply publish statistics and wait for people to accept them, even when those people are doctors, trained in the scientific method.

A good deal of our work in both health and auto & mobility is already focusing on these questions. It should be no surprise that we believe that the key is a blend of design and technology in an integrated team. Technologists on their own are not going to place the needs of the user above their own ability to advance the state of the art, and designers alone are not going to be able to influence the underlying technologies.

As with previous step changes fuelled by technology, only design that puts users at the heart of solutions will find the way forward. We’re excited to continue this journey.


James Higgs

About James Higgs

James is Technical Director for ustwo London. He has more than two decades of experience in the industry and was once described as a “veteran software developer” by TIME magazine.