Should the Healthcare Sector Adopt Black-Box AI?
AI is arguably the most talked-about topic of recent times, and I am sure that everybody who has read or thought about AI has their own intuition on how it could change the world. This article seeks to stimulate thought and discussion on two opposing perspectives on the adoption of AI. To frame this discussion, I will first introduce a neural network (a computer system that learns to make decisions in a way loosely modelled on the human brain) that is able to detect breast cancer earlier than doctors can; it is called Mirai.
Regina Barzilay, Professor of Computer Science at MIT, was driven to develop a system like Mirai after she herself developed breast cancer, despite undergoing regular mammograms (breast X-rays) and repeatedly being given the all-clear.
Barzilay and her team trained the system by feeding it millions of mammograms from women who had been deemed cancer-free by a doctor and subsequently been diagnosed with breast cancer. Mirai established patterns in the mammograms, identifying features which the human eye cannot see or which doctors overlook. When Mirai’s predictions were compared with those of doctors who had considered all the relevant risk factors, Mirai’s were found to be nearly twice as accurate. Essentially, Mirai can see where cancer will develop, or see cancer at its earliest stages.
Mirai is a black-box AI. This means that humans can be certain of what Mirai produces, but not of how Mirai produces it. A black-box AI is built up of several layers with varying functions; each layer identifies different features of an image, but the links between these layers are non-linear, so it is not clear how the layers interact to reach a conclusion. Ultimately, the complexity of these systems means that their methods of producing results are unknown, even to their developers.
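For readers curious about what "layers with non-linear links" looks like in practice, here is a minimal sketch of a tiny two-layer network. The weights are invented for illustration and have nothing to do with Mirai's actual model; the point is that every number inside the network is fully visible, yet the non-linear composition still makes it hard to say *why* a given input produces a given score.

```python
import math

# Invented weights, purely illustrative -- not Mirai's real parameters.
W1 = [[0.8, -1.2, 0.5],   # layer 1: maps 3 input features to 2 hidden units
      [0.3,  0.9, -0.7]]
W2 = [1.1, -0.6]          # layer 2: maps 2 hidden units to 1 output score

def sigmoid(z):
    """A non-linear 'squashing' function used between layers."""
    return 1.0 / (1.0 + math.exp(-z))

def forward(x):
    # Each layer takes a weighted sum of its inputs, then applies a
    # non-linear function before passing the result to the next layer.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

score = forward([0.2, 0.7, 0.1])
# We can inspect every weight, yet the chain of non-linear steps gives
# no human-readable reason for the score -- that opacity is the black box.
print(score)
```

A real system like Mirai composes millions of such weights across many layers, which is why even its developers cannot trace a prediction back to a simple rule.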
Although Mirai could be used to diagnose millions of women across the world with breast cancer at an earlier stage, the AI has not been adopted by the healthcare sector in the way you might expect. One oncologist said, “The first rule of medicine is to do no harm.” Healthcare professionals feel uneasy about using a practice which they are completely unable to understand. Barzilay, however, asserts that because Mirai has been proven to be more effective than the current process, the black-box argument should not be part of the discussion.
Where the results are proven, is it reasonable to demand an explanation of a black-box AI’s methods?
Imagine a world where humans and goldfish could communicate with one another. The goldfish asks its owner: “Please explain to me why you pour those pellets into my tank every day.” To answer this question, the owner must explain why they keep the goldfish and sustain its life, what the pellets are, what role the pellets play in keeping the goldfish alive, and why the pellets must be poured into the tank. To explain the first proposition alone, the owner would have to explain the great extent of human development that makes it feasible for humans to keep goldfish in their homes, and what satisfaction they receive from doing so. One could argue that there is no possible way of explaining these matters to an animal whose brain is less than 1.5cm long. Considering that the goldfish depends on its owner’s benefaction to stay alive, does the pesky goldfish deserve this laborious explanation, which is surely incomprehensible to it, or should the fish keep swimming and eat its pellets?
This analogy provides a whimsical illustration of the argument in favour of adopting Mirai in hospitals. It serves to demonstrate that regardless of whether humans can understand how Mirai works, the results are proven. Those results would have a significantly positive impact, ensuring that women with breast cancer receive earlier, less invasive treatment and potentially have their lives saved. They could benefit millions of women and their families across the world, regardless of Mirai’s hidden methodology.
Consider an alternative scenario, set in a future Britain. A new political party emerges that promises it can bring ultimate prosperity and happiness to the country. However, because of the hypothetical politicians’ superior intelligence, they announce that their government will completely lack transparency and accountability to the citizens, on the grounds that the citizens could not comprehend the highly complex inner workings of government. There are many people who would say, “Yes, I would vote for this party, because of their proven results.” Yet how could you be sure that these results will always be achieved? Will their methods produce the same results when the country suffers a drought, or an invasion? Possibly, but there is nothing to support any conclusion. Even if you could trust this government based on its past results, its lack of transparency and accountability means you could never be certain whether it will get things right.
The preceding analogy highlights some of the potential risks of adopting and relying on black-box AI. There is a real danger that one false output from Mirai could mean that a person does not receive the cancer treatment they need, or suffers through invasive treatment that they do not need. An exhaustive list of the potential dangers is impossible, precisely because we do not understand how Mirai works. The benefits which AI like Mirai could bring are potentially immeasurable, yet so are the drawbacks. The possibility that humans will one day understand the methods of black-box AI is not precluded. So surely, the day humans can fully comprehend black-box AI is the day that we can be more comfortable with its adoption.
What do you think?