Why AI needs to be explainable: Part two


Explainable AI (XAI) is essential in allowing us to understand, communicate and adapt how machine learning models reach their decisions. Part two of our blog explores some of the benefits of XAI and how it can be used by, and benefit, healthcare professionals.

Missed part one? Read it here.

The role of XAI in reducing bias

The adoption of AI in critical and potentially life-saving areas such as the military, financial, judicial and medical fields needs to accelerate. To do this, AI developers need to incorporate XAI processes that enable them to identify and address potential bias, for example by retraining AI models on new or more relevant data.

This is particularly pertinent to the use of medical imaging algorithms. In most cases, AI software is trained on data from a specific hospital, group or geographical area. However, research has shown that when transferred to another environment (e.g. a different hospital or location), the AI can become much less effective at detecting conditions such as cardiovascular disease.

One such example occurred at NYC’s Mount Sinai Hospital: neurosurgeon Eric Oermann found that a mathematical model he built with colleagues was extremely accurate at detecting pneumonia in chest x-rays of patients at that hospital; however, when applied to images of patients at other locations, its accuracy declined considerably. Oermann attributed this to differences between patients, types of x-ray machines and even the angle at which the machines were habitually used. In his own words: “It didn’t work as well because the patients at the other hospitals were different”.

It’s important to reflect on the gravity of this finding: because the AI algorithm was explainable, physicians and scientists could recognise the bias and pinpoint exactly where and why it had occurred. The Mount Sinai model had learned to associate the use of portable x-ray machines with a high incidence of pneumonia. When put to the test in other locations, that association no longer held, yet the AI behaved as though it did.

Overcoming bias

When this kind of bias occurs, XAI can be used to drill down into the root cause of the problem and rectify the model, for example by re-training it with a more diverse data set. However, this is only possible if there is sufficient transparency to observe where the bias has arisen.
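To illustrate what that drill-down can look like in practice, here is a minimal sketch (not specific to any particular XAI product, and built entirely on synthetic, hypothetical data) of one model-agnostic explainability check, permutation importance. A simple classifier is trained on data where a “hospital” indicator is confounded with the label, and the check reveals that the model leans on the site rather than the clinical signal.

```python
# A minimal sketch of a model-agnostic explainability check that can surface a
# spurious "site" signal in a trained classifier. Data is synthetic and
# purely illustrative; feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical features: a genuine clinical signal plus a hospital indicator.
clinical_signal = rng.normal(size=n)
hospital = rng.integers(0, 2, size=n)  # 0 = hospital A, 1 = hospital B

# Disease prevalence differs by hospital in the training data (a confounder),
# so the label leaks information about the site, not just the clinical signal.
p = 1 / (1 + np.exp(-(0.8 * clinical_signal + 2.0 * hospital - 1.0)))
y = rng.binomial(1, p)

X = np.column_stack([clinical_signal, hospital])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Permutation importance asks: how much does performance drop when each
# feature is scrambled? A large drop for "hospital" is a red flag that the
# model relies on where the data came from rather than on clinical evidence.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, score in zip(["clinical_signal", "hospital"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

If the hospital indicator scores highly, that points to exactly the kind of site-specific shortcut seen in the Mount Sinai example, and re-training on more diverse data (or removing the confounded signal) becomes the obvious next step.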

Other studies have found similar biases. Data sets concerned with eye disease, for example, come almost exclusively from patients in North America, China and Europe, so algorithms trained to detect eye disease have historically been exposed to data that exclude other ethnic groups and geographical locations. As a result, these algorithms miss the subtle differences in the way eye disease manifests itself across ethnic groups and locations.

There is also evidence that algorithms built to detect skin cancer are much less precise when used for black patients, because the programs have been trained on light-skinned subjects.  

To avoid this kind of bias, medical imaging AI algorithms must be trained on data sets that are not just large, but also diverse across a broad range of characteristics, such as age, gender, geography, ethnicity and hospital of origin, to account for the way that disease can manifest differently across populations.
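One practical way to check whether a model holds up across those characteristics is a subgroup audit: scoring the same model separately for each cohort rather than reporting a single overall figure. The sketch below is a hypothetical example (the column names and values are invented) showing how per-group performance can be computed so that a drop for an under-represented group is visible rather than hidden in the average.

```python
# A minimal sketch of a subgroup audit: evaluating one trained model's scores
# separately across cohorts (here "site" and "skin_tone" are hypothetical
# columns) to check that performance does not silently degrade for any group.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Assumed dataframe: one row per case, with model scores and ground truth.
df = pd.DataFrame({
    "site":      ["A", "A", "B", "B", "B", "A", "B", "A"],
    "skin_tone": ["light", "dark", "light", "dark", "light", "dark", "dark", "light"],
    "y_true":    [1, 0, 1, 1, 0, 1, 0, 0],
    "y_score":   [0.9, 0.2, 0.7, 0.4, 0.3, 0.8, 0.6, 0.1],
})

for column in ["site", "skin_tone"]:
    print(f"\nAUC by {column}:")
    for group, part in df.groupby(column):
        # Each subgroup needs both classes present for AUC to be defined.
        if part["y_true"].nunique() == 2:
            print(f"  {group}: {roc_auc_score(part['y_true'], part['y_score']):.2f}")
```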

AI is an extremely valuable tool, but it is far from perfect. AI models require constant monitoring, assessment and revision to reduce the incidence of bias and model decay. Furthermore, AI still requires human expertise to identify and remedy biases that arise not just at the training stage, but throughout the AI lifecycle. To maximise the effectiveness of AI in medical imaging, it should be used to support diagnoses made by professional radiologists, drawing on their knowledge and experience, and not treated as a stand-alone alternative to a professional opinion.

Making AI explainable 

AI models rely on highly complex algorithms to arrive at decisions, drawing on enormous mathematical and computational power. So, despite our best efforts, it is not entirely clear whether we can ever truly understand advanced AI decision-making.

In 2015, a group of researchers at New York’s Mount Sinai Hospital applied a deep learning algorithm to a database of 700,000 patient records, totalling hundreds of thousands of variables. The resulting programme, known as Deep Patient, proved to be remarkably accurate at predicting a wide range of diseases that patients would go on to develop, from liver cancer to psychiatric illnesses such as schizophrenia (which physicians find difficult to predict). The algorithm was given no formal instruction; instead, it identified complex patterns in the training data, from which it was able to make accurate predictions of health outcomes.

In order for such tools to be used and trusted by healthcare professionals, we need to understand how they work. Unless AI is transparent and explainable, we’re unlikely to trust it to make important decisions about our health, or the health of others. 

AI plays a significant and valuable role in modern life, and our reliance on this important technology is set to increase. Although we recognise the importance of developing the field of XAI, we haven’t yet entirely opened the AI ‘black box’. Until we do, it’s crucial that we continue to ask the right questions, push for greater transparency and avoid relying on biased algorithms that we don’t fully understand.

Read more:

Find out more about implementing XAI with the Zegami Machine Learning Suite.