Why AI needs to be explainable: Part one

Explainable AI

In a world that relies heavily on information technology, AI that we can understand, interpret and trust is fast becoming a necessity. In the first of our two-part blog series, we explore the evolution of AI and the increasing need for explainable AI.

Why does AI need to be explainable?

Since its origins in the 1950s, the field of artificial intelligence (AI) has seen substantial and rapid growth. Today, AI is widely used across a multitude of industries and applications, from familiar IT chatbots to more complex applications in sectors such as agriculture, banking, education and medical diagnostics. In a world that relies heavily on information technology, AI has quickly become the norm. We are increasingly embracing the benefits of advanced computational power, and accepting them as crucial to modern societal development.

However, as AI has become more mainstream, a sense of disquiet has emerged about the extent to which AI makes complex computations – and arrives at decisions – without us being able to objectively understand why, or how. In other words, AI has quickly evolved beyond the point at which we can follow its reasoning. By overlaying artificial neural networks on machines with advanced computational power, AI has quickly exceeded the limits of human understanding.

AI: the ‘black box’

The traditional ‘black box’, or opaque, model of AI draws conclusions that are generally not understood even by its designers. The key drawback of this model is that it does not give us the opportunity to explain or understand the assumptions on which decisions are based, to challenge those assumptions and correct any deficiencies, or to make our own future predictions or assumptions based on the information it provides.

Recently, there has been an increased focus on explainable AI (also known as XAI, transparent AI and interpretable AI), a term which refers to AI techniques that humans can understand, interpret and trust.

Because AI is now being widely adopted in critical – often potentially life-saving – decision making, for example in the military, financial, judiciary and medical fields, there is an increasing need for transparency in order to improve accountability and precision, and to foster greater trust among users.

Furthermore, there is an ongoing drive towards the ‘right to explanation’, which refers to an individual’s right to know the basis on which a decision about them was reached – for example, why an AI algorithm deemed them eligible or ineligible for a financial loan. The right to explanation is addressed by the GDPR, which acknowledges the need to avoid traditional ‘black box’ approaches in AI decision-making:

“The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention.”

Enter: explainable AI

The fundamental principles of explainable AI are:

  • Transparency – being able to explain and communicate whether models have been thoroughly tested on a sufficiently large and fair distribution of subjects and demographics
  • Interpretability – being able to comprehend the ML model and present the basis for its decision-making in a way that we, as humans, can understand
  • Explainability – being able to identify the factors that influenced the ML model’s decision and how the output was reached

Explainable AI produces decisions that can be explained – and therefore refined, challenged or altered – by humans.
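To make these principles concrete, the sketch below shows one very simple form of explainability: an inherently interpretable model whose individual decisions can be broken down into per-feature contributions. It is a minimal, illustrative example only – the loan-style feature names, the synthetic data and the use of scikit-learn’s LogisticRegression are assumptions made for the purpose of the sketch, not a description of any real credit-scoring system.

```python
# Minimal sketch: explaining a hypothetical loan-approval decision.
# Assumes scikit-learn is installed; all data and feature names are synthetic
# and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "existing_debt", "years_employed"]  # hypothetical features

# Synthetic training data: 200 applicants, 3 standardised features.
X = rng.normal(size=(200, 3))
# Synthetic labels: approval loosely driven by income and employment, penalised by debt.
y = (1.2 * X[:, 0] - 1.5 * X[:, 1] + 0.8 * X[:, 2]
     + rng.normal(scale=0.5, size=200)) > 0

model = LogisticRegression().fit(X, y)

# Explain a single decision: each feature's contribution to the log-odds is
# its (standardised) value multiplied by the learned coefficient.
applicant = np.array([0.4, 1.8, -0.2])  # one hypothetical applicant
contributions = applicant * model.coef_[0]

for name, value in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
    print(f"{name:>15}: {value:+.3f}")
print(f"{'intercept':>15}: {model.intercept_[0]:+.3f}")
```

When the underlying model is more complex, teams often reach for post-hoc explanation tools such as SHAP or LIME instead; the common thread is that the basis for each decision can be surfaced, challenged and, where necessary, corrected.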

This is particularly important with regard to quality control and audit processes; the ability to understand how a decision was reached is crucial if we want to challenge or improve upon it. Whilst AI is a useful tool, it cannot completely replace the breadth of human knowledge and understanding.

Check back next week for part two of our Explainable AI blog.

Read more:

Our vision for how the NHS could implement XAI through its approach to AI lifecycle management