A vision for how the NHS could embrace accountable AI lifecycle management


One of the most crucial developments in AI – and one which has enjoyed exponential growth recently – is its application for the advancement of medical research and predictive analytics. When applied to the analysis of large and complex data sets – for example databases comprising hundreds of thousands of medical images from multiple sources and locations – AI is a very useful assistive tool, supporting triage and diagnosis that would otherwise require considerable time and expertise from highly experienced clinicians. 

Since 2013, there has been an increase of over 1,400% in the amount of clinical data collected, and this amount of information can only be managed – and analysed – with the help of accurate data visualisation tools.   

As a result of increased demands placed on services by the COVID-19 pandemic, the NHS is under more pressure today than ever before, compounded by delays to routine screening procedures and extensive waiting lists for elective surgeries. One area in which we could facilitate rapid and dramatic improvement is in the adoption of AI for faster and more accurate diagnoses.  

The NHS is in a prime position to lead the way on Medical Imaging AI 

The NHS Long Term Plan underlines the importance of embracing new technologies such as AI, whilst harnessing the wealth of data currently held by the NHS. Currently, there is a lack of universal standards, transparency, consistency and quality in the AI tools used within the healthcare sector. Whilst AI has enormous potential to transform healthcare by facilitating faster and more accurate diagnoses, much work is needed in order to optimise the effectiveness of this technology. 

Until now, issues such as inconsistencies in data quality, evaluation and audit, and the underperformance of certain AI tools in specific settings and populations have made it extremely difficult to carry out systematic testing and reach any objective assessments of results. It is extremely important to test AI tools and ensure that each system is reliable and effective, in order to: 

  • Reduce bias 
  • Identify outliers 
  • Avoid false positives/negatives which can skew diagnoses 
  • Verify and validate training datasets to ensure accuracy 
  • Detect and remedy degradation in AI programmes, avoiding poor generalisation and ‘concept drift’, which occurs when data changes over time in unforeseen ways 
  • Increase predictive power 
  • Foster trust in users through transparency 
  • Improve outcomes (e.g. for patients) 
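Several of the goals above – particularly quantifying false positives and false negatives – reduce to standard evaluation metrics. A minimal sketch in Python, where the ground-truth labels and model predictions are illustrative placeholders rather than real clinical data:

```python
# Minimal sketch: measuring false positive/negative behaviour of a binary
# diagnostic classifier on a held-out test set (1 = positive finding).

def confusion_counts(y_true, y_pred):
    """Return (tp, fp, tn, fn) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

def sensitivity(tp, fn):
    """Proportion of true cases the tool catches (misses are false negatives)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

def specificity(tn, fp):
    """Proportion of healthy cases correctly cleared (errors are false positives)."""
    return tn / (tn + fp) if (tn + fp) else 0.0

# Illustrative ground truth vs. model output
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]

tp, fp, tn, fn = confusion_counts(y_true, y_pred)
print(f"sensitivity={sensitivity(tp, fn):.2f}, specificity={specificity(tn, fp):.2f}")
```

Tracking these metrics across settings and patient subgroups is one way the bias and outlier issues listed above become visible in practice.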

We aim to accelerate the safe adoption of the most reliable AI tools into the NHS by carrying out a full and comprehensive review of AI tools currently in use to interpret medical images related to COVID-19, breast cancer and other conditions.  

Reluctance to adopt AI 

Despite the increasing prevalence of technologies like AI, organisations such as the NHS can be slow to adopt them. This may be the result of a variety of factors, ranging from a reluctance among staff members to change established working practices, to fears about being replaced by an automated system and even suspicion surrounding the reliability of technological solutions.  

The most effective way to secure the collaboration of staff members is to ensure that they are fully invested in the process from the very beginning. Staff members who are involved in acquiring AI solutions at the procurement stage, whose opinions about which solutions will best meet their needs are heard, and who are fully trained and supported in the implementation of new tech, will be much more likely to support its adoption. 

Concerns among management regarding the cost of purchasing solutions such as AI are gradually being replaced by an acceptance that such technologies in fact save both money and time, whilst simultaneously improving outcomes.  

It’s clear that as healthcare systems globally face increasing pressure, the adoption of AI is both a useful and inevitable development. 

The Zegami AI lifecycle management approach 

Initially designed by a team at the University of Oxford, Zegami’s AI lifecycle management system – which is hosted on Microsoft’s Azure cloud – uses built-in AI functions to accurately evaluate data and AI tools. Zegami can formalise the monitoring and assessment of AI tools used by the NHS in the following way: 

  1. Review, clean and organise imaging databases of thousands of X-rays from the NHS for COVID-19 patients, breast cancer patients and other conditions 
  2. Assess image quality and highlight trends 
  3. Generate testing datasets which are representative of the population 
  4. Test AI tools for image interpretation tasks, using unbiased independent testing datasets 
  5. Analyse results using high-throughput data analysis and visualisation with Zegami, in order to stress test each AI tool against these criteria 
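Generating a testing dataset that is representative of the population (step 3 above) is commonly done with stratified sampling: drawing the same fraction from each subgroup so the test set mirrors the population's composition. A hypothetical sketch, where the stratum (an age band) and record fields are assumptions made for illustration:

```python
import random
from collections import defaultdict

def stratified_sample(records, strata_key, fraction, seed=0):
    """Sample the same fraction from each stratum so the test set
    mirrors the population's composition (e.g. by age band or site)."""
    rng = random.Random(seed)  # fixed seed for a reproducible test set
    groups = defaultdict(list)
    for rec in records:
        groups[strata_key(rec)].append(rec)
    sample = []
    for members in groups.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(rng.sample(members, k))
    return sample

# Illustrative records: imaging studies tagged with a demographic stratum
records = [{"id": i, "age_band": "over65" if i % 3 == 0 else "under65"}
           for i in range(300)]

test_set = stratified_sample(records, lambda r: r["age_band"], fraction=0.2)
print(len(test_set))  # 20% of each stratum: 20 + 40 = 60 records
```

In a real evaluation the strata would span several axes at once (demographics, pathology, imaging site and equipment), precisely because tools can underperform in specific settings and populations.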

This systematic, extensive evaluation of AI tools currently in use will: 

  • Identify limitations 
  • Determine areas for improvement 
  • Guide accountable use of AI within the NHS 
  • Provide invaluable feedback to the industry for further development of AI for medical imaging.  

In order to carry out this evaluation, Zegami is working with AI companies already contracted by the NHS to test their algorithms. Other algorithms available through open-source platforms and those created within the University of Oxford could also be evaluated. 

Zegami’s interactive method for creating and evaluating datasets has the potential to benefit patients, the NHS and the wider population by: 

  • Helping clinicians to review data across multiple sites via one simple platform, improving operational efficiency, enabling the quick resolution of data queries and accelerating data availability for AI testing and rapid image curation 
  • Enabling longitudinal data analysis and multi-view data visualisation, making it easier for radiologists to spot trends, anomalies or errors in data 
  • Providing realistic data for testing, helping avoid data bias and making high quality data available for AI development  
  • Identifying the limitations of existing AI algorithms and informing future development of AI tools, leading to improved patient outcomes and supporting the work of radiologists and other medical practitioners.  

Zegami’s accessible interface helps you to visualise, understand and engage with data. This means that data gaps and bias – such as limited population representation – can be easily identified and addressed. Zegami also allows you to filter large data sets on multiple levels including demographic, pathological and imaging. 
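Filtering a large collection on several levels at once can be sketched with pandas; the table and column names below are illustrative assumptions, not Zegami's actual schema:

```python
import pandas as pd

# Hypothetical metadata table for an imaging collection
df = pd.DataFrame({
    "patient_age": [34, 71, 58, 66, 45],
    "pathology":   ["normal", "covid19", "covid19", "normal", "covid19"],
    "modality":    ["xray", "xray", "ct", "xray", "xray"],
    "site":        ["A", "B", "A", "C", "B"],
})

# Filter on demographic, pathological and imaging levels simultaneously
subset = df[(df["patient_age"] >= 50)
            & (df["pathology"] == "covid19")
            & (df["modality"] == "xray")]
print(subset["site"].tolist())  # ['B']
```

A filter like this is also how gaps become visible: if a subgroup's slice of the data is empty or very small, the collection under-represents that population.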

Current medical image viewers largely focus on handling data on a smaller scale and make it difficult to filter, select, annotate and curate data. Zegami employs image and clinical meta-data characteristics in an innovative and easy-to-use graphical user interface (GUI), facilitating data management on a large-scale basis.  

This will not only help create a formalised workflow for AI data management and continuous audit testing, but will also ensure that relevant safeguards are in place and AI limitations have been addressed, to provide accountable AI outputs throughout the NHS using data from multiple contributing sites.