AI bias: 6 tips to better understand your data

The effectiveness of any system or machine depends on the quality of its component parts, and AI is no different: high-quality inputs make for high-quality results. We’ve put together these six tips to help you better understand your data, something that’s crucial to getting to grips with AI bias.

Machine learning models are acutely sensitive to the quality of their data. Training them on inaccurate or biased data means they are likely to produce inaccurate, biased results. What’s more, these biases can be amplified over time because of the way machine learning models learn: they are constantly evolving and adapting, so biases present at the training stage grow in prominence and become harder to identify and eliminate.

The training process is the critical phase in algorithm development. To maximise accuracy, you need enough real-world examples to produce a fully representative training dataset, one that reflects diversity in demographics, clinical features and other risk factors.

When it comes to AI in healthcare, data bias can lead to significant, albeit unintentional, consequences. These include limited access, inadequate care, and poor patient outcomes for underrepresented groups. To avoid this, it’s essential to train machine learning algorithms on high quality, reliable data from the very beginning.   

This will make it easier to identify and mitigate bias in your machine learning models; eradicating it entirely is not always possible.

Tip 1: Understand your training data

Exploratory data analysis and visualisation are key to fully understanding the shape of your data. You will need to know the distributions across each dimension, the outliers, and the areas that are underrepresented.

You must be able to visualise, sort, group, pivot and graph your data in order to detect patterns that are not immediately obvious from individual data points. 
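
As a minimal sketch of this kind of exploratory check, assuming a tabular dataset with hypothetical columns such as sex and site, a few lines of pandas are enough to surface distributions, group sizes and crude outliers:

```python
import pandas as pd

# Load a tabular training set (the file name and column names are hypothetical)
df = pd.read_csv("training_data.csv")

# Distributions across each numeric dimension
print(df.describe())

# How well represented is each group? (assumes 'sex' and 'site' columns exist)
print(df["sex"].value_counts(normalize=True))
print(df.groupby(["site", "sex"]).size())

# Crude outlier check: rows more than three standard deviations from the mean
numeric = df.select_dtypes("number")
z = (numeric - numeric.mean()) / numeric.std()
print(df[(z.abs() > 3).any(axis=1)])
```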

Tip 2: Understand what bias looks like

In the field of AI, we typically encounter five different types of bias:   

  • Algorithmic bias – this describes a coding problem within the algorithm
  • Sample bias – this occurs when the data set isn’t large or representative enough
  • Prejudice bias – this occurs when a data set reflects prejudices, stereotypes or societal assumptions
  • Measurement bias – this describes data issues relating to accuracy, for example how data was measured or captured
  • Exclusion bias – this happens when relevant data is omitted from the training data

Bias can be difficult to eliminate, particularly as certain biases may be unconscious. Furthermore, bias is a moving target which can creep in over time.  For this reason, it’s important to look at the performance of an AI model not just when developing an algorithm, but also once it’s in use in the real world.   
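
One hedged way to do this in practice, assuming you log predictions and outcomes together with a hypothetical group attribute, is to break performance down by subgroup rather than relying on a single aggregate score:

```python
import pandas as pd
from sklearn.metrics import accuracy_score

# Hypothetical prediction log collected after deployment
logs = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 1],
})

# Accuracy broken down by subgroup; a large gap between groups is a warning sign of bias
per_group = logs.groupby("group").apply(
    lambda g: accuracy_score(g["label"], g["prediction"])
)
print(per_group)
```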

Tip 3: Aim for high fidelity annotations 

Just as bad data makes for bad models, so do bad annotations. Always strive for the highest-fidelity annotation you can achieve within the limitations of your project.

When dealing with images, bounding boxes capture a lot of extra pixels, which can include irrelevant information. Polygons include more detail, whilst segmentation masks have a very high level of specificity and tend to produce better models from fewer examples.   
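
As a toy illustration of that difference in specificity (plain NumPy, not tied to any particular annotation tool), you can count how many irrelevant pixels a tight bounding box captures compared with a segmentation mask of the same object:

```python
import numpy as np

# Toy 100x100 image containing a circular object of radius 20
yy, xx = np.mgrid[:100, :100]
mask = (yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2   # pixel-level segmentation mask

# The tightest bounding box around the same object
ys, xs = np.where(mask)
box = np.zeros_like(mask)
box[ys.min():ys.max() + 1, xs.min():xs.max() + 1] = True

extra = int(box.sum() - mask.sum())
print(f"mask pixels: {mask.sum()}, box pixels: {box.sum()}, irrelevant extras: {extra}")
```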

Tip 4: Augment your data to fill the gaps 

The more representative examples a machine learning algorithm has, the better it will perform. However, in many cases it can be difficult or prohibitively expensive to collect enough data. Data augmentation is one way to enrich a data set by creating synthetic data, manipulating existing data, or copying existing data and modifying it.

You can augment your dataset by:   

  • Generating synthetic data
  • Changing the orientation, contrast, lighting and so on (see the sketch after this list)
  • Inferring or imputing missing values
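
As a minimal sketch of the second approach, here is an illustrative augmentation pipeline using torchvision transforms; the file name and the specific transforms and parameters are assumptions, not recommendations:

```python
from PIL import Image
from torchvision import transforms

# Illustrative augmentation pipeline: random flips, small rotations and
# brightness/contrast jitter (parameter values are examples only)
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])

image = Image.open("example_image.png")              # hypothetical input image
augmented_views = [augment(image) for _ in range(5)] # five extra training examples
```
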
Tip 5: Explore and monitor your results 

Understanding your training data is only one part of the story. It’s essential to monitor your model and critically analyse results on an ongoing basis, in order to determine how effective your model really is. Don’t just rely on validation scores: explore and visualise your data in order to find outliers. 

Looking at the distribution of clusters can give useful insight into potential weaknesses in the model, as can explainability heatmaps and other interpretability techniques.

You should explore and monitor results on an ongoing basis following deployment, not just during model development. Understanding performance over time and by location is critical in assessing how your product is performing, and how resilient it is to data and concept drift. 
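
As one simple, illustrative approach to drift monitoring, assuming you keep a reference sample of a training feature and log the same feature in production, a two-sample Kolmogorov-Smirnov test can flag when the input distribution has shifted:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)
reference = rng.normal(loc=0.0, scale=1.0, size=1000)   # feature values at training time
production = rng.normal(loc=0.4, scale=1.0, size=1000)  # the same feature, post-deployment

statistic, p_value = ks_2samp(reference, production)
if p_value < 0.01:
    print(f"Possible data drift (KS statistic={statistic:.3f}, p={p_value:.4g})")
```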

Tip 6: Make your data available to the team 

If you are the only person to look at your data, then results are only viewed from a single perspective. Collaborating with others will give you broader insight from a wider, more diverse group of individuals and backgrounds. This can be a useful way of detecting issues such as bias, which can be hard to spot. Having the right tools and infrastructure in place to share your findings and make them available to a wider audience is key.

The development of AI

Last year alone, over $73b was spent on AI globally. From autonomous vehicles to manufacturing, banking, grocery shopping and healthcare, AI is shaping and automating daily life in fundamental ways.  However, the field is in its infancy, and AI still presents significant challenges in terms of maximising functionality, efficacy and reliability.   

Data is growing rapidly, perhaps nowhere more so than in the healthcare sector. An increasing number of AI-powered products are already in use by healthcare professionals, and this number is only set to increase.  

AI can be invaluable in assisting clinicians by reducing workloads, improving accuracy and facilitating data sharing and analysis, but there is still room for improvement.

At Zegami, we are committed to delivering end-to-end explainable imaging AI quickly and accurately. If you would like to find out more about understanding AI bias and how Zegami can help, get in touch.