With the growing popularity of machine learning and the lower barrier to entry created by friendly, open-source APIs, teams and individuals no longer need highly specialized skill sets and years of training to produce models with promising results. While this is fantastic for progress and education, it creates problems for reliability, robustness, and explainability – qualities that are important for any solution, but critical for medical ones.

Machine learning algorithms tend to cheat when given the opportunity, often by exploiting biases in the data, or by overfitting when too little of it is available. Detecting this can be challenging for newcomers and experts alike.
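As a minimal sketch of this kind of cheating, consider a toy "learner" that simply picks the single most predictive feature in the training data. The data generator, feature names, and learner below are all hypothetical, invented for illustration: when a spurious marker happens to correlate perfectly with the label during training, the learner latches onto it, scores perfectly on training data, and collapses to near chance once that shortcut is broken.

```python
import random

random.seed(0)

def make_example(label, spurious_correlates):
    # Feature 0: a weak but genuine signal (agrees with the label 70% of the time).
    # Feature 1: a spurious marker that may or may not track the label.
    genuine = label if random.random() < 0.7 else 1 - label
    spurious = label if spurious_correlates else random.randint(0, 1)
    return [genuine, spurious], label

# Training set: the spurious feature perfectly tracks the label (a dataset bias).
train = [make_example(random.randint(0, 1), True) for _ in range(200)]
# Test set: the spurious correlation is broken.
test = [make_example(random.randint(0, 1), False) for _ in range(200)]

def accuracy(feat, rows):
    # Accuracy of predicting the label directly from a single binary feature.
    return sum(x[feat] == y for x, y in rows) / len(rows)

# "Learn" by choosing whichever single feature is most predictive on training data.
feat = max(range(2), key=lambda f: accuracy(f, train))

train_acc = accuracy(feat, train)
test_acc = accuracy(feat, test)

print(f"chosen feature: {feat}")           # the learner picks the spurious marker
print(f"train accuracy: {train_acc:.2f}")  # looks perfect on the biased data
print(f"test accuracy:  {test_acc:.2f}")   # near chance once the shortcut breaks
```

The train/test gap here is the symptom such visual inspection tools aim to surface: performance that looks excellent in-sample but does not survive a shift in the data.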

We present an intuitive, visual approach to spotting these problems in both small and large datasets…