Zegami is ideal for medical imaging

Zegami is ideal for medical imaging because it lets you rapidly survey all images from a variety of digital image sources through our easy-to-use web interface. Because Zegami can be integrated with patient records and with demographic, genotypic, and phenotypic data, it allows medical staff and researchers to quickly find images and intuitively look for trends in the data.

Zegami allows you to:

  • Combine images with categorical and numerical data for rapid querying and integration of disparate data from different clinical sources
  • Identify anomalies in your data set quickly
  • Tag interesting images for further analysis, or remove poor-quality images or outliers
  • Combine with image analysis and machine learning algorithms to look for biomarkers in disease
  • Use scatterplots, boxplots or your own plugins to analyse and filter image collections based on extracted image features or other collected metadata
  • Quickly collate images using our unique lasso and tag feature to build training sets for machine learning
  • Publish your results easily on the web so they can be included as hyperlinks in journals
  • Quickly download filtered data sets for use in other analysis tools (MATLAB, R, Excel)

Zegami is currently used in medical research for the storage and analysis of heart, brain, X-ray, and eye images.
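A downloaded data set can be picked up directly in an analysis environment. The sketch below, in Python with pandas, shows the idea; the column names (patient_id, strain_variability, risk_group) are hypothetical stand-ins for whatever metadata a real export would contain, and an in-memory string stands in for the exported CSV file.

```python
import io
import pandas as pd

# Stand-in for a CSV exported from Zegami; the column names here
# (patient_id, strain_variability, risk_group) are hypothetical.
csv_export = io.StringIO(
    "patient_id,strain_variability,risk_group\n"
    "P001,0.12,low\n"
    "P002,0.47,high\n"
    "P003,0.09,low\n"
)

df = pd.read_csv(csv_export)

# A typical downstream step: summarise a numeric feature per group.
summary = df.groupby("risk_group")["strain_variability"].mean()
print(summary)
```

The same file could equally be loaded with `readtable` in MATLAB or `read.csv` in R.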

Case Study

Phenotyping Cardiovascular Disease by Online Image Databasing and Pattern Recognition Algorithms to Develop Cardiac Risk Models and Identify Subclinical Disease.

Prof Paul Leeson, Cardiovascular Clinical Research Facility (CCRF), Division of Cardiovascular Medicine, University of Oxford

Using Zegami, the CCRF created collections of echocardiography principal strain analysis ventricular intensity maps with associated clinical metadata. Zegami was used to sort patients into high- and low-risk groups based on the variability of ventricular contraction. The ‘bins’ of colours clearly stratify degrees of variation in strain between patients, while the ability to view and interact with a few or all of the images by zooming in and out lets the investigator look for visual trends. On larger data sets this will become crucial to understanding phenotypic variants. See Figures 1 and 2 below for specific examples.


Figure 1. (a) Zegami collection of 42 images showing evidence of potentially high-risk cases based on visualized intensity patterns (ringed cyan and orange). (b) Close-up of the different available forms of LV strain visualization in a healthy ventricle (above) and in a patient with a hypertrophied, dyssynchronous ventricle (below); note the “dead spots” of strain located at the septum, corresponding to the hypertrophied region.


Figure 2. Using unsupervised machine learning it was possible to differentiate between 127 patients with chronic arterial disease (green) and healthy patients (purple) using principal strain echocardiogram images.
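The source does not specify which unsupervised method was used for Figure 2, so the following is only a minimal sketch of one plausible pipeline: dimensionality reduction followed by clustering, using scikit-learn, with synthetic arrays standing in for flattened principal-strain echocardiogram images. The group sizes, image dimensions, and intensity distributions are all invented for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic stand-ins for flattened 64x64 strain intensity maps:
# two groups with slightly different mean intensity patterns.
healthy = rng.normal(loc=0.3, scale=0.05, size=(60, 64 * 64))
disease = rng.normal(loc=0.5, scale=0.05, size=(67, 64 * 64))
images = np.vstack([healthy, disease])

# Reduce each image to a handful of features, then split into two groups
# without using any labels.
features = PCA(n_components=10, random_state=0).fit_transform(images)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```

In practice the cluster assignments would then be compared against the known clinical status of each patient to see whether the unsupervised grouping recovers the disease/healthy split.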