Tuesday, 10 December 2013

Connected component features for parasite spotting

Fred Kiwanuka and I recently tried using connected component features and max-tree filters to improve the performance of our malaria parasite detection system with blood smear images.

The idea is to take connected components of image patches at different threshold levels, like this:

Example image patch containing a malaria parasite on left, connected components at different threshold levels on the right.

We then calculate various morphological features for each component, such as moment of inertia, elongation and jaggedness. Computing the max-tree of the component hierarchy (indicating which components are contained within other components) makes it possible to calculate further features. From all the components in an image patch, we compute the 10th, 30th, 50th, 70th and 90th percentiles of each feature, giving us a fixed-length, standardised feature vector.
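The percentile summary step can be sketched in a few lines of numpy — the per-component feature values here are random stand-ins, not real morphological measurements:

```python
import numpy as np

def percentile_features(component_features):
    """Summarise per-component features into a fixed-length vector.

    component_features: array of shape (n_components, n_features), one row
    per connected component (e.g. moment of inertia, elongation, jaggedness).
    """
    pcts = [10, 30, 50, 70, 90]
    # Percentiles are taken per feature column, then flattened, so the
    # vector length is 5 * n_features regardless of the component count.
    return np.percentile(component_features, pcts, axis=0).ravel()

rng = np.random.default_rng(0)
feats = rng.random((37, 3))        # 37 components, 3 features each
vec = percentile_features(feats)
print(vec.shape)                    # (15,)
```

The point of the percentiles is exactly this fixed length: patches with different numbers of components still map to comparable vectors.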

Training an Extremely Randomised Trees classifier on these feature vectors, with labels for many image patches (label 0 = "no parasite", 1 = "parasite"), gives quite good discriminative performance.
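As a sketch of the training step, scikit-learn's ExtraTreesClassifier is one implementation of Extremely Randomised Trees (whether it is the one used here is an assumption); the data below is synthetic:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)
# Stand-in data: 200 patches x 15 percentile features, binary labels
X = rng.random((200, 15))
y = (X[:, 0] > 0.5).astype(int)     # 0 = "no parasite", 1 = "parasite"

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
proba = clf.predict_proba(X)[:, 1]  # per-patch P(parasite)
```

In practice the evaluation would of course use held-out patches rather than the training set.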

Another way of assessing whether the detection is working is to take the image patches in a set of test images and sort them by classification probability:

Patches classified confidently as positive cases are on the top row, and confident negatives are on the bottom.
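The sorting itself is a one-liner; the probabilities here are hypothetical values standing in for classifier output:

```python
import numpy as np

# Hypothetical per-patch P(parasite) scores from a trained classifier
proba = np.array([0.93, 0.02, 0.55, 0.88, 0.10])
order = np.argsort(proba)[::-1]   # most confident positives first
print(order)                       # [0 3 2 4 1]
```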

In general we're starting to get within range of the performance of an expert lab technician, and these results are also significantly better than anything we've managed to achieve with features developed for natural images, such as SURF, SIFT, or Haar cascades. However, we can't yet get reliable detections at very low parasite concentrations in the blood, so it looks like a bit of refinement is still needed before moving further into clinical testing.

More technical details can be found in this forthcoming book chapter.

Saturday, 12 October 2013

3D-printed smartphone mounts for photomicroscopy

We've been trying out designs for adapters to attach camera phones to microscopes. The idea is that the computer vision needed for malaria detection, like this, could be quite achievable on a phone if the optics can be worked out. 3D printing is useful because customised mounts can be produced to order for any combination of phone and microscope model.

The prototype model looks like this:

Here is a phone (ZTE Blade) mounted on the left eyepiece of a microscope using the printed adapter. A Motic microscope camera is on the right eyepiece.

This is a sample image obtained from the setup above. The big blobs are white blood cells, the small dots are malaria parasites (this is a case of extreme parasitemia). The image was taken at 1000x magnification with oil immersion.

We still need to find ways to reduce the field of view, to get better detail of parasite objects; from the images we're getting at the moment, it looks as though parasite detection is feasible, but accuracy would be limited. If issues like this can be ironed out, such a system could be significant in Uganda, where there are plenty of smartphones and microscopes, but few laboratory experts. This study, for example, found that 50% of rural health centres in Kabarole district had microscopes, but only 17% had a trained technician able to use them.

Thanks to Kenan Pollack and Sandeep Patel at OmusonoLabs for handling the printing!

Friday, 23 August 2013

Gaussian Process Summer School

We had a Gaussian Process Summer School from 6-9 August, run by Neil Lawrence from the University of Sheffield. This was a mixture of lectures, discussion and practical sessions (the latter run by Ricardo Andrade Pacheco, also from UoS). Videos of all the lectures and more information are available here.

Participants who completed the Makerere GPSS
It was great to see attendees from different backgrounds: as well as participants from the AI group and College of Computing and Information Sciences, we also had participants from technology, agriculture and public health. The use of GPs for epidemiology came up a few times in discussions, with participants from Makerere School of Public Health and Uganda Virus Research Institute, and it was clear that there are several other application domains in Uganda.

Wednesday, 22 May 2013

Heart rate monitor demo

Some students have been interested recently in using image processing for physiological monitoring. To explain some concepts I put together a simple demo for measuring heart rate with a webcam, using Python and OpenCV.

As the oxygenation of the blood changes, so does its light absorption. If a light is shone through a finger onto a webcam, these changes can be picked up (apparently it can even be done from the skin colour of the forehead), so that a pulse waveform can be obtained.
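Once the per-frame brightness signal has been captured, a rough heart-rate estimate falls out of a Fourier transform. This sketch skips the webcam capture and uses a synthetic 72 BPM signal; the band limits (40-180 BPM) are my own assumption:

```python
import numpy as np

def estimate_bpm(brightness, fps):
    """Estimate heart rate (BPM) from per-frame mean brightness values."""
    x = brightness - np.mean(brightness)             # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 40 / 60) & (freqs <= 180 / 60)  # plausible pulse band
    return 60 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 30 s of 20 FPS "brightness", pulsing at 72 BPM, plus noise
fps, secs = 20, 30
t = np.arange(fps * secs) / fps
rng = np.random.default_rng(0)
signal = 0.5 * np.sin(2 * np.pi * (72 / 60) * t) + 0.05 * rng.standard_normal(t.size)
print(estimate_bpm(signal, fps))   # ~72 BPM
```

With a real webcam, `brightness` would be the mean of (say) the green channel of each frame over the window of interest.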

The Python script can be downloaded here (requires opencv and numpy). In order to get good results, the frame rate of the camera should be set to 20-30 FPS (on Linux this can be done with guvcview). The challenge with using this in practice is to keep the finger very still...

Monday, 11 March 2013

Microscope autofocus

One issue in making automated microscopy testing possible, e.g. for malaria screening, is being able to physically focus the microscope. This can take some skill, so doing it automatically - alongside automating the visual diagnosis - makes the test easier to use for health workers with only basic training.

MSc student John Wekesa has been working on a combination of software and hardware to do this automatically. Here is a prototype, with a camera mounted on a simple scope and a servo motor controlling the focus:

Some Python code controls the motor, using feedback from the camera. The objective function is the total magnitude of the Canny-filtered video frame (roughly speaking, a measure of how strongly defined the edges in the image are). A search over focus wheel positions then maximises this objective.

Here's the prototype having found focus (on a piece of onion!):

Also in progress is motorising the panning controls on the scope, in order to fully automate the procedure. The idea is to place a slide on the microscope stage, then have three motors (pan x, pan y, focus) carry out a scan of all fields of view in the sample. The computer vision code then analyses each frame in order to produce a final diagnosis.
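The scanning logic itself is simple to sketch. Everything here is hypothetical: `move_xy`, `autofocus` and `analyse` are placeholder callbacks standing in for the motor control and vision code, and the serpentine order is just one sensible choice for minimising pan travel:

```python
def scan_slide(nx, ny, move_xy, autofocus, analyse):
    """Serpentine scan over an nx-by-ny grid of fields of view.

    move_xy, autofocus and analyse are hypothetical callbacks wrapping
    the pan-x/pan-y motors, the focus motor, and the vision code.
    """
    results = []
    for j in range(ny):
        # Alternate sweep direction on each row to minimise pan travel
        xs = range(nx) if j % 2 == 0 else range(nx - 1, -1, -1)
        for i in xs:
            move_xy(i, j)
            autofocus()              # refocus each field before imaging
            results.append(analyse())
    return results

# Dry run with stub callbacks, recording the visiting order
visited = []
scan_slide(3, 2, lambda i, j: visited.append((i, j)),
           lambda: None, lambda: 0)
print(visited)   # [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
```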

Friday, 25 January 2013

AI-DEV MSc students graduate

Four of our MSc Computer Science students, who took the Artificial Intelligence option and worked in the AI-DEV group, are taking part in the current Makerere graduation ceremony. Their work involved image processing for malaria parasite detection, causal structure learning (which we are interested in for understanding patterns of famine or disease spread), extending handwriting recognition algorithms to cope with the ŋ character common in Bantu languages, and identification of people in video streams for intrusion detection.

Names and research/project titles, from left to right:

Catherine Ikae, Automated Diagnosis of Malaria through Blood Film Analysis with Scale Invariant Feature Transforms and Cascades of Boosted Classifiers.

William Senfuma, Meta Learning for Selection of Best Causal Discovery Algorithms.

Ibrahim Bbossa, Offline Character Recognition of Luganda Handwriting on Smartphones.

Frederick Serunjogi Semakula, Networked Visual Human Motion Detection with Automatic Notifications.