
Image Processing

We set out to create a program that would identify and measure fluorescence in petri dishes from photographs. A problem the wet lab had encountered was that the measured fluorescence included the bacteria's auto-fluorescence, a systematic bias. Through computer analysis and processing, we endeavoured to eliminate auto-fluorescence from images and quantify the size and intensity of areas of fluorescence.

Subtracting Images

Our first program successfully converted images to arrays of greyscale pixel values and subtracted one image from another. The aim was to subtract an image of the bacteria's auto-fluorescence alone from an image taken of the bacteria fluorescing under stimulus, yielding an image of bacterial fluorescence without auto-fluorescence.
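
For reference, a minimal sketch of this subtraction step, assuming OpenCV is used for image loading; the file names are placeholders, and our original script may have differed in its details:

    import cv2

    # Load both photos as greyscale pixel arrays.
    baseline = cv2.imread("autofluorescence.jpg", cv2.IMREAD_GRAYSCALE)  # bacteria alone
    stimulus = cv2.imread("stimulated.jpg", cv2.IMREAD_GRAYSCALE)        # bacteria under stimulus

    # Subtract the auto-fluorescence baseline from the stimulated image.
    # cv2.subtract clamps negative differences to 0 rather than wrapping around.
    difference = cv2.subtract(stimulus, baseline)

    cv2.imwrite("difference.jpg", difference)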

This did not yield accurate results, as it was difficult to align the two images with the petri dish in exactly the same position. As well, the bacteria auto-fluoresced differently at different time points, yielding different fluorescence patterns in the two photos.

Identify Everything, Then Filter

Our next strategies focused on different ways of identifying individual objects within an image, and then having a human decide which regions are auto-fluorescence and which are not. As there was contention between group members on whether a given spot constituted measured fluorescence or auto-fluorescence, we decided that human intervention would be more reasonable than constructing an algorithm to determine auto-fluorescence automatically.

1) Labeling via intensity differences: Recursively

For every pixel that was within a certain intensity difference of a neighbour, the pixel's value was changed to the same discrete label as that neighbour. The end result should be regions of discrete values in the image, each corresponding to one object. We ultimately could not overcome runtime errors.
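
A minimal sketch of the recursive idea, assuming a 2D greyscale NumPy array and an illustrative threshold; on images of any realistic size the recursion goes very deep, which may explain the runtime errors we could not overcome:

    import numpy as np

    THRESHOLD = 10  # illustrative intensity difference allowed between neighbours

    def label_region(image, labels, y, x, label):
        """Recursively spread `label` to neighbours of similar intensity."""
        labels[y, x] = label
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]
                    and labels[ny, nx] == 0
                    and abs(int(image[ny, nx]) - int(image[y, x])) <= THRESHOLD):
                label_region(image, labels, ny, nx, label)

    def label_image(image):
        """Assign a discrete label to every connected region of similar intensity."""
        labels = np.zeros(image.shape, dtype=int)
        next_label = 0
        for y in range(image.shape[0]):
            for x in range(image.shape[1]):
                if labels[y, x] == 0:
                    next_label += 1
                    label_region(image, labels, y, x, next_label)
        return labels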

2) Labeling via intensity differences: Bubble sort style

Same end goal, but instead of recursion the program passes through the entire picture's array of pixels in a loop until no new changes are made. The end result was either so sensitive that it labelled noise as objects or so lax that it misjudged object shapes, and we could not determine an ideal intensity threshold.
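
A minimal sketch of the iterative variant, again with an illustrative threshold: every pixel starts with its own label, and repeated passes merge neighbouring labels until a full pass makes no changes.

    import numpy as np

    THRESHOLD = 10  # illustrative; this is the parameter we could not find an ideal value for

    def label_image_iterative(image):
        """Pass over the whole pixel array repeatedly until no labels change."""
        h, w = image.shape
        labels = np.arange(h * w).reshape(h, w)  # every pixel begins as its own region
        changed = True
        while changed:
            changed = False
            for y in range(h):
                for x in range(w):
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and abs(int(image[ny, nx]) - int(image[y, x])) <= THRESHOLD
                                and labels[ny, nx] < labels[y, x]):
                            labels[y, x] = labels[ny, nx]
                            changed = True
        return labels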

3) OpenCV Canny Edge Detection

After manipulating the image via Gaussian blurring and adjusting intensity thresholds, we were able to identify significant objects in everyday photos fairly accurately, but there was simply too much noise in the petri dish photos. As well, markings on the petri dish blocked or interfered with patches of fluorescence, so we could not achieve accurate measurements.
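
A minimal sketch of that pipeline, with an illustrative file name, blur kernel, and Canny thresholds (OpenCV 4 API):

    import cv2

    image = cv2.imread("petri_dish.jpg", cv2.IMREAD_GRAYSCALE)

    # Gaussian blur suppresses high-frequency noise before edge detection.
    blurred = cv2.GaussianBlur(image, (5, 5), 0)

    # Canny edge detection; the two thresholds control edge sensitivity.
    edges = cv2.Canny(blurred, 50, 150)

    # Group edges into contours so candidate objects can be outlined and measured.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("Found", len(contours), "candidate objects")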

Neural Networks

We ventured into image identification with neural networks in hopes that the “black box” workings of a neural network could overcome our blind spots in how to specifically identify auto-fluorescence. We were able to replicate a neural network program that used the MNIST handwriting dataset as training data to predict handwritten digits. However, we were unable to adapt this network to identify fluorescence patches: we could not shape its predictions into an accurate representation of a patch, and the network did not learn well from our training data. We also suspected that we could not produce enough training data for the program to make accurate predictions.
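
For reference, a minimal MNIST digit classifier along the lines of the program we replicated; this sketch uses Keras, and the architecture shown is illustrative rather than the exact network we ran:

    import tensorflow as tf

    # Load the MNIST handwritten-digit dataset and scale pixel values to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small fully connected network: flatten each 28x28 image, one hidden
    # layer, and a 10-way softmax output over the digits 0-9.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)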

Future Steps

We hope that a curated dataset built from the OpenCV results we approved will provide sufficient training data for the neural network. There is still a great deal of troubleshooting and learning to be done in machine learning before our neural network functions as we expect.