A quick status update:
1) The data set for the RAD52 screen measures a very different set of variables than the training set I was provided. This made feature selection and model training a real hassle, and resulted in confusing predictions in which almost every object was classified as a focus. To fix this, my collaborator is re-running the image analysis to measure the same set of features. I'm also setting up a workstation where we can manually validate predicted foci by calling up the images for a small random sample of the predictions. Not ideal, but validating image data seems to be a difficult task. Does anyone out there have a better idea of how I could do more comprehensive validation without (much) more human interaction? Here's the problem in brief:
Input: continuous measurements of various aspects of segmented objects from yeast colony images
Output: a list specifying for each object whether it contains a focus (fluorescent marker) or not (no fluorescence)
The validation would ideally involve an inspection of some subset of the images, and not the features that represent the objects.
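One way to make the manual spot-checks go a bit further is to sample the predictions per predicted class (so the rare "no focus" calls aren't drowned out when the classifier over-predicts foci), then put a confidence interval on the precision estimated from the hand-checked subset. Here's a rough sketch of what I have in mind; the function names and the dict-of-predictions format are just placeholders, not anything from the actual pipeline:

```python
import math
import random

def sample_for_review(predictions, n_per_class=50, seed=0):
    """Draw a stratified random sample of object IDs for manual review.

    predictions: dict mapping object_id -> predicted label
    ("focus" / "no focus"). Sampling within each predicted class
    ensures the minority class gets inspected even when almost
    everything is called a focus.
    """
    rng = random.Random(seed)
    by_label = {}
    for obj_id, label in predictions.items():
        by_label.setdefault(label, []).append(obj_id)
    sample = {}
    for label, ids in by_label.items():
        # sorted() makes the draw reproducible regardless of dict order
        sample[label] = rng.sample(sorted(ids), min(n_per_class, len(ids)))
    return sample

def wilson_interval(correct, n, z=1.96):
    """95% Wilson score interval for a proportion, e.g. the precision
    of the 'focus' calls among the manually checked sample."""
    if n == 0:
        return (0.0, 1.0)
    p = correct / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (max(0.0, centre - half), min(1.0, centre + half))
```

So if 45 of 50 sampled "focus" objects check out by eye, `wilson_interval(45, 50)` gives roughly (0.79, 0.96) for the precision, which at least quantifies how far the spot-check can be trusted without looking at every image.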
2) My first committee meeting is coming up on December 3rd. I’ll discuss the methods and results of my master’s thesis work, and present what I plan to do for both of my current projects. I’ll answer questions from my committee and take their advice on future direction.