CSHL Automated Imaging & Phenotyping Day 2

Not feeling so great this morning, so I won’t continue live-blogging (have missed two talks this morning already).  Sigh.  I had hoped this illness was behind me, but it looks like it’s still around.  I’ll do my best to recap the day’s session at least.

{Update @ 10:15} Ok, starting to feel better.  I get to the auditorium part way through Thai Truong’s talk, which is about light sheet microscopy.

  • Samples are mounted on a spindle, fine light ‘sheets’ are emitted orthogonally to the detector, and so the scatter is minimized.
  • It is both fast *and* high resolution.  One-photon excitation is good, but two-photon gives better resolution: the longer wavelength penetrates deeper because it scatters less.
  • Next few slides present examples of quick, high-res images/movies of zebrafish and Drosophila development.  (Q: Can this technique be used for HCS in a plate setting?)

Next is Winfried Wiegraebe’s talk about fluorescence correlation spectroscopy (FCS): molecules are labeled with dyes/antibodies, and when labeled molecules pass through the laser detection volume they emit photons.  Auto-correlation functions are used to distinguish signals.  Their pipeline:

  1. Image well (coarse)
  2. Isolate images of cells (fine)
  3. Pick cell locus for detection, park the laser there
  4. Detect at that locus

Next follow a few slides with examples of FCS experiments on the yeast GFP collection (copy number, coarse-grained localization, diffusion in cell compartments).  Two-channel cross-correlation experiments are also possible; these can be used for interaction screens, kind of like an image version of TAP tagging (with bait and prey).

The sweet spot is around the nanomolar range, according to the speaker.  This complements other methods, which tend to work better at higher concentrations.
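To make the auto-correlation idea concrete, here is a minimal numerical sketch on a simulated photon-count trace.  The data and parameters are invented; a real FCS analysis would go on to fit G(τ) to a diffusion model, which is omitted here.

```python
import numpy as np

def fcs_autocorrelation(intensity, max_lag):
    """Normalized FCS autocorrelation G(tau) = <dF(t) dF(t+tau)> / <F>^2."""
    f = np.asarray(intensity, dtype=float)
    mean = f.mean()
    df = f - mean                      # fluctuations around the mean
    n = len(f)
    g = np.empty(max_lag)
    for tau in range(max_lag):
        g[tau] = np.mean(df[: n - tau] * df[tau:]) / mean**2
    return g

# Toy trace: Poisson photon counts around a slowly varying signal,
# standing in for molecules drifting through the detection volume
rng = np.random.default_rng(0)
signal = 100 + 20 * np.sin(np.linspace(0, 40 * np.pi, 10_000))
trace = rng.poisson(signal)
g = fcs_autocorrelation(trace, max_lag=50)
print(g[0] > g[20])  # correlation decays with lag
```

The decay of G(τ) with lag is what carries the physics: its timescale reflects how long a molecule lingers in the detection volume, and G(0) scales inversely with the number of molecules, which is why the method favors low (nanomolar) concentrations.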

Next up is Maria-Cristina Pedroso-Ubach.  The talk is about a new technique called hyperspectral microscopy.  Hyperspectral: acquire all wavelengths simultaneously (using laser or LED excitation).

The deal: commercial imaging systems suffer from spectral crosstalk and autofluorescence (which contributes to the overall detected signal).  Hyperspectral captures *everything* and can separate spectrally and spatially overlapping tags; it can even discriminate against autofluorescence, since it measures the whole emission spectrum.  It’s also fast (8,300 spectra/s, 512-wavelength images).  Kind of like a hyper-confocal MS.  Depth 1 µm – 200 µm.

Side-by-side of a filter-based stained image vs. a hyperspectral image (big difference).  More slides comparing against traditional LSM confocal microscopes, plus proofs of concept.  The software is proprietary MATLAB code from Sandia.
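To illustrate the unmixing idea, here is a hedged sketch of linear spectral unmixing by least squares.  The endmember spectra, wavelengths, and abundances below are all made up, and Sandia’s actual algorithm is presumably far more sophisticated; this only shows why measuring the whole spectrum lets overlapping tags be separated.

```python
import numpy as np

# Hypothetical endmember spectra (columns), sampled at 64 wavelength bins
wavelengths = np.linspace(400, 700, 64)

def gaussian(center, width):
    return np.exp(-((wavelengths - center) ** 2) / (2 * width**2))

endmembers = np.stack([gaussian(520, 15),   # e.g. a GFP-like dye
                       gaussian(610, 20),   # e.g. an mCherry-like dye
                       gaussian(550, 80)],  # broad autofluorescence
                      axis=1)               # shape (64, 3)

# A measured pixel spectrum is a mixture of the endmembers plus noise
true_abundances = np.array([0.7, 0.2, 0.1])
rng = np.random.default_rng(1)
pixel = endmembers @ true_abundances + rng.normal(0, 0.01, 64)

# Least-squares unmixing recovers per-tag abundances from the full spectrum,
# including the autofluorescence component, which can then be discarded
est, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
print(np.round(est, 2))
```

A filter-based system effectively integrates each spectrum into a handful of bands, which is exactly where the crosstalk problem comes from; keeping all 64 bins makes the unmixing well-posed.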

Finally for Monday morning is Mehmet Fatih Yanik’s talk about whole organism imaging.

  • Important since some results from in vitro studies cannot be reproduced / fail to hold in vivo
  • Traditionally: years of manual work, trouble is throughput.
  • His lab has developed a platform for high-throughput imaging studies of zebrafish (1M fish in < 8 months).  Sort of a factory line for processing fish samples.
  • Detection is done by excitation of laser in a ‘bubble’, which is a shortened tube.  Capture is done by a high-speed confocal with multiple resolution lenses.
  • Examples from genetic & chemical screens  [Nat Methods 7:634 (2010)]

That’s it for the morning session.

Afternoon session begins with a developmental-biology gene expression talk using C. elegans.  John Murray from UPenn wants to investigate how the organism uses TFs to decide cell fate (127 promoter::histone-mCherry reporters, 38 <TF? protein>::GFP fusion reporters).

Get expression patterns in different embryos and overlay them in a combined lineage tree.  Different examples of genes showing tissue-specific or positional expression.  Kind of lost in this talk.

Next up is Matthew Crane talking about Autonomous Synaptogenesis Screening via SVM tech.

  • Claims there is still a dearth of technology for high-content, high-throughput screening of multicellular organisms.  They focus on C. elegans for a number of reasons.
  • Microfluidics platform that cools the worms to immobilize them, image the animals, then sort the worms by phenotype.
  • Example: synaptogenesis looking at one specific protein.
  • Vision challenges: real-time image processing is needed.  Small number of fluorophores on the localized GFP.  Low SNR (high autofluorescence in the intestine); there was more, but the slide went by too fast.
  • Tried complete phenotypic-identification methods, but settled on their own: identify & extract synapses with a two-layer SVM.
  • Layer 1: Local & neighbourhood features (linear SVM to find the synapses)
  • Layer 2: Regional features incorporating relative synapse locations (custom kernel SVM to identify phenotypic mutants)
  • Other applications would include: large-scale forward genetic screens with phenotypes not identifiable by eye, reporter characterizations in a high-content setting, RNAi and reverse genetics
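The talk’s two-layer pipeline isn’t something I have code for, but the layer-1 idea (a linear SVM over local features) can be sketched with a toy hinge-loss SVM trained by subgradient descent.  The “punctum vs. background” features and all numbers below are invented for illustration.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal hinge-loss linear SVM via subgradient descent (y in {-1,+1})."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1                       # points violating the margin
        if mask.any():
            grad_w = lam * w - (y[mask, None] * X[mask]).mean(axis=0)
            grad_b = -y[mask].mean()
        else:
            grad_w, grad_b = lam * w, 0.0
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Synthetic local features: bright puncta (class +1) vs background (class -1),
# e.g. (intensity, local contrast) -- purely hypothetical feature names
rng = np.random.default_rng(2)
puncta = rng.normal([3.0, 2.0], 0.5, size=(100, 2))
background = rng.normal([1.0, 0.5], 0.5, size=(100, 2))
X = np.vstack([puncta, background])
y = np.r_[np.ones(100), -np.ones(100)]

w, b = train_linear_svm(X, y)
accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy)
```

The second layer in the talk then works on regional features over the detected synapses with a custom kernel, which is beyond this sketch; the point here is only that a fast linear classifier is a reasonable first stage when real-time processing is required.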

Next up is Pawel Tomancak talking about stitching together overlapping images from serial-section transmission electron microscopy.  References: (Preibisch et al., Bioinformatics 2009)

Presents a technique to reconstruct a 3D model of a Drosophila brain using SIFT (scale-invariant feature transform).  Automation is required because the data set is too large: a series of overlapping images of slices of Drosophila brain EM micrographs.
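SIFT itself is too involved to sketch here, but the core registration step, estimating the translation between two overlapping tiles, can be illustrated with FFT-based phase correlation on a toy image.  This is a simpler stand-in for the feature-based matching the talk actually uses.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (dy, dx) that, applied to b via np.roll, maps it onto a."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12        # keep only phase -> sharp correlation peak
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the signed range [-N/2, N/2)
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(3)
scene = rng.random((128, 128))
tile_a = scene
tile_b = np.roll(scene, shift=(-7, 11), axis=(0, 1))   # b is a shifted copy of a
print(phase_correlation_shift(tile_a, tile_b))
```

Real tiles only partially overlap and suffer section deformations, which is why the published pipeline uses feature matching plus a global optimization rather than a single correlation peak; this just shows the pairwise alignment primitive.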

Missed the next talks (including a Drosophila talk by Angela DePace), but found that E. Styles has notes and a recording.

Zhirong Bao gives the invariant cell lineage talk: attributing phenotypes to cells during development.

  • 3D rendering film of embryogenesis
  • Algorithm for cell tracking uses stack of discs to model nuclei in the embryo.
  • Showed examples of their visualization software (StarryNite, available on SourceForge) on mouse and zebrafish embryos.
  • Automated analysis of in vivo gene function:  Set of features for each gene, compare phenotype similarity, relate to development.

Next up is David Knowles talking about mapping and quantitating cellular phenotype and morphology in Drosophila.  Part of the Berkeley Drosophila Transcription Network Project.

  • Image analysis: really neat visualization.  Antisense RNA probes, with SYTOX-labeled DNA.
  • Have a pipeline to segment all nuclei during the blastoderm stage, yielding a point cloud with x,y,z position and expression, which they then register and combine into a standard view.  This is done via fiducial marks present in each embryo.  They build 7 different fine-registration parts (Fowlkes et al., Cell 2008)
  • Have a visualization software called PointCloudExplore that will visualize embryos in the BDTNP format
  • Have released the data online at bdtnp.lbl.gov.
  • Next is a few slides presenting some examples showing that their image analysis and registration framework accurately capture biological knowledge.
  • Extending the work now through embryogenesis, beyond the first 80 minutes after gastrulation: 6,000 undifferentiated cells -> 40,000 differentiated cells.
  • Need to map cell by cell type.  They need a better microscope and protocol to do live cell imaging, presented images are blurry and incomplete.
  • Huge registration problems are implied by the large number of differentiated cells.  (Mentions the canonical vision problems of segmentation and registration.  Maybe more common software formats will be a focus for the future?)
  • Still exploring problems in finding cells all associated with particular tissue types, as well as curating a list of fiducial points so that live cell embryos can be aligned.
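The fiducial-based alignment step above can be sketched as a least-squares rigid alignment of matched 3-D fiducial points (the Kabsch algorithm).  The actual BDTNP registration is far more elaborate (their fine registration is nonrigid), so this is only the coarse idea, on invented points.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy 3-D fiducial marks in a "reference" embryo...
rng = np.random.default_rng(4)
reference = rng.random((6, 3))
# ...and the same marks as seen in another embryo, rotated and translated
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
observed = reference @ Rz.T + np.array([0.5, -0.2, 0.1])

R, t = rigid_align(observed, reference)
aligned = observed @ R.T + t
print(np.allclose(aligned, reference, atol=1e-8))
```

With only a handful of shared fiducials per embryo, this closed-form solve is cheap, which matters when thousands of embryos must be brought into one standard view.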

Last talk for the day will be Uwe Ohler from Duke who will present an overview of challenges in HTP imaging studies.

  • Promise: non invasive phenotyping at high spatial resolution
  • Cons: noisy data, idealized assumptions, semi-manual interactions, a large variety of imaging techniques, samples, and reporters (data-comparison problem)
  • 3 levels of challenges: cell-based screening, defined expression areas in complex organisms, complex undefined expression patterns
  • Similarities: unlike natural scenes, we know what we expect to see (good prior knowledge).  Often one can separate morphology and phenotype
  • Suggests:  Find objects, normalize, analyze
  • Need low level specific normalization, followed by high-level analysis methods
  • Example presented using BDGP data.  Problems with this data set: images sometimes contain more than one embryo (or parts).  They developed  a statistical model to discover whole embryos in each image.
  • Expression-pattern annotations are included in the data set.  They tried to develop a generative model to predict annotations from the image data, so the model reflects gene regulation, under the assumption that genes are regulated by some combination of other factors.  Used sparse factor analysis to decompose each expression image into a small number of contributing factors, with a sparse prior applied.
  • They use the factor representations to reduce the dimensionality of the expression patterns and predict annotation terms.  Reasonably good performance (0.8 AUC)
  • Next part of the talk is about high throughput image expression, kind of like tissue expression on microarray?
  • Example system is a ‘root’ array: 64 plants growing on an array-like structure, imaged live while they grow.  They build an MRF to distinguish root pixels from background pixels.
  • With this, they can reconstruct root extensions, map tissue types in the roots, and ask different questions about development of root tissues.
  • Ends with a request that we work together to develop shared protocols and methods (also software implementations).
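The sparse decomposition idea from the BDGP part of the talk can be sketched as an L1-penalized factor fit via iterative soft-thresholding (ISTA).  The dictionary and data below are synthetic, and the talk’s actual sparse factor analysis model surely differs; this only shows how a sparse prior pulls most factor weights to zero.

```python
import numpy as np

def ista(D, x, lam=0.05, iters=500):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(D, ord=2) ** 2              # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ a - x)
        z = a - grad / L                           # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return a

# Hypothetical dictionary of 20 spatial "factors" over 100 pixels
rng = np.random.default_rng(5)
D = rng.normal(size=(100, 20))
true_a = np.zeros(20)
true_a[[2, 7, 13]] = [1.0, -0.5, 0.8]              # only three factors active
x = D @ true_a + rng.normal(0, 0.01, 100)          # an "expression image" as a vector

a_hat = ista(D, x)
print(np.flatnonzero(np.abs(a_hat) > 0.1))         # recovered active factors
```

The recovered sparse weights are exactly the low-dimensional representation one would then feed into an annotation-term classifier, as described in the talk.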
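For the root-array segmentation, here is a minimal sketch of MRF pixel labeling in the same spirit: a Gaussian data term plus Potts smoothness, optimized with iterated conditional modes (ICM) on a toy image.  All parameters and the image itself are invented; the talk’s actual MRF is presumably learned from labeled data.

```python
import numpy as np

def icm_segment(img, mu_fg, mu_bg, beta=0.05, sweeps=5):
    """Binary MRF segmentation: Gaussian data term + Potts smoothness, via ICM."""
    labels = (np.abs(img - mu_fg) < np.abs(img - mu_bg)).astype(int)  # nearest-mean init
    H, W = img.shape
    for _ in range(sweeps):
        for y in range(H):
            for x in range(W):
                neighbors = []
                if y > 0:     neighbors.append(labels[y - 1, x])
                if y < H - 1: neighbors.append(labels[y + 1, x])
                if x > 0:     neighbors.append(labels[y, x - 1])
                if x < W - 1: neighbors.append(labels[y, x + 1])
                costs = []
                for lab, mu in ((0, mu_bg), (1, mu_fg)):
                    data = (img[y, x] - mu) ** 2                  # fit to class mean
                    smooth = beta * sum(n != lab for n in neighbors)  # Potts penalty
                    costs.append(data + smooth)
                labels[y, x] = int(np.argmin(costs))
    return labels

# Toy "root" image: a bright vertical strip on a dark background, plus noise
rng = np.random.default_rng(6)
img = rng.normal(0.2, 0.15, (32, 32))
img[:, 14:18] += 0.6                               # the root
seg = icm_segment(img, mu_fg=0.8, mu_bg=0.2)
print(seg[:, 14:18].mean(), seg[:, :10].mean())    # root vs background label rates
```

The smoothness term is what makes this more robust than per-pixel thresholding: isolated noisy pixels get voted down by their neighbors, which is the appeal of an MRF for thin, connected structures like roots.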

That concludes the presented talks for day 2.

The poster session looked really interesting, unfortunately I couldn’t attend because I had to finish an assignment for my graph theory course.  A couple of people I would like to chat with again are Xian Zhang (had a poster about a novel phenotypic distance measure for image based screens), and Gregoire Pau (who had a poster about the geometry of phenotype spaces).  Both are working with Wolfgang Huber at EMBL Heidelberg.  I’ll have to try to find them before I leave.