Sebastian Seung, Ph.D.
Dorothy W. Poitras Professor of Computational Neuroscience

Department of Brain and Cognitive Sciences
Building: 46-5065
Lab: Seung Lab
Email: seung@mit.edu

SEEING CONNECTOMES

Today's automated microscopes acquire images with great speed, easily amassing terabytes or even petabytes of data. In contrast, the analysis of these images is largely manual, performed slowly and laboriously by humans. We are entering an era in which the analysis of enormous image datasets is a major bottleneck for scientific research. My laboratory is dedicated to removing this bottleneck through innovations in artificial intelligence. We aim to make computers smart enough to analyze images from biological microscopy with little or no human assistance. In other words, we seek to fill an important technological gap: modern science needs not only machines for making images, but also machines for seeing them.

The best example of this gap is found in the emerging field of connectomics. In the 1970s and 80s, serial electron microscopy (EM) was used to image every neuron and every synapse of the nematode C. elegans. Then the images were analyzed to find all connections in the C. elegans nervous system, producing a map that is now called a "connectome." Although C. elegans contains just 7000 connections between 300 neurons, its connectome took over a dozen years to find. Because of the extreme labor involved, the same method could not be extended to brains more like our own.

Recently, two inventions have renewed interest in using serial EM to find connectomes. The first is serial block-face scanning electron microscopy (SBF-SEM), developed at the Max Planck Institute for Medical Research in Heidelberg and commercialized by Gatan as 3View. The second is the automatic tape-collecting lathe ultramicrotome (ATLUM), currently under development at Harvard's Center for Brain Science. Compared to traditional serial EM, these automated methods can be used to image larger volumes of brain tissue. But manual analysis of a mere cubic millimeter would consume tens of thousands of person-years. With improved image acquisition, the analysis bottleneck has become even more severe.

Therefore, we are working on automating the two image analysis tasks critical for finding connectomes: reconstructing the shapes of neurons, and identifying synapses. Reconstructing the shape of a neuron depends on accurately tracing its axon and dendrites through the images. If an axon is mistraced, then all of its synapses will be assigned to the wrong neuron. In contrast, misidentification of a synapse is an isolated error. So accurate reconstruction of neuron shapes is the key challenge, as errors in this task have catastrophic consequences.

In the field of computer vision, reconstructing the shapes of neurons is a special case of a general problem known as image segmentation, the division of an image into segments corresponding to distinct objects. Although this problem has been studied for 50 years, computers are still not smart enough to solve it reliably. One difficulty is that object boundaries are sometimes ambiguous, due to noise or other imperfections in the image. Computers still fall short of humans at using contextual information to “fill in” the missing information at an ambiguous location.

My laboratory has been pursuing an approach to image segmentation based on machine learning. We collect a database of human segmentations of images, and use it to train the computer to emulate humans. In a recent breakthrough, lab members Viren Jain and Srini Turaga have devised the first learning methods based on genuine measures of segmentation performance. The methods, called Maximin Affinity Learning of Image Segmentation (MALIS) and Boundary Learning by Optimization with Topological Constraints (BLOTC), train the computer to emulate human segmentations without forcing it to slavishly reproduce the precise locations of object boundaries. This yields superior accuracy by focusing training on the elimination of topological errors, such as splits and mergers.
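
A minimal sketch may help make the maximin-affinity idea concrete. This is not the lab's implementation; the toy graph, affinity values, and ground-truth labels below are invented for illustration. The maximin affinity between two voxels is the largest "bottleneck" affinity over all paths connecting them, and a MALIS-style loss penalizes it when it disagrees with whether the two voxels lie in the same ground-truth segment.

    # Sketch only: maximin affinities on a tiny affinity graph (Python).
    # Processing edges in decreasing order of affinity with union-find,
    # the affinity at which two nodes first become connected is exactly
    # their maximin affinity.

    def maximin_affinities(nodes, edges):
        """Return {(u, v): maximin affinity} for all node pairs.

        edges: dict mapping (u, v) tuples to affinities in [0, 1].
        """
        parent = {n: n for n in nodes}

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        result = {}
        for (u, v), a in sorted(edges.items(), key=lambda kv: -kv[1]):
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            # Every pair newly joined by this union gets maximin affinity a.
            comp_u = [n for n in nodes if find(n) == ru]
            comp_v = [n for n in nodes if find(n) == rv]
            for x in comp_u:
                for y in comp_v:
                    result[tuple(sorted((x, y)))] = a
            parent[ru] = rv
        return result

    # Toy example: four voxels in a line, true boundary between b and c.
    nodes = ["a", "b", "c", "d"]
    edges = {("a", "b"): 0.9, ("b", "c"): 0.4, ("c", "d"): 0.8}
    truth = {"a": 1, "b": 1, "c": 2, "d": 2}  # ground-truth segment labels

    for (u, v), m in sorted(maximin_affinities(nodes, edges).items()):
        target = 1.0 if truth[u] == truth[v] else 0.0
        # A MALIS-style loss would penalize (m - target) ** 2 on each pair.
        print(f"{u}-{v}: maximin affinity {m:.2f}, target {target}")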

Using MALIS and BLOTC, we have trained convolutional networks to classify image voxels or the edges of an affinity graph. Thresholding followed by segmentation with connected components yields performance far superior to that of competing algorithms. MALIS and BLOTC are fundamental innovations in computer vision. We were forced to make these innovations because the computational problems of connectomics are too difficult for conventional methods. Since our innovations are so basic, they are applicable not only to connectomics but more broadly to all types of image segmentation problems.
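
The post-processing just described, thresholding the network output and then running connected components, can be sketched in a few lines. The sketch assumes a voxel-wise probability map as input; the array values and the 0.5 threshold are made up for illustration, not taken from the lab's pipeline.

    # Sketch only: threshold a predicted probability map and segment it
    # with connected components (Python, NumPy, SciPy).
    import numpy as np
    from scipy import ndimage

    # Pretend output of a convolutional network: probability that each
    # voxel lies inside a neuron (high) rather than on a boundary (low).
    pred = np.array([
        [0.9, 0.8, 0.1, 0.7, 0.9],
        [0.9, 0.7, 0.2, 0.8, 0.9],
        [0.8, 0.9, 0.1, 0.9, 0.8],
    ])

    # Threshold, then label connected components; each label is one
    # candidate neuron segment.
    mask = pred > 0.5
    segments, n_segments = ndimage.label(mask)

    print(segments)
    print(f"{n_segments} segments found")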

In spite of these advances, computers are still not accurate enough to analyze images with zero human assistance. Therefore, we are creating a software package called Omni for semi-automated image segmentation. A human uses Omni to "proofread" the output of the convolutional networks. We have preliminary evidence that such semi-automated segmentation by proofreading requires much less human labor than fully manual segmentation. As computer accuracy continues to improve, the required human labor is expected to decrease even further.

READING CONNECTOMES

Once we (with the aid of our computers) are able to see connectomes, the next challenge will be to decode the information contained in them. One can imagine many types of research along these lines, but I am personally most excited by the prospect of "reading" out memories from connectomes. Since the 19th century, neuroscientists have entertained the hypothesis that memories are written in the connections between neurons. Connectomics will finally make direct tests of this hypothesis possible.

Decades of decoding information in neural spike trains have convinced us that mental states correspond to patterns of neural activity. Similarly, we must attempt to decode information in connectomes, if we are to test the theory that memories correspond to patterns of neural connectivity.

For some mental disorders, such as schizophrenia and autism, it has been difficult to identify a clear neuropathology in the brain. We will attempt to read these mental disorders from connectomes, motivated by the hypothesis that they are connectopathies: abnormalities of connectivity that have so far evaded detection due to the crudeness of our methods. This will be done with both mouse models of mental disorders and human brain tissue.

VIRAL TRACERS

Above I have described methods for finding neural connectivity based on electron microscopy (EM), which allows positive identification of connections between neurons due to its extremely high spatial resolution. The alternative of light microscopy (LM) is more widespread because it is easier to use. As Golgi discovered in the 19th century, sparse labeling of neurons makes their neurites traceable in LM images, in spite of the low spatial resolution. Recent years have seen a revolution in sparse neural labeling based on genetic methods. Postdoctoral fellow Ian Wickersham is improving on these methods by using recombinant forms of rabies virus. Viral replication amplifies the copy number of fluorescent proteins, producing labeling brilliant enough to permit identification of even weak axonal projections. Individual neurons can additionally be reconstructed in their entirety, including the finest details of their morphology. Furthermore, because the wild-type rabies virus jumps across synapses, recombinant forms can be constructed for fluorescently labeling neurons that are monosynaptically connected to particular neuronal populations of interest. These techniques should allow determination of connectivity between cell types within the brain with a throughput much higher than has previously been possible.

COMPUTATIONAL MODELS OF AXONAL GROWTH CONE MOVEMENTS

Connectomes are not static, but evolve dynamically over time as the brain wires itself up during development and rewires itself in adulthood. During wiring and rewiring, axons exhibit a rich variety of behaviors, such as elongation, retraction, turning, branching, and fasciculation. Graduate student Neville Sanjana has constructed a system for time-lapse mosaic imaging of cultured dissociated cortical neurons via fluorescence microscopy. He has recorded the movements of axon growth cones over long distances and durations with high spatial and temporal resolution. Based on the time series of observed velocity vectors, he fit a hidden Markov model in which the growth cone executes a biased random walk punctuated by “turns,” defined as shifts in the direction of the bias. The hidden Markov model provides a computational framework for quantifying the effects of pharmacological and other types of interventions on growth cone movements. The model suggests that the motion of a growth cone is not determined by its current direction, but by a “memory” of its past direction, and is consistent with the existence of discrete turning events separated by exponentially distributed time intervals.
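
As a rough illustration of this class of model (and not of the model actually fitted to the data), the sketch below simulates a growth cone whose steps fluctuate around a hidden bias direction, with turning events that reset the bias at exponentially distributed intervals. All parameter values are invented.

    # Sketch only: biased random walk with exponentially distributed
    # turning events (Python, NumPy). Parameters are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    dt = 1.0          # minutes per imaging frame (assumed)
    turn_rate = 0.02  # turning events per minute (assumed)
    speed = 1.0       # mean step length per frame (assumed)
    noise = 0.5       # standard deviation of step noise (assumed)
    n_steps = 500

    pos = np.zeros(2)
    bias_angle = rng.uniform(0, 2 * np.pi)  # hidden state: bias direction
    next_turn = rng.exponential(1.0 / turn_rate)

    trajectory = [pos.copy()]
    t = 0.0
    for _ in range(n_steps):
        t += dt
        if t >= next_turn:
            # Turning event: the hidden bias direction shifts.
            bias_angle = rng.uniform(0, 2 * np.pi)
            next_turn = t + rng.exponential(1.0 / turn_rate)
        bias = speed * np.array([np.cos(bias_angle), np.sin(bias_angle)])
        pos = pos + bias + rng.normal(0.0, noise, size=2)
        trajectory.append(pos.copy())

    trajectory = np.array(trajectory)
    print("net displacement:", trajectory[-1] - trajectory[0])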

Parts of this work were supported by grants from the Gatsby Foundation and an anonymous donor.


V. Jain, B. Bollmann, et al. Boundary learning by optimization with topological constraints. Proceedings of the IEEE 23rd Conference on Computer Vision and Pattern Recognition (CVPR '10) (2010).

I. R. Wickersham, H. A. Sullivan, and H. S. Seung. Production of glycoprotein-deleted rabies viruses for monosynaptic tracing and high-level gene expression in neurons. Nature Protocols 5:595-606 (2010).

S. C. Turaga, K. Briggman, M. Helmstaedter, W. Denk, and H. S. Seung. Maximin affinity learning of image segmentation. Advances in Neural Information Processing Systems (NIPS '09) (2010).

H. S. Seung. Reading the book of memory: sparse sampling versus dense mapping of connectomes. Neuron 62:17-29 (2009).

H. S. Seung. Connectome: How the Brain's Wiring Makes Us Who We Are. New York: Houghton Mifflin Harcourt (2012).

