Analyze, Predict & Control: A Pragmatic Approach to Understanding the Visual Brain
Description
Our brain processes the patterns of light that strike the eyes in a series of six interconnected cortical areas called the ventral visual pathway. These areas form a necessary substrate for our ability to recognize objects and their relationships in the world. Recent advances have enabled neuroscientists to build ever more precise models of this complex visual processing. Currently, certain deep artificial neural networks constitute our most accurate models of neural processing in the ventral visual pathway. Even though the nonlinear computations of these models are difficult to summarize accurately in a few words, they nonetheless provide a shareable way to embed our collective knowledge of visual processing, and they can be refined as new knowledge accumulates.
In this talk, I will describe two recent works, each emphasizing a different direction in the interplay between machine learning and neuroscience.
First, I will describe how the visual knowledge encapsulated in an artificial neural network model can be used to control neural activity at single-neuron resolution in visual area V4 of macaque monkeys. I will show evidence of successful control in two settings: (i) neural “stretch”, in which we synthesized images that drive the firing rate of a single targeted neural site well beyond its naturally occurring maximal rate, and (ii) neural population state control, in which we synthesized images to independently control every neural site in a small recorded population (here, populations of 5 to 40 neural sites). I will discuss this method’s potential as a tool for neuroscience as well as its current limitations.
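For intuition, here is a minimal sketch of the kind of image synthesis this relies on: gradient ascent on the pixels of an image to maximize the predicted response of one model unit. Everything specific in it is an illustrative assumption rather than the talk's actual pipeline: AlexNet stands in for the ventral-stream model, `features[8]` channel 42 is an arbitrary "target site", and the real method additionally maps model features to recorded V4 sites and regularizes the synthesized images.

```python
# Illustrative sketch (not the talk's actual pipeline): gradient ascent on pixels
# to push one model unit's activation as high as possible, the basic idea behind
# synthesizing "stretch" images for a targeted site.
import torch
import torchvision.models as models

model = models.alexnet(pretrained=True).eval()      # stand-in for a ventral-stream model
target_layer, target_unit = model.features[8], 42   # hypothetical choice of "neural site"

activation = {}
def hook(module, inputs, output):
    activation["value"] = output
target_layer.register_forward_hook(hook)

# Start from noise and optimize the image itself.
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    model(image)
    # Mean response of the chosen channel serves as a firing-rate proxy.
    response = activation["value"][0, target_unit].mean()
    (-response).backward()       # ascend the response by descending its negative
    optimizer.step()
    image.data.clamp_(-3, 3)     # keep pixels in a plausible normalized range
```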
Second, I will discuss an approach for discovering improved models of visual object recognition by maximizing the similarity of a candidate model’s internal activations to those of an artificial or biological neural network. Using simulated and experimentally measured neural responses, I will present evidence that, compared with performance-guided search methods, this procedure can discover equally performing models with an order of magnitude less computation, or yield a significant reduction in object categorization error given the same computational budget.
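As a rough illustration of the scoring idea, the sketch below ranks candidate models by how similar their internal activations are to a set of reference responses over a shared stimulus set. The representational-dissimilarity comparison, the random data, and the model names are all placeholders of my own; the actual work may use a different similarity measure (e.g., regression-based neural predictivity).

```python
# Minimal sketch (assumptions, not the talk's method): score candidates by the
# similarity of their activations to reference responses, then keep the best.
import numpy as np

def rdm(features: np.ndarray) -> np.ndarray:
    """Representational dissimilarity matrix: 1 - Pearson r between stimulus rows."""
    return 1.0 - np.corrcoef(features)

def similarity_score(candidate_acts: np.ndarray, reference_acts: np.ndarray) -> float:
    """Correlate the upper triangles of the two RDMs as a simple similarity score."""
    n = candidate_acts.shape[0]
    iu = np.triu_indices(n, k=1)
    return float(np.corrcoef(rdm(candidate_acts)[iu], rdm(reference_acts)[iu])[0, 1])

# Hypothetical usage: pick the candidate whose activations best match the reference.
rng = np.random.default_rng(0)
reference = rng.standard_normal((50, 128))                  # 50 stimuli x 128 "neurons"
candidates = {f"model_{i}": rng.standard_normal((50, 64)) for i in range(3)}
best = max(candidates, key=lambda k: similarity_score(candidates[k], reference))
```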
Speaker Bio
Pouya Bashivan is a postdoctoral associate at the Department of Brain and Cognitive Sciences and the McGovern Institute for Brain Research, MIT, working with Professor James DiCarlo. He received his PhD in computer engineering from the University of Memphis in 2016. Prior to that, Pouya studied control engineering, earning B.S. and M.S. degrees in electrical and control engineering from KNT University (Tehran, Iran).
Additional Info
Upcoming Cog Lunches:
- Tuesday, May 7 - Johannes Burges
- Tuesday, May 14 - TBA