
Hierarchical neural network models that more closely match primary visual cortex also better explain object recognition behavior
Description
Object recognition relies on the hierarchical processing of visual information along the primate ventral stream. Artificial neural networks (ANNs) have recently achieved unprecedented accuracy in predicting neuronal responses in different cortical areas as well as primate behavior. In this talk, I will present an extension of this approach, in which hundreds of different hierarchical models were tested to quantitatively assess how well they explain primate primary visual cortex (V1) across a wide range of experimentally characterized functional properties. We found that, for some ANNs, individual artificial neurons in early and intermediate layers have functional properties remarkably similar to those of their biological counterparts, and that the distributions of these properties over all neurons approximately match the corresponding distributions in primate V1. Still, none of the candidate models accounted for all the functional properties, suggesting that current network architectures may not be capable of fully explaining primate V1 at the single-neuron level. Since some ANNs have “V1 areas” that approximate primate V1 more precisely than others, we investigated whether a more brain-like V1 model also leads to better models of object recognition behavior. Indeed, over a set of 48 ANN models optimized for object recognition, V1 similarity was positively correlated with behavioral predictivity. This result supports the widespread view that the complex visual representations required for object recognition are built from low-level functional properties, and it demonstrates, for the first time, that working to build better models of low-level vision has tangible payoffs in explaining complex visual behaviors. Moreover, the set of functional V1 benchmarks presented here can be used to guide the search for better models of V1, which will likely result in better models of the primate ventral stream.
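To make the headline analysis concrete, here is a minimal sketch of the kind of model-comparison computation described above: correlating a per-model V1 similarity score with a per-model behavioral predictivity score across a pool of ANNs. The scores below are randomly generated placeholders, not real benchmark results; in the actual work they would come from evaluating each network on the V1 functional-property benchmarks and on behavioral benchmarks.

```python
# Hypothetical sketch: correlate V1 similarity with behavioral predictivity
# across a pool of object-recognition ANNs. All scores are placeholders.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_models = 48  # number of ANN models in the comparison described in the talk

# v1_similarity[i]: how well model i's "V1 layer" matches primate V1 benchmarks
# behavioral_score[i]: how well model i predicts primate object recognition behavior
v1_similarity = rng.uniform(0.4, 0.9, size=n_models)            # stand-in values
behavioral_score = 0.5 * v1_similarity + rng.normal(0.0, 0.05, size=n_models)

# Test whether more brain-like V1 representations go with better behavioral models
r, p = pearsonr(v1_similarity, behavioral_score)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```

A positive, significant r in this setup would correspond to the reported finding that models with more V1-like early layers also better predict object recognition behavior.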
Speaker Bio
My major research interest is studying how hierarchical processing in neuronal networks in the brain gives rise to sensory perception. I find it particularly intriguing how recurrent cortical circuits combine bottom-up inputs with top-down signals to interpret sensory information in the context of behavior. Motivated by recent developments in genetic tools for mice, I studied the cortical circuits involved in visual perception in this model organism during my PhD research. Under the supervision of Leopoldo Petreanu, I developed a head-fixed motion discrimination task for mice and established a causal link between activity in the primary visual cortex (V1) and motion perception. Following that project, I studied the functional organization of cortical feedback and showed that feedback inputs to V1 relay contextual information to matching retinotopic regions in a highly specific manner. In 2019, I joined the lab of Prof. James DiCarlo at MIT to continue my studies of visual processing in the brain. My current research uses artificial neural networks (ANNs) to study primate object recognition behavior. I have been developing benchmarks to evaluate how well different network architectures match brain representations, and using those results to guide the design of better models.
Additional Info
Upcoming Cog Lunches:
- Tuesday, March 3, 2020 - Ethan Wilcox (Levy Lab)
- Tuesday, March 10, 2020 - Maddie Pelz (Schulz Lab)
- Tuesday, March 17, 2020 - Jenelle Feather (McDermott Lab)
- Tuesday, March 31, 2020 - Stephan Meylan (Levy Lab)
- Tuesday, April 7, 2020 - Ashley Thomas (Saxe Lab)
- Tuesday, April 14, 2020 - Marta Kryven (Tenenbaum Lab)
- Tuesday, April 21, 2020 - Andrew Francl (McDermott Lab)
- Tuesday, April 28, 2020 - Andrew Bahle (Fee Lab)
- Tuesday, May 5, 2020 - Mahdi Ramadan (Jazayeri Lab)
- Tuesday, May 12, 2020 - Mika Braginsky (Ted Lab)
- Tuesday, May 26, 2020 - Dana Boebinger (McDermott Lab & Kanwisher Lab)