
NeuroLunch: Daoyuan Qian (Fiete Lab) & Josefa Scherrer (Fee Lab)
Description
Speaker: Daoyuan Qian (Fiete Lab)
Title: How (much) can we make use of chaos?
Abstract: Randomly organised neural networks exhibit chaotic dynamics, yet the brain is capable of generating highly coherent activity on this substrate. The reservoir computing paradigm explores how a chaotic ‘reservoir’ can be efficiently utilised to produce desired sequences. In this talk, I will explain the high-level ideas behind training reservoir computing systems, and present new insights into distinct performance bounds that arise from different mechanistic origins. Understanding these limits of reservoir computers can guide their design and implementation, while also providing intuition about the interplay between reservoir properties and performance.
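For readers unfamiliar with the paradigm, a minimal echo-state-network sketch conveys the core idea: the random recurrent network ("reservoir") is left untrained, and only a linear readout is fit, typically by ridge regression. All sizes and constants below are illustrative choices, not details from the talk (which concerns the chaotic regime, i.e. recurrent gains above 1, rather than the stable echo-state regime used here).

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, washout = 200, 1200, 200  # reservoir size, time steps, transient to discard

# Fixed random recurrent weights, rescaled so the spectral radius is just
# below 1 (the stable "echo state" regime; chaos sets in above 1).
W = rng.normal(size=(N, N)) / np.sqrt(N)
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-1, 1, size=N)          # fixed random input weights

u = np.sin(2 * np.pi * np.arange(T) / 50)  # input signal
y = np.roll(u, -1)                         # target: next-step prediction

# Run the reservoir and collect its states — no training happens here.
x = np.zeros(N)
X = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

# Train only the linear readout by ridge regression, after discarding the
# initial transient (the "washout") and the wrapped final target sample.
Xw, yw = X[washout:-1], y[washout:-1]
ridge = 1e-6
w_out = np.linalg.solve(Xw.T @ Xw + ridge * np.eye(N), Xw.T @ yw)

train_mse = np.mean((Xw @ w_out - yw) ** 2)
```

The design choice that makes this cheap is that training reduces to one linear solve: the reservoir's role is only to expand the input into a rich set of nonlinear features.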
Speaker: Josefa Scherrer (Fee Lab)
Title: Single-Neuron Learning using Closed-Loop Neurofeedback
Abstract: Many of the most complex behaviors that humans perform involve sequences of precisely timed muscle movements that are learned through trial and error. How is this motor learning process accomplished by the neural circuitry in our brains? We explore this question by studying one of the most precise learned motor programs in the animal kingdom, the song of the zebra finch. Zebra finches learn to sing a stereotyped song through a process of vocal experimentation and comparison to an internal template that resembles reinforcement learning. The learning process requires a basal ganglia-thalamocortical loop, the anterior forebrain pathway (AFP), that biases the motor system through its cortical output region, LMAN (the lateral magnocellular nucleus of the anterior nidopallium). Existing evidence suggests that the AFP learns a time-dependent bias signal that shifts motor output to avoid vocal errors, but little is known about the neural code in LMAN that underlies this bias signal or how this neural code is learned and generated. We address these questions by building a closed-loop neurofeedback system that allows us to impose correlations between the activity of individual LMAN neurons and a dopaminergic reward signal. Using this system, we demonstrate that birds can learn to activate individual LMAN neurons at precise points in time, driving neurons to fire at up to 200 Hz within a 10-millisecond window in song. This learned bias signal is remarkably temporally precise, with single-millisecond jitter relative to the rewarded time. Learned bias is specific to LMAN neurons correlated with reward, and neighboring uncorrelated neurons exhibit no change in firing rate during learning. These observations imply narrow spike-timing-dependent plasticity (STDP) rules at corticostriatal synapses and cellular-resolution targeting precision in the long-range projections of the AFP circuit. Finally, we show that learned bursts in individual LMAN neurons consistently perturb song output in a narrow window following the burst time.
Taken together, these observations confirm our central hypothesis that LMAN drives song learning by independently activating individual neurons at precise points in time in order to bias vocal output and avoid vocal errors.
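The logic of spike-contingent reward can be illustrated with a toy simulation. The REINFORCE-style, reward-modulated update rule below and all of its numbers are my own illustrative assumptions, not the Fee Lab's method or a model of LMAN; the sketch only shows why making reward contingent on spiking in one time bin selectively potentiates that bin.

```python
import numpy as np

rng = np.random.default_rng(1)

T = 100          # time bins per rendition (think ~10 ms bins in song)
target_bin = 40  # reward is contingent on a spike in this bin only
p = np.full(T, 0.05)  # per-bin spike probabilities of one model neuron
rbar, lr = 0.05, 0.1  # running reward baseline and learning rate

for _ in range(1000):
    spikes = (rng.random(T) < p).astype(float)
    r = spikes[target_bin]                # closed-loop contingency
    # Reward-modulated update: bins whose spiking covaries with reward
    # drift upward; bins uncorrelated with reward have zero expected drift.
    p = np.clip(p + lr * (r - rbar) * (spikes - p), 0.01, 0.99)
    rbar += 0.05 * (r - rbar)             # track the mean reward

# After training, firing probability at the rewarded bin dominates, while
# uncorrelated bins stay near baseline — a minimal version of the
# temporal and cellular specificity the experiments probe.
```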