Time for nonlinearity: Bayesian estimation and flexible production of time intervals by recurrent networks emulating cortical activity
Description
Temporal control is a central component of higher brain function. For instance, we can flexibly change the speed of our actions, and we can perform Bayesian inference to estimate time intervals. In the first part of this talk, we will propose a neural code for flexible temporal control of movement; in the second part, we will discuss how cortical dynamics can perform Bayesian inference to estimate time intervals.
1. Data recorded from the medial frontal cortex (MFC) of monkeys producing different time intervals indicate that flexible temporal control is accomplished by stretching or compressing firing-rate profiles in time. This observation suggests that temporal control can be understood in terms of the speed of cortical dynamics. We used recurrent neural networks (RNNs) to probe the underlying mechanisms of speed control. The results revealed a simple solution: a context-dependent input systematically shifted neural trajectories into regions with different energy gradients, allowing the activity to evolve at different speeds. Further analysis showed that speed control emerged from an interaction between the input drive and the nonlinear activation functions of single units.
2. In time-interval reproduction tasks, monkeys, like humans, integrate noisy measurements with prior distributions of intervals. This Bayesian strategy biases responses toward the mean of the prior. The activity of individual neurons in the lateral intraparietal (LIP) cortex of monkeys did not reveal how the prior might be represented or integrated. At the population level, however, responses exhibited unexpected rotational dynamics during measurement of the interval, hinting that Bayesian inference might emerge at the level of population dynamics. RNNs trained to use a Bayesian strategy to reproduce time intervals exhibited similar rotational dynamics. Analysis of these dynamics revealed that the prior distribution was implicitly encoded by a line of fixed points in state space that organized the rotational dynamics. The rotation of trajectories around these fixed points provided a natural substrate for Bayesian integration: higher levels of noise along a trajectory were compensated by larger biases toward the prior-dependent fixed point. This observation provides an elegant bridge between the neural computations associated with Bayesian integration and the latent dynamics established by neural systems.
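The speed-control idea in part 1 can be caricatured with a toy model (not the trained networks from the talk): a single leaky tanh unit whose tonic input drive shifts its operating point along the nonlinearity, so the same trajectory unfolds at a different speed. All parameter values here are made up for illustration.

```python
import numpy as np

def simulate(input_drive, steps=200, dt=0.1):
    """Euler-integrate a single leaky tanh unit: dx/dt = -x + tanh(1.2*x + u)."""
    x = -1.0
    traj = [x]
    for _ in range(steps):
        # The tonic context input shifts the state into a region of the
        # nonlinearity with a different gradient, so the drift toward the
        # positive fixed point is faster or slower depending on the drive.
        x += dt * (-x + np.tanh(1.2 * x + input_drive))
        traj.append(x)
    return np.array(traj)

def first_cross(traj, threshold=0.5):
    """Index of the first time step at which the state exceeds the threshold."""
    return int(np.argmax(traj > threshold))

t_slow = first_cross(simulate(input_drive=0.2))  # weaker drive: slower evolution
t_fast = first_cross(simulate(input_drive=0.6))  # stronger drive: faster evolution
```

Because the drift term is monotonically increasing in the input drive, the strongly driven trajectory reaches the threshold earlier, mimicking the stretched and compressed firing-rate profiles described above.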
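The Bayesian bias in part 2 can be illustrated with a minimal worked example under simplifying assumptions not taken from the talk: a Gaussian prior over intervals and Gaussian measurement noise, with hypothetical parameter values. The posterior-mean estimate is a precision-weighted average that pulls noisy measurements toward the prior mean, so short intervals are overestimated and long ones underestimated.

```python
# Hypothetical parameters: Gaussian prior over intervals, Gaussian measurement noise
PRIOR_MEAN, PRIOR_SD = 0.85, 0.15   # seconds
NOISE_SD = 0.10                     # seconds

def bayes_estimate(measurement):
    """Posterior mean for a Gaussian prior and Gaussian likelihood:
    a precision-weighted average of the measurement and the prior mean."""
    w = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)  # weight on the measurement
    return w * measurement + (1 - w) * PRIOR_MEAN

short_est = bayes_estimate(0.60)  # pulled up toward the prior mean
long_est = bayes_estimate(1.10)   # pulled down toward the prior mean
```

The larger the measurement noise relative to the prior width, the smaller the weight on the measurement and the stronger the bias toward the prior mean, which is the compensation the rotational dynamics are proposed to implement.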
UPCOMING COG LUNCH TALKS
- 5/9/17 Julia Leonard, Schulz/Gabrieli Labs
- 5/16/17 Max Kleiman-Weiner, Tenenbaum Lab