**This event has been cancelled due to weather conditions. It will be rescheduled soon. Please check back for dates.**
Advances combining artificial intelligence techniques with computational neuroscience have shown that time-averaged neural responses in the primate visual and auditory systems can be modeled with reasonable accuracy by task-optimized deep neural networks. I'll discuss our lab's recent work to broaden and deepen these results, using convolutional recurrent networks to model the rodent somatosensory system and to capture neural dynamics and spatial structure in the visual system. I'll also talk about attempts to plug the biggest hole in the task-optimized theory: moving beyond unrealistic labelled supervision by building self-supervised interactive agents that learn powerful sensory representations, and I'll discuss how these ideas connect to development. Moving beyond sensory systems, I'll describe models bridging to decision-making and memory, in the context of modular continual learning. Taken together, these directions constitute one possible roadmap for a future interaction between artificial intelligence and computational neuroscience.