Understanding Computation Through Low-Dimensional Dynamics with Recurrent Neural Networks | Understanding productivity inferences in children and models
Description
Eli Pollock
Understanding Computation Through Low-Dimensional Dynamics with Recurrent Neural Networks
A common framework for thinking about the brain is in terms of the computations it performs that allow an organism to survive in the world. However, there is currently a gap between models of cognitive computation and models of biological neural networks. Specifically, there is a need for a better understanding of how information is represented and manipulated by neural networks. One way to link population activity to cognitive phenomena is through neural dynamics, which can be modeled by recurrent neural networks (RNNs). In this talk, I will discuss work relating network connectivity to a drift-diffusion model of a simple working memory task. Next, I will describe generating hypotheses for how an RNN might perform hierarchical inference by training it on a task. Finally, I will cover some new tools for creating dynamics-based hypotheses directly from neural data.
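For readers unfamiliar with the drift-diffusion model mentioned above, a minimal simulation sketch follows. This is an illustration of the standard model, not the speaker's implementation; all parameter names and values (drift rate, noise level, bound) are illustrative assumptions.

```python
import numpy as np

def drift_diffusion_trial(drift=0.5, noise=1.0, bound=1.0,
                          dt=0.001, max_t=5.0, rng=None):
    """Simulate one trial of a drift-diffusion process.

    Evidence x accumulates from 0 with constant drift plus Gaussian
    noise until it crosses +bound or -bound (a decision) or time
    runs out. Returns (choice, reaction_time), where choice is
    +1 / -1 for the two bounds and 0 if no bound was reached.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        # Euler step: deterministic drift plus scaled Gaussian noise
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    choice = 1 if x >= bound else (-1 if x <= -bound else 0)
    return choice, t

choice, rt = drift_diffusion_trial()
```

With a positive drift rate, most trials terminate at the upper bound, and the distribution of reaction times across many trials takes the characteristic right-skewed shape that makes this model a common account of evidence accumulation in simple decision and memory tasks.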
Mika Braginsky
Understanding productivity inferences in children and models
One of the central features of language is that it doesn't simply consist of a stored list of static things to say, but rather provides a productive system that speakers can use to express potentially infinite meanings. In the process of language acquisition, children must infer the rules of such a system without explicit instruction and from sparse and noisy input. How do children figure out whether to generalize a pattern beyond the examples that they heard, and how to restrict the scope of its application? We propose a series of studies that ask how distributional and structural properties of morphological systems affect their learnability, for both children and computational models. In the first study, we use large-scale parent-report data to study the relationships between children's linguistic input and grammatical development across languages. In the second study, we systematically examine how different simulated datasets affect productivity inferences, under various learning assumptions. For both of these projects, we present some preliminary results and describe a proposed program of research.
Additional Info
Upcoming Cog Lunches
- October 30, 2018 - Peng Qian, Jon Gauthier, & Maxwell Nye
- November 13, 2018 - Anna Ivanova, Halie Olson, & Junyi Chu
- November 20, 2018 - Mark Saddler, Jarrod Hicks, & Heather Kosakowski
- November 27, 2018 - Tuan Le Mau
- December 4, 2018 - Daniel Czegel
- December 11, 2018 - Malinda McPherson