Mika Braginsky, Levy/Gibson Lab - Bayesian models of productivity in acquisition
One of the central features of language is that it does not consist simply of a stored list of static things to say, but rather provides a productive system that speakers can use to express a potentially infinite range of meanings. In the process of language acquisition, children must infer the rules of such a system without explicit instruction and from sparse and noisy input. How do children figure out whether to generalize a pattern beyond the examples they have heard? In this work, I explore this question in the context of a much-debated area of morphological learning: the formation of the English past tense. I describe a Bayesian model of learning in this domain based on the Fragment Grammar framework (O'Donnell 2015), in which productivity is inferred from a trade-off between storage and computation. I compare the predictions of this model to those of another prominent recent model, the Tolerance Principle (Yang 2016), and outline a proposal to evaluate these theories against developmental data.
Jenelle Feather, McDermott Lab - Sonification of auditory models
A central goal of neuroscience is to develop models of neural responses and perceptual judgments. Models are often evaluated by measuring their responses to a set of stimuli and correlating these predicted responses with measured neural responses, or by predicting perceptual judgments from the model output. An alternative approach is to synthesize stimuli that produce specific values in a model's representation, typically those evoked by a particular natural stimulus. The logic behind model-based synthesis is that stimuli producing the same response in a model should evoke the same neural response (or percept) if the model replicates the representations underlying the neural response (or perception) in question. I will describe a general-purpose optimization method for model-based synthesis in the domain of audition, and the scientific endeavors that it enables.
Andrew Francl, McDermott Lab - Computational Models of Binaural Localization
The ability to localize a sound source by listening is a core component of audition, but it has traditionally been studied using simple sounds and listening conditions, such as single noise bursts or tones in anechoic environments. We propose building computational models of real-world sound localization to better understand the task's intrinsic structure. In this talk we introduce two models toward this goal. The first uses a deep learning model to probe the patterns in performance that emerge when optimizing for distinct auditory environments. The second uses a generative model to examine how localization performance is affected by the model's assumptions about the environment.
Note: In order to fit multiple talks within the hour-long slot, the first talk will start promptly at 12:05, and lunch has been ordered to arrive at 11:50. Please come early, collect your lunch, and be seated by 12:05.
UPCOMING COG LUNCH TALKS:
11/07/17 - Richard McWalter, Ph.D. (McDermott Lab)
11/14/17 - Kevin Ellis (Tenenbaum Lab)
11/21/17 - Dian Yu (Rosenholtz Lab)
11/28/17 - Yang Wu (Schulz Lab)
12/05/17 - Melissa Kline, Ph.D.