Matthias Hofer, Levy Lab - Understanding the emergence of productive combinatorial structure in language
One hallmark of human cognition is the use of representational systems that are both compositional and productive. Language exhibits such structure on multiple levels, a property referred to as duality of patterning: compositional structure in the narrow sense refers to the way meaning-bearing units such as words combine into phrases, while combinatorial structure describes how meaningless basic elements (phonemes) combine to make up words. Recent experiments emulating cultural evolution have demonstrated that such structures can arise from unstructured input, but this research has primarily focused on compositional structure. I will describe initial evidence on the emergence of combinatorial structure in artificial languages from Verhoef (2012), where participants were asked to listen to and reproduce signals in a novel signal domain using slide whistles. Although this work is an important first step, it leaves open many crucial questions about the nature and productivity of participants’ knowledge. To address these questions, I will describe work on a computational model, inspired by recent work on probabilistic program induction, that formalizes the induction problem as inverse causal reasoning. I will argue that the model is well suited to further our understanding of combinatorial structure in language, because it allows us to formulate specific, testable hypotheses both about the inventories of basic units that learners acquire over the course of the experiment and about the extent to which their knowledge is productive.
Maddie Pelz, Schulz Lab - Children's Representation of Uncertainty in a Statistical Reasoning Task
To what extent do children have meta-cognitive knowledge about uncertainty? Can they represent both the amount and type of data required to answer a given query, and do they modulate their information-seeking in response? To address these questions, I will discuss a task investigating whether children can judge the difficulty of discriminating two distributions presented as boxes of colored balls, and whether they use this knowledge to change how many samples they request before guessing which hidden distribution they are sampling from (an ‘intuitive power analysis’). I will present a computational model of certainty that maps out the space of sampling in this task, and discuss results of preliminary pilot studies with both children and adults.
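To give a concrete feel for the ‘intuitive power analysis’ idea, here is a minimal sketch (not the speaker's actual model): two urns of colored balls are compared by drawing samples until a Bayesian posterior over which urn is being sampled passes a confidence threshold. The urn compositions, the 0.95 threshold, and all function names are invented for illustration; the point is simply that more similar distributions require more samples before a confident guess.

```python
import random

def posterior_urn_a(samples, urn_a, urn_b):
    """P(urn A | samples) under a uniform prior over the two urns."""
    p_a = p_b = 0.5
    for color in samples:
        p_a *= urn_a[color]
        p_b *= urn_b[color]
    return p_a / (p_a + p_b)

def samples_needed(urn_a, urn_b, threshold=0.95, seed=0):
    """Draw from urn A until the posterior passes the threshold.
    More similar urns need more samples on average."""
    rng = random.Random(seed)
    colors, weights = zip(*urn_a.items())
    samples = []
    while posterior_urn_a(samples, urn_a, urn_b) < threshold:
        samples.append(rng.choices(colors, weights=weights)[0])
    return len(samples)

# Easy discrimination: very different urns. Hard: nearly overlapping urns.
easy = samples_needed({"red": 0.9, "blue": 0.1}, {"red": 0.1, "blue": 0.9})
hard = samples_needed({"red": 0.6, "blue": 0.4}, {"red": 0.4, "blue": 0.6})
print(easy, hard)  # the more similar pair requires more draws
```

An intuitive power analyst, on this toy view, would request few samples for the easy pair and many for the hard one before committing to a guess.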
Andres Campero Nunez, Tenenbaum Lab - Learning Concepts and Abstractions
I will talk about concept acquisition in the context of language grounded in perception. We use a computational approach that simultaneously learns the meanings of words and learns to answer visual reasoning questions. By solving these problems in parallel, we attempt to replicate some aspects of the way children learn from little data, acquiring not only nouns and properties but also relational and functional words. I will end with an idea about learning more complex concepts, such as actions and algorithms, using an abstract intermediate reasoning space, noting that in many cases human program induction proceeds by iteratively refining goals into subgoals.
Note: In order to fit multiple talks within the hour-long slot, the first talk will start promptly at 12:05, and lunch has been ordered to arrive at 11:50. Please try to come early, collect your lunch, and be seated by 12:05.
UPCOMING COG LUNCH TALKS:
10/31/17 - Mika Braginsky (Levy Lab), Jenelle Feather (McDermott Lab), and Andrew Francl (McDermott Lab)
11/07/17 - Richard McWalter, Ph.D. (McDermott Lab)
11/14/17 - Kevin Ellis (Tenenbaum Lab)
11/21/17 - Dian Yu (Rosenholtz Lab)
11/28/17 - Yang Wu (Schulz Lab)
12/05/17 - Melissa Kline, Ph.D.