Developing a scalable theory of alternatives; Flexibly understanding actions in different sentences and different worlds: Contextually adaptive semantics for physical actions and goals
Description
Jennifer Hu
Developing a scalable theory of alternatives
Humans consider counterfactual observations in order to perform a variety of reasoning tasks, such as making causal and moral judgments. These alternative possibilities also enable us to draw pragmatic inferences in language understanding. For example, if you hear “some students passed the exam,” you likely infer that not all students passed, because the speaker could have used the more informative alternative “all students passed the exam” if that had been the case. In this talk, I will discuss existing theories of linguistic alternatives, as well as a growing body of empirical evidence that motivates a more flexible theory of how alternatives are learned, generated, and deployed in pragmatic inference.
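For readers unfamiliar with how alternatives drive this kind of inference, the sketch below works through the "some"/"all" example in the style of a standard Rational Speech Acts (RSA) model, in which a pragmatic listener reasons about a speaker who chooses among alternative utterances. This is illustrative background only, not the theory presented in the talk; the two-world alternative set, the uniform prior, and the rationality parameter `alpha` are all simplifying assumptions.

```python
import numpy as np

# Toy alternative set for the "some students passed" example.
worlds = ["some-but-not-all", "all"]   # how many students passed
utterances = ["some", "all"]           # the speaker's alternatives

# Literal semantics: truth[u, w] = 1 if utterance u is true in world w.
# Note that "some" is semantically true even when all students passed.
truth = np.array([
    [1.0, 1.0],   # "some"
    [0.0, 1.0],   # "all"
])

prior = np.array([0.5, 0.5])  # uniform prior over worlds (an assumption)
alpha = 4.0                   # speaker rationality, illustrative value

def normalize(x, axis):
    return x / x.sum(axis=axis, keepdims=True)

# Literal listener: L0(w | u) proportional to truth[u, w] * P(w)
L0 = normalize(truth * prior, axis=1)

# Pragmatic speaker: S1(u | w) proportional to L0(w | u) ** alpha
# (utterance costs omitted for simplicity)
S1 = normalize(L0 ** alpha, axis=0)

# Pragmatic listener: L1(w | u) proportional to S1(u | w) * P(w)
L1 = normalize(S1 * prior, axis=1)

print(dict(zip(worlds, L1[0].round(3))))
# ~94% of the probability falls on "some-but-not-all":
# hearing "some" implicates "not all", because the speaker
# could have said "all" if that had been true.
```

The inference falls out of the recursion: the pragmatic listener discounts the "all" world after hearing "some" precisely because a rational speaker in that world would have preferred the stronger alternative.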
Cathy Wong
Flexibly understanding actions in different sentences and different worlds: Contextually adaptive semantics for physical actions and goals
Distributional word embeddings and large predictive text models have driven remarkable recent progress in many natural language processing tasks, especially ones that relate text to text. When humans use language, however, we often do so in the context of the world, and with incredible flexibility: a single word can be reused across an enormous range of linguistic and grounded contexts, while still supporting subtle distinctions in how we imagine, plan around, execute, and assess the linguistic acceptability of that word, depending on the specifics of a particular sentence in the context of a particular world. We propose a roadmap for a research program driven directly by this contextual flexibility, focusing on human-like understanding of the language of physically grounded actions and goals across different levels of spatiotemporal abstraction, varying physical world dynamics, and a diverse set of linguistically specified goals. We discuss a representational framework, a dataset of grounded linguistic planning queries in a diverse range of video game environments, and methods for inferring and learning contextually adaptive semantics across a battery of planning and instruction-following tasks.
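As a concrete, purely hypothetical illustration of this contextual flexibility, the sketch below shows how a single spatial word like "on" might compile to different executable goal tests under different world dynamics. The `WorldState`, `WorldDynamics`, and `interpret_on` names, the grid representation, and the specific meanings chosen are all assumptions made for illustration; they are not the representational framework proposed in the talk.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

Pos = Tuple[int, int]  # (x, y) grid coordinates, y increasing upward

@dataclass
class WorldState:
    positions: Dict[str, Pos]  # object name -> grid cell

@dataclass
class WorldDynamics:
    gravity: bool  # whether unsupported objects fall in this world

GoalTest = Callable[[WorldState], bool]

def interpret_on(figure: str, ground: str, dynamics: WorldDynamics) -> GoalTest:
    """Map '<figure> on <ground>' to an executable goal test.

    The same phrase yields different tests under different physics:
    the lexical entry returns a meaning parameterized by world
    dynamics rather than a fixed, context-free denotation.
    """
    def goal(state: WorldState) -> bool:
        fx, fy = state.positions[figure]
        gx, gy = state.positions[ground]
        if dynamics.gravity:
            # With gravity, "on" requires resting directly above.
            return fx == gx and fy == gy + 1
        # Without gravity, "on" relaxes to attachment: any adjacency.
        return abs(fx - gx) + abs(fy - gy) == 1
    return goal

# Usage: the same phrase is satisfied by different states in different worlds.
side_by_side = WorldState(positions={"block": (3, 2), "table": (2, 2)})
print(interpret_on("block", "table", WorldDynamics(gravity=True))(side_by_side))   # False
print(interpret_on("block", "table", WorldDynamics(gravity=False))(side_by_side))  # True
```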
Speaker Bio
Jennifer Hu
Jennifer Hu is a third-year PhD student in the Computational Psycholinguistics Lab. Her research focuses on evaluating the linguistic representations learned by neural networks and developing computational models of pragmatic reasoning. She is an NSF Graduate Research Fellow and earned a B.A. in Mathematics and Linguistics from Harvard University.
Cathy Wong
Catherine Wong is a third-year PhD student advised by Josh Tenenbaum in the Computational Cognitive Science group.
Additional Info
Upcoming Cog Lunches:
- October 20: João Loula and Yuan Bian
- October 27: Gal Raz and Nick Watters
- (no meeting November 3 — Election Day)
- November 10: Carina Kauf and Tiwalayo Eisape