Cog Lunch: Amani Maina-Kilaas
Description
Location: 46-3037
Zoom: https://mit.zoom.us/j/93749712563
Speaker: Amani Maina-Kilaas
Affiliation: Levy Lab
Title: Investigating Linguistic Expectations Through Integrative Modeling: A Case Study of Digging-In Effects
Abstract: Under surprisal theory, linguistic expectations drive cognitive effort during language comprehension. Modern large language models (LLMs) learn to predict words from vast amounts of text, effectively developing statistically optimal linguistic expectations. By and large, LLM-derived expectations are phenomenal predictors of cognitive effort, providing support for the theory. That said, there are phenomena that purely statistical expectations wouldn't naturally predict. What can we learn about real language processing through this comparison of humans and LLMs? Here we dive into digging-in effects, a phenomenon predicted by theories in which expectations shift over time even in the absence of new information. We investigate whether this phenomenon is robust in human sentence processing, whether it can be explained by LLMs, and what the pattern of evidence might reveal about the language processor.