
Cog Lunch: Andrea de Varda
Description
Location: 46-3310
Zoom: https://mit.zoom.us/j/96224282850
Speaker: Andrea de Varda
Affiliation: EvLab & Levy Lab
Abstract: Large language models (LLMs) have recently emerged as powerful candidates for modeling several domains of human cognition. Because they operate over natural language, they provide flexible representations that can be evaluated against human behavior and brain activity. In this talk, I will present two studies that use LLMs to test how far this modeling approach can go—first in the domain of language, and then in higher-level reasoning.
In the first part, I ask whether multilingual language models can explain how the human brain processes the extraordinary diversity of the world's languages. Using fMRI data from native speakers of 21 languages spanning 7 language families, we show that model embeddings reliably predict brain responses within languages and, crucially, transfer zero-shot across languages and families. These results point to a shared representational component in the human language network, largely driven by semantic content, that aligns with the representations learned by multilingual models.
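To make the cross-lingual transfer idea concrete, here is a minimal, purely illustrative sketch (not the study's actual pipeline, data, or models): a ridge-regression encoding model maps sentence embeddings to simulated voxel responses in one language, then is evaluated zero-shot on a held-out "language" that shares the same underlying embedding-to-brain mapping. All data, dimensions, and the shared mapping `W_true` are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: sentence embeddings (n_sentences x d) and fMRI
# responses (n_sentences x n_voxels) for two languages. A shared latent
# mapping W_true simulates the shared representational component the
# talk describes; real data would come from multilingual LLM embeddings
# and measured brain responses.
d, n_vox, n_train, n_test = 16, 8, 200, 50
W_true = rng.normal(size=(d, n_vox))  # shared embedding -> voxel map

def simulate(n):
    """Generate synthetic (embeddings, voxel responses) for n sentences."""
    X = rng.normal(size=(n, d))
    Y = X @ W_true + 0.1 * rng.normal(size=(n, n_vox))
    return X, Y

X_train, Y_train = simulate(n_train)  # "training" language
X_test, Y_test = simulate(n_test)     # unseen language (zero-shot)

# Ridge-regression encoding model, closed form:
#   W = (X'X + alpha * I)^-1 X'Y
alpha = 1.0
W = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(d),
                    X_train.T @ Y_train)

# Zero-shot evaluation: per-voxel correlation between predicted and
# observed responses in the language the model was never fit on.
pred = X_test @ W
r = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_vox)]
print(f"mean zero-shot voxel correlation: {np.mean(r):.2f}")
```

In this toy setup the zero-shot correlation is high because the two "languages" share `W_true` by construction; the empirical finding is that something analogous holds for real multilingual embeddings and real brains.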
In the second part, we move beyond language to ask whether LLMs can also serve as models of human reasoning. Here the question is not only whether models arrive at the correct answers, but whether they capture the cognitive cost associated with the reasoning process. Analyzing large reasoning models (LLMs further trained to generate explicit chains of thought), we show that the number of reasoning steps they take predicts human reaction times across seven diverse reasoning tasks, from arithmetic to relational inference. This alignment holds both within tasks, reflecting item difficulty, and across tasks, capturing broad differences in cognitive demand.
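The step-count/reaction-time alignment can likewise be sketched with synthetic numbers (again an assumption-laden illustration, not the study's data or analysis): if human reaction times grow with the number of chain-of-thought steps a reasoning model takes, the claim reduces to a positive correlation between the two.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: for each reasoning item, the number of chain-of-
# thought steps a reasoning model emitted, and a simulated mean human
# reaction time (seconds). RTs are generated to grow with step count,
# mirroring the alignment the talk reports.
n_items = 60
steps = rng.integers(1, 12, size=n_items)              # model reasoning steps
rt = 0.8 + 0.4 * steps + rng.normal(0, 0.5, n_items)   # simulated human RTs

# The within-task alignment claim: items that take the model more steps
# also take humans longer.
r = np.corrcoef(steps, rt)[0, 1]
print(f"step-count / RT correlation: r = {r:.2f}")
```

In the actual studies this relationship is tested both within each of the seven tasks (item difficulty) and across tasks (broad differences in cognitive demand).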
Together, these studies show that models optimized on language can capture human brain responses to linguistic input across diverse languages, and reasoning-trained variants of these models can mirror the costs of higher-order cognition.