Cog Lunch: Preston Hess and Tracey Mills
Description
Location: 46-3310
Zoom: https://mit.zoom.us/j/95367729831
Speaker: Preston Hess
Affiliation: McDermott Lab
Title: How our perceptual models break, and how smoothness might fix them
Abstract: Artificial neural networks can approximate many aspects of human perception, yet they remain strikingly vulnerable to small input perturbations that humans easily ignore. This adversarial vulnerability reveals a deeper representational mismatch: even when models mimic human behavior, their internal computations often diverge from those supporting biological perception. I propose to resolve this gap by developing Lipschitz-constrained perceptual models: networks whose sensitivity to input changes is explicitly bounded, encouraging smoother representational transformations. I plan to evaluate whether these constraints yield representations that better align with human perception by behaviorally testing stimuli generated by these models. I also plan to test whether these models can generate stimuli that predictably influence perception, providing new tools for probing sensory mechanisms and designing hearing-assistive transformations. This project aims to uncover computational principles that stabilize human perception and to build models that advance both basic perceptual science and, potentially, translational applications of ANNs.
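For context, a minimal statement of the constraint the abstract describes in words (this is the standard Lipschitz condition, not a detail taken from the proposed models): a network f is L-Lipschitz if

\[ \| f(x_1) - f(x_2) \| \le L \, \| x_1 - x_2 \| \quad \text{for all inputs } x_1, x_2. \]

Bounding the constant L caps how far any input perturbation, including an adversarial one, can move the model's internal representations, which is the smoothness the title refers to.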
Speaker: Tracey Mills
Affiliation: Tenenbaum Lab
Title: A computational account of human strategy selection in complex tasks
Abstract: When facing tasks that are difficult to solve optimally, people can construct simplifying strategies that trade off utility with cost (Ho et al., 2022; Callaway et al., 2022). How we do so is an open question, especially in domains with large, structured spaces of possible strategies, where strategy generation and evaluation are themselves costly. One proposal is that people select strategies without much online computation, through a process of (reinforcement) learning from experience (Lieder & Griffiths, 2017). I will discuss an alternative adaptive metareasoning framework that integrates such gradual strategy-learning mechanisms with program synthesis to capture on-the-fly strategy discovery. I introduce a new video game task to test this framework, in which players traverse a grid of moving colored tiles while respecting complex rules about valid color sequences. Initial results suggest that players can quickly discover simplifying strategies, such as "only step on red tiles," and adapt when the environment changes to favor new strategies, in ways consistent with adaptive metareasoning.
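For context, one common formalization of the utility-cost tradeoff cited above (a generic resource-rational objective in the spirit of Ho et al. and Callaway et al., not necessarily the exact model in the talk): the agent selects

\[ s^* = \arg\max_{s \in S} \; \mathbb{E}[U(s)] - C(s), \]

where S is the space of candidate strategies, \mathbb{E}[U(s)] is the expected task utility of executing strategy s, and C(s) is its computational cost. The abstract's point is that when S is large and structured, even evaluating this objective is expensive, motivating both learned shortcuts and on-the-fly synthesis of candidate strategies.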